Lisp, the Universe and Everything


How to write an English POS tagger with CL-NLP


The problem of POS tagging is a sequence labeling task: assign each word in a sentence the correct part of speech. For English, it is considered to be more or less solved, i.e. there are taggers that achieve around 95% accuracy. It also has a rather high baseline: assigning each word its most probable tag will give you up to 90% accuracy to start with. Moreover, only around 10% of the words have more than one possible tag; however, their occurrences account for around 40% of total text volume.
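To make the baseline concrete, here's a minimal sketch of such a most-frequent-tag tagger (not part of CL-NLP - just an illustration). It assumes a word-to-tag-frequency table like the words-dist one we'll collect below, and uses RUTILS' if-it and keymax; the NN fallback for unseen words is an arbitrary choice.

(defun baseline-tag (word words-dist)
  ;; words-dist: a table mapping word -> (tag -> frequency),
  ;; like the one we collect from the corpus below
  (if-it (get# word words-dist)
         (keymax it)    ; the most frequent tag observed for this word
         'tag:NN))      ; unseen word: fall back to a common default tag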

Let's start our exercise by first verifying these facts in a data-driven manner. This will also allow us to examine the basic building blocks for our solution.

POS taggers are usually built as statistical models trained on existing pre-labeled data. For English, quite a lot of such data is already available. For other languages that lack it, an unsupervised or a rule-based approach can be applied.

Data sources

The standard dataset that is used not only for training POS taggers but, most importantly, for evaluation is the Penn Treebank Wall Street Journal dataset. It contains not only POS tags, but also noun phrase and parse tree annotations.

Here's an example of the combined POS tag and noun phrase annotations from this corpus:

[ Pierre/NNP Vinken/NNP ]
[ 61/CD years/NNS ]
old/JJ ,/, will/MD join/VB
[ the/DT board/NN ]
[ a/DT nonexecutive/JJ director/NN Nov./NNP 29/CD ]

The tagset used in the annotation contains such symbols as NNP for proper nouns, , for commas, and CD for cardinal numbers. The whole set is provided for CL-NLP in the file src/syntax/word-tags.txt. Here's a snippet from it:

X Unknown, uncertain, or unbracketable. X is often used for bracketing typos and in bracketing the...the-constructions.
CC Coordinating conjunction

It's obviously possible to extend it with other tags if necessary. All of them are eventually available as symbols of the tag package in CL-NLP.

Available data and tools to process it

What we're interested in is obtaining a structured representation of this data. The ncorp package implements interfaces to various raw representations, such as this one.

Different NLP corpora exist in various formats:

  • slightly annotated text, like the above
  • XML-based formats (a well-known example is the Reuters corpus)
  • a treebank format used for representing syntax trees
  • and many others

Most of these representations are supported by the ncorp adapters at least to some extent. The interface of this module consists of the following entities:

  • a text structure for representing an individual text of the corpus (most of the corpora are, thankfully, divided into separate files)
  • a generic function read-corpus-file that should read a raw file and return as multiple values its several representations. The common representations supported by almost all its methods are: raw or almost raw text, clean text - just sentences stripped of all annotations, and tokens - a list of token lists extracted from each sentence. Additionally, other corpus-specific slots may be present. For example, a treebank-text structure adds a trees slot to hold the syntax trees extracted from the treebank
  • a generic function read-corpus creates a corpus structure out of all corpus' texts. In practice, it's usually not feasible to do that due to large sizes of most of the useful corpora. That's why the next function is more practical
  • a generic function map-corpus reads and streams each corpus file as a text structure to the argument function. This is the default way to deal with corpus processing

For our task we'll be able to utilize just the tokens slot of the ptb-tagged-text structure, produced with map-corpus. Let's collect the tag distribution for each word from the WSJ section of the PTB:

NLP> (let ((words-dist #h(equal)))
       (map-corpus :ptb-tagged (corpus-file "ptb/TAGGED/POS/WSJ")
                   #`(dolist (sent (text-tokens %))
                       (dolist (tok sent)
                         (unless (in# (token-word tok) words-dist)
                           (:= (get# (token-word tok) words-dist) #h()))
                         (:+ (get# (token-tag tok)
                                   (get# (token-word tok) words-dist)
                                   0))))
                   :ext "POS")
       words-dist)
#<HASH-TABLE :TEST EQUAL :COUNT 51457 {10467E6543}>
NLP> (reduce #'+ (mapcan #'ht-vals (ht-vals *)))

So, we have around 50k distinct words and 1.3M tokens.

But the resources in the field have made some progress in the last decades, and there's a bigger corpus now available that contains not only the whole Penn Treebank but also more data from other domains. The annotations of its WSJ section were also improved. It is called OntoNotes. Let's do the same with its data:

NLP> (let ((words-dist #h(equal)))
       (map-corpus :treebank (corpus-file "ontonotes/")
                   #`(dolist (sent (text-tokens %))
                       (dolist (tok sent)
                         (with-accessors ((tag token-tag) (word token-word)) tok
                           (unless (eql tag 'tag:-NONE-)
                             (unless (in# word words-dist)
                               (:= (get# word words-dist) #h()))
                             (:+ (get# tag (get# word words-dist) 0))))))
                   :ext "parse")
       words-dist)
#<HASH-TABLE :TEST EQUAL :COUNT 60925 {1039EAE243}>

So, in OntoNotes 4.0 there are 60925 distinct words. 50925 of them (~84%) are tagged with a single tag, i.e. we have 16% of multi-tag words, which corresponds well with the theoretical data. Also, there are 2.1M tokens in the corpus in total.

Calculating the number of words with more than one distinct tag:

(count-if-not #`(= 1 (ht-count (rt %)))
              (ht->pairs words-dist))

And what about the total volume?

NLP> (let ((total1 0)
           (total 0))
      (map-corpus :treebank "ontonotes"
                  #`(dolist (sent (text-tokens %))
                      (dolist (tok sent)
                        (unless (eql (token-tag tok) 'tag:-NONE-)
                          (:+ total)
                          (when (in# (token-word tok) *single-tag-words*)
                            (:+ total1)))))
                  :ext "parse")
      (float (/ total1 total)))

Only 24% instead of 60%! What's wrong?

OK, here's the trick: let's also count words that have more than one tag, but more than 99% of whose occurrences are labeled with a single tag. For instance, the word "the" has 9 distinct tags in OntoNotes, but 99.97% of the time it's a DT.

If we consider such words to have a single tag, we'll get just a slight increase in the number of single-tag words (+386: 51302 instead of 50916), but a dramatic increase in the volume of their occurrence - now it's 63%! Just as the literature tells us.

(NB: Taking such a shortcut will only increase the quality of the POS tagger, as 99% is above the accuracy it will be able to achieve anyhow, which is at most 97% on same-domain data and even lower for out-of-domain data.)

Here's how we can determine such a set of words:

(remove-if-not #`(let ((tag-dist (ht-vals (rt %))))
                   (> (/ (reduce #'max tag-dist)
                         (reduce #'+   tag-dist))
                      0.99))
               (ht->pairs words-dist))

NB: The above code samples contain some non-standard utilities and idioms that may look alien to some Lisp programmers. All of them come from my RUTILS library, and you'll see more of them below. Mostly, these are hash-table-specific operators, new iteration constructs, a few radical abbreviations for common operations, and literal syntax for hash-tables (#h()) and anonymous functions (#`()).

Some of them are (a short illustration follows the list):

  • get#/in#/set# - specialized hash-table access routines, and dotable - a hash-table iteration construct
  • a pair data type with lt/rt accessors and conversion routines to/from hash-tables
  • ? - a generic element access operator with support for generic setf, plus := as an abbreviation for setf (using the common assignment symbol) and :+ as an analog of incf
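Here's that illustration - a tiny made-up example of these idioms in action (not code from the tagger):

(let ((counts #h(equal)))                  ; literal hash-table with an EQUAL test
  (dolist (w '("a" "rose" "is" "a" "rose"))
    (unless (in# w counts)                 ; key-presence check
      (:= (get# w counts) 0))              ; := abbreviates setf
    (:+ (get# w counts)))                  ; :+ abbreviates incf
  (dotable (word count counts)             ; hash-table iteration
    (format t "~A: ~A~%" word count))
  (sort (ht->pairs counts) #'> :key #`(rt %)))

This prints each word with its count and returns the word-count pairs sorted by frequency.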

Building the POS tagger

We have explored how to access different corpus data that we'll need to train the POS tagger. To actually do that, we'll re-implement the approach described by Matthew Honnibal in "A good POS tagger in about 200 lines of Python". In fact, due to the expressiveness of Lisp and the efficiency of SBCL, we'll need even fewer than 200 lines, and we'll get performance comparable to the much more complex Cython implementation of the parser (6s against 4s on 130k tokens), but those are details... ;)

The source code we'll be discussing below is available on github.

Our tagger will use a greedy averaged perceptron model with single-tag words dictionary lookup:

(defclass greedy-ap-tagger (avg-perceptron tagger)
  ((dict :initform #h(equal) :initarg :dict :accessor tgr-dict)
   (single-tag-words :initform #h(equalp) :initarg :single-tag-words
                     :accessor tgr-single-tag-words))
  (:documentation
   "A greedy averaged perceptron tagger with single-tag words dictionary lookup."))

As you see, it is derived from a generic class tagger and an avg-perceptron learning model. It also has a dict slot that holds all the words known to the tagger.

Every tagger has just one generic function associated with it. You guessed its name - tag :)

(defmethod tag ((tagger greedy-ap-tagger) (sentence sentence))
  (let ((prev :-START-)
        (prev2 :-START2-)
        (ctx (sent-ctx sentence)))
    (doindex (i token (sent-tokens sentence))
      (:= (token-tag token)
          (classify tagger
                    (extract-fs tagger i (token-word token) ctx prev prev2))
          prev2 prev
          prev (token-tag token)))
    sentence))

It accepts an already tokenized sentence and (destructively) assigns tags to each of its tokens.

The main job is performed by the call to the classify method, which is defined for every statistical learning model in CL-NLP. Another model-associated method here is extract-fs, which produces a list of features that describe the current sample.

Now, let's take a look at the implementation of these learning model-related methods.

(defmethod classify ((model greedy-ap-tagger) fs)
  (or (get# (first fs) (tgr-single-tag-words model))
      (call-next-method model (rest fs))))

For the tagger, we first check the current word against the dictionary of single-tag-words that we've built in the previous part, and then call the classify method of the avg-perceptron model itself. That one simply returns the class that is the argmax of a dot product between the model weights and the sample features fs, which in this case can only have values of 1 or 0.

(defmethod classify ((model greedy-ap) fs)
  (let ((scores #h()))
    (dotable (class weights (m-weights model))
      (dolist (f fs)
        (:+ (get# class scores 0) (get# f weights 0))))
    (keymax scores)))  ; keymax is argmax for hash-tables

As you see, averaged perceptron is very straightforward - a simple linear model that has a weights slot which is a mapping of feature weights for every class. In the future this method will probably be assigned to a linear-model class, but it hasn't been added to CL-NLP so far.


Let's take a look at the training part. It consists of 2 steps. extract-fs performs feature extraction. What it basically does in our case of a simple perceptron model is return a list of features preceded by the word we're currently tagging.

(cons word (make-fs model
                    ("i pref1" (char word 0))
                    ("i word" word)
                    ("i-1 tag" prev-tag)
                    ...))  ; more features elided

The make-fs macro is responsible for interning the features as symbols in the package f by concatenating the provided prefixes and the calculated values. It is standard Lisp practice to use symbols instead of raw strings for such things. So, in the above example, the word "the" preceded by a word tagged as VBZ will get the following list of features:

'("the" f:|bias| f:|i pref1 t| f:|word the| f:|i-1 tag VBZ| ...)

The second part of learning is training. It is the most involved procedure here, yet still very simple conceptually. Just like in the tag method, we iterate over all tokens, preceded by the dummy :-START2- and :-START- ones, and guess the current tag using classify. Afterwards, we update the model's weights in train1. The only difference is that we first need to explicitly handle the case of single-tag-words, so as not to run the model update step for them.

This is how it all looks modulo debugging instrumentation. Note the use of psetf to update prev and prev2 simultaneously.

(defmethod train ((model greedy-ap-tagger) sents &key (epochs 5))
  (with-slots (single-tag-words dict) model
    ;; expand dict
    (dolist (sent sents)
      (dolist (tok (sent-tokens sent))
        (set# (token-word tok) dict nil)))
    ;; expand single-tag-words
    (dotable (word tag (build-single-tag-words-dict (mapcar #'sent-tokens sents)
                                                    :ignore-case? t))
      (unless (in# word single-tag-words)
        (set# word single-tag-words tag)))
    ;; train
    (dotimes (epoch epochs)
      (dolist (sent (mapcar #'sent-tokens sents))
        (let* ((prev :-START-)
               (prev2 :-START2-)
               (ctx (sent-ctx sent)))
          (doindex (i token sent)
            (let ((word (token-word token)))
              (psetf prev
                     (or (get# word single-tag-words)
                         (let* ((fs (extract-fs model i word ctx prev prev2))
                                (guess (classify model fs)))
                           (train1 model (rest fs) (token-tag token) guess)
                           guess))
                     prev2 prev)))))
      (:= sents (shuffle sents)))))

Note the additional expansion of the single-tag-words dict of the model (as well as of the normal dict).

An interesting feature of the problem's object-oriented decomposition in this case is that we have generic perceptron machinery we'd like to capture and reuse for different concrete models, and model-specific implementation details.

This dichotomy is manifested in the training phase:

  • the train method is specific to the greedy-ap-tagger. The generic perceptron training is much simpler, because it doesn't operate in a sequence labeling scenario (see the source code here)

  • however, there's also an :after method defined for train on the avg-perceptron model which averages all the weights in the end and prunes the model by removing zero weights

  • there are also 2 more methods that are not specialized for greedy-ap-tagger: train1 and update1. They perform 1 step of the normal perceptron training and model update

    (defmethod train1 ((model perceptron) fs gold guess)
      (:+ (ap-step model))
      (dolist (f fs)
        (ensure-f-init model f gold guess)
        (loop
           :for class :in (list gold guess)
           :for val :in '(1 -1) :do
           (update1 model f class val))))
    (defmethod update1 ((model avg-perceptron) f class val)
      (with-slots (step timestamps weights totals) model
        (:+ (? totals class f) (* (- step (? timestamps class f))
                                  (? weights class f)))
        (:+ (? weights class f) val)
        (:= (? timestamps class f) step)))

Evaluation & persisting the model

We have reached the last part of every machine learning exercise - evaluation. Usually it's about measuring precision/recall/F-measure, but in the tagger's case precision and recall coincide, because the sets of relevant and retrieved items are the same, so we can calculate just the accuracy:

NLP> (accuracy *tagger* *gold-test*)

A "gold" corpus is used for evaluation. This evaluation was performed on the standard evaluation set, which is the Wall Street Journal corpus (sections 22-24), OntoNotes 4.0 version. The model was also trained on the standard training set (sections 0-18). Its results are consistent with the performance of the reference model from the blog post. The "gold" features were obtained by calling the extract-gold method of our model on the data from the treebank.
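For reference, here's roughly what such an accuracy calculation boils down to. This is just a sketch, not the actual CL-NLP accuracy code, and it assumes the gold data is a list of pairs of a tokenized sentence and its gold tags; extract-gold's real output format may differ.

(defun tagging-accuracy (tagger gold)
  ;; GOLD is assumed to be a list of pairs of a tokenized sentence
  ;; and the list of its gold tags (a simplification for illustration)
  (let ((correct 0)
        (total 0))
    (dolist (sample gold)
      (let ((sent (lt sample))
            (tags (rt sample)))
        (tag tagger sent)  ; destructively assigns the predicted tags
        (loop :for tok :in (sent-tokens sent)
              :for gold-tag :in tags :do
              (:+ total)
              (when (eql (token-tag tok) gold-tag)
                (:+ correct)))))
    (* 100.0 (/ correct total))))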

But wait, we can do more.

First, on the evaluation part. It has long been no secret in the NLP community that the WSJ corpus is far from representative of real-world use cases. And I'm not even talking about twitter here: just the various genres of writing have different vocabularies and distributions of sentence structures. So, the high baselines shown by many results on the WSJ corpus may not be that robust. To help with this kind of evaluation, Google and Yahoo have recently released another treebank called WebText that collects 5 different types of texts seen on the web: from dialogues to blog posts. It's smaller than the Penn Treebank: 273k tokens instead of 1.3M, with 23k distinct word types. If we evaluate on it, the accuracy drops substantially: only 89.74406!

What we can do is train on more data with better variability. Let's retrain our model on the whole OntoNotes (minus the evaluation set of WSJ 22-24). Here are the results:

  • on WSJ evaluation set: 96.76323 - a modest gain of 0.4%: we're already at max here
  • on Webtext: 92.9431 - a huge gain of more than 4%!

So, broader data helps. What else can we do?

Another aspect we haven't touched on is normalization. There are some ways of generating arbitrary tokens in English that lend themselves well to normalization to some root form. These include numbers, emails, urls, and hyphenated words. The normalization variant proposed by Honnibal is rather primitive and can be improved.

Here's the original variant:

(defmethod normalize ((model greedy-ap-tagger) (word string))
  (cond
    ((and (find #\- word) (not (char= #\- (char word 0))))
     "!HYPHEN")
    ((every #'digit-char-p word)
     (if (= 4 (length word)) "!YEAR" "!DIGITS"))
    (t (string-downcase word))))

And here's the modified one:

(defmethod normalize ((model greedy-ap-tagger) (word string))
  (cond-it
    ((re:scan *number-regex* word) (make-string (length word) :initial-element #\0))
    ((re:scan *email-regex* word) "!EMAIL")
    ((re:scan *url-regex* word) "!URL")
    ((in# word (tgr-dict model)) (string-downcase word))
    ((position #\- word :start 1 :from-end t)
     (let ((suffix (slice word (1+ it))))
       (if (in# suffix (tgr-dict model))
           (string-downcase suffix)
           (string-downcase word))))
    (t (string-downcase word))))

This change gains another 0.06% accuracy on the WebText corpus. So, the normalization improvement doesn't help that much here; however, I think it should be more useful in real-world scenarios.

Now that we finally have the best model, we need a way to persist and restore it. The corresponding save-model/load-model methods exist for any categorical model. They use the handy ZIP and USERIAL libraries to save models into a single zip file, serializing textual (categories and feature names) and binary data (floating-point weights) into separate files. Here's what our serialized POS tagger model looks like:

  Length  File
--------  --------------------
     552  classes.txt
 4032099  fs.txt
 2916012  fs.bin
 2916012  weights.bin
   35308  single-tag-words.txt
  484712  dict.txt
--------  --------------------
10384695  6 files
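Saving and restoring the tagger then amounts to a pair of calls along these lines (the path is just an example; the exact save-model arguments may differ slightly):

;; persist the trained model (example path)
(save-model *tagger* "pos-tagging/model.zip")

;; ... and load it back later
(defvar *tagger2* (load-model (make 'greedy-ap-tagger)
                              "pos-tagging/model.zip"
                              :classes-package :tag))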

Finally, I believe it's an essential practice to make all results we post online reproducible, but, unfortunately, there are restrictions on the use of the Penn Treebank corpus data, so we can't just add an automated test that will reproduce the contents of this post. Still, a small portion of the OntoNotes WSJ corpus can be used under the fair use policy, and it is provided with CL-NLP for evaluation purposes.

Let's add such a test to give users confidence in the performance of our model. For testing CL-NLP I'm using yet another library of my own called SHOULD-TEST - I'll devote a separate blog post to it some time in the future.

Here's a test we need:

(defun extract-sents (text)
  (mapcar #`(make 'ncore:sentence :tokens (ncorp:remove-dummy-tokens %))
          (ncore:text-tokens text)))

(defvar *tagger* (load-model (make 'greedy-ap-tagger)
                             (models-file "pos-tagging/")
                             :classes-package :tag))
(defvar *gold*
  (let (test)
    (ncorp:map-corpus :treebank (corpus-file "onf-wsj/")
                      #`(appendf test (extract-sents %)))
    (extract-gold *tagger* test)))

(deftest greedy-ap-tagger-quality ()
  (should be = 96.31641
          (accuracy *tagger* *gold*)))

Summing up

In this article I've tried to describe the whole process of creating a new statistics-based model using CL-NLP. As long as you have the necessary data, it is quite straightforward and routine work.

If you want to use one of the existing models (namely, greedy averaged perceptron, as of now) you can reuse almost all of the machinery and just add a couple of functions to reflect the specifics of your task. I think, it's a great demonstration of the power of the generic programming capabilities of CLOS.

Obviously, feature engineering is on you, but training/evaluation/saving/restoring the model can be handled transparently by CL-NLP tools. There's also support for common data processing and calculation tasks.

We have looked at some of the popular corpora in this domain (all of which, unfortunately, have some usage restrictions and are not readily available, but can be obtained for research purposes). And we've observed some of the factors that impact the performance and robustness of machine learning models. I'd say that our final model is at a production-ready, state-of-the-art level, so you can safely use it for your real-world tasks (under the licensing restrictions of the OntoNotes dataset used for training it). Surely, if you have your own data, it should be straightforward to re-train the model with it.

You can also add your own learning algorithms, and I'm going to continue doing the same myself.

Stay tuned and have fun!


Announcing RADICAL-UTILS (a.k.a. RUTILS 3.0)


RUTILS is to my knowledge the most comprehensive and all-encompassing suite of syntactic utilities to support modern day-to-day Common Lisp development. It aims to simplify the experience of both serious system-level development and rapid experimentation in the REPL.
The library has stabilized over more than 3 years of evolution; it introduces some substantial improvements not available elsewhere, has thorough documentation and a good test suite. The only thing it lacks is, probably, a manual, yet example usage can be found in my other libraries (notably, CL-REDIS and SHOULD-TEST).

A short history

According to git history, I started the RUTILS project (originally, REASONABLE-UTILITIES) on Saturday, Sep 12 2009, almost 4.5 years ago. Yet, I've never made any serious announcement of it. One of the reasons was that it's kind of controversial to release "yet another"™ utilities library for Common Lisp, so I wanted to see if it would stick without any promotion. And it hasn't - i.e. it hasn't gotten any serious adoption in the CL crowd (e.g. only one of the authors of libraries in Quicklisp has used it, and only once was there serious external interest to contribute). Yet, it did stick very well with me and has grown quite well over the years.
And now it has reached version 3 by finally getting a proper test suite, and I've also come to the conclusion that it makes sense to rename the project to RADICAL-UTILITIES - just to add a bit of self-irony that should be an essential component of any grown-up open source project. ;)

So, why RUTILS?

Initially, the concept of the library was expressed in its manifesto. Here are the three main points:
  • Become an extension to the Common Lisp standard by adding the missing pieces that were proposed by notable members of the community in the past or have been proven useful in other languages. This aim included support of as many CDRs as possible.
  • Accumulate all the useful syntactic utilities in one suite. Contrary to ALEXANDRIA, the most widely-used Lisp utilities library, which proclaims the concept of a "good citizen" (i.e. it doesn't include such libraries as SPLIT-SEQUENCE, ANAPHORA, or ITERATE), RUTILS aims to include all the needed pieces, regardless of their origin and presence in other libraries.
  • Introduce fine-grained modularity - one of the common complaints about the standard is the lack of this property. So, as RUTILS is intended as an extension of the standard, it makes sense to address the complaint. This point, actually, aroused interest from the folks at Mathematical Systems Institute, who proposed to co-develop the library, and that prompted the appearance of version 2.0. Yet, their plans changed, and this collaboration didn't work out. However, it pushed me to an understanding of what should be done to make the library production-ready and generally useful.
REASONABLE-UTILITIES failed in several aspects. First of all, not enough effort was put into documentation and testing. Also, CDR support didn't prove very useful in practice. Finally, the modularity aspect, regardless of its theoretical appeal, doesn't seem to make a difference either: :shadow is just enough to deal with modularity at this level :)
However, all that was not critical, because the benefits for me personally were always substantial.

Version 3.0. Radical?

So, RUTILS has gradually solidified. In 2011, I found the time to document everything, and last year I finally added a good test suite (after also finally finding a comfortable way to write tests with SHOULD-TEST).
So, here are the main benefits of RUTILS:
  • Adding a big set of list, hash-table, string, sequence, tree, array and other general-purpose utilities.
  • Providing a readtable with support for short lambdas (similar to Clojure's), literal hash-tables and vectors, and heredoc strings. This is the first "radical" step, as it radically cuts down on boilerplate, and is a good use of the standard reader facilities (as well as the excellent NAMED-READTABLES library). In general, I would say that the support for making hash-tables first-class citizens is the main achievement of RUTILS (see the short sketch after this list).
  • And the second "radical" step - adding shortcuts for many standard Common Lisp operators, as well as for some of RUTILS' own utilities. This serves three purposes: it simplifies the introduction of many operations to beginners, reduces typing in the REPL, and saves horizontal line space (I adhere to the 80-characters rule). This is a debatable decision, yet the stance of RUTILS is to provide the choice, not enforce it.
  • Adding a pair structure instead of the cons-pair. Frankly, there's nothing very wrong with the cons-pair, except for its ugly middle-dot syntax, but it's, I believe, a general agreement in the CL community that the cons-pair is legacy and should be deprecated. Getting rid of it also allows retiring car and cdr - which is a necessary step despite the strong affection for them among many seasoned lispers (myself included, although I'm not very seasoned :)
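And here's that short sketch of what the readtable gives you (made-up data; the readtable has to be enabled first, e.g. via NAMED-READTABLES):

(let ((scores #h(equal "low" 1 "high" 10)))   ; literal hash-table with an EQUAL test
  (mapcar #`(* % (get# "high" scores))        ; #` short lambda, % is its argument
          '(1 2 3)))
;; => (10 20 30)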
Some more radicality may be found in the RUTILSX system, which is treated as contrib. Here I plan to place the stuff whose value may not yet be well-proven. Currently, it includes iterate with keywords from the keyword package (this solves naming conflicts and allows using arbitrary clause keywords), my own version of a generic bind (I've had a couple of attempts at it), and, perhaps, the most radical of all - the addition of the ~ operator for generic access to elements of arbitrary structures. Think:
(~ #{:foo '(1 2 3)} :foo 1) => 2  ;; #{} is a literal hash-table syntax
Finally, as mentioned earlier, RUTILS also includes a comprehensive test suite that covers all but the most basic and straightforward functions and macros. Most other utility libraries don't.

FAQ (imaginary)

ALEXANDRIA is the most widely used CL utilities library, and, actually, CL library in general. Why not use it?
I don't have anything against it, as it's just useful. Yet, if I were using it, I'd still need to depend on several other libraries (which is not a problem now with Quicklisp). Still, I'd also have to include my own stuff in each project, as no external library provides reader macros for hash-tables and lambdas, as well as many other small things provided by RUTILS.
What about quickutil?
I think it solves the wrong problem. The main problem is utilities' raw utility :) Lack of modularity is often mentioned as a drawback of the Lisp standard, but practice shows that it's just a perception issue, not a real-world one. I believe that the level of modularity provided by RUTILS is just good enough, and even it isn't utilized so far (but, maybe, in the future it will be to some extent). The QUICKUTIL approach just adds unnecessary bookkeeping and introduces another import mechanism, while sticking with standard package imports would also work and doesn't create additional mental tax and confusion.
Also, I've come to the conclusion that Lisp's flat namespaces are, actually, a win over most implementations of hierarchical namespaces, but that's a subject of another rant.
What about other individuals' utility suites (metatilities, arnesi, fare-utils, ...) - I remember that Xach counted around 13 of them?
Most of them are not supported. They also are usually a random collection of stuff without a vision. Yet, sometimes it makes sense to use them as well: for instance, I've used ARNESI for its delimited continuations facility and have even put a fork of it on my github to fix some bugs. Although, as discussed below, a finer-grained approach is better - see CL-CONT.
Have you heard of cl21? How does it compare with RUTILS?
Surely. It is a similar effort to RUTILS, but with a much bolder aim. I, personally, doubt that it will succeed in it, because Lisp's culture is quite conservative, and you need to get a buy-in from the community in order to make such radical moves. The problem with this is that no one has enough motivation, given that Lisp is already good enough, and there's no central authority to steward the evolution process.
So, the way to go, as for me, is to make small incremental improvements and get them adopted. This has always worked in the past, and there are many good examples: CLOSER-MOP, CL-PPCRE, NAMED-READTABLES, Marco Antoniotti's libraries, CL-CONT, OPTIMA, etc.
The main improvements proposed by cl21 are or can be addressed by such libraries, including RUTILS. For instance, cl21 proposes the same literal syntax for hash-tables. Another utility already present in RUTILS is a generic sequence iteration loop: doeach in cl21 and doseq in RUTILSX.
The only thing that, IMHO, may make sense to explicitly modernize in Lisp is the reader, as it has 2 hard-coded cases for handling . and :. I'm thinking in the lines of providing access to the previously read item and thus allowing for definition of arbitrary infix syntaxes. But this needs much more research...
What about speed and performance?
Lisp is a dynamic language with good performance characteristics. One of the reasons for that is a standard library engineered with performance in mind. This manifests in the presence of various specific functions for working with different data structures (like nth, svref, aref, char accessors). This prompts valid criticism of being unfriendly to newcomers. The solution, in my opinion, is to build generic versions on top of that using the capabilities provided by CLOS. The sequence-manipulation part of the standard sets such an example with the likes of elt. RUTILS continues along this track. But there's an obvious drawback of losing the performance benefit. I think that the Lisp approach here is balanced, as it's always possible to fall back to the "low-level" specific functions (which should always be provided), and at the same time to use the easy approach in the 95% of cases where performance isn't critical.
In RUTILS, there's very little support for functional programming. Why is it missing?
There are several aspects of functional programming that are not present in the CL standard. One that is usually addressed by utilities is function composition and currying. It is also addressed by RUTILS, but in an uncommon way - with the sharp-backquote reader macro. In my opinion, it's a more concise, yet easier to understand approach. Here are a couple of examples taken from the cl21 docs and elsewhere:
  • #`(member % '("foo" "bar") :test 'string=) is a generalized approach to currying
  • #`(sin (1+ %)) is the equivalent of (compose #'sin #'1+) in cl21
  • #`(and (integerp %) (evenp %)) is the equivalent of (conjoin #'integerp #'evenp) in cl21
  • #`(or (oddp %) (zerop %)) is the equivalent of (disjoin #'oddp #'zerop) in cl21
The benefit of this approach is that it is the same for any kind of composition, while you need to define your own functions to accommodate each of them in the "functional" style: for example, how do you join on xor? (btw, xor is also provided by RUTILS)
Other functional features, like lazy evaluation, pattern matching, or functional data structures, are much more specific in use (for example, I rarely use them, if at all), and they have dedicated support elsewhere. See CLAZY, OPTIMA, and FSet for examples.
Where's the documentation you're mentioning?
It's in the docstrings and, subsequently, in the excellent quickdocs. With its appearance, I can only recommend everyone not to waste time on creating function documentation for your library, and to focus on a manual and use cases instead. Though, we need to wait for the project to be updated to the most recent quicklisp.


Another Concurrency FUD

Today I stumbled on an article entitled Locks, Actors, And STM In Pictures which looks rather cool, but is total FUD. It tells a FoxNews kind of story of how troublesome using locks is compared to the new synchronization primitives.

First of all, the author uses an example synchronization problem that is very well solved with locks, and just asserts that it is a bad solution without any concrete argumentation.

What's more important though is that he needs to modify the original problem to fit with the alternative solutions.

In the Actors case we need to resort to a waiter. Can the problem be solved without one? Surely, but the code will probably be twice as big, and very complicated compared to the lock-based solution. Not to mention that it will also be prone to deadlock. The lock-based solution with a waiter is even simpler: lock the waiter, tell it what to do, unlock it.

And in the STM case we take both forks at once, while in the original task we need to take them one by one. This seems like a minor issue, but now try to add printing to the mix: can we print "Aristotle took the left fork" after he takes the left fork, and "Aristotle put down the left fork" if he couldn't take the right one?

PS. I like actors and STM and try to use them where appropriate. I also teach them to students as part of the Operating Systems course.


NLTK 2.2 - Conditional Frequency Distributions

The summer is over, and it's time to go back to school and continue our exploration of the NLTK book. We're finally getting at the last part of the second chapter - conditional frequency distributions.

What is a conditional frequency distribution? From a statistical point of view, it is a function of 2 arguments - a condition and a concrete outcome - producing an integer result that is a frequency of the outcome's occurrence under the condition. The simplest variant of such function is a table.
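In Lisp terms, the simplest such table is just a hash-table of hash-tables, e.g. (made-up conditions and counts, for illustration only):

;; condition -> (outcome -> frequency)
#h(:news    #h(equal "can" 93 "will" 389)
   :romance #h(equal "can" 74 "will" 43))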

In object-oriented programming there's a tendency to put everything in objects, including functions, and the NLTK authors took the same route by creating a ConditionalFreqDist class. Yet, in my view, it is quite dubious, and I have a hard time with such objects. What we really need here is just a protocol to work with tables. At the same time, there's a classic separation-of-concerns issue here, as this protocol shouldn't mandate the internal representation or the way to generate different values in the distribution. This is where objects come to the rescue in OOP: they allow you to change the value storage/generation policy via subclassing. But, as Joe Armstrong has wittily pointed out, "you wanted to have a banana, but got a gorilla holding a banana"...

So, let's get to the protocol part, for which the proper concept in OOP languages would be an interface. Its main operations are: creating a distribution and examining it (tabulating, plotting, etc.)

Creating a distribution

Let's create a CFD from the Brown corpus. Remember that its topics are the following (and they'll become the conditions of our CFD):

NLTK> (keys (corpus-groups +brown-corpus+))

Let's create the simplest distribution:

NLTK> (make-cfd (maptable (lambda (topic texts)
                            (declare (ignore topic))
                            (mapcar #'token-word
                                    (flatten (mapcar #'text-tokens texts))))
                          (corpus-groups +brown-corpus+)))
NLTK> (defvar *cfd* *)

We'll use a very concrete hash-table representation for the distribution, and we can go a very long way with it. Although, later we'll see how to deal with other representations.

There's a new utility here, maptable, which is an equivalent of mapcar in its simple form (when it works with a single list), but operating on a hash-table instead. (And, being a generic function, it can also operate on any other table-like structures.)
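A toy example of maptable (with made-up data): the function receives each key and value, and its result becomes the new value under the same key:

(maptable (lambda (k v)
            (declare (ignore k))
            (length v))
          #h(:a "foo" :b "quux"))
;; => a hash-table mapping :A to 3 and :B to 4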

Each frequency distribution in a CFD is an instance of the ngrams class that we've worked with in the previous chapter:

NLTK> (~ *cfd* :fiction-romance)
#<TABLE-NGRAMS order:1 count:8451 outcomes:70022 {101C030083}>

Another new utility here, ~, is an alias to the generic element access operator generic-elt, which is meant to provide uniform access to elements of the different linguistic structures defined throughout CL-NLP. It's the analog of the [] access operator in Python and other C-like languages, with the upside that it's not special - just a function. And it also supports chaining, using the generic-function :around method-combination:

(defgeneric generic-elt (obj key &rest keys)
  (:documentation
   "Generic element access in OBJ by KEY.
    Supports chaining with KEYS.")
  (:method :around (obj key &rest keys)
    (reduce #'generic-elt keys :initial-value (call-next-method obj key))))

As you can see, a lot of utilities, including very generic ones, are defined along the way of our exploration. Most of them, or similar ones, come built into Python. At first sight this can be seen as a failure of the Lisp standard. Yet it may as well be considered a win, because it's so easy to add these things on top of the existing core with generic functions and macros, and there's no issue of "second-class citizenship". In Python the [] operator comes pre-built: you can redefine it with special hooks, but only in one way - you can't have different [] implementations for one class. The other benefit of Lisp's approach is more control: you can go with the fast functions that work directly with the concrete data structures, like mapcar for lists or gethash for hash-tables. Or you can use a generic operator and pay the price of generic dispatch - the standard map for abstract sequences, the user-defined ~, etc. - when you need future extensibility.

So, we can also use generic-elt to access individual frequency in the CFD:

NLTK> (~ *cfd* :fiction-romance "could")

For the first key it will be mapped to the hash-table accessor (gethash) and for the second to the freq method of the ngrams object.
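Roughly, the two methods involved might look like this (a sketch, not the exact CL-NLP source):

(defmethod generic-elt ((obj hash-table) key &rest keys)
  (declare (ignore keys))
  (get# key obj))

(defmethod generic-elt ((obj table-ngrams) key &rest keys)
  (declare (ignore keys))
  (freq obj key))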

Examining the distribution

Now, we can move to the more interesting part: analysing the distribution.

The first way to explore the CFD shown in NLTK book is tabulating. It prints the distribution values in a nice and readable 2D table.

NLTK> (tabulate *cfd*
                :conditions '(:press-reportage :religion :skill-and-hobbies
                              :finction-science :fiction-romance :humor)
                :samples '("can" "could" "may" "might" "must" "will"))

                     can  could  may  might  must  will
    PRESS-REPORTAGE   93     86   66     38    50   389
           RELIGION   82     59   78     12    54    71
  SKILL-AND-HOBBIES  268     58  131     22    83   264
    FICTION-ROMANCE   74    193   11     51    45    43
              HUMOR   16     30    8      8     9    13

And, by the way, here's how we can get the individual numbers for one category:

NLTK> (dolist (m '("can" "could" "may" "might" "must" "will"))
        (format t "~A: ~A " m (~ *cfd* :press-reportage m)))
can: 93 could: 86 may: 66 might: 38 must: 50 will: 389

Interestingly enough, these results differ from NLTK's ones:

can: 94 could: 87 may: 93 might: 38 must: 53 will: 389

What's the reason? The answer is case-sensitivity. Let's try the same with the case-insensitive version of the CFD:

NLTK> (let ((cfd (make-cfd (maptable (lambda (topic texts)
                                       (declare (ignore topic))
                                       (mapcar #`(string-downcase (token-word %))
                                               (flatten (mapcar #'text-tokens
                                                                texts))))
                                     (corpus-groups +brown-corpus+)))))
        (dolist (m '("can" "could" "may" "might" "must" "will"))
          (format t "~A: ~A " m (~ cfd :press-reportage m))))
can: 94 could: 87 may: 93 might: 38 must: 53 will: 389

Now, there's an exact match :)

The printing of each row in tabulate is performed in the following loop:

(dotable (k v table)
  (when (or (null conditions)
            (member k conditions))
    (format stream "  ~V:@A" key-width k)
    (let ((total 0))
      (dolist (s samples)
        (format stream "  ~V:@A" (strlen s)
                (funcall (if cumulative
                             #`(incf total %)
                         (or (~ v s) 0)))))))

It has to account for scenarios when conditions may or may not be provided, as well as to treat cumulative output.

And, finally, let's draw some nice pictures. But first we need to create the inaugural corpus from individual texts:

(defparameter *inaugural* (make-corpus-from-dir
                           "Inaugural speeches corpus"
                           (data-file "../nltk/data/inaugural/")
                           :test #`(re:scan "^\\d+" %)))

To do that we introduce the function make-corpus-from-dir in the ncorp package; it walks a directory of raw text files and creates a basic corpus from them.

(defun make-corpus-from-dir (name dir &key (test 'identity))
  (let (texts)
    (fad:walk-directory dir
                        #`(when (funcall test (pathname-name %))
                            (push (pair (pathname-name %)
                                        (read-file %))
                                  texts)))
    (make-corpus
     :desc name
     :texts (mapcar #`(make-text
                       :name (lt %) :raw (rt %)
                       :tokens (mapcar #`(ncore:make-token :word %)
                                       (mapcan #`(ncore:tokenize ncore:<word-tokenizer> %)
                                               (ncore:tokenize ncore:<sentence-splitter>
                                                               (rt %)))))
                    texts))))

(Here, pair, lt, and rt are yet another generic utility group — the equivalents of cons, car, and cdr).

Now, we need to make a CFD from the corpus. It is somewhat complicated, because the corpus is keyed by text names, and we want a CFD keyed by words:

(make-cfd (let ((ht (make-hash-table :test 'equalp)))
            (dolist (text (corpus-texts *inaugural*))
              (let ((year (sub (text-name text) 0 4)))
                (dolist (word (mapcar #'token-word (text-tokens text)))
                  (when-it (find word '("america" "citizen")
                                 :test #`(search %% % :test 'string-equal))
                    (set# it ht (cons year (get# it ht)))))))
            ht)
          :eq-test 'equalp)

And now, we can plot it:

NLTK> (plot-table *)

Plot of usage of 'america' and 'citizen' in Inaugural speeches

Once again, plotting is performed with gnuplot. This time, though, with a general plot-table function that is able to handle any table-like structure. Here is its code:

(defun plot-table (table &rest args
                   &key keys cols cumulative (order-by (constantly nil)))
  "Plot all or selected KEYS and COLS from a TABLE.
   CUMULATIVE counts may be used, as well as custom ordering with ORDER-BY."
  (mv-bind (file cols-count keys-count)
      (apply #'write-tsv table args)
    (let ((row-format (fmt "\"~A\" using ~~A:xtic(2) with lines ~
                            title columnheader(~~:*~~A)"
                           file)))
      (cgn:with-gnuplot (t)
        (cgn:format-gnuplot "set grid")
        (cgn:format-gnuplot "set xtics rotate 90 1")
        (cgn:format-gnuplot "set ylabel \"~@[Cumulative ~]Counts\"" cumulative)
        (cgn:format-gnuplot "plot [0:~A] ~A"
                            cols-count
                            (strjoin "," (mapcar #`(fmt row-format (+ 3 %))
                                                 (range 0 keys-count))))))
    (delete-file file)))

A generic function write-tsv creates a temporary file in tab-separated format. Here, we use its method for hash-table objects (which in our case represent a CFD — each condition is a key and each sample is a column). So, the TSV file's contents for our last distribution look like this:

No Label citizen america
0 1789 5 2
1 1793 1 1

Using different data sources

Finally, as promised, here's how to use the newly developed CFD machinery with other ways of obtaining raw data for the distribution. Oftentimes it's infeasible to read the whole contents of some files into memory, i.e. it makes sense to use line-by-line processing. There's a handy dolines macro that allows just that. Let's see how we can use it to build the distribution of last letters in male/female names from NLTK Figure 2.10.

First of all, we can just use it when we create the distribution:

(make-cfd (let ((ht (make-hash-table)))
            (dolist (gender '(:female :male))
              (dolines (word (fmt "../nltk/data/names/~(~A~).txt" gender))
                (let ((last-char (char word (1- (length word)))))
                  (when (alpha-char-p last-char)
                    (set# gender ht
                          (cons last-char (get# gender ht)))))))
            ht))

And here's the plot of the distribution for reference:

(plot-table * :order-by 'char<)

Plot of the distribution of last letters in male/female names

Another approach would be to use a proxy object that allows pulling the data for a CFD from a list of files:

(defclass file-list ()
  ((files :initarg :files :reader files)))

and to specialize the make-cfd generic function for it (adding a transform keyword argument to be able to manipulate the file data):

(defmethod make-cfd ((raw file-list)
                     &key (eq-test 'equal) (transform 'identity))
  (let ((rez (make-hash-table :test eq-test)))
    (dolist (file (files raw))
      (set# (mkeyw (pathname-name file)) rez
            (make 'table-ngrams :order 1
                  :table (count-ngram-freqs
                          (mapcar transform
                                  (split-sequence
                                   #\Newline (read-file file)
                                   :remove-empty-subseqs t))))))
    rez))

And this is how we would call it:

(make-cfd (make 'file-list
                :files (mapcar #`(fmt "../nltk/data/names/~(~A~).txt" %)
                               '(:male :female)))
          :eq-test 'eql
          :transform #`(char % (1- (length %))))

This looks much heavier than the original approach, but sometimes it may be inevitable.

Yet another way is to use a different plotting function that dynamically builds the plot as it pulls data. This will require different style of communication with gnuplot: not via a temporary file, but giving commands to it dynamically. This is left as an exercise to the reader :)

Parting words

This part was more a discussion of various rather abstract design issues, than a demonstration of NLP techniques.

Overall, I don't see a lot of value in the special treatment of conditional frequency distributions as it is done in NLTK. Surely, it's one of many useful analytic tools, but I haven't seen such objects used in production code - unlike conditional frequencies per se or ngrams, which are all over the place.


Looking for a Lisper with Fire in Their Eyes

For 3 years now I've been working at Grammarly, where we're building (quite successfully) the most accurate service for correcting and improving English texts. In the limit, this task comes down to full-fledged language understanding, but until we reach that level, we have to look for workarounds. :) Naturally, our team, which we call the "core team", has more work than it can handle. But this work differs quite a bit from ordinary software engineering; rather, engineering is its final stage, preceded by a huge amount of experimentation and research.

I once called our system Grammar OS, because it somewhat resembles an OS: it has a low level of language services, intermediate layers of checkers, and user-facing applications on top. One of its key components is the constantly evolving grammar checker written in Lisp. Besides the genuinely hard problem itself, which Lisp's capabilities fit very well, and the high load, this project is also interesting in that about 5 computational linguists work on it, adding code to it all the time...

What I'm getting at is that for more than half a year now we've been looking for a Lisp developer who wants to join the development of this system and, further down the road, the work on new Lisp projects we're thinking about. Besides Lisp, our core uses a lot of Java (even more, actually), and we, on the whole, like it too. And a bit of Python. And the main focus is now shifting more and more towards various machine learning algorithms.

Finally, we have a great atmosphere at Grammarly and lots of people to learn from.


All in all, as far as I'm concerned, it's practically a dream job for a programmer who likes text processing, R&D, or Lisp. A logical question: why haven't we been able to find the right person for so long? There are several answers:
  • First, we hire everyone very slowly (it took almost half a year to find even a Java developer). Why - see above: the quality of the team matters more to us than speed.
  • Second, there aren't that many Lispers with industrial experience: in Kyiv there are a couple dozen at most; in Russia there are more, but few of them want to relocate.
  • Third, even technically strong candidates often don't have that fire in their eyes :(
In short, the situation is this:
  • There's complex and interesting work, with a good salary, a great team, company, and prospects.
  • We need a programmer who is passionate about their work, has a good grasp of algorithms, math, and statistics, and wants to write Lisp.
  • Lisp experience is required, but not necessarily from the real world: open-source or hobby projects will do too. In that case, though, the desire and ability to grow quickly in this direction matters.
If interested, write to


Free "Lisp Hackers" Ebook

More than a year has passed since I started publishing a series of interviews with Lisp hackers. It was a great project, but I'm going to wrap it up for now. All of the interviews are collected in a free ebook that is available through Leanpub. You should share it with your friends and on the interwebs! ;)

Below are a few observations on the whole series that I think are interesting.

The Road to Lisp

There was a famous survey on the Association of Lisp Users site called "The Road to Lisp". I have somewhat replicated it in the interviews, and some of the answers were quite surprising, as far as I'm concerned.

First of all, people came to Lisp through friends, who gave them a copy of SICP or showed some fascinating Lisp programs. There are also mentions of online forums (usenet, irc, etc.)

So SICP is the most influential book, followed by Peter Norvig's PAIP. The other mentioned sources are "Writing GNU Emacs Extensions" by Bob Glickstein, "The UNIX-Haters Handbook", and Paul Graham's "Beating the Averages".

Two interesting things get notable mentions as leading to Lisp: computer graphics and Caml-light (for the French guys).

And receiving it as legacy from the dad is somewhat priceless!

Parens transfer

I'd also like to add my experience here: the 2 things which brought me to Lisp were Eric S. Raymond's "How to Become a Hacker" which mentioned that you need to learn Lisp to become a better programmer, and Peter Seibel's great "Practical Common Lisp", which came out after all the interviewed hackers were already into Lisp, so they couldn't have been influenced by it.


This was an interesting question, because it made most of the interviewees step out of their comfort zone a little: after all, most of them are fond of Common Lisp. And the answers are interesting, although expected.

Basically, it boils down to 3 things:

  • the arrogance of the community and/or outsiders prejudice against it
  • lack of static typing and other ways to express constraints on the program
  • disdain for the platform it lives on, which includes problems with embedding into other systems, no good way to write one-liners, etc.

Also noted were:

  • necessity to struggle to achieve high performance
  • the unspecified corners, but this problem is becoming less and less acute

A Lisp project

This was a question proposed by Zach Beane: what Lisp project would you do if you weren't constrained in resources and other stuff? For the sake of completeness, I've also followed up with him, and here's his answer:

Right now I'd love to spend more time polishing Quicklisp. I'd love to make a good system for discovering the functionality you need (it's not easy to make the connection from "I need to fetch web pages" to "drakma"). I'd like to make interfaces for accepting user feedback and reviews. I'd like to make it easy to make connections from a project to its website, bug tracker, mailing list, documentation, etc. I'd like to publish my build tools and build logs. I'd like to make EVERYTHING a lot better, to make the system not just better than what CL had before but better than what other language ecosystems offer, too. It's all a lot of work.

As a fantasy project, I'd love to make a system for interesting visualization of complex data, something where it's easy to splat something quick and dirty on the screen/page, but which can grow in capability as the need arises. Or maybe just some tool for making audiovisual toys, with cool pictures and noises coming out.

The other interesting answers were:

  • another take on reflection
  • an OS, based on Linear Lisp, bootstrapped from Racket or Maru
  • Lisp bridges to other runtime systems
  • a Common Lisp object database
  • a Go-playing program
  • and, surely, an own Lisp dialect or even a separate programming language :)

Lisp companies

It's one of the popular myths that it's impossible to find a Lisp job. Well, it's definitely harder than finding a Java one, but as for the quality of the job, YMMV. Some of the Lisp companies were mentioned in the interviews:

  • ITA Software that employed up to 50 Lisp developers in its Boston office and was bought by Google for almost $1B
  • Franz in the Silicon Valley
  • Teclo Networks in Switzerland
  • MSI in Japan
  • Novasparks which operates from Boston and Paris

But a lot more weren't mentioned. To name a few:

  • probably, the biggest Lisp company in the world — the Portuguese SISCOG with 70+ Lisp developers
  • Clozure Associates band of Lisp gurus from the US East Coast
  • Copyleft from Norway
  • RavenPack from Spain
  • Australia's division of Accenture
  • Agri-Esprit from France

I would say, that most of them work in pretty interesting domains and with challenging problems. There are also many more one- or two-man Lisp shops scattered around the world. So, yes, Lisp companies are rare, but there's nothing wrong with relying on Lisp in a company: it doesn't fail you and may even bring some outstanding results. Not to mention the fun of the process itself...



Lisp Hackers: Marc Battyani

Marc Battyani is one of the people who put Lisp at the foundation of their business, and he doesn't seem to regret that decision. And the business itself is exploring very interesting aspects of high-performance and reconfigurable computing. Besides, he has a notable open-source contribution with cl-pdf/cl-typesetting libraries (which even I use, so thanks Marc!). He elaborates on all these and more in much detail in the interview.

Tell us something interesting about yourself.

I think maybe the most unusual thing about what I do is that I work on very different application domains which are sometimes even at completely opposite extremes, both in electronics and software. For instance, from ultra-low-power smart sensors based on $1 microprocessors, which will run continuously for 5+ years powered only by a small coin battery, up to the world's lowest latency supercomputers based on FPGAs costing thousands of dollars per chip. On the software side I use programming languages ranging from the lowest level languages like assembly, or even below with VHDL, up to really high level languages like Common Lisp.

I've always enjoyed mixing electronics design and higher-level computer science, and all that diversity probably gives me a different and original view on, say, computing and programming in general.

What's your job? Tell us about your company.

I'm the CTO of NovaSparks, a startup I founded in 2008 to make ultra-low-latency FPGA-based supercomputers for the financial markets. BTW, these things are really incredibly fast. For instance, on 10Gb/s Ethernet market data packets coming from exchanges like NASDAQ, we process the IP/UDP/multicast network stack, extract the messages from the packets, parse/decode/filter/normalize those messages, maintain the indexed order-book data structures, aggregate the price levels per stock, generate output messages, and finally send them to a server through PCI Express or 10Gb/s Ethernet network stacks. The nice thing is that we do all of that fully pipelined, at a rate of one message every 12 nanoseconds! To get an idea of how fast that is: in 12 ns, light travels only 3.6 meters (11.8 ft). Another way to view this performance is that the system can process 83 million financial messages per second without any queuing.

As an aside, it is interesting to note that the domain-specific language compilers and various other tools written in Common Lisp have been key enabling factors in the creation of NovaSparks.

I'm also the CEO and CTO of Fractal Concept, the company where we were developing that technology before starting NovaSparks. But as NovaSparks has been taking more than 100% of my time for the last few years, Fractal Concept has been less active, with only the development and maintenance of the smart sensors going on.

Do you use Lisp at work? If yes, how have you made it happen? If not, why?

I've always used Common Lisp for most of my work, and even when I need to use other programming languages (like VHDL for NovaSparks' FPGAs or JavaScript for web stuff), I generally use Common Lisp at least for a lot of related tasks: prototyping, designing and testing algorithms, extracting statistics, performing simulations, generating test data, analysing test runs.

The next step is very often to generate some or all of the code in other languages by designing various domain-specific languages (DSLs) which take care of the tedious aspects of programming in less powerful languages. I really like it when, from a few hundred lines written in an easy-to-use high-level DSL, we generate 10,000 to 60,000+ lines of very low-level VHDL code, saving months of development.

On that point, I would even say that I'm mostly doing Language-Oriented Programming: trying to abstract the domain-specific knowledge into various domain-specific languages with very different syntaxes (s-expressions, C-like, or others), sometimes even adding some GUI or data-based inputs to the mix. Those DSLs then generate most, if not all, of the application code in whatever language is needed, be it VHDL, asm, C, JavaScript, or Common Lisp.
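To make the idea concrete, here is a minimal, made-up sketch (nothing like NovaSparks' actual compilers) of the pattern: the high-level description lives in Lisp data, and a small function emits the tedious VHDL text. The function name and the tiny "DSL" syntax below are invented purely for illustration.

(defun entity->vhdl (name ports &optional (stream *standard-output*))
  "PORTS is a list of (port-name direction width) triples, e.g. (clk :in 1)."
  (format stream "entity ~(~a~) is~%  port (~%~{~a~^;~%~}~%  );~%end ~(~a~);~%"
          name
          (loop for (port dir width) in ports
                collect (format nil "    ~(~a~) : ~(~a~) ~a"
                                port dir
                                (if (= width 1)
                                    "std_logic"
                                    (format nil "std_logic_vector(~d downto 0)"
                                            (1- width)))))
          name))

;; (entity->vhdl 'feed_handler '((clk :in 1) (data_in :in 64) (msg_out :out 128)))
;; prints a complete entity declaration. A real DSL compiler would of course also
;; generate the architecture, pipeline registers, and so on from the same description.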

What brought you to Lisp? What holds you?

A friend of mine gave me a version of Le_Lisp, a Lisp dialect used in the '80s by French universities and engineering schools to teach high-level programming concepts. At that time I was mostly programming in Z80 assembly language on a TRS-80, and the straight jump from ASM to Lisp was quite a shock and an eye-opener.

Since then I've used Common Lisp for countless projects, ranging from a few hours of work to multi-year ones, and I still find it awesome. Where else can you find a language providing such powerful, multi-paradigm features as CLOS, generic functions, the MOP, macros, closures, s-expressions, lambdas, an interactive REPL (read-eval-print loop), native compilation of applications, on-the-fly native compilation of generated code, the condition system, interactive live programming, real-time live debugging of running software, and more!

What's the most exciting use of Lisp you had?

I've got so many of them that picking only one would be difficult, so here are a few:
  • softscan: real-time driving and data acquisition for automated non-destructive testing installations, with real-time 3D display (a first in 1995)
  • my web application framework: automatic generation of really complex web applications (the whole stack, from the HTML/JavaScript front-end to the server back-end). It was fully Ajax-based, with dynamic modification of the displayed pages without reloading. This seems obvious nowadays, but it was also a first in 2000, and IIRC the term "Ajax" was only coined 5 years later.
  • hpcc: the awesome DSL-to-VHDL compiler. In many respects, VHDL is the complete opposite of Common Lisp. It's a hardware description language used to program FPGAs, and it deals with the lowest possible programming level, with data types like signals, clocks, and bits. Programming at that level is really tedious, time-consuming, and verbose, so it's a relief to be able to generate tens of thousands of lines of highly optimized VHDL code from just a few hundred lines of some high-level domain-specific language.
  • cl-pdf/cl-typesetting: being able to generate the first PDF files from scratch in only 107 lines of Common Lisp was an aha moment (a minimal usage example follows this list).
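Since cl-pdf came up, here is roughly the hello-world that ships with the library's examples, to show how little code it takes to produce a PDF from scratch (argument details may vary slightly between cl-pdf versions):

;; (ql:quickload "cl-pdf") to load the library via Quicklisp
(pdf:with-document ()
  (pdf:with-page ()
    (pdf:in-text-mode
      (pdf:set-font (pdf:get-font "Helvetica") 36)
      (pdf:move-text 100 800)
      (pdf:draw-text "Hello from cl-pdf!")))
  (pdf:write-document #P"hello.pdf"))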

What you dislike the most about Lisp?

The language itself is good enough, and anyway Common Lisp makes it really easy to change most of itself to add the new and cool stuff or ideas of the day.

In fact, what I dislike about Lisp is outside the language and is more related to the (mis)perception people have of it. Having to justify its use almost every time, and sometimes even having to fight to be able to use it, is somewhat annoying and tiresome.

Of course, Lisp is not for everybody, but that is not a reason for nobody to use it. In fact, I view Lisp as a kind of amplifier: it will give awesome results when used by brilliant developers, but will end up producing an incredible mess when used by people without any clue about what they are doing. That's one aspect that makes Lisp very different from other languages, which are specifically designed to normalize and average out what people can do with them.

Describe your workflow, give some productivity tips to fellow programmers.

I obviously use Emacs and all the nice stuff that works with it, like SLIME, org-mode, magit, and lots of other packages. I switched from Subversion (svn) to git for all my projects, and I now use git mostly from Emacs with magit.

In general, I have several instances of LispWorks running as standalone apps on my laptop, connected to SLIME through M-x slime-connect rather than being started by SLIME. I do a lot of exploratory programming and interactively play with the code while I write it, and if I have to optimize some code I very often use #'disassemble.
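For readers who haven't tried this setup: the trick is to start a Swank server inside the standalone image and then attach Emacs to it. A minimal sketch, with an arbitrary port number:

;; Inside the running standalone image (LispWorks or any other CL):
(ql:quickload "swank")
(swank:create-server :port 4005 :dont-close t)  ; keep accepting connections

;; Then, in Emacs: M-x slime-connect RET 127.0.0.1 RET 4005 RET
;; gives you a live REPL into the running application.

;; And for the optimization work mentioned above, a quick look at compiler output:
(defun add3 (x) (declare (fixnum x) (optimize speed)) (+ x 3))
(disassemble #'add3)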

As for my Lisp coding style, I really do like generic functions, CLOS, and the MOP. As mentioned earlier, I try to generate as much of the code as possible, using macros in simple cases and full-blown DSLs in more complex ones. Sometimes I don't mind making some premature optimizations when I know that speed will be important for a project. BTW, when speed matters, I like to generate and compile code on the fly when it's useful (and possible), and I try to generate really optimized code. It's always cool when you have Common Lisp applications running much faster than C++ ones.
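A minimal sketch of what "generate and compile code on the fly" can look like in portable Common Lisp (the function name and the trivial formula are made up): build a specialized lambda form at runtime, hand it to COMPILE, and call the resulting natively compiled function.

(defun make-scaler (factor)
  "Return a compiled function that multiplies a fixnum X by the constant FACTOR."
  (compile nil `(lambda (x)
                  (declare (fixnum x) (optimize (speed 3) (safety 0)))
                  (* x ,factor))))

;; (funcall (make-scaler 42) 10) => 420
;; Because FACTOR is spliced in as a literal, the compiler can specialize the
;; multiplication, which is the same trick DSL compilers use on a larger scale.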

I also refactor a lot. Generally, when I start on some new stuff, I try to make a first version that works well enough; then, once it's done or after some time, I get some cool ideas to make something much better. At that point I'm really happy that Common Lisp makes refactoring very easy, thanks to optional and keyword args, macros, and generic functions with multiple dispatch, because all of that lets me make even very deep modifications very easily. I really find that those Common Lisp features make software very resilient to code modifications.
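As a small illustration of that point (all names below are invented): keyword arguments let a function grow without breaking existing callers, and multiple dispatch lets you extend behaviour by adding methods instead of editing old code.

;; Adding &key parameters keeps every existing call site valid:
(defun normalize-message (msg &key (venue :nasdaq) strict)
  (declare (ignorable venue strict))
  msg)   ; old callers like (normalize-message msg) still work unchanged

;; Multiple dispatch: new behaviour is a new method, not an edit to old code.
(defgeneric encode (message transport))

(defmethod encode ((message string) (transport (eql :udp)))
  (list :udp message))

(defmethod encode ((message string) (transport (eql :pci-express)))
  (list :pcie message))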

What are the advantages of using Lisp in the HPC field? What are the drawbacks? Are you happy with your technology choice?

The HPC field is huge, and I can only talk about the small corner of it that I know, which is the ultra-low-latency processing of vast amounts of data that we do at NovaSparks. For that, the various DSL compilers generating VHDL code have really been a key enabling factor. Programming FPGAs is notoriously difficult and time-consuming, and this is the major factor limiting the use of reconfigurable computing in the HPC field. The use of those DSL compilers has made it practical to use those FPGAs for processing financial data.

BTW, I'm looking for other HPC domains in which those DSL-to-VHDL compilers could be used, so feel free to contact me if you have some ideas.

If you had all the time in the world for a Lisp project, what would it be?

Again, that's a difficult choice. Speaking of pure Lisp projects, I've been wanting for a long time to clean up and modernize my web app framework before releasing it as open source, so maybe that would be a good project to start with.

Otherwise, as mentioned above, I'm looking for possible applications and opportunities for leveraging the DSL => hardware compilers and the FPGAs for some Big Data processing. I have a few ideas, but nothing specific for now.

Anything else I forgot to ask?

A conclusion? (Everybody wants a conclusion.) So here is mine:

Common Lisp is Awesome! It is much easier to use nowadays, thanks to all the projects and libraries that Quicklisp makes available. If you do not know Common Lisp, then learn it: it will make you a better programmer anyway, even if you do not end up using it directly afterwards.