2020-11-23

The Common Lisp Condition System Book

Several months ago I had the pleasure of being one of the reviewers of the book The Common Lisp Condition System (Beyond Exception Handling with Control Flow Mechanisms) by Michał Herda. I doubt that I have contributed much to the book, but, at least, I can express my appreciation in the form of a reader review here.

My overall impression is that the book is very well-written and definitely worth reading. I always considered special variables, the condition system, and multiple return values to be the most underappreciated features of Common Lisp, although I had never imagined that a whole book could be written on these topics (or even just two of them). So, I was pleasantly flabbergasted.

The book has a lot of things I value in good technical writing: a structured and logical exposition, detailed discussions of various nuances, a subtle sense of humor, and lots of Lisp. I should say that reading the stories of Tom, Kate, and Mark was so entertaining that I wished to learn more about their lives. I even daydreamt (to use the term often seen throughout the book) about a new semi-fiction genre: stories about people who behave like computer programs. I guess a book of short stories containing the two from this book and the story of Mac from "Practical Common Lisp" can already be initialized. "Anthropomorphic Lisp Tales"...

So, I can definitely recommend reading CLCS to anyone interested in expanding their Lisp knowledge and general understanding of programming concepts. And although I can call myself quite well versed in the CL condition system, I was still able to learn several new tricks and enrich my understanding. Actually, that is quite valuable, as you never know when one of its features could come in handy to save your programming day. In my own Lisp career, I had several such a-ha moments and continue appreciating them.

This book should also be relevant to those who have a general understanding of Lisp but are compelled to spend their careers programming in inferior languages: you can learn more about one of the foundations of interactive programming and appreciate its value. Perhaps, one day, you'll have access to programming environments that focus on this dimension, or you'll be able to add elements of interactivity to your own workflow.

As for those who are not familiar with Lisp, I'd first start with the classic Practical Common Lisp.

So, thanks to Michał for another great addition to my virtual Lisp books collection. The spice must flow, as they say...

2020-10-14

Why RDF* Is a Mess... and How to Fix It

TL;DR

RDF* is a new group of standards that aims to bridge the gap between RDF and property graphs. However, it has taken an "easy" route that made it ambiguous and backward incompatible. An alternative approach that doesn't suffer from the mentioned problems would be to introduce the notion of triple labels instead of using the embedded triples syntax.

How Standards Should Be Developed

Our CTO used to say that standards should be written solely by those who are implementing them. And although this statement may be a little too extreme in some cases, it's a good rule of thumb. The main reason for it is not that it will make the standards simple to implement. Moreover, I don't want to argue that allowing a simple implementation is the main requirement for a standard. What's more important is that the implementors have a combined exposure to the whole variety of potential use cases, both from user feedback and from their own experience of consuming their own dogfood. Besides, it doesn't hurt, in the long run, that something simple to implement is also, usually, simple to understand, reason about, and use.

Obviously, giving all power to the implementers might lead to abuse of this power, but that's a second-order problem and there are known ways to mitigate it. Primarily, by assembling representatives of several implementations in a committee. An approach that is often frowned upon by hotheads due to alleged bureaucracy and the need for compromise, yet one leading to much more thought-out and lasting standards. A good example of such a standard is Common Lisp.

RDF*

But I digress, let's talk about RDF*. This is the major candidate to solve the problem of RDF triple reification that is crucial to basic compatibility between RDF and property graph representations. In short, RDF defines the simplest elegant abstraction for representing any kind of data — a triple that comprises a subject, predicate, and object. Triples are used to represent facts. Besides, there's a fourth component to a triple called a graph. So, in fact, the historic name triple, in the realm of RDF triple-stores, currently stands for a structure of 4 elements. The graph may be used to group triples. And despite the beauty of the simple and elegant concept of a triple, in theory, having this fourth component is essential for any serious data modeling.

Now, we know how to represent facts and arbitrarily group them together. So, the usual next-level question arises: how to represent facts about facts? Which leads to the problem of reification. Let's consider a simple example:

:I :have :dream .

This is a simple triple represented in the popular Turtle format. As RDF deals with resources, it assumes that there's some default prefix defined elsewhere (let's say it's https://foo.com/). The same triple may be represented in the basic NTriples format like this:

<https://foo.com/I> <https://foo.com/have> <https://foo.com/dream> .

What if we want to express the facts that it is a quote by Martin Luther King and that it was uttered in 1963? There are several ways to approach it:

  1. The obvious but wrong one (it reimplements the semantics of RDF in RDF, introducing unnecessary complexity): turn the statement into a set of triples, each describing one property of the original statement, with the subject, predicate, and object themselves becoming properties:
    _:fact1 rdf:subject :I ;
            rdf:predicate :have ;
            rdf:object :dream ;
            meta:author wiki:Martin_Luther_King_Jr. ;
            meta:date "1963" .
    

    This works but, as I said, is conceptually wrong. It's RDF's Java-style cancer of the semicolon. It leads to storage waste and poor performance.

  2. The other one is to use singleton predicates and assign metadata to them:
    :I :have#1 :dream .
    :have#1 meta:author wiki:Martin_Luther_King_Jr. ;
            meta:date "1963" .
    

    This is complete nonsense as it makes SPARQL queries unreasonably complex, unless you implement, in the query engine, a special syntax that ignores the #1 suffix.

  3. Yet another one, which I consider to be the best (close to perfect), is to use the graph to attach triple metadata instead of grouping the triples.
    :t1 { :I :have :dream . }
    :t1 meta:author wiki:Martin_Luther_King_Jr. ;
        meta:date "1963" .
    

    Here, we use another (there are many more of them :) ) RDF format — TriG, which is an extension to Turtle for representing graphs. :t1 is a unique graph that is associated with our triple, and it is also used as a subject resource for metadata triples. This approach also has minor drawbacks, the most important of which is that grouping triples requires more overhead. We'll have to add an additional triple if we'd like to express that :t1 belongs to a graph :g1:

    :t1 meta:graph :g1 .
    

    On the flip side, that will open the possibility of putting the triple into more than a single graph. In other words, now grouping may be expressed as yet another triple property, which it, in fact, is.

  4. RDF* takes a different approach: embedding triples. We may say it tries to do the obvious by attaching metadata directly to the triple:
    << :I :have :dream >> meta:author wiki:Martin_Luther_King_Jr. ;
                          meta:date "1963" .
    

    Besides, you can also embed a triple into an object:

    wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> .
    

    And do nesting:

    << wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> >> meta:date "1963" .
    

    Neat, at first glance... Yet, there are many pitfalls of this seemingly simple approach.

What's Wrong with RDF*

The first obvious limitation of this approach is that this syntax is not able to unambiguously express all the possible cases. What if we want to say something like this:

<< << :I :have :dream >> meta:author wiki:Martin_Luther_King_Jr. ;
                         meta:date "1963" >>
   meta:timestamp "2020-10-13T01:02:03" .

Such syntax is not specified in the RFC and it's unclear if it is allowed (it seems like it shouldn't be), although this is perfectly legit:

<< << :I :have :dream >> meta:author wiki:Martin_Luther_King_Jr. >>
   meta:timestamp "2020-10-13T01:02:03" .

What about this:

wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> .
wiki:John_Doe meta:quote << :I :have :dream >> .

Do these statements refer to the same :I :have :dream . triple or two different ones? RDF* seems to assume (although the authors don't say that anywhere explicitly) that each subject-predicate-object combination is a unique triple, i.e. there can be no duplicates. But RDF doesn't mandate it. So, some triple stores support duplicate triples. In this case, there is no way to express referencing the same embedded triple in object position from multiple triples in Turtle*.

Moreover, there's a note in the RDF* spec that mentions that the embedded triples should not, actually, be asserted (at least, in the object position — it is unclear whether that also applies to them in the subject position). I.e. in the following example:

wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> .

the triple :I :have :dream might be treated differently than the top-level triples, and a SPARQL query like SELECT ?obj { :I :have ?obj } will not return :dream. And only SELECT ?obj { ?s ?p << :I :have ?obj >> } will be an acceptable way of accessing the embedded triple. We're now questioning the most basic principles of RDF...

And I haven't even started talking about graphs (for there's no TriG* yet). With graphs, there are more unspecified corner cases. For instance, the principal question is: can an embedded triple have a different graph than the enclosing property triple? It seems like a desirable property; moreover, it will be hard to prevent the appearance of such situations arising from directly manipulating the triple store (and not by reading serialized TriG* statements).

This is, actually, the major problem with Turtle*: it gives the impression that it exists in a vacuum. To see RDF* in context, we have to understand that the core of RDF comprises a group of connected standards: NTriples/NQuads, Turtle/TriG, and SPARQL. Turtle is a successor to NTriples that makes it more human-friendly, but all of them build on the same syntax. And this syntax is used by SPARQL also. Yet, there's no NTriples* and it's unclear whether it can exist. GraphDB implements a hack by embedding the triple (or rather its hash, but that doesn't matter much) in a resource (like <urn:abcdefgh>), but, first of all, that's ugly, and, secondly, it also assumes no duplicates. Yet, NTriples is the basic data interchange format for RDF, and forsaking it is a huge mistake. There's also no TriG* yet, as I mentioned. Another sign that RDF* is mostly a theoretical exercise. TriG* can be defined as an extension to TriG with Turtle* syntax, but I have already briefly mentioned the issue it will face.
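
For illustration only, the shape of that hack is roughly the following (the urn identifier here is invented; in reality it would encode the hash of the embedded :I :have :dream triple):

<urn:hash:1a2b3c4d> <https://meta.org/author> <https://en.wikipedia.org/wiki/Martin_Luther_King_Jr.> .
<urn:hash:1a2b3c4d> <https://meta.org/date> "1963" .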

To sum up, the main deficiencies of Turtle* are:

  • poor backward compatibility (up to the point of not taking into account other related standards)
  • limited syntax
  • lots of underspecified corners

And, in my opinion, they originate from the desire to provide the most obvious UI while paying no attention to all the other considerations.

An Obvious Fix

What's the alternative? Well, probably, Turtle* will end up being implemented in some way or another by all the triple-store vendors. Although, I expect the implementations to be quite incompatible due to the high level of underspecification in the RFC.

Yet, you don't have to wait for Turtle* as graph-based reification is already available and quite usable.

Also, if we still had a choice to define an extension to RDF with the same purpose as RDF*, I'd take another quite obvious route. It may be less sexy, but it is at least as simple to understand and much more consistent both within itself and with other RDF standards. Moreover, a similar approach is already part of RDF — blank nodes.

Blank nodes are resources that are used just as ids. We could as well use a blank node instead of https://foo.com/t1 as our graph label resource:


_:b1 { :I :have :dream . }
_:b1 meta:author wiki:Martin_Luther_King_Jr. ;
      meta:date "1963" .

The underscore syntax denotes blank nodes, so _:b1 creates a graph node that is used to connect other nodes together, while we don't care about its representation at all.

Similarly to blank nodes syntax, we could introduce triple label syntax:

^:t1 :I :have :dream .

This statement will mean that our triple has a t1 label. Now, we could add metadata to that label — in exactly the same manner as with graph-based reification (*:t1 is a "dereference" of the triple label):

*:t1 meta:author wiki:Martin_Luther_King_Jr. ;
     meta:date "1963" .

This would map directly to the implementation that will be able to unambiguously link the triple to its properties. Also, it would enable this:

wiki:Martin_Luther_King_Jr. meta:quote *:t1 .
wiki:John_Doe meta:quote *:t1 .

And defining NTriples*/NQuads* becomes possible, as well. Here are the NQuads* triples for the MLK quote (with a graph g1 added for completeness):

^:t1 <https://foo.com/I> <https://foo.com/have> <https://foo.com/dream> <https://foo.com/g1> .
^:t2 *:t1 <https://meta.org/author> <https://en.wikipedia.org/wiki/Martin_Luther_King_Jr.> .
*:t1 <https://meta.org/date> "1963" .

Alas, this simple and potent approach was overlooked for RDF*, so now we have to deal with a mess that is both hard to implement and will likely lead to more fragmentation.

2020-08-12

Announcing CL-AGRAPH

AllegroGraph (agraph) is one of the hugely underappreciated pieces of software freely available to the programming community. This especially relates to the Lisp community, as agraph is one of the largest, most successful, and most mature Lisp software projects around, yet it is hardly used by lispers. In part, its relative obscurity may be explained by the following factors:

  • the software is commercial... but it also has a free version with very reasonable limitations that can be used in the majority of hobby and other small projects
  • it is written in a commercial Lisp — Allegro CL... but it also has a free version; and it can run agraph
  • out-of-the-box, agraph has only an ACL client (in addition to the ones in mainstream languages like java or python)... in this post, a portable Lisp client is introduced to fill this gap

In fact, free access to agraph may enable the development of a wide variety of useful applications, and I plan another post about the often-overlooked value that RDF may bring to almost any software project. Yet, to showcase it in full with code, I was missing the client. Besides, I have an occasional personal need for it, and so, with some hacking over several weekends, here it is — a minimal portable Lisp client for agraph that has, IMHO, a couple of interesting high-level features and can also be rather easily extended to support other RDF backends.

Disclosure: for the last 2.5 years, I've been working for Franz on AllegroGraph. Over that period, I was able to participate in the development and support of different parts of the system and have gradually come to appreciate it both as an engineering accomplishment and an extremely useful data store.

The HTTP API

cl-agraph provides a way to interact from a running Lisp process with AllegroGraph via its HTTP API. I call it minimal as the client implements only the essential CRUD commands and the SPARQL interface. That is the critical part that enables the usage of the triple store as part of any application. However, the agraph API also provides many administrative capabilities. Those are not (yet) supported by cl-agraph, although they may be implemented rather easily (I'll show how this can be done below). Yet, those endpoints are accessible directly both via the WebView management web interface and the agtool command-line utility. So, the absence of their support in the client doesn't preclude the effective use of agraph from any application.

The client uses nquads as the data interchange format. The availability of standard data formats, such as nquads, is one of the great advantages of RDF as a way to model any data. And it also made the development of this client much easier. To work with nquads, I have extended the cl-ntriples library by Viktor Anyakin (ntriples is a simpler version of the nquads format).

The basic data structure of agraph is a `triple` (actually, a quad, but the name "triple" is more common):

(defstruct (triple (:conc-name nil)
                   (:print-object print-triple))
  s p o g
  triple-id
  obj-lang obj-type)

Triple components are uris, blank nodes, and literals (strings, numbers, booleans).

When the triple is printed, it is displayed in the standard nquads format:

AGRAPH> (<> (make-blank-node) "http://foo.com/foo" "bar" :g "http://foo.com/baz" :lang "en")
_:bn1899 <http://foo.com/foo> "bar"@en <http://foo.com/baz> .
AGRAPH> (s *)
_:bn1899

I have chosen the diamond sign (<>) to signify triples (as in ntriples/nquads formats, the URIs are enclosed in it). So, the API functions that deal with triples are mostly accompanied by this sign. The parts enclosed in <> in the nquads representation are uris. Also, the very short names s, p, o, and g are used as triple parts accessors. This is a generally discouraged approach, but from my experience working with AG, I have learned that these functions are used very often and no one will be mistaken when seeing them, in the context of triple-store interactions. Also, usually, they will be used with a package prefix anyway, so the common code pattern, in a convenient setup, may look like this:

(defpackage #:foo
  (:local-nicknames (#:ag #:agraph))
  ...)

FOO> (ag:with-ag (:repo "bar")
       ;; Though, there's a more efficient variant of triple iteration
       ;; that will be shown below
       (dolist (tr (ag:get<> :p (ag:uri "baz:quux")))
         (when (ag:blank-node-p (ag:s tr))
           (ag:rem<> tr))))

The function <> ensures proper types of the triple components. There's also raw make-triple that creates the triple structure using the arguments as is.
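
To illustrate the difference, here is a minimal sketch (assuming, as elsewhere in this post, that uri accepts both prefixed names and full IRI strings):

;; <> coerces its arguments: a string in the predicate position is treated
;; as a uri, while a string in the object position stays a literal, hence
;; the explicit uri call for the object.
(<> (make-blank-node) "rdf:type" (uri "http://foo.com/bar"))

;; make-triple, being the raw defstruct constructor, takes the components
;; as keyword arguments and stores them as is, with no coercion.
(make-triple :s (make-blank-node) :p (uri "rdf:type") :o (uri "http://foo.com/bar"))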

RDF permits specifying aliases for uri prefixes and the uri function is aware of that:

AGRAPH> (register-prefix "foo" "http://foo.com/")
"http://foo.com/"
AGRAPH> (<> (make-blank-node) "rdf:type" (uri "foo:quux"))
_:bn1921 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://foo.com/quux> .

You can see that we have used the default expansion for prefix "rdf" and the user-defined one for prefix "foo". The object of the triple needed to be explicitly converted to an uri (unlike the predicate) before it was passed to the <> function, as objects may also be strings and it's impossible to reliably distinguish the two behind the scenes.

The other core data structure of CL-AGRAPH is ag-config. It lists the connection parameters that are used to make the client HTTP requests. Most of the parameters have reasonable defaults. The macro with-ag is a common Lisp with-style macro that is intended for creating an interaction context with fixed config parameters. Usually, it should be given at least the :repo argument.

Here are some simple interactions with agraph:

AGRAPH> (open-ag :repo "test" :port 12345)
NIL
AGRAPH> (let ((subj (make-blank-node)))
          (add<> (<> subj "rdf:type" (uri "foo:bar"))
                 (<> subj "foo:baz" "quux" :lang "en")))
2
AGRAPH> (get<>)
(_:bF049DE41x7325578 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://foo.com/bar> .
 _:bF049DE41x7325578 <http://foo.com/baz> "quux"@en .)
AGRAPH> (rem<> :g (uri "foo:bar"))
0
AGRAPH> (get<>)
(_:bF049DE41x7325578 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://foo.com/bar> .
 _:bF049DE41x7325578 <http://foo.com/baz> "quux"@en .)
AGRAPH> (rem<> :o (uri "foo:bar"))
1
AGRAPH> (get<>)
(_:bF049DE41x7325578 <http://foo.com/baz> "quux"@en .)
AGRAPH> (count<>)
1
AGRAPH> (close-ag)
T

CL-AGRAPH defines the function map<> and the macro do<> in the standard Lisp iteration paradigm. Map performs iteration with accumulation, while do is intended to be used just for the side-effects. Actually, do<> is expressed, internally, in terms of map<>. The main advantage of using these functions instead of just calling the normal mapcar or dolist on the results of get<> is their streaming mode of operation. Instead of pulling, potentially, millions of triples from the triple-store into the program's memory (or organizing paging-based iteration, as get<> has a :limit option), the triples are streamed from the backend and discarded after being processed.

AGRAPH> (map<> 'o)
("quux")

Unlike the usual mapcar, this call didn't have a second argument: it iterated over all the triples in the repository. Yet, surely, it can be limited to certain subjects, predicates, objects, and/or graphs:

AGRAPH> (do<> (tr :s "foo" :p "bar" :o "baz" :g "quuz")
          (print tr))
NIL  ; sorry, there were no triples with such parameters

AGRAPH> (do<> (tr :p (uri "foo:baz"))
          (print tr))
_:bF049DE41x7325578 <http://foo.com/baz> "quux"@en .

Defining Your Own Commands

All these commands use, under the hood, the ag-req utility that can be utilized to define other API wrappers. For example, here is a function to get all the duplicate triples in the repository (that's one of the features of agraph that permits several triples with the same SPOG to be added):


(defun duplicates (&key mode)
  (ag-req "/statements/duplicates" (list "mode" mode)))

However, the simplest commands can be defined even without ag-req, by using just the high-level functions. Here is a small example — the function that checks if a triple exists in the repository:


(defun storedp (tr)
  (assert (triple-p tr))
  (when (get<> :s (s tr) :p (p tr) :o (o tr) :limit 1)
    t))

NB. As the client uses a REST HTTP + nquads protocol, it should be rather easy to extend it to support other triple-store backends such as GraphDB, Stardog, or Virtuoso, provided they also support this method of interaction.

Sessions & Transactions

Now, let's return to open-ag and with-ag. Both of them have a sessionp keyword argument (that is, by default, true for with-ag and nil for open-ag). A session is a mechanism for speeding up some agraph operations and for running transactions. Without an active session, each update is committed at once, which is much more costly than batching up groups of operations. However, if a session is established, you need to explicitly call commit to enact the modifications to the triple-store. I.e. sessions create an implicit transaction. with-ag will commit the transaction after executing its body. It is also possible to manually rollback the changes. Any unhandled error inside with-ag will also, effectively, cause a rollback: the session will be terminated without a commit.

An agraph session has a certain lifetime/timeout that can also be specified as a parameter to open-ag/with-ag. However, there's also a maximum possible lifetime that is configured by the triple-store admin. Once the timeout expires, the session is terminated. with-ag will try to rerun the transaction if it encounters a terminated session — but that will be done just once. And the user should be careful not to place transaction-unfriendly code in the body of with-ag. open-ag, on the contrary, defaults to sessionless mode. This way the additional complexity of timeouts and transactions is removed. In this mode, the only thing that open-ag does is configure the connection spec and internal caches.
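
For illustration, here is a minimal sketch of the usual session workflow (the repository name is just an example):

;; sessionp defaults to true for with-ag, so the body runs in an implicit
;; transaction: the updates are batched and committed on normal exit, while
;; an unhandled error terminates the session without a commit.
(with-ag (:repo "test")
  (let ((subj (make-blank-node)))
    (add<> (<> subj "rdf:type" (uri "foo:bar"))
           (<> subj "foo:baz" "quux" :lang "en"))))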

Symbolic SPARQL

Another thing worth some attention in this client is its symbolic SPARQL facility that allows generating SPARQL requests from s-expressions. Query generation from sexps is a common Lisp trick that can be found in such libraries as CLSQL & co. However, the implementation I came up with is, from my point of view, much simpler.

Here are a few simple examples that give a general impression of symbolic SPARQL:


AGRAPH> (generate-sparql '(select * (?s ?p ?o))
                         nil)
"SELECT 
* 
{
?S ?P ?O .
 }
"
AGRAPH> (generate-sparql '(select * (:union (?s ?p ?o)
                                            (:graph ?g (?s ?p ?o))))
                         nil)
"SELECT 
* 
{ {
?S ?P ?O .
 }
UNION
{
GRAPH ?G {
?S ?P ?O .
 } } }
"

The function generate-sparql uses a very simple evaluation rule. It will print any symbols as is, while lists are processed recursively in 3 possible ways depending on the first element:

  • a keyword in first position means that custom rules should be invoked;
  • any other symbol causes the list to be treated as a triples pattern containing subject, predicate(s), and object(s);
  • another list invokes recursive processing.

Now, the custom rules are defined as methods of a generic function process-custom, which makes this mechanism quite extensible. Let's see an example SPARQL sexp and the custom rules that were used to handle it:


AGRAPH> (generate-sparql '(select ?date ?title
                                  ((?g |dc:date| ?date)
                                   (:filter (:> ?date (:|^| "2005-08-01T00:00:00Z"
                                                            |xsd:dateTime|)))
                                   (:graph ?g (?b |dc:title| ?title))))
                         nil)
"SELECT 
?DATE 
?TITLE 
{ ?G dc:date ?DATE .
 FILTER ( (?DATE > \"2005-08-01T00:00:00Z\"^^xsd:dateTime ) )
 GRAPH ?G {
?B dc:title ?TITLE .
 } }
"
(defgeneric process-custom (key tree out &key)
  ...
  (:method ((key (eql :|^|)) tree out &key)
    (assert (and (dyadic tree)
                 (stringp (first tree))
                 (symbolp (second tree))))
    (format out "~S^^~A" (first tree) (second tree)))
  (:method ((key (eql :filter)) tree out &key)
    (assert (single tree))
    (format out "FILTER ( ~A )~%"
            (process-expr (first tree) nil :top-level nil)))
  (:method ((key (eql :graph)) tree out &key)
    (assert (dyadic tree))
    (format out "GRAPH ~A " (first tree))
    (process-expr (second tree) out :top-level t))
  (:method ((key (eql :>)) tree out &key top-level)
    (process-arithm key tree out :top-level top-level))
  ...
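
As an illustration of that extensibility, here is a sketch (not part of the library) of how one could plug in an OPTIONAL rule in the same style, reusing the single and process-expr helpers seen above:

(defmethod process-custom ((key (eql :optional)) tree out &key)
  ;; mirrors the :graph method: print the keyword, then recurse into the
  ;; wrapped pattern as a top-level group
  (assert (single tree))
  (format out "OPTIONAL ")
  (process-expr (first tree) out :top-level t))

With such a rule in place, a form like (:optional (?s |foaf:mbox| ?mbox)) would render as an OPTIONAL group in the generated query.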

The sexp-based form of SPARQL queries may seem unusual, but it is much more convenient and powerful than the standard string format:

  • it is more convenient to edit;
  • passing variables is easy;
  • and you can write functions and macros to construct these expressions from parts, which is very rough and error-prone with the string-based format (see the sketch below).
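
For instance, here is a sketch of such a helper (quotes-by and the meta:quote property are made up for this example):

;; The author resource is passed as a symbol, e.g. '|wiki:Martin_Luther_King_Jr.|,
;; and spliced straight into the query sexp: no string escaping is involved.
(defun quotes-by (author)
  (generate-sparql `(select ?quote (,author |meta:quote| ?quote))
                   nil))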

I considered implementing symbolic SPARQL ever since I started working with it as programmatically filling string templates is so primitive. Finally, I've found time to realize this idea!

Afterword

This announcement is targeted mainly at those who are already "enlightened" about RDF triple stores and were eagerly waiting for a chance to try agraph. :) I hope that it provides a good starting point for you to actually do it. I believe, the agraph download webpage gives enough guidance regarding installing it either on your machine or running it from the AWS Marketplace.

As I said, there will be another post (for now, unclear when) that will be an introduction to RDF capabilities for those developers who are still "in ignorance" about the possibilities that triple stores may open for their applications. Stay tuned...

2020-07-17

Programming Algorithms 2nd Edition

Apress — the most dedicated publisher of Common Lisp books, famous for giving the world "Practical Common Lisp" and "Common Lisp Recipes" — has approached me to publish "Programming Algorithms", and, after some consideration, I have agreed. So, the book will be released under the title "Programming Algorithms in Lisp" and with some slight modifications to the content.

It was not an easy decision to make. Ultimately, my goal for the book is to make it as widely read as possible. In the three months since it was published on Leanpub, it has been downloaded more than 1500 times, and almost 250 people have also donated some money in its support. The paperback book was shipped to around 40 locations around the globe: even to Australia and Colombia. Besides, I have received lots of positive feedback and some improvement suggestions. I'm very grateful and happy that it has seen such a positive reception.

In my opinion, the book has the potential to reach at least an order of magnitude more readers. However, to achieve that, a targeted promotion effort is necessary. I have already mostly exhausted the capacity of the free PR channels I had access to (such as Hacker News, Reddit, and Twitter). I did have a long-term promotion strategy, but it required spending time and (possibly) financial resources that could be used elsewhere.

The Apress edition of the book will not be free, but it will have the full power of this respected publisher behind it. So, my hope is that it will thus reach an even wider audience. Very soon I will have to take down the free version of the book, so this is the last chance to download it (if you or some of your friends planned to do it). The book webpage will remain active and will collect relevant information and news, so stay tuned...

vseloved.github.io/progalgs

2020-06-22

Eval Spotted in the Wild

(#lisptips on the dynamic nature of CLOS magnified by eval)

Since starting programming in Lisp, I have always had the impression that using eval is taboo. Or rather, a cul-de-sac that you never want to touch. When I was only learning Lisp, I had a couple of unsuccessful and rather stupid attempts at utilizing it to bend the language to my view of how it should function — only to learn how it is really intended to function. After that, it occupied its rightful place on my mind's shelf of "low-level constructs only needed to implement the language".

Yet, recently, I saw a legitimate use case for it and even wrote a piece of production code containing eval! That was such a revelation that I wanted to share it in this short post.

So, here is the case I needed to solve: I was developing a parser for a new data format that had to fit into an existing set of parsers. The parsers not only decode the data but also store it in the datastore using the CLOS machinery for datastore access. I.e. there's a generic function to store an individual piece of data that is specialized for different connection/datastore types. Now, my parser had to prepare the individual pieces and, eventually, they would be fed to this function. But that may happen independently of the parser operation: when the data store commit is performed.

Yet, there was another issue at play: the data format allows the individual items to be interdependent, i.e. reference one another via an implicit reference. And when the data is persisted, due to the properties of the data store, these references should be changed to the internal ids of the referenced items. And those are not known before the commit happens. I.e. I was in the following situation:

  • my parser produces an array of items that are to be persisted to the dataset at some later time
  • the order of their addition matters as the dependent items should be added after the ones they reference
  • and as the referenced item is added its id should be saved
  • and assigned to a field of the dependent item before that item is also added

This program logic is quite normal, apart from the fact that my parser doesn't have control over the whole workflow. Actually, the data persistence stage operates in the inversion of control paradigm, i.e. I can only override (rather, augment) the part of the program that is responsible for processing an individual item. Needless to say, I had no desire or intention to reimplement all the different (I believe, 7) ways of interaction with the datastore that had their own methods plus a number of before/after/around-methods.

In fact, CLOS is very flexible and provides a way, using an object of my own mixin class to hold the state and an around-method specialized on it, to achieve my goal of fitting into this whole machinery without disturbing it or having to reimplement anything. If not for one issue: limited facilities for dynamic creation of classes.

So, here's what I wanted to do and to avoid:

  1. I wanted to define a mixin-class and an around-method for it to augment the data storing procedure with saving of the ids of specified items and assigning them to fields in other items before persisting them. Here's the sketch of the relevant code:
    
    (defclass my-data-store-mixin ()
      ((linked-items-table :reader my-mixin-table
                           :initform (make-hash-table))))
    
    (defmethod add-item :around ((db my-data-store-mixin) item)
      (let ((linked-items-table (my-mixin-table db))
            (item-id (call-next-method)))
        (dolist (it (gethash item linked-items-table))
          (remhash it linked-items-table)
          (setf (reference it) item-id))
        (remhash item linked-items-table)
        item-id)) 
    
  2. Yet, I didn't want this code to run when other data formats are imported, hence my mixin should have been "activated" if and only if my specific format is parsed.
  3. In other words, I needed a way to dynamically add this mixin to an existing connection object, in the context of the parser call, and then restore the connection to its previous state. In general, CLOS also provides such a facility with its change-class operator. I would say, this would have been a manifestation of a dynamic object system in all its glory if not for one deficiency.
  4. You can't just dynamically define a temporary class: the one that will inherit from the class of the current connection and my mixin. defclass is a macro that's expected to deal with names known ahead-of-time and coded as part of its call: it doesn't evaluate variables. Usually, all such APIs in Lisp have a make-function counterpart. I.e. what I needed was:
    
    (let ((temp-class (gensym))
          (current-db-class (class-of *db*)))
      (make-class temp-class (list (class-name current-db-class) 'my-data-store-mixin) nil)
      (unwind-protect (progn (change-class *db* temp-class)
                             ;; execute my code
                      )
        (change-class *db* current-db-class)))
    
    But CLOS just doesn't have an API for that. (Which might be perfectly reasonable — and I don't want to delve into the discussion of those reasons in this post). Actually, there's MOP for that. But I'd prefer not to take the MOP route here for another set of reasons I want to skip discussing now :) Suffice to say that it is complicated and, from my experience with the MOP, I developed a stance that it's another area intended for language implementation usage — not for user-level code.
  5. And here's where eval comes to the rescue. In place of the nonexistent make-class I could just put this piece:
    
    (let ((class (intern (format nil "my-mixed-~a" (class-name current-db-class)))))
      (when (not (find-class class nil))
        (eval `(defclass ,class (,(class-name (class-of *db*)) my-data-store-mixin) ()))))
    

So, eval is an escape hatch into the world of ultimate dynamism. This operator can add it anywhere: whether an appropriate API was left out due to lack of foresight or even when it was not intended to exist... :)
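
For comparison, the MOP route mentioned above (and deliberately avoided) would have looked roughly like this; a sketch using closer-mop's ensure-class, not the approach taken in the actual code:

;; ensure-class defines (or redefines) a class programmatically, so no eval
;; is needed; the superclasses are given here by name.
(let ((temp-class (intern (format nil "MY-MIXED-~a" (class-name (class-of *db*))))))
  (closer-mop:ensure-class
   temp-class
   :direct-superclasses (list (class-name (class-of *db*)) 'my-data-store-mixin)))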

2020-05-18

"Programming Algorithms" Book Gets its own Page

Recently, I was busy organizing the process of postal delivery of the paperback version of the book to all the interested people. Here, I have created a map showing everyone who has already ordered:

I'm glad to see the global geography of readership and all the positive feedback. Thanks a lot! Please, share your thoughts online :)

Finally, the book got its own webpage with all the relevant details.

2020-05-08

Dead-Tree Version of "Programming Algorithms"

I have finally obtained the first batch of the printed "Programming Algorithms" books and will shortly be sending them to the 13 people who asked for a hardcopy.

Here is a short video showing the book "in action":

If you also want to get a copy, here's how you do it:

  1. Send an email to vseloved@gmail.com with your postal address — I'll send you a Paypal money request.
  2. Once I see the donation, I'll go to the post office and send you the book.
  3. Optional step: if you want it to be signed, please indicate it in your letter.

Shipping details: As I said originally, the price of the dead-tree version will be $20 + shipping. I'll ship via the Ukrainian national post. You can do the fee calculation online here (book weight is 0.58 kg, size is 23 x 17 x 2 cm): https://calc.ukrposhta.ua/international-calculator. Alas, the interface is only in Ukrainian. According to the examples I've tried, the cost will be approximately $10-15. To make it easier, I've just settled on $10 shipping without a tracking number or $15 if you want a tracking number, regardless of your country. I don't know how long it will take - probably depends on the location (I'll try to inquire when sending).

The book has already been downloaded more than 1170 times (I'm not putting the exact number here as it's constantly growing little by little). I wish I knew how many people have actually read it in full or in part. I've also received some error corrections (special thanks goes to Serge Kruk), several small reviews, and letters of encouragement. Those were very valuable and I hope to see more :)

Greetings from the far away city of Lima, Peru!
I loved this part: "Only losers don’t comment their code, and comments will be used extensively"
Thank you so much for putting this comprehensive collection of highly important data structures, i'm already recommending this to two of my developers, which I hope i'll induce into my Lisp addiction.
--Flavio Egoavil

And here's another one:

Massively impressive book you've written! I've been a Lisp programmer for a long time and truly appreciate the work put in here. Making Lisp accessible for more people in relation to practical algorithms is very hard to do. But you truly made it. You'll definitely end up in the gallery of great and modern Lisp contributions like "Land of Lisp" and "Let Over Lambda". Totally agree with your path to focus on practical algorithmic thinking with Lisp and not messing it up with macros, oop and other advanced concepts.
--Lars Hård

Thanks guys, it's really appreciated!

If you feel the same or you've liked the book in some respect and have found it useful, please, continue to share news about it: that definitely helps attract more readers. And my main goal is to make it as widely read as possible...

2020-04-15

"Programming Algorithms" Book Freely Available

The book "Programming Algorithms (A comprehensive guide to writing efficient programs with examples in Lisp)" has been completed. It turned out to be more than 360 pages in a standard technical book format, with over 100k words (that's 2 NanoWriMos :). It covers more than 100 topics that made it to the TOC — but, actually, more. Phew, making it to this point was quite a challenge...

This book is, surely, not perfect. Hopefully, most of the mistakes in it were fixed with the help of many nice people who commented on the chapters as they were published on my blog.

Also, the book is terribly incomplete. Almost every chapter could be expanded by a factor of two or three with relevant details and concrete implementations of some of the general ideas that are currently presented. But neither did I have the time to write those down, nor, what's much more important, would anyone have had the time to read them in their entirety. I believe I have put enough concrete examples with executable code to illustrate all the important concepts in each part. This is a great advantage of using Lisp for the book: the code is clear and compact enough to serve both to explain the algorithms and to permit testing them for real, in the REPL. The main compromise each author has to make is between brevity and completeness. I hope that I made the right choices in this regard, but, for sure, there's much more to learn about every piece of technology mentioned. My hope is that the book lays a solid groundwork to facilitate further deeper exploration.

There are also a couple of topics that I would like to cover but couldn't find a good place for. Probabilistic data structures are the most important of them. Yet, they are not big enough to justify a separate chapter and, also, don't fit into any of the existing chapters.

But enough with the whining :) In fact, I'm quite satisfied with the end result as my main goal was to sufficiently develop the following key themes:

  • The main one, obviously, was the description of all the important data structures and the associated algorithms.
  • The next, also very important, was the demonstration of the essential tools that help in the development, testing, and verification of the produced algorithmic code: tracing, profiling, pretty-printing, etc.
  • We have also discussed, when it was relevant, the real-world engineering considerations and constraints that influence the programs using our algorithms. And sometimes these constraints have more impact than the purely theoretical complexity calculations.
  • Finally, in each chapter, I tried to present the practical use case of the algorithms we have studied, showing the broad variety of such applications. In fact, it spans all the different corners of the software landscape we're used to. We have talked, albeit briefly, about such different domains as neural networks, plagiarism detection, web search, mapping, chess-playing, image compression, and many others.

There are a lot of books on algorithms, but I haven't seen any that primarily aim to bridge the gap between theory and practice. This is one of the key distinctions of "Programming Algorithms". It is definitely not the best exposition of the theoretical ideas, but I hope that, instead, it builds sufficient understanding and skill, for the common developer, to start writing efficient algorithmic programs.

I wanted to finish the book with the following statement: programming craft is primarily about making choices. What approach to prefer, which algorithm to choose, what tradeoffs to make. And, at the other level, what properties to give more priority: speed or safety, brevity or consistency, space or debuggability, clarity or conciseness, and so on and so forth. Lisp is one of the few languages that are "pro-choice". Its authors understood very well the importance of freedom to make the critical choices, and it is felt in the design of the language. For instance, with the help of declaim we can even signal our preferences to the compiler, to some extent, at the level of a single file or even an individual form. (declaim (optimize (speed 3) (safety 1) (debug 0) (compilation-speed 0))) will ask the compiler to produce the fastest possible code. Yes, this language will not guard you against poor choices like some others claim to do. Sometimes, you're not wise enough to make a correct choice, but, much more often, every choice just has its pros and cons, so someone will approve of it and someone won't. And that's what freedom is about: ownership and responsibility. So, use Lisp if you liked it. And if you prefer other languages, I'd urge you to still take advantage of the concept of freedom of choice in programming. Don't be constrained by the prevailing paradigms and try to use the best parts of all the different approaches you know...

Acknowledgments

Finally, the most pleasant part. I'm very thankful to those who helped me in the work on "Programming Algorithms" by providing support, advice, corrections, and suggestions. First of all, many thanks to my wife Ksenya who encouraged me to work on it even though the time for that was, in part, taken from my family duties. Also, I am very grateful to Dr. Robert Strandh who humbly volunteered his help as an editor to make it sound more native (as my English is far from perfect since I'm not a native speaker) and to point out the mistakes that I made. He and his wife have contributed lots of improvements to more than half of the chapters, and I tried to follow their advice in the subsequent ones. Thanks to Rainer Joswig for commenting on the Lisp choices. Thanks to @dzenicv on reddit who posted links to almost all of the chapters there, which triggered some helpful discussions. Thanks to @tom_mellior on Hacker News for pointing out a serious deficiency in the explanation of Union-Find. Thanks to all those people who shared the links to the chapters and contributed their comments and attention.

If you've found the book helpful and interesting, you can also support its past and (potential) further development in several ways. First and foremost, you can share it with your friends, colleagues, and social networks. The book was made free and will remain free, as its main premise, for me, was to spread the knowledge gathered inside. Yet, you can also make a donation at Leanpub if you believe that it has brought you some value. Last but not least, if you find some way the book can be improved, don't hesitate to contact me.

Finally, I wanted to solicit reviews: if you've read the book and liked it, please, write a paragraph or two to let others know your opinion!

2020-03-30

Programming Algorithms: Synchronization

This is a snippet of the "Synchronization" chapter of the book.

This is the final chapter of the book, in which we will discuss the optimization of parallel computations: whether running concurrently on a single machine in a shared-memory setting or in a distributed shared-nothing environment. This is a huge topic that spans synchronization itself, parallelization, concurrency, distributed computations, and the functional approach. And every senior software developer should be well-versed in it.

Usually, synchronization is studied in the context of system or distributed programming, but it has a significant algorithmic footprint and is also one of the hottest topics for new algorithm research. In fact, there are whole books that concentrate on it, but, usually, they attack the problem from other angles, not focusing on the algorithmic part. This chapter will be more algorithm-centered, although it will also present an overview of the problem space. So that, in the end, you'll have a good foundation to explore the topic further if a desire or need for that appears.

Let's start from the basics. In the previous chapters of the book we, mainly, viewed algorithms as single computations running without external interruptions. This approach, obviously, removes the unnecessary complexity, but it also isn't totally faithful to reality. Most of the programs we deal with, now, run in multiprocessing environments (sometimes, even distributed ones), and even when they don't utilize these capabilities, those are still available, they sometimes have an impact, and, besides, might have improved the performance of the programs had they been utilized. The majority of the backend stuff, which, currently, is comprised of services running in the datacenters, is multithreaded. There's a notorious "Zawinski's Law" that states that every program attempts to expand until it can read mail, and those programs which cannot so expand are replaced by ones which can. Besides being a good joke, it also reflects an important truth about the tendency of all programs, over time, to become network-aware, and thus distributed to at least some extent.

There are two principally different types of environments in which the programs that need synchronization run: shared-memory and shared-nothing ones.

In a shared-memory setting, there exists some shared storage (not necessarily, RAM) that can be directly accessed by all the threads of the application. Concurrent access to data in this shared memory is the principal source of the synchronization challenges, although not the only one. The example of a shared-memory program is a normal application that uses multithreading provided either directly by the OS or, more frequently, by the language runtime.

The opposite of shared-memory is a shared-nothing environment, in which all threads don't have any common data storage and can coordinate only by sending messages directly to other processes. The contents of the messages have to be copied from the memory of the sender to the receiver. In this setting, some of the synchronization problems disappear, but others still remain. At the fundamental level, some synchronization or coordination still needs to happen. From a performance standpoint, however, the shared-nothing mode is, usually, inferior due to the need for additional data copying. So, both paradigms have their place and the choice, which one to utilize, depends on the context of a particular task.

The main goal of synchronization is ensuring program correctness when multiple computations are running in parallel. The other side of the coin is achieving optimal performance, which is also addressed by parallelization that we have somewhat discussed in a couple of prior chapters. Prioritizing performance over correctness, although tempting, is one of the primary sources of bugs in concurrent systems. A trivial example would be building a shared-memory program without explicit use of any synchronization mechanisms. It is, definitely, the most performant approach, but non-coordinated access to the shared data will inevitably result in failures like data corruption.
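
As a trivial illustration (not from the book), here is what such non-coordinated access can look like with the bordeaux-threads library; the final count will vary between runs and implementations:

;; Two threads increment a shared counter without any synchronization.
;; incf is a read-modify-write operation, so increments from the two
;; threads may interleave and get lost: the result is often below 200000.
(defparameter *counter* 0)

(let ((threads (loop :repeat 2 :collect
                     (bt:make-thread (lambda ()
                                       (dotimes (i 100000)
                                         (incf *counter*)))))))
  (mapc 'bt:join-thread threads)
  *counter*)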

More details about the book may be found on its website.

2020-02-19

Programming Algorithms: Compression

This is a snippet of the "Compression" chapter of the book.

Compression is one of the tools that every programmer should understand and wield confidently. Situations when the size of the dataset is larger than the program can handle directly, so that it becomes a bottleneck, are quite frequent and can be encountered in any domain. There are many forms of compression, yet the most general subdivision is between lossless compression, which preserves the original information intact, and lossy compression, which discards some information (assumed to be the most useless part or just noise). Lossless compression is applied to numeric or text data, whole files or directories — the data that will become partially or utterly useless if even a slight modification is made. Lossy compression, as a rule, is applied to data that originates in the "analog world": sound or video recordings, images, etc. We touched on the subject of lossy compression slightly in the previous chapter when talking about such formats as JPEG. In this chapter, we will discuss the lossless variants in more detail. Besides, we'll talk a bit about other, non-compressing, forms of encoding.

Encoding

Let's start with encoding. Lossless compression is, in fact, a form of encoding, but there are other, simpler forms. And it makes sense to understand them before moving to compression. Besides, encoding itself is a fairly common task. It is the mechanism that transforms the data from an internal representation of a particular program into some specific format that can be recognized and processed (decoded) by other programs. What we gain is that the encoded data may be serialized and transferred to other computers and decoded by other programs, possibly, independent of the program that performed the encoding.

Encoding may be applied to different semantic levels of the data. Character encoding operates on the level of individual characters or even bytes, while various serialization formats deal with structured data. There are two principal approaches to serialization: text-based and binary. Their pros and cons are mirror images of each other: text-based formats are easier to handle by humans but are usually more expensive to process, while binary variants are not transparent (and so, much harder to deal with) but much faster to process. From the point of view of algorithms, binary formats are, obviously, better. But my programming experience is that reaching for them upfront is a severe form of premature optimization. The rule of thumb should be to always start with text-based serialization and move to binary formats only as a last resort, when it has been proven that the impact on program performance is significant and important.

Base64

Encoding may have both a reduction and a magnification effect on the size of the data. For instance, there's a popular encoding scheme — Base64. It is a byte-level (lowest level) encoding that doesn't discriminate between different input data representations and formats: the encoder just takes a stream of bytes and produces another stream of bytes. Or, more precisely, bytes in the specific range of English ASCII letters, numbers, and three more characters (usually, +, /, and =). This encoding is often used for transferring data on the Web, in conjunction with SMTP (MIME), HTTP, and other popular protocols. The idea behind it is simple: split the data stream into sextets (6-bit parts — there are 64 different variants of those), and map each sextet to an ASCII character according to a fixed dictionary. As the last byte of the original data may not align with the last sextet, an additional padding character (=) is used to indicate 2 (=) or 4 (==) misaligned bits. As we can see, Base64 encoding increases the size of the input data by a factor of 4/3 (about 1.33): every 3 input bytes become 4 output characters.

Here is one of the ways to implement a Base64 serialization routine:

(defparameter *b64-dict*
  (coerce (append (loop :for ch :from (char-code #\A) :to (char-code #\Z)
                        :collect (code-char ch))
                  (loop :for ch :from (char-code #\a) :to (char-code #\z)
                        :collect (code-char ch))
                  (loop :for ch :from (char-code #\0) :to (char-code #\9)
                        :collect (code-char ch))
                  '(#\+ #\/ #\=))
          'simple-vector))

(defun b64-encode (in out)
  (let ((key 0)
        (limit 6))
    (flet ((fill-key (byte off beg limit)
             (:= (ldb (byte limit off) key)
                 (ldb (byte limit beg) byte))
             (:= off (- 6 beg)))
           (emit1 (k)
             (write-byte (char-code (svref *b64-dict* k)) out)))
      (loop :for byte := (read-byte in nil) :while byte :do
        (let ((beg (- 8 limit)))
          (fill-key byte 0 beg limit)
          (emit1 key)
          (fill-key byte (:= limit (- 6 beg)) 0 beg)
          (when (= 6 beg)
            (emit1 key)
            (:= limit 6))))
      (when (< limit 6)
        (:= (ldb (byte limit 0) key)
            (ldb (byte limit 0) 0))
        (emit1 key)
        (loop :repeat (ceiling limit 2) :do
          (emit1 64))))))

This is one of the most low-level pieces of Lisp code in this book. It could be written in a much more high-level manner: utilizing the generic sequence access operations, say, on bit-vectors, instead of the bit-manipulating ones on numbers. However, it would also be orders of magnitude slower due to the need to constantly "repackage" the bits, converting the data from integers to vectors and back. I also wanted to show a bit of bit fiddling, in Lisp. The standard, in fact, defines a comprehensive vocabulary of bit manipulation functions and there's nothing stopping the programmer from writing performant code operating at a single bit level.

One important choice made for Base64 encoding is the usage of streams as the input and output. This is a common approach to such problems based on the following considerations:

  • It is quite easy to wrap the code so that we could feed/extract strings as inputs and outputs. Doing the opposite, and wrapping string-based code for stream operation, is also possible, but it defeats the whole purpose of streams, which is...
  • Streams allow us to efficiently handle data of any size and not waste memory, as well as CPU, for storing intermediary copies of the strings we're processing. Encoding a huge file is a good illustration of why this matters: with streams, we do it in an obvious manner: (with-open-file (in ...) (with-out-file (out ...) (b64-encode in out))). With strings, however, it will mean, first, reading the file contents into memory — and we may not even have enough memory for that. And, after that, filling another big chunk of memory with the encoded data. Which we'll still, probably, need to either dump to a file or send over the network.

So, what happens in the code above? First, the bytes are read from the binary input stream in, then each one is split into 2 parts. The higher bits are set into the current Base64 key, which is translated, using *b64-dict*, into an appropriate byte and emitted to the binary output stream out. The lower bits are deposited into the higher bits of the next key in order to use this leftover during the processing of the next byte. However, if the leftover from the previous byte was 4 bits, at the current iteration we will get 2 Base64 bytes, as the first will use 2 bits from the incoming byte and the second will consume its remaining 6 bits. This case is addressed in the code block (when (= 6 beg) ...). The function relies on the standard Lisp ldb operation, which provides access to the individual bits of an integer. It uses the byte-spec (byte size position) to select the bits it wants to obtain.
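
If ldb is new to you, here is a quick REPL illustration using the char code of #\M (77), the first byte from the trace below:

CL-USER> (ldb (byte 6 2) 77)  ; the 6 higher bits: the first sextet
19
CL-USER> (ldb (byte 2 0) 77)  ; the 2 lower bits: the leftover for the next sextet
1
CL-USER> (let ((key 0))
           ;; deposit the leftover into the higher bits of a fresh key
           (:= (ldb (byte 2 4) key) (ldb (byte 2 0) 77))
           key)
16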

Implementing a decoder procedure is left as an exercise to the reader...

Taking the example from the Wikipedia article, we can see our encoding routine in action (here, we also rely on the FLEXI-STREAMS library to work with binary in-memory streams):

CL-USER> (with-input-from-string (str "Man i")
           (let ((in (flex:make-in-memory-input-stream 
                      (map 'vector 'char-code
                           (loop :for ch := (read-char str nil) :while ch :collect ch))))
                 (out (flex:make-in-memory-output-stream)))
             (b64-encode in out)
             (map 'string 'code-char (? out 'vector))))
"TWFuIGk="

This function, although it's not big, is quite hard to debug due to the need for careful tracking and updating of the offsets into both the current Base64 chunk (key) and the byte being processed. What really helps me tackle such situations is a piece of paper that serves to record several iterations with all the relevant state changes. Something along these lines:

        M (77)    |    a (97)     |    n (110)
   0 1 0 0 1 1 0 1|0 1 1 0 0 0 0 1|0 1 1 0 1 1 1 0
0: 0 1 0 0 1 1    |               |                 19 = T
               0 1|               |
1:             0 1|0 1 1 0        |                 22 = W
                  |        0 0 0 1|
2:                |        0 0 0 1|0 1               5 = F

Iteration 0:

beg: 2
off: 0
limit: 6

beg: 0
off: (- 6 2) = 4
limit: 2


Iteration 1:

beg: 4
off: 0
limit: 4

beg: 0
off: (- 6 4) = 2
limit: 4

Another thing that is indispensable, when coding such procedures, is the availability of reference examples of the expected results, like the ones in Wikipedia. The Lisp REPL makes it very easy to iterate on a solution and constantly recheck the results against such available data. However, sometimes, it makes sense to reject the transient nature of code in the REPL and record some of the test cases as unit tests. As the motto of my test library SHOULD-TEST declares: you should test even Lisp code sometimes :) The tests also help the programmer to remember and systematically address the various corner cases. In this example, one of the special cases is the padding at the end, which is handled in the code block (when (< limit 6) ...). Due to the availability of a clear spec and reference examples, this algorithm lends itself very well to automated testing. As a general rule, all code paths should be covered by the tests. If I were to write those tests, I'd start with the following simple version. They address all 3 variants of padding and also the corner case of an empty string.

(deftest b64-encode ()
  ;; B64STR would be the function wrapped over the REPL code presented above
  (should be blankp (b64str ""))
  (should be string= "TWFu" (b64str "Man"))
  (should be string= "TWFuIA==" (b64str "Man "))
  (should be string= "TWFuIGk=" (b64str "Man i")))

Surely, many more tests should be added to a production-level implementation: to validate operation on non-ASCII characters, handling of huge data, etc.

More details about the book may be found on its website.

2020-02-10

prj-nlp v.3

Together with Mariana Romanyshyn, we have launched the third iteration of the course on processing human written language (or however one should best translate NLP :-p) at Projector. I wanted to share some details about it, since we enjoy the course a lot and, of course, want its level to keep rising every year. And for that, prospective students need to know about it and understand how it is organized.

The course is a three-month intensive that aims to train a specialist capable of solving NLP problems of any complexity independently and well, both alone and in a team. To get the most out of it, a certain background is required. As a rule, programmers, unsurprisingly, perform best. But there are exceptions: we gladly accept linguists, journalists, and, in general, specialists from other fields, provided they have sufficient programming skills to write programs on their own, set up a development environment that is comfortable for them, and understand basic algorithmic concepts. ML knowledge is not required for the course, although having some idea of it is desirable. The course is structured so that we alternate between NLP topics and the ML areas related to them. Of course, this doesn't mean that a graduate will be an expert in machine learning, but they will get the necessary base for going deeper into that field.

The second precondition for success in the course is having enough time. We say that the necessary minimum is 10 hours of independent work per week, plus 5 hours in class. In other words, counting the commute, that is already half of a full-time job. Some people, of course, may need even more time of their own. Besides, the brain will be heavily loaded with new topics, so for the duration of the course one has to give up other side projects, hobbies, and the like. It also doesn't work out well if more than a week in a row is lost for some external reason: illness, business trips, weddings, the birth of children... :)

How is the course organized? We gather a group of 15 students and meet twice a week: one session is theoretical, on Thursday evening, devoted to working through a particular topic; the other is practical, on Saturday, where we show an example of solving a problem on that topic and program it together. In most cases, this program becomes the basis for solving a more advanced but similar problem given as homework. Accordingly, we have 12 weeks of core work, i.e. 12 topics and about 10 full-fledged homework projects on the level of building a sentiment analysis, fact-checking, or syntactic parsing system. Of course, given the limited time, each project is done within a certain restricted domain.

The course is split into 3 parts:

  • the first month is very thorough preparatory work: the basics of structural linguistics, working with data, metrics, proper experiments, the rule-based approach. By the end of the month, the student should have a well-structured picture of how to properly approach NLP problems. The other outcome of this part is a formulated topic for the course project and the beginning of work on it: the first data collected, a metric defined, a plan of experiments drawn up
  • the second month is an immersion into classic NLP, with a parallel treatment of the most widespread ML techniques used in it. By the end of the month, after solving almost a dozen NLP problems in lectures, practice sessions, and at home, the students should have developed the skills to apply these techniques on their own in real projects. And the main part of the course project should be done
  • the last month is deep learning in NLP. We warn upfront that this course does not aim to cover as much of the hottest and most groundbreaking material as possible: there are plenty of other venues for that. We want to build a systematic understanding of NLP, with all of its 70-year history, because that history contains a lot of useful and, perhaps, timeless things. So we get to the state-of-the-art only towards the end (and at the last session an invited lecturer talks about something from the bleeding edge :) Still, we work through the fundamental DL-related topics both in class and within the course project. Those students who are interested in this particular area end up training a dozen neural networks in their sandboxes by the end of the course and also try out deep learning in their course projects. Admittedly, we can't boast impressive quality results here, since the couple of weeks available, at most, for any given task is not enough: training deep models requires a lot of both computational resources and time. But we do learn how to approach it

As you can see from this description, we pay a great deal of attention to the course project and nudge the work on it forward every week. As a result, most students end up with quite decent and interesting things, and more than 70% of them reach the finish line with a complete, high-quality project (an unheard-of share for other courses I have taken part in). Some of the projects even make it into the wider world: some are related to people's jobs, some to their hobbies. Over 2 years we have had 2 data journalism studies, a project analyzing drug interactions with each other and with chronic diseases (based on processing package inserts), and a natural-language search system for a social conference app. There has also been a number of interesting personal projects that achieved great quality and were made with love. Students present all of this after the course ends at a big final party at the Grammarly office.

One of the main goals of this course, for us, is to grow the Ukrainian NLP community, since we have essentially never had a school of computational linguistics. We hope to contribute to its formation together with other progressive educational projects in this area, in particular the Data Science master's program at UCU. The course already has more than 30 graduates who have joined the closed prj-nlp-alumni club, where we share interesting things and opportunities and also plan to meet up periodically in an informal setting, not only at events. So we hope to expand this club by another half in June this year :)

P.S. Speaking of UCU. I also take part as a lecturer and thesis mentor in the NLP course of their program. It is a somewhat different experience than this course. UCU, of course, offers a more academic program that runs for a longer time. Students there get a good and, importantly, systematic grounding in ML and DL, so the NLP course doesn't have to spend any time on that. On the other hand, that course is shorter and is taught by several lecturers, so it is harder to form a coherent picture within it, and there is no way to arrange the same level of immersion and concentration as at the Projector course. Then again, they have more time for the master's thesis than our entire course takes. Most importantly, though, both places attract well-prepared and motivated students, so the results at UCU also turn out to be of high quality, with a large number of interesting works, some of which are at the level of papers at the top scientific conferences in the field. Personally, I still prefer the Projector format, as it lets you feel the spirit of intensive teamwork for three months, breathing a sigh of relief at the end in anticipation of a nine-month break and a new iteration...

2020-01-11

Programming Algorithms: Approximation

This is a snippet of the "Approximation" chapter of the book.

This chapter will be a collection of stuff from somewhat related but still distinct domains. What unites them is that all the algorithms we will discuss are, after all, targeted at calculating approximations to some mathematical functions. There are no advanced data structures involved, nor is the aim to find a clever way to improve the runtime of some common operations. No, these algorithms are about calculations and computing an acceptable result within the allocated time budget.

Combinatorial Optimization

Dynamic Programming is a framework that can be used for finding the optimal value of some loss function when there are multiple configurations of the problem space that result in different values. Such search is an example of discrete optimization for there is a countable number of states of the system and a distinct value of the cost function we're optimizing corresponding to each state. There are also similar problems that have an unlimited and uncountable number of states, but there is still a way to find a global or local optimum of the cost function for them. They comprise the continuous optimization domain. Why is optimization not just a specialized area relevant to a few practitioners but a toolbox that every senior programmer should know how to utilize? The primary reason is that it is applicable in almost any domain: the problem just needs to be large enough to rule out simple brute force. You can optimize how the data is stored or how the packets are routed, how the blueprint is laid out or the servers are loaded. Many people are just not used to looking at their problems this way. Also, understanding optimization is an important prerequisite for having a good grasp of machine learning, which is revolutionizing the programming world.

DP is an efficient and, overall, great optimization approach, but it can't succeed if the problem doesn't have an optimal substructure. Combinatorial Optimization approaches deal with finding a near-optimum for the problems where an exhaustive search requires O(2^n) computations. Such problems are called NP-hard and a classic example of those is the Travelling Salesman Problem (TSP). The task is to find an optimal order of edges in a cycle spanning all vertices of a fully-connected weighted graph. As we saw previously, this problem doesn't have an optimal substructure, i.e. an optimal partial solution isn't necessarily a part of the best overall one, so taking the shortest edge doesn't allow the search procedure to narrow down the search space when looking at the next vertex. A direct naive approach to TSP would enumerate all the possible variants and select the one with a minimal cost. However, the number of variants is n!, so this approach becomes intractable very fast. A toy example of visiting all the capitals of the 50 US states has 10^64 variants. This is where quantum computers promise to overturn the situation, but while we're waiting for them to mature, the only feasible approach is developing approximation methods that will get us a good enough solution in polynomial (ideally, linear) time. TSP may look like a purely theoretical problem, but it has some real-world applications: besides vehicle routing, automated drilling and soldering in electronics is another example. Yet, even more important is that there are many other combinatorial optimization problems, and, in essence, the approaches to solving one of them apply to all the rest. I.e., like with shortest path, coming up with an efficient solution to TSP allows us to efficiently solve a very broad range of problems over a variety of domains.
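
As an aside, that n! figure is easy to verify in the REPL, since Lisp's bignums handle it natively; counting the decimal digits of 50! gives:

CL-USER> (length (princ-to-string (reduce '* (loop :for i :from 1 :to 50 :collect i))))
65

65 digits, i.e. on the order of 10^64.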

So, let's write down the code for the basic TSP solution. As usual, we have to select the appropriate graph representation. From one point of view, we're dealing with a fully-connected graph, so every representation will work and a matrix one will be the most convenient. However, storing an n^2-sized array is not the best option, especially for a large n. A better "distributed" representation might be useful here. Yet, for the TSP graph, an even better approach would be to do the opposite of our usual optimization trick: trade computation for storage space. When the graph is fully-connected, there usually exists some kind of an underlying metric space that contains all the vertices. The common example is the Euclidean space, in which each vertex has a coordinate (for example, the latitude and longitude). Anyway, whichever way of representing the vertex position is used, the critical requirement is the existence of a metric that may be calculated at any time (and fast). Under such conditions, we don't have to store the edges at all. So, our graph will be just a list of vertices.

Let's use the example with the US state capitals. Each vertex will be represented as a pair of floats (lat & lon). We can retrieve the raw data from the Wikipedia article about the US capitols (with an 'o') and extract the values we need with the following code snippet[1], which cuts a few corners:

(defstruct city
  name lat lon)

(defparameter *wp-link* "https://en.wikipedia.org/w/index.php?title=List_of_state_and_territorial_capitols_in_the_United_States&action=edit&section=1")

(defparameter *cs*
  (with ((raw (drakma:http-request *wp-link*))
         (coords-regex (ppcre:create-scanner "\\{\\{coord\\|(\\d+)\\|(\\d+)\\|([.\\d]+)\\|.\\|(\\d+)\\|(\\d+)\\|([.\\d]+)\\|.\\|type"))
         (capitals (list)))
    (flet ((dms->rad (vec off)
             (* (/ pi 180)
                (+     (? vec (+ off 0))
                    (/ (? vec (+ off 1)) 60)
                    (/ (? vec (+ off 2)) 3600)))))
      (dolist (line (split #\Newline (slice raw
                                            (search "{| class=\"wikitable sortable\"" raw)
                                            (search "</textarea><div class='editOptions'>" raw))))
        (when-it (and (starts-with "|" line)
                      (search "{{coord" line))
          (with ((_ coords (ppcre:scan-to-strings coords-regex line))
                 (coords (map* 'read-from-string coords)))
            (push (make-city :name (slice line (position-if 'alpha-char-p line)
                                          (position-if (lambda (ch) (member ch '(#\] #\|)))
                                                       line :start 1))
                           :lat (dms->rad coords 0)
                           :lon (dms->rad coords 3))
                capitals)))))
    (coerce capitals 'vector)))

CL-USER> (length *cs*)
50

We also need to define the metric. The calculation of distances on Earth, though, is not as straightforward as on a plane. Usually, as a first approximation, the haversine formula is used: it provides an estimate of the shortest distance over the surface "as the crow flies" (ignoring the relief).
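
For reference, here is the standard statement of that formula, in the form implemented by the code below (with $\varphi$ for latitudes, $\lambda$ for longitudes, and $R$ for the Earth's radius):

$$a = \sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right), \qquad d = 2R \cdot \operatorname{atan2}\left(\sqrt{a}, \sqrt{1-a}\right)$$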

(defun earth-dist (c1 c2)
  (with ((lat1 (? c1 'lat))
         (lat2 (? c2 'lat))
         (a (+ (expt (sin (/ (- lat2 lat1) 2))
                     2)
               (* (cos lat1)
                  (cos lat2)
                  (expt (sin (/ (- (? c2 'lon) (? c1 'lon)) 2)) 
                        2)))))
    (* 1.2742e7  ; Earth diameter
       (atan (sqrt a) (sqrt (- 1 a)))))) 

With the metric at our disposal, let's define the function that will calculate the length of the whole path and use it for a number of random paths (we'll use the RUTILS function shuffle to produce a random path).

(defun path-length (path)
  (let ((rez (earth-dist (? path 0) (? path -1))))
    (dotimes (i (1- (length path)))
      (:+ rez (earth-dist (? path i) (? path (1+ i)))))
    rez))

CL-USER> (path-length *cs*)
9.451802301259182d7
CL-USER> (path-length (shuffle *cs*))
9.964776273250546d7
CL-USER> (path-length (shuffle *cs*))
1.009761841183094d8

We can see that an average random path has a length of around 100k kilometers. However, we don't know anything about the shortest or the longest one, and to find out reliably, we would have to evaluate 50! paths... Yet, as we accept the sad fact that this is not possible with our current technology, it's not time to give up yet. Yes, we may not be able to find the absolute best path, but at least we can try to improve on the random one. Already, the three previous calculations had a spread of about 5%. So, if we're lucky, maybe we could hit a better path purely by chance. Let's try a thousand paths using our usual argmin pattern:

(defun random-search (path n)
  (let ((min (path-length path))
        (arg path))
    (loop :repeat n :do
      (with ((path (shuffle path))
             (len (path-length path)))
        (when (< len min)
          (:= min len
              arg path))))
    (values arg
            min)))

CL-USER> (:= *print-length* 2)
2
CL-USER> (random-search *cs* 1000)
(#S(CITY :NAME "Atlanta" :LAT 0.5890359059538811d0 ...)
 #S(CITY :NAME "Montpelier, Vermont" :LAT 0.772521512027179d0 ...) ...)
7.756170773802838d7

OK, we've got a sizable 20% improvement. What about 1,000,000 combinations?

CL-USER> (time (random-search *cs* 1000000))
Evaluation took:
  31.338 seconds of real time
...
(#S(CITY :NAME "Boise, Idaho" :LAT 0.7612723873453388d0 ...)
 #S(CITY :NAME "Helena, Montana" :LAT 0.813073800024579d0 ...) ...)
6.746660953705506d7

Cool, another 15%. Should we continue increasing the size of the sample? Maybe, after a day of computations, we could get the path length down by another 20-30%. And that's already a good gain. Surely, we could also parallelize the algorithm or use a supercomputer in order to analyze many more variants. But there should be something smarter than simple brute force, right?
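
Just to illustrate how cheap the "parallelize" option is: the random tries are fully independent, so we can split them among several workers and keep the best result. Here is a minimal sketch, assuming the lparallel library is available (the worker count of 4 and the function name are arbitrary choices):

(defun parallel-random-search (path n &optional (workers 4))
  "Split N random tries among WORKERS independent searches
   and return the best path found and its length."
  (let ((lparallel:*kernel* (lparallel:make-kernel workers)))
    (unwind-protect
         (let ((rezs (lparallel:pmapcar
                      (lambda (tries)
                        ;; note: for real use, each worker should probably
                        ;; get its own *random-state*
                        (multiple-value-list (random-search path tries)))
                      (loop :repeat workers :collect (floor n workers)))))
           (apply 'values (first (sort rezs '< :key 'second))))
      (lparallel:end-kernel))))

With 4 cores, this should cut the wall-clock time of the million-shuffle experiment roughly proportionally, although the quality of the result, on average, stays the same.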

More details about the book may be found on its website.