The updated version of my book "Programming Algorithms" has been released by Apress recently. It has undergone a number of changes that I want to elaborate on in this post.

But first, I'd like to thank all the people who contributed to the book or supported my work on it in other ways. It was an honor for me to be invited to Apress, as "Practical Common Lisp", published by them a decade ago, was my one-way ticket to the wonderful world of Lisp. Writing "Programming Algorithms" was, in a way, an attempt to give something back. I was also very curious to see how the cooperation with the publisher would go. And I can say that they have done a very professional job and helped significantly improve the book through the review process. That 5-10% contributed by the editors, although it may seem insignificant, is essential to bring any book up to a high standard. Unfortunately, I am not a person who can produce a flawless result at once, so help with correcting those flaws is very valuable. Part of the gratitude, surely, also goes to the many readers who have sent in their suggestions.

I was very pleased that Michał "phoe" Herda agreed to become the technical reviewer. He found a number of bugs and suggested lots of improvements, of which I could implement maybe just a third. Perhaps the rest will go into the second edition :)

Now, let's speak about some of those additions to Programming Algorithms in Lisp.

## Curious Fixes

First of all, all the executable code from the book was published in a github repo (and also republished to the official Apress repo). As suggested by Michał, I have added automated tests to ensure (for now partially, but we plan to make the test suite all-encompassing) that everything compiles and runs correctly. Needless to say, some typos and other issues were found in the process, especially ones connected with handling different corner cases. So, if you have trouble running some code from the book, you can use the github version. Funnily enough, I got into a similar situation recently when I tried to utilize the dynamic programming example in writing a small tool for aligning the outputs of different ASR systems and found a bug in it. The bug was in the matrix initialization code:

```
- (dotimes (k (1+ (length s1))) (setf (aref ld k 0) 0))
- (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) 0)))
+ (dotimes (k (1+ (length s1))) (setf (aref ld k 0) k))
+ (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) k)))
```
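To make the effect of that fix concrete, here is a minimal edit-distance sketch with the corrected initialization (an illustration of the same technique, not the exact code from the book):

```
;; Minimal Levenshtein distance with the corrected initialization:
;; row 0 and column 0 hold the distances K (the cost of K insertions
;; or deletions against an empty string), not 0.
(defun levenshtein (s1 s2)
  (let ((ld (make-array (list (1+ (length s1)) (1+ (length s2))))))
    (dotimes (k (1+ (length s1))) (setf (aref ld k 0) k))
    (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) k))
    (dotimes (i (length s1))
      (dotimes (j (length s2))
        (setf (aref ld (1+ i) (1+ j))
              (min (1+ (aref ld i (1+ j)))       ; deletion
                   (1+ (aref ld (1+ i) j))       ; insertion
                   (+ (aref ld i j)              ; match/substitution
                      (if (char= (char s1 i) (char s2 j)) 0 1))))))
    (aref ld (length s1) (length s2))))
```

With the buggy all-zero initialization, `(levenshtein "abc" "")` would return 0 instead of the correct 3, which is exactly the kind of corner case the alignment tool tripped over.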

Another important fix that originated from the review process touched not only the book but also the implementation of the `slice` function in RUTILS! It turned out that I had naively assumed that displaced arrays would automatically point into the original array recursively, and thus, inadvertently, created the possibility of `O(n)` slice performance instead of `O(1)`. This explains the strange performance of the array sorting algorithms at the end of Chapter 5. After fixing `slice`, the measurements came to perfectly resemble the theoretical expectations! Also, the performance improved by an order of magnitude :D
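The essence of the fix can be sketched like this (a simplified, hypothetical `my-slice`, not the actual RUTILS code): instead of displacing into the slice argument itself, which, for a slice of a slice, makes every `aref` walk a chain of displacements, we resolve the displacement once and displace straight into the underlying array.

```
;; Simplified sketch of the O(1) slice idea. If VEC is itself a
;; displaced array, ARRAY-DISPLACEMENT gives us its target and offset,
;; so the new slice can point directly at the underlying storage
;; instead of stacking another level of indirection on top.
;; (One level of resolution suffices as long as slices are always
;; created through this function.)
(defun my-slice (vec start &optional (end (length vec)))
  (multiple-value-bind (target offset) (array-displacement vec)
    (if target
        (make-array (- end start)
                    :displaced-to target
                    :displaced-index-offset (+ offset start))
        (make-array (- end start)
                    :displaced-to vec
                    :displaced-index-offset start))))
```

A slice of a slice created this way displaces into the original vector, so element access stays constant-time no matter how deeply slicing is nested.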

```
CL-USER> (let ((vec (random-vec 10000)))
           (print-sort-timings "Insertion " 'insertion-sort vec)
           (print-sort-timings "Quick" 'quicksort vec)
           (print-sort-timings "Prod" 'prod-sort vec))
= Insertion sort of random vector (length=10000) =
Evaluation took:
  0.632 seconds of real time
...
= Insertion sort of sorted vector (length=10000) =
Evaluation took:
  0.000 seconds of real time
...
= Insertion sort of reverse sorted vector (length=10000) =
Evaluation took:
  1.300 seconds of real time
...
= Quicksort of random vector (length=10000) =
Evaluation took:
  0.039 seconds of real time
...
= Quicksort of sorted vector (length=10000) =
Evaluation took:
  1.328 seconds of real time
...
= Quicksort of reverse sorted vector (length=10000) =
Evaluation took:
  1.128 seconds of real time
...
= Prodsort of random vector (length=10000) =
Evaluation took:
  0.011 seconds of real time
...
= Prodsort of sorted vector (length=10000) =
Evaluation took:
  0.011 seconds of real time
...
= Prodsort of reverse sorted vector (length=10000) =
Evaluation took:
  0.021 seconds of real time
...
```

Also, there were missing or excess closing parens in a few code blocks. This probably resulted from incorrectly copying the code from the REPL after finishing experimenting with it. :)

### New Additions

I have also added more code to complete the full picture, so to speak, in several parts where it was lacking from the reviewers' point of view. Most of the new additions went into expanding the "In Action" sections where possible. Still, unfortunately, some parts remain at the level of a general explanation of the solution, as it was not possible to include whole libraries of code in the book. You can see a couple of snippets below:

#### Binary Search in Action: a Fast Specialized In-Memory DB

We can outline the operation of such a datastore with the following key structures and functions.

A dictionary `*dict*` will be used to map words to numeric codes. (We'll discuss the hash-tables that are employed for such dictionaries several chapters later. For now, it will be sufficient to say that we can get the index of a word in our dictionary with `(rtl:? *dict* word)`.) The number of entries in the dictionary will be around 1 million.

All the ngrams will be stored alphabetically sorted in 2-gigabyte files with the following naming scheme: `ngram-rank-i.bin`. `rank` is the ngram word count (we were specifically using ngrams of ranks from 1 to 5) and `i` is the sequence number of the file. The contents of the files will constitute alternating ngram indices and their frequencies. The index for each ngram will be a vector of 32-bit integers with a length equal to the rank of the ngram. Each element of this vector will represent the index of the word in `*dict*`. The frequency will also be a 32-bit integer.

All these files will be read into memory. As the structure of the file is regular (each ngram corresponds to a block of `(1+ rank)` 32-bit integers), it can be treated as a large vector.
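Under this layout, locating a block within the flat vector is simple index arithmetic. The following helpers are hypothetical (they don't appear in the book's code), but they show the addressing scheme:

```
;; Each ngram occupies (1+ rank) 32-bit cells: RANK word codes
;; followed by the frequency, so block I starts at I * (1+ rank).
(defun block-start (i rank)
  (* i (1+ rank)))

;; The frequency is the last cell of the block.
(defun block-freq (vec i rank)
  (aref vec (+ (block-start i rank) rank)))
```

For example, for rank 2, the file vector `#(10 20 7 10 21 3)` holds two blocks: the bigram with codes (10 20) and frequency 7, and the bigram with codes (10 21) and frequency 3.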

For each file, we know the codes of the first and last ngrams. Based on this, a top-level index will be created to facilitate efficiently locating the file that contains a particular ngram.

Next, binary search will be performed directly on the contents of the selected file. The only difference from regular binary search is that the comparisons need to be performed `rank` times: once for each 32-bit code.

A simplified version of the main function `get-freq`, intended to retrieve the ngram frequency for ranks 2-5, will look something like this:

```
(defun get-freq (ngram)
  (rtl:with ((rank (length ngram))
             (codes (ngram-codes ngram))
             (vec index found?
                  (bin-search codes
                              (ngrams-vec rank codes)
                              :less 'codes<
                              :test 'ngram=)))
    (if found?
        (aref vec rank)
        0)))
```

where

```
(defun ngram-codes (ngram)
  (map-vec (lambda (word) (rtl:? *dict* word))
           ngram))

(defun ngrams-vec (rank codes)
  (loop :for ((codes1 codes2) ngrams-vec) :across *ngrams-index*
        :when (and (<= (aref codes1 0) (aref codes 0))
                   (codes< codes codes2 :when= t))
          :do (return ngrams-vec)))

(defun codes< (codes1 codes2 &key when=)
  (dotimes (i (length codes1)
              ;; this will be returned when all
              ;; corresponding elements of the codes are equal
              when=)
    (cond ((< (aref codes1 i)
              (aref codes2 i))
           (return t))
          ((> (aref codes1 i)
              (aref codes2 i))
           (return nil)))))

(defun ngram= (block1 block2)
  (let ((rank (1- (length block1))))
    (every '= (rtl:slice block1 0 rank)
              (rtl:slice block2 0 rank))))
```

We assume that the `*ngrams-index*` array, containing, for each file, a pair of the codes of its first and last ngrams alongside the ngrams data from the file itself, was already initialized. This array should be sorted by the codes of the first ngram in each pair. A significant drawback of the original version of this program was that it took quite some time to read all the files (tens of gigabytes) from disk. During this operation, which took several dozen minutes, the application was not responsive. This created a serious bottleneck in the system as a whole and complicated updates, as well as put normal operation at additional risk. The solution we utilized to counteract this issue was a common one for such cases: switching to lazy loading using the Unix `mmap` facility. With this approach, the bounding ngram codes for each file should be precalculated and stored as metadata, to initialize the `*ngrams-index*` before loading the data itself.
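On SBCL, such lazy loading might be wired up along these lines. This is only a sketch under stated assumptions: the `sb-posix` extension is available, error handling and `munmap` are omitted, and `mmap-ngram-file`/`mmap-ref-32` are hypothetical helper names, not functions from the book:

```
;; Map a file into memory without reading it eagerly: pages are
;; faulted in by the OS only when actually touched.
(require :sb-posix)

(defun mmap-ngram-file (path)
  (let* ((fd (sb-posix:open path sb-posix:o-rdonly))
         (size (sb-posix:stat-size (sb-posix:fstat fd)))
         (sap (sb-posix:mmap nil size sb-posix:prot-read
                             sb-posix:map-private fd 0)))
    (sb-posix:close fd)  ; the mapping survives closing the fd
    (values sap (floor size 4))))  ; pointer + length in 32-bit cells

;; 32-bit cells are then read on demand through SAP accessors
;; instead of a preloaded Lisp vector.
(defun mmap-ref-32 (sap i)
  (sb-sys:sap-ref-32 sap (* 4 i)))
```

The binary-search code then indexes into the mapped region via `mmap-ref-32` instead of `aref`, and startup cost shrinks to reading the small metadata files that hold the bounding ngram codes.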

#### Pagerank MapReduce Explanation

```
;; this function will be executed by mapper workers
(defun pr1 (node n p &key (d 0.85))
  (let ((pr (make-array n :initial-element 0))
        (m (hash-table-count (node-children node))))
    (rtl:dokv (j child (node-children node))
      (setf (aref pr j) (* d (/ p m))))
    pr))

(defun pagerank-mr (g &key (d 0.85) (repeat 100))
  (rtl:with ((n (length (nodes g)))
             (pr (make-array n :initial-element (/ 1 n))))
    (loop :repeat repeat :do
      (setf pr (map 'vector (lambda (x)
                              (+ (/ (- 1 d) n) x))
                    (reduce 'vec+ (map 'vector (lambda (node p)
                                                 (pr1 node n p :d d))
                                       (nodes g)
                                       pr)))))
    pr))
```

Here, we have used the standard Lisp `map` and `reduce` functions, but a map-reduce framework will provide replacement functions which, behind the scenes, will orchestrate parallel execution of the provided code. We will talk a bit more about map-reduce and see such a framework in the last chapter of this book.

One more thing to note is that the latter approach differs from the original version in that each mapper operates independently on an isolated version of the `pr` vector, and thus the execution of Pagerank on subsequent nodes during a single iteration will see an older input value `p`. However, since the algorithm is stochastic and the order of calculations is not deterministic, this is acceptable: it may impact only the speed of convergence (and hence the number of iterations needed) but not the final result.
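The `vec+` helper used in the `reduce` step above is not shown in the snippet; a minimal element-wise version (an assumed implementation, not the book's exact code) could be:

```
;; Element-wise vector addition: the reducer uses this to merge
;; the per-mapper rank-contribution vectors into a single vector.
(defun vec+ (v1 v2)
  (map 'vector '+ v1 v2))
```

For example, `(reduce 'vec+ (vector #(1 0) #(0 2) #(3 4)))` merges three mapper outputs into one accumulated vector.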

### Other Significant Changes

My decision to rely heavily on syntactic utilities from my RUTILS library was a controversial one from the start, and, surely, I understood that. But my motivation, in this regard, always was and still remains not self-promotion but a desire to present Lisp code so that it doesn't seem cumbersome, old-fashioned, or cryptic (and, thankfully, the language provides all the possibilities to tune its surface look to your preferences). However, as it bugged so many people, including the reviewers, for the new edition we have come to a compromise: all RUTILS code is used only qualified with the `rtl` prefix, so that it is apparent. Besides, I have changed some of the minor, purely convenience abbreviations to their standard counterparts (like returning to `funcall` instead of `call`).

Finally, the change that I regret the most, but understand was inevitable, is the change of the title and the new cover, which is in the standard Apress style. However, they have preserved the Draco tree in the top right corner, and it's like a window through which you can glance at the original book :)

So, that is an update on the status of the book.

For those who were waiting for the Apress release to come out, it's your chance to get it. The price is quite affordable. Basically, the same as the one I asked for (individual shipping via post is a huge expense).

And for those who have already gotten the original version of the book, all the major changes and fixes are listed in this post. Please take notice if you have had any issues.

I hope the book turns out to be useful to the Lisp community and serves both Lisp old-timers and newcomers.