Recently I've been thinking about programming languages in general and how they relate to Common Lisp, spurred by such questions (and discussions) as "Why Ruby is an acceptable Lisp?", "Does Common Lisp need a better type system?", "What do you LISPers think of Haskell?", "Paul Graham's Arc is released today... what is the long term impact?" and so on.
As far as I understand, when people talk about an acceptable Lisp they don't mean Lisp per se, but a universal high-level programming language. It's just that Lisp has made a bold claim to be such a language. Is it? More on that later.
So, first I've analyzed the languages that are widespread now, to see whether they can deliver such universality.
Kent Pitman very accurately noted (Q16: Scheme in CS) that most people program in assembly languages, be they concrete, like C/C++, or abstract, like Java. That is understandable, because these languages allow a straightforward solution, and the majority is content with that. It's also one of the most important arguments for why purely functional programming languages can't become universally accepted.
What about high-level languages? First of all, what kind of animal is that? My idea is that it should be a language in which you can declare what to do rather than how, and think of any problem in terms of its domain vocabulary rather than in terms of moving bits and bytes around. But that's obviously an oversimplification. One of the most important qualities of a high-level language is that it provides efficient means of abstraction (and the function is the most important one, which is why languages without first-class functions don't count as high-level). A high-level language should provide means not only to solve a real-world problem, but also to solve the problem of solving such problems. Although I can't express concisely and clearly what that is, I'm sure there exists a common understanding, or rather feeling, of that concept among the people who raise such questions.
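To illustrate what "means of abstraction" buys you, here is a tiny Common Lisp sketch (the name fold is arbitrary, chosen just for this example; CL's built-in equivalent is reduce): a first-class function lets the "what" (summing, finding a maximum) be stated separately from the "how" (the iteration).

    ;; A generic reduction over a list: the combining operation is a parameter,
    ;; so the iteration logic is written once and reused for many purposes.
    (defun fold (fn init list)
      (if (null list)
          init
          (fold fn (funcall fn init (first list)) (rest list))))

    (fold #'+ 0 '(1 2 3 4))     ; => 10
    (fold #'max 0 '(3 7 2 5))   ; => 7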
So, what languages do we consider high-level?
Python is an attempt at a high-level language, but it's stuck in the middle of the road from imperative to high-level. The goal of the language's design was not to achieve universality, but to create a robust dynamically-typed object-oriented language for scripting. That goal has been reached, but there are no more ambitious aims ahead...
Ruby is definitely not an "acceptable Lisp"; it's a cleaner Perl (with a "mixin" of Smalltalk). The aim of the language is power, but in a somewhat myopic (Perl) sense: the ability to hack (which in this context is the opposite of building complex systems) neatly and fast. That's why everything which, in the historical context of the language's creation, was considered a powerful feature to have was incorporated into it: classes, keyword parameters, regular expressions, anonymous functions. Surely a lot of the achievements of language design from Lisp, Smalltalk and other languages are in Ruby, but it's not "turtles all the way down": it's not uniform enough to efficiently develop new high-level concepts on top of it that will fit, so you're mostly left with what's already in the language. (Btw, Mr. Braithwaite tries to prove the opposite.)
C# (it seems they are incorporating first-class functions in C# 3) is an interesting example of a language which is based on C++ and Java and gradually moves towards high-levelness. But it still lacks an underlying vision of universality; it's very similar to Python in that the imperative paradigm with its class-oriented extension is considered the most important and basic one, thus relegating the others to second-class citizens.
Erlang is essentially a DSL for concurrency-oriented programming. It's functional, but not at all intended to be universal.
Haskell is definitely different from the previously discussed languages, because it really is built on the basis of an abstract ideology and not as a pile of features. Its two fundamentals are purity and type inference, and as a manifestation of these ideas it achieves prominent results. But these ideas do not lead to a language capable of adapting to every problem and producing the most natural solution.
Forth, J et al. Why can't they be universal? I think it's like the difference between an alphabetic language, ordinary to us, and a hieroglyphic one. They are definitely a way of their own, and a very interesting one, but not a way to unification, I think.
Scheme is a high-level language built from a set of Lisp "axioms", whose biggest flaw is the prohibition of real macros. Paul Graham's Arc is an attempt to bring macros to Scheme, but, I think, it's too early to say whether it will succeed as a separate language or as a successor to Scheme.
Common Lisp
Is Common Lisp an acceptable Lisp, i.e. is it a universal high-level programming language? I don't think it is, and that has been proven by history, so to say (it's not even near to being universally accepted). But it surely is the most advanced language in this direction, and it possesses some high-level features that are not present in other languages (and these features are actually the features of the "ideal" Lisp from which Common Lisp derives its name). Briefly about them:
* Parens are not a drawback. On the contrary, they are a powerful basis for abstraction, because they delimit a form, the universal building block of the language. A form is both data and code. On top of parens sit S-expressions, whose other underlying concept is prefix notation, which unifies operators (mostly written in infix notation in other languages) and function application (prefix or suffix in other languages, but never infix). See the sketch after this list.
* As Paul Graham has pointed out, the best way to design a language is to start from the most general set of axioms. Lisp (Common Lisp included) follows this principle; moreover, its set of axioms is probably the most general one.
* Macros are built on top of Lisp's general-purpose s-expressions being code and data (the sketch after this list shows a simple example). I think it's obvious why they are obligatory for a high-level programming language: syntactic abstraction is one of the most common kinds of abstraction.
* No "religious" adherence to any specific programming paradigm, but taking the best principles from every one (at least at the time of language's design).
* A concise standard. It's funny when programmers of other languages "complain" about the CL standard being bloated, incomplete and outdated. Maybe the only language which can claim to have an at least somewhat accurate, complete and up-to-date standard is Java (or rather could claim), while others either don't have one at all (Ruby, Python etc.) or have a lot of revisions, vendor-specific parts and incompatibilities between them (JavaScript, C++ and so on). And the fact that the functional programming languages, which usually have clear formal specifications, are constantly in the process of revising them seems, to me, to prove that they just haven't reached maturity yet.
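To make the first and third points concrete, here is a minimal sketch; the with-timing macro is my own illustrative example, not something from any particular library:

    ;; A form is ordinary list data: it can be taken apart and rebuilt
    ;; by the same list functions that work on any other data.
    (first '(+ 1 (* 2 3)))            ; => +
    (eval (list '+ 1 (list '* 2 3)))  ; => 7

    ;; A macro is essentially a function from forms to forms, run before
    ;; compilation. Here BODY is a list of forms spliced into new code.
    (defmacro with-timing (&body body)
      (let ((start (gensym "START")))
        `(let ((,start (get-internal-real-time)))
           (prog1 (progn ,@body)
             (format t "~&Elapsed: ~a internal time units~%"
                     (- (get-internal-real-time) ,start))))))

    ;; Usage: the caller writes plain forms; the boilerplate pattern
    ;; (note the start time, run, report) is abstracted away syntactically.
    (with-timing (sleep 1) (+ 1 2))   ; prints the elapsed time, returns 3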
But, still, there are some features, or rather underlying principles, that Common Lisp lacks.
* What do static typing folks want?
I "belong" to the dynamic typing camp, but I think, I understand, what the other camp needs. The Lisp idea, that variables don't have types and only values have, should be acceptable to them. But they want a compiler subsystem, which checks one of the dimension of the program's correctness (the dimension, that really allows verification), based on the programmer's declaration. And that surely can be considered an important high-level feature. Moreover, it seems to be a reachable goal.
What CL has is optional type declarations and runtime checks. Qi implements static type-checking and type inference, along with other functional-paradigm features. But it abandons prefix notation and the multi-paradigm approach as a whole. Thus it's a very advanced functional language with roots in Lisp, but not Lisp. Neither is it universal.
What can be done inside Lisp to add static type-checking on top of dynamic typing? I think there's a possibility at least to implement partial "lexical" type-checking for lexical variables, though how to do that properly is a different question. Anyway, I'm sure that although a "universal language's" type system can have a deep mathematical foundation, it should allow not only logical formalisms for defining types, but plainer and simpler variants as well.
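For reference, this is roughly what the existing combination of optional declarations and runtime checks looks like; how much a compiler does with the declarations is implementation-dependent (SBCL, for example, derives types and warns at compile time):

    ;; Optional declarations: the standard allows a compiler to trust,
    ;; check, or ignore them.
    (defun add-squares (x y)
      (declare (type fixnum x y))
      (+ (* x x) (* y y)))

    ;; Explicit runtime check: signals a correctable TYPE-ERROR on mismatch.
    (defun safe-add-squares (x y)
      (check-type x fixnum)
      (check-type y fixnum)
      (add-squares x y))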
* What did Schemers want, and why did they choose Scheme?
As far as I understand, people who choose Scheme over CL do so not because of the nil vs. false issue, nor for the lack of macros. And not because it's a Lisp-1 as such, but, to be more precise, because it supports evaluation of the car of a list, while CL only allows function (and macro, and special-form) names there. I don't see why there can't be a compromise between the two approaches: keep multiple namespaces, which, as practice shows, is very beneficial, and at the same time treat the function cell (the car of a list) differently: if it's a name, it should come from the function namespace, while if it's a list, it's a form which should be evaluated to produce either a function object or a function name.
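To make the difference concrete (the last form is the hypothetical compromise, not valid Common Lisp today):

    ;; Common Lisp today: a name in the car of a form is looked up in the
    ;; function namespace, so calling a function held in a variable needs FUNCALL.
    (let ((op #'+))
      (funcall op 1 2))           ; => 3

    ;; Scheme, being a Lisp-1, evaluates the car like any other position:
    ;; (let ((op +)) (op 1 2))    ; => 3 in Scheme

    ;; The proposed compromise (hypothetical): a symbol in the car keeps its
    ;; CL meaning, but a compound form there would be evaluated to produce
    ;; the function to call, e.g.
    ;; ((compose #'sqrt #'abs) -4)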
* Some idiosyncrasies, mostly inherited, should be removed (not all functions are generic; errors don't subclass standard-object, etc.). The MOP should be in the standard. And the tradition of incorporating the best from every paradigm should be continued: at least the concurrency-oriented paradigm is relevant.
* Community process.
If the ideal Lisp is ever to appear, it should have a standard and a process for modifying it without losing backward compatibility.
I understand what a difficult and bureaucratic process the effort for the Common Lisp standard was, and that it's not going to be repeated. Well, it's not necessary. Times have changed, and now excellent implementations of a language are produced by a globally distributed community. There's no central power and no big client (like DARPA) demanding an official standard. The standard should and could be produced by the community as a result of consensus. Why will Python, Perl or Arc never become universal? Because their development is mostly determined by the tastes and preferences of their "benevolent dictators", and their views, however enlightened and advanced they may be, are subjective and can't be accepted universally. Today there's virtually no other power in the Lisp world except its users: neither big organizations, like Microsoft, Google or DARPA, nor dominant implementation vendors. So I think that if the members of the CL community with the biggest impact on it, like the participants of the original standardization process who are still active in the Common Lisp world (Kent Pitman and Daniel Weinreb), as well as the new generation of prominent Lispers (Pascal Costanza, Edi Weitz, Paul Graham) and others, were able to self-organize and set up a community process for the creation of a new Lisp standard, it could yield the desired result.
Maybe it's time to start advancing from Common Lisp to Universal Lisp?
Didn't read all your posts, but I should point out that the main weakness you mention in Scheme, that it lacks real macros, is not true. Any implementation of Scheme that you would use offers Common Lisp style macros. They aren't part of the Scheme standard, but they are available anyway.
In discussing the advantages of Scheme, you left off full and guaranteed support for tail calling. Common Lisp is substantially lower-level than Scheme because of this one feature.
Also, the advantages of Haskell's types go beyond verification, into allowing conceptual structures such as Monads to be defined as types. The compiler can validate mathematical axioms about code (dependently typed languages can do this even more), and so mathematical knowledge can be safely imported and used to construct very high-level tools.
To Ethan Herdrick:
That's interesting. Could you please provide a link? What I've found so far (for PLT Scheme) is only syntax-rules.
Besides, as far as I know, there are Typed Scheme and Termite Scheme (for multiprocessing). But they may be incompatible with each other because of another problem often mentioned in discussions about Scheme: insufficient specification in the standard.
To Julian Morrison:
I think it's a long (and interesting) discussion, and I don't have enough experience/knowledge in this part of mathematics, although I'd like to. From what I've read so far I haven't seen any advanced examples of what you are talking about (like the use of monads beyond side effects and state preservation) that show things which can't be done elegantly in other ways, without using the type system. Maybe you can point me to some?