This is a very interesting thread. I feel I must pick up on several points that have been made. On 2 Apr 2013, at 11:45, Makarius makarius@sketis.net wrote:
> People have occasionally pointed out to me that the bias of Poly/ML to the good old image dumping looks very alien to the man on the street. Systems like OCaml imitate the compile/link phase of cc more faithfully, which gives many people a better feeling. (I am personally not very excited about how OCaml compilation works.)
Here, for "man on the street", I read "the diehard user of a language like C or C++". The persistent object store model is actually what the vast majority of men and women in the street who do something like programming use from day to day (I am thinking of databases and spreadsheets, VB macros in MS Word, and so on). If you think of an ML compiler along the lines of Poly/ML (or SML/NJ) as offering an amazingly powerful programmable database, then image dumping, or, even better, the hierarchical state-saving mechanism of Poly/ML, is a very good fit.
Of course, the good news is that, unlike SQL :-), Poly/ML also lets you compile stand-alone programs just like the C or C++ programmer.
On 2 Apr 2013, at 12:50, Makarius makarius@sketis.net wrote:
> Actual separate compilation is a different thing. I don't think there is a real need for that in Poly/ML. The compiler is fast enough to compile huge applications from scratch.
I think separate compilation would be useful if it were supported. Unfortunately, my understanding is that the modularity features of Standard ML are not compatible with separate compilation. (This is related to a very old complaint of mine, namely that the signature of a structure does not give you all the information you need to type-check code that uses the structure.)
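To illustrate the complaint (a sketch of my own, with made-up names): whether a client of a structure type-checks can depend on how the structure was ascribed to its signature, and that information is not recorded in the signature itself.

```sml
signature S = sig type t val x : t end

(* Transparent ascription: clients can still see that A.t = int. *)
structure A : S = struct type t = int val x = 0 end

(* Opaque ascription: B.t is abstract to clients. *)
structure B :> S = struct type t = int val x = 0 end

val ok = A.x + 1      (* accepted: A.t is known to be int *)
(* val bad = B.x + 1     rejected: B.t is not int as far as clients know *)
```

So to compile a client of A separately, the text of S is not enough; you also need the type identities established by A's definition.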
Moscow ML had a form of separate compilation, but the semantics were a little surprising. If the separately compiled code contains something like:
    val x = long_and_complex_function y;
then long_and_complex_function was executed at link-time rather than compile-time. This is much less satisfactory than the persistent object store model, at least if you are using ML for what it was invented for (i.e., implementing interactive theorem provers).
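For contrast, here is how the same binding behaves under the persistent store model (a sketch; long_and_complex_function is the hypothetical function from above, and I am assuming Poly/ML's PolyML.SaveState interface):

```sml
(* In a running Poly/ML session, the binding is evaluated once, when the
   source is compiled, and the result lives in the heap: *)
val x = long_and_complex_function y;

(* Saving the state preserves the computed value, so later sessions that
   load the saved state never re-run the computation: *)
val () = PolyML.SaveState.saveState "prover.state";
```

This is exactly what you want when x is, say, a large theory database built up by a theorem prover.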
On 2 Apr 2013, at 13:44, Gergely Buday gbuday@gmail.com wrote:
> That is the point I had. Writing scripts in ML could show Fedora developers that ML, through the POSIX modules, is indeed a platform for writing robust programs. The usual way now is to write python scripts and rewrite them in C if needed for better memory usage and speed. An ML script could be just tailored rewriting those critical parts in ML itself and compiled with mlton if necessary.
I strongly endorse this. One of the bizarre things about Python is that method call is extraordinarily inefficient. You can do something quite like functional programming in Python, but the performance can be appalling. Writing efficient Python is a black art and the results are typically not pretty. With ML you can generally achieve C-like performance in a fraction of the development time.
Regards,
Rob.