Hi,
I would like to feed an sml program into poly from standard input:
$ cat hello.sml | poly
Poly/ML 5.4.1 Release
# Hello World!val it = (): unit
Is it possible to do this so that the compiler itself does not print anything? I have found poly -q, which does not print the release message, but that still prints all the rest.
- Gergely
I'm not sure what your exact requirements are but a possible solution may be to create an executable. Then compile-time output would not be mixed with run-time output. It's straightforward: wrap everything into a toplevel function and export that, e.g.
[pclayton@rizzo ~]$ cat hello.sml
fun main () = print "Hello World!\n";
PolyML.export ("hello", main);
Compile: cat hello.sml | poly
Link:
POLYHOME=/opt/polyml/polyml-5.5   # your Poly/ML installation
POLYLIB=${POLYHOME}/lib
LD_RUN_PATH=${POLYLIB}:${LD_RUN_PATH} cc -ggdb -o hello -L${POLYLIB} -lpolymain -lpolyml hello.o
Run: ./hello
Phil
Sorry, I did not make clear what I want: I want to invoke ML scripts from the command line, without compiling them first. I made it work, as follows. There is an shc [1] translator that compiles shell scripts to C code. I have used a one-liner [2]:
$ cat polyscript.sh
#!/bin/bash
tail -n +2 $1 | poly
I compiled this with shc and then with gcc:
$ shc-3.8.9/shc -f polyscript.sh
$ gcc -Wall polyscript.sh.x.c -o polyscript
Now, I was able to create a first script written in ML:
$ cat smlscript
#!/home/gergoe/projects/shebang/polyscript $0
print "Hello World!"
and I was able to run it:
$ chmod u+x smlscript
$ ./smlscript
Poly/ML 5.4.1 Release
# Hello World!val it = (): unit
It might be interesting to write polyscript directly in C, but probably that wouldn't make it faster.
Back to the original question: this is why I would like to suppress any compiler message.
I did not find such a flag in the manual, would it be possible to add one, David?
- Gergely
[1] http://www.datsi.fi.upm.es/~frosal/
[2] http://stackoverflow.com/questions/15665119/how-to-define-script-interpreter... holgero's answer
On 29 Mar 2013, at 08:42, Gergely Buday gbuday@gmail.com wrote:
Back to the original question: this is why I would like to suppress any compiler message.
The function PolyML.compiler lets you write your own customised read-eval-print loop. In the code below, the function my_read_eval_print_loop is a variant of the usual one that sends all compiler output to standard error and exits when standard input runs out. If you put it in my-revl.ML and run
poly < my-revl.ML 1>a 2>b
there will be a small predictable set of compiler messages at the head of file a, followed by the standard output of the other code in my-revl.ML; all the rest of the compiler messages go to file b. If you make an executable that runs my_read_eval_print_loop, following Phil Clayton's post, then you will get a program that compiles and runs ML code and sends all compiler messages to standard error (so you can discard them using 2>/dev/null on the command line).
Regards,
Rob.
=== beginning of my-revl.ML ====
fun read_or_exit () = (
    case TextIO.input1 TextIO.stdIn of
        NONE => Posix.Process.exit 0wx0
      | some => some
);

fun my_read_eval_print_loop () = (
    PolyML.compiler
        (read_or_exit,
         [PolyML.Compiler.CPOutStream (fn s => TextIO.output (TextIO.stdErr, s))]) ();
    my_read_eval_print_loop ()
);

val _ = my_read_eval_print_loop ();

(* could also do PolyML.export ("my-revl", my_read_eval_print_loop) at this point *)

fun repeat f n = if n <= 0 then () else (f (); repeat f (n - 1));

fun say s () = TextIO.output (TextIO.stdOut, s ^ "\n");

val hello = repeat (say "Hello World!");
hello 10;

val goodbye = repeat (say "Goodbye cruel World!");
goodbye 10;
=== end of my-revl.ML ====
On Fri, 29 Mar 2013, Gergely Buday wrote:
I want to invoke ML scripts from the command line, without compiling them first.
As Phil has already pointed out, you can produce standalone executables from a Poly/ML program that do whatever you want them to do.
In the ML part of that executable you can invoke the Poly/ML compiler at runtime, to compile and run your "script". By doing this yourself, instead of using the default ML toplevel loop, you can control compiler output to a large extent via the options provided in the structure PolyML.Compiler.
See also this answer on Stack Overflow on how to wrap up the Poly/ML compiler as an "eval" function (without special options):
http://stackoverflow.com/questions/9555790/does-sml-poly-have-a-cl-like-repl...
If you want, we can also continue that Q&A game on Stack Overflow, although I have no particular preference for mailing list vs. social network here.
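For instance, a minimal "eval" in that spirit might look like the following. (This is only a sketch, based on the PolyML.compiler usage shown in Rob's post earlier in this thread; it assumes the string holds a sequence of complete top-level declarations, and the CPOutStream option here simply discards all compiler output.)

fun eval (text: string) : unit =
    let
        val pos = ref 0
        (* Feed the string to the compiler one character at a time. *)
        fun read () =
            if !pos < String.size text
            then SOME (String.sub (text, !pos)) before pos := !pos + 1
            else NONE
    in
        (* Each call compiles one top-level declaration and then runs it. *)
        while !pos < String.size text do
            PolyML.compiler (read, [PolyML.Compiler.CPOutStream (fn _ => ())]) ()
    end;

(* e.g. eval "val x = 2 + 2; print (Int.toString x ^ \"\\n\");"; *)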
Makarius
On 29/03/2013 08:42, Gergely Buday wrote:
Back to the original question: this is why I would like to suppress any compiler message.
I did not find such a flag in the manual, would it be possible to add one, David?
There have been a few suggestions for how to write your own top level and that's definitely the best way if you really want control over the output. I've just committed a change so that the -q option now sets PolyML.print_depth to zero so that by default the output won't be printed. To suppress the prompt you would be better off using the --use option to run the file directly. You will need to add OS.Process.exit OS.Process.success: unit; at the end if you don't want to enter the main read-eval-print-loop.
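For instance, with these changes something like the following should work (hypothetical file name; the expected output is my reading of the new behaviour):

$ cat run.ML
print "Hello World!\n";
OS.Process.exit OS.Process.success: unit;
$ poly -q --use run.ML
Hello World!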
David
David,
On 29 Mar 2013, at 11:50, David Matthews David.Matthews@prolingua.co.uk wrote:
There have been a few suggestions for how to write your own top level and that's definitely the best way if you really want control over the output. I've just committed a change so that the -q option now sets PolyML.print_depth to zero so that by default the output won't be printed. To suppress the prompt you would be better off using the --use option to run the file directly. You will need to add OS.Process.exit OS.Process.success: unit; at the end if you don't want to enter the main read-eval-print-loop.
Quite a common thing to do in UN*X applications is not to prompt if the input isn't a terminal. Obviously, I can write my own read-eval-print loop that does that (indeed the read-eval-print loop in my earlier post on this topic doesn't prompt at all), but it might be a nice companion to the change you have just made to make the top level do that out of the box. That would give Gergely Buday exactly what he is asking for (i.e., the ability to have poly read code from the standard input and only output what the compiled code outputs).
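(For reference, the terminal test itself is a one-liner using the Posix structure of the Basis Library, which Poly/ML provides on Unix:

val interactive = Posix.ProcEnv.isatty Posix.FileSys.stdin;

A top level could consult this once at startup to decide whether to prompt.)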
Regards,
Rob.
On 29/03/2013 20:24, Rob Arthan wrote:
Quite a common thing to do in UN*X applications is not to prompt if the input isn't a terminal. Obviously, I can write my own read-eval-print loop that does that (indeed the read-eval-print loop in my earlier post on this topic doesn't prompt at all), but it might be a nice companion to the change you have just made to make the top level do that out of the box. That would give Gergely Buday exactly what he is asking for (i.e., the ability to have poly read code from the standard input and only output what the compiled code outputs).
I guess it could suppress the prompt if the input was from a file but generally I wouldn't want to suppress it if it came from a pipe. On Windows I normally run Poly/ML within an environment that provides editing and communicates to the Poly process through pipes. Since it's interactive I want the normal prompts. It's difficult to see how to distinguish this from "cat file.ML | poly".
David
"David" == David Matthews David.Matthews@prolingua.co.uk writes:
David> On 29/03/2013 20:24, Rob Arthan wrote:
David> I guess it could suppress the prompt if the input was from a David> file but generally I wouldn't want to suppress it if it came David> from a pipe. On Windows I normally run Poly/ML within an David> environment that provides editing and communicates to the Poly David> process through pipes. Since it's interactive I want the David> normal prompts. It's difficult to see how to distinguish this David> from "cat file.ML | poly".
Other tools do this by providing an extra flag argument: they prompt if stdin is a tty, or if '-i' (interactive) is provided.
Peter C
--
Dr Peter Chubb  peter.chubb AT nicta.com.au  http://www.ssrg.nicta.com.au
Software Systems Research Group/NICTA
On Fri, 29 Mar 2013, Rob Arthan wrote:
Quite a common thing to do in UN*X applications is not to prompt if the input isn't a terminal. Obviously, I can write my own read-eval-print loop that does that (indeed the read-eval-print loop in my earlier post on this topic doesn't prompt at all), but it might be a nice companion to the change you have just made to make the top level do that out of the box. That would give Gergely Buday exactly what he is asking for (i.e., the ability to have poly read code from the standard input and only output what the compiled code outputs).
Note that for a full stand-alone solution, Gergely also needs to take care of the shebang line in the ML "script".
Anyway, looking at the bigger picture, the general question is how much conventional Unixoid behaviour Poly/ML should provide by default. Using it as script interpreter is one thing. Doing cc-like compilation of object modules and executables is another.
People have occasionally pointed out to me that the bias of Poly/ML to the good old image dumping looks very alien to the man on the street. Systems like OCaml imitate the compile/link phase of cc more faithfully, which gives many people a better feeling. (I am personally not very excited about how OCaml compilation works.)
One could do a little bit here by including certain polyml options or shell scripts by default, to address both the scripting and the batch-compilation problem in a way that looks familiar to the masses.
Makarius
On 02/04/2013 11:45, Makarius wrote:
Note that for a full stand-alone solution, Gergely also needs to take care of the shebang line in the ML "script".
This is a bit of a problem. At least in the languages I know that are used for scripting, the # symbol at the beginning of a line begins a comment. That's not true for ML. I guess it would be possible to add an option to simply discard the first line of input.
Anyway, looking at the bigger picture, the general question is how much conventional Unixoid behaviour Poly/ML should provide by default. Using it as script interpreter is one thing. Doing cc-like compilation of object modules and executables is another.
Well, that was the intention behind the changes I made with version 5.0. 5.0 originally only had PolyML.export as the way of exporting the state. PolyML.SaveState was added later.
People have occasionally pointed out to me that the bias of Poly/ML to the good old image dumping looks very alien to the man on the street. Systems like OCaml imitate the compile/link phase of cc more faithfully, which gives many people a better feeling. (I am personally not very excited about how OCaml compilation works.)
One could do a little bit here by including certain polyml options or shell scripts by default, to address both the scripting and the batch-compilation problem in a way that looks familiar to the masses.
Do you have something else in mind apart from PolyML.export? Perhaps some form of separate compilation of modules? I'm not familiar with OCaml.
David
On Tue, 2 Apr 2013, David Matthews wrote:
Well, that was the intention behind the changes I made with version 5.0. 5.0 originally only had PolyML.export as the way of exporting the state. PolyML.SaveState was added later.
Do you have something else in mind apart from PolyML.export? Perhaps some form of separate compilation of modules? I'm not familiar with OCaml.
Actual separate compilation is a different thing. I don't think there is a real need for that in Poly/ML. The compiler is fast enough to compile huge applications from scratch.
Personally I don't have any requirements beyond what Poly/ML does already. What "the man in the street" wants to see, though, is something that looks and feels like "polymlc ..." just like "cc ...", although that might sound a bit silly.
Makarius
On 02/04/2013 12:50, Makarius wrote:
Actual separate compilation is a different thing. I don't think there is a real need for that in Poly/ML. The compiler is fast enough to compile huge applications from scratch.
Personally I don't have any requirements beyond what Poly/ML does already. What "the man in the street" wants to see, though, is something that looks and feels like "polymlc ..." just like "cc ...", although that might sound a bit silly.
Well, it wouldn't be hard to provide a slight variation of the top level that, when it reached end-of-file, looked in the name-space for a variable called "main", checked that it had the correct function type and then called PolyML.export on it. If this would appeal to some current sceptics about Poly/ML then I'm happy to do it.
Actually it's probably not much more than

  polyml --use myprogram.ML --use export.ML

where export.ML is

  PolyML.export("polyml.o", main);
  OS.Process.exit OS.Process.success: unit;
David
On Tue, 2 Apr 2013, David Matthews wrote:
Well, it wouldn't be hard to provide a slight variation of the top level that, when it reached end-of-file, looked in the name-space for a variable called "main", checked that it had the correct function type and then called PolyML.export on it. If this would appeal to some current sceptics about Poly/ML then I'm happy to do it.
Actually it's probably not much more than

  polyml --use myprogram.ML --use export.ML

where export.ML is

  PolyML.export("polyml.o", main);
  OS.Process.exit OS.Process.success: unit;
You get the idea. It is mostly about trivialities for people who know Poly/ML sufficiently well.
Note that the majority of people out there are not even sceptics of Poly/ML, because they have not even heard of it. Instead they are using much worse systems that they are used to.
Makarius
Makarius wrote:
Note that the majority of people out there are not even sceptics of Poly/ML, because they have not even heard of it. Instead they are using much worse systems that they are used to.
That is the point I had in mind. Writing scripts in ML could show Fedora developers that ML, through the POSIX modules, is indeed a platform for writing robust programs. The usual way now is to write Python scripts and rewrite them in C if needed for better memory usage and speed. An ML script could be tailored similarly: rewriting the critical parts in ML itself and compiling with MLton if necessary.
- Gergely
On Tue, 2 Apr 2013, Gergely Buday wrote:
An ML script could be tailored similarly: rewriting the critical parts in ML itself and compiling with MLton if necessary.
I hear that part about MLton occasionally, and wonder if it is really significant. Do you have concrete performance figures at hand that show that the extra time for MLton compilation is worth the wait? (Real applications, not just micro-benchmarks.)
Makarius
On Tue, Apr 2, 2013 at 8:51 AM, Makarius makarius@sketis.net wrote:
I hear that part about MLton occasionally, and wonder if it is really significant. Do you have concrete performance figures at hand that show that the extra time for MLton compilation is worth the wait? (Real applications, not just micro-benchmarks.)
There are some (admittedly, outdated in terms of compiler versions) performance comparisons of ML compilers at: http://mlton.org/Performance In general, though, if MLton does better than Poly/ML on micro-benchmarks, then I would imagine that it would tend to do better than Poly/ML on larger programs, where there are more opportunities for whole-program optimization. Of course, it also depends on the "real application" itself. No amount of compiler optimization can help if your application is I/O bound. Also, Poly/ML supports some kinds of applications such as Isabelle that require the ability to dynamically enter new code, which isn't compatible with MLton's compilation strategy.
For larger stand-alone programs, MLton's compilation gets better with respect to Poly/ML's; for example, here is Poly/ML 5.5 and MLton 20100608 compiling MLton (on a 2009 Mac Pro: 2.66GHz Quad-Core Intel Xeon, 6GB 1066MHz DDR3, Mac OS X 10.7 (Lion)):
[mtf@fenrir mlton]$ /usr/bin/time make polyml-mlton
...
202.07 real  241.34 user  8.13 sys
[mtf@fenrir mlton]$ /usr/bin/time make mlton-compile
...
305.59 real  285.22 user  17.18 sys
So, paying about 1.5X compile time to use MLton instead of Poly/ML. Watching the build, it appears that Poly/ML is spending quite a bit of time in the final 'PolyML.export'.
Now, here are the resulting executables compiling (a slightly old) version of HaMLet:
[mtf@fenrir tests]$ /usr/bin/time ../../build.polyml/bin/mlton -verbose 1 hamlet.sml
MLton starting
...
MLton finished in 9.63 + 57.56 (86% GC)
17.99 real  65.64 user  1.67 sys
[mtf@fenrir tests]$ /usr/bin/time ../../build.mlton/bin/mlton -verbose 1 hamlet.sml
MLton starting
...
MLton finished in 5.95 + 2.52 (30% GC)
8.59 real  6.80 user  1.77 sys
So, paying about 2.0X run time to use Poly/ML instead of MLton (looking at wall-clock time). Note that things would be a bit worse on a single-core machine --- Poly/ML's parallel GC does a good job of utilizing this machine's 8 cores (technically, 4 cores w/ 2-way hyper threading), yielding a wall-clock time that is about a quarter of the total processor time.
And, here are the resulting executables compiling MLton:
[mtf@fenrir mlton]$ /usr/bin/time ../build.polyml/bin/mlton -verbose 2 mlton.mlb
MLton starting
...
MLton finished in 728.89 + 49105.93 (99% GC)
9615.53 real  49693.05 user  142.17 sys

[mtf@fenrir mlton]$ /usr/bin/time ../build.mlton/bin/mlton -verbose 2 mlton.mlb
MLton starting
...
MLton finished in 209.28 + 52.13 (20% GC)
262.58 real  254.47 user  7.06 sys
So, paying about 36.6X run time to use Poly/ML instead of MLton (looking at wall-clock time). Of course, it's clear that the Poly/ML-compiled MLton is essentially GC bound when compiling MLton. Also, I didn't do anything special with adjusting Poly/ML's heap parameters --- I'm sure one could do better, if not much better (but the default behavior is the one that makes the first impression). In any case, one is still paying about 3.5X run time to use Poly/ML instead of MLton (looking at mutator time, as reported by Timer.checkCPUTimes).
I'm sure there are some aspects of the MLton code base that make it more suitable for compilation by MLton than by other SML compilers, but I'd guess that this gives a reasonable estimate: pay a (one-time) 1.5X-2.0X compile time to use MLton instead of Poly/ML to gain a (many-time) 0.33X-0.5X run time. Maybe not the right trade-off for development, but quite possibly the right trade-off for deployment --- which is precisely the kind of scenario that Gergely had in mind.
-Matthew
On Tue, 2 Apr 2013, Matthew Fluet wrote:
There are some (admittedly, outdated in terms of compiler versions) performance comparisons of ML compilers at: http://mlton.org/Performance In general, though, if MLton does better than Poly/ML on micro-benchmarks, then I would imagine that it would tend to do better than Poly/ML on larger programs, where there are more opportunities for whole-program optimization. Of course, it also depends on the "real application" itself. No amount of compiler optimization can help if your application is I/O bound. Also, Poly/ML supports some kinds of applications such as Isabelle that require the ability to dynamically enter new code, which isn't compatible with MLton's compilation strategy.
I don't want to say anything inappropriate about MLton -- we can't use it in Isabelle, due to the inherent alternation of compilation and execution that is never really finished, so we will never know its performance there. The way Isabelle and similar theorem provers from the HOL family work violates the basic assumptions of whole-program optimization.
Incidentally, the main "benchmark" for Isabelle/ML is the Isabelle/HOL image, and that also includes a lot of compile time. It needs both online compilation, and *fast* online compilation. (Presently Isabelle/HOL requires 1:30 min on recent consumer CPUs like an i7 with 4 cores * hyperthreading; historically it was up to 25 min, although the image was much smaller back then.)
Note that the extrapolation from microbenchmarks to real applications was done by the SML/NJ guys many years ago. According to that, Isabelle on SML/NJ would have to be much faster than on Poly/ML, but historically it was always a factor of 1.2 .. 2 slower in the best of its times. Now SML/NJ is approx. 40..100 times slower. What proved deadly to SML/NJ were two things:
* Poor scalability of heap management (anything beyond approx. 100 MB is getting really slow). So "IO" should also include data moved between the CPU and the memory subsystem.
* Lack of support for multicore systems.
Any benchmark these days should include parallel processing routinely, but we should be glad to have support for multicore hardware at all in a few surviving implementations of Standard ML. (OCaml is really in a pinch there -- maybe some users can escape to F#.)
Makarius
On 02.04.2013 14:02, David Matthews wrote:
Well, it wouldn't be hard to provide a slight variation of the top level that, when it reached end-of-file, looked in the name-space for a variable called "main", checked that it had the correct function type and then called PolyML.export on it. If this would appeal to some current sceptics about Poly/ML then I'm happy to do it.
This would definitely be a great help for people new to Poly/ML. When I had to produce a standalone executable from an ML-file it took me quite some time before I figured out how it works.
Actually it's probably not much more than

  polyml --use myprogram.ML --use export.ML

where export.ML is

  PolyML.export("polyml.o", main);
  OS.Process.exit OS.Process.success: unit;
Make the name of the resulting obj file a parameter and it's fine, I think. Could look like (shell script):
polyml --use myprogram.ML <<EOF
PolyML.export("${output}", main);
EOF
Perhaps one can also pour in the gcc-compile phase already, so it's one step from ML-file to executable.
- René
On 02/04/2013 13:25, René Neumann wrote:
This would definitely be a great help for people new to Poly/ML. When I had to produce a standalone executable from an ML-file it took me quite some time before I figured out how it works.
Make the name of the resulting obj file a parameter and it's fine, I think. Could look like (shell script):
polyml --use myprogram.ML <<EOF
PolyML.export("${output}", main);
EOF
Yes, of course. Having the object file name as a parameter, perhaps defaulting to the name of the source file with ".o" on the end, was my "much more than".
Perhaps one can also pour in the gcc-compile phase already, so it's one step from ML-file to executable.
I don't know how that works so perhaps that's for later.
So, in summary, it should look like:

-c option: Compile a source file which must contain a function main of type unit -> unit (or string list -> unit or string list -> int ???) and export the object file.
-o option: Used with -c to override the default object file name.
-i option: Force interactive mode. The default is interactive only if stdin is a terminal. This controls whether to print a prompt.
--skip-first-line option: Skip the first line of the input stream. Used with scripts with #! at the start.
Does that seem reasonable? I don't think there's much there that is very complicated. Should some of these imply the -q option?
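To make the proposal concrete, invocations might look something like this (purely illustrative, since these flags are only proposals at this point):

poly -c myprogram.ML             # compile and export main to an object file
poly -c myprogram.ML -o prog.o   # as above, overriding the object file name
poly -i < input.ML               # prompt even though stdin is not a terminal
poly --skip-first-line < script  # discard a leading #! line before compiling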
David
On Tue, 2 Apr 2013, David Matthews wrote:
--skip-first-line option: Skip the first line of the input stream. Used with scripts with #! at the start.
You should probably insist in an actual "#!" before skipping the first line, just as a sanity check.
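A sketch of that check, assuming the script text has been read into a string first: discard the first line only when it really begins with "#!".

fun stripShebang (text: string) : string =
    if String.isPrefix "#!" text
    then (case CharVector.findi (fn (_, c) => c = #"\n") text of
              SOME (i, _) => String.extract (text, i + 1, NONE)  (* drop the #! line *)
            | NONE => "")  (* the whole script was just a #! line *)
    else text;  (* no shebang: leave the input untouched *)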
Makarius
So, in summary, it should look like:

-c option: Compile a source file which must contain a function main of type unit -> unit (or string list -> unit or string list -> int ???) and export the object file.
Currently, PolyML.export expects unit -> unit, so in my opinion one should stick with that to avoid confusion.
- René
Dear all,
shortly after I started to use PolyML (outside of Isabelle), I created the attached bash script for compilation of executables and custom top-levels (there is also some special treatment to reuse Isabelle's library, which you may safely ignore). Maybe it is of use to somebody.
cheers
chris
On 02/04/2013 14:36, David Matthews wrote:
-i option: Force interactive mode. The default is interactive only if stdin is a terminal. This controls whether to print a prompt.
--skip-first-line option: Skip the first line of the input stream. Used with scripts with #! at the start.
I've implemented the -i option and the check for interactive mode, and also a --script option. The latter compiles and executes the input file, stripping off the first line if it begins with #!. It also quietens the output as with the -q option.
david@dunedin:~$ cat /tmp/hello.ML
#! /usr/local/bin/poly --script
print "Hello World\n";
david@dunedin:~$ /tmp/hello.ML
Hello World
david@dunedin:~$
It seems that when the script is called, CommandLine.arguments () returns all the arguments that were given to the script as the first item in the list, with the name of the file itself as the second. That means that, for the moment at least, there must be just one --script argument and nothing else.
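A quick way to observe this is a script whose body just dumps its argument list, e.g. (hypothetical example):

#! /usr/local/bin/poly --script
print (String.concatWith "\n" (CommandLine.arguments ()) ^ "\n");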
I'm still wondering about the compilation options. It would be nice to find some way of including the linking phase with compilation. Currently it's not that difficult to export an object file but there's a bit of messing about working out exactly what to put on the "ld" line to link the object file with the Poly libraries and any C++/gmp etc libraries. I have in mind some sort of script that would be built as part of building Poly/ML that would include the path to where the libraries will be installed as well as the dependent libraries. This is all there somewhere in the libtool/autoconf/Makefile but I'm not sure how to get it out.
David
* David Matthews:
This is a bit of a problem. At least in the languages I know that are used for scripting, the # symbol at the beginning of a line begins a comment. That's not true for ML. I guess it would be possible to add an option to simply discard the first line of input.
The number of arguments which can be specified on shebang lines is fairly restricted on most systems. I think you'd have to add --use-skip-first-line in addition to --use for this to work.
Lua just ignores the first line if it starts with "#". (Otherwise, "#" is an operator in Lua.)
On Fri, Apr 5, 2013 at 2:47 PM, Florian Weimer fw@deneb.enyo.de wrote:
Lua just ignores the first line if it starts with "#". (Otherwise, "#" is an operator in Lua.)
OpenAxiom does the same thing.
This is a very interesting thread. I feel I must pick up on several points that have been made.

On 2 Apr 2013, at 11:45, Makarius makarius@sketis.net wrote:
People have occasionally pointed out to me that the bias of Poly/ML to the good old image dumping looks very alien to the man on the street. Systems like OCaml imitate the compile/link phase of cc more faithfully, which gives many people a better feeling. (I am personally not very excited about how OCaml compilation works.)
Here for "man in the street", I read "the diehard user of a language like C or C++". The persistent object store model is actually what the vast majority of men and women in the street that do something like programming use from day to day (I am thinking of databases and spreadsheets, and VB macros in MS Word and ...). If you think of an ML compiler along the lines of Poly/ML (or SML/NJ) as offering an amazingly powerful programmable database, then image dumping, or even better, the hierarchical state saving mechanism of Poly/ML is a very good fit.
Of course, the good news is that, unlike SQL :-), Poly/ML also lets you compile stand-alone programs just like the C or C++ programmer.
On 2 Apr 2013, at 12:50, Makarius makarius@sketis.net wrote:
Actual separate compilation is a different thing. I don't think there is a real need for that in Poly/ML. The compiler is fast enough to compile huge applications from scratch.
I think separate compilation would be useful if it were supported. Unfortunately, my understanding is that the modularity features of Standard ML are not compatible with separate compilation. (This is related to a very old complaint of mine, namely that the signature of a structure doesn't give you all the information you need to type-check code that uses the structure.)
Moscow ML had a form of separate compilation, but the semantics were a little surprising. If the separately compiled code contains something like:
val x = long_and_complex_function y;
then long_and_complex_function was executed at link-time rather than compile-time. This is much less satisfactory than the persistent object store model, at least if you are using ML for what it was invented for (i.e., implementing interactive theorem provers).
On 2 Apr 2013, at 13:44, Gergely Buday gbuday@gmail.com wrote:
That is the point I had in mind. Writing scripts in ML could show Fedora developers that ML, through the POSIX modules, is indeed a platform for writing robust programs. The usual way now is to write Python scripts and rewrite them in C if needed for better memory usage and speed. An ML script could be tailored similarly: rewriting the critical parts in ML itself and compiling with MLton if necessary.
I strongly endorse this. One of the bizarre things about Python is that method call is extraordinarily inefficient. You can do something quite like functional programming in Python, but the performance can be appalling. Writing efficient Python is a black art and the results are typically not pretty. With ML you can generally achieve C-like performance in a fraction of the development time.
Regards,
Rob.
I've added a "polyc" script that is generated from the build process. The idea of this is to provide the similar sort of functionality that users of C expect from the "cc" command. It's very simple at the moment and is limited to a few options. It compiles an ML source file and exports the "main" function. The -o option specifies where the executable is to be placed, defaulting to a.out on Unix.
david@dunedin:~$ cat hello.ML
fun main() = print "Hello World\n";
david@dunedin:~$ polyc hello.ML -o hello
david@dunedin:~$ ./hello
Hello World
The script includes the path names to where the poly binary and the libraries will be installed, so it's not possible to run it within the build directory.

David
On Fri, Apr 5, 2013 at 7:24 AM, David Matthews David.Matthews@prolingua.co.uk wrote:
I've added a "polyc" script that is generated from the build process. The idea of this is to provide the similar sort of functionality that users of C expect from the "cc" command. It's very simple at the moment and is limited to a few options. It compiles an ML source file and exports the "main" function. The -o option specifies where the executable is to be placed, defaulting to a.out on Unix.
GHC's output defaults to the basename of the input file. I think it is a sensible default in modern environments.
GHC also has automatic dependency tracking; I don't know how practical or easy that is to implement for PolyML.
-- Gaby
polyc looks very useful. I just tried it out for 1723 and noticed a few things:
1. -L${LIBDIR} is missing in the case when -o is not specified, causing the error:

/usr/bin/ld: cannot find -lpolymain
/usr/bin/ld: cannot find -lpolyml
collect2: ld returned 1 exit status
2. The Makefile.am has EXTRA="$(EXTRALDFLAGS)" which should, perhaps, be EXTRALDFLAGS="$(EXTRALDFLAGS)" as EXTRALDFLAGS is used inside the script.
3. In the polyc that was built, I see EXTRA=" ". As it stands, the Poly/ML lib directory is not added to the linker path, so I presume (for Linux) that this was meant to contain something like -Wl,-rpath ${LIBDIR} so that it is not necessary to set LD_LIBRARY_PATH before running the executable.
Phil
Thanks for trying that out and letting me have your comments.
On 07/04/2013 23:55, Phil Clayton wrote:
polyc looks very useful. I just tried it out for 1723 and noticed a few things:
- -L${LIBDIR} is missing in the case when -o is not specified, causing the error:

/usr/bin/ld: cannot find -lpolymain
/usr/bin/ld: cannot find -lpolyml
collect2: ld returned 1 exit status
Oops. That was a typo.
- The Makefile.am has EXTRA="$(EXTRALDFLAGS)"
which should, perhaps, be EXTRALDFLAGS="$(EXTRALDFLAGS)" as EXTRALDFLAGS is used inside the script.
Also an error. I've just committed a fix.
- In the polyc that was built, I see EXTRA=" "
As it stands, the Poly/ML lib directory is not added to the linker path so I presume (for Linux) that this was meant to contain something like -Wl,-rpath ${LIBDIR} so that it is not necessary to set LD_LIBRARY_PATH before running the executable.
I really had in mind the Windows build which needs a few extra options. I hadn't thought of "rpath". My only concern would be if there are linkers around that don't support it. There doesn't seem to be a simple way in autoconf to find out if the linker supports it.
David
On 08/04/13 12:21, David Matthews wrote:
I really had in mind the Windows build which needs a few extra options. I hadn't thought of "rpath". My only concern would be if there are linkers around that don't support it. There doesn't seem to be a simple way in autoconf to find out if the linker supports it.
I have found something called config.rpath which seems to be part of the GNU portability library. This appears to calculate potentially useful values for passing rpath to linkers. acl_cv_hardcode_libdir_flag_spec may be of use - I don't know. Nor do I know how you would go about using it!
Alternatively, LD_RUN_PATH could be set for the link command. Although probably benign if unsupported, I don't know how portable it is to other platforms.
Presumably I am experiencing this linker path issue because I have installed Poly/ML to a non-standard location(?) In such a set-up, maybe it is reasonable to require LD_LIBRARY_PATH to be set for executables from polyc. Perhaps it is worth considering what users would expect if the compiled executables are copied to different systems.
Phil
On 08/04/2013 21:22, Phil Clayton wrote:
I have found something called config.rpath which seems to be part of the GNU portability library. This appears to calculate potentially useful values for passing rpath to linkers. acl_cv_hardcode_libdir_flag_spec may be of use - I don't know. Nor do I know how you would go about using it!
Alternatively, LD_RUN_PATH could be set for the link command. Although probably benign if unsupported, I don't know how portable it is to other platforms.
Presumably I am experiencing this linker path issue because I have installed Poly/ML to a non-standard location(?) In such a set-up, maybe it is reasonable to require LD_LIBRARY_PATH to be set for executables from polyc. Perhaps it is worth considering what users would expect if the compiled executables are copied to different systems.
I found some references to rpath in the porting guidelines for Debian. The general recommendation was to avoid it because while it would work for one library it might have an adverse effect if an application used other shared libraries.
Is there a reason you're installing to a "non-standard location"? In Linux that basically means "not listed in /etc/ld.so.conf" so the simplest solution is to add your location there. Another solution is to use static linking so you don't have to worry about it. Use ./configure --disable-shared.
My idea with the polyc script was to try to make it easy for people familiar with C to be able to try out Poly/ML. Of course it may evolve beyond that but for the moment I want to keep it simple.
David
On 09/04/13 11:28, David Matthews wrote:
I found some references to rpath in the porting guidelines for Debian. The general recommendation was to avoid it because while it would work for one library it might have an adverse effect if an application used other shared libraries.
Yes, it appears people feel quite strongly about that, so probably best avoided!
Is there a reason you're installing to a "non-standard location"?
I install to /opt/polyml/polyml-${version} to allow multiple versions to coexist, for at least two reasons:

1. To quickly allow (performance) regression tests to be performed between versions of Poly/ML.

2. I have needed a pre-release due to an enhancement not yet available in the main release. (For example, a while ago I was making use of an FFI enhancement before 5.5 was released.)

Other people may have other reasons for multiple versions.
In Linux that basically means "not listed in /etc/ld.so.conf" so the simplest solution is to add your location there.
You're right, I should really do that. In fact, on Fedora, I would add the file /etc/ld.so.conf.d/polyml-${version}.conf that contains the libdir.
Another solution is to use static linking so you don't have to worry about it. Use ./configure --disable-shared.
My idea with the polyc script was to try to make it easy for people familiar with C to be able to try out Poly/ML. Of course it may evolve beyond that but for the moment I want to keep it simple.
I misunderstood the motivation for polyc. I thought that it was to allow those without compiling/linking knowledge to easily build executables, i.e. to de-skill the process. Whilst such users may realize that
./configure --prefix <non-standard location>
requires
PATH=${PATH}:${bindir}
they would not realize that they need (assuming no super-user privileges)
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${libdir}
Phil
On 10/04/2013 20:03, Phil Clayton wrote:
In Linux that basically means "not listed in /etc/ld.so.conf" so the simplest solution is to add your location there.
You're right, I should really do that. In fact, on Fedora, I would add the file /etc/ld.so.conf.d/polyml-${version}.conf that contains the libdir.
I think that will probably work on Debian as well and probably other Linux distros.
I misunderstood the motivation for polyc. I thought that it was to allow those without compiling/linking knowledge to easily build executables, i.e. to de-skill the process. Whilst such users may realize that
./configure --prefix <non-standard location>
requires
PATH=${PATH}:${bindir}
they would not realize that they need (assuming no super-user privileges)
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${libdir}
I just wonder how common that case is. I would expect that most people manage their own machines and have sufficient privileges to be able to install software to the standard locations. The easiest way to try something out is usually to use the package management system to install a package configured for the distro and with all the dependencies sorted out. Personally, I would avoid installing something unfamiliar to a non-standard location because of the possibility of it breaking in unexpected ways.
David
On Thu, 11 Apr 2013, David Matthews wrote:
I just wonder how common that case is. I would expect that most people manage their own machines and have sufficient privileges to be able to install software to the standard locations. The easiest way to try something out is usually to use the package management system to install a package configured for the distro and with all the dependencies sorted out. Personally, I would avoid installing something unfamiliar to a non-standard location because of the possibility of it breaking in unexpected ways.
For the Isabelle distribution (which includes a multi-platform Poly/ML version) we have done exactly the opposite quite successfully for > 5 years: the many different package managers of the many different operating system distributions are ignored as much as possible.
Instead, there are shell scripts and environment variable settings to make things work under most circumstances, even if the user happens to have alternative versions of Isabelle or Poly/ML already installed by other means. OS packages can be very annoying, because they assume they are the one and only way to have a certain program of a certain name installed.
I have seen so many good programs turned into bad packages, not just Poly/ML.
Mac OS X is especially nasty, since there are several package managers to choose from, but none of them is really native. The Apple app store is probably better, but I don't know how it works.
The most elementary wrapper script for standalone ("portable") Poly/ML directories is included in the attachment. The real one for Isabelle is more advanced. Next time I will consider "./configure --disable-shared", which I did not know before.
Makarius
Perhaps I'm wrong. I guess if there was a simple, portable way of sorting out the paths I'd consider it.
From some of the comments on this thread I had the idea that there were potential users of Poly/ML out there who were put off by the read-eval-print-loop. After all, that isn't the way most other programming languages/implementations work. Having the --script option allows those used to scripting to code something up quickly so that's one group catered for. Another group of potential users are those used to the compile-link-execute model and that was the group I was trying to target with the polyc script.
For the Isabelle distribution (which includes a multi-platform Poly/ML version) we have done exactly the opposite quite successfully for > 5 years: the many different package managers of the many different operating system distributions are ignored as much as possible.
I have seen so many good programs turned into bad packages, not just Poly/ML.
I have also suffered from bad Poly/ML packages but I feel I would rather help get the packages fixed than discourage them. Isabelle and Poly/ML are rather different both in the complexity and in the motivation of potential users.
The most elementary wrapper script for standalone ("portable") Poly/ML directories is included in the attachment. The real one for Isabelle is more advanced. Next time I will consider "./configure --disable-shared", which I did not know before.
I wonder if --disable-shared should be the default with ./configure. It would solve a lot of these problems. I can see that packagers who are going to build a package to install to the standard location would want to build the shared library but users building a stand-alone system probably don't want it. If you're only building "poly" anyway there's no saving by having libpolyml as a separate library.
David
On 11/04/13 16:11, David Matthews wrote:
Perhaps I'm wrong. I guess if there was a simple, portable way of sorting out the paths I'd consider it.
Maybe this is just something to be prominently documented in installation instructions!
I am also wondering whether libtool helps. I know very little about it though.
From some of the comments on this thread I had the idea that there were potential users of Poly/ML out there who were put off by the read-eval-print-loop. After all, that isn't the way most other programming languages/implementations work. Having the --script option allows those used to scripting to code something up quickly so that's one group catered for. Another group of potential users are those used to the compile-link-execute model and that was the group I was trying to target with the polyc script.
Understood. I think I probably came up with a different group based on what I thought a Python programmer may typically know. I don't have any experience with Python though, nor any Python compilers (if such things exist).
I wonder if --disable-shared should be the default with ./configure. It would solve a lot of these problems. I can see that packagers who are going to build a package to install to the standard location would want to build the shared library but users building a stand-alone system probably don't want it. If you're only building "poly" anyway there's no saving by having libpolyml as a separate library.
If, in future, people are building Poly/ML-based applications, I think it would be preferable for the default to be dynamic. For example, I have just tried building the (SML!) GTK+ Hello World demo: the executable is much larger with static linking (768k with -ggdb, 560k stripped) than with dynamic linking (160k, 148k).
The default linking method is quite a significant decision because, if I understand correctly, it fixes the way Poly/ML-based applications are linked, i.e. it is not possible to choose the linking method (to Poly/ML) when building a Poly/ML-based application. Is that correct? (I am intrigued to know why.)
Phil
On 11/04/2013 22:17, Phil Clayton wrote:
I am also wondering whether libtool helps. I know very little about it though.
If, in future, people are building Poly/ML-based applications, I think it would be preferable for the default to be dynamic. For example, I have just tried building the (SML!) GTK+ Hello World demo: the executable is much larger with static linking (768k with -ggdb, 560k stripped) than with dynamic linking (160k, 148k).
The default linking method is quite a significant decision because, if I understand correctly, it fixes the way Poly/ML-based applications are linked, i.e. it is not possible to choose the linking method (to Poly/ML) when building a Poly/ML-based application. Is that correct? (I am intrigued to know why.)
I've only used libtool with Poly/ML and as part of autoconf/automake so I'm far from an expert. In the case of Poly/ML what happens is this. The default is --enable-shared --enable-static. This builds two versions of libpolyml; one for dynamic loading, the other static. When "poly" is linked it uses the dynamic library if it is present, resulting in an executable that depends on being able to load libpolyml at run-time.

If poly has been linked with the dynamic version of libpolyml, libtool builds a shell script and, within the build directory, it uses this as "poly". That enables "./poly" within the build directory to work, since this script sets the appropriate library path before running the actual executable from the ".libs" sub-directory. "make install" installs the actual executable and the libraries, so after installation running "/installdirectory/poly" requires either the library path to find libpolyml or for libpolyml to be in a standard place.
If the dynamic version of libpolyml is not present when poly is linked, probably because --disable-shared has been given, poly will be linked with the static version of libpolyml. This removes the need to find libpolyml at run-time. It doesn't affect any other libraries so the C++ libraries, for example, will still be linked dynamically. This is in contrast with "-static" which links all the libraries statically.
The situation is the same when linking any other object file produced with PolyML.export. If only the static version of libpolyml is present there's no need for LD_LIBRARY_PATH at run-time. I think it is possible to use static linking for specific libraries even if there is a dynamic version available but it's quite messy. See http://stackoverflow.com/questions/4156055/gcc-static-linking-only-some-libr... .
Based on this, I think there would be a case for setting the default for Poly/ML to be --disable-shared so that producing the dynamic version requires an explicit option.
David
On 12/04/2013 12:54, David Matthews wrote:
Based on this, I think there would be a case for setting the default for Poly/ML to be --disable-shared so that producing the dynamic version requires an explicit option.
I've now committed this change. The default is now not to build the shared library but that can be overridden with --enable-shared. Like any change in SVN it's always provisional so it can be reverted if necessary. I'd like feedback either for or against.
David
David,
Thanks for the explanation in your previous message. I was wrong to think that the linking method used to build Poly/ML fixes the way Poly/ML-based applications are linked. I must have been getting the gcc flags wrong when I was trying this a few years ago. Most likely, it was just the absence/presence of the dynamic libraries due to Poly/ML configure flags that actually caused what I saw.
On 15/04/13 12:17, David Matthews wrote:
I've now committed this change. The default is now not to build the shared library but that can be overridden with --enable-shared. Like any change in SVN it's always provisional so it can be reverted if necessary. I'd like feedback either for or against.
The main downside that I can see with this approach is that the SO files are not built or installed. Therefore, with a standard installation, it would not be possible for a Poly/ML-based application to choose whether to link dynamically or statically to the Poly/ML libs.
Now that I have had a chance to play around, dropping the SO files seems a little drastic just to get polyc to link statically and so avoid the LD_LIBRARY_PATH issue. I found that replacing (both occurrences of)
-L${LIBDIR} -lpolymain -lpolyml
in polyc with
${LIBDIR}/libpolymain.a ${LIBDIR}/libpolyml.a
effected static linking even when Poly/ML was built with --enable-shared. In fact only libpolyml needs changing, i.e.
-L${LIBDIR} -lpolymain ${LIBDIR}/libpolyml.a
suffices, because libpolymain has only a static library.
Phil
On 18/04/2013 00:46, Phil Clayton wrote:
The main downside that I can see with this approach is that the SO files are not built or installed. Therefore, with a standard installation, it would not be possible for a Poly/ML-based application to choose whether to link dynamically or statically to the Poly/ML libs.
This is really quite difficult and I can see arguments for and against. It's all about trying to make it easy for users to get up and running with Poly/ML and to be able to do what they would like to do without much hassle. The difficulty is in trying to understand how it might be used in various contexts.
As far as I can see the only real losers as a result of the change to static library by default would be those who build the system from source and then want to compile and link several executables produced with PolyML.export.
I expect that anyone packaging Poly/ML for a distro where the binary and libraries are installed to, say /usr/bin and /usr/lib, would know enough to be able to specify --enable-shared. They would probably be overriding --prefix anyway. That takes care of users who install through their package manager.
Anyone installing Poly/ML from source and only building "poly" won't really be any worse off because although the "poly" binary will be bigger there won't be the extra space for the shared library. It will also avoid the need for library paths if it is installed to a non-standard place.
David