Thanks to some financial support from the Verisoft project and TUM, I've been working through the summer on an updated version of the Poly/ML run-time system and a new code generator for the AMD 64-bit architecture.
The major new features are:

 - Support for stand-alone binaries
 - Support for additional platforms: AMD64, Intel Macs, Cygwin
 - No artificial limits on the size of heaps or the saved image
 - Uses standard GNU tools for building
 - Fixed-address mmap and trap handling removed
This comes at one significant cost: the persistent store system that has been a feature of Poly/ML almost since the beginning has been removed. The reason is that, to be effective, the persistent store needed to reload a database at the same address every time, but that could conflict with other libraries and result in segmentation faults. Rather than constantly adjust the location of the database with each new kernel release, I decided to make the leap and change to making stand-alone binaries. This achieves much the same as the persistent store but without the complications.
This version is currently only available via CVS from SourceForge and is a beta release. Full details and example code showing how to export a function and build an application can be found at http://www.polyml.org/Version5Beta.html.
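In outline, exporting a function and building an executable looks roughly like the following sketch; the exact library names and link flags vary by platform, so please treat the web page above as definitive.

  (* hello.ML *)
  fun main () = print "Hello from a stand-alone Poly/ML program\n";

  (* Write out an object file (hello.o) containing the compiled code and
     the heap reachable from main; the argument "hello" is the file name stem. *)
  val () = PolyML.export ("hello", main);

Feeding that to poly produces the object file, which is then linked against the run-time with something like cc -o hello hello.o -lpolymain -lpolyml.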
Makarius has been testing this with Isabelle, and after some bug fixes it all seems to work. I would like to get some more feedback before making a final release, but I'm aiming to release in about a month.
David.
On Monday 23 Oct 2006 11:49 am, David Matthews wrote:
This comes at one significant cost: the persistent store system that has been a feature of Poly/ML almost since the beginning has been removed.
One advantage of the persistent object store is that it is organised as a hierarchy: if many people want to share a body of ML code and data, the common material can be put in one database and the individuals can work in descendants of that database. At least one group of ProofPower users have large applications that are only feasible when organised in this way. Is anything like this possible under the new set-up?
Regards,
Rob.
Rob Arthan wrote:
On Monday 23 Oct 2006 11:49 am, David Matthews wrote:
This comes at one significant cost: the persistent store system that has been a feature of Poly/ML almost since the beginning has been removed.
One advantage of the persistent object store is that it is organised as a hierarchy: if many people want to share a body of ML code and data, the common material can be put in one database and the individuals can work in descendants of that database. At least one group of ProofPower users have large applications that are only feasible when organised in this way. Is anything like this possible under the new set-up?
Rob, I did wonder about this, but I felt that on balance the advantages outweighed the drawbacks. The new version does mean that every image (the equivalent of a database in the old version) has to contain everything that is in the hierarchy, but the only effect is that this requires more disc space than it used to. Given that the maximum size of all the databases in a chain from child to root parent was 400 Mbytes, and hard discs routinely come in the hundreds of gigabytes, I find it hard to believe that this duplication would be a problem. What exactly do you mean by "only feasible"? What is the constraint?
Regards, David.
On Friday 10 Nov 2006 12:17 pm, David Matthews wrote:
What exactly do you mean by "only feasible"? What is the constraint?
The group in question have a hierarchy which reflects the structure of the largish Ada programs that they are analysing using ProofPower and other tools. In a typical case there are about 280 smallish databases totalling about 630MB that share two common ancestors totalling about 90MB. Without the hierarchy, each of the 280 images would have to duplicate the 90MB of common material, so that 720MB total would expand to well over 25,000MB.
The problems we fear come not so much from the cost of discs as from the overheads of handling the volume of data, e.g. the cost of transferring data over the network when poly is started (with the hierarchical arrangement we would expect the common data generally to be cached by the local OS). Archiving the extra volume of data is also a concern.
Would it be technically feasible to have an option to export a shared object file rather than a binary? That should give the same space-sharing effect as we have with the persistent object store and would surely have many other benefits too.
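Just to illustrate what I have in mind, the common ancestor material would be exported once and turned into a shared library that each of the per-program images links against. The commands and options below are purely hypothetical (and I appreciate that producing position-independent code may be the sticking point):

  # hypothetical workflow - none of these options exist in the beta
  poly < export_common.ML                # writes common.o via PolyML.export
  cc -shared -o libcommon.so common.o    # would need position-independent code
  cc -o program1 program1.o -L. -lcommon -lpolymain -lpolyml

That way the 90MB of common material would be stored, transferred and cached once rather than once per database.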
Regards,
Rob.