Robert Roessler wrote:
The execution issue happens early in the "Custom Build Step" use of PolyImport.exe: during startup, the polymain(...) function obtains the size of installed physical memory and then defaults to using half of it as the heap size.
IMHO, this logic could use a fresh look, given modern system memory levels (4GB and up is becoming common, especially in today's dev environments). In any case, the first actual call to VirtualAlloc requests something like 1.5 GB and fails. Since I was willing to believe that finding that much contiguous virtual address space could be hard - or maybe the demand to commit all of it too was the actual cause - I just forcibly cut the memory usage back to 256MB, and the rest of the build proceeded flawlessly, producing a functioning Poly/ML system.
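For anyone less familiar with the Win32 side, the distinction Robert raises is between merely reserving address space and also committing backing store for it. A minimal sketch (assuming Win32; this is not the actual PolyImport code) of why the commit can fail even when the reserve alone would succeed:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        SIZE_T size = (SIZE_T)1536 * 1024 * 1024; // ~1.5 GB, as in the failing call

        // Reserving only needs a contiguous range of virtual address space...
        void *reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
        printf("reserve only:   %s\n", reserved ? "ok" : "failed");
        if (reserved) VirtualFree(reserved, 0, MEM_RELEASE);

        // ...whereas reserve+commit also charges the whole size against the
        // commit limit (RAM plus pagefile), so it can fail even when the
        // address space itself is available.
        void *committed = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        printf("reserve+commit: %s\n", committed ? "ok" : "failed");
        if (committed) VirtualFree(committed, 0, MEM_RELEASE);
        return 0;
    }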
This issue comes up on both Unix and Windows; Makarius has just sent me an email about it occurring in a virtual machine under Cygwin.
There's a question about what value to use for the initial heap size, and it's not clear what the answer is. I'm raising it here in case people have some ideas. The trade-off is that the larger the heap, the less frequently the garbage collector has to run, so in terms of performance bigger is better. This holds at least up to the point where the heap reaches the size of physical memory. Once it exceeds physical memory and has to use the page file/swap space, the cost of garbage collection, at least in terms of real time if not CPU time, can rise dramatically.
The current code gets the physical memory size of the machine, up to a maximum of 4G in 32-bit mode, and sets the default heap size to half of that. If the -H option is given, that value is used instead.
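For concreteness, a sketch of that calculation, assuming GlobalMemoryStatusEx on Windows and the widespread (but non-POSIX) _SC_PHYS_PAGES on Unix; the function names and the hOption parameter are illustrative, not the actual runtime code:

    #include <cstdint>
    #include <cstddef>
    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <unistd.h>
    #endif

    static uint64_t physicalMemory()
    {
    #ifdef _WIN32
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatusEx(&ms);
        return ms.ullTotalPhys;
    #else
        return (uint64_t)sysconf(_SC_PHYS_PAGES) * (uint64_t)sysconf(_SC_PAGE_SIZE);
    #endif
    }

    // hOption: the -H value in bytes, or 0 if not given (hypothetical parameter).
    size_t defaultHeapSize(uint64_t hOption)
    {
        if (hOption != 0) return (size_t)hOption;   // -H overrides the default
        uint64_t phys = physicalMemory();
        const uint64_t fourGB = (uint64_t)4 * 1024 * 1024 * 1024;
        if (sizeof(void*) == 4 && phys > fourGB)
            phys = fourGB;                          // cap at 4G in 32-bit mode
        return (size_t)(phys / 2);                  // default: half of physical memory
    }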
Generally this will work, although there are circumstances in which it may not. In particular, on a shared machine, or if there are other processes running Poly/ML, there may be insufficient swap space. One possibility might be to have the code try allocating a large heap and progressively reduce the size if that fails.
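A minimal sketch of that fallback idea, halving on each failure down to some floor (Win32-only here; the Unix side would do the same with mmap; the function name and the halving policy are just illustrative):

    #include <windows.h>
    #include <cstddef>

    // Try to reserve a heap of 'requested' bytes, halving the size on each
    // failure until it drops below 'minimum' (which must be non-zero).
    // On success, '*actual' holds the size actually reserved.
    void *reserveHeap(size_t requested, size_t minimum, size_t *actual)
    {
        for (size_t size = requested; size >= minimum && size > 0; size /= 2)
        {
            // Reserve only; commit pages later as the heap actually grows.
            void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
            if (base != NULL)
            {
                *actual = size;
                return base;
            }
        }
        *actual = 0;
        return NULL; // even the minimum could not be reserved
    }

Reserving without committing would also side-step the pagefile demand Robert mentions, since only committed pages count against the commit limit; the runtime could then commit incrementally as the heap fills.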
Regards, David