David Matthews wrote:
David Matthews wrote:
Robert Roessler wrote:
The execution issue happens early in the "Custom Build Step" use of PolyImport.exe: during startup, the polymain(...) function obtains the size of installed physical memory, and then defaults to using half of it.
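(Roughly, the startup behaviour being described is something like the following; this is only a sketch of the idea, not the actual polymain code, and the platform calls shown are the usual ones rather than necessarily the ones Poly/ML uses.)

  // Sketch: obtain the installed physical memory and default the heap to half of it.
  #include <cstddef>
  #ifdef _WIN32
  #include <windows.h>
  #else
  #include <unistd.h>
  #endif

  static size_t physicalMemory()
  {
  #ifdef _WIN32
      MEMORYSTATUSEX ms;
      ms.dwLength = sizeof(ms);
      GlobalMemoryStatusEx(&ms);
      return (size_t)ms.ullTotalPhys;
  #else
      long pages = sysconf(_SC_PHYS_PAGES);
      long pageSize = sysconf(_SC_PAGE_SIZE);
      return (size_t)pages * (size_t)pageSize;
  #endif
  }

  size_t defaultHeapSize = physicalMemory() / 2;  // default: half of installed RAM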
This issue comes up on both Unix and Windows. Makarius has just sent me an email about it occurring in a virtual machine under Cygwin. Generally, taking half of physical memory will work, although there are circumstances in which it may not: in particular, on a shared machine, or if there are other processes running Poly/ML, there may be insufficient swap space. One possibility might be to have the code try allocating a large heap and progressively reduce the size if that fails.
Following up on my last message, I've now updated the code to try allocating a smaller heap if the requested size fails. Currently this works whether the heap size is given by the -H option or by the default of half of physical memory. I've tested it on Windows but not on Unix.
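(For anyone reading the archive, the fallback described above amounts to something like this; a sketch only, assuming the heap is grabbed with one large allocation - names such as allocateHeap and the use of malloc as the reservation call are illustrative, not the real identifiers.)

  // Sketch of the fallback: try the requested heap size, and if the
  // allocation fails, halve the request and retry down to some floor.
  #include <cstddef>
  #include <cstdlib>

  void *allocateHeap(size_t &size, size_t minimum)
  {
      while (size >= minimum)
      {
          void *space = malloc(size);   // stand-in for the real reservation call
          if (space != NULL)
              return space;             // success: 'size' now holds what we actually got
          size /= 2;                    // failed: ask for half as much and retry
      }
      return NULL;                      // even the minimum could not be satisfied
  }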
This does work now on my 4GB Vista x64 box... in particular, in the case encountered while Poly/ML is bootstrapping itself.
Once the [Release non-interpreted] executable is running, it seems to have two "private" memory pools of about 210 MB and 60 MB - with a working set (WS) of 10 MB.
With regard to your comment above on how you got here, I still question the strategy of grabbing such a huge chunk of memory, i.e., an amount based on a large fraction of the [currently possibly large] physical memory size - especially if all of it is committed at the time of allocation.
Whether from the perspective of being a well-behaved task in a shared execution environment, or even in the "it's all about Poly/ML" case, this could lead to trouble.
The former case (based solely on the size of physical memory) appears to ignore the current memory load, while even in the latter case things may not be optimal if you want to run a number of Poly/ML instances.
Perhaps the advantage of Poly's current allocation method over capping and/or gradual commitment is simplicity... both of those alternatives, while allowing for better resource sharing, potentially require more advance planning - as in "can I get by with the default [smaller] heaps?" or "what happens if a Poly instance uses delayed commit, but finds itself unable to get the space when it needs it?" A sketch of what I mean by gradual commitment follows below.
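(By "gradual commitment" I mean the usual reserve-then-commit pattern; the Windows sketch below is just to make the idea concrete, with reserveHeap and growHeap as hypothetical names rather than anything in Poly/ML.)

  // Sketch of delayed commit on Windows: reserve a large address range up
  // front, but only commit pages as the heap actually grows.
  #include <windows.h>

  // Reserve address space without charging it against the commit limit.
  void *reserveHeap(SIZE_T reserveBytes)
  {
      return VirtualAlloc(NULL, reserveBytes, MEM_RESERVE, PAGE_NOACCESS);
  }

  // Commit a further chunk inside the reservation; this is the step that can
  // fail later if swap space has run out - which is the planning question above.
  BOOL growHeap(void *base, SIZE_T committedSoFar, SIZE_T extraBytes)
  {
      return VirtualAlloc((char *)base + committedSoFar, extraBytes,
                          MEM_COMMIT, PAGE_READWRITE) != NULL;
  }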
Robert