On 20/10/2016 12:14, Rob Arthan wrote:
During interactive development of a proof script it is usual to try things out repeatedly, with frequent evaluations of minor variants of an attempted proof step until you find one that works. It is also perfectly possible for a user to develop a ProofPower database the way people develop spreadsheets and SQL databases, incrementally adding things over a prolonged period of time. It is not clear to me how bad the impact would be in these use cases. Is there a way to find out how much memory is occupied by compiled code? If so, then I could try some experiments to quantify the impact.
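One possible starting point for such an experiment might be PolyML.objSize, which reports the transitive heap size of a value in words. Whether it accounts for the code objects a closure references is an assumption here, not something confirmed in this thread:

```sml
(* Hedged sketch: estimate the heap footprint of a compiled function.
   PolyML.objSize returns the transitive size of a value in words;
   whether that total includes the code itself is an assumption. *)
fun codeFootprintWords f = PolyML.objSize f;

(* E.g. compare successive redefinitions of the same function
   to see how much each re-evaluation adds. *)
val f1 = fn x : int => x + 1;
val words1 = codeFootprintWords f1;
```

Repeating the measurement across many redefinitions in a session would give a rough upper bound on the leak, under that assumption.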
On 19 Oct 2016, at 12:34, David Matthews <David.Matthews at prolingua.co.uk> wrote: The new mechanism for handling pieces of code deals with most of the issues apart from the question of garbage collection. I was really trying to get a feeling for how serious this was. From the comments on the mailing list it looks as though this is something that is wanted so I'll try and see if I can find a solution.
And thank you for thinking of us!
If it makes the solution easier, I don't think there is any need to include garbage collection for code in the on-the-fly garbage collection. I think it would be fine to implement it either as an ML function on its own or built into PolyML.SaveState.saveChild (and friends) and PolyML.export.
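To illustrate the first option, here is a sketch of how an explicit code collection could slot in just before a save, assuming it were exposed as a plain ML function. collectDeadCode is a hypothetical name invented for this sketch; PolyML.fullGC and PolyML.SaveState.saveChild are the existing entry points:

```sml
(* Sketch only: collectDeadCode does not exist; it stands in for a
   hypothetical explicit sweep of unreachable compiled code. *)
fun saveWithCodeGC (path, level) =
    ( PolyML.fullGC ()          (* reclaim ordinary heap data first *)
 (* ; collectDeadCode ()           hypothetical code sweep would go here *)
    ; PolyML.SaveState.saveChild (path, level)
    );
```

The point of doing it at save time is that, as noted below, code already written to disc poses no collection problem.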
Just one point. There is only a leak for redefinitions of functions while they are in local memory. If you load a saved state, redefine a function in it and then save a new state then the old code will be replaced with the new code, just as before.
The only difficulty is with garbage collecting code that might possibly still be executing. Once it has been written to disc there's no problem.
David