On Mon, 15 Feb 2016, David Matthews wrote:
> On 14/02/2016 21:13, Matthew Fernandez wrote:
>> On Sunday, 14 February 2016, Makarius <makarius at sketis.net> wrote:
>>> On Sun, 14 Feb 2016, Matthew Fernandez wrote:
>>>> I received several off-list replies to this, including a pointer to previous experiments by Data61 colleagues.
>>> How is this related to the experiment with heap hierarchies? I.e. a stack of heaps according to the session parent structure.
>> Makarius: are you referring to the Poly/ML saved state mechanism or something else?
Yes, but Isabelle puts a certain interpretation on top of it. The terms "session" and "heap" are from that vocabulary.
> As far as I'm aware, Isabelle just uses the saved state facilities of Poly/ML. That provides the ability to save states that are the direct descendant of the executable or the descendant of another saved state.
Isabelle's use of that mechanism has fallen a bit behind in recent years. Most notably, heaps are not hierarchic: after loading a saved state and adding to it, an independent saved state is produced with all content taken together. This can add up to rather large heaps, especially for structured developments like the Isabelle examples + AFP taken together.
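To make that concrete, a stack of heaps directly on top of the Poly/ML saved state operations could look roughly like this (using PolyML.SaveState.saveState / saveChild / loadState / showHierarchy; the file names and the workflow are merely illustrative, not what Isabelle does today):

  (* Level 0: a base heap, saved as a direct descendant of the poly executable. *)
  PolyML.SaveState.saveState "HOL.state";

  (* Later, in another process: load the parent and save only the delta as a
     child at hierarchy level 1, instead of writing an independent heap with
     all content taken together. *)
  PolyML.SaveState.loadState "HOL.state";
  (* ... evaluate the material of the child session here ... *)
  PolyML.SaveState.saveChild ("HOL-Library.state", 1);

  (* A fresh process that loads the child gets its parent loaded as well and
     can inspect the resulting stack of heaps. *)
  PolyML.SaveState.loadState "HOL-Library.state";
  val heaps = PolyML.SaveState.showHierarchy ();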
People at former NICTA (now Data61) have recently started to reduce heap file sizes. So my questions ultimately go in this direction: What are typical heap sizes seen in these big L4.verified applications? What is the "before" and "after" situation?
> From my own point of view I'm concerned that compacting the heap files may add to the cost of loading.
I also remember that from ancient SML/NJ times, when it was still used in practice and Poly/ML was not freely available. SML/NJ heaps were much bigger than in Poly/ML, and people started to think about implicit compression. Luckily such a complication was avoided, because Poly/ML became freely available.
> In my experimental IDE, while a file is being edited a lot of saved states are loaded and reloaded repeatedly to provide the context. I'd like to see what effect adding compaction has on it, but it may be necessary to provide separate functions to save states with and without compaction. Loading is easier, because the loader can look at the format. Note that there's no need to provide backwards compatibility, because a saved state can only ever be reloaded into the executable that created it.
There is definitely a time vs. space trade-off.
My general impression is that loading and saving of states consumes significant time for highly structured applications like AFP. I have started thinking about not saving states in the first place: just plough through all Isabelle sessions of AFP (excluding the "slow" group) in one big Poly/ML process.
Compression can take a lot of additional time, but there is also the paradoxical situation of SSDs (solid-state disks, which are not disks at all): heaps that require many GBs on conventional hard-disks may still fit into small SSDs and thus save/load much faster.
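For illustration only, the "save with and without compaction, let the loader look at the format" idea could be approximated at the user level roughly as follows; this just runs an external gzip over the saved state file and sniffs the gzip magic bytes before loading, which is of course not the mechanism inside the Poly/ML runtime:

  (* Save a state and optionally compress the resulting file with gzip. *)
  fun save_state_maybe_compressed (path, compress) =
    (PolyML.SaveState.saveState path;
     if compress
     then ignore (OS.Process.system ("gzip -f " ^ path))  (* leaves path ^ ".gz" *)
     else ());

  (* Check for the gzip magic bytes 0x1F 0x8B at the start of a file. *)
  fun is_gzip path =
    let
      val ins = BinIO.openIn path
      val bytes = BinIO.inputN (ins, 2)
      val _ = BinIO.closeIn ins
    in
      Word8Vector.length bytes = 2 andalso
      Word8Vector.sub (bytes, 0) = 0wx1F andalso
      Word8Vector.sub (bytes, 1) = 0wx8B
    end;

  (* Load either kind of file: decompress to a temporary file if needed. *)
  fun load_state_auto path =
    if is_gzip path
    then
      let
        val tmp = OS.FileSys.tmpName ()
        val _ = OS.Process.system ("gunzip -c " ^ path ^ " > " ^ tmp)
      in
        PolyML.SaveState.loadState tmp before OS.FileSys.remove tmp
      end
    else PolyML.SaveState.loadState path;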
Makarius