I've finally had an opportunity to spend some time analyzing the problem you had loading that large SVG.
Your file did eventually load, although I think it may have taken as much as half an hour to do so on my machine.
In Inkscape, sibling XML nodes are currently kept in singly-linked lists, which are not efficient data structures when dealing with large numbers of siblings (in this case, more than 87,000).
It is possible to use faster data structures, but they would come at the cost of using extra memory. As it was, once the document loaded my machine ran out of memory when the renderer kicked in.
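To illustrate the cost, here is a minimal sketch of the situation (not the actual SPRepr code; the struct and field names are purely illustrative): with singly-linked siblings, every append has to walk to the end of the list, so building a document with n siblings costs O(n^2) overall.

    // Illustrative sketch only -- not Inkscape's actual node structures.
    struct Node {
        Node *first_child = nullptr;
        Node *next = nullptr;   // next sibling (singly linked)

        // Walks the whole sibling list on every append: O(n) per child,
        // so appending n children one by one is O(n^2) in total.
        void appendChild(Node *child) {
            child->next = nullptr;
            if (!first_child) {
                first_child = child;
                return;
            }
            Node *last = first_child;
            while (last->next)
                last = last->next;
            last->next = child;
        }
    };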
So, I'm not really sure what the best course of action would be. I do think we have a problem dealing with large documents in general.
I am CCing the development list for their suggestions.
-mental
On Wed, 26 Jan 2005 18:01:41 -0500, MenTaLguY <mental@...32...> wrote:
> I've finally had an opportunity to spend some time analyzing the problem you had loading that large SVG.
> Your file did eventually load, although I think it may have taken as much as half an hour to do so on my machine.
Wow, it was under 2 minutes on mine :) (I'm speaking about 1236745up.svg, is this what you tried?)
> So, I'm not really sure what the best course of action would be. I do think we have a problem dealing with large documents in general.
Just FYI, Batik ran out of memory on that document, and Adobe never loaded it either (maybe it would if I waited long enough, but I doubt that). So Inkscape was the only software capable of loading and even editing it.
On Wed, 2005-01-26 at 18:22, bulia byak wrote:
> On Wed, 26 Jan 2005 18:01:41 -0500, MenTaLguY <mental@...32...> wrote:
> > I've finally had an opportunity to spend some time analyzing the problem you had loading that large SVG.
> > Your file did eventually load, although I think it may have taken as much as half an hour to do so on my machine.
> Wow, it was under 2 minutes on mine :) (I'm speaking about 1236745up.svg, is this what you tried?)
Yep. I was also starting to swap a bit towards the end, which may have contributed.
> > So, I'm not really sure what the best course of action would be. I do think we have a problem dealing with large documents in general.
> Just FYI, Batik ran out of memory on that document, and Adobe never loaded it either (maybe it would if I waited long enough, but I doubt that). So Inkscape was the only software capable of loading and even editing it.
Yes, Batik failed for me as well. I didn't have Adobe handy.
-mental
I've committed the O(1) append fix. It turns out I'd implemented appendChild() using lastChild(), so it was only lastChild() that I really needed to modify.
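The message doesn't spell out the exact change, but a sketch of one way to make lastChild() O(1) is to cache a pointer to the tail of the sibling list and keep it current on append (field and method names here are illustrative, not the real API):

    // Sketch of a possible O(1) lastChild()/appendChild(), assuming a
    // cached tail pointer -- names are illustrative only.
    struct Node {
        Node *first_child = nullptr;
        Node *last_child = nullptr;   // cached tail of the sibling list
        Node *next = nullptr;

        Node *lastChild() { return last_child; }   // O(1), no list walk

        void appendChild(Node *child) {
            child->next = nullptr;
            if (Node *last = lastChild())
                last->next = child;
            else
                first_child = child;
            last_child = child;   // keep the cache valid
        }
    };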
Loading does go noticeably faster, though it feels like there are still (likely non-Repr-related) bottlenecks we can also address.
[ I must say it's rather nice to finally be able to make changes to the structure of what used to be SPRepr without requiring half the tree to be rebuilt. ]
Please test; I have tested myself, but I've not been so good at finding broken things lately.
-mental
I think I just succeeded in replicating Gavin's hang. It's our old friend the infinite (or at least very long...) loop calling GC_collect_or_expand() repeatedly.
I think perhaps I am going to try an alternate implementation of GC::Anchored.
With a massive document like Gavin's the present implementation is essentially creating several hundred thousand individual GC roots (one gc_malloc_uncollectable()ed shim per object), which might not be such a good thing for the collector.
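For anyone not familiar with the pattern, it looks roughly like this (a sketch only, not the actual GC::Anchored code; everything except the libgc calls is made up for illustration):

    #include <gc/gc.h>

    // Rough sketch of the pattern described above, not the real GC::Anchored:
    // each anchored object gets its own small uncollectable block, and every
    // such block is effectively an independent root for the collector.
    struct AnchorShim {
        void *object;   // points into the (collectable) node tree
    };

    AnchorShim *anchor(void *object) {
        AnchorShim *shim =
            static_cast<AnchorShim *>(GC_MALLOC_UNCOLLECTABLE(sizeof(AnchorShim)));
        shim->object = object;
        return shim;
    }

    void release(AnchorShim *shim) {
        GC_FREE(shim);   // uncollectable blocks must be freed explicitly
    }

    // With a document like Gavin's, that means several hundred thousand of
    // these shims, all pointing into one large tree.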
I am going to CC the libgc mailing list in case Hans wants to reassure me that allocating so many small gc_malloc_uncollectable() objects that all point to nodes in the same large tree is not a problem.
[ note that this is now with the default free space divisor, so we should be using the normal code path ... however, I've not upgraded libgc as 6.3 is still the most recent version available in Debian ]
-mental