I have a few questions about performance that I'm curious about (not complaints, just wanting to understand).
1) Why does inkscape respond much slower when moving grouped/multiple items via arrow keys vs mouse?
1a) Why does it take so long to regain control of objects when they've been moved? (once again, those grouped/multiple objects)
2) Why does it sometimes take up to 2 mins to start responding again after moving grouped/multiple items (via KB)... longest I've seen by mouse is 20 seconds? (sometimes processor is pegged, other times only using 3%, but still not responsive)
3) Why does inkscape explode in RAM so much? Basically if I have an 800k document, why would inkscape be using 133megs of RAM (not always, but if working with it for like 30mins), vs a smaller (500k) document that only uses like 40megs.
Thanks
-Josh
> 1) Why does inkscape respond much slower when moving grouped/multiple items via arrow keys vs mouse?
Because while you're dragging with the mouse, it does not actually move anything; it just pretends. The change to the document happens only when you release the mouse, and then the delay is the same as when moving by an arrow key. It cannot be different, as it's the same code at work.
> 1a) Why does it take so long to regain control of objects when they've been moved? (once again, those grouped/multiple objects)
I don't know because I haven't looked. I guess updates to the document's SVG code are expensive in general. If you want to help, try to debug it to find out which function exactly takes most of that time.
> 2) Why does it sometimes take up to 2 mins to start responding again after moving grouped/multiple items (via KB)... longest I've seen by mouse is 20 seconds? (sometimes processor is pegged, other times only using 3%, but still not responsive)
I never saw such response times. Please submit a bug with such a file attached. Especially if mouse/keyboard delays are demonstrably different.
> 3) Why does inkscape explode in RAM so much? Basically if I have an 800k document, why would inkscape be using 133megs of RAM (not always, but if working with it for like 30mins), vs a smaller (500k) document that only uses like 40megs.
Because the new garbage collector does not free the memory unless asked or unless it runs into a limit. So, yes, it tends to take a huge pile of memory, but it can free most of it if e.g. you launch another application. This is an area where more debugging is needed, though.
>> 1) Why does inkscape respond much slower when moving grouped/multiple items via arrow keys vs mouse?
> Because while you're dragging with the mouse, it does not actually move anything; it just pretends. The change to the document happens only when you release the mouse, and then the delay is the same as when moving by an arrow key. It cannot be different, as it's the same code at work.
Well, it is different... but your explanation tells me why. It's only updating the document once with mouse, as opposed to 25 times when I use shift+arrow a bit. =)
>> 1a) Why does it take so long to regain control of objects when they've been moved? (once again, those grouped/multiple objects)
> I don't know because I haven't looked. I guess updates to the document's SVG code are expensive in general. If you want to help, try to debug it to find out which function exactly takes most of that time.
I'll see what I can figure out this weekend (but my guess is it's just related to updating the document's code).
>> 2) Why does it sometimes take up to 2 mins to start responding again after moving grouped/multiple items (via KB)... longest I've seen by mouse is 20 seconds? (sometimes processor is pegged, other times only using 3%, but still not responsive)
> I never saw such response times. Please submit a bug with such a file attached. Especially if mouse/keyboard delays are demonstrably different.
Well, my issue is irrelevant, as it appears the delayed response is probably just due to the number of updates, as above.
Thanks for answering!
>> Because while you're dragging with the mouse, it does not actually move anything; it just pretends. The change to the document happens only when you release the mouse, and then the delay is the same as when moving by an arrow key. It cannot be different, as it's the same code at work.
> Well, it is different... but your explanation tells me why. It's only updating the document once with mouse, as opposed to 25 times when I use shift+arrow a bit. =)
Ah, of course. When you press arrow once, Inkscape has no way of knowing you're gonna press it 10 more times after that, so it does the entire document updating routine for all objects on each keypress. Use Shift+arrows to move objects 10 times as far in one go.
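For illustration, here's a minimal sketch (in Python, not Inkscape's actual C/C++ event handling; all names are hypothetical) of how repeated arrow-key nudges could be coalesced into a single document update once the keyboard goes quiet, instead of one full update per keypress:

```python
import time

class NudgeCoalescer:
    """Accumulate arrow-key nudges and commit them as one document
    update once input goes quiet -- a hypothetical sketch of the idea,
    not how Inkscape actually works."""

    def __init__(self, commit, quiet_period=0.2):
        self.commit = commit            # callback doing the expensive document update
        self.quiet_period = quiet_period
        self.pending = (0, 0)           # accumulated (dx, dy)
        self.last_press = None

    def key_nudge(self, dx, dy):
        """Cheap: just accumulate the requested movement."""
        px, py = self.pending
        self.pending = (px + dx, py + dy)
        self.last_press = time.monotonic()

    def idle(self):
        """Called from the main loop; flush once input has gone quiet."""
        if self.last_press is None:
            return
        if time.monotonic() - self.last_press >= self.quiet_period:
            self.commit(*self.pending)
            self.pending = (0, 0)
            self.last_press = None
```

With this, 25 quick presses of Shift+arrow would trigger one `commit` of the summed offset rather than 25 document updates; the trade-off is that the canvas lags behind the keyboard by `quiet_period`.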
Still, document updating should be optimized if possible. I remember Mental saying something about hordes of listeners attached to repr nodes and the time needed to process them all. Mental, can you comment - is there any chance this can be sped up noticeably?
On Fri, 2004-10-08 at 19:51, bulia byak wrote:
> Still, document updating should be optimized if possible. I remember Mental saying something about hordes of listeners attached to repr nodes and the time needed to process them all. Mental, can you comment - is there any chance this can be sped up noticeably?
Probably. I was referring specifically to clones in that case -- when you have a lot of clones, each clone has its own listener and the same work (largely parsing) is redone for each.
AST should help remedy that, as it removes the need to repeatedly parse the same string.
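To illustrate the point about repeated parsing (a toy sketch only, not the proposed AST design), a cache keyed by the attribute string lets N clone listeners share a single parse of the same path data:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parse_path_d(d: str):
    """Toy parser for an SVG 'd' attribute: tokenize once per distinct
    string. Illustrative sketch -- not Inkscape's parser or the AST
    work discussed here."""
    return tuple(d.split())

def notify_clones(n_clones: int, d: str):
    """Simulate n_clones listeners all reacting to the same repr
    change; only the first call actually parses."""
    return [parse_path_d(d) for _ in range(n_clones)]
```

The point of the AST idea is the same effect achieved structurally: once the string is represented as shared nodes, there is simply nothing left to re-parse per listener.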
I think there are bigger problems, though.
As far as general performance, we need to sit down with a profiling build and see where the CPU time is actually going. These things tend to be very counterintuitive.
[ By the way, I've experienced ~2 minute update times myself, even on simple files, at high enough zoom levels -- update time appears to grow superlinearly with zoom level. ]
-mental
> Probably. I was referring specifically to clones in that case -- when you have a lot of clones, each clone has its own listener and the same work (largely parsing) is redone for each.
Even without clones, updating a complex document may take seconds. It just looks like it does something non-trivial for all reprs in the tree on each change, instead of just changing one repr.
> [ By the way, I've experienced ~2 minute update times myself, even on simple files, at high enough zoom levels -- update time appears to grow superlinearly with zoom level. ]
That's entirely orthogonal to document update slowness. It's the slowness of the renderer at high zooms - a known issue ever since we switched to the livarot renderer.
Some time in the distant past, maybe earlier this year, the possibility of having an on/off switch for antialiasing was brought up. Did anything come of it? The reason I ask is that occasionally there are comments on the net about PDF quality being affected by antialiasing, and I am particularly interested in PDF production at the moment. It would be nice to have this as a preference.
vellum
> Some time in the distant past, maybe earlier this year, the possibility of having an on/off switch for antialiasing was brought up. Did anything come of it?
No, that's renderer code, which hasn't been touched for a while.
> The reason I ask is that occasionally there are comments on the net about PDF quality being affected by antialiasing, and I am particularly interested in PDF production at the moment. It would be nice to have this as a preference.
This may only be an issue for embedded bitmaps. Regular paths are just paths; the way Inkscape displays them cannot affect the way they're exported into PS or PDF.
On Fri, 2004-10-08 at 18:31, bulia byak wrote:
>> 3) Why does inkscape explode in RAM so much? Basically if I have an 800k document, why would inkscape be using 133megs of RAM (not always, but if working with it for like 30mins), vs a smaller (500k) document that only uses like 40megs.
> Because the new garbage collector does not free the memory unless asked or unless it runs into a limit. So, yes, it tends to take a huge pile of memory, but it can free most of it if e.g. you launch another application. This is an area where more debugging is needed, though.
Well, no. The garbage collector isn't sensitive to external memory pressure (e.g. from starting another application), and the limit in question is just whenever the number of bytes collected exceeds some percentage of the total (garbage-collected) heap size.
The truth is that we're just fairly memory-hungry. The infinite undo list always requires some memory, of course (particularly if you're working with complex paths -- every minor edit currently creates a new copy of the entire path string -- another reason for AST), and the particular document structure can affect the amount of memory required as well.
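As a rough illustration of why finer-grained deltas would help (a hypothetical sketch -- neither class exists in Inkscape), compare snapshotting the whole path string per edit with storing only the changed region:

```python
class CopyUndo:
    """Undo stack that snapshots the entire path string per edit --
    the current behaviour described above, simplified."""
    def __init__(self):
        self.stack = []
    def record(self, old_d: str, new_d: str):
        self.stack.append(old_d)        # full copy of the old string
    def bytes_used(self):
        return sum(len(s) for s in self.stack)

class DeltaUndo:
    """Undo stack storing only (offset, old_slice, new_len) per edit --
    one possible shape of the finer-grained deltas mentioned above."""
    def __init__(self):
        self.stack = []
    def record(self, old_d: str, new_d: str):
        # Trim the common prefix and suffix to find the changed region.
        i = 0
        while i < min(len(old_d), len(new_d)) and old_d[i] == new_d[i]:
            i += 1
        j = 0
        while (j < min(len(old_d), len(new_d)) - i
               and old_d[len(old_d) - 1 - j] == new_d[len(new_d) - 1 - j]):
            j += 1
        self.stack.append((i, old_d[i:len(old_d) - j], len(new_d) - i - j))
    def bytes_used(self):
        return sum(len(old) for _, old, _ in self.stack)
```

For 25 one-node tweaks to a 1000-character path, the copying stack holds ~25 KB while the delta stack holds a few dozen bytes, which is the whole argument in miniature.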
Also, if he's using 0.39, that did not have the collector, but it did have a number of memory leaks that I fixed in CVS when I was trying to get export fixed for you.
-mental
Why do we need infinite undo? It often seems to cause problems in other programs. CorelDraw allows you to set the level as a preference. You can set it at 10000 if you want but I set mine at 20 and never need to change it. Why not put it into preferences with a low default value of say 10 or so, and allow people to set higher if they want?
vellum.
Mental wrote:
>> The truth is that we're just fairly memory-hungry. The infinite undo list always requires some memory, of course (particularly if you're working with complex paths -- every minor edit currently creates a new copy of the entire path string -- another reason for AST), and the particular document structure can affect the amount of memory required as well.
> Why do we need infinite undo? It often seems to cause problems in other programs. CorelDraw allows you to set the level as a preference. You can set it at 10000 if you want but I set mine at 20 and never need to change it. Why not put it into preferences with a low default value of say 10 or so, and allow people to set higher if they want?
We used to have a limit. I talked us into removing it. It was SO incredibly annoying when you run into the limit and cannot undo any further. I also argued that a vector editor undo is much less memory-hungry than a bitmap editor, so a limit is less necessary.
Now a limit, especially a low limit, will affect everyone. That's bad because the "low memory because of undo list" problem only affects a minority (if it really affects anyone at all).
Here's what I think needs to be done instead of adding a limit:
- optimize, optimize, optimize
- add a "flush undo" command for those who really want to free some memory but don't want to restart the program
bulia byak wrote:
> Here's what I think needs to be done instead of adding a limit:
> - optimize, optimize, optimize
> - add a "flush undo" command for those who really want to free some memory but don't want to restart the program
You could simply flush the undo list to disk once in a while. It might even be possible to create a "track changes" feature out of something like that.
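Something along those lines could look like the following sketch (purely hypothetical -- no such feature exists in Inkscape): keep only the newest undo entries in RAM and spill older ones to a temporary file, reloading them on demand:

```python
import pickle
import tempfile

class SpillingUndoStack:
    """Keep the newest undo entries in RAM; pickle older ones to a
    temp file. Hypothetical sketch of the flush-to-disk idea above."""

    def __init__(self, max_in_memory=100):
        self.max_in_memory = max_in_memory
        self.hot = []                        # newest entries, in RAM
        self.spill = tempfile.TemporaryFile()  # binary scratch file
        self.offsets = []                    # file offset of each spilled entry

    def push(self, entry):
        self.hot.append(entry)
        if len(self.hot) > self.max_in_memory:
            oldest = self.hot.pop(0)
            self.spill.seek(0, 2)            # append at end of file
            self.offsets.append(self.spill.tell())
            pickle.dump(oldest, self.spill)

    def pop(self):
        if self.hot:
            return self.hot.pop()
        if self.offsets:
            self.spill.seek(self.offsets.pop())
            return pickle.load(self.spill)
        raise IndexError("undo stack empty")
```

A real version would need to serialize actual undo records and batch its writes, but it shows how RAM use can be capped without ever losing undo history.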
bulia said:
> We used to have a limit. I talked us into removing it. It was SO incredibly annoying when you run into the limit and cannot undo any further. I also argued that a vector editor undo is much less memory-hungry than a bitmap editor, so a limit is less necessary.
> Now a limit, especially a low limit, will affect everyone. That's bad because the "low memory because of undo list" problem only affects a minority (if it really affects anyone at all).
I'm not suggesting a fixed low limit.
> Why not put it into preferences with a low default value of say 10 or so, and allow people to set higher if they want?
If it can be set as a preference then people who want infinity can set it high and those who don't can set it low...or off.
> Here's what I think needs to be done instead of adding a limit:
> - optimize, optimize, optimize
??
> - add a "flush undo" command for those who really want to free some memory but don't want to restart the program
This sounds ok as well.
vellum
> If it can be set as a preference then people who want infinity can set it high and those who don't can set it low...or off.
Making something inconvenient so that people can change it via prefs is a bad idea. As I said, the number of people annoyed by a low limit will likely be much higher than the number of those who will ever run into memory problems because of unlimited undo.
On Sun, 2004-10-10 at 08:25, vellum wrote:
>> Here's what I think needs to be done instead of adding a limit:
>> - optimize, optimize, optimize
> ??
One of our big memory problems right now is that we copy data around rather than sharing it.
I've already done some work along those lines, in terms of sharing whole strings, but we really would benefit from being able to store finer-grained deltas.
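In miniature, whole-string sharing amounts to an interning table -- one object per distinct string, handed out to every repr or undo record that needs it (an illustrative sketch, not Inkscape's gc-managed implementation):

```python
class StringTable:
    """Hand out one shared object per distinct string instead of
    copying it into every consumer. Illustrative sketch of
    whole-string sharing; the real version is gc-managed."""

    def __init__(self):
        self._table = {}

    def intern(self, s: str) -> str:
        # Return the canonical copy, storing s if it's the first one.
        return self._table.setdefault(s, s)
```

Finer-grained deltas would go a step further: rather than sharing whole strings, undo records would reference only the changed pieces.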
-mental
participants (5): bulia byak, Jasper van de Gronde, Joshua A. Andler, MenTaLguY, vellum