On 05-Apr-2015 14:53, Krzysztof Kosiński wrote:
Whenever something is changed in the document, Inkscape does the following:
1. Store the change in the DOM tree (aka SP tree).
2. Write the change to the XML, also storing undo data.
3. In response to this XML change, update / recompute the values in the DOM tree.
Seems like (1) is redundant given (3). In other words, and I'm not saying this very well, if we view the "job" of Inkscape as just this:
A. Maintain a syntactically correct SVG file.
B. Maintain an object model of that SVG file for the purposes of rendering, selecting, and moving things around.
Then any time a user's action on the objects makes a change which should be reflected in the SVG, it should do (A) and then (B). From what you are saying, it is instead doing (B), (A), (B).
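To make the (1)/(2)/(3) cycle concrete, here is a minimal sketch of the write-then-readback pattern being discussed. The classes (`XmlNode`, `PathObject`) and methods are hypothetical stand-ins, not Inkscape's actual SPObject / Inkscape::XML::Node machinery:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for a node in the XML tree.
struct XmlNode {
    std::string d;          // e.g. the "d" attribute of an <svg:path>
};

// Hypothetical stand-in for the corresponding DOM (SP tree) object.
struct PathObject {
    std::string parsed_d;   // DOM-side copy of the geometry

    void setPath(const std::string& d, XmlNode& xml) {
        parsed_d = d;       // (1) store the change in the DOM tree
        xml.d = d;          // (2) write it to the XML (undo data recorded here)
        readback(xml);      // (3) re-parse the XML back into the DOM tree
    }

    // Step (3): the readback that makes step (1) redundant whenever the
    // serialization round-trips exactly. In real code this can trigger
    // recomputation of a large portion of the subtree.
    void readback(const XmlNode& xml) {
        parsed_d = xml.d;
    }
};
```

Doing only (A) then (B), in Mathog's terms, would mean dropping step (1) and letting the XML change drive the single DOM update.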
One way to improve Inkscape's performance is to avoid the readback step after every change. In many cases, it leads to large portions of the DOM tree being unnecessarily recomputed.
Yeah, I've seen that. It can also make debugging harder: you make one little change and all hell breaks loose, and you have to wade through it all in the debugger until you finally reach the one object change that is actually a result of your edit.
However, not doing readback is rather dangerous - if what we write does not actually parse back to what we have, we end up with data corruption - we are displaying a document that does not match the one that we would display when reopening the file.
If, in terms of the above, the changes flowing from (A) to (B) can be limited by some sort of conserved mapping between the two, then the extent of the recalculation can be limited as well.
Therefore, the way to go would be to have the development version always do readback, while the release version would just assume it wrote the correct data.
Big red flag. Any time the release build acts differently from the development build, you open the door to release bugs which cannot be reproduced in development. In my experience even something as minor as changing the optimization level can expose or hide memory corruption bugs.
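The proposal being objected to would look roughly like the following debug-only verification. This is a sketch under assumed names (`serialize`, `parse`, `writeValue` are illustrative, not Inkscape APIs); the `#ifndef NDEBUG` branch is exactly the build-dependent behavior the objection is about:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Hypothetical serializer: 17 significant digits round-trip any double.
// (A shortest-exact serializer would use the double-conversion library.)
static void serialize(double v, char* buf, size_t n) {
    snprintf(buf, n, "%.17g", v);
}

static double parse(const char* buf) {
    return strtod(buf, nullptr);
}

// Write a value to its XML attribute; verify the round trip only in
// development builds.
double writeValue(double v, char* xmlAttr, size_t n) {
    serialize(v, xmlAttr, n);       // always write the XML
#ifndef NDEBUG
    // Development build: read the value back and check it parses to
    // exactly what we hold in memory.
    double reread = parse(xmlAttr);
    assert(reread == v && "serialized form does not parse back exactly");
    return reread;
#else
    // Release build: trust that what we wrote is what we meant.
    // This is the divergence between builds being warned about above.
    return v;
#endif
}
```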
In the case of paths and other coordinate values, I think we should always store them with full double precision, taking advantage of the double-conversion library (I incorporated it into 2Geom), and only allow reducing the precision in a separate output filter.
That goes back to the SVG -> Object conversion. The value that counts is the one held in the SVG, and I believe that precision is set by that standard. That said, given the amount of memory in most computers these days, there isn't much reason to prefer single precision over double precision.
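The precision question connects directly to the readback problem: a double printed with the low precision typical of hand-written SVG does not parse back to the same value, while 17 significant digits (or double-conversion's shortest exact form) always does. A small self-contained check, using plain printf formatting rather than double-conversion itself:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Returns true if printing v with the given number of significant
// digits and parsing it back yields exactly v. The double-conversion
// library instead emits the *shortest* string that round-trips, which
// is what 2Geom uses; this sketch only illustrates the principle.
bool roundTrips(double v, int digits) {
    char buf[64];
    snprintf(buf, sizeof buf, "%.*g", digits, v);
    return strtod(buf, nullptr) == v;
}
```

For example, 1.0/3.0 fails to round-trip at 6 significant digits but succeeds at 17, which is why reducing precision belongs in a separate output filter rather than in the canonical stored form.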
Regards,
David Mathog mathog@...1176... Manager, Sequence Analysis Facility, Biology Division, Caltech