
2010/4/22 Krzysztof Kosiński <tweenk.pl@...400...>:
Is there any speed advantage over just accumulating the dirty events into one Geom::Rect?
There may be none. We need to test that, but let's not automatically assume that it's better :)
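(For illustration only, a minimal sketch of the "one dirty rect" alternative being discussed. Geom::Rect is the 2geom type mentioned above, but the unionWith() call and the DirtyArea class are my assumptions, not existing Inkscape code.)

    // Hedged sketch: fold every invalidation into a single bounding
    // rectangle instead of keeping a list of dirty tiles.
    // Assumes Geom::Rect::unionWith() grows the rect to the bounding box
    // of both rectangles, as in 2geom; DirtyArea is a hypothetical name.
    #include <2geom/rect.h>

    class DirtyArea {
        bool _hasDirty;
        Geom::Rect _dirty;
    public:
        DirtyArea() : _hasDirty(false) {}

        // Grow the single dirty rect to cover a newly invalidated area.
        void invalidate(Geom::Rect const &area) {
            if (_hasDirty) {
                _dirty.unionWith(area);
            } else {
                _dirty = area;
                _hasDirty = true;
            }
        }

        bool empty() const { return !_hasDirty; }

        // Fetch the accumulated area and reset it, just before repainting.
        Geom::Rect flush() {
            _hasDirty = false;
            return _dirty;
        }
    };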
This kind of tiling will not speed up HW accelerated rendering, because in practice the speed of HW accelerated 2D drawing does not depend on the area redrawn but on the number of commands issued. Tiling the image into 16 parts and redrawing all of them will be, in the worst case (when all objects intersect all tiles), 16x slower than drawing everything in one go. For software rendering, the current aggressive tiling strategy is probably better, as the time taken depends more on the number of pixels rendered. So I think the best idea is to leave the tiling system intact but provide for opportunities to bypass it.
As I explained, the goal of this is not an overall speed increase, but finding a way to insert interruption points during rendering, even if the overall render time grows. If, as you say, accelerated rendering depends on the number of commands and not on the area to paint (which may be true to an extent, but I doubt it is literally true), then we'll need to break the stream of commands into chunks, not the area to paint, so that an interruption leaves all of the area painted with only part of the objects, rather than part of the area painted with all of the objects, as now.
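(To make the "interruption points" idea concrete, here is a hedged sketch of what chunked, interruptible rendering could look like; all names are hypothetical and this is not Inkscape's actual canvas code.)

    // Sketch: split the work (the list of objects/commands to paint) into
    // chunks, and check for pending input between chunks. If interrupted,
    // the caller can resume from `next` in a later idle callback.
    #include <cstddef>
    #include <functional>
    #include <vector>

    struct RenderChunk {
        std::function<void()> paint;   // issues one batch of drawing commands
    };

    // Returns true if rendering finished, false if it was interrupted.
    bool render_interruptible(std::vector<RenderChunk> const &chunks,
                              std::size_t &next,
                              std::function<bool()> interrupt_requested)
    {
        while (next < chunks.size()) {
            chunks[next].paint();
            ++next;
            if (interrupt_requested()) {
                return false;          // finish the rest later
            }
        }
        return true;
    }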
Another thing, instead of allocating 256K buffers on demand, it might be better to create one big Cairo surface, and then split it into chunks using cairo_surface_create_for_region. This could simplify multithreaded software rendering, as threads won't need to allocate any memory for the output, but I don't see this function anywhere in the public API of Cairo.
I second the suggestion to contact Cairo developers on this matter.
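(As a side note, and only as a hedged sketch: Cairo's public API doesn't seem to have cairo_surface_create_for_region, but for plain image surfaces a similar effect can be obtained with cairo_image_surface_create_for_data pointed into the parent buffer. The tile_of() helper below is hypothetical; the cairo calls themselves are real.)

    // Sketch: wrap a sub-rectangle of one big ARGB32 image surface in its
    // own cairo surface, without allocating new pixel memory.
    #include <cairo.h>

    cairo_surface_t *tile_of(cairo_surface_t *parent,
                             int x, int y, int w, int h)
    {
        cairo_surface_flush(parent);   // make sure pending drawing is written
        unsigned char *data = cairo_image_surface_get_data(parent);
        int stride = cairo_image_surface_get_stride(parent);
        // ARGB32 pixels are 4 bytes; the tile shares the parent's stride.
        unsigned char *origin = data + y * stride + x * 4;
        return cairo_image_surface_create_for_data(
            origin, CAIRO_FORMAT_ARGB32, w, h, stride);
    }

Since such tiles reference disjoint byte ranges of the same buffer, worker threads could each render into their own tile surface without further allocation, which seems to be the point of the suggestion.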
For dragging, the first optimization that comes to my mind is as follows:
1. We determine whether all the dragged or modified objects are adjacent in the z-order.
2. If they aren't, we skip this optimization as it's going to be too complex.
3. We render the canvas into 3 layers: below the dragged object(s), above it, and a layer with the object(s) itself.
4. On every drag we redraw only the portion with the affected object and composite the output from those three layers.
I think this might not work when filters come into the picture though.
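(For illustration, a minimal hedged sketch of the three-layer compositing step described above, using plain Cairo calls; the function and its arguments are hypothetical, not Inkscape API.)

    // Sketch: keep "below", "dragged" and "above" as cached surfaces and
    // recompose them on every drag, re-rendering only the dragged layer.
    #include <cairo.h>

    void composite_drag(cairo_t *cr,
                        cairo_surface_t *below,
                        cairo_surface_t *dragged, double dx, double dy,
                        cairo_surface_t *above)
    {
        cairo_set_source_surface(cr, below, 0, 0);
        cairo_paint(cr);
        cairo_set_source_surface(cr, dragged, dx, dy);  // offset by the drag
        cairo_paint(cr);
        cairo_set_source_surface(cr, above, 0, 0);
        cairo_paint(cr);
    }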
You seem to refer to dragging SVG objects, but I was referring to dragging canvas items over drawing, such as nodes or handles. They are always on top of the drawing, and for them a much simpler solution is possible where each canvas item remembers what was under it the last time it was repainted, and restores that when it is moved.
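(Again only as a hedged sketch of that "remember what was under it" idea, with hypothetical names; the cairo calls are real, but this is not the existing canvas item code.)

    // Sketch: before a canvas item (node, handle) is painted, copy the
    // pixels it will cover; when it moves, paste that copy back and
    // capture again at the new position.
    #include <cairo.h>

    struct SaveUnder {
        cairo_surface_t *saved;
        double x, y, w, h;

        SaveUnder() : saved(0), x(0), y(0), w(0), h(0) {}

        // Remember the area of `canvas` that the item is about to cover.
        void capture(cairo_surface_t *canvas,
                     double ix, double iy, double iw, double ih) {
            if (saved) cairo_surface_destroy(saved);
            saved = cairo_image_surface_create(CAIRO_FORMAT_ARGB32,
                                               (int) iw, (int) ih);
            cairo_t *cr = cairo_create(saved);
            cairo_set_source_surface(cr, canvas, -ix, -iy);
            cairo_paint(cr);
            cairo_destroy(cr);
            x = ix; y = iy; w = iw; h = ih;
        }

        // Put the remembered pixels back when the item moves away.
        void restore(cairo_t *canvas_cr) {
            if (!saved) return;
            cairo_save(canvas_cr);
            cairo_rectangle(canvas_cr, x, y, w, h);
            cairo_clip(canvas_cr);
            cairo_set_source_surface(canvas_cr, saved, x, y);
            cairo_set_operator(canvas_cr, CAIRO_OPERATOR_SOURCE);
            cairo_paint(canvas_cr);
            cairo_restore(canvas_cr);
        }
    };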
One final note: please try to keep both the new and the old rendering functioning together for as long as possible, with an easy way to switch between them (ideally without a restart), so that the new renderer can be quickly checked for correctness and speed. This is a critical point in ensuring a smooth transition! It is also another reason to limit refactoring to the necessary minimum, so as not to disable the old rendering code unless absolutely unavoidable.
I am afraid this might go the way of the failed gtkmm rewrite attempt, and could limit my options. I would need to duplicate the display and libnr directories to do anything meaningful. Isn't it enough to work in a branch?
I would not insist on that, but I know from experience that rendering artefacts and glitches may be very hard to reproduce if you have to start a new copy of the program for that, as opposed to hitting a single key to redraw with a different renderer.