
Just a small question here: last year, I think in September or October, I compiled the gsoc-gpu branch of Inkscape. It was a work in progress, but it was very fast for some filters.
The question is: can we see some of that work added gradually to normal releases, or does it have to be included as a whole?
2011/3/15 Jasper van de Gronde <th.v.d.gronde@...528...>:
On 2011-03-13 15:52, Vladimir Savic wrote:
On Sun, 2011-03-13 at 15:37 +0100, the Adib wrote:
Is there any plan for how we will improve performance in the future? I mean:
- fast (visual) response to the user
- fast rendering
I know that we have:
- tiling of the image into pieces to decrease user response time
- OpenMP added to the filter functions
On the roadmap is a conversion to cairo, but cairo does not render filters at all. I read that LLVM could speed things up if used to generate the filter code...
Maybe I misunderstood the concept behind filter rendering, but what about joining efforts with the GEGL guys and using that library? I guess it would develop faster, both big apps using it (Inkscape and GIMP) would benefit from the joint development, bugs would be fixed faster, etc. If I understand correctly, GIMP will use GEGL for pixel computations too and pass the result to cairo for drawing on the canvas.
Could we please slow down just a bit? Basically someone has come along and claimed, with some justification, that Inkscape is slow. It is, at least in some respects (no one files bugs for cases in which Inkscape performs marvelously). But we are working on performance issues continuously, and I think pretty much any Inkscape developer knows at least a few things he would like to speed up.
As for getting some results: apart from some UI issues, which are important but not exactly my specialty, filters are probably the main cause of perceived slowness. I would like to make a couple of remarks about that. Inkscape mostly has implementations that are quite accurate, which often results in slightly worse performance than you might otherwise see. Personally I feel that this is good, as this way you at least get the right output, and with a bit more effort even high-quality results can often be obtained with very good performance (but yes, it does require more effort, and therefore more patience).
To summarize and extend what some others said, the following could have a big impact:
- Smarter handling of tiles
- Use of the GPU
- Faster algorithms
- "Simplification" of filters
- Tricks to make things "look" faster
Tiles can be quite devastating to performance when doing Gaussian blur (as Krzysztof mentioned), as even with the IIR code we may still have to deal with huge margins. I think that for the specific case of Gaussian blur it might be possible to work out something to prevent this (and I would be willing to work on at least the theoretical part), but the main issue is that it would probably require a fundamental change in how we deal with tiles.
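To give a feel for the numbers (a rough back-of-the-envelope estimate, assuming the usual ~3*sigma margin for an accurate Gaussian): a blur with standard deviation 20px needs a margin of about 60px on each side, so a 256x256 tile ends up reading roughly (256+120)^2 / 256^2, or about 2.2 times, the pixels it produces. With smaller tiles or larger blurs the overhead only gets worse.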
Using the GPU can make things a lot faster, simply because the GPU is generally a lot faster at filter-like operations. Keep in mind, though, that this is no silver bullet, especially if you need to transfer image data between main memory and the graphics card a lot. In general this item is very much orthogonal to the other improvements.
It might come as a surprise to some, but at least the morphological filters can be sped up relatively easily simply by using a different algorithm. Currently the speed of these filters depends on the size of the structuring element, and this is suboptimal. There are algorithms that aren't even that complicated that allow you to compute the exact same filter without the performance depending on the size of the structuring element (if anyone is interested in coding this up I can provide references/explanation).
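For the curious, one such algorithm is the van Herk/Gil-Werman running maximum, which needs about three comparisons per pixel regardless of the window size. A rough, illustrative 1D sketch (not Inkscape code; border handling is simplified so output is only produced where the window fully fits):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // 1D dilation (running max) over a window of size w, in O(1) per sample:
    // precompute per-block prefix maxima (g) and suffix maxima (h); any
    // window [i, i+w-1] then spans at most two blocks, so its maximum is
    // max(h[i], g[i+w-1]).
    std::vector<unsigned char>
    dilate1d(std::vector<unsigned char> const &x, std::size_t w)
    {
        std::size_t n = x.size();
        std::vector<unsigned char> g(n), h(n), out(n >= w ? n - w + 1 : 0);
        for (std::size_t i = 0; i < n; ++i)   // prefix max within each block of w
            g[i] = (i % w == 0) ? x[i] : std::max(g[i - 1], x[i]);
        for (std::size_t i = n; i-- > 0; )    // suffix max within each block of w
            h[i] = (i % w == w - 1 || i == n - 1) ? x[i] : std::max(h[i + 1], x[i]);
        for (std::size_t i = 0; i + w <= n; ++i)
            out[i] = std::max(h[i], g[i + w - 1]);
        return out;
    }

Erosion is the same with min, and a rectangular structuring element in 2D is just a horizontal pass followed by a vertical one.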
Currently, when rendering a filter, we just iterate over the filter primitives without looking at what we're doing. In some cases it might be possible to collapse two filter primitives to one, process a smaller area (because we know that later on only a small part is used), reorder and collapse primitives, detect special cases and process them more efficiently, etc. This can also tie in with using the GPU or LLVM, as you could imagine generating a single program per filter. This approach will not speed up a simple blurring operation or something like that, but it can help in dealing with some of the more complex filters.
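As a rough sketch of the area bookkeeping part of this (the names are illustrative, not our real filter API): walking the primitive chain backwards, each primitive only needs to be evaluated on the region its successor consumes, grown by the primitive's own reach (e.g. about 3*sigma for a Gaussian blur):

    #include <cstddef>
    #include <vector>

    struct Rect { int x0, y0, x1, y1; };
    struct Primitive { int margin; };  // how far this primitive "reaches"

    Rect grow(Rect r, int m) { return { r.x0 - m, r.y0 - m, r.x1 + m, r.y1 + m }; }

    // Given the region the caller wants rendered, compute the region each
    // primitive in the chain must actually be evaluated on.
    std::vector<Rect>
    required_areas(std::vector<Primitive> const &chain, Rect wanted)
    {
        std::vector<Rect> areas(chain.size());
        for (std::size_t i = chain.size(); i-- > 0; ) {
            wanted = grow(wanted, chain[i].margin);
            areas[i] = wanted;
        }
        return areas;
    }

The same backwards walk is where you would spot collapsible or dead primitives.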
And as someone else mentioned, sometimes something just needs to look fast. Apart from carefully choosing when to rerender, we could also try some tricks, like only updating part of the image while a slider is held (unless there is plenty of time, of course), smarter caching (for example, if you change the color of a blurred object that has just a flat fill, it is quite easy to update the blurred version without reblurring), etc.
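That recoloring trick works because Gaussian blur is linear: a flat-colored shape blurs to the new color times the blurred coverage mask, so if the blurred alpha is cached, a color change is just a per-pixel multiply. A hypothetical sketch (premultiplied RGBA; not our actual cache code):

    #include <cstddef>
    #include <vector>

    struct RGBA { unsigned char r, g, b, a; };

    // 'blurred_alpha' is the cached blurred coverage of the shape; tinting
    // it with the new fill color reproduces the blurred object exactly.
    void recolor_blurred(std::vector<unsigned char> const &blurred_alpha,
                         RGBA fill, std::vector<RGBA> &out)
    {
        out.resize(blurred_alpha.size());
        for (std::size_t i = 0; i < blurred_alpha.size(); ++i) {
            unsigned a = blurred_alpha[i];
            out[i] = { (unsigned char)(fill.r * a / 255),
                       (unsigned char)(fill.g * a / 255),
                       (unsigned char)(fill.b * a / 255),
                       (unsigned char)(fill.a * a / 255) };
        }
    }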
As for GEGL, following last year's LGM I had a closer look at it, because I too wondered whether we couldn't somehow share code. Well. I still think it could be interesting, but I was unable to get it to work at all under Windows (after much experimentation I managed to get it to compile, but then it just crashed on me) and got zero response when asking about it on their mailing list. Don't get me wrong, I think the project is great, and I even think it might be interesting to use it for normal rendering (not just filters), to remove any artificial barrier between filters and the rest of the rendering code, but as it is now it at least needs some intense Windows love.