On 04-09-12 05:59, Auguste Pop wrote:
> On Mon, Sep 3, 2012 at 6:30 PM, Mark Crutch <markc@...2744...> wrote:
>> Filters are essentially bitmaps generated by performing a series of
>> mathematical operations on each pixel in the rendered image. The more pixels
>> that need to be rendered, the more maths there is to do, and the slower the
>> rendering will be. Rendering to a bitmap doesn't just mean exporting to PNG
>> though; every time Inkscape draws to the screen it has to render to a
> I think Inkscape's problem is that it tries to render the whole image
> instead of only the visible part. That's why when we zoom in, the
> rendering process becomes increasingly slow.
> I may be wrong about how Inkscape works here; that's just guesswork
> based on my experience.
In principle that's not how Inkscape does it, but there is a bit of
truth in there (and that's a large part of the problem). It's always
difficult to say for sure exactly why a program is slow until you've
fixed it, but I'll give it a go nonetheless.
First of all, most of Inkscape's filters are quite fast... However,
that is just when testing them in isolation, as in "give them an image
to work on and measure how long it takes to process it". Unfortunately
this is not the whole story. There are at least two problems that come up.
The first problem with Inkscape's filters is that Inkscape does NOT call
them on the entire image. Instead, it only calls them on patches or
slices (you may have noticed that Inkscape tends to update the screen
one patch/slice at a time). The problem here is that many filters (like
Gaussian blur) need "neighbouring" values. So suppose you have a 100x100
patch with a Gaussian blur needing all values within 100 pixels of the
current one. Inkscape then renders a 300x300 patch for each 100x100
patch that is displayed (cropping the result). That means that Inkscape
is essentially drawing the whole image 9 times!!! Obviously, as patches
get larger this becomes less of a problem (the development version of
Inkscape uses much larger patches/slices if I remember correctly).
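To see where that factor of nine comes from, here is a toy sketch (plain Python, not Inkscape code; the function name is made up) of the overdraw cost when each patch has to be rendered with a margin of blur-radius pixels on every side:

```python
def overdraw_factor(patch_size, radius):
    # Each patch_size x patch_size patch must be rendered with `radius`
    # extra pixels on all four sides, so the actual rendered area is
    # (patch_size + 2 * radius)^2 instead of patch_size^2.
    expanded = patch_size + 2 * radius
    return (expanded / patch_size) ** 2

# 100x100 patches, blur needing values within 100 pixels:
print(overdraw_factor(100, 100))  # 9.0 -> whole image drawn ~9 times
# Larger patches dilute the fixed margin:
print(overdraw_factor(400, 100))  # 2.25
```

This also shows why simply enlarging the patches (as the development version apparently does) helps: the margin is fixed by the filter, so its relative cost shrinks as the patch grows.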
The second problem is that if you have a lot of filter primitives in one
filter they basically get executed in series. So first primitive one,
then primitive two, etc. This needs a lot of intermediate buffers (not
100% sure that there isn't any reuse, but I doubt it), putting a lot of
pressure on memory use. But also, it requires an enormous amount of
memory bandwidth. I haven't delved into how much of a problem this is,
but it's likely that it contributes significantly to the processing time
of complex filters.
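As a rough illustration (a hypothetical sketch in Python/NumPy, not Inkscape's actual filter code, with made-up primitive names), a chain that allocates a fresh full-size intermediate buffer for every primitive looks like this:

```python
import numpy as np

def run_chain_naive(image, primitives):
    # Each primitive writes into a freshly allocated buffer, so a chain
    # of N primitives allocates N full-size intermediates and streams
    # the whole image through memory N times.
    buf = image
    for prim in primitives:
        out = np.empty_like(buf)  # new intermediate buffer every step
        out[...] = prim(buf)
        buf = out
    return buf

# Stand-ins for real filter primitives:
darken = lambda a: a * 0.5
brighten = lambda a: a + 10.0

result = run_chain_naive(np.zeros((100, 100)), [darken, brighten])
```

Reusing a pair of ping-pong buffers (or fusing adjacent primitives into one pass) would cut both the allocations and the bandwidth, which is roughly the kind of rewrite being alluded to below.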
The bad news is that the above is probably not going away any time soon.
I am unaware of any existing system to deal with problem one (although I
have some ideas for how it could be done), and problem two would
essentially require a pretty fundamental rewrite of our code. A GPU
version of our filtering code would help somewhat, but it's also not a
panacea (and again requires some major coding).
Finally, the above is probably not the whole story, as some things are
simply impossibly slow. That is why it is important to keep sending us
examples of files that really render WAY too slowly.