On 25-10-12 22:41, pennec victor wrote:
Hi,
I think this is a little off topic, but I don't know where else to ask (my questions about SVG on InkscapeForum have been left unanswered).
Can someone point me to reference papers (or a friendly forum) about the coefficients in feColorMatrix and feComposite? Something that could answer the following questions:
What are safe values? For example, the Inkscape UI allows up to 10 for k1, k2, k3, k4 in feComposite; can I push to 255? 256? 1000000? I don't see anything in the SVG spec about maximum values for those coefficients.
Essentially you can use any values you like. The SVG spec does state that the result of each primitive is clamped, but within a single primitive it is basically up to the implementation (although I would expect an implementation to allow anything "reasonable"). In the past I've played around a bit with feComposite, and I believe I ran into some issues here and there, but mostly it was fine.
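To make the point concrete, here is a small sketch of the per-channel formula the spec gives for feComposite with operator="arithmetic" (result = k1*i1*i2 + k2*i1 + k3*i2 + k4, with i1 and i2 the premultiplied input channels). The function name and the placement of the clamp are my own illustration; the key idea is that the coefficients themselves are unrestricted and only the primitive's final result is clamped:

```python
def composite_arithmetic(i1, i2, k1, k2, k3, k4):
    """Per-channel result of feComposite operator="arithmetic".

    i1, i2 are premultiplied channel values in [0, 1]; the k
    coefficients are unrestricted by the spec, but the final
    result of the primitive is clamped to the valid range.
    """
    result = k1 * i1 * i2 + k2 * i1 + k3 * i2 + k4
    # Clamping happens at the end of the primitive, so large or
    # negative intermediate values are fine in principle.
    return min(max(result, 0.0), 1.0)
```

So k1 = 1000000 is not an error per se; any contribution above 1.0 simply saturates, which is why huge coefficients tend to produce flat white (or flat opaque) regions rather than rendering failures.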
The best thing is to simply try your file in as many different renderers as possible.
What is the effect of a <100% opacity source or destination? (On clipping, for example)
I'm not entirely sure what you mean, but in principle it should have no effect. Well, obviously the opacity is less, and this will be reflected in the result, but that is about it. Of course, if any clamping is done (usually at the end of the operation), then the colour values will be clamped to the opacity when using premultiplied colours, but that's normal.
I am trying to port SVG 2 Porter-Duff operations to SVG 1 and I often get results that seem strange to me with Inkscape. Are there any known bugs in Inkscape's rendering of feComposite with negative values?
Actually, now that you mention it, yes: https://bugs.launchpad.net/inkscape/+bug/1044989 I'll probably look into it tomorrow.
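For reference when checking results against a renderer, the classic Porter-Duff operators can all be written as Fa*c1 + Fb*c2 on premultiplied channels, where the fractions Fa and Fb depend only on the two alphas. A minimal sketch (the table and function names are mine; the fractions are the standard ones from Porter and Duff's compositing model):

```python
# Porter-Duff fractions (Fa, Fb) for a few operators, expressed as
# functions of the source alpha a1 and destination alpha a2.
OPS = {
    "over": lambda a1, a2: (1.0, 1.0 - a1),
    "in":   lambda a1, a2: (a2, 0.0),
    "out":  lambda a1, a2: (1.0 - a2, 0.0),
    "atop": lambda a1, a2: (a2, 1.0 - a1),
    "xor":  lambda a1, a2: (1.0 - a2, 1.0 - a1),
}

def porter_duff(op, c1, a1, c2, a2):
    """Composite one premultiplied channel: Fa*c1 + Fb*c2."""
    fa, fb = OPS[op](a1, a2)
    return fa * c1 + fb * c2
```

Note that the fractions depend on the per-pixel alphas, while the k1-k4 coefficients of feComposite arithmetic are constants; that mismatch is one reason emulating these operators with arithmetic compositing alone gets awkward.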
Bonus question: why is Inkscape so slow at rendering filters? ;)
This has been asked and answered before, so I won't go into it in detail (if you're really interested, I'd recommend searching the archives). Basically, each filter primitive constitutes a very expensive copy of the image. Transformations can add further copies in some cases, and something like Gaussian blur (which is not per-pixel, but requires data from a neighbourhood) makes things even less efficient. At least, that's my theory; it hasn't been tested yet, as all these problems are non-trivial to solve.
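The per-pixel versus neighbourhood distinction above can be sketched in a few lines (a toy illustration, not Inkscape's actual pipeline): a per-pixel primitive touches each pixel once, while a blur-like primitive reads 2r+1 pixels for every output pixel, so cost grows with the kernel radius on top of the full-image copy each primitive already makes.

```python
def per_pixel_op(img, f):
    # A per-pixel primitive: one read and one write per pixel,
    # but still a full copy of the image.
    return [[f(p) for p in row] for row in img]

def box_blur_1d(row, radius):
    # A neighbourhood primitive: each output pixel averages up to
    # 2*radius + 1 inputs, so work scales with the radius.
    n = len(row)
    out = []
    for i in range(n):
        window = row[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out
```

Chaining several such primitives multiplies these costs, which matches the observation that filter-heavy documents render slowly.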