our filters may be incompatible with SVG
I would like to discuss a potentially very serious compatibility issue with our filters implementation. The question is: if an object has a transform, do the filters apply to it _before_ or _after_ the transformation? To illustrate this, do the following:
- draw a circle
- blur it
- squeeze it vertically (note the blur is also squeezed, i.e. x-blur is greater than y-blur)
- rotate the object
At this stage, in Inkscape the blur remains horizontal, so it is no longer aligned with the object itself, and the appearance of the object changes very visibly after the rotation. If, however, you now save your file and view it in Batik, you will see that Batik rotates the blur too, so the object looks exactly the same as before the rotation.
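For concreteness, a minimal test case for the steps above could look something like this (the numbers are arbitrary, not taken from any real file):

    <svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">
      <defs>
        <filter id="blur" x="-50%" y="-50%" width="200%" height="200%">
          <feGaussianBlur stdDeviation="10"/>
        </filter>
      </defs>
      <!-- the circle is blurred in its own coordinates, squeezed vertically
           by scale(1,0.4), then rotated by rotate(45) -->
      <circle cx="0" cy="0" r="60" filter="url(#blur)"
              transform="translate(200,200) rotate(45) scale(1,0.4)"/>
    </svg>

Batik renders the blur's long axis rotated together with the circle; current Inkscape keeps it screen-horizontal.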
Note that Inkscape differs from Batik only after you add rotation. For non-uniform scaling, Inkscape emulates Batik's behavior by scaling the x and y deviations of the blur filter when applied to a scaled object. However, such emulation is not possible for rotation or shear, hence the discrepancy.
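To spell out why (my notation, nothing from the spec): a Gaussian blur with deviations \sigma_x, \sigma_y in the object's own space has covariance \Sigma = \mathrm{diag}(\sigma_x^2, \sigma_y^2), and the linear part A of the object's transform carries it to screen space as

    \Sigma' = A \, \Sigma \, A^{\mathsf{T}}

For a pure scale A = \mathrm{diag}(s_x, s_y) this stays diagonal, i.e. an ordinary x/y blur with deviations s_x\sigma_x and s_y\sigma_y, which is exactly the emulation described above. A rotation or shear combined with \sigma_x \ne \sigma_y produces off-diagonal terms, i.e. a Gaussian tilted relative to the screen axes, which cannot be expressed by merely scaling the two deviations.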
Now, I could not find a definitive indication in the SVG 1.1 standard as to which of the two behaviors is correct. (I may have missed it, of course.) Some arguments can be made in favor of each:
Batik:
- It is more visually consistent because rotation/shear does not alter the appearance of a filtered object in a confusing way
- So far, we have always considered Batik an authoritative renderer (though it may be in error too, of course)
Inkscape:
- The idea of transforming an already filtered object (i.e., in essence, a bitmap!) goes absolutely against our architecture, in which rasterization and filtering are always the last step after all things that can be done in vector (such as transforms) are done. If we try to fix this to match Batik, we will need to implement a separate bitmap transformer (!) which will have to be run for filtered objects _only_ (otherwise we'll end up with pixelated renditions of any scaled-up object). This will be a total mess from the programmer's viewpoint if you ask me.
- Our behavior preserves the same appearance of a path regardless of whether its transform is embedded or given in a transform= attribute (but this is currently only true for rotations, not for non-uniform scales). This is a principle which we're kinda trying to uphold, although it's not always possible.
I _think_ that even if our behavior is an error, it's not reason enough to hold up 0.45, because it only affects objects that are both squeezed _and_ rotated. However, it's still very serious, because it's a visible rendering discrepancy. So I would like to discuss this issue and the approach we should eventually pursue.
On Tue, 2006-12-26 at 23:35 -0400, bulia byak wrote:
Note that Inkscape differs from Batik only after you add rotation. For non-uniform scaling, Inkscape emulates Batik's behavior by scaling the x and y deviations of the blur filter when applied to a scaled object. However, such emulation is not possible for rotation or shear, hence the discrepancy.
After looking over the standard for a while, I think Batik is correct.
Everything I see implies that the effect parameters are in the object's own coordinate space (or, given primitiveUnits="objectBoundingBox", a transformed bounding quad based on it).
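For example (values made up), with the default primitiveUnits="userSpaceOnUse" a stdDeviation is measured in the user units of the filtered element itself, while with

    <filter id="f" primitiveUnits="objectBoundingBox">
      <!-- deviation is a fraction of the object's bounding box -->
      <feGaussianBlur stdDeviation="0.02"/>
    </filter>

it is a fraction of the bounding box. Either way the numbers are interpreted in the object's own coordinate system, and the filtered result is then carried through transform= along with the geometry.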
-mental
bulia byak wrote:
- The idea of transforming an already filtered object (i.e., in essence, a bitmap!) goes absolutely against our architecture, in which rasterization and filtering are always the last step after all things that can be done in vector (such as transforms) are done. If we try to fix this to match Batik, we will need to implement a separate bitmap transformer (!) which will have to be run for filtered objects _only_ (otherwise we'll end up with pixelated renditions of any scaled-up object). This will be a total mess from the programmer's viewpoint if you ask me.
Can we apply the object transformation to the blur kernel, rather than blurring with an untransformed kernel and trying to transform the rasterised results?
Dan
On 12/29/06, Daniel Pope <mauve@...1559...> wrote:
Can we apply the object transformation to the blur kernel, rather than blurring with an untransformed kernel and trying to transform the rasterised results?
Jasper suggested in this thread that this may be doable. However, this may not be worth doing, because the same thing needs to be done for all other filters too, and not all of them can be "transformed" by varying their parameters. In particular, specifying a low filter resolution must, by Mental's close reading of the spec, result in a visible pixel grid, and that grid must be transformed by the object's transform (e.g. rotated). I don't think this can be done other than by a bitmap-to-bitmap affine transformer.
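For example (numbers arbitrary), a filter like

    <filter id="coarse" filterRes="20 20">
      <feGaussianBlur stdDeviation="2"/>
    </filter>

has to be rendered on a 20x20 intermediate raster, so the pixel grid is plainly visible; and by the reading above, that grid belongs to the object's own space, i.e. on a rotated object the coarse "pixels" themselves must appear rotated on screen.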
bulia byak wrote:
On 12/29/06, Daniel Pope <mauve@...1559...> wrote:
Can we apply the object transformation to the blur kernel, rather than blurring with an untransformed kernel and trying to transform the rasterised results?
Jasper suggested in this thread that this may be doable. However, this may not be worth doing, because the same thing needs to be done for all other filters too, and not all of them can be "transformed" by varying their parameters. In particular, specifying a low filter resolution must, by Mental's close reading of the spec, result in a visible pixel grid, and that grid must be transformed by the object's transform (e.g. rotated). I don't think this can be done other than by a bitmap-to-bitmap affine transformer.
In addition, the morphology filter would be extremely difficult (as in practically impossible) to implement on a transformed image.
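The structuring element of feMorphology is an axis-aligned rectangle given by its radius, e.g.

    <filter id="fatten">
      <feMorphology operator="dilate" radius="3 1"/>
    </filter>

Under a rotation or shear that rectangle would have to become a tilted parallelogram, which the radius parameters simply cannot describe, so, unlike the blur deviations under pure scaling, there is nothing to adjust to compensate.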
participants (4)
- bulia byak
- Daniel Pope
- Jasper van de Gronde
- MenTaLguY