Bug with pixel-perfect SVG
Hello,
I am trying to open an SVG file (wolfd_4.svg) in Inkscape (Mac OS X, v0.48) that is a pixel-perfect version of a PNG (wolf3d_4.png). Chrome and Safari are able to render the image correctly (chrome_rendition.png and safari_rendition.png).
However, when loaded in Inkscape (inkspace_rendition.png), I can see gaps between the squares that are supposed to be pixels.
Is there any way to fix this?
Fabien
PS: Safari (Mac OS X) v7.0.6 (9537.78.2), Chrome (Mac OS X) v37.0.2062.94
On 2014-09-21 19:57, Fabien Sanglard wrote:
I am trying to open an SVG file (wolfd_4.svg) in Inkscape (Mac OS X, v0.48) that is a pixel-perfect version of a PNG (wolf3d_4.png). Chrome and Safari are able to render the image correctly (chrome_rendition.png and safari_rendition.png).
However, when loaded in Inkscape (inkspace_rendition.png), I can see gaps between the squares that are supposed to be pixels.
Zoom to 100% and the gaps are gone in Inkscape too.
Is there any way to fix this?
Known issue, tracked in https://bugs.launchpad.net/inkscape/+bug/170356
Regards, V
On Sun, 2014-09-21 at 10:57 -0700, Fabien Sanglard wrote:
Hello,
I am trying to open an SVG file (wolfd_4.svg) in Inkscape (Mac OS X, v0.48) that is a pixel-perfect version of a PNG (wolf3d_4.png). Chrome and Safari are able to render the image correctly (chrome_rendition.png and safari_rendition.png).
Have you zoomed in? I see white space in Chrome when the zoom isn't 100%.
However, when loaded in Inkscape (inkspace_rendition.png), I can see gaps between the squares that are supposed to be pixels.
It's an aliasing issue.
Is there any way to fix this?
Set your zoom to multiples of 100% (e.g. 200%, 300%, ...).
You could also set a very thin stroke the same color as the fill around each pixel.
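(For illustration only, here is roughly what that stroke workaround looks like when generating a single 1x1 "pixel" rect; the colour, position and stroke width below are made up:)

# Illustration of the stroke workaround: a thin stroke in the same colour
# as the fill makes each "pixel" rect bleed slightly into its neighbours,
# which hides the antialiasing seam between adjacent rects.
fill = "#8b4513"  # hypothetical pixel colour
print(f'<rect x="10" y="10" width="1" height="1" '
      f'fill="{fill}" stroke="{fill}" stroke-width="0.1"/>')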
Tav
It's an aliasing issue.
I thought it was a floating-point accuracy issue? Can you elaborate on this so I can understand the issue better?
Regards,
Fabien
Mon, 22 Sep 2014 10:00:14 -0700 Fabien Sanglard <fabiensanglard.net@...400...> wrote:
It's an aliasing issue.
I thought it was a floating-point accuracy issue? Can you elaborate on this so I can understand the issue better?
It is an issue of antialiasing combined with a simple blending model.
Consider a pixel that is divided exactly in half by two polygons: polygon A fills the left side of the pixel, polygon B the right side.
Antialiasing tells us that each polygon on its own has a coverage of 0.5 (or 50 %) in that pixel. So far everything is correct.
Inkscape then uses simple alpha blending to combine the colour contributions from those two polygons into the pixel colour. The blending stage does not know the shapes of the two polygons, just that each would fill half of the pixel on its own.
The blending equations make the assumption that the two polygons are orthogonal to each other (say, A filling the left half and B filling the top half), which usually works well but is bad in this exact corner case. So the final coverage of the pixel becomes coverage(A) + coverage(B) * (1 - coverage(A)) = 0.5 + 0.5 * 0.5 = 0.75.
So, instead of getting a fully opaque pixel, we get a 25 % transparent one. Rendered on top of a white background, the pixel becomes considerably lighter than it should have been.
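To make that arithmetic concrete, here is a tiny sketch of the calculation with plain numbers (nothing Inkscape-specific about it):

# Two shapes that each cover exactly half of one output pixel, composited
# with ordinary "over" alpha blending. The blend only sees the two coverage
# values, not the geometry, so it cannot tell that the shapes tile the pixel.
coverage_a = 0.5  # left half of the pixel
coverage_b = 0.5  # right half of the pixel

combined = coverage_a + coverage_b * (1 - coverage_a)
print(combined)                  # 0.75 -> the pixel is left 25 % transparent
print(coverage_a + coverage_b)   # 1.0 is the coverage we actually wanted here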
Niko,
Thanks for taking the time to provide more explanation. It seems all the programs I was able to find that convert PNG to SVG using vector pixels are doing it wrong. I am not sure adding a thin stroke will fix the issue, but I am thinking of another approach:
Convert all PNG pixels to SVG rects. Each rect would be 1.5 units wide, but the rects would be positioned 1 unit apart from each other, so there would never be a gap. The last row and last column of rects would of course be 1 unit wide instead of 1.5.
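(A rough sketch of that generator, assuming the Pillow library for reading the PNG; the file names are the ones from this thread and the 1.5-unit overlap is just an example:)

# Sketch: one SVG rect per PNG pixel, 1.5 units wide/high but placed on a
# 1-unit grid, so each rect overlaps its right and bottom neighbours and no
# seam can show. The last row/column stays 1 unit so nothing sticks out of
# the image. Assumes Pillow is installed.
from PIL import Image

img = Image.open("wolf3d_4.png").convert("RGB")
w, h = img.size

rects = []
for y in range(h):
    for x in range(w):
        r, g, b = img.getpixel((x, y))
        rw = 1.5 if x < w - 1 else 1
        rh = 1.5 if y < h - 1 else 1
        rects.append(f'<rect x="{x}" y="{y}" width="{rw}" height="{rh}" '
                     f'fill="#{r:02x}{g:02x}{b:02x}"/>')

svg = (f'<svg xmlns="http://www.w3.org/2000/svg" '
       f'viewBox="0 0 {w} {h}">\n' + "\n".join(rects) + "\n</svg>")
with open("wolf3d_4_overlap.svg", "w") as f:
    f.write(svg)

One caveat: because later rects paint over earlier ones, the extra half-unit of overlap is always covered by the neighbour to the right or below, so the visible boundary still sits on the grid line, just without an unpainted gap.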
Can you think of a better approach?
Fabien
Exactly Niko, and this problem is probably only solved by supersampling in some form. Perhaps we should reinvestigate OpenGL rendering or something? Another strategy that could work is to note where all the edge pixels are in the destination (rather than computing the blend, write a special colour or add to a separate bit plane) and re-render them at, say, 16x16.
njh
On 09/22/2014 10:55 PM, Nathan Hurst wrote:
Exactly Niko, and this problem is probably only solved by supersampling in some form.
There are other options for /mitigating/ this particular problem. For example, you could do some post-processing to detect the (rough) local orientation, or whether there are any totally transparent pixels in the neighbourhood, to guide the blending. Or you could try keeping track of a little bit more information than just the coverage, like the barycenter of the coverage (and/or even higher moments), and incorporate that information in the blending operation.
An actual (although fairly impractical) solution would essentially use boolean operations on the rendered shapes to split shapes into parts that fully overlap and parts that do not overlap at all, and then use the appropriate blend modes. This should give perfect results, but would require extremely fast and accurate boolean operations, and would massively complicate rendering.
Perhaps we should reinvestigate OpenGL rendering or something? Another strategy that could work is to note where all the edge pixels are in the destination (rather than computing the blend, write a special colour or add to a separate bit plane) and re-render them at, say, 16x16.
The easiest way of getting this type of stuff working would probably indeed be to start supporting GPU-based rendering. Although I'm not sure whether the specific mode you suggest (which indeed makes sense) already exists, there are a lot of anti-aliasing modes available already that we could basically use out of the box.
On Tue, Sep 23, 2014 at 10:42:26AM +0200, Jasper van de Gronde wrote:
On 09/22/2014 10:55 PM, Nathan Hurst wrote:
Exactly Niko, and this problem is probably only solved by supersampling in some form.
There are other options for /mitigating/ this particular problem. For example, you could do some post-processing to detect the (rough) local orientation, or whether there are any totally transparent pixels in the neighbourhood, to guide the blending. Or you could try keeping track of a little bit more information than just the coverage, like the barycenter of the coverage (and/or even higher moments), and incorporate that information in the blending operation.
I suspect that will end up being a lot more work than just supersampling. Remember, it has to be done per pixel. Adding an edge-pixel bitmap proof of concept to the existing renderer sounds like an afternoon project for someone who has a current build working and knows where to look. Allocate a bitmap (I think we render 256-pixel-square tiles, don't we? So we just need another 8 kB buffer; it would even fit in cache), modify the edge blending code to just set the bit, and clear all opaque internal pixels. Then we re-render those pixels again at a higher resolution.
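(Purely as a toy illustration of that two-pass idea, not Inkscape code: flag the partially covered pixels on the first pass, then re-render only those at NxN and average. The sample() function and the corner-based edge test here are stand-ins for the real rasteriser:)

# Toy two-pass sketch: sample(x, y) stands in for the real rasteriser and
# returns an (r, g, b) colour at a point in pixel coordinates.
N = 16  # subpixel grid used for flagged edge pixels

def render_tile(sample, width, height):
    # Pass 1: one sample per pixel, plus an "edge" bitmap; here the edge test
    # is faked by checking whether the pixel's corners disagree about colour.
    out = [[sample(x + 0.5, y + 0.5) for x in range(width)] for y in range(height)]
    edge = [[len({sample(x + dx, y + dy)
                  for dx in (0.01, 0.99) for dy in (0.01, 0.99)}) > 1
             for x in range(width)] for y in range(height)]
    # Pass 2: re-render only the flagged pixels at NxN and average the samples.
    for y in range(height):
        for x in range(width):
            if not edge[y][x]:
                continue
            acc = [0.0, 0.0, 0.0]
            for sy in range(N):
                for sx in range(N):
                    c = sample(x + (sx + 0.5) / N, y + (sy + 0.5) / N)
                    acc = [a + v for a, v in zip(acc, c)]
            out[y][x] = tuple(v / (N * N) for v in acc)
    return out

With a real rasteriser plugged in as sample(), only the flagged edge pixels pay the 16x16 cost.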
An actual (although fairly impractical) solution would essentially use boolean operations on the rendered shapes to split shapes into parts that fully overlap and parts that do not overlap at all, and then use the appropriate blend modes. This should give perfect results, but would require extremely fast and accurate boolean operations, and would massively complicate rendering.
Until we have robust and fast boolops, this isn't really an option :( And I doubt boolops will be competitive with a pixel-based approach.
The easiest way of getting this type of stuff working would probably indeed be to start supporting GPU-based rendering. Although I'm not sure whether the specific mode you suggest (which indeed makes sense) already exists, there are a lot of anti-aliasing modes available already that we could basically use out of the box.
GPU rendering is definitely a good idea, but much more work than the software solution.
njh
On Tue, Sep 23, 2014, at 12:03 PM, Nathan Hurst wrote:
I suspect that will end up being a lot more work than just supersampling. Remember, it has to be done per pixel. Adding an edge-pixel bitmap proof of concept to the existing renderer sounds like an afternoon project for someone who has a current build working and knows where to look. Allocate a bitmap (I think we render 256-pixel-square tiles, don't we? So we just need another 8 kB buffer; it would even fit in cache), modify the edge blending code to just set the bit, and clear all opaque internal pixels. Then we re-render those pixels again at a higher resolution.
My first reaction to that was "Eek!"; however, the rough scope-of-work details are reassuring.
Would this also contribute to addressing some of our text aliasing issues?
And finally, since I often export at 2x width and height and then use Gimp to scale the image down by half, I was considering adding that as an option for PNG export. I believe that adding supersampling to the export code is not too much work, so we might have a concrete testbed for a rough comparison.
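(For reference, the manual downscale half of that workflow looks roughly like this, assuming Pillow; the file names are placeholders and the 2x PNG is whatever Inkscape's normal export produced at double size:)

# Sketch of the downscale step, assuming Pillow. "wolf3d_4_2x.png" stands
# for a PNG already exported from Inkscape at twice the intended size.
from PIL import Image

big = Image.open("wolf3d_4_2x.png")
small = big.resize((big.width // 2, big.height // 2), Image.LANCZOS)
small.save("wolf3d_4_half.png")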
On Tue, Sep 23, 2014 at 12:34:58PM -0700, Jon A. Cruz wrote:
On Tue, Sep 23, 2014, at 12:03 PM, Nathan Hurst wrote:
I suspect that will end up being a lot more work than just supersampling. Remember, it has to be done per pixel. Adding an edge-pixel bitmap proof of concept to the existing renderer sounds like an afternoon project for someone who has a current build working and knows where to look. Allocate a bitmap (I think we render 256-pixel-square tiles, don't we? So we just need another 8 kB buffer; it would even fit in cache), modify the edge blending code to just set the bit, and clear all opaque internal pixels. Then we re-render those pixels again at a higher resolution.
My first reaction to that was "Eek!"; however, the rough scope-of-work details are reassuring.
Your reaction to everything is Ee*k :)
Would this also contribute to addressing some of our text aliasing issues?
I don't know, what are our text aliasing issues?
And finally, since I often export at 2x width and height and then use Gimp to scale the image down by half, I was considering adding that as an option for PNG export. I believe that adding supersampling to the export code is not too much work, so we might have a concrete testbed for a rough comparison.
Eeek!
njh
participants (7)
- Fabien Sanglard
- Jasper van de Gronde
- Jon A. Cruz
- Nathan Hurst
- Niko Kiirala
- su_v
- Tavmjong Bah