
On 09/22/2014 10:55 PM, Nathan Hurst wrote:
> Exactly Niko, and this problem is probably only solved by super-sampling in some form.
There are other options for /mitigating/ this particular problem. For example, you could do some post-processing to detect the (rough) local orientation, or whether there are any totally transparent pixels in the neighbourhood, to guide the blending. Or you could try keeping track of a little bit more information than just the coverage, like the barycenter of the coverage (and/or even higher moments), and incorporate that information in the blending operation.
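To make the moment-tracking idea a bit more concrete, here is a rough sketch (all names are hypothetical, nothing like this exists in our renderer) of a coverage cell that also accumulates the first moments, plus a crude heuristic that uses the barycenters to distinguish abutting edges from genuine overlap:

```cpp
// All names here are illustrative; nothing like this exists in the renderer.
struct CoverageCell {
    double area = 0.0;  // fraction of the pixel covered, in [0, 1]
    double mx = 0.0;    // area-weighted x offset of the covered part
    double my = 0.0;    // area-weighted y offset of the covered part

    // Accumulate a covered sub-region with area a and centroid (cx, cy),
    // both in pixel-local coordinates ([0,1] x [0,1]).
    void add(double a, double cx, double cy) {
        area += a;
        mx   += a * cx;
        my   += a * cy;
    }

    // Barycenter of the covered part (pixel center if nothing is covered).
    void barycenter(double &cx, double &cy) const {
        if (area <= 0.0) { cx = cy = 0.5; return; }
        cx = mx / area;
        cy = my / area;
    }
};

// Crude heuristic: if two fragments cover the same pixel but their
// barycenters lie on opposite sides of the pixel center and the areas
// roughly sum to at most one, the edges probably abut (coverages should
// add) rather than overlap (coverages should multiply).
bool likely_abutting(const CoverageCell &a, const CoverageCell &b) {
    double ax, ay, bx, by;
    a.barycenter(ax, ay);
    b.barycenter(bx, by);
    double dot = (ax - 0.5) * (bx - 0.5) + (ay - 0.5) * (by - 0.5);
    return dot < 0.0 && a.area + b.area <= 1.0 + 1e-6;
}
```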
An actual (although fairly impractical) solution would essentially use boolean operations on the rendered shapes to split shapes into parts that fully overlap and parts that do not overlap at all, and then use the appropriate blend modes. This should give perfect results, but would require extremely fast and accurate boolean operations, and would massively complicate rendering.
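Schematically, that pipeline would look something like the sketch below. `Shape` and the boolean ops are stand-ins for whatever exact primitives we would need; the empty stub bodies are only there so it compiles:

```cpp
#include <vector>

// Stand-ins for the (hard) exact primitives; bodies are stubs.
struct Shape { std::vector<double> pts; /* closed path, details omitted */ };
Shape intersection(const Shape &, const Shape &) { return {}; }  // exact bool op
Shape difference(const Shape &, const Shape &)   { return {}; }  // exact bool op
void render_blended(const Shape &) {}  // normal antialiased blend
void render_opaque(const Shape &)  {}  // straight write, no blending

// Composite `top` over `bottom` without conflation artifacts: split the
// pair into a fully-overlapping part and two non-overlapping parts, then
// use the appropriate operation for each. Every output pixel belongs to
// exactly one region, so coverages are never combined incorrectly.
void composite_exact(const Shape &bottom, const Shape &top) {
    Shape both     = intersection(bottom, top);  // covered by both shapes
    Shape only_bot = difference(bottom, top);    // bottom alone
    Shape only_top = difference(top, bottom);    // top alone

    render_blended(only_bot);  // blends against the background only
    render_opaque(both);       // top's colour wins outright (if opaque)
    render_blended(only_top);
}
```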
> Perhaps we should reinvestigate OpenGL rendering or something? Another strategy that could work is to note where all the edge pixels are in the destination (rather than computing the blend, write a special colour or add to a separate bit plane) and re-render them at, say, 16x16.
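Your two-pass idea could look roughly like this: pass 1 flags partially covered pixels in a separate bit plane instead of blending them, and the resolve step below re-renders only the flagged pixels at 16x16 subpixel resolution (`shade_sample` is a hypothetical stand-in for evaluating the whole scene at one exact point):

```cpp
#include <cstdint>
#include <vector>

struct Color { float r, g, b, a; };

// Hypothetical stand-in: evaluate the whole scene at one exact sample
// point. In a real renderer this would re-run rasterization/shading.
Color shade_sample(double /*x*/, double /*y*/) { return {0, 0, 0, 0}; }

// Pass 2: re-render only the pixels that pass 1 flagged in the edge
// mask, averaging a 16x16 grid of subpixel samples per flagged pixel.
void resolve_edge_pixels(const std::vector<uint8_t> &edge_mask,
                         std::vector<Color> &framebuffer,
                         int width, int height) {
    const int S = 16;  // 16x16 subsamples, as suggested
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (!edge_mask[y * width + x]) continue;  // interior pixel: keep
            Color acc{0, 0, 0, 0};
            for (int sy = 0; sy < S; ++sy)
                for (int sx = 0; sx < S; ++sx) {
                    Color c = shade_sample(x + (sx + 0.5) / S,
                                           y + (sy + 0.5) / S);
                    acc.r += c.r; acc.g += c.g; acc.b += c.b; acc.a += c.a;
                }
            const float n = float(S * S);
            framebuffer[y * width + x] = {acc.r / n, acc.g / n,
                                          acc.b / n, acc.a / n};
        }
    }
}
```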
The easiest way of getting this type of stuff working would probably indeed be to start supporting GPU-based rendering. Although I'm not sure whether the specific mode you suggest (which does make sense) already exists, there are already plenty of anti-aliasing modes that we could basically use out of the box.
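For example, with plain GLFW/OpenGL you can request a multisampled default framebuffer in a couple of lines; whether the stock MSAA resolve actually behaves the way we want for this problem is exactly what we'd have to test:

```cpp
#include <GLFW/glfw3.h>  // pulls in the platform GL header by default

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_SAMPLES, 8);  // ask for an 8x MSAA framebuffer
    GLFWwindow *win = glfwCreateWindow(640, 480, "msaa test", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);
    // GL_MULTISAMPLE is on by default in core profiles; be explicit anyway.
    glEnable(GL_MULTISAMPLE);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene; edge coverage is resolved per-sample ...
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```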