4 Apr 2011, 7:09 a.m.
On 2011-04-03 20:56, Johan Engelen wrote:
On 2-4-2011 17:39, Jasper van de Gronde wrote:
However, the procedure does make use of the internal representation of a double, so I'm a little worried about it breaking on different architectures.
If you use a non-Intel architecture, it would be great if you could try out the Gaussian blur currently in bzr (rev. 10144) and see if anything goes wrong.
Perhaps you can #ifdef the faster implementation of rounding?
It already checks for big- and little-endian byte order using the byte-order macros from glib (and it even falls back to the usual code if the architecture is neither big- nor little-endian), but I have no way of testing on a big-endian machine at the moment.