
On Sat, Aug 30, 2014, at 12:09 PM, Nathan Hurst wrote:
> On Sat, Aug 30, 2014 at 11:07:02AM -0700, Jon A. Cruz wrote:
> > Those are also an optimization for space. There is an expectation that the compiler will pack together all the members specified to use a single bit into one int.
> > Of course that then gives a performance penalty.
> These days space optimisation is performance optimisation - if everything is in the cache things go much faster. Setting and clearing bits takes no more time than setting and clearing words in practice.
> (I agree with everything else you said)
Except... on larger-word architectures, the extra fetching and bit twiddling can take longer. It depends on the number of fields, how they pack, etc.
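To make the layout and the per-access work concrete, here's a minimal sketch (made-up struct names, not anything from our code). On a typical ABI the bitfield version packs its three flags into one unsigned int, while the bool version gives each flag its own byte:

    #include <cstdio>

    // Hypothetical flag holders, purely for illustration.
    struct PackedFlags {            // bitfields: all three flags share one unsigned int
        unsigned int visible   : 1;
        unsigned int sensitive : 1;
        unsigned int dirty     : 1;
    };

    struct PlainFlags {             // plain bools: one byte per flag
        bool visible;
        bool sensitive;
        bool dirty;
    };

    int main() {
        std::printf("sizeof(PackedFlags) = %zu\n", sizeof(PackedFlags)); // typically 4
        std::printf("sizeof(PlainFlags)  = %zu\n", sizeof(PlainFlags));  // typically 3

        PackedFlags p{};
        p.dirty = 1;     // a read-modify-write of the whole containing int
        PlainFlags q{};
        q.dirty = true;  // a single one-byte store
        return 0;
    }

The size win only adds up once there are enough flags (or enough instances); the extra masking on every access is the flip side.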
Since I've seen this cut both ways, I'd be interested in how it works on current CPUs in 32-bit and 64-bit modes for our use cases. I've actually measured performance degradation in the past when using bitfields, but I understand it's a context-sensitive issue. I'd also be a bit surprised if changing our structs from bitfields to booleans would grow them enough to push them out of the cache.
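If anyone wants to measure it, the kind of quick-and-dirty test I have in mind is roughly this (hypothetical structs again; a real comparison should use our actual structs and access patterns, built for both 32-bit and 64-bit):

    #include <chrono>
    #include <cstdio>

    struct PackedFlags { unsigned int a : 1, b : 1, c : 1; };
    struct PlainFlags  { bool a, b, c; };

    // Toggle the flags repeatedly and time it. 'volatile' keeps the
    // compiler from optimizing the whole loop away.
    template <typename Flags>
    static double toggle_loop(long iterations) {
        volatile Flags f{};
        auto start = std::chrono::steady_clock::now();
        for (long i = 0; i < iterations; ++i) {
            f.a = !f.a;
            f.b = !f.b;
            f.c = !f.c;
        }
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(stop - start).count();
    }

    int main() {
        const long n = 100000000L;  // adjust to taste
        std::printf("bitfields: %.3f s\n", toggle_loop<PackedFlags>(n));
        std::printf("bools:     %.3f s\n", toggle_loop<PlainFlags>(n));
        return 0;
    }

That only exercises the read/write path in isolation; testing Nathan's cache-footprint point would mean walking a large array of the real structs instead.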
However... even if performance is a non-issue, there are still other reasons to favor booleans over bitfields. These include avoiding use of uninitialized memory, which can cause noise and distortion with tools like Valgrind.
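A contrived sketch of the sort of noise I mean (again, made-up code, not ours): only one field is ever written, but because the three flags share a word, comparing the structs still reads bits that were never initialized, and memcheck will typically complain:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    struct PackedFlags { unsigned int visible : 1, sensitive : 1, dirty : 1; };

    int main() {
        // malloc'd memory starts out uninitialized as far as Valgrind is concerned.
        PackedFlags *a = static_cast<PackedFlags *>(std::malloc(sizeof(PackedFlags)));
        PackedFlags *b = static_cast<PackedFlags *>(std::malloc(sizeof(PackedFlags)));
        if (!a || !b)
            return 1;
        a->visible = 1;  // the read-modify-write drags along the never-set neighbouring bits
        b->visible = 1;
        // Valgrind typically reports "Conditional jump or move depends on
        // uninitialised value(s)" here, even though the one field we care
        // about was set in both structs.
        if (std::memcmp(a, b, sizeof(PackedFlags)) == 0)
            std::puts("equal");
        std::free(a);
        std::free(b);
        return 0;
    }

Note that in this contrived case even setting all three bitfields wouldn't fully quiet it, since the unused bits of the shared int stay undefined; with plain bools that are each assigned before use, memcmp sees fully defined bytes.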