who thought that gcc should use 32 bits for a bool on PowerPC architectures. Apparently, gcc uses only 8 bits for a bool on Intel, and Cocoa typedefs BOOL as a char. Making bool 8 bits on PowerPC would let Cocoa change the typedef of BOOL to bool with no binary-compatibility consequences.
Why do I care? I'm updating my CLIPS-ObjectiveC bridge and want things labeled BOOL to take the symbol values TRUE and FALSE, like all the predicates produce in CLIPS. However, calling valueForKey: on a BOOL gives me an NSNumber with a type tag of 'c' (char). So I can't cleanly tell at runtime whether a value is meant to be a BOOL or a char unless it's a built-in bool.
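The ambiguity looks roughly like this. A minimal sketch, assuming a platform where BOOL is a signed char; the `Flags` class and its property names are made up for illustration:

```objc
#import <Foundation/Foundation.h>

@interface Flags : NSObject
@property BOOL enabled; // typedef'd as signed char here
@property char initial; // a genuine char
@end

@implementation Flags
@end

int main(void) {
    @autoreleasepool {
        Flags *f = [Flags new];
        f.enabled = YES;
        f.initial = 'x';
        // KVC boxes both scalars as NSNumbers tagged 'c', so the
        // bridge can't tell which one was declared as a BOOL.
        NSNumber *b = [f valueForKey:@"enabled"];
        NSNumber *c = [f valueForKey:@"initial"];
        NSLog(@"enabled: %s, initial: %s", b.objCType, c.objCType);
    }
    return 0;
}
```

A property declared as a built-in bool, on the other hand, comes back with the distinct 'B' encoding, which is why only those can be mapped to CLIPS symbols reliably.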
But who would prefer a 32-bit bool over an 8-bit BOOL?