Historically speaking...
...you'd choose the precision of the floating-point operations at compile time, and the compiler would attempt to enforce that precision. In a lot of old embedded apps you'd run everything as single precision (32 bits) so every value could fit into a 4-byte slot. When the FPUs came out, they leapfrogged double precision (64 bits) and went straight to extended ("double extended") precision, 80 bits. Now, 80 bits fits in 10 bytes, but most modern processors prefer at least 4-byte alignment, so an 80-bit value is usually stored in 12 bytes (with 16 spare, unused bits).
I know that on the MC6888x you can move values in and out of the FPU at whatever precision you like, so a compiler writer can enforce extended precision throughout if desired. Of course, this eats processor cycles and memory, since each access grows from 8 to 12 bytes.
One thing I initially couldn't find was the counterpart in the Intel x87 instruction set that lets you FLD (load) and FST (store) in extended precision. It does exist, and has since the 8087: FLD accepts an 80-bit (10-byte "tbyte") memory operand, and on the store side FSTP does as well (only the popping form supports the 80-bit format). The default memory format is the 8-byte double unless otherwise specified.
With that said, the 80-bit extended-precision real format (10 bytes of payload, usually padded to 12) was not in K&R C. ANSI C (C89) added long double as a base type, though the standard doesn't require it to be the 80-bit format; C99 keeps it and rounds out the library support (the long double math functions such as sqrtl). The C++ standard has long double as well. From a pure C perspective that's no surprise, since the C++ standard is very much influenced by C and C++ compilers compile a lot of C code.