We know how you feel about any language whose name starts with the letter C, but let's get on with it, OK? (Especially when helping someone...)
C, written in the early '70s, far antedated any serious concern with non-English computing. When the ANSI C standard was ratified in 1989, Unicode was only a glimmer in the eyes of its designers. Nonetheless, C's designers did give the problem some thought, with the advent of locale-specific functions and so on. Remember that the state of the art then was Latin-N, JIS, and Shift-JIS. (Who cared about China? It was so backwards, it'd never get on board with ubiquitous computing.)
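To make that concrete, here's a minimal sketch of the locale machinery ANSI C gave you, shown here through C++'s <clocale> (which is just C's <locale.h> under another name). Notice there's nothing Unicode-aware about any of it:

#include <clocale>
#include <cstdio>

int main() {
    // Switch every locale category to whatever the user's environment says.
    std::setlocale(LC_ALL, "");

    // localeconv() reports locale-specific formatting conventions,
    // e.g. the decimal-point character. This is about as far as 1989 went.
    const std::lconv* conv = std::localeconv();
    std::printf("decimal point in this locale: %s\n", conv->decimal_point);
    return 0;
}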
Well, then came the internet, and damn if talking to and between folks who don't write in Latin characters didn't become important. So did C++. C++ builds locale handling into its standard, via std::locale and its facets, and it made wchar_t a distinct built-in type specifically for handling the (then) two-byte code points of the nascent Unicode and the other wide-character encodings that were becoming more widely used. That's why, as you pointed out, "If you are working in C++, you just use the string object and the right things happen". That wouldn't be the case if those xenophobic white men had behaved in the way you described, now would it?
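For example, here's a minimal sketch of what that facet machinery looks like, assuming a wide-capable terminal and a sane environment locale:

#include <iostream>
#include <locale>
#include <string>

int main() {
    // Pick up the user's preferred locale ("" = from the environment);
    // this can throw if the environment names a locale the runtime doesn't know.
    std::locale user_locale("");
    std::wcout.imbue(user_locale);

    std::wstring greeting = L"grüße";   // wide string literal: wchar_t code units

    // The ctype<wchar_t> facet does locale-aware classification/conversion.
    const auto& ct = std::use_facet<std::ctype<wchar_t>>(user_locale);
    for (auto& ch : greeting)
        ch = ct.toupper(ch);

    std::wcout << greeting << L'\n';
    return 0;
}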
And no, they did not retrofit the existing legacy C library, explicitly to maintain backwards compatibility. Right decision? I dunno... but since there is a workaround, it seems like an OK choice at this point.
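By "workaround" I mean the bolt-on route, as I read it: the narrow str* functions were left alone, and multibyte-to-wide conversions grew up beside them instead. Roughly, and assuming a UTF-8-ish environment locale:

#include <clocale>
#include <cstdlib>
#include <cstdio>

int main() {
    std::setlocale(LC_ALL, "");        // pick up the user's (e.g. UTF-8) locale

    const char* narrow = "grüße";      // legacy narrow string, multibyte-encoded
    wchar_t wide[64];

    // mbstowcs() converts the multibyte string to wide characters under the
    // current locale; it returns (size_t)-1 on an invalid byte sequence.
    std::size_t n = std::mbstowcs(wide, narrow, 64);
    if (n == static_cast<std::size_t>(-1)) {
        std::puts("conversion failed for this locale");
        return 1;
    }
    std::printf("%zu wide characters\n", n);
    return 0;
}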
[Edit: Fixed typos so the 2nd paragraph made some sense]