When I am typing text in this forum, who receives the keyboard event and translates it into the right character on the screen: is it the kernel or Mozilla?
The keyboard sends a set of signals, which vary a bit by keyboard model, when you press a key. The OS takes those signals and translates them into a consistent "keycode". If it's not a command that the OS processes itself, it is then passed on to the active application. The application can then do whatever processing it wants. When it wants to display text, it passes a string of characters to the display layer, along with font and other display information. The display layer then builds the actual bitmap from that information.
In the simplest case, the application can take the keycode passed by the OS and tack it onto the string it passes to the display layer. But that isn't always true.
Do all the users of this forum type in the same character set? I don't think so, yet to view this forum we all (I am only guessing) tell our browser to open this site in the same character set.
Character encodings and font sets are very ugly in HTML, as the early versions were an English-centric de facto standard. XHTML cleans up most of these issues. Web browsers have to use some guesswork and follow some unwritten conventions for handling these issues in HTML. The browser has to peek at the web page and try to figure out what the encoding is in many cases. In practice, in HTML, ISO-8859-1 is the default if the browser can't find anything else.
On the upload side, the browser is responsible for encoding the text in a standard format before sending the form data to the server.
Okay, I won't lie, I read something like this: the first 128 characters (the first 7 bits) are common to many character sets, but the second 128 characters differ from set to set.
I think even Unicode uses some trick to stay compatible with ASCII chars.
Okay, surprise question:
What the heck is ASCII?
ASCII is an old standard: a 7-bit encoding defining 128 characters. Many, but not all, character encodings follow the ASCII encodings for English letters and numbers. This allows many programs to work correctly with basic English even if they don't handle encoding correctly.
Another problem banging my head: when I used to write those silly scanf/printf programs in C, it didn't seem that the compiler bothered about the character set.
C is an old, low-level language. It doesn't really deal with these issues. As far as C is concerned, a string is simply a sequence of bytes. C pretty much assumes that keycodes passed by the OS = string codes = display codes = 8-bit bytes. You can work in other encodings in C, but then you have to use functions that understand your encoding.
Does Linux have default char set values? Why?
Let's put it differently: does a system have a global char set? Why?
Not really. The OS does have to set some standard for communication between the OS and the applications, but that is independent of what is displayed or what is stored in files. Most OSes use ASCII for communication between the OS and applications. Windows NT and later can use a Unicode system for some interfaces, but I don't know the specifics.
Jay