You're part wrong and part right. Here's my explanation of the "fun" of high and low memory.

You're wrong about the 16MB DMA limit. DMA on most hardware passed the 16MB limit long ago; that limit is a legacy of ISA-based hardware. Any decent integrated or PCI hardware, be it an IDE controller, a network card, or anything else, can address the full 4GB of bus address space. There is some defective PCI hardware with weird addressing limitations, usually sound cards, but it's very rare. And most "server" class devices can address more than 4GB.
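
To make that concrete, here's a rough sketch of how a driver declares its addressing limit. The probe function and its details are invented for illustration, but pci_set_dma_mask() is the real call drivers use to tell the DMA layer what they can reach:

#include <linux/pci.h>

static int my_probe(struct pci_dev *pdev)
{
        /* A sane PCI device: tell the DMA layer it can reach all 4GB. */
        if (pci_set_dma_mask(pdev, 0xffffffffULL))
                return -EIO;

        /*
         * An ISA-style device would be stuck declaring a 24-bit (16MB)
         * mask, pci_set_dma_mask(pdev, 0x00ffffffULL), and would then
         * need its DMA buffers allocated from ZONE_DMA with GFP_DMA.
         */
        return 0;
}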

For 32-bit systems, the highmem/lowmem problem is caused by the kernel only mapping the first ~900MB of memory into its readily accessible kernel address space. That region is "lowmem". Memory above this has no permanent kernel address, so the kernel must play games, creating and tearing down temporary mappings, to access it.
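
Those "games" are the kmap()/kunmap() calls. A minimal sketch (the helper function is made up; kmap() and kunmap() are the real interface):

#include <linux/highmem.h>
#include <linux/mm.h>

/* Read the first byte of a page that may live in high memory. */
static unsigned char peek_first_byte(struct page *page)
{
        unsigned char *vaddr;
        unsigned char c;

        /*
         * For a lowmem page this just returns its existing permanent
         * mapping; for a highmem page it creates a temporary one.
         */
        vaddr = kmap(page);
        c = vaddr[0];
        kunmap(page);   /* tear the temporary mapping down again */

        return c;
}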

This means that for many memory allocations the kernel will only use low memory. And having more high memory puts more pressure on lowmem, because the kernel uses parts of low memory to keep track of what it's doing with high memory!
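
Roughly, the split looks like this (the function is just for illustration; kmalloc(), alloc_page(), and the GFP flags are the real interfaces):

#include <linux/mm.h>
#include <linux/slab.h>

static void allocation_example(void)
{
        /*
         * kmalloc()/GFP_KERNEL memory must come from lowmem: the kernel
         * needs a permanent virtual address for it.
         */
        void *meta = kmalloc(128, GFP_KERNEL);

        /*
         * Page cache and user pages can ask for highmem, because they
         * are only touched through temporary mappings or user space.
         */
        struct page *data = alloc_page(GFP_HIGHUSER);

        kfree(meta);            /* kfree(NULL) is safe */
        if (data)
                __free_page(data);
}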

And the use of highmem undermines some of the natural balancing in Linux's memory management. Usually, one of the kernel's "easiest" ways to free memory in a pinch is to write out or discard cached file data, or to push program memory out to swap. But in a kernel with highmem, those pages usually live in high memory! So the kernel can fill up low memory with hard-to-free data structures (the inodes and buffer heads the email mentions) without realizing the danger, and then has no quick way to free memory when things get tight.
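
The kernel does track the two pools separately, which is how lopsided things can get visible: si_meminfo() is the real interface, while the helper and its printout below are just made up to show the idea.

#include <linux/kernel.h>
#include <linux/mm.h>

/* Hypothetical helper: show how much of memory is high vs. total. */
static void report_zone_pressure(void)
{
        struct sysinfo si;

        si_meminfo(&si);
        printk(KERN_INFO "total %lu pages, high %lu pages, free high %lu\n",
               si.totalram, si.totalhigh, si.freehigh);

        /*
         * freehigh can be huge while lowmem is completely clogged with
         * inodes and buffer heads -- exactly the situation above.
         */
}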

So that's a partial explanation of why high memory has given the Linux kernel fits throughout much of 2.4. I hope it made some sense and didn't get too bogged down in jargon.

Dave "Lord Beatnik"