
kswapd / low memory problem - CPU builds up locks on low mem
The following note describes what is occurring intermittently on our boxes when one of our vendor-supplied applications starts up. It appears there is a fix, but no one seems to be able to tell me which kernel the fix is in.

We are running Red Hat Advanced Server 2.1 Kernel 2.4.9-e40 and need to know what to upgrade to.

Also, I know about low memory in old Windows/MS-DOS machines, and on OS/390, but what is low memory in Linux?

See note from IBM:

You're showing all the signs of the "kswapd" bug present in v2.4 kernels.
Well, kswapd gets blamed for the problem. It is actually caused by using up
nearly all of low memory with the buffer header and/or inode slab caches.
(cat /proc/slabinfo when kswapd is running >= 99% and see if those two caches
have grown extra large.) Anyway, kswapd gets triggered because a zone has
hit its low memory threshold. But kswapd can't swap buffer headers or
inodes. The situation is hopeless, yet kswapd presses on anyway, scouring
every memory zone for pages to free, all the while holding important memory
locks.


Meanwhile, every program that wants more memory will spin on those locks.
That's what the .text.lock.* entries are: the out-of-line spin code for each
lock; it is used when the lock is already owned by some other CPU.


Net result: a computer that runs like molasses in January.


Of the several proposed patches for this bug, Andrea Arcangeli's and Andrew
Morton's worked best in our tests. I believe that Andrea was going to add in
some of Andrew's code for the final fix. The kernel that is on the SLES 8 /
UL 1.0 gold CDs works fine so I assume the Vulcan Mind Meld on the patches
went well.
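The slabinfo check the IBM note suggests can be scripted. A minimal sketch, assuming the 2.4-era "slabinfo - version: 1.1" column layout (cache name, active objects, total objects, object size, ...); the sample data below stands in for a real /proc/slabinfo:

```shell
# Estimate memory held by the buffer_head and inode caches.
# Sample data stands in for /proc/slabinfo (2.4 layout assumed:
#   name  active-objs  total-objs  obj-size  ...)
cat > /tmp/slabinfo.sample <<'EOF'
slabinfo - version: 1.1
inode_cache      210000 210060 480 26257 26258 1
buffer_head      830000 830080  96 20752 20752 1
dentry_cache      45000  46000 128  1534  1534 1
EOF

# total objects * object size, reported in KB
awk '/^(buffer_head|inode_cache) /{
    printf "%-12s %8d KB\n", $1, $3 * $4 / 1024
}' /tmp/slabinfo.sample
```

On a live box, point the awk at /proc/slabinfo itself; if those two caches account for hundreds of megabytes while kswapd spins, that matches the symptom the note describes.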

Just wanted to see if anyone on this forum might know.

Glen Austin
Two uses of low memory that I'm aware of.
Memory in Linux is potentially divided up into many different banks depending on system type and amount of memory. Low/high memory is used in two different contexts on 32-bit x86; other platforms may be different.

The first, which doesn't seem to be related to the problem you mention, is the low memory that occurs on 32-bit x86 computers with more than 4GB installed. Because the CPU can only address 4GB at a time, it has to play games to map and swap various chunks into the contiguous 4GB that it can address. The part that can always be addressed is sometimes called low memory and the rest high memory, even if address mapping means that it isn't contiguous.
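On a kernel built with highmem support, that split is visible in /proc/meminfo. A sketch, with made-up sample values standing in for a real ~4GB box (the field names are the real ones):

```shell
# The low/high split as /proc/meminfo reports it on a 32-bit
# highmem kernel. Sample values stand in for a real ~4GB box.
cat > /tmp/meminfo.sample <<'EOF'
MemTotal:      4091932 kB
MemFree:         82144 kB
HighTotal:     3211196 kB
HighFree:        64520 kB
LowTotal:       880736 kB
LowFree:         17624 kB
EOF

grep -E '^(Low|High)(Total|Free)' /tmp/meminfo.sample
```

On a live machine, grep /proc/meminfo directly; LowFree collapsing toward zero while HighFree stays large is the signature of lowmem pressure.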

The second problem is a bit of idiocy in PC design dating back to the 386 days. 32-bit x86 systems can only use Direct Memory Access to and from the lower 16MB of memory. Because modern disk access depends on DMA, the buffers for reading and writing to disk have to be in the lower 16MB. If all of that memory is locked down, then nothing can be swapped to disk, because no buffer for the transfer can be freed.

Last I heard, this problem hadn't been made impossible, but it is very unlikely with recent kernels.

Jay
Part right, part wrong.
You're part wrong and part right. Here's my explanation of the "fun" of high and low memory.

You're wrong about the 16MB DMA limit. DMA to most hardware passed the 16MB limit long ago; that limit is a legacy of ISA-based hardware. Any decent integrated or PCI hardware, be it IDE controllers, network cards, or anything else, will be able to address 4GB of bus address space. There is defective PCI hardware with weird addressing limitations, usually sound cards, but it's very rare. And most "server" class devices can address > 4GB.

For 32-bit systems, the highmem/lowmem problem is caused by the kernel mapping only the first ~900MB of memory into a readily accessible kernel address space. This region is the "lowmem". For any memory above this, there is no permanent kernel address; the kernel must play games creating and deleting temporary mappings to access memory above this point.
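The usual arithmetic behind that ~900MB figure, as a sketch (assuming the common 3G/1G user/kernel split and the ~128MB vmalloc reserve; both are compile-time choices, so your kernel may differ):

```shell
# Why "~900MB": the kernel claims the top 1GB of virtual address
# space and reserves ~128MB of it for vmalloc/ioremap, leaving
# ~896MB permanently (directly) mapped -- the "lowmem".
kernel_va=$((1 << 30))          # 1GB kernel virtual window
vmalloc_reserve=$((128 << 20))  # ~128MB VMALLOC_RESERVE
direct_map_mb=$(( (kernel_va - vmalloc_reserve) >> 20 ))
echo "directly mapped lowmem: ${direct_map_mb} MB"
```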

This means that for many memory allocations, the kernel will only use low memory. And having more high memory puts more pressure on lowmem resources, due to the kernel using parts of low memory to keep track of what it's doing with high memory!

And the use of highmem undermines some of the natural balancing in Linux's memory management. Usually, one of the kernel's "easiest" ways to free memory in a pinch is to write out or discard cached file data, or to write program memory to swap space. But in a kernel with highmem, those are usually allocated in high memory! This makes it easy for the kernel to fill up low memory with hard-to-free data structures (the inodes and buffer heads the email mentions) without realizing the danger, leaving it no way to quickly free memory when memory is tight.

So that's a partial explanation of why high memory has given the Linux kernel fits throughout much of 2.4. I hope it made some sense and didn't get too bogged down in jargon.

Dave "Lord Beatnik"
