I'm trying to allocate more shared memory to run a database.
This is on RedHat 7.1.
When I edit sysctl.conf through the Linux GUI tool, it keeps resetting my maximum shared memory segment size down to 32 MB, but it seems I can set the total shared memory to any value I like.
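For reference, this is roughly what I'm trying to end up with in /etc/sysctl.conf -- the values below are just placeholders I picked for illustration, not what the GUI tool actually writes out:

    # /etc/sysctl.conf (my guess at the relevant entries)
    # maximum size of a single shared memory segment, in bytes (256 MB here)
    kernel.shmmax = 268435456
    # total shared memory, in pages (this one the tool seems happy to accept)
    kernel.shmall = 2097152
    # maximum number of shared memory segments system-wide
    kernel.shmmni = 4096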
Is there a reason for this? If I need a larger shared memory segment, am I going to have to do some ugly hacking deep inside the kernel code to get it? I really don't want to go there...
Also, my database logs are telling me that I'm running out of semaphores, and when I try to stop the database it tells me there aren't enough resources to stop the DB. In the end I had to shut down some other processes that were using shared memory before the database would shut down.
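From poking around, it looks like the semaphore limits on Linux are all rolled into a single kernel.sem entry (SEMMSL, SEMMNS, SEMOPM, SEMMNI in that order) rather than separate tunables, and ipcs should show the current limits and usage. Is something like this the right idea? The numbers here are just guesses:

    # show current semaphore limits and existing semaphore sets
    ipcs -l
    ipcs -s

    # in /etc/sysctl.conf: max semaphores per set, max semaphores system-wide,
    # max operations per semop call, max number of semaphore sets
    kernel.sem = 250 32000 32 128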
I'm used to the HP/Sun way of doing this with names like shmmax, msgseg, msgmap, msgmni, msgsz, semmni, semmap, semmns, semmnu, shmseg, shmmni.
On HP-UX we used SAM to edit the values and then rebooted the system. On Sun there was a command-line tool (sysconf?).
I see sysctl.conf on Linux, but is there something more I'm missing here? I also raised the signals setting from 1024 to 2048.
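In case it helps to see what I'm after, this is my guess at how the values would be changed at runtime on Linux without a reboot (taken from the man pages, so please correct me if I've got it wrong):

    # set the maximum segment size on the fly (256 MB), then verify it took
    sysctl -w kernel.shmmax=268435456
    cat /proc/sys/kernel/shmmax

    # re-read /etc/sysctl.conf and apply everything in it
    sysctl -p /etc/sysctl.conf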
Glen Austin