Post #336,529
12/1/10 11:01:46 PM
|
Need help analyzing web server performance
A little after noon today my web server started going crazy: returning PHP errors, not finding pages, etc.
The resource graph from the web host showed a massive spike:
http://cooklikeyourg...s/20101201-dh.png
I didn't see any heavy usage on Google Analytics, so I submitted a ticket to ask what was up. Got this response:
As for your heavy usage it's all coming from cooklikeyourgrandmother.com:
:/home/drook/logs/cooklikeyourgrandmother.com/http# awk '{print $7}' access.log | cut -d? -f1 | sort | uniq -c | sort -nk1 | tail -n10
2740 /wp/wp-content/themes/htclyg/style.css
2750 /wp/wp-content/plugins/wp-polls/polls-css.css
2751 /wp/wp-content/themes/thematic/library/styles/plugins.css
2754 /wp/wp-content/themes/thematic/library/styles/default.css
2758 /wp/wp-content/themes/thematic/library/styles/reset.css
2762 /wp/wp-content/themes/thematic/library/styles/images.css
2763 /wp/wp-content/themes/thematic/library/styles/typography.css
2767 /wp/wp-content/themes/thematic/library/layouts/2c-r-fixed.css
2875 /images/nsr-widget-170.png
4724 /favicon.ico
You're getting close to 185k hits, so you definitely need to upgrade
your RAM, as your server is crashing due to not having sufficient RAM. You
want to increase your RAM and also look into optimizing your WordPress
install. Should you need anything else, please let me know and I
will be glad to help you out.
Whenever I run 'top' on the server I never see anything unusual. Last time this happened they said that I kept having processes that exceeded my memory cap and their system was killing the processes. This happens too quickly to appear in their chart, and I have no way to identify what's doing it.
This time, obviously it did show up in their chart.
For comparison, here's what I saw from Google Analytics for the day.
http://cooklikeyourg...s/20101201-ga.png
The dip coincides with the spike on the usage graph, which means pages weren't actually being served, so the GA scripts never loaded.
So ... with their management system killing scripts that use too much memory, but not capturing any details of what's being killed ... what do I need to look at to tell what's happening?
--
Drew
|
Post #336,534
12/2/10 3:46:01 AM
|
Wordpress is in PHP, isn't it?
If so, it should log memory failures in the web server error log, though that can't be guaranteed, especially if an external nanny process is doing the killing.
Those access logs should also tell you which pages are generating all those CSS requests.
Wade.
Q:Is it proper to eat cheeseburgers with your fingers? A:No, the fingers should be eaten separately.
|
Post #336,542
12/2/10 9:26:47 AM
|
Yes it is
There are also some process memory settings, like what Greg asked about. I'll have to see what I'm able to muck around with.
As for the CSS, those requests happen on every page load. Those files are all part of the WordPress theme.
--
Drew
|
Post #336,548
12/2/10 3:15:36 PM
|
PHP Memory limits.
The default in the php.ini file is (IIRC) 8MB. Since that is minuscule in the world of PHP programming, I would expect Wordpress to raise it. However, it can be raised at any time by a PHP page, so you should probably look in the Wordpress settings. Failing that, I'd be grepping for "ini_set"; I wouldn't put it past Wordpress programmers to quietly set no limit (-1).
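For instance, any page can override the limit at runtime. A minimal sketch (the 64M value is just an example):

<?php
ini_set('memory_limit', '64M');   // overrides php.ini for this request only
echo ini_get('memory_limit');     // confirm: prints 64M
// ini_set('memory_limit', '-1'); // -1 would remove the limit entirely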
Beyond that, the next port of call would be a Wordpress forum. I know how to program a site to use less memory, but not how to tell Wordpress to do so.
Wade.
Q:Is it proper to eat cheeseburgers with your fingers? A:No, the fingers should be eaten separately.
|
Post #336,560
12/2/10 5:32:00 PM
|
What if there are multiple, conflicting calls?
wp-admin/includes/file.php: @ini_set('memory_limit', '256M');
wp-content/plugins/google-sitemap-generator/sitemap.php: @ini_set('memory_limit', '64M');
wp-content/plugins/google-analyticator/google-analytics-stats-widget.php: @ini_set('memory_limit', '96M');
wp-settings.php:
    if ( !defined('WP_MEMORY_LIMIT') )
        define('WP_MEMORY_LIMIT', '32M');
    if ( function_exists('memory_get_usage') && ( (int) @ini_get('memory_limit') < abs(intval(WP_MEMORY_LIMIT)) ) )
        @ini_set('memory_limit', WP_MEMORY_LIMIT);
Hmm, wonder what I can set that to in wp-settings.
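Looks like, since wp-settings.php only defines WP_MEMORY_LIMIT when it isn't already defined, the place to override it would be wp-config.php, before wp-settings.php loads. A sketch, with 64M as an arbitrary example value:

<?php
// in wp-config.php, before wp-settings.php runs:
define('WP_MEMORY_LIMIT', '64M'); // wp-settings.php will then ini_set this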
--
Drew
|
Post #336,582
12/3/10 4:23:47 AM
|
Two answers.
If they are called in sequence, the latter one wins. But multiple calls in a codebase usually mean there are multiple include paths. Some pages get one, others get another. Looking at the filenames, I'd say pages get either one default or the other except for the two specific pages with their own values.
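A trivial sketch of the sequencing:

<?php
ini_set('memory_limit', '256M');
ini_set('memory_limit', '64M');
echo ini_get('memory_limit'); // prints 64M -- the latter call wins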
32M is a reasonable default size for a site, though I wish it was lower. It suggests some effort has been made at some point to reduce the codebase's memory footprint. It's a bit worrying, though, that those two Google pages need 64M and 96M; that's on the large side. OTOH, 256M for the admin suite is probably not so bad.
I'd also go looking for that function, memory_get_usage().
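A minimal sketch of one way to use it (or rather its sibling memory_get_peak_usage(), assuming PHP 5.2+): log each request's peak memory to the error log so the heavy pages identify themselves. The function name here is made up:

<?php
function log_peak_memory() {
    // record the request URI and its peak memory use, in MB
    error_log(sprintf('%s peak=%.1fM',
        $_SERVER['REQUEST_URI'],
        memory_get_peak_usage(true) / 1048576));
}
register_shutdown_function('log_peak_memory'); // runs after each request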
Wade.
Q:Is it proper to eat cheeseburgers with your fingers? A:No, the fingers should be eaten separately.
|
Post #336,594
12/3/10 8:46:17 AM
|
Just found php.ini has a 90M limit
I've got a VPS with 300M guaranteed and a 600M spike limit, so I'm guessing that if I have seven PHP instances executing at the same time (7 × 90M = 630M), I exceed it. Does that sound right?
If so, is there a way to configure PHP (or Apache?) so that any additional requests after the first six are delayed, rather than starting to execute and then getting killed by the process nanny?
--
Drew
|
Post #336,608
12/3/10 9:42:02 AM
|
Re: Just found php.ini has a 90M limit
# Prefork settings: MaxClients caps how many children (and thus
# concurrent PHP instances) can run; extra requests queue until one frees up.
StartServers 2
MinSpareServers 2
MaxSpareServers 5
MaxClients 7 <---- that one
MaxRequestsPerChild 1000
Of course, I have no idea what might happen... people will wait.
FYI, I already see a significant wait time for your site to render, so I'm not so sure you want to limit the number of instances...
|
Post #336,614
12/3/10 9:53:21 AM
|
Noticed that recently too
What's odd is that when it does come up, it comes up all at once. Is there a directive that instructs the browser not to render until all the images are downloaded?
--
Drew
|
Post #336,618
12/3/10 10:57:20 AM
|
That can happen if your software isn't
providing the image dimensions to the browser....
|
Post #336,630
12/3/10 1:26:21 PM
|
But it is ... I already made sure of that ... oh wait
Sonofabitch. Several sidebar widgets didn't have their dimensions listed.
Ehh ... just fixed them all and it still waits until everything is there to render. Firebug is showing 3-7 seconds for the initial page get. That blows.
--
Drew
|
Post #336,636
12/3/10 3:33:17 PM
|
CSS listed first on the page?
And JavaScript should be last, after the content. The CSS is required to render the page properly, and JavaScript downloads will single-thread, causing everything else to wait while they are retrieved.
Regards, -scott Welcome to Rivendell, Mr. Anderson.
|
Post #336,649
12/3/10 5:09:15 PM
|
Don't think that's the problem
That 3-7 second delay I mentioned is on the first page load. Nothing else starts downloading until after that.
--
Drew
|
Post #336,653
12/3/10 5:35:50 PM
|
Have you tried FireBug's Net tab or IEWatch?
Something that tells you what is loading when, and how long it takes.
Regards, -scott Welcome to Rivendell, Mr. Anderson.
|
Post #336,660
12/3/10 7:34:10 PM
|
That's where I get the 3-7 seconds for initial page load
--
Drew
|
Post #336,705
12/4/10 3:25:07 PM
|
No long delay here + a couple of observations
The redirect from www.cooklikeyourgrandmother.com to cooklikeyourgrandmother.com adds almost a second to the initial page load time in my case. It also generates a second DNS hit. If your DNS server is slow to respond or you have a broken forwarder (say, a certain type of Belkin wireless router), this could easily exacerbate the delay.
There are no long delays downloading parts once the root page is loaded. I do see long delays closing connections, but that does not seem to have an effect on page display. The delays occur on both sides, so this may just be the way HTTP works. E.g. the last request is finished after 9s but the last connection is closed at 22s.
The blog doesn't have the redirect overhead and otherwise behaves the same as the main page.
Most of the static content you serve (images, CSS, ...) has the Expires header set to April 15, 2010. Since that date is now in the past, it forces an extra trip to the server for everything that is used more than once.
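For anything served through PHP, you could compute a relative far-future date instead of hard-coding one. A rough sketch (one year is an arbitrary choice; purely static files would normally get this from Apache's mod_expires instead):

<?php
$lifetime = 365 * 24 * 3600; // one year, in seconds
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');
header('Cache-Control: max-age=' . $lifetime);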
|
Post #336,711
12/4/10 5:33:10 PM
|
Thanks for noticing the expires header
The article I copied some optimization tips from was written in 2007. At the time 2010 was "far future".
--
Drew
|
Post #336,643
12/3/10 5:00:17 PM
|
Not necessarily.
PHP doesn't allocate the whole memory limit to a process when it starts; the number is what it uses to terminate a process that asks for too much. On your VPS, you should theoretically be able to run nearly 100 instances that each use no more than 6M (600M / 6M = 100). In practice it will land somewhere in the middle, due to system memory overhead and the fact that few complex PHP pages will come in under 6M.
Then, too, that 90M limit will apply to any pages that don't override it with their own limit. How did you go with that Wordpress memory limit setting?
Wade.
Q:Is it proper to eat cheeseburgers with your fingers? A:No, the fingers should be eaten separately.
|
Post #336,648
12/3/10 5:08:16 PM
|
Haven't played with it yet
I want to try that later at night when it's not so busy, so I can isolate what I'm doing.
--
Drew
|
Post #336,538
12/2/10 6:07:07 AM
|
Bleah you are using PHP.
mod_perl has a nice feature:
PerlRequire conf/modperl.pl
PerlCleanupHandler Apache2::SizeLimit
and here is the preload script that handles it (modperl.pl):
use Apache2::SizeLimit;
$Apache2::SizeLimit::MAX_PROCESS_SIZE = 150000;    # total process size cap, in KB (~150MB)
$Apache2::SizeLimit::MAX_UNSHARED_SIZE = 125000;   # unshared-memory cap, in KB
$Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;   # only check on every 5th request
1;
It checks its own size and kills itself nicely before anything more is served from it. It really has saved us from runaway webservers... and other issues.
I don't know if PHP has a similar function, but it would sure fix your issue lickety-split.
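If PHP runs as an Apache module, something along these lines might come close -- a sketch only, since apache_child_terminate() requires the child_terminate setting to be enabled, and the 100M ceiling here is made up:

<?php
function retire_fat_child() {
    $cap = 100 * 1024 * 1024; // hypothetical 100M ceiling
    if (memory_get_peak_usage(true) > $cap
            && function_exists('apache_child_terminate')) {
        apache_child_terminate(); // child exits after finishing this request
    }
}
register_shutdown_function('retire_fat_child'); // check after every request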
|
Post #336,549
12/2/10 3:44:01 PM
|
Surprisingly today...
Facebook pummeled us and our servers. One of our customers updated their Facebook page with an image and a link. Wham...
It doesn't help when Facebook makes continuous requests to a CMS that will happily comply.
I had multiple Apache instances at about 2GB process size... but Apache couldn't/wouldn't kill them, as they were still serving... no break, and the CMS/mod_perl was just churning... and churning... spinning.
Manual kills were the only option. It sucked... I had to reboot one of the webservers because I didn't catch it in time... it was out of memory and swap and flailing at a 200 load average trying to spawn new instances.... *BOOP* reset button.
Guess I'll have to run a cron job every 20 seconds that hammers the runaways... I already do that on old versions of the CMS... guess I'll have to do it on the new(er) version too.
|