Well ...
They build a new version for each environment. Deployments are always iffy. QA doesn't trust the process.
--
Drew
Ah.
I think QA is right. But I think their solution is wrong.
I don't have time to find it now, but there was a lengthy article about IMVU's build process. Full (and I mean 100%) test coverage, automated build-and-deploy process. The delivered binary is the tested binary. A new hire does a fix, build, and deploy right to clients on his very first day.
Wade. Just Add Story http://justaddstory.wordpress.com/
Agreed, but that's not our alternative
It's not "Automated tests using fake data vs. 100% test coverage & good deployment process". It's "Automated tests using fake data vs. build it, ship it, and cross your fingers".
--
Drew
If you don't have great representative...
Fake Data... it is not valid anyway.
We have so many exceptions to try and catch... we have to take neutered data from all of our customers... (fixing up any PCI data to be fake as well, for compliance). But then we don't have an automated process anymore... I *ALMOST* got them into place... ALMOST!
--
greg@gregfolkert.net "No snowflake in an avalanche ever feels responsible." --Stanislaw Jerzy Lec
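For what it's worth, the PCI fix-up step can be automated. One approach is to swap every card number for a Luhn-valid fake of the same length, so downstream format validation still passes while no real PAN survives. A rough sketch (the regex and the assumption that PANs sit in plain text are mine, not from any real schema):

```python
import random
import re

# Hypothetical: assumes PANs appear as bare 13-16 digit runs in the data.
PAN_RE = re.compile(r"\b\d{13,16}\b")

def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit so fake PANs still pass format checks."""
    total = 0
    # In the full number the check digit sits at the rightmost position,
    # so every second digit of the body, counting from its right end,
    # gets doubled (digits over 9 have 9 subtracted).
    for i, d in enumerate(int(c) for c in reversed(body)):
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def fake_pan(length: int = 16) -> str:
    """Generate a random, Luhn-valid card number of the given length."""
    body = "".join(random.choice("0123456789") for _ in range(length - 1))
    return body + luhn_check_digit(body)

def scrub(record: str) -> str:
    """Replace any real PAN with a Luhn-valid fake of the same length."""
    return PAN_RE.sub(lambda m: fake_pan(len(m.group())), record)
```

The same idea extends to names, addresses, and the rest of the neutering pass; the point is that it runs unattended, so the automated process survives.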
I think that's an extreme example, unfortunately.
Aim lower. I was saying that there should be a third alternative: "deploy the same binary/artefact/tarball to Prod that ran all the tests successfully in QA". This might be a bit of a shocking suggestion to at least one of the parties. :-)
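The "same artifact" idea can be made concrete with a checksum gate: record the hash of the tarball that passed QA, and refuse to deploy anything whose bytes differ. A minimal sketch (paths and names here are hypothetical, not anyone's actual pipeline):

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum the exact bytes that ran the QA test suite."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def promote(artifact: Path, expected_sha: str, dest: Path) -> Path:
    """Copy the QA-tested artifact into the prod release directory,
    refusing to deploy anything QA didn't sign off on."""
    actual = sha256(artifact)
    if actual != expected_sha:
        raise ValueError(f"checksum mismatch: {actual} != {expected_sha}")
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / artifact.name
    shutil.copy2(artifact, target)
    return target
```

QA publishes the hash when the suite goes green; the deploy step calls `promote` with it. No rebuild per environment, so the delivered binary is the tested binary.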
Wade. Just Add Story http://justaddstory.wordpress.com/
Use the database dammit
Production is NEVER the same as dev, test, staging, etc.
But it is a real-world starting-point number for performance testing, when the app goes live. Maybe you have a 24-core Xeon to play with in dev. Maybe you get a 1/2 core in prod. You'll never know until it is too late. The hardware guys and sysadmins and DBAs will NEVER give you the real numbers that would cause them to be the "responsible" party for the app. They did not write it.

So all "live" numbers are variable and out of your control, unless you have this exact test. Call it your "ready for production final QA" (it must run in the production environment, where the real app runs), and therefore it MUST be in that environment.

Then don't forget your peak-usage-time concurrent test. Can't trust the starting number; gotta see what's happening in the real world. Do the test as often as you can, as long as you are not causing any degradation in the rest of the world. How often you run it depends on how much headroom the app/OS leaves you, and then you still have to leave a comfy level for spikes in the apps.
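A quick sketch of what that concurrent test could look like: fire N workers at the app, collect latencies, and report percentiles. `do_request` is a stand-in for whatever actually hits the live endpoint; this is an illustration, not a real load-testing tool:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def measure(do_request, concurrency: int = 10, requests: int = 100) -> dict:
    """Run `do_request` with `concurrency` workers and report latency
    percentiles over `requests` total calls."""
    def timed(_):
        start = time.perf_counter()
        do_request()  # e.g. urllib.request.urlopen(health_url) in real use
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, range(requests)))

    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95) - 1],
        "max": latencies[-1],
    }
```

Run it during a quiet window first to get the baseline, then again at peak, and keep the concurrency low enough that you're not the degradation you're measuring.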
We're barely starting to talk about performance/load testing
I'm still trying to verify we're getting the right results. Things I've read about measuring performance at the network level are at least two generations beyond what we're capable of.
--
Drew