
Well ...
They build a new version for each environment. Deployments are always iffy. QA doesn't trust the process.
--

Drew
Ah.
I think QA is right. But I think their solution is wrong.

I don't have time to find it now, but there was a lengthy article about IMVU's build process. Full (and I mean 100%) test coverage, automated build and deploy process. The delivered binary is the tested binary. A new hire does a fix, build and deploy right to clients on his very first day.
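
For what it's worth, here is a rough Python sketch of the "delivered binary is the tested binary" idea: build one artifact, run the suite against that exact file, and refuse to ship anything else. The paths, the make target, and the APP_ARTIFACT convention are placeholders, not IMVU's (or anyone's) real pipeline.

    #!/usr/bin/env python3
    """Build once, test that exact artifact, deploy that exact artifact.
    Everything named here is a placeholder."""
    import hashlib
    import os
    import pathlib
    import shutil
    import subprocess
    import sys

    ARTIFACT = pathlib.Path("dist/app-1.0.0.tar.gz")   # assumed build output
    DEPLOY_DIR = pathlib.Path("/srv/releases")          # assumed deploy target

    def sha256(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def main():
        subprocess.run(["make", "dist"], check=True)    # build exactly one artifact
        tested = sha256(ARTIFACT)

        # The test suite is assumed to pick up the artifact via this variable.
        env = dict(os.environ, APP_ARTIFACT=str(ARTIFACT))
        subprocess.run(["pytest"], check=True, env=env)

        # Ship only if the file is byte-for-byte what the tests just ran against.
        if sha256(ARTIFACT) != tested:
            sys.exit("artifact changed after testing; refusing to deploy")
        shutil.copy2(ARTIFACT, DEPLOY_DIR / ARTIFACT.name)
        print(f"deployed {ARTIFACT.name} ({tested[:12]})")

    if __name__ == "__main__":
        main()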

Wade.
Just Add Story http://justaddstory.wordpress.com/
Agreed, but that's not our alternative
It's not "automated tests using fake data" vs. "100% test coverage and a good deployment process". It's "automated tests using fake data" vs. "build it, ship it, and cross your fingers".
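
To be concrete about what "automated tests using fake data" looks like, a minimal sketch in Python; the record and the validation routine are both invented for illustration, not our actual code.

    import re

    def validate_customer(record):
        """Return a list of problems with a customer record."""
        problems = []
        if not record.get("name"):
            problems.append("missing name")
        if not re.fullmatch(r"\d{5}", record.get("zip", "")):
            problems.append("bad zip")
        return problems

    def test_rejects_bad_zip():
        fake = {"name": "Test Customer", "zip": "ABCDE"}   # fabricated, never a real customer
        assert "bad zip" in validate_customer(fake)

    def test_accepts_well_formed_record():
        fake = {"name": "Test Customer", "zip": "12345"}
        assert validate_customer(fake) == []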
--

Drew
If you don't have great representative...
Fake Data... it is not valid anyway.

We have so many exceptions to try to catch... we have to take neutered data from all of our customers... (fixing up any PCI data to be fake as well, for compliance).

But then we don't have an automated process anymore... I *ALMOST* got them into place... ALMOST!
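
For illustration, a rough Python sketch of that kind of scrubbing: copy the customer data but swap anything PCI-scoped for fake values before it goes anywhere near a test database. The column names and the substitution rule are guesses, not our real schema.

    import csv
    import re

    TEST_PAN = "4111111111111111"          # well-known test card number
    PAN_RE = re.compile(r"\b\d{13,19}\b")  # naive card-number matcher, illustration only

    def scrub_row(row):
        clean = dict(row)
        if "card_number" in clean:                       # assumed column name
            clean["card_number"] = TEST_PAN
        if "notes" in clean:                             # free-text field that might hide a PAN
            clean["notes"] = PAN_RE.sub(TEST_PAN, clean["notes"])
        return clean

    def scrub_file(src, dst):
        with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
            reader = csv.DictReader(fin)
            writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                writer.writerow(scrub_row(row))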
--
greg@gregfolkert.net
"No snowflake in an avalanche ever feels responsible." --Stanislaw Jerzy Lec
I think that's an extreme example, unfortunately.
Aim lower. I was saying that there should be a third alternative: "deploy the same binary/artefact/tarball to Prod that ran all the tests successfully in QA". This might be a bit of a shocking suggestion to at least one of the parties. :-)
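
Something like this is all the "same tarball" check needs; the file locations and the sign-off format are invented for the sketch.

    import hashlib
    import pathlib
    import sys

    artifact = pathlib.Path("releases/app-1.0.0.tar.gz")   # assumed artifact path
    signoff = pathlib.Path("releases/app-1.0.0.sha256")     # assumed to be written by the QA run

    expected = signoff.read_text().split()[0]
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()

    if actual != expected:
        sys.exit(f"{artifact.name} is not the build QA tested; aborting the deploy")
    print(f"{artifact.name} matches the QA sign-off ({expected[:12]}); safe to promote")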

Wade.
Just Add Story http://justaddstory.wordpress.com/
Use the database dammit
Production is NEVER the same as dev, test, staging, etc.

But it is a real-world starting point for performance testing when the app goes live. Maybe you have a 24-core Xeon to play with in dev. Maybe you get half a core in prod. You'll never know until it is too late. The hardware guys and sysadmins and DBAs will NEVER give you the real numbers that would cause them to be the "responsible" party for the app.

They did not write it.

So, all "live" numbers are variable and out of your control, unless you have this exact test. Call it your "ready for production final QA" (it must run in the production environment the real app will run in), and therefore it MUST be in that environment.

Then don't forget your peak-usage-time concurrency test. Can't trust the starting number; gotta see what's happening in the real world. Do the test as often as you can, as long as you are not causing any degradation for the rest of the world. How often you run it depends on how much headroom the app/OS leaves you, and then you still have to leave a comfy margin for spikes in the apps.
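
A rough Python sketch of that kind of careful in-production probe, with an invented URL and thresholds: it sends a small concurrent burst, watches the latency, and backs off the moment responses degrade so it never hurts real traffic.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/health"    # assumed lightweight endpoint
    CONCURRENCY = 8
    ROUNDS = 5
    DEGRADATION_LIMIT = 2.0               # stop if latency doubles vs. the first round

    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    def main():
        baseline = None
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            for round_no in range(1, ROUNDS + 1):
                latencies = list(pool.map(one_request, range(CONCURRENCY)))
                median = statistics.median(latencies)
                print(f"round {round_no}: median {median * 1000:.1f} ms")
                baseline = baseline or median
                if median > baseline * DEGRADATION_LIMIT:
                    print("latency degrading; backing off")
                    break

    if __name__ == "__main__":
        main()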

We're barely starting to talk about performance/load testing
I'm still trying to verify we're getting the right results. Things I've read about measuring performance at the network level are at least two generations beyond what we're capable of.
--

Drew
     Need references for automated testing and test DBs - (drook) - (11)
         dunno about db's - (boxley)
         Sounds like someone's working in isolation. - (static) - (7)
             Well ... - (drook) - (6)
                 Ah. - (static) - (5)
                     Agreed, but that's not our alternative - (drook) - (4)
                         If you don't have great representative... - (folkert)
                         I think that's an extreme example, unfortunately. - (static)
                         Use the database dammit - (crazy) - (1)
                             We're barely starting to talk about performance/load testing - (drook)
         Is the test database a separate schema? - (malraux) - (1)
             I'm choosing from bad options here (not that it's my choice) - (drook)
