
Need references for automated testing and test DBs
Our QA team has recently started writing automated tests. These tests expect that executing specific searches will yield specific results. A test database was created containing the data they automated against. (There is a separate DB for each state; the test DB is set up as state 'QA'.)

When deploying to production, they want to execute the automated tests as a smoke test. They have requested that the test DB be deployed to production. The DBA doesn't want to deploy fake data to production.

There was a heated discussion. One side says an automated test that depends on a database being in a certain state is a bad test. The other side says that's why you execute a pre-test setup and post-test teardown to install and remove the test data.
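
For concreteness, the setup/teardown pattern in question looks roughly like this -- a minimal sketch in Python's unittest, with an in-memory sqlite stand-in for the real DB and made-up table and row names:

    import sqlite3
    import unittest

    class SearchSmokeTest(unittest.TestCase):
        """Sketch: install known data, run the search, remove the data."""

        def setUp(self):
            # In the real argument this would be the deployed DB connection;
            # sqlite3 in-memory stands in for it here.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE qa_fixtures (state TEXT, name TEXT)")
            self.db.execute("INSERT INTO qa_fixtures VALUES ('QA', 'known-record')")
            self.db.commit()

        def tearDown(self):
            # Teardown removes everything setUp installed.
            self.db.execute("DROP TABLE qa_fixtures")
            self.db.close()

        def test_search_returns_expected_row(self):
            rows = self.db.execute(
                "SELECT name FROM qa_fixtures WHERE state = 'QA'").fetchall()
            self.assertEqual(rows, [("known-record",)])

    if __name__ == "__main__":
        unittest.main()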

Thoughts? Pointers to research? Thanks.
--

Drew
dunno about DBs
but coders should have unit tests built into their code, so if the app starts it has passed all of its unit tests
http://en.wikipedia....wiki/Unit_testing
Unit tests find problems early in the development cycle.

In test-driven development (TDD), which is frequently used in both Extreme Programming and Scrum, unit tests are created before the code itself is written. When the tests pass, that code is considered complete. The same unit tests are run against that function frequently as the larger code base is developed either as the code is changed or via an automated process with the build. If the unit tests fail, it is considered to be a bug either in the changed code or the tests themselves. The unit tests then allow the location of the fault or failure to be easily traced. Since the unit tests alert the development team of the problem before handing the code off to testers or clients, it is still early in the development process.
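
A minimal illustration of the test-first idea (the function and its behaviour are invented for the example):

    import unittest

    # TDD order: this test is written first and fails until the
    # function below is implemented to satisfy it.
    def normalize_state(code):
        """Invented example function: strip and uppercase a state code."""
        return code.strip().upper()

    class NormalizeStateTest(unittest.TestCase):
        def test_strips_and_uppercases(self):
            self.assertEqual(normalize_state(" qa "), "QA")

        def test_already_normal(self):
            self.assertEqual(normalize_state("NY"), "NY")

    if __name__ == "__main__":
        unittest.main()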
Any opinions expressed by me are mine alone, posted from my home computer, on my own time as a free American and do not reflect the opinions of any person or company that I have had professional relations with in the past 58 years. meep
Sounds like someone's working in isolation.
I don't have references, but I can offer some thoughts.

I've never heard of automated tests running on a production system like that. That's the sort of thing you run in a proper testing environment. It's also one of the reasons you deploy the same artefact to prod as to testing. Has anyone told the QA people that the point of formal testing is so that they don't have to do it on Prod?

(Where I currently work, we were one of the first teams to insist on that because I'd had bad experiences elsewhere building multiple artefacts. Several of the other teams had yet to learn that and always had problems with production deploys.)

I like how Ruby on Rails does its automated testing: it uses a completely separate database instance that the test framework (part of Rails) has complete control over. But it's still not done on Prod! :-)
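
Outside Rails you can approximate the same separation by keying the connection on the environment; a rough Python sketch, with invented names and paths:

    import os
    import sqlite3

    # Sketch: pick the database by environment, so tests can never
    # touch the production file. Names and paths are invented.
    ENV = os.environ.get("APP_ENV", "test")

    DATABASES = {
        "production": "/var/lib/app/prod.db",   # hypothetical path
        "test": ":memory:",                     # throwaway, rebuilt every run
    }

    def connect():
        return sqlite3.connect(DATABASES[ENV])

    if __name__ == "__main__":
        db = connect()
        db.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        print("connected to", DATABASES[ENV])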

Wade.
Just Add Story http://justaddstory.wordpress.com/
Well ...
They build a new version for each environment. Deployments are always iffy. QA doesn't trust the process.
--

Drew
Ah.
I think QA is right. But I think their solution is wrong.

I don't have time to find it now, but there was a lengthy article about IMVU's build process: full (and I mean 100%) test coverage and an automated build-and-deploy pipeline. The delivered binary is the tested binary. A new hire does a fix, build, and deploy right to clients on his very first day.

Wade.
Just Add Story http://justaddstory.wordpress.com/
Agreed, but that's not our alternative
It's not "Automated tests using fake data vs. 100% test coverage & good deployment process". It's "Automated tests using fake data vs. build it, ship it, and cross your fingers".
--

Drew
If you don't have great, representative...
... fake data, it is not valid anyway.

We have so many exceptions to try and catch... we have to take neutered data from all of our customers... (fixing up any PCI data to be fake as well, for compliance)

But then we don't have an automated process anymore... I *ALMOST* got them into place... ALMOST!
--
greg@gregfolkert.net
"No snowflake in an avalanche ever feels responsible." --Stanislaw Jerzy Lec
I think that's an extreme example, unfortunately.
Aim lower. I was saying that there should be a third alternative: "deploy the same binary/artefact/tarball to Prod that ran all the tests successfully in QA". This might be a bit of a shocking suggestion to at least one of the parties. :-)
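
One cheap way to enforce "same artefact" is to record a checksum when QA passes and verify it at deploy time; a sketch in Python (paths and usage are invented):

    import hashlib
    import sys

    def sha256_of(path):
        """Hash the artefact so QA and Prod can compare the exact bytes."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Hypothetical usage: after QA passes, save the digest; at deploy
        # time, refuse to ship if the digests differ.
        #   python check_artefact.py app-1.2.3.tar.gz <digest-from-qa>
        artefact, expected = sys.argv[1], sys.argv[2]
        actual = sha256_of(artefact)
        if actual != expected:
            sys.exit("artefact differs from the one QA tested: " + actual)
        print("artefact verified:", actual)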

Wade.
Just Add Story http://justaddstory.wordpress.com/
Use the database, dammit
Production is NEVER the same as dev, test, staging, etc.

But it is a real-world starting point for performance testing when the app goes live. Maybe you have a 24-core Xeon to play with in dev. Maybe you get half a core in prod. You'll never know until it is too late. The hardware guys and sysadmins and DBAs will NEVER give you the real numbers that would make them the "responsible" party for the app.

They did not write it.

So all "live" numbers are variable and out of your control, unless you have this exact test. Call it your "ready for production" final QA (it must run in the production environment the real app runs in), and therefore it MUST be in that environment.

Then don't forget your peak-usage-time concurrency test. Can't trust the starting number; gotta see what's happening in the real world. Do the test as often as you can, as long as you are not causing any degradation in the rest of the world. How often you run it depends on how much headroom the app/OS leaves you, and then you still have to leave a comfy margin for spikes in the apps.
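
A bare-bones version of that concurrency probe might look like this in Python -- the endpoint is a placeholder, and the worker count would have to stay well under the headroom you measure:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://example.com/search?q=known-term"  # placeholder endpoint
    WORKERS = 10    # keep this far below peak capacity in prod
    REQUESTS = 50

    def timed_fetch(_):
        start = time.monotonic()
        with urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            latencies = sorted(pool.map(timed_fetch, range(REQUESTS)))
        print("median: %.3fs  worst: %.3fs" % (
            latencies[len(latencies) // 2], latencies[-1]))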

We're barely starting to talk about performance/load testing
I'm still trying to verify we're getting the right results. Things I've read about measuring performance at the network level are at least two generations beyond what we're capable of.
--

Drew
Is the test database a separate schema?
If it's a separate schema/database/whatever, then this can be a valid approach so long as there is *no* chance of it touching production data. In practice this is very hard to guarantee.

Smoke tests (also called validation or deployment tests) should be non-destructive in production.

As far as the heated discussion goes, you can absolutely write tests depending on the DB being in a particular state, and that is exactly why there are setup and teardown and fixtures. But I don't think running the full test suite in production is necessary: again, non-destructive tests are fine.

Typically deployment tests just validate that certain expected actions happen in production. If you do a search and nothing comes back when you know something should have, you've learned what you needed; a search like that is a good enough test. You don't need a particular set of data to test a deployment. The time for precise functional and integration validation was before you hit staging, not in production. In production you just want to know that you deployed the system properly and all of it is running.
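
Something as small as this, say -- a read-only sketch in Python against an invented endpoint, needing no fixture data at all:

    import json
    import sys
    from urllib.request import urlopen

    # Read-only deployment check: hit a search the real data is known
    # to satisfy (the URL and response shape are invented for the sketch).
    SMOKE_URL = "https://example.com/api/search?q=anything-common"

    def smoke_test():
        with urlopen(SMOKE_URL, timeout=10) as resp:
            if resp.status != 200:
                return "bad status: %d" % resp.status
            results = json.load(resp)
            if not results:
                return "search returned nothing; deploy is suspect"
        return None

    if __name__ == "__main__":
        problem = smoke_test()
        if problem:
            sys.exit(problem)
        print("deploy looks alive")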
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
I'm choosing from bad options here (not that it's my choice)
Yes, we should have purpose-built smoke tests. That's absolutely the preferred option. But that isn't what they built first. What we have today is the set of automated tests they're using in our test environment. Before the next deployment we can either put the test data in the production database -- in a separate table that can be deleted after the test -- or not run any automated tests after deployment.
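
The first option would look roughly like this -- a sketch with invented names, using sqlite as a stand-in for the real production DB; the point is that the fixture data lives in its own table and is dropped afterwards:

    import sqlite3

    # Stand-in for the production connection (invented; the real thing
    # would be the prod DBMS, with the DBA watching closely).
    db = sqlite3.connect("prod-standin.db")

    def install_fixtures():
        db.execute("CREATE TABLE qa_smoke (state TEXT, name TEXT)")
        db.executemany("INSERT INTO qa_smoke VALUES (?, ?)",
                       [("QA", "alpha"), ("QA", "beta")])
        db.commit()

    def run_smoke_checks():
        rows = db.execute(
            "SELECT name FROM qa_smoke WHERE state = 'QA' ORDER BY name").fetchall()
        assert rows == [("alpha",), ("beta",)], rows

    def remove_fixtures():
        # The part the DBA will insist on: nothing survives the test run.
        db.execute("DROP TABLE qa_smoke")
        db.commit()

    if __name__ == "__main__":
        install_fixtures()
        try:
            run_smoke_checks()
        finally:
            remove_fixtures()
        print("post-deploy smoke test passed and cleaned up")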
--

Drew
