
New When 6 Sigma doesn't apply to IT projects
If you've ever studied 6 Sigma, you know that it's all about reducing the number of defects in the output of a process. When the process is manufacturing -- which is where 6 Sigma originated -- the product is thousands or millions of identical widgets. Any statistical improvement is measurable, and its benefit can be calculated.
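
To put numbers on it: the sigma level is just arithmetic on the defect rate. A minimal sketch in Python (every count here is invented for illustration, not from any real process):

# Defects per million opportunities (DPMO) and the corresponding sigma level,
# using the conventional 1.5-sigma long-term shift.
from statistics import NormalDist

defects = 350          # defective widgets observed (invented)
units = 100_000        # widgets produced (invented)
opportunities = 1      # defect opportunities per widget

dpmo = defects / (units * opportunities) * 1_000_000
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
print(f"DPMO: {dpmo:.0f}, sigma level: {sigma_level:.2f}")
# A "six sigma" process corresponds to 3.4 DPMO.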

But when you move into IT development, frequently the "product" is actually a new process.

Take the example of a home appraisal business. They decide to upgrade the software they use to support their appraisers. Someone has just gotten the 6 Sigma religion and decides to apply it to the development of the new software. They define the development process, establish quality gates, count defects, produce statistics. But they're counting defects in the software. The software isn't the product, it's the tool.

This would be like going into an auto manufacturing plant, doing a project to improve production, and counting the number of defects in the new assembly robot you installed. It's counting the wrong thing.
New Well, on the flipside of this...
The software isn't the product, it's the tool.

Yes, you are right. I have similar feelings towards it.

But, is it a production system? If it is, then it is a product. The product just happens to be a tool.

It's the tool that you are using.

BTW, are you back working for said company from your pre-.NET work? Or does this just happen to be similar?
--
[link|mailto:greg@gregfolkert.net|greg],
[link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwethey
Freedom is not FREE.
Yeah, but 10s of Trillions of US Dollars?
SELECT * FROM scog WHERE ethics > 0;

0 rows returned.
New Don't knock it quite so quickly
If you have a software development organization that develops a series of new products, then 6 Sigma techniques can be (and have been) used very successfully to focus attention on bug rates, get bug rates down, be able to estimate how many unfound bugs there are, etc.
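
One standard trick for the "estimate how many unfound bugs there are" part is capture-recapture across independent reviews. A minimal sketch in Python, with all counts invented:

# Lincoln-Petersen capture-recapture estimate of total defects,
# from two independent reviews of the same code.
found_by_a = 25      # defects reviewer A found (invented)
found_by_b = 20      # defects reviewer B found (invented)
found_by_both = 10   # defects found by both (invented)

estimated_total = round(found_by_a * found_by_b / found_by_both)
found_so_far = found_by_a + found_by_b - found_by_both
print(f"estimated total: {estimated_total}, estimated unfound: {estimated_total - found_so_far}")
# -> estimated total: 50, estimated unfound: 15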

However if you're constantly refactoring the same software base, then it is much harder to apply the techniques.

And if you just try to apply the ideas in a random way, your answers are going to have a very random meaning.

Cheers,
Ben
I have come to believe that idealism without discipline is a quick road to disaster, while discipline without idealism is pointless. -- Aaron Ward (my brother)
New Answering both you and Greg
Like I said above, the whole theory of 6 Sigma is based on incremental statistical improvement in the production of large numbers of identical widgets. Neither production of a single tool, as Greg suggests, nor production of a series of new products, as you suggest, fits that description.

In either case, the tool you create will be used to produce something else, the thing that your customers actually pay for. That is the item that should be measured, and that should show a measurable improvement.

Now you can make an argument that you can also measure your process for developing the tool. But then I can make the argument that you can measure your process for measuring the development. After all, shouldn't you seek continuous improvement to 6 Sigma itself? Recursion: See recursion.

Contractors love to measure at higher and higher levels of abstraction. They can bill enormous amounts of money for something that may or may not show improvement in the bottom line. Any measurement that doesn't include -- or get included in -- a measurement of the end product is meaningless.
New That's not my understanding.
The whole theory of 6 Sigma is based on incremental statistical improvement in the production of large numbers of identical widgets.


6σ can be applied to manufacturing, but in its most general form, it can be applied to any process. [link|http://en.wikipedia.org/wiki/Six_sigma|E.g.]:

Six Sigma has now grown beyond defect control. It can be defined as a methodology to manage process variations that cause defects, defined as unacceptable deviation from the mean or target; and to systematically work towards managing variation to eliminate those defects. The objective of Six Sigma is to deliver world-class performance, reliability, and value to the end customer.


It comes down to having metrics for your process, and metrics for the result, and continuously examining the metrics and making adjustments to reduce the deviation from expected results.
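
A minimal sketch of that examine-and-adjust loop, control-chart style, in Python (all the data here is invented):

# Flag measurements that fall outside 3-sigma control limits computed
# from an in-control baseline period.
from statistics import mean, stdev

baseline = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2]    # defects/KLOC, reference weeks (invented)
m, s = mean(baseline), stdev(baseline)
lcl, ucl = m - 3 * s, m + 3 * s              # control limits

for week, rate in enumerate([4.0, 4.4, 7.2], start=7):   # new observations (invented)
    if not (lcl <= rate <= ucl):
        print(f"week {week}: {rate} outside ({lcl:.2f}, {ucl:.2f}) -- investigate")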

6σ is shorthand for a process; ISO 9000 (and relatives) is shorthand for a process; TQM is shorthand for a process. They can all be applied to software, to some extent. They're all based on ideas popularized by [link|http://en.wikipedia.org/wiki/W._Edwards_Deming|W. Edwards Deming].

If the 6σ process at work is driving you nuts, do what you can to make it better. Read up on Deming's writings. [link|http://www.multi-platforms.com/Tips/Deming.htm|Deming's 14 Points for Software Development] may be a good place to start.

My $0.02.

Cheers,
Scott.
(Who is suspicious of silver bullets; who notes that Microsoft claims to have instituted 6σ; and who notes that Deming's 14 points would argue against outsourcing and offshoring.)
New I didn't say there's no place for it
I was very careful in the first post to say "When 6 Sigma doesn't apply", not "Why 6 Sigma doesn't apply".

It doesn't work when you measure the wrong thing. You don't measure the tool, you measure the output of the tool. And unless you're selling packaged software, the software is the tool.

If someone says they're doing 6σ, and all their metrics are on the tool or process instead of on the product, they're not really doing 6σ.
New Good points.
New Measure what you want to improve
If you find a reliable way of measuring something, then you can probably find ways of improving it. The fly in the ointment is that finding ways of reliably measuring things is easier said than done.

I personally think that reducing your bug rate is a Good Thing to do. I agree that it is not the same as what customers want. However, reliable software is highly correlated with many things that customers do like (e.g., actually getting specifications right), so it is generally worthwhile from that perspective as well.

Though I can understand why it would feel frustrating.

Cheers,
Ben
I have come to believe that idealism without discipline is a quick road to disaster, while discipline without idealism is pointless. -- Aaron Ward (my brother)
New And that should be cost or revenue
Remember the article linked on another forum here recently about power generation in Iraq? The people responsible for oil production have a mandate to make as much money as possible. In doing so, they are screwing the electric plants, which could really use all the natural gas being burned off at the wells. But the oil guys aren't measuring natural gas, they're measuring oil prices. If this were a private oil company, this might even be the right thing to do. But it's nationalized, which means they should be sharing a common goal with the electric plants.

Applying 6 Sigma and measuring bug counts is doing the same thing. You're probably making a better program by doing it, but does that better program yield reduced cost or increased revenue? It might, it might not. But if you're not measuring the impact of that program on cost or revenue, you can't possibly know that. If you don't know, you can't demonstrate it. And people wonder why IT is so often seen as a cost center.

New As I recall, at IBM, peer code reviews to catch
design and implementation bugs during development cost one tenth of what it cost to fix those bugs after they were "out in the field".

Revenue that's burned up by repair costs should not be an objective.

"Quality is free!"
Alex

When fascism comes to America, it'll be wrapped in a flag and carrying a cross. -- Sinclair Lewis
New You misremember that article
They weren't burning off natural gas to make a profit. They were burning it to avoid having it explode, because they were unable to ship it anywhere without pipelines. And they can't build pipelines because they'll be blown up.

As for the program, measuring cost and revenue is also the wrong thing to do. What you're really creating with a process like that is reputation. And that is probably impossible to measure.

But let's say that we did it your way. Let's measure cost and revenue. What you'll find very quickly is that switching to processes that run up technical debt shows up well in your measurements. And it will continue to show up well until you have to address that technical debt. But the entire idea of "technical debt" is something that you can't easily measure or estimate. The ever-increasing difficulty of development you can't measure. Good luck determining the cost of turnover because good programmers can't have pride of ownership of crap.

While I agree that what they're doing leads to some stupidity, what you're suggesting that they do leads in an obvious way to the worst MBA mismanagement practices. Which is a lot worse.

Cheers,
Ben
I have come to believe that idealism without discipline is a quick road to disaster, while discipline without idealism is pointless. -- Aaron Ward (my brother)
New Do you think everyone but you is stupid?
You're so quick to point out how I didn't understand what I read that you didn't bother to read what I wrote.
You misremember that article. They weren't burning off natural gas to make a profit.
Really? I misremember? Hmm, let's see what I actually wrote:
The people responsible for oil production have a mandate to make as much money as possible. In doing so, they are screwing the electric plants, which could really use all the natural gas being burned off at the wells. But the oil guys aren't measuring natural gas, they're measuring oil prices.
See that highlighted part there? That's where I point out that they didn't care about the gas. Nowhere did I suggest that they thought burning it off somehow brought them profit.

And since you seem to have missed it, my point was that if you aren't measuring the eventual impact of your changes you're not measuring the right thing. See how the Iraq story is an example of that? They're measuring their piece of the project and ignoring the bottom line. Gosh, that sounds almost like my point, how you can optimize the part at the expense of the whole.

Okay, so you didn't bother to read everything, but at least you wouldn't jump to conclusions about how stupid I want to be. Oh wait! Here's where you do exactly that:
But let's say that we did it your way. Let's measure cost and revenue. ... what you're suggesting that they do leads in an obvious way to the worst MBA mismanagement practices. Which is a lot worse.
Gosh, when you put it that way it does sound like a bad idea to measure only cost and revenue. I wish I hadn't said that. Oh wait (again)! I didn't say that. I said:
Any measurement that doesn't include -- or get included in -- a measurement of the end product is meaningless.
Feel free to expound again on how badly I want to screw things up by ignoring the quality of the tools being developed. Or you could take a fresh approach and address the central point I've been trying to make: that if you fail to measure the output of your new process or application, you can't really measure its success.
New No. Only when they ignore important stuff.
For a start, you're referring to an article on Iraq. Perhaps [link|http://spectrum.ieee.org/feb06/2831/3|page 3] of that article will refresh your memory. It doesn't matter what price the Ministry is willing to offer - there is no way to deliver the natural gas to where it is needed.

So while I understand perfectly well what you were trying to express with that example, you misremembered the facts. The case you noted isn't an example of what you thought it was. Sure, screwed-up price incentives are a big part of Iraq's energy crisis. But at a different stage - they need to give consumers a realistic price for electricity so that people will conserve electric use a bit. If people conserved just a bit, then their electric output would stretch a lot farther than it does.

Now, the entire cost versus revenue thing. I used cost and revenue in my post because I was responding to [link|http://z.iwethey.org/forums/render/content/show?contentid=253146|a post] where you said we should measure cost and revenue. Given that I was responding to a post where you bring up cost and revenue as critical factors, it made sense to me to point out how directly measuring cost and revenue is going to lead you astray.

Now let's go to your central point.

The problem with your thesis is that while it sounds great on paper to just measure the right thing, it is impossible in practice to nail down what that is. Try as you like, I guarantee you won't come up with any unambiguous measurement that measures the right thing. And in the process of trying, you'll introduce so many potential fudge factors that it will be impossible to do an apples-to-apples comparison of anything.

Don't believe me? Well, let me give it an attempt so you can see how it goes.

Any decent accountant would tell you that the problem with measuring cost and revenue directly is that you're doing cash-based accounting. It is very easy to manipulate figures with cash-based accounting, and cash misses a whole ton of important factors. What you actually want to do is accrual accounting.

So let's try to do accrual accounting on software development. Some significant assets we gain from doing a software project are that we're often left with libraries we can reuse, and we gain knowledge that may help future projects. A significant cost we take on is the ongoing maintenance cost of supporting the software we delivered. All three factors are very important. (For instance, ongoing maintenance typically costs more than initial delivery of the project.) However, when a project finishes, *none* of them can be reliably estimated. So our attempted measurement just winds up with a series of big question marks. Attempt to fill in those question marks, and you'll find that your measurements are completely dominated by the assumptions that went into those estimates.

And that is the whole point of looking at bug counts. Sure, it is imperfect. But it is better than nothing, and better than any easy alternative you're likely to come up with. Furthermore, measuring bug counts has the following very concrete advantages:

1) Bug counts are a good measure of software reliability. This is something that people tend to value fairly highly.

2) Bug counts are a fairly good proxy for the cost of ongoing maintenance. Given that maintenance is typically the bulk of the cost of software, this makes them strongly correlated with the real cost of development.

3) In studies, developers who are asked to optimize for reliability do pretty well in most other measures of the software development process, including development speed and software speed. By contrast, developers who attempt to optimize for other characteristics tend to do well in the chosen metric, but fairly badly in most other metrics.

Therefore reducing bug counts pretty directly improves two key software characteristics (reliability and maintenance cost), while tending to make you reasonably good at other important characteristics (development speed and software speed). To the best of my knowledge, focusing on any other simple metric will give far more complex results. And trying to focus on a complex metric opens up so many grey areas that you have no hope of getting clear understanding of, or buy-in on, what you're trying to improve. (And little hope that you're actually measuring something that does what you really want.)

Therefore focusing on reducing bug counts doesn't sound like a particularly stupid idea to me. (Perhaps I've just read - and believed - too much Steve McConnell...)

Cheers,
Ben
I have come to believe that idealism without discipline is a quick road to disaster, while discipline without idealism is pointless. -- Aaron Ward (my brother)
New Many elements of software development are stochastic
Trying to optimize stochastic processes using deterministic process optimization techniques leads to a lot of insanity.
New One must keep balance.
It is true that cause and effect aren't simple. But that doesn't mean that making the effort isn't amply rewarded.

For instance, it has been repeatedly found that most bugs are found in a small fraction of the code. Research at IBM demonstrated that it is very worthwhile to measure discovered bugs per line of code in different modules, and pre-emptively rewrite any that are particularly buggy. They found that rewriting a small fraction of the software dramatically reduces ongoing maintenance costs.

That's a pretty big return on the simple bookkeeping effort of keeping track of which function a given bug was traced back to, and then computing simple statistics on that.
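
In code, that whole report is a few lines. A minimal sketch in Python, with all module names and counts invented:

# Defect density per module, flagging modules well above the overall rate
# as candidates for a rewrite.
bugs = {"parser": 42, "billing": 7, "reports": 5, "auth": 3}      # invented
loc  = {"parser": 2100, "billing": 5300, "reports": 4800, "auth": 1900}

overall = sum(bugs.values()) / sum(loc.values()) * 1000           # bugs/KLOC
for module in sorted(bugs, key=lambda mod: -bugs[mod] / loc[mod]):
    density = bugs[module] / loc[module] * 1000
    flag = "  <- rewrite candidate" if density > 2 * overall else ""
    print(f"{module:8s} {density:5.1f} bugs/KLOC{flag}")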

For another demonstration of the value of proactively dealing with bugs, look at OpenBSD. The secret of their security record is simple - any time they find a bug they reflect on the underlying mistake for that bug and then search their entire code base for similar mistakes. The results speak for themselves.

Cheers,
Ben
I have come to believe that idealism without discipline is a quick road to disaster, while discipline without idealism is pointless. -- Aaron Ward (my brother)
New Let's start again.
The existing thread seems to be going off on a tangent in a bad way.

Someone has just gotten the 6 Sigma religion and decides to apply it to the development of the new software. They define the development process, establish quality gates, count defects, produce statistics. But they're counting defects in the software. The software isn't the product, it's the tool.


Can you elaborate a bit more, without telling too much of course?

I assume the software supports the business ("the 'product' is actually a new process"), and the software isn't shrinkwrap that is shipped to customers.

In my other comments in this thread, I've posted cites indicating that 6σ can be applied to software, or to any process that can be measured and improved. I get the impression that you feel that in the present case, applying 6σ to this software development isn't going to improve the software because the customers have no impact on the process.

In other words, is the problem that there's a disconnect between what the customers want and what the 6σ process is trying to optimize? Or is it your belief that the "new process" is not amenable to 6σ optimization techniques? Or is it something else?

My impression is:
1) 6σ can be applied to the development of new software that supports the customers.
2) The ideas behind 6σ and TQM and so forth can be applied to the development of a new business process, but measuring bug rates in the support software is only a tiny part of the problem.
3) Without a clear understanding of the overall business goal and of the benefits and limitations of 6σ, those attempting to manage the process won't achieve their goals.

It's like that old [link|http://www.edn.com/article/CA601846.html|software development mantra]:

British computing pioneer Sir Tony Hoare once wrote: "Premature optimization is the root of all evil." Unfortunately, engineers often take this phrase out of context and use it to justify avoiding any thought of optimization or even plans to optimize in their code.

Charles Cook succinctly explains the problem with this approach: "The full version of the quote is 'We should forget about small efficiencies, say, about 97% of the time: Premature optimization is the root of all evil,' and I agree. It's usually not worth spending a lot of time micro-optimizing code before it's obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems. An inexperienced developer will not bother, misguidedly believing that a bit of fine-tuning at a later stage will fix any problems."

The key point here is that inexperienced developers often write code without any consideration of the performance of their code. Unfortunately, system design without any concerns about performance rarely produces systems that perform well without major rewriting. The only thing worse than premature optimization is designing a system without any consideration of system performance. The assumption that 20% of a program's code accounts for 80% of its execution time has been the downfall of many designs.


Here, I suppose we could change it to, "Premature 6σ is the root of all evil." ;-)
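
(Finding the real bottlenecks before optimizing is cheap, by the way. A minimal sketch using Python's built-in profiler; the workload is just a stand-in:

import cProfile
import pstats

def report():   # stand-in for the code under suspicion (invented)
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
report()
profiler.disable()

# Show the five most expensive calls -- optimize these, not guesses.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
)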

Thanks.

Cheers,
Scott.
New Your point #2 is pretty much my whole point
The ideas behind 6σ and TQM and so forth can be applied to the development of a new business process, but measuring bug rates in the support software is only a tiny part of the problem.
I never suggested that you shouldn't count bugs. The whole problem is that I see lots of "6σ Black Belts" who think that all they have to do is show increased numbers of test cases, and decreased defects, and everything is A-OK. The question of whether the entire process is "pointed in the right direction" doesn't enter into their calculations.