For a start, you're referring to an article on Iraq. Perhaps [http://spectrum.ieee.org/feb06/2831/3|page 3] of that article will refresh your memory. It doesn't matter what price the Ministry is willing to offer - there is no way to deliver the natural gas to where it is needed.

So while I understand perfectly well what you were trying to express with that example, you misremembered the facts. The case you noted isn't actually an example of what you thought it was. Sure, screwed-up price incentives are a big part of Iraq's energy crisis. But at a different stage - they need to give consumers a realistic price for electricity so that people will conserve electricity a bit. If people conserved just a bit, their electric output would stretch a lot farther than it does.

<hr>

Now the entire cost versus revenue thing. I used cost and revenue in my post because I was responding to http://z.iwethey.org/forums/render/content/show?contentid=253146 where you said we should measure cost and revenue. Given that I was responding to a post where you bring up cost and revenue as the critical factors, it made sense to me to point out how directly measuring them is going to lead you astray.

<hr>

Now let's go to your central point.

The problem with your thesis is that while it sounds great on paper to just measure the right thing, it is impossible in practice to nail down what that is. Try as you like, I guarantee that you won't come up with any unambiguous measurement that captures the right thing. And in the process of trying you'll introduce so many potential fudge factors that it will be impossible to do an apples-to-apples comparison of anything.

Don't believe me? Well, let me make an attempt so you can see how it goes.

Any decent accountant would tell you that the problem with measuring cost and revenue directly is that you're doing cash-based accounting. It is very easy to manipulate figures with cash-based accounting, and cash misses a whole ton of important factors. What you actually want to do is accrual accounting.

So let's try to do accrual accounting on software development. Significant assets we gain from a software project are the libraries we are often left with, which we can reuse, and the knowledge we gain that may help future projects. A significant cost we take on is the ongoing maintenance cost of supporting the existing project. All three factors are very important. (For instance, ongoing maintenance typically costs more than the initial delivery of the project.) However, when a project finishes, <i>none</i> of them can be reliably estimated. So our attempted measurement just winds up with a series of big question marks. Attempt to fill in those question marks, and you'll find that your measurements are completely dominated by the assumptions that went into those estimates.
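To make that concrete, here is a toy back-of-the-envelope sketch (in Python, with every figure invented purely for the sake of argument) of what happens when you try to put an accrual-style value on a just-finished project. The delivery cost is the only number you actually know; the maintenance, support lifetime, and reuse figures are guesses, and the guesses drive the result:

<pre>
# Toy illustration: an "accrual" value for a just-finished project.
# The delivery cost is known; annual maintenance, support lifetime, and
# reuse value are guesses. (All figures are invented.)

def project_value(delivery_cost, annual_maintenance, years_supported, reuse_value):
    """Assumed reuse benefit minus delivery cost minus assumed lifetime maintenance."""
    return reuse_value - delivery_cost - annual_maintenance * years_supported

delivery_cost = 500_000  # the one number we actually have

# Three equally defensible sets of guesses about the unknowns:
scenarios = {
    "optimistic":  dict(annual_maintenance=50_000,  years_supported=3, reuse_value=400_000),
    "middling":    dict(annual_maintenance=150_000, years_supported=5, reuse_value=150_000),
    "pessimistic": dict(annual_maintenance=300_000, years_supported=8, reuse_value=0),
}

for name, guesses in scenarios.items():
    print(f"{name:12s}{project_value(delivery_cost, **guesses):>12,}")

# optimistic      -250,000
# middling      -1,100,000
# pessimistic   -2,900,000
</pre>

Same project, same known delivery cost, and the "measurement" swings by more than a factor of ten depending on which set of guesses you prefer. Pick your assumptions and you can make the number say almost anything - which is exactly the problem.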
And that is the whole point of looking at bug counts. Sure, it is an imperfect measure. But it is better than nothing, and better than any easy alternative that you're likely to come up with. Furthermore, measuring bug counts has the following very concrete advantages:
<ol>
<li> Bug counts are a good measure of software reliability. This is something that people tend to value fairly highly.
<li> Bug counts are a fairly good proxy for the cost of ongoing maintenance. Given that maintenance typically is the bulk of the cost of software, this makes bug counts strongly correlated with the real cost of development.
<li> In studies, developers who are asked to optimize for reliability do pretty well on most other measures of the software development process, including development speed and software speed. By contrast, developers who attempt to optimize for other characteristics tend to do well on their chosen metric, but fairly badly on most other metrics.
</ol>
Therefore reducing bug counts pretty directly improves two key software characteristics (reliability and maintenance cost), while tending to make you reasonably good at other important characteristics (development speed and software speed). To the best of my knowledge, focusing on any other simple metric will give far more mixed results. And trying to focus on a complex metric opens up so many grey areas that you have no hope of getting a clear understanding of, or buy-in on, what you're trying to improve. (And little hope that you're actually measuring something that does what you really want it to do.)

So focusing on reducing bug counts doesn't sound like a particularly stupid idea to me. (Perhaps I've just read - and believed - too much Steve McConnell...)

Cheers,
Ben