

What is Best for Defect Rate Tracking? Defects per KLOC?


I'm trying to create some internal metrics to demonstrate (determine?) how well TDD improves defect rates in code.
Is there a better way than defects/KLOC? What about a language's 'functional density'?
Any comments or suggestions would be helpful.
Thanks - Jonathan
You may also consider tracking defect discovery and defect resolution rates... how long does it take to find bugs, and once they're found, how long do they take to fix? To my knowledge, TDD is supposed to improve fix times because it makes defects known earlier... right?
Any measure is an arbitrary comparison of defects to code size; as long as the comparison is like-for-like, it should work. E.g., defects/KLOC in C to defects/KLOC in C. If you changed languages, it would affect the metric in any case, since the same program in another language might be less defect-prone.
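
The arithmetic itself is trivial; the hard part is keeping the comparison like-for-like. A minimal sketch in Python (the defect counts and line totals are made-up numbers; in practice they would come from your tracker and a line counter):

    # Defects per KLOC: comparable only when language and kind of work match.
    def defects_per_kloc(defect_count, lines_of_code):
        """Defects per thousand lines of code."""
        return defect_count / (lines_of_code / 1000.0)

    # Hypothetical figures for two C modules of similar complexity.
    before_tdd = defects_per_kloc(defect_count=42, lines_of_code=18_500)
    after_tdd = defects_per_kloc(defect_count=11, lines_of_code=16_200)
    print(f"before TDD: {before_tdd:.2f} defects/KLOC")
    print(f"after TDD:  {after_tdd:.2f} defects/KLOC")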
Measuring defects isn't an easy thing. One would like to account for the complexity of the code, but that is incredibly messy and unpleasant. When measuring code quality, I recommend (a short sketch of this comparison follows below):
1. Measure the current state (what is your defect rate now?)
2. Make a change (peer reviews, training, code guidelines, etc.)
3. Measure the new defect rate (have things improved?)
4. Goto 2
If you are going to compare coders make sure you compare coders doing similar work in the same language. Don't compare the coder who works in the deep internals of your most complex calculation engine to the coder who writes the code that stores stuff in the database.
I try to make sure that coders know that the process is being measured, not the coders. This helps to improve the quality of the metrics.
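
A minimal sketch of the measure-change-measure loop above, in Python (the release dates, defect counts, and KLOC figures are invented; in practice they would come from your defect tracker and build records):

    from datetime import date

    # (period start, defects found, KLOC shipped) per release -- invented data.
    releases = [
        (date(2009, 1, 1), 30, 12.0),   # baseline
        (date(2009, 4, 1), 28, 11.5),   # baseline
        (date(2009, 7, 1), 17, 13.0),   # after introducing peer reviews
        (date(2009, 10, 1), 12, 12.5),  # after introducing peer reviews
    ]
    change_date = date(2009, 7, 1)

    def mean_rate(rows):
        """Average defects/KLOC over a set of releases."""
        return sum(d / k for _, d, k in rows) / len(rows)

    before = mean_rate([r for r in releases if r[0] < change_date])
    after = mean_rate([r for r in releases if r[0] >= change_date])
    print(f"defect rate before change: {before:.2f} defects/KLOC")
    print(f"defect rate after change:  {after:.2f} defects/KLOC")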
I suggest using the ratio between two times:
the time spent fixing bugs
the time spent writing other code
This seems valid across languages...
It also works if you only have a rough estimate for some big code base. You can still compare it to the new code you are writing, to impress your management ;-)
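
A back-of-the-envelope version of that ratio, assuming you log hours against bug fixing versus everything else (the categories and hours below are hypothetical):

    # Hours logged per work item, tagged by kind -- hypothetical time-tracking data.
    work_log = [
        ("feature", 24.0),
        ("bugfix", 3.5),
        ("feature", 16.0),
        ("bugfix", 8.0),
        ("refactor", 6.0),
    ]

    fix_hours = sum(h for kind, h in work_log if kind == "bugfix")
    other_hours = sum(h for kind, h in work_log if kind != "bugfix")
    print(f"fix/other time ratio: {fix_hours / other_hours:.2f}")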
I'm skeptical of all LOC-related measurements, not just because languages differ in relative expressiveness, but because individual programmers vary enough in the expressiveness of their code to make this metric "fuzzy" at best.
The things I would measure in the interests of project management are (a rough sketch of the calculations follows below):
Number of open defects on the project. There's no single scalar that can tell you where the project is and how close it is to a releasable state, but this is still a handy number to have on hand and watch over time.
Defect detection rate. This is not the rate of introduction of new defects into the system, but it's probably the closest proxy you'll find.
Defect resolution rate. If this is less than the detection rate, you're falling behind - if it's greater, you're getting ahead.
All of these numbers are more useful if you combine them with severity information. A product with 20 minor bugs may well be closer to release than one with 2 crashing bugs. If you're clearing the minor bugs but not the severe ones, you have to get the developers to refocus their attention.
I would track these numbers per project and per developer. The reason for doing them per project should be clear. The per-developer numbers are certainly not the whole picture of an individual contributor's skill or productivity, but can point you to people who might need training or remediation.
You may also wish to tag all the tickets in your defect tracking system by project module as well (especially for larger projects), so that you can tell when critical modules are in a fragile state.
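
A rough sketch of those three numbers computed from ticket open/close dates (the ticket list is invented; severity weighting and per-developer grouping would bolt on in the same way):

    from datetime import date

    # (opened, closed or None, severity) -- invented tracker export.
    tickets = [
        (date(2009, 6, 1), date(2009, 6, 4), "minor"),
        (date(2009, 6, 2), None, "crash"),
        (date(2009, 6, 10), date(2009, 6, 20), "minor"),
        (date(2009, 6, 15), None, "minor"),
    ]
    period_start, period_end = date(2009, 6, 1), date(2009, 6, 30)
    weeks = (period_end - period_start).days / 7.0

    open_now = sum(1 for _, closed, _ in tickets if closed is None)
    detection_rate = sum(1 for opened, _, _ in tickets
                         if period_start <= opened <= period_end) / weeks
    resolution_rate = sum(1 for _, closed, _ in tickets
                          if closed and period_start <= closed <= period_end) / weeks

    print(f"open defects: {open_now}")
    print(f"detected/week: {detection_rate:.1f}")
    print(f"resolved/week: {resolution_rate:.1f}")
    # resolution_rate < detection_rate means you're falling behind.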
Why don't you consider defects per use case? Or defects per requirement? We have faced practical issues in arriving at the KLOC count.
