What is Best for Defect Rate Tracking? Defects per KLOC?


I'm trying to create some internal metrics to demonstrate (determine?) how well TDD improves defect rates in code.
Is there a better way than defects/KLOC? What about a language's 'functional density'?
Any comments or suggestions would be helpful.
Thanks - Jonathan
You may also consider tracking defect discovery and defect resolution rates... how long does it take to find bugs, and once they're found, how long do they take to fix? To my knowledge, TDD is supposed to improve fix times because it makes defects known earlier... right?
Any measure is an arbitrary comparison of defects to code size; so long as the comparison is similar, it should work. E.g., defects/kloc in C to defects/kloc in C. If you changed languages, it would affect the metric in any case, since the same program in another language might be less defect-prone.
Measuring defects isn't an easy thing. One would like to account for the complexity of the code, but that is incredibly messy and unpleasant. When measuring code quality I recommend:
1. Measure the current state (what is your defect rate now?)
2. Make a change (peer reviews, training, code guidelines, etc.)
3. Measure the new defect rate (have things improved?)
4. Go to 2.
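A minimal sketch of that measure-change-measure loop, assuming you record defect counts and code size per release (all numbers below are invented for illustration):

```python
def defect_rate(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

# Hypothetical before/after numbers: a baseline release,
# then a release after introducing a process change such as TDD.
before = defect_rate(defects=48, kloc=120)
after = defect_rate(defects=21, kloc=135)
print(f"before: {before:.2f} defects/KLOC, after: {after:.2f} defects/KLOC")
```

The point is the comparison over time within the same codebase and language, not the absolute number.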
If you are going to compare coders make sure you compare coders doing similar work in the same language. Don't compare the coder who works in the deep internals of your most complex calculation engine to the coder who writes the code that stores stuff in the database.
I try to make sure that coders know that it is the process being measured, not the coders. This helps improve the quality of the metrics.
I suggest using the ratio between two times:
the time spent fixing bugs
the time spent writing other code
This seems valid across languages...
It also works if you only have a rough estimate for some big code base. You can still compare it to the new code you are writing, to impress your management ;-)
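A rough sketch of that ratio, assuming you log hours against bug fixing versus other development work (the timesheet totals are hypothetical):

```python
def fix_ratio(hours_fixing, hours_developing):
    """Fraction of total effort spent fixing bugs.

    Language-neutral: it compares time spent, not lines of code.
    """
    return hours_fixing / (hours_fixing + hours_developing)

# Hypothetical totals for one iteration.
ratio = fix_ratio(hours_fixing=30, hours_developing=170)
print(f"{ratio:.0%} of effort went to bug fixing")
```

A falling ratio over successive iterations would suggest the process change is paying off.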
I'm skeptical of all LOC-related measurements, not just because of different relative expressiveness of languages, but because individual programmers will vary enough in the expressiveness of their code as to make this metric "fuzzy" at best.
The things I would measure in the interests of project management are:
Number of open defects on the project. There's no single scalar that can tell you where the project is and how close it is to a releasable state, but this is still a handy number to have on hand and watch over time.
Defect detection rate. This is not the rate of introduction of new defects into the system, but it's probably the closest proxy you'll find.
Defect resolution rate. If this is less than the detection rate, you're falling behind - if it's greater, you're getting ahead.
All of these numbers are more useful if you combine them with severity information. A product with 20 minor bugs may well be closer to release than one with 2 crashing bugs. If you're clearing the minor bugs but not the severe ones, you have to get the developers to refocus their attention.
I would track these numbers per project and per developer. The reason for doing them per project should be clear. The per-developer numbers are certainly not the whole picture of an individual contributor's skill or productivity, but can point you to people who might need training or remediation.
You may also wish to tag all the tickets in your defect tracking system by project module as well (especially for larger projects), so that you can tell when critical modules are in a fragile state.
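The detection and resolution rates above can be pulled straight from ticket data. A sketch, assuming each ticket records an opened date, a closed date (or None if still open), and a severity label (the ticket log below is made up):

```python
from collections import Counter
from datetime import date

# Hypothetical ticket log: (opened, closed_or_None, severity).
tickets = [
    (date(2024, 1, 3),  date(2024, 1, 5),  "minor"),
    (date(2024, 1, 4),  None,              "critical"),
    (date(2024, 1, 10), date(2024, 1, 12), "minor"),
    (date(2024, 1, 11), None,              "minor"),
]

# Rates for one reporting week.
week = (date(2024, 1, 8), date(2024, 1, 14))
detected = sum(1 for opened, _, _ in tickets
               if week[0] <= opened <= week[1])
resolved = sum(1 for _, closed, _ in tickets
               if closed and week[0] <= closed <= week[1])

# Open-defect count broken down by severity.
open_by_severity = Counter(sev for _, closed, sev in tickets
                           if closed is None)

print(f"detected this week: {detected}, resolved: {resolved}")
print(f"open by severity: {dict(open_by_severity)}")
# If resolved stays below detected for several weeks, you're falling behind.
```

Tagging each ticket with a module field as well lets you run the same severity breakdown per module.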
Why don't you consider defects per use case, or defects per requirement? We have faced practical issues in arriving at a KLOC count.
