When to stop testing when using TDD?


I don't know much about Test-Driven Development (TDD), but I keep hearing that I need to start development with some test cases, then make those tests pass with the simplest possible solution, and then write more tests so the suite fails again...
But the question is: when do I stop creating new tests? How do I know that my application meets the requirements?
Shamelessly copying Kent Beck's answer to this question.
"I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong. Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order."
Code coverage tools can provide useful information about how well tested your code is. Such tools will identify code paths that have not been exercised by your tests.
In TDD, you stop writing tests when you stop writing code (or just slightly before the last code is written), unless, as mentioned, your code coverage is too low.
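As an illustration (not part of the original answer), here is a minimal sketch of measuring coverage programmatically with the third-party coverage.py package; the module name "calculator" and the "tests" directory are hypothetical.

# Requires: pip install coverage
import coverage
import unittest

cov = coverage.Coverage(source=["calculator"])  # only measure our own module
cov.start()

# Discover and run the unit tests while coverage is recording.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)  # lists the lines never exercised by the tests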
Lifecycle
If you follow Test-Driven Development to the letter, you have a five-step cycle (a minimal code sketch of the cycle follows below):
Write a test: for each unit (the smallest piece of code you can test) you write a test, in which you determine what that unit will be responsible for. You need to follow the so-called Right-BICEP checklist (right results, boundary conditions, inverse relationships, cross-check results, error conditions, performance characteristics).
Run the tests and see them fail: in this step the newly written tests should fail. This is the so-called red step, as the unit tests show up in red. If the tests do not fail, you probably didn't write them correctly.
Implement the unit: write the code, even if you hard-code it; the point of this step is to get to the next, green, step.
Run the tests and see them pass: the green step, as all the tests should pass. If they don't, you're not done writing code.
Done? No, refactor!
(TDD lifecycle diagram, image from Wikipedia: http://upload.wikimedia.org/wikipedia/en/9/9c/Test-driven_development.PNG)
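To make the red/green steps concrete, here is a minimal sketch using Python's standard unittest module. The function leap_year and its behaviour are made up for illustration and are not part of the original answer; the test includes a boundary condition in the spirit of the Right-BICEP checklist.

import unittest

# Step 1 (red): write the tests first. They fail until leap_year() exists
# and behaves correctly.
class LeapYearTest(unittest.TestCase):
    def test_ordinary_year(self):
        self.assertFalse(leap_year(2019))

    def test_century_boundary(self):
        self.assertFalse(leap_year(1900))   # divisible by 100 but not 400
        self.assertTrue(leap_year(2000))    # divisible by 400

# Steps 3-4 (green): the simplest implementation that makes the tests pass.
# Step 5 (refactor): clean it up while keeping the tests green.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()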
What to test
Test all units until you reach complete code coverage (wishful thinking in most cases; you would have to have unit tests for severe failure scenarios like tripping over the power cable, running out of disk space, a flood, etc.). If you reach the 90% ballpark, you're more than done.
If you find a bug in your code, create a unit test that reproduces it, then fix the code. Repeat.
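A hedged sketch of what pinning a bug with a test can look like (not from the original answer): parse_price and its module are hypothetical names for the code where the bug lives.

import unittest
from shop.pricing import parse_price  # hypothetical module under test

class ParsePriceRegressionTest(unittest.TestCase):
    def test_empty_string_does_not_crash(self):
        # Written first so it fails (red) and reproduces the reported bug;
        # after the fix it stays in the suite as a regression guard.
        self.assertEqual(parse_price(""), 0)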
If your code has a GUI, try any automated functional testing tool you can find. In my case, Selenium or JMeter would do the trick. Selenium is a good tool, as it allows you to record your tests in Firefox and then replay them on demand.
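For illustration, a rough sketch of replaying such a scenario in code with Selenium's Python bindings (assuming the selenium package and geckodriver are installed); the URL, element names, and credentials are hypothetical.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("alice")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The functional expectation the recorded scenario is checking.
    assert "Dashboard" in driver.title
finally:
    driver.quit()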
Continuous integration
Because running all the tests all the time is time-consuming, you can delegate most of these mundane tasks to a continuous integration server that will run them for you at predefined intervals. This does not mean you do not have to run tests before you commit your code: you still need to run the tests for the part of the system you were changing; if the system is large, running all unit tests locally would be counterproductive. The CI server will inform you of any failures, and you will need to buy drinks for all of your colleagues on top of fixing the code you broke ;)
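One way to run only the tests for the part you touched before committing, using the standard unittest loader (a sketch, assuming a hypothetical tests/payments directory in your project layout; the CI server still runs the full suite later):

import unittest

# Discover and run just the subsystem's tests, and exit non-zero on failure
# so the command can gate a commit hook or a local pre-push script.
suite = unittest.defaultTestLoader.discover("tests/payments", pattern="test_*.py")
result = unittest.TextTestRunner(verbosity=2).run(suite)
raise SystemExit(0 if result.wasSuccessful() else 1)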
You stop writing tests when you have no more functionality to add to your code. There may be some additional edge cases you want to make sure are covered, but beyond that, when you don't have anything more to have your code do, you don't have any more TDD tests to write (Acceptance and QA tests are a different story).
There are certain areas you may find difficult to test, such as GUIs and data access, but apart from that you write tests until your objectives are met.
In an ideal world where I follow eXtreme Programming practices (not just TDD), my customer is supposed to provide me with some automated functional tests. When such a test goes green, I stop writing tests and go to my customer to ask for more functional tests that do not pass (because the tests are the specification, and if my customer does not provide me with failing tests, I won't know what to do).
I could explain it another way, aimed at a more practical world. At XP France we organize TDD Dojos on a regular basis (once a week); you could call them TDD training sessions. There we practice TDD on toy problems. The idea is to propose a test that fails, then write code to make it pass, and never to propose a test that passes without new code.
Whoever proposes a test that goes green without any code has to buy beers for the others. That's one way to know it's time to stop testing: when you are no longer able to write tests that fail, you are finished. (Anyway, coding after drinking is bad practice.)
