When to stop testing when using TDD?


I don't know much about Test-Driven Development (TDD), but I keep hearing that I need to start development with some test cases, then make those tests pass with the simplest possible solution, and then write more tests that make the suite fail again...
But the question is: when do I stop creating new tests? Is it when I know that my application meets the requirements?
Shamelessly copying Kent Beck's answer to this question:
"I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order."
Code coverage tools can provide useful information about how well tested your code is. Such tools will identify code paths that have not been exercised by your tests.
In TDD, you stop writing tests when you stop writing code (or, rather, just slightly before the last code is written), unless, as mentioned, your code coverage is too low.
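To make that concrete, here is a minimal sketch of the kind of gap a coverage tool reports, assuming Python's unittest plus coverage.py; the function and test names are illustrative, not from the answer above:

```python
import unittest

# Illustrative function (not from the original answer).
def describe_age(age: int) -> str:
    if age < 18:
        return "minor"
    return "adult"  # this branch is never reached by the test below

class DescribeAgeTest(unittest.TestCase):
    def test_minor(self):
        self.assertEqual(describe_age(10), "minor")
    # There is no test for the "adult" branch, so a coverage run
    # (e.g. `coverage run -m unittest` followed by `coverage report`)
    # would flag the `return "adult"` line as unexercised.

if __name__ == "__main__":
    unittest.main()
```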
Lifecycle
If you follow Test-Driven Development to the letter, you have a five-step cycle:
1. Write a test: for each unit (the smallest piece of code you can test) you write a test in which you determine what that unit will be responsible for. You need to follow the so-called Right-BICEP checklist (right results, boundary conditions, inverse relationships, cross-check results, error conditions, performance characteristics).
2. Run the tests and see them fail: in this step the newly written tests should fail. This is the so-called red step, as the unit tests show up in red. If the tests do not fail, you probably didn't write them correctly.
3. Implement the unit: write the code, even if you hard-code it; the point of this step is to get to the green step that follows.
4. Run the tests and see them pass: the green step, as all the tests should now pass. If they don't, you're not done writing code.
5. Done? No, refactor! Clean up the implementation while keeping the tests green, then repeat the cycle with the next failing test.
[TDD lifecycle diagram - image from Wikipedia: http://upload.wikimedia.org/wikipedia/en/9/9c/Test-driven_development.PNG]
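As a minimal sketch of that loop, assuming Python's unittest (the fizzbuzz function and test are illustrative, not part of the original answer):

```python
import unittest

# Step 1 (red): the test is written first and fails, because fizzbuzz
# does not yet behave as required.
class FizzBuzzTest(unittest.TestCase):
    def test_multiple_of_three_is_fizz(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

# Step 3 (green): the simplest thing that makes the test pass,
# even if it is hard-coded.
def fizzbuzz(n: int) -> str:
    return "Fizz"

# Step 5 (refactor/repeat): add a new failing test, e.g. that
# fizzbuzz(10) == "Buzz", then generalise the implementation.

if __name__ == "__main__":
    unittest.main()
```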
What to test
Test all units until you reach complete code coverage (wishful thinking in most cases; you would have to have a unit test for severe failure scenarios like tripping over the power cable, running out of disk space, a flood, etc.). If you reach the 90% ballpark you're more than done.
If you find a bug in your code, create a unit test that reproduces it, then fix the code. Repeat.
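For example, a hypothetical bug and function, sketched with Python's unittest:

```python
import unittest

def total_price(items):
    # Hypothetical bug: an earlier version crashed on an empty cart.
    # The regression test below was written first (and failed), then
    # the code was fixed so the test now passes.
    return sum(price for _, price in items)

class TotalPriceRegressionTest(unittest.TestCase):
    def test_empty_cart_totals_zero(self):
        self.assertEqual(total_price([]), 0)

if __name__ == "__main__":
    unittest.main()
```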
If your code has a GUI, try any automated functional testing tool you can find. In my case Selenium or JMeter would do the trick. Selenium is a good tool, as it allows you to record your tests in Firefox and then replay them on demand.
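Such tests can also be written as scripts rather than recorded; a hedged sketch using Selenium's Python bindings (Selenium 4 style API; the URL and element locators are placeholders) might look like this:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive a real Firefox instance through a login flow and assert on the result.
driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")           # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("demo")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()      # placeholder element id
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```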
Continuous integration
Because running all the tests all the time is time consuming, you can delegate most of these mundane tasks to a continuous integration server that will run them for you at predefined intervals. This does not mean that you do not have to run tests before you commit your code: you still need to run the tests for the part of the system you were fixing, since in a large system running all unit tests locally would be counterproductive. The CI server will inform you of any failures, and you will need to buy drinks for all of your colleagues on top of fixing the code you broke ;)
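For instance, running only the tests for the area you touched before committing could look like this (a sketch with Python's unittest; the module name "tests.test_billing" is illustrative):

```python
import unittest

# Load and run a single test module instead of the whole suite;
# the CI server still runs everything on its own schedule.
suite = unittest.defaultTestLoader.loadTestsFromName("tests.test_billing")
unittest.TextTestRunner(verbosity=2).run(suite)
```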
You stop writing tests when you have no more functionality to add to your code. There may be some additional edge cases you want to make sure are covered, but beyond that, when there is nothing more for your code to do, there are no more TDD tests to write (acceptance and QA tests are a different story).
There are certain areas you may find difficult to test, such as GUIs and data access, but apart from that you write tests until your objectives are met.
In an ideal world, where I would follow eXtreme Programming practices (not just TDD), my customer is supposed to provide me with automated functional tests. When those tests go green, I stop writing tests and go back to my customer to ask for more functional tests that do not pass (because tests are the specification, and if my customer does not provide me with failing tests I won't know what to do).
I could explain it another way, aimed at a more practical world. At XP France we organize TDD Dojos on a regular basis (once a week); you could call them TDD training sessions. There we practice TDD on toy problems. The idea is to propose a test that fails, then write code to make it pass. Never propose a test that already passes without any new code.
Whoever proposes a test that goes green without any code has to buy beers for the others. So that's one way to know it's time to stop testing: when you are no longer able to write a test that fails, you are finished. (Anyway, coding after drinking is bad practice.)
