tdd


How do I get sufficient detail in planning and estimation when using TDD?


In the past, when planning a 2-week iteration, I have taken a user story:
Story: Rename a file
And broken it into tasks which were then estimated in hours:
Story: Rename a file
Task: Create Rename command (2h)
Task: Maintain a list of selected files (3h)
Task: Hook up to F2 key (1h)
Task: Add context menu option (1h)
I would then pick a task, work on it, and track the time spent on it, then repeat the process with another task. At the end of the iteration I could look at the time spent on each task, compare it to the estimate, and use this information to improve future estimates.
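For example, the end-of-iteration comparison might be as simple as the following sketch (the task names come from the story above, but the hours are made up):

```python
# A rough sketch of the end-of-iteration comparison described above.
# The figures are illustrative, not real tracking data.

tasks = {
    "Create Rename command":             {"estimate_h": 2, "actual_h": 3.5},
    "Maintain a list of selected files": {"estimate_h": 3, "actual_h": 2.5},
    "Hook up to F2 key":                 {"estimate_h": 1, "actual_h": 1.0},
    "Add context menu option":           {"estimate_h": 1, "actual_h": 2.0},
}

total_estimate = sum(t["estimate_h"] for t in tasks.values())
total_actual = sum(t["actual_h"] for t in tasks.values())

# A simple calibration factor: scale future estimates by this ratio.
print(f"Estimated {total_estimate}h, spent {total_actual}h, "
      f"ratio {total_actual / total_estimate:.2f}")
```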
When working entirely test-driven, the only work that is clearly defined ahead of time is the set of acceptance tests that kick off development, and on a user story that covers a large amount of work, the scope of an acceptance test can be too broad to give a good estimate.
So I can take a guess at the tasks that will end up being completed (as before), but the time spent on them is far more difficult to track, because the tests make you work in tiny vertical slices, often touching a bit of each task at the same time.
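To make that breadth concrete, the acceptance test for this story might look something like the sketch below (pytest-style; FileExplorer and its methods are hypothetical stand-ins for the real application objects):

```python
# A rough sketch of a broad acceptance test for the "Rename a file" story.
# FileExplorer and its methods are hypothetical stand-ins for the application
# under test; tmp_path is pytest's built-in temporary-directory fixture.

def test_user_can_rename_a_selected_file(tmp_path):
    original = tmp_path / "report.txt"
    original.write_text("contents")

    explorer = FileExplorer(root=tmp_path)   # assumed application facade
    explorer.select(original)                # touches "maintain selected files"
    explorer.press_key("F2")                 # touches "hook up to F2 key"
    explorer.type_name("report-final.txt")
    explorer.confirm()

    assert not original.exists()
    assert (tmp_path / "report-final.txt").read_text() == "contents"
```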
Are there any techniques I could employ to give more detailed estimates and accurately track time when performing TDD? I am using TargetProcess, which encourages splitting user stories into tasks as outlined above, so keeping things in that format would be helpful.
In agile both tasks and estimates are fluid things that change all the time.
So you might start with (bear in mind that these are very loose examples; the figures are time spent / estimated time remaining):
Story: Rename a file
Task: Investigate Problem and break down (0d/5d)
The first developer(s) pick up that task and break it down as they go:
Story: Rename a file
Task: Investigate Problem and break down (4h/complete)
Task: 1st part (0d/2d)
Task: 2nd part (0d/3d)
Then, as they progress, these estimates become more accurate. New tasks get added and split out as they emerge:
Story: Rename a file
Task: Investigate Problem and break down (4h/complete)
Task: 1st part (4h/7h)
Task: 2nd part (1h/20h)
Task: new task realised while working on x (0h/5h)
It doesn't matter whether you are using Scrum, Crystal, XP, TDD or any other agile variant - they all rely on fluid estimations.
The fact is that you never know how long something is going to take - you just take your best guess and revise it every day. You'll never get a process where there are no surprises, but with agile you manage their impact.
For instance suppose something nasty comes up:
Story: Rename a file
Task: Investigate Problem and break down (4h/complete)
Task: 1st part (10h/complete)
Task: 2nd part (10h/3h)
Task: new task realised while working on x (3h/1h)
Task: resolve messy issue found while working on y (0h/5d)
The story is now taking longer than expected, but everyone knows about it, knows why, and you can handle it.
Your tasks and their estimates are constantly changing as the work gets done. A burndown chart is a good indicator of how much is left to do across the team. I wouldn't bother with velocity at first, but if you do use it, it compares the amount done between iterations, giving you some idea of a project's momentum. Velocity only works when iteration length, team size and the classification of stories (size, difficulty, complexity, etc.) are very consistent, so I'd start with getting the burndown right each iteration and then move on to velocity.
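As a rough illustration of how those (spent / remaining) figures feed a burndown, here is a minimal sketch; the Task structure and the numbers are assumptions, not any particular tool's data:

```python
# A minimal sketch of turning (spent / remaining) task estimates into
# burndown points. The Task class and the sample figures are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    spent_h: float
    remaining_h: float   # re-estimated every day, so it can grow as well as shrink

def remaining_work(tasks):
    """Total remaining hours for a story - one point on the burndown chart."""
    return sum(t.remaining_h for t in tasks)

day_1 = [Task("Investigate and break down", 4, 0),
         Task("1st part", 4, 7),
         Task("2nd part", 1, 20)]

day_2 = [Task("Investigate and break down", 4, 0),
         Task("1st part", 10, 0),
         Task("2nd part", 10, 3),
         Task("Resolve messy issue found while working on y", 0, 40)]  # new work pushes the line up

for day, tasks in enumerate([day_1, day_2], start=1):
    print(f"Day {day}: {remaining_work(tasks)}h remaining")
```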
We at TargetProcess use simpler tasks for stories:
Story: Rename a file
Task: Specification (2h)
Task: Development (14h)
Task: Testing (6h)
Task: User Documentation update (2h)
If the Development task takes more than 16 hours, that is a sign to split it into several smaller tasks. In fact, we don't usually create tasks shorter than 2-3 hours.
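A tiny sketch of that splitting heuristic, assuming the 16-hour threshold above and a made-up task list:

```python
# Flag tasks estimated over 16 hours as candidates for splitting.
# The task list is illustrative, not TargetProcess data.

MAX_TASK_HOURS = 16

tasks = {
    "Specification": 2,
    "Development": 14,
    "Testing": 6,
    "User Documentation update": 2,
}

too_big = [name for name, hours in tasks.items() if hours > MAX_TASK_HOURS]
print(too_big or "No tasks need splitting")
```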
