tdd


How do I get sufficient detail in planning and estimation when using TDD?


When planning a 2-week iteration in the past I have taken a user story:
Story: Rename a file
And broken it into tasks which were then estimated in hours:
Story: Rename a file
Task: Create Rename command (2h)
Task: Maintain a list of selected files (3h)
Task: Hook up to F2 key (1h)
Task: Add context menu option (1h)
I would then pick a task to work on, track the time spent on it, and repeat the process with another task. At the end of the iteration I could compare the time spent on each task with its estimate and use that information to improve future estimations.
When working entirely test-driven, the only work that is clearly defined ahead of time is the set of acceptance tests that kick off development, and on a user story covering a large amount of work the scope of an acceptance test can be too broad to give a good estimate.
So I can take a guess at the tasks that will end up being completed (as before), but the time spent on them is far harder to track, because the tests make you work in tiny vertical slices, often touching a bit of each task at the same time.
Are there any techniques I could employ to give more detailed estimations and accurately track time when performing TDD? I am using TargetProcess, which encourages splitting user stories into tasks as outlined above, so keeping things in that format would be helpful.
In agile, both tasks and estimates are fluid things that change all the time.
So you might start with (bear in mind that these are very loose examples):
Story: Rename a file
Task: Investigate Problem and break down (0d/5d)
The first developer(s) pick up that task and break it down as they go:
Story: Rename a file
Task: Investigate Problem and break down (4h/complete)
Task: 1st part (0d/2d)
Task: 2nd part (0d/3d)
Then, as they progress, these estimates get more accurate. New tasks get added and split out as they emerge:
Story: Rename a file
Task: Investigate Problem and break down (4h/complete)
Task: 1st part (4h/7h)
Task: 2nd part (1h/20h)
Task: new task realised while working on x (0h/5h)
It doesn't matter whether you are using Scrum, Crystal, XP, TDD or any other agile variant - they all rely on fluid estimations.
The fact is that you never know how long something is going to take - you just take your best guess and revise it every day. You'll never get a process where there are no surprises, but with agile you manage their impact.
For instance, suppose something nasty comes up:
Story: Rename a file
Task: Investigate Problem and break down (4h/complete)
Task: 1st part (10h/complete)
Task: 2nd part (10h/3h)
Task: new task realised while working on x (3h/1h)
Task: resolve messy issue found while working on y (0h/5d)
The story is now taking longer than expected, but everyone knows about it and knows why and you can handle it.
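As a rough sketch of the bookkeeping behind those (spent/remaining) pairs, here is one way the tasks above could be represented and rolled up to a story total. The class and function names are purely illustrative, not anything TargetProcess-specific, and the 5d remaining figure is converted assuming 8-hour days:

# A rough sketch of the bookkeeping behind the (spent/remaining) pairs above.
# Names and numbers are illustrative only, not TargetProcess-specific.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    spent_hours: float      # time booked so far
    remaining_hours: float  # current best guess at what is left

    def reestimate(self, spent_delta: float, new_remaining: float) -> None:
        # Book more time and revise the remaining estimate in one step.
        self.spent_hours += spent_delta
        self.remaining_hours = new_remaining

def story_totals(tasks):
    # Roll the task figures up so the story-level picture stays current.
    spent = sum(t.spent_hours for t in tasks)
    remaining = sum(t.remaining_hours for t in tasks)
    return spent, remaining

tasks = [
    Task("Investigate problem and break down", 4, 0),
    Task("1st part", 10, 0),
    Task("2nd part", 10, 3),
    Task("New task realised while working on x", 3, 1),
    Task("Resolve messy issue found while working on y", 0, 40),  # 5d at 8h/day
]
print(story_totals(tasks))   # (27, 44) - the overrun is visible immediately
tasks[4].reestimate(8, 30)   # a day booked on the messy issue, estimate revised
print(story_totals(tasks))   # (35, 34)

The point is that every time someone books time or revises a remaining figure, the story-level total moves with it, which is exactly the "fluid estimation" described above.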
Your tasks and their estimates are constantly changing as the work gets done. A burndown chart is a good indicator of how much is left to do across the team. I wouldn't bother with velocity at first, but if you do use it, it compares the 'amount done' between iterations, giving you some idea of a project's momentum. Velocity only works when you have very consistent iteration lengths, team size and classification (size, difficulty, complexity, etc.) of stories, so I'd start with getting the burndown right each iteration and move on to velocity later.
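To show what a burndown is actually built from, here is a minimal sketch assuming a hypothetical 10-day iteration; the remaining-hours figures are made up and not taken from the answer above:

# A minimal burndown sketch: total remaining hours recorded at the end of each
# day, compared against a straight "ideal" line. Figures are made up for a
# hypothetical 10-day iteration.
def ideal_burndown(total_hours, days):
    # Straight line from the initial estimate down to zero.
    return [total_hours * (1 - d / days) for d in range(days + 1)]

actual_remaining = [44, 40, 38, 39, 33, 30, 24, 20, 12, 5, 0]

for day, (ideal, actual) in enumerate(zip(ideal_burndown(44, 10), actual_remaining)):
    flag = "behind" if actual > ideal else "on track"
    print(f"day {day:2d}: ideal {ideal:5.1f}h, actual {actual:5.1f}h ({flag})")

The only input it needs is the sum of remaining hours at the end of each day, which falls straight out of the task re-estimation shown earlier.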
We at TargetProcess use simpler tasks for stories:
Story: Rename a file
Task: Specification (2h)
Task: Development (14h)
Task: Testing (6h)
Task: User Documentation update (2h)
If the Development task takes more than 16 hours, that is a sign to split it into several smaller tasks. In fact, we don't usually create tasks shorter than 2-3 hours.
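A small sketch of that splitting rule, assuming a 2-16 hour window per task; the template and the check below are my own illustration, not TargetProcess functionality:

# A sketch of the splitting rule described above (illustration only, not
# TargetProcess code): flag any task estimate outside the 2-16 hour window.
STORY_TEMPLATE = {
    "Specification": 2,
    "Development": 14,
    "Testing": 6,
    "User Documentation update": 2,
}

def needs_attention(estimate_hours):
    if estimate_hours > 16:
        return "too big - split into smaller tasks"
    if estimate_hours < 2:
        return "too small - fold into a neighbouring task"
    return None

for task, hours in STORY_TEMPLATE.items():
    issue = needs_attention(hours)
    print(f"{task}: {hours}h" + (f"  <- {issue}" if issue else ""))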
