Rapid Story Estimation Techniques

In my 3 years as a Scrum Master, it’s safe to say I’ve helped facilitate a few estimating sessions. Despite my best efforts, some of these were long (2 hr+) sessions that could lead to team frustration, low engagement and sometimes a loss of confidence in the process.

As a Scrum Master, it’s a terrible feeling when you have a team sit through a lengthy session that no one is enjoying just to get some estimates that boil down to calculating a single date. In my opinion every Agile ceremony should be efficient, valuable and most importantly enjoyable for the team.

Over the years I’ve discovered a couple of techniques that achieve not only speed but also accuracy during estimation. They were so successful that I wanted to share them with others.

 

Important things to understand first:

Story slicing and compilation is key!

If I asked you to estimate the effort of drawing each of the pictures below, which would be easier to estimate?

[Image: geometric_shapes.png]

It’s not just the size of the estimate that goes up when a story is large, it’s the time to estimate as well. Story creation is a science in itself, and I don’t want to go into detail here, but it’s important to understand key factors that will aid in reducing estimating time.

Understanding the difference between a vertically sliced story and a horizontally sliced one is also important. Vertically sliced stories are cut based on value to the user. Horizontal stories are cut based on system components or tasks. For example:

  • Vertical story: As a user I want to change my profile picture
  • Horizontal story: I want a controller class that responds with 400 on system error

There are many reasons why vertically sliced stories are better (easier to prioritise etc), but for now, note that the following rapid estimation techniques only work with vertically sliced stories.

Disclaimer

Neither of the following estimating techniques have been proven outside of my own teams. They may already exist somewhere on the internet (although I couldn’t find them). The purpose of this article is to share what worked well for myself and my teams so that others may benefit.

 

Tee-Shirt Lookup Table Estimating

This technique evolved naturally in one of my teams. It worked fantastically well for the team, but it may not be for everyone. First up you’ll need to break your system down into common elements. My team at the time built microservices, and the common elements for each story were (note this list is not exhaustive):

Element       Responsibilities
Controller    • Validates inputs
              • Handles throwing errors
              • Transforms basic objects
Service       • Applies business logic
              • Manipulates data
Repo          • Interfaces with and retrieves data from a DB, internal or external system
              • Handles retrieval errors

It’s important to keep the elements roughly the same size in terms of effort for the average story, and to ensure they are well defined. It’s possible to shift responsibilities around initially to balance the elements, but doing so will throw off your estimating a little each time, so avoid doing it often.

Next for a story, apply a tee-shirt sizing (Small, Medium, Large) estimate to each element:

Example Story: As a user I want to view my account balance

The team would note that there is a single, already validated input (the account id) and no logic or data manipulation to apply. The data would, however, come from a system not previously interfaced with, so there could be some difficulty there. Here are their estimates:

  • Controller: S
  • Service: S
  • Repo: L

Now convert those sizes into a Fibonacci number using a tee-shirt lookup table:

Tee-Shirt Sizes    Estimate
SSS                1
SSM                2
SSL                3
SMM                3
SML                5
MMM                5
SLL                8
MML                8
MLL                13
LLL                20

In this case the story estimate would be 3 (SSL)! The process is made easier if you find a story the team would consider MMM. Then the rest is just relative comparison.
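
If it helps to see the lookup in code, here’s a minimal sketch in Python. It assumes the three elements described above; the function and variable names are purely illustrative.

```python
# A minimal sketch of the tee-shirt lookup, using the table values above.
FIBONACCI_LOOKUP = {
    "SSS": 1, "SSM": 2, "SSL": 3, "SMM": 3, "SML": 5,
    "MMM": 5, "SLL": 8, "MML": 8, "MLL": 13, "LLL": 20,
}

SIZE_ORDER = {"S": 0, "M": 1, "L": 2}  # so sizes sort S < M < L

def story_points(controller: str, service: str, repo: str) -> int:
    """Convert one tee-shirt size per element into a Fibonacci estimate."""
    key = "".join(sorted((controller, service, repo), key=SIZE_ORDER.get))
    return FIBONACCI_LOOKUP[key]

# Example story: "As a user I want to view my account balance"
print(story_points(controller="S", service="S", repo="L"))  # -> 3
```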

I know what you’re thinking: this is waaaaay more complex than good old planning poker! How is this supposed to save us time?

This method works by helping the team relate the estimate easily to the work involved. The breakdown into common elements makes relative comparison much easier. It can also vastly accelerate discussion during estimating, because you’re not trying to describe an entire solution, but rather an individual element. It’s kind of like performing task breakdown and story estimation in the same step.

As mentioned, it may not be for every team. This technique requires a bit of groundwork initially to break your system down into common elements. Large and complex systems would be challenging. Ideally you would aim for three common elements, but more would also work. You’d need to construct your own tee-shirt lookup table though.

Time From User Estimating

This estimating technique was taught to me by an Agile Coach (John Di Grazia). First up you’ll need to create a basic architectural diagram of your system. Assuming your stories are cut vertically, you can number the systems with increasing complexity the further they are from the user. For example:

[Image: ArchitectureEstimationExample.png]

The estimates on your diagram don’t need to be exactly right on your first pass; they can be refined later (see next section). The important part to understand is that you never add the numbers, only ever pick the largest. It works on the basis that the more layers the change passes through, the more complex the testing, and potentially the change itself. Ideally you would also split any story that touched more than one finish point. For example, a story that touched “Back-end X” and “Back-end Y” should probably be split.
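
As a rough Python sketch of the “never add, only pick the largest” rule: the tier names and most of the numbers below are placeholders for whatever appears on your own diagram; only the front end (2) and remote DB (13) come from the worked examples that follow.

```python
# Illustrative tiers, numbered with increasing complexity away from the user.
TIER_ESTIMATE = {
    "front-end": 2,
    "api-gateway": 3,
    "back-end-x": 5,
    "back-end-y": 5,
    "db-local": 8,
    "db-remote": 13,
}

def estimate(tiers_touched: list[str]) -> int:
    """Return the estimate of the deepest (largest-numbered) tier the change reaches."""
    return max(TIER_ESTIMATE[tier] for tier in tiers_touched)

# "As a user I want to change my address" reaches all the way to the remote DB:
print(estimate(["front-end", "api-gateway", "back-end-x", "db-remote"]))  # -> 13

# "As a user I want a personal greeting message upon login" stays in the front end:
print(estimate(["front-end"]))  # -> 2
```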

Example Story 1: As a user I want to change my address.

The team would point out that the addresses are stored in the remote DB and would require an update operation to make it happen. Without a second thought, this would be a 13.

Example Story 2: As a user I want a personal greeting message upon login.

The team would point out that the data is already available to the front end and only some minor front-end logic is required to show it when logging in. This story would be a 2.

As you can see, estimating is quite fast. It’s also consistent across stories (a nice bonus).

I used this technique with a large team (14 people) and the average time to estimate a story dropped from 20 mins to 2 mins, with negligible accuracy loss. In fact the first time we used it, I kindly asked the team to re-estimate 20 already estimated stories (done with planning poker). The total difference in story points between the two methods was only 2.

The downside to this technique is that generally only the engineers/devs can perform the estimate, as it requires knowledge of where the change will be implemented. This led to the testers in the team feeling as though they didn’t really have a voice during estimation (not great). I wouldn’t expect this to be the case with every team though.

 

Refining the Estimation Process

Refining your process to improve estimation is important no matter what techniques you are using. I find the best way to do this is to monitor cycle time.

Cycle time

If you want to get good at estimating, it’s really important to understand this metric. The cycle time of a story is the time from when the story is started to when it is considered done. The definition of done varies from organisation to organisation, but as long as it’s consistent within your team, then cycle time will be too. Most digital Agile story tracking applications (e.g. Jira) provide this metric, but it can be tracked manually with a bit of effort.
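
If you do track it manually, the calculation itself is trivial. Here’s a minimal sketch with illustrative timestamps (the dates are made up):

```python
from datetime import datetime

def cycle_time_days(started: str, done: str) -> float:
    """Elapsed time from 'started' to 'done', in days."""
    delta = datetime.fromisoformat(done) - datetime.fromisoformat(started)
    return delta.total_seconds() / 86400  # seconds per day

# Story started 1 March 09:00, considered done 4 March 14:00.
print(round(cycle_time_days("2023-03-01T09:00", "2023-03-04T14:00"), 1))  # -> 3.2
```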

Making sense of the data

I’m going to use Jira for the following examples, but feel free to apply it to whatever you’re familiar with.

In order to collect data on cycle times per story point amount, you’re going to need to create a few “Quick filters”. Project quick filters can be found under “Board settings”:

[Image: BoardSettings.png]

Now create a filter for each Fibonacci story point amount:

[Image: 1PointStoryFilter.png]

To get the cycle times, you’ll need to go to “Reports” -> “Control Chart”. Choose a date range that aligns with your sprint, then select one of your newly created quick filters:

[Image: SelectFilter.png]

At the top left of the graph you should see the cycle time stats:

[Image: CycleStats.png]

Record the average cycle time (in days) for each story point filter and track it from sprint to sprint. E.g.:

          Sprint 1   Sprint 2   Sprint 3   Sprint 4   Sprint 5   Avg Total
1 Point   3.2        3.7        2.8        3.3        3.1        3.22
2 Point   3.9        4.7        4.2        4.5        3.5        4.16
3 Point   5          4.9        5.5        5.9        4.7        5.2
5 Point   6.6        7.1        7.2        7.7        6.9        7.1
8 Point   8.1        8.9        9.5        9.3        9.1        8.98

For the next part you may need Excel (or equivalent) to graph these stats:

[Image: AverageCycleTimeGraph.png]

Hopefully you’ll see a clear delineation (in sequential order) for each story point amount. Don’t stress if you don’t; it just means it’s time to do some refinement.

I should point out a mistake I made early on when using this form of analysis: thinking that a 2 point story should have double the cycle time of a 1 point story, a 3 point story triple, and so on. This is not always the case. You’ll find that no matter how hard your team works, there is a minimum baseline cycle time for all stories. This is usually the overhead (even for a minor text change) of getting something through the build/test pipeline and into production. For now you can just take the smallest cycle time you have out of all stories and consider it the baseline (e.g. 1.9 days). Now subtract that baseline from your average totals and compare:

          Avg Total (before)   Avg Total (after subtracting the 1.9 day baseline)
1 Point   3.22                 1.32
2 Point   4.16                 2.26
3 Point   5.2                  3.3
5 Point   7.1                  5.2
8 Point   8.98                 7.08
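
If you’d rather script this step than maintain a spreadsheet, here’s a small sketch that reproduces the example numbers above; swap in your own per-sprint averages and baseline.

```python
# Average cycle time in days per sprint, keyed by story point size (example data above).
sprint_cycle_times = {
    1: [3.2, 3.7, 2.8, 3.3, 3.1],
    2: [3.9, 4.7, 4.2, 4.5, 3.5],
    3: [5.0, 4.9, 5.5, 5.9, 4.7],
    5: [6.6, 7.1, 7.2, 7.7, 6.9],
    8: [8.1, 8.9, 9.5, 9.3, 9.1],
}
BASELINE_DAYS = 1.9  # smallest cycle time across all stories (pipeline overhead)

for points, samples in sprint_cycle_times.items():
    avg = sum(samples) / len(samples)
    print(f"{points} point: avg {avg:.2f} days, {avg - BASELINE_DAYS:.2f} after the baseline")
```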

Hopefully you should now be able to see a clear Fibonacci step increase in your story point cycle time averages. If it’s not perfect, then don’t stress! Just remember that we’re not robots and there is still room for refinement. If my team had stats like the example above, I’d have a smile from ear to ear.

If using the Tee-Shirt Lookup Table estimation technique, I don’t recommend changing the table. Instead I would work with the team to potentially change what they consider to be a Small, Medium and Large. You may also need to shift some responsibilities between your elements. This may take a few sprints and retros to get right.

For the Time From User estimation technique, I’d recommend using labels on your stories to help refine the estimates to the next level. You only need to label it for whatever tier was used for the estimate (e.g. “DBLocal”, “BackendX” etc). Create quick filters for the labels and repeat the average cycle time gathering technique for all labels.
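
How you turn those per-label cycle times into new tier numbers is up to you. One possible approach, sketched below with made-up labels and numbers, is to scale each label’s baseline-adjusted average against a reference tier the team already trusts; this is just my interpretation, not a prescribed formula.

```python
BASELINE_DAYS = 1.9

label_avg_cycle_time = {  # from the per-label quick filters, in days (illustrative)
    "DBLocal": 7.9,
    "BackendX": 8.9,
}

# Assumed reference: stories reaching the remote DB (13 points) average ~14.9 days.
REF_POINTS, REF_DAYS = 13, 14.9

for label, days in label_avg_cycle_time.items():
    suggested = REF_POINTS * (days - BASELINE_DAYS) / (REF_DAYS - BASELINE_DAYS)
    print(f"{label}: roughly {suggested:.0f} points")
```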

You may find that “DB Local” changes are closer to a 6 and that “Back-end X” is in fact closer to a 7. Update your architectural guide with help from the team to better represent those stats:

[Image: ArchitectureEstimationExampleRefined.png]

Don’t stress if your numbers don’t align with Fibonacci; it’s more important that you have an accurate representation of the average complexity of your system. Probably best not to report it to the Agile police though 😛

 

Summary

As mentioned above, these rapid estimation techniques may not be for every team. Feel free to utilise the refinement methods though; they will work even with planning poker.

Although the refinement process can be a little tedious at first, the payoff in accuracy will be worth it. Not only will you have super fast estimation sessions, but your planning will be laser accurate (almost). This should hopefully garner smiles from both your team and the business 🙂
