Agile Tracking Tool Comparison

Process

Use Cost-Benefit Analysis

This page walks you through a cost-benefit analysis (CBA) of different digital Agile tracking tools (“trackers”), using downloadable, auto-calculating spreadsheets. It focuses on total cost of ownership, not just monthly license fees, and on labor savings from time-saving features. You will identify all of the costs of each tool, quantify the benefits, and see which tool provides the highest return on investment. (“You” throughout this section refers to the facilitator of the CBA process.)

At a high level, here is the logic:

  1. Raters evaluate the ease of use of each tool on a range of features important in FuSS.
  2. Their ratings are totaled, averaged, and weighted by impact on productivity.
  3. Only 5% of the functional difference between each pair of tools is assumed to actually impact productivity; that number is multiplied by the labor costs of your users to determine the productivity savings.
  4. Installation (if any), set-up, licensing, and training costs—including labor hours of administrators and trainees—are compared for each pair of tools.
  5. The total cost difference is subtracted from the productivity savings for each pair of tools to determine the net savings of the better tool in each pair.
  6. The tool providing the highest net savings relative to the other tool(s) is considered the winner.
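
If it helps to see that logic in code form, here is a minimal sketch of the pairwise comparison. The ratings, costs, and labor figure are made-up assumptions for illustration, and the names are mine, not the spreadsheet’s; the spreadsheet performs the equivalent arithmetic for you.

```python
from itertools import combinations

# Illustrative inputs only; real values come from the rating worksheets
# and the "Costs" tab. Ratings are weighted totals; costs are 3-year totals.
tools = {
    "A": {"rating": 150, "cost": 20_000},
    "B": {"rating": 180, "cost": 35_000},
    "C": {"rating": 170, "cost": 25_000},
}
LABOR_COST_3YR = 6_000_000  # assumed 3-year labor cost of all tool users

for x, y in combinations(tools, 2):
    # The higher-rated tool of the pair is the candidate "winner."
    hi, lo = (x, y) if tools[x]["rating"] >= tools[y]["rating"] else (y, x)
    # Only 5% of the functional difference is assumed to affect productivity.
    productivity_gain = (tools[hi]["rating"] / tools[lo]["rating"]) * 0.05
    labor_savings = productivity_gain * LABOR_COST_3YR
    extra_cost = tools[hi]["cost"] - tools[lo]["cost"]
    print(f"{hi} over {lo}: net savings ${labor_savings - extra_cost:,.0f}")

# The tool that wins both of its pairings provides the highest net savings.
```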

Seek Raters

Recruit as many raters as you can. The more you get, the more advocates you will have for the final decision. Try to recruit a variety of roles: potential Customers, Product Owners, Scrum Masters, other team members, and so on. If you are facing partisanship toward a tool, recruit roughly even numbers of partisans for each tool and nonpartisans. Work with the raters to determine how long they need for the effort, but do not let them prolong it; a month is a reasonable time. Tell people that if they want their input considered, they must finish by that date.

Please download the “Tracker Comparison” spreadsheet now and take a look. It is an older Microsoft Excel file and was also tested in OpenOffice 4. Some cells will show error codes until data is entered. The process starts with the tab called “RateStart.” The “Tracker Rating” spreadsheet is a copy of this tab (or “worksheet”). You will give a copy of that file to each rater and, when the raters are done, place their copies back into your master spreadsheet between the “RateStart” and “RateEnd” tabs. All steps are detailed below.

The file has been protected to prevent changes except in the data cells and the three “Integration” cells (explained below). I don’t recommend this, but if you feel compelled to make changes, “unlock” is the password for each tab. Depending on what changes you make, you will likely need to update formulas on various tabs for correct calculations.

However, I recommend you resist requests to make changes, such as adding or deleting line items in the ratings form. It was created using graduate-level training in survey construction and deep experience with multiple tools at multiple companies. Critically, it focuses on what is important in FuSS™. Every time I have done this kind of exercise, partisans of a particular tool have tried to add items that had no significant impact on productivity, would have biased the form toward their option of choice, or ran counter to FuSS practices. I don’t know what tools you are comparing, and won’t be personally affected by the results, so by definition the form is nonpartisan and fully applicable to FuSS.

A few changes are legitimate and easy to make. As part of the overall process, you should have decided by this point whether to track defects in the Agile tool, or in a bug tool already in use in your organization. If using a current tool, there is a section of the rating worksheet you will tell the raters to skip. Also notice the “Integrations with…” lines at the bottom of the ratings form. This refers to the ability of the tracker to interact with other tools your organization uses. For example, an existing defect tool might be able to create defects in the Agile tracker and vice versa. Add into both files the names of up to three tools in which you are interested (those cells are unlocked).

The instructions on the form warn raters to check the tool’s documentation and Web site if the trial version does not appear to have a feature, including integrations. The feature might only be in the full version, or your administrator may need to enable it. Guide raters to base their evaluations on what you learn about the full capability rather than what the trial version allows.

Weighing the Results

On the “Totals” tab are weights based on my assessment of how much the feature can impact individual and/or team productivity:

  • 1.50—Better design adds significant value for a significant number of people on the team (usually, something that will speed up ceremonies).
  • 1.25—Better design adds some value for a significant number of people, or significant value for a small number of people.
  • 1.00—Basic function any tracker should have.
  • 0.75—“Nice-to-have” function that does not add much value relative to other items.

These are not included on the rating form so that raters will not subconsciously bias their raw numbers—for example, by giving higher values than they normally would to their preferred tools on high-weight items. As explained on the ratings worksheets, each rater will evaluate each line item on a 0–3 scale. After you copy each rater’s worksheet back into your master spreadsheet, it will automatically total the raw ratings of all raters and multiply those totals by the weights, like this:

In the example, “A” through “C” are three tracker options. The “AA” through “AC” columns are the weighted, averaged ratings. This is done with a function that multiplies the weight by the total rating and divides the result by the number of raters (two in this example, entered by the facilitator at “Raters”). It is quite possible for an option to be higher in the raw totals but lower in the weighted ratings.
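
As a minimal sketch of that step, assuming one line item, two raters, and made-up scores (the cell formulas do the same thing):

```python
# Weighted, averaged rating for one line item (illustrative numbers only).
weight = 1.25          # impact weight from the "Totals" tab
raters = 2             # entered by the facilitator at "Raters"
raw_ratings = [3, 2]   # each rater scores the item on the 0-3 scale

weighted_average = weight * sum(raw_ratings) / raters
print(weighted_average)  # -> 3.125
```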

Steps

Provide Copies

  1. Recruit as many raters as you can from among potential tool users, ensuring a balance if some people already support one tool.
  2. Provide editing rights in each tracker for each rater.
  3. Negotiate with the raters a deadline for completing the comparison, no more than a month out.
  4. Download the “Tracker Comparison” and “Tracker Rating” spreadsheets.
  5. Add up to three desired integrations at the bottom of the:
    1. “RateStart” tab in “Comparison.”
    2. “Ratings” tab in “Rating.”
  6. Save a copy of the “Tracker Rating” file for each rater.
  7. Decide which tools are A, B, and C.
    Note: If only comparing two tools, ignore all references to “C” below, and tell raters to leave that column blank.
  8. Send an e-mail to each rater with:
    • A copy of the ratings file, pointing out that instructions are on the form.
    • Reminders of:
      • Instructions for accessing the tools.
      • The due date.
      • Which tools are A, B, and C.
      • Whether to complete the “Testing” section—that is, whether you decided to use the tracker for defects, too.
  9. If you are one of the raters, you may use either the “RateStart” or “RateEnd” tab in your master spreadsheet.

Gather Information

Add Product Costs

While waiting for the ratings, find and enter other information into the spreadsheet on the “Costs” tab, at the row shown in each step, being careful to put each figure in the correct tool column (A, B, or C):

  1. Enter total “First-Year Charges” in Row 3, such as:
    1. Setup fees for a SaaS tool.
    2. For a tool you host internally:
      1. Purchasing costs of equipment needed.
      2. Fixed upfront costs—initial charges not covered by license fees, or by annual service fees per Step 6.
      3. IT Department chargebacks.
  2. Enter the number of hours you estimate would be needed for initial installation and configuration of each tool—for “back-end” setup—in Row 4.
  3. Enter a labor rate for the people doing the installation, if any, in Row 5.
  4. Enter the cost per license (or “seat”) per month, in Row 8.
  5. Enter the total number of license holders you expect to have by the end of Year 1, including team members, Guidance Roles, and stakeholders who will have viewer rights, in the first cell of Row 9.
  6. If a tool must be hosted by you or your IT Department (is not Software-as-a-Service):
    1. Create or obtain an estimate of the number of hours required each year to support similar tools on the back end, and enter it in Row 10.
    2. Enter a labor rate or the amount IT charges back to your budget per hour, in Row 11.
      Note: These figures are back-end costs. They do not relate to time spent by you or other tool administrators to do interface (“front-end”) configurations or provide internal support to users. Some of those activities are covered by the ratings, and the rest will be relatively similar across tools.
  7. Enter any vendor costs beyond licensing, such as upgrade rights or technical support fees, over the first three years, in Row 13.

Add Training Costs

  1. If you might purchase training from the vendor, get a cost estimate and enter it in Row 16.
  2. Enter how many hours the basic user training will last, whether provided by the vendor or by someone in your organization, in Row 18.
  3. If additional training on advanced skills will be provided to some users, usually Product Owners and Scrum Masters at minimum:
    1. Enter the number of advanced users in the first cell of Row 19.
    2. Enter the length of the additional training in hours in Row 20.
  4. Enter the average labor rate in the first cell of Row 22.
    Note: See “Plan a Version Release” for how to determine a rate.

Compile Results

  1. On the “Totals” tab, by “Raters” (in cell P4), enter the number of raters.
    Note: This allows the spreadsheet to average the rating results.
  2. As rater copies are returned, add their worksheets into your original spreadsheet file between the “RateStart” and “RateEnd” tabs.
    Important: If you do not place them between those worksheets, the automatic calculations will not work.
  3. On the “Analysis” or “Summary” tabs, view the:
    • Highest-ROI tool on Row 17—if comparing three options, the one whose letter appears in two of the three cells.
    • That tool’s annual ROI relative to each lower-ranked tool on Row 16.
  4. Go to “Prepare the Digital Tracker for Trial” to continue the decision process.

Example for copying worksheets in Microsoft Excel 2007:

  1. Open your master spreadsheet and the rater’s file.
  2. In the rater’s copy, right-click the tab with the ratings.
  3. Click: “Move or copy…”
  4. In the box that opens, at “To book,” select your master file.
  5. At “Before sheet,” select: “RateEnd.”
  6. At the bottom, check: “Create a copy.”
  7. Click: “OK.”

Results Example

The columns shown below compare A to B, B to C, and A to C, in that order. The “Winner” row says “A, C, C,” so C provides a higher ROI than both A and B. The fact that A is higher than B is irrelevant. The row above shows that C provides a 14% higher annual return on investment than A, and 6% higher than B:

Agile Tracking Tool Comparison Results

Note that because of the way the calculations are structured, if the first option in a pair is the “Winner,” the “Net Savings” number will be negative, as a text box on the tab explains. In the example, “C” will save the organization at least $101,477 over three years. Show stakeholders the “Summary” worksheet instead, because it removes the irrelevant minus signs:

Tracker Results Summary

Some math-savvy reviewers may gripe that “the numbers don’t add up.” That is, if A were higher than B and B were higher than C, you could not add the AB gap to the BC gap and get the AC gap (AB + BC ≠ AC). That is because each pair’s Net Savings is derived from two factors specific to that pair, the rating ratio and the cost difference, so the figures do not chain across pairs.
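
A quick sketch, with made-up ratings and costs, shows the non-additivity numerically:

```python
# Why pairwise Net Savings gaps don't add up (illustrative numbers only).
def net(hi_rating, lo_rating, hi_cost, lo_cost, labor_3yr=6_000_000):
    # 5% of the rating ratio drives the labor savings for the pair.
    return (hi_rating / lo_rating) * 0.05 * labor_3yr - (hi_cost - lo_cost)

rating = {"A": 150, "B": 140, "C": 120}
cost = {"A": 30_000, "B": 25_000, "C": 20_000}

ab = net(rating["A"], rating["B"], cost["A"], cost["B"])
bc = net(rating["B"], rating["C"], cost["B"], cost["C"])
ac = net(rating["A"], rating["C"], cost["A"], cost["C"])
print(f"{ab + bc:,.0f} != {ac:,.0f}")  # -> 661,429 != 365,000
```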

Example Agile Tracking Tool CBA

Costs and Ratings

Let’s walk through the forms for a comparison done by a small startup that has outgrown free tools on the Web. With limited funds, they prefer a seemingly much cheaper tool they could install themselves (A), but decide it is worth investing the time to compare it to one high-end tool (B), a SaaS with few upfront costs. Here are the labor costs:

  • Standard labor rate: $50 per hour
  • People: 20 (in three teams, including the company founders)
  • Payroll (20 x $50 x 2,073 hours[1]): $2,073,000.

I’ve made up some cost numbers based on my experience doing these comparisons. We’ll assume the team’s rating effort results in a modest difference between the two in weighted functionality: 198 versus 112 points, about 77% higher for B. That may seem big, but in reality I have found the differences to be much larger.

The forms below mimic the spreadsheet, but include details of the math for those trying to understand the logic. The raters are finished, and the facilitator has added their worksheets back into his or her master spreadsheet file, along with the other information required. The spreadsheet has automatically calculated the remaining cells. Since the company is only comparing two tools, the “C” column remains blank.

We start with costs:

Costs A B C
Initial Costs
1) Enter First-Year Charges such as purchase costs and initial support. $5,500[2] $1,200 $
2) Enter estimated labor hours for installation and initial configuration. 60[3] 12
3) Enter Labor Rate for installers. $50 $50
4) Multiply #2 by #3. $3,000 $600 $
5) Add #1 and #4 for Initial Costs. $8,500 $1,800 $
Ongoing Costs
1) Enter license cost per month. $6 $25 $
2) Enter number of licenses. 20 20
3) Multiply #1 by #2. $120 $500 $
4) Multiply #3 by 36 months for Licensing Costs for 3 years. $4,320 $18,000 $
5) Enter Upgrade and Support Costs for the two remaining years. $6,000[4] $10,000[5] $
6) Add #4 and #5 for three-year Ongoing Costs. $10,320 $28,000 $
Training Costs
1) Enter costs for vendor-supplied Training.[6] $1,200 $1,200 $
2) Enter number of hours for basic user training (hours per person x number of people). Note: Include SMs, POs, and stakeholders. 20 20
3) Enter number of hours for Guidance Role training (hours per person x number of people in those roles). 6 6[7]
4) Add #2 and #3 for Training Hours. 26 26
5) Multiply #4 by the Labor Rate (from “Initial Costs” above) for Training Labor Costs. $1,300 $1,300 $
6) Add #1 and #5 for Training Costs. $2,500 $2,500 $
Total Tool Costs
Add Initial Costs, Ongoing Costs, and Training Costs for Tool Costs (the last rows in each table section above). $21,320 $32,300 $
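
To make the table’s arithmetic explicit, here is a small sketch that recomputes the three-year Tool Costs from the same inputs. The function and parameter names are mine, not the spreadsheet’s; the results match the A and B columns above.

```python
# Recompute the example's 3-year Tool Costs (inputs from the table above).
def tool_cost(first_year, install_hours, install_rate,
              license_per_month, licenses, upgrade_support,
              vendor_training, training_hours, labor_rate):
    initial = first_year + install_hours * install_rate
    ongoing = license_per_month * licenses * 36 + upgrade_support
    training = vendor_training + training_hours * labor_rate
    return initial + ongoing + training

a = tool_cost(5_500, 60, 50, 6, 20, 6_000, 1_200, 26, 50)
b = tool_cost(1_200, 12, 50, 25, 20, 10_000, 1_200, 26, 50)
print(a, b)  # -> 21320 32300
```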

Benefits

Now we can look at the calculated benefits, which require only the first column because this example compares just two tools:

Analysis A/B
Tool Cost Difference
1) For each pair of options, find the higher Weighted Rating Total (from the prior table) and enter that option letter. B
2) Enter that option’s Tool Cost. $32,300
3) Enter the Tool Cost of the other (lower-rated) option. $21,320
4) Subtract #3 from #2 for the Total Cost Difference. Note: This can be a negative number. $10,980
Productivity Gain
1) From #1 above, repeat the option with the higher Weighted Rating Total. B
2) Enter that option’s Weighted Rating Total. 198
3) Enter the other (lower-rated) option’s Weighted Rating Total. 112
4) Divide #2 by #3. 1.77
5) Multiply #4 by 100 for the Functional Gain. 177%
6) Multiply #5 by 0.05 (5%) for the Productivity Gain. 8.85%
Labor Cost
1) Enter the Labor Rate from the Costs table above. $50
2) Enter the number of users from the Training Costs calculation above. 20
3) Multiply #1 by #2. $1,000
4) Multiply #3 by 2,073 for the Annual Labor Cost. $2,073,000
5) Multiply #4 by 3 for the 3-Year Labor Cost. $6,219,000
Savings
1) Multiply the Productivity Gain percentage by the 3-Year Labor Cost (#5 above) for the Labor Savings. $550,382
2) Subtract the Tool Cost of the higher-rated tool from that of the lower-rated tool, i.e., the Total Cost Difference (from above) with its sign reversed. Note: Can be negative. −$10,980
3) Add #2 and #1 for the Net Savings. $539,402
Return on Investment (ROI)
1) Divide Savings #3 by Savings #2, ignoring the sign. 49.1
2) Multiply #1 by 100 for the 3-Year ROI. 491%
3) Divide #2 by 3 for the Annual ROI. 164%
Record the name of the higher-ranked option. SaaS
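
For readers checking the math, this sketch walks the same chain from ratings to Net Savings, rounding where the worked example rounds. The variable names are mine, not the spreadsheet’s:

```python
# Recompute the A/B analysis above, rounding where the example rounds.
rating_hi, rating_lo = 198, 112        # B's and A's Weighted Rating Totals
cost_hi, cost_lo = 32_300, 21_320      # B's and A's Tool Costs
labor_3yr = 50 * 20 * 2_073 * 3        # rate x users x hours/year x 3 years

functional_gain = round(rating_hi / rating_lo, 2)   # 1.77
productivity_gain = functional_gain * 0.05          # 0.0885 (8.85%)
labor_savings = productivity_gain * labor_3yr       # 550,381.50
cost_difference = cost_hi - cost_lo                 # 10,980
net_savings = labor_savings - cost_difference       # 539,401.50
print(f"Net Savings: ${net_savings:,.0f}")          # -> Net Savings: $539,402
print(f"Ratio to cost: {net_savings / cost_difference:.1f}")  # -> 49.1
```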

Analysis

The SaaS option is roughly 50% more expensive over three years ($32,300 vs. $21,320). Anyone who stopped there would choose the cheaper tool, and lose big money. By investing an additional $10,980 (the Total Cost Difference) over three years for the SaaS, you get back at least $539,402 in labor hours. Any product manager will tell you that a 164% return on investment (ROI) per year is impressive. Imagine if your retirement fund increased by 164% every year!

In case you think I fudged the numbers to make the upscale option look better, this result is actually smaller than in one real-world CBA I did, where the winning option gave a 283% annual ROI! Notice that the form makes a conservative estimate of the benefits by assuming only 5% of the functional difference between two trackers would translate into higher productivity, and thus into financial benefits to the company. Obviously, the actual number could be much higher.

My advice? If all you can afford is a cheap digital tool, it is a better option than a paper tracker. That said, don’t “cheap out.” Aim for the best value. As with anything else you buy mainly because it is much cheaper than the alternatives, you may lose money in the long run.


[1] A rule-of-thumb figure for the number of labor hours in a year.

[2] Includes servers.

[3] Two weeks for one person working six-hour days; includes server reconfiguration, tool installation and testing, and initial tool configuration.

[4] Assumes five hours per month for server support (5 hours x 24 months x $50/hour).

[5] Optional contract for general Agile consulting services; first year free, then $5,000 per year.

[6] In this example, the same vendor can supply training for either tool at the same cost.

[7] Two-hour training for three POs and SMs.