- Spreading the Benefits to Everyone
- Structuring for Agile
- Agile Budgeting
- Agile Contracts
- Making Gates Agile
- Manufacturing and Supply Chain Agility
- Other Recommended Agile Enterprise Practices
Taken together, some concepts I have been reiterating throughout this site lead to what may be a surprising conclusion. Here’s a quick review:
- Projects involving unknowns do not fit waterfall.
- Capacity planning prevents overwork and burnout, benefiting workers and the company.
- Sprint planning provides high short-term predictability.
- Self-organizing teams outperform traditionally led teams.
- Agile fits the psychology of (any) humans working in small groups better than other work management models.
Now review this definition from the Project Management Body of Knowledge (PMBOK): “A project is a temporary endeavor undertaken to create a unique product, service, or result.” A project also has a definite start and end, the PMBOK says, though the output is usually long-lived.
Expand your thinking across your company, government agency, nonprofit organization…whatever your “enterprise” is. How many groups in that enterprise spend part of their time doing projects defined that way? Ignore the nature of their work. Include temporary efforts like continuous improvement. Now how many groups?
The answer is, “Almost all of them.” Finance professionals perform monthly and quarterly closings, each a “unique… result.” Human Resources departments don’t think of themselves as doing projects, but each hiring, termination, performance appraisal season, and investigation of misconduct fits the PMBOK definition. Every continuous improvement effort is a project: quality circles in Manufacturing do nothing but projects! Any large document such as a report is a project, as is changing the physical layout of an industrial plant.
In most of these groups, people must cooperate to some degree to create the output, and there are at least “known unknowns.” I’ve observed or read studies about enough top leadership teams to state firmly that every one of them would get the same benefits of Scrum as any software team. Janitorial services, security guards, Customer Support and most others that do not do projectized work can benefit from Kanban.
The most obvious benefit to the organization is elimination of Scrum Pox. When top executives and all stakeholders are Agile, interface conflicts go away. In fact, the Agile Liaison role goes away, too. Instead the Customers and Product Owners in each organization would merely work together to coordinate dependent requirements.
After direct comparison of agile manufacturing (AM) and Agile software development, a journal article concludes, “the current agile software development models could be extended towards higher organizational levels and even to enterprise-wide value chains following the AM principles. This leads to holistic total-company systems thinking where agile capabilities provide sustainable competitive advantages building in part both on the software product development functions as well as on the related manufacturing operations of the (new product development) company.”
This part of the site puts Full Stack Scrum™ (FuSS™) into the context of enterprise-wide change that will not only bring greater internal and external happiness within the company by eliminating Scrum Pox, but most likely greater financial success as well. It shows how changes not typically addressed in Agile transformations can extend the philosophy side-to-side, and more critically, top-to-bottom throughout an enterprise. There are no step sets because I am not an expert in implementing these ideas. Those experts are already in your Finance, HR, Legal, Manufacturing, and Procurement departments, if they are willing to learn. I urge you to take whatever action you can, given your place in the company, to evangelize these changes and help to implement them.
I don’t know why ineffective management practices persist from generation to generation. One study I read suggests the primary reason is that leaders tend to fall back on what their bosses did—even if they disagree with the technique. Presumably this is because they feel overwhelmed with daily challenges. Seeking and trying alternatives takes time. As a result, they seek piecemeal fixes rather than accepting that the entire structure is antiquated and unsound. It doesn’t help that much of the advice on the Web and in management books makes little reference to scientific data on business operations. Most teambuilding companies are so out of touch with psychological reality that their claims border on fraud.
For example, researcher and TED speaker Dr. Dan Ariely has said, “The biggest lesson from psychology from the last 50 years is that personality matters very little.” Environment is the key driver of behavior, in this case meaning corporate culture. A study considering why project teams tend to use the same organizational structures, despite significant differences between their projects, concludes, “Three major themes started to emerge… ‘top-down prescription,’ ‘imitation,’ and ‘common background of managers’” (Miterev, Engwall & Jerbrant 2017). Yet many executives and those who mislead them for money continue to focus on personality tests instead of changing the environment.
All humans make decisions for seemingly logical reasons that psychologists consider literally irrational. To cite just one example from a list of 19 I carry around to jobs, we humans tend to adjust only so far from a previous estimate, even if the circumstances that led to that estimate have changed significantly (psychologists call this “anchoring”). I’ve already noted the proven tendency to underestimate, and you can throw in our foibles around, among other filters:
- Procrastination even when logic supports immediate action.
- The assumption that what we’ve personally experienced reflects the larger pattern.
- The tendency to focus on what is important to us regardless of that factor’s level of objective importance.
- Our preference for information that supports our existing opinions.
Top executives, like everyone else, can’t be experts in everything. They have to rely on internal advisors, but outdated advice is another issue. I’ll use HR as just one example. I have been arguing for many of the concepts in this “Structuring for Agile” section since the early days of my work with self-directed teams. All have strong scientific backing. Speaking as a former member of the Society for Human Resource Management (SHRM), I know there are strategic thinkers in that field pushing similar changes, and HR people are as smart and dedicated as those anywhere in any company. Yet for more than 20 years, I have run into more resistance than assistance from HR.
Those HR reps may be blameless. Most of the Web postings on Agile from the HR perspective only date to 2015. The first time I saw Agile addressed in relation to HR in a presentation, at either SHRM or Agile gatherings, was in 2017. The speaker, HR consultant Fabiola Lyholzer, confirmed something I had long sensed: Finance and Legal shackle HR policies. Because of this, her slides said, “HR instruments are driven to handle poor performance and mediocrity… (and) must make up for weak managers and uninspiring leaders, costing companies billions.” Therefore, she said, “HR practices work the way they were intended,” but the needs they were intended to address are not the strategic needs HR should be filling.
Add in that executives are likely getting bonuses based on unchanging financial targets within the only budgeting system they have witnessed their entire careers, driven by Finance professionals trained and mentored to run things that one way. As you will read under “Annual Budget Games,” that is not the only or the best way, but it is the predominant system.
The logical conclusion is, you will need to be understanding and patient. The persuasion effort will take considerable time.
Elsewhere on the site, I explained that Scrum teams are fully cross-functional, able to deliver a releasable product. Unfortunately, most companies still organize in function-based “silos.” That is, not only are hardware and software engineers in separate groups, within software alone the user interface developers and the people who connect the UI to the Internet may report to different managers. Often they are on different teams.
I find this bizarre. The damaging effects of silos are so well documented that the term is instantly recognizable in business discussions around the globe as a bad thing. Despite the best intentions to coordinate:
- The silos almost invariably plan out of sync in practice, such that one team is waiting on another before it can continue its work.
- Individual members assigned to cross-functional teams understandably give more weight to their appraisal-writing functional managers’ wants than to the team’s needs.
- Some defects are difficult to find until the silos’ outputs are combined and tested as a system, which amounts to escaped defects as defined earlier, with all the attendant costs.
- The needs of other non-development organizations are usually forgotten, such that documenters and customer support managers block planned releases because they lack what their teams need to finish their work.
- The “Blame Game” becomes pandemic as each silo blames the others for the release failures that are likely in a silo environment.
Only momentum and lack of will prevent a shift to a structure that reflects and supports cross-functional, full stack teams… and a lack of information. Let’s fix that next!
These teams are best supported by what I call an “Empowered Matrix” environment. This model is based on standard organizational development concepts in which each line worker reports to two bosses: a “solid line” of reporting to the person who is their formal boss, plus a “dotted line” to the person who actually directs their daily work. The classic example is a project team, where each member of the team gets team assignments and performance appraisals from their functional manager but receives work orders from the project manager. Most managers I talk with about this think it is the only kind of “matrix” organization, but it is actually called a “weak” matrix because the person directing the work has little authority:
In a “strong” matrix organization, the lines of reporting switch. That is, the project manager is the formal boss, but each person also follows guidance related to each function from the functional managers:
Notice the dotted lines are now vertical, and the solid one horizontal. This has long been the recommended form for companies that do “projectized” work, for the obvious reason that it makes each project member accountable to the project. In this case, common sense aligns with the psychology research on motivation: Each person puts the project first, maximizing productivity toward the effort that makes the company its money.
An “Empowered Matrix” organization eliminates the project manager, because in Agile the teams are self-directed, and project management is covered by the sprint and release planning processes. The teams report as a whole to the business unit manager (who might be in charge of each of the functional managers, too):
This may seem like a lot of overhead for that manager, but recall that he or she has no direct role in planning or managing the work of the team members, and tracking progress is as easy as checking the tracker. We will discuss later how to ease the burden of formal performance appraisals, if your organization requires those. In effect, each team is just one additional “direct report.”
Convincing the Top and Sides
Moving to any matrix starts at the top, though with critical conversations on every side. The business unit manager (usually a vice president or higher) needs to have bought into Agile. The whether-to-matrix discussion may be more difficult. You are asking for a lot more time and effort from executives. Given that they will need to rely on advice from the Finance and Human Resources departments, and those people often have not even heard of Agile… well, don’t underestimate the challenge!
All of the affected managers will have to be brought into the discussion. Refer back to “Gain Buy-in” and repeat the steps, now to make the case for eliminating silos. They will have many questions that I try to answer for you in the rest of this topic.
Who Reports to Whom
Ideally, the teams report as units to a “skip-level” middle manager. By that I mean instead of having a “team leader” or “line manager,” all members of the team report to the manager that team leader or line manager would report to. This will seem daunting to middle managers in companies with outdated but typical performance evaluation systems. Combined with changes to how performance data is gathered, discussed later, the burden actually balances out and can even go down under FuSS. In effect, supervising five teams can prove little more challenging than supervising five individuals.
Part of the reason is because unlike traditional team managers, the Empowered Matrix supervisor does not:
- Have daily involvement with team activities.
- Take team-level tasks, at least on a regular basis.
- Define the team’s workload.
- Assign work.
- Oversee progress in depth.
- Track performance.
- Fix project problems.
The FuSS processes for portfolio management, release planning if used, and Sprint Planning cover the first four bullets, and the tracker provides the next two. Any of the various roles in the system can fix problems, and who does so is clear once the problem is defined.
Before an executive rejects the concept of treating a group of people like an individual, have them do some reflection. Corporate executives are fierce defenders of treating entire corporations as individuals under U.S. law! If that executive thinks the concept valid for a group of 100,000 workers, it surely must be for a group of five or 12.
What Managers Do in Scrum
The role of the traditional line manager is one of the biggest problems with the attempt to run Scrum teams within a waterfall company instead of changing the entire organization. Take, for instance, the standard waterfall software organization. There will be teams of testers with their own managers awaiting a handoff of work from the team of developers on one project while (hopefully) working on a previous handoff from another project. Besides the usual personnel functions of hiring, performance evaluation, training, and budgeting, the manager will be assigning people to projects fluidly and sometimes even assigning specific testing tasks within each project. He or she will also oversee the work to ensure it is getting done right and on time.
In Scrum, each software team has one or two testers who work only on that team’s project. The testers self-assign tasks with input from the team. Their managers don’t need to know what specific tasks someone has, much less assign them. Team members oversee each other’s work and arrange their own cross-training. The Scrum Master has primary responsibility for coaching members. In the most advanced, truly self-directed Scrum organizations, the manager has light involvement in hiring or performance appraisals—the team does the heavy lifting on those as well!
In short, functional managers, meaning those responsible for a particular discipline or skill set, have much less to do in Agile organizations. In turn, that can mean the company needs fewer managers. In fact, for both companies and employees, reducing manager head count is a possible benefit of Scrum, as mentioned elsewhere. The company saves money, and the line employees have fewer management levels between them and the top.
This issue did not start with Agile. I addressed it in the earliest version of The SuddenTeams™ Program when it was just my consulting practice training manual, a year before Schwaber and Beedle’s seminal Scrum book came out. The issue is related to the nature of self-directed teams. In a matrixed organization where people in different disciplines work together instead of in silos by function, fewer functional managers will be needed. When the work teams are self-directed, you might eliminate one full-time manager slot per team.
In my years of consulting experience, I found this to be the single biggest obstacle to managers agreeing to self-direction. It reduced the job security for the managers in the best position to implement this approach. Because self-directed teams are more productive and less costly than leader-led teams, it was good for the company, but not for the team leaders!
In Agile the role of the functional manager in daily actions of the team ends with the assignment of their personnel to the program. As in any matrixed organization, the manager retains responsibility for filling general personnel needs of the company within the manager’s discipline, general orientation, and assigning employees to teams (where teams don’t do the hiring themselves). They will also have responsibility for enterprise-wide training, technical tools, and technical processes related to that discipline, though in Scrum each team must be allowed as much latitude as possible for its own processes. In companies that use performance evaluations, the functional manager is handed most of the input on direct reports from the tracker and from other members of the teams instead of through direct observation.
That’s not enough to keep one person per team busy. Upper managers implementing Scrum will need to find other ways for those individuals to contribute. Within Agile, the line manager who is capable of giving up directive power might make a fine Architect, Product Owner, or multi-team Scrum Master. One of my best Agile Release Managers was officially titled, “Software Engineering Manager.” Should the manager take the PO or SM role, however, it is critical for mutual accountability that the former team manager give up performance evaluations of the other members.
In the last incarnation of The SuddenTeams Program, I listed these suggestions for managers finding themselves with more time on their hands (slightly modified here for Agile):
- Process expert—Map, review, and champion improvements to processes that cross team or business unit boundaries.
- Quality champion—Become an expert and lead efforts in quality techniques such as Continuous Quality Improvement or Six Sigma.
- Technical consultant—Go back to doing what you love about your industry. You can spend some of your extra time getting yourself caught up and becoming the company expert in those technical areas.
- Special projects champion—Take on the company tasks that need to be done but no one else has time to do.
- Agile champion—Help other managers learn about and adopt Agile techniques.
- Customer champion—Lead efforts to better understand your unit’s—or the company’s—customers and how to satisfy their needs.
- Learner—Learn new skills for your own career development or to help the team or company.
- Systems thinker—Conduct research to understand “the organization as a system and… the internal and external forces driving change,” per the classic work on “learning organizations,” The Fifth Discipline. Then develop methods to help “managers throughout the organization come to understand these trends and forces.”
Asking for “We,” Paying for “Me”
I could fill a book with the scientific arguments and data against annual review processes. Fortunately, others have already done so. If your company is realistic enough to have dumped them already, skip to “Rewarding Teamwork.” The unfortunate majority of you, keep reading.
As detailed under “Performance Evaluations” in The SuddenTeams Program, the typical performance evaluation process drives a focus on “me” instead of “us,” especially if the results are tied to bonuses or raises or, in the face of layoffs, keeping one’s job. No matter how much a company or manager stresses the importance of “working as a team,” if direct or indirect financial incentives emphasize individual performance, the individual is going to focus on what’s best for him or her. Hence I recommend in SuddenTeams a scheme that “games” the performance evaluation system to place the emphasis on team results and an individual’s teamwork behaviors.
I will skip the detailed evidence there and summarize to say that reinforcing a team focus is best achieved through inclusion of measurable achievements by the team and 360-degree performance evaluations of individual behaviors. These will drive a host of psychological changes that improve team performance, such as:
- Commitments to the team (and therefore the team’s customer) becoming more motivating to the individual than commitments to their functional manager.
- Improvements to teamwork skills such as sharing knowledge and communicating earlier about potential problems.
- An increase in what scientists call “organizational citizenship behaviors,” basically actions that help the team, but are not part of the person’s official job description or sprint commitments.
- The same control of negative social behaviors with team members that the person shows with their boss.
Rating Team Results
Per SuddenTeams, at least a third to a half of an individual’s performance rating should come from team results like these:
- Delivery of 100% of planned sprint stories most sprints.
- Delivery of 80% of planned epics to which the team contributes in most releases.
- One or fewer escaped defects per release.
- Improvement in customer-satisfaction ratings regarding the team’s product, year over year.
Unlike most annual review standards I have seen, notice these are tied to specific numbers. Scientifically valid results get the same ratings regardless of who does the rating. The numbers must also apply to everyone being compared against each other, and account for environmental or market changes beyond the individual’s control. Otherwise they are inherently unfair.
Let’s take a typical schema that says someone, “Does Not Meet,” “Meets,” or “Exceeds,” standards. All too often the company does not specify what those standards are, or they aren’t measurable, or they only provide a binary “yes-no” measurement, despite the fact that three levels imply a three-part measurement. Here’s how you could measure the four bullet items above for a team doing four releases a year:
| Standard | Does Not Meet | Meets | Exceeds |
|---|---|---|---|
| 100% Sprint Delivery | <80% of sprints | 80–95% of sprints | >95% of sprints |
| 80% Release Delivery | <3 of 4 releases | 3 of 4 releases | 4 of 4 releases |
| Escaped Defects | >3 (year) | 2 or 3 (year) | 0 or 1 (year) |
| Customer Satisfaction | <10% improvement | 10–20% improvement | >20% improvement |
Notice some themes:
- Every standard is objectively measurable. Assuming you have valid means for measuring each, and you communicated the standard and means of measurement to everyone at the start of the year, no one can argue against the rating.
- The standard is a realistic range. I can’t believe how many times I have seen a single number in a standard, meaning you have to hit that exact figure. I have seen teams hit each “Exceeds” standard, though none have hit all of them in the same year.
- Most don’t demand perfection, because humans aren’t perfect. Setting impossible goals is an instant de-motivator: Because the person knows the goal can’t be achieved, they won’t bother trying (or more likely, will figure out a way to “game” the system).
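To make the scale concrete, here is a minimal Python sketch of scoring two of the standards above from tracker data. The function names and exact threshold boundaries are my illustration, not part of any FuSS tooling:

```python
# Illustrative only: map measured team results to the three-level scale.
# Thresholds mirror the example table for a team doing four releases a year.

def rate_sprint_delivery(pct_sprints_fully_delivered: float) -> str:
    """Percent of sprints in which 100% of planned stories shipped."""
    if pct_sprints_fully_delivered > 95:
        return "Exceeds"
    if pct_sprints_fully_delivered >= 80:
        return "Meets"
    return "Does Not Meet"

def rate_escaped_defects(defects_per_year: int) -> str:
    """Escaped defects found after release, totaled over the year."""
    if defects_per_year <= 1:
        return "Exceeds"
    if defects_per_year <= 3:
        return "Meets"
    return "Does Not Meet"

print(rate_sprint_delivery(96))   # Exceeds
print(rate_escaped_defects(4))    # Does Not Meet
```

The point of automating the lookup is the same as the point of the table: once the thresholds are published at the start of the year, no judgment call enters the rating.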
Sadly, I haven’t succeeded in getting a client to include customer satisfaction in a team’s results, which is ridiculous given that it is a critical factor in sales growth and the first of the Agile Manifesto’s principles.
360° Performance Evaluations
These evaluations are surveys asking the same questions about each individual, filled out by each of the individual’s team members, their supervisors, and potentially a few stakeholders the individual works with directly. Here are some examples adapted from The SuddenTeams Program:
Most of the numbers are used as inputs in place of supervisor judgments in the performance evaluation system. The last scale above about blaming would instead be used to assess the team’s performance on the related Agile Performance Standard. Granted, the members could collude on the scoring, but so can a manager and each of that manager’s favorites on the team in the traditional system. Plus, collusion would look pretty obvious in the patterns of responses, and the manager’s and outsider’s ratings would provide perspective.
Be aware there is an entire science around the development of surveys. It is surprisingly difficult to create surveys that actually measure what you are trying to measure and get the same answers each time they are taken. I was lucky enough to take a course on research methods in graduate school from someone who wrote a textbook on the topic. In that class I helped create a statewide political poll to learn the principles. Unless you had similar luck, consult a survey professional from a local university. The money you spend will pay off in morale: employees will find the surveys easier to fill out and will perceive the system as fairer. If the evaluations are tied to pay, the system will likely be easier to defend if you get sued. In short, as car commercials often say in the fine print: “Professional driver. Do not attempt.”
After reviewing research on performance management as relates to teamwork, I recommended in SuddenTeams this breakdown of the overall rating:
- “Team performance (1/3 to 1/2)
- “Performance as team member (1/4 to 1/3)
- “Performance as individual (1/4 to 1/2)”
“To support teamwork,” I concluded, “at least half of the evaluation should be based on the first one or two items. For example, if your system awards points on 10 evaluation items, 3–5 of those items should relate to team performance… 2–3 to the individual’s teamwork, like ‘Coordinates his/her work with teammates’; and 2–3 to individual performance.”
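As a worked example of that breakdown, this sketch computes an overall rating from the three components. The weights (1/2 team, 1/4 teamwork, 1/4 individual) sit within the recommended ranges, and the 1-to-5 score scale is my assumption:

```python
# Illustrative weighted rating: weights chosen from the recommended ranges
# (team 1/3-1/2, teamwork 1/4-1/3, individual 1/4-1/2); scores are 1-5.

WEIGHTS = {"team": 0.50, "teamwork": 0.25, "individual": 0.25}

def overall_rating(scores: dict[str, float]) -> float:
    """Weighted average of the three rating components."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

# 0.5*4.0 + 0.25*5.0 + 0.25*3.0 = 4.0
print(overall_rating({"team": 4.0, "teamwork": 5.0, "individual": 3.0}))
```

Note how a strong team result pulls the overall number up even when the individual component is middling, which is exactly the incentive the breakdown is meant to create.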
Whether or not you do formal performance appraisals, this breakdown applies to compensation. HR consultant Lyholzer noted in her presentation deck, “Compensation thinking is a century old.” Again there are plenty of books about progressive systems for paying for performance instead of mere time, and details under “Compensation Schemes” in SuddenTeams. To repeat my earlier point, if you pay for individual performance instead of team performance, that is what you’re going to get.
The budgeting process in most companies is simply broken. It is based on the same falsehood underlying the Waterfall Myth: that humans can predict the future. Then many companies refuse to let people change the goals even when hard data shows they will be missed, and beat up middle managers for not meeting those impossible goals. This, in turn, causes a host of bad behaviors ranging from manipulation of the data to outright lying. In publicly held companies, this can cause organizations to defer revenues or profits in a given period because they believe stock value is hurt when companies overshoot their goals as much as when they undershoot them! (I have witnessed the delay of a shipment to customers by a week, surely reducing customer satisfaction, so the revenue would not post until the next fiscal month.) To some degree this belief is true in the short term, but it is a self-perpetuating issue: Because analysts are given few other metrics to use, they will conclude that the company doesn’t know what it is doing rather than looking at the underlying causes of the missed goal.
The company also thinks it has to project high earnings to look good, while thinking it is penalized when it fails to meet them. “In fact,” the American Association of Individual Investors states, “studies show that over the long run, stocks with high expected earnings growth tend to underperform stocks with low growth rates and low expectations because it is difficult to meet and exceed high expectations over an extended period of time” (italics added).
What really matters for shareholder value is earnings over time. For example, a European Central Bank analysis of 30 years of data found a significant correlation between earnings and stock prices in 13 countries including the United States. A book on valuation by McKinsey & Company summarizes the findings this way:
Companies with higher returns and higher growth (at returns above the cost of capital) are valued more highly in the stock market.
To value stocks, markets primarily focus on the long-term and not short-term economic fundamentals. Although some managers may believe that missing short-term earnings per share (EPS) targets always has devastating share price implications, the evidence shows that share price depends on long-term returns, not short-term EPS performance itself.
By contrast, consider startup companies. Startup investors and managers know there is a “burn rate,” the amount of money the company will spend each period of time to pay salaries and rent and keep the lights on. They also know the guesses at revenues the entrepreneurs have made—and that those are just guesses. The company focuses on work that will bring in the most money quickly. Rare in my experience is the startup that uses project-level budgeting. In a company of 250 in which I created the technical documentation group, I was the first manager to make a budget for my group, much less any projects!
Believe it or not, there are entire companies that operate this way even after achieving significant size and going public. They do not create detailed budgets, instead focusing on the likely value of various initiatives. One of the largest banks in Europe—one with no taint of misconduct during the worldwide recession of the late 2000s—is one example. Or at least that bank budgets realistically from the bottom up, like Southwest Airlines, instead of forcing a fit into an overall figure based on wishful thinking.
For a proven alternative to the annual budgeting trap, I highly recommend the book Beyond Budgeting from Harvard Business School Press. It describes a proven approach to saving the months of labor hours and stress normally wasted each year on budgeting exercises while also improving financial performance and customer satisfaction in a wide range of company types.
Any CFO should love this approach. As the book authors and conference speakers have pointed out, every CFO they’ve met already knew how much he or she wanted to spend in a year. In this Agile philosophy, as soon as they know that figure for the next year, budgeting is done! Then the organization delivers as much as it can for that price as shown in the next two sections.
If you are given the option, I recommend Scrum teams take the same approach as entrepreneurs. Act like each team or program is a little startup, with a “run rate” exactly like a startup’s burn rate: the total cost for the team each year, usually shown over multiple years with a small adjustment for raises, supply cost inflation, etc. Assume the team’s run rate will be the same regardless of what the team is working on or its size. Adding or losing a few people will not significantly change the rate, and really doesn’t matter for this exercise.
Taking this approach means all you have to focus on is which projects are likely to add the most value:
In the figure, the run rate (dotted line) is essentially flat with small cost-of-living adjustments. Projected annual revenues for the different projects overlay it. Clearly, Project C is the one the team should focus on from a strictly financial standpoint. Note that it doesn’t matter if the cost of the team goes higher because people are added to the team. It also doesn’t matter if the product manager who calculated the potential revenues was too optimistic about how fast they would come in each case. These circumstances would change the angles of the three lines so additional revenues are earned earlier or later, and profit margins would get wider or narrower. However, the changes would impact all three project lines the same, so unless information specific to “C” drops its line below that of Project B, “C” is still the way to go. Substitute costs for an entire business unit instead of a single team, and the result is the same.
Perhaps, though, the company wants to also do “B” based on strategic reasons, like creating a relationship with a particular customer or preparing for the future market. It can quickly figure out from the budget-to-value ratio how much of the run rate to apply to each project. In a multi-team program, this translates to what percentage of the teams’ sprint or release plans to apply to each project. Want to give Project C 60% of your organization’s effort, Project B 30%, and continuous improvement efforts 10%? If you have the luxury of 10 full-stack teams, have them split the projects out at a 6:3:1 ratio (six teams on “C,” etc.). Otherwise have the Release Planners assign epics from each project to each release at roughly that ratio. As explained elsewhere in this site, size differences will average out over the releases.
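The 6:3:1 split works out evenly with ten teams, but team counts rarely divide so neatly. Here is a minimal sketch of how the allocation could be computed for any team count; the function name and largest-remainder rounding choice are my illustration, not part of any FuSS tooling:

```python
def split_teams(total_teams, ratios):
    """Allocate whole teams to work streams in proportion to the given ratios,
    using largest-remainder rounding so the allocation sums to total_teams."""
    total = sum(ratios.values())
    exact = {k: total_teams * v / total for k, v in ratios.items()}
    alloc = {k: int(x) for k, x in exact.items()}
    leftover = total_teams - sum(alloc.values())
    # hand any leftover teams to the streams with the largest fractional remainders
    for k in sorted(exact, key=lambda k: exact[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

print(split_teams(10, {"Project C": 6, "Project B": 3, "Improvement": 1}))
# -> {'Project C': 6, 'Project B': 3, 'Improvement': 1}
```

With seven teams instead of ten, the same call still yields whole teams summing to seven, mirroring how Release Planners would assign epics "at roughly that ratio" when the numbers don't divide evenly.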
Fixing slow progress by justifying additional budget for people is easy if you are using capacity planning. Given the hard data that technique supplies, you will be able to:
- Show that people are maxed out.
- Calculate how many people you will need to speed up a project.
- Prove that progress is slowed on a given project because you are running out of capacity for this or that role (or lack of an additional team).
Since most executives think Agile is something development teams do, they may not change their demands for project-level costing. Fortunately, release planning allows teams to come up with an initial budget number in FuSS with relative ease. Once you know which teams will work on the project, the Scrum Master or Agile Release Manager can:
1. Create a version release.
2. Obtain from your Finance or Human Resources department either of these for each team:
- The total compensation of all team members per week, or
- The "standard labor rate" used in calculating costs per person, which you can multiply by the number of members times 40 hours to determine a weekly cost.
3. Multiply the Step 2 figure by the number of weeks in each team's sprint cycle to obtain a "price per sprint" (PPS).
4. Multiply the PPS by the number of sprints estimated for completion of the version plan.
5. Add the costs for any equipment, software, and other supplies, using the traditional project management means for estimating these.
6. Add a small cushion for unknown costs, perhaps 10%.
Example: For two software teams totaling 16 people and a standard labor rate of $52 per hour, for work expected to take 12 three-week sprints, with no additional supply costs:
- 16 x 120 hours (3 labor weeks) x $52 = $99,840 PPS
- ($99,840 x 12) + 10% = $1.32M (rounded)
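The same arithmetic can be scripted for reuse across teams and versions. This is a minimal sketch (the function names are mine, and I apply the cushion to the whole subtotal, labor plus supplies):

```python
def price_per_sprint(people, sprint_weeks, hourly_rate, hours_per_week=40):
    """Labor cost of one sprint: headcount x sprint length x standard rate."""
    return people * sprint_weeks * hours_per_week * hourly_rate

def version_budget(pps, sprints, supplies=0, cushion=0.10):
    """All sprints plus supplies, padded with a small cushion for unknowns."""
    return (pps * sprints + supplies) * (1 + cushion)

pps = price_per_sprint(people=16, sprint_weeks=3, hourly_rate=52)
print(pps)                             # 99840
print(round(version_budget(pps, 12)))  # 1317888, i.e. the $1.32M above
```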
Hardware programs should begin with an initial “Revision A” project resulting in at least a digital prototype and a draft Bill of Materials (list of parts). After the program is approved based on those, estimate time and materials for the next iteration (revision), and so on until the final “rev.”
Nothing in the process promises specific deliverables. The estimate simply comes up with an amount of work, translated to a specific cost to provide a number. In fact, there is an even more streamlined approach in line with the Beyond Budgeting summary above. The same way CFOs know how much they want to spend in a year, most project sponsors know how much they want to spend on a project. Make that figure the project budget! The PPS per team times the number of teams tells you how many sprints you will get for that amount, and teams using FuSS will deliver as much functionality as humanly possible in that time.
One advantage to an Agile method with full-stack teams over waterfall is that the labor costs are directly correlated to time periods, making it much easier to understand the impacts of added scope and resulting sprints. In waterfall, different resources participate in the project at different times to some degree, and the percentage of their hours applied to the project on a given calendar date can be impacted as the schedule changes. This can make cost impacts a nightmare to calculate during a formal Change Management Process, and regularly renews fights over resources.
What Customers Really Want
The Agile Manifesto is littered with ideas that confound the traditional new product development contract, beginning with two of the four primary statements:
- “Customer collaboration over contract negotiation.”
- “Responding to change over following a plan.”
In the vast majority of companies, development contracts or related statements of work specify what will be delivered by when at what cost. This is another driver behind executive insistence that project managers commit to and meet the “Iron Triangle.” On the other side of the negotiation table, customers who don’t understand Agile—which is still most of them—want to know what they are going to get for their money by when. That is easy to do when the product already exists. It is impossible to do when the product doesn’t exist.
Fortunately, the evidence from research into customer satisfaction shows these customers don't know what they really want, which is simply to be happy with what they get for their money. One example of the science is a two-year study of 8,000 customers of internet-service providers, banks, and large retailers. "The gap between perceived quality and expected quality, called 'expectancy disconfirmation,' is a strong predictor of customer satisfaction," it found. Another strong predictor was captured in a sample question the journal article quoted from another researcher: "Considering the products and services that your vendor offers, are they worth what you paid for them?" A number of studies cited in that article show that customer satisfaction foreshadows future purchases and customer retention.
Agile rejects the false expectations set by the Iron Triangle, greatly increases transparency, and radically improves quality. It follows that Agile provides a smaller gap between what the customer expects and what the customer gets. Again, the critical finding: It is not when or what is delivered by itself that matters. What matters is how those compare to customer expectations, and whether the customers feel they got their money’s worth.
A Contract to Match Reality
Corporate lawyers are almost universally ignorant of that finding, and that is not their fault. It is not up to lawyers to come up with the general approach to customer relations and related project governance, only to put that approach into the language needed to prevent disputes and protect their clients if those occur. Project management is not their area of expertise; their clients have not forced them to learn about Agile; and there are very few sources from which they can learn about it.
I am not a lawyer, of course, but training on various contract types is part of becoming a project manager, and I have been involved in many contract negotiations over the years. By both means, I am well acquainted with the strengths and weaknesses of the common types for projects, from a business perspective. Not a “legal” one, obviously, so be sure to talk with your counsel about my recommendations. I will repeat some information from the rest of the site in this section so you can give it to them as background for that conversation. You might also give them the link to “The Difference between Agile and Waterfall” as background, or at least spend a few minutes explaining sprints and releases.
As of 2016, I could only find one book, one significant paper, and a couple of templates related to Agile contracts. In 2017 the Project Management Institute included two pages on contracts in its Agile Practice Guide. After reviewing all these, I propose here an approach that modifies a known type of contract to reflect the Agile mindset, which I will call the Agile Capped Time and Materials Contract. The standard T&M contract charges the client for labor time and supplies until the defined product (“scope”) is delivered to the customer’s satisfaction. In a waterfall world, this type rightly troubles customers, because they think they are taking all of the risk. That is, they fear that paying the vendor for the time spent encourages the vendor to stretch the project out.
Therefore, many such contracts add a maximum amount or “cap.” In theory this motivates the vendor to finish up before that amount is spent. Because of the impossibility of predicting delivery in R&D, what usually happens is the product is incomplete when the cap is met, causing acrimonious negotiations for a new cap and/or the hassles of transfer to a new vendor. At the very least, quality is harmed in the rush to get the product out the door, and the vendor ends up continuing work for free under the warranty. Plus, their reputation is hurt. In any of those cases, no one is happy (except the new vendor!).
Unfortunately, the other common contract types rely on the myth that project management can accurately predict the Iron Triangle (scope, schedule, and budget). Therefore, all too many companies I have worked with said they wanted to be Agile, yet asked for—or too often, sent down from on high—the dates by which a specific feature or product would be delivered. Results like those in the previous paragraph occur under these contracts as well. The Agile Capped T&M Contract attempts to break these patterns by focusing solely on customer satisfaction. This is ensured by matching expectations to reality.
Like a standard T&M capped contract, the Agile version reflects how much the vendor will be paid per period of time plus how supply and material expenses will be repaid. But in the Agile case:
- The period is a “sprint,” a set length of time from one to four weeks during which the product is built in an iterative cycle.
- Scope is only described as a goal statement and objectives within the contract or statement of work.
- A high level of customer involvement per the Agile Manifesto is prescribed, to:
- Ensure expectations align with outcomes, nearly guaranteeing customer satisfaction.
- Ease customer fears by giving them a high sense of control.
- Easy “off ramps” are provided, based on the assumption that both parties are better off moving on if the project isn’t working out.
Heavy customer involvement gives the customer complete visibility into, and control over, project decisions as often as every sprint. If scope is added, it's because the customer wants it despite impacts on project length. Fully aware of how many requirements are delivered in a given period, the customer does not have to ask the vendor for the impact of adding or changing a requirement. They already know! If adding resources is suggested, the customer understands the reason—and in fact, may be the party suggesting them. A feature of Full Stack Scrum (FuSS) not shared by all Agile-at-scale models is hard proof that the teams are working as fast as they can without risking burnout.
One Contract, Two SOWs
Note the emphasis on “new” products in that discussion. For all but Web-based software projects, development is followed by software implementation and/or deliveries of hardware. In those cases, a light over-arching contract would encompass two more-detailed statements of work (SOWs). The first, which I call the “development” SOW, covers the design, creation, and testing of a new product or a new major version of an existing one. This would definitely use the Agile Capped T&M approach.
A second “delivery” SOW, if needed, covers the implementation of the final version as if it had already existed. The Delivery SOW could cover implementations small enough to be highly predictable across clients or sites, and therefore could instead invoke one of the standard date-centric contract models. So too would manufacturing deliveries.
The more variations there are from previous projects, however, the more I recommend the Agile approach for the Delivery SOW as well. Regardless of the type, it is possible for the two SOWs to be in effect at the same time. It would be very Agile indeed to continue improving the product under a Development SOW while installing the base version under a Delivery SOW!
The Development SOW would specify terms something like the following, in loose chronological order:
- The customer and vendor representatives draft initial requirements:
- For smaller, rapid-release projects, these may take the form of user stories provided directly to one or two teams via their Product Owner(s).
- For larger multi-team efforts, these take the form of multi-story “epics.”
- In either case, the point is not to identify the actual scope that will be delivered, but to estimate the amount of work.
- The vendor conducts a version planning exercise using those requirements, and drafts a project charter resulting in:
- A “price per sprint” (PPS).
- The initial number of sprints.
- The vendor and customer negotiate a cap based on the resulting cost (see “Budgeting an Agile Project”).
- Scope is not fixed until, for contracts using:
- Stories and Sprints—The Planning Ceremony for each sprint, after which no stories in the sprint can be proposed or significantly changed except by the team.
- Epics and Planning Releases—One sprint after the start of a planning release, after which no epics in the release can be proposed or significantly changed except by the Release Planners.
Note: A planning release may or may not result in a version handed off to the customer, depending on the type of deliverables and customer preferences.
- Requirements can be paused, reduced, or deleted by the customer during a sprint/release, but cannot be replaced or revised to add scope.
Note: Teams that complete remaining requirements can work the next highest ones in the Version Release. This honors the Agile Principle about accepting change, because the new requirement can quickly be workshopped and its story or stories placed at the top of the backlog.
- Progress is reported primarily via customer participation in the Demonstration Ceremony, and/or “sprintly” using the format under “Send Sprintly Reports.”
Note: If multiple teams are involved, they hold a Joint Demonstration.
- Customer acceptance testing takes place after each sprint for stories or planning release for epics, with defects communicated in a specified way (detailed below).
- Initial delivery occurs, and the Delivery SOW takes over, for:
- Software or services—One sprint/release after the customer signs off on acceptance testing.
- Hardware—Within a specified period after the customer signs off, based on the company’s historic ramp-up time for manufacturing and delivering new products.
The Development SOW would require the customer to name an Agile Liaison (AL) who:
- Can make decisions on behalf of the customer.
Note: This means they are not simply messengers who have to check all decisions with higher managers, which would greatly slow the development process.
- Is the only person at the customer’s company authorized to funnel requirements to the vendor.
- Meets with the vendor representative weekly to reach agreement on the wording of the requirements proposed for the next sprint or planning release.
Note: In the case of releases, this would be done through participation in the normal release planning process.
- Replies to vendor representative contacts within one business day.
- Attends Demonstration Ceremonies.
- Identifies a backup within the customer’s company and:
- Coordinates with that person to ensure seamless representation in the AL’s absence.
- Communicates with the backup so they can step in without the vendor having to repeat much information.
If the vendor is using a tracking tool available via the Web, both individuals would be granted “Viewer” rights.
Note that acceptance testing by the customer is done after the vendor representative has “accepted” the requirements from the team as described earlier in this site. Any standard approach to “user acceptance testing” (UAT) is fine, and may result in “standalone defects” whose fix must be started in the next sprint. If any are found, approval of all fixes results in customer acceptance of the deliverables.
The Development SOW would specify that Acceptance Criteria negotiated for each requirement prior to the work are the sole grounds for accepting or rejecting a requirement. That way the vendor gets credit for delivery, and the customer recognizes their own impact on progress if the customer changes their mind after the work is done. During the sprint/release, as noted before, the customer can reduce or cancel the requirement. Afterward, the parties can add a new requirement to remove or revise that feature in the next increment. To reiterate, the customer doesn’t have to keep a feature they don’t like; they just have to recognize that the vendor delivered what they originally agreed upon.
A “Definition of Done” in the SOW specifies the assumptions the customer can make when a story is presented for acceptance even though they are not repeatedly specified in the Acceptance Criteria, such as:
- Types of tests performed.
- 100% passage of tests.
- Updating of documentation and training materials.
- Placement of code in the customer’s UAT location, if relevant.
If at any point the budget cap will be exceeded within the next Planning Release (or some mutually agreeable number of sprints), negotiations begin with the customer to either:
- Raise the cap.
Note: This may require a new version planning exercise.
- Accept the product as “good enough” as of the end of the warning period (see next section), at which point the Delivery SOW kicks in.
- Terminate the contract.
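The trigger for those negotiations is simple to monitor each sprint. A sketch, with names and figures of my own choosing (reusing the earlier example's $99,840 PPS and $1.32M cap):

```python
def cap_warning(spent, cap, pps, sprints_ahead):
    """True if the agreed look-ahead window would push spending past the cap."""
    return spent + pps * sprints_ahead > cap

# $1.1M spent so far, three sprints of look-ahead at $99,840 each:
print(cap_warning(spent=1_100_000, cap=1_320_000, pps=99_840, sprints_ahead=3))
# -> True: time to negotiate a higher cap, "good enough," or termination
```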
Per the Agile Principle emphasizing customer satisfaction, the contract specifies no delivery dates. The development continues until the customer says they are happy. After final UAT and fixes are done, the Development SOW is considered fulfilled. Since testing and fixing have been happening throughout the project, there should be no bugs, or few enough to fix in a single sprint after they are identified. For hardware, defects may require a new product increment taking one or more planning releases. In either case, the emphasis on building quality in from the start means the teams can move on to their next projects, leaving a little time in their sprints for UAT bug-fixing. Meanwhile, the Delivery SOW takes effect, likely overlapping the end of the Development SOW.
During development, the customer can cancel with two sprints’ notice (or longer, if more time would be needed to transfer the work to a new vendor). The customer would only pay for the number of sprints that will be completed by the end of that time. This power provides the protection clients often consider to be missing from T&M contracts, because it creates an incentive for the vendor to maintain a pace and quality that keep the customer happy. The vendor still has the usual protection of these contracts, plus the costs of switching vendors. And both sides are protected by the high level of transparency—each is fully aware of how their actions are impacting the project.
The usual T&M invoicing schemes should work fine for the Agile version, except that you would replace the typical unit of measure (billable hours) with the simpler Price Per Sprint. For example, the Development SOW could call for:
- A down payment of 20% of the cap.
- Use of that to cover initial invoices until emptied.
- Billing of the remaining invoices at 90% until the product is accepted or the contract terminated.
- Payment of the remainder upon customer acceptance of the final planning or Version Release.
- No charges for the final defect-fixing-only sprint(s) or release(s), as this is effectively warranty work and the teams can be doing other development.
- A switch to the terms of the Delivery SOW at that point.
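Those billing terms are easy to model. The sketch below is my interpretation of the bullets above (a 10% holdback per sprint, released on acceptance); the function name and figures are illustrative only:

```python
def invoice_schedule(cap, pps, sprints_completed):
    """Model the scheme above: 20% of the cap up front, each sprint's PPS
    billed at 90% once the down payment is consumed, and the 10% holdback
    due when the customer accepts the final release."""
    credit = 0.20 * cap                  # down payment covers early invoices
    billed = []
    for _ in range(sprints_completed):
        due = 0.90 * pps                 # 10% held back until acceptance
        draw = min(credit, due)          # draw down the prepayment first
        credit -= draw
        billed.append(round(due - draw, 2))
    holdback = 0.10 * pps * sprints_completed
    return billed, holdback

billed, holdback = invoice_schedule(cap=1_320_000, pps=99_840, sprints_completed=12)
print(billed[:4])    # early sprints covered by the down payment
print(holdback)      # remainder due when the customer accepts
```

With the example figures, the down payment absorbs the first two sprints' invoices, a partial invoice appears in the third, and the holdback plus down payment plus invoices sum to the full labor cost of the twelve sprints.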
All defects—and I do mean “all”—identified within a set period of time after the Delivery SOW kicks in would be fixed at no additional charge. This would drive teams to maximize built-in quality and prevent executives from trying to hurry the project by short-cutting testing.
Many larger companies with centralized planning and processes enforce “gates” where project leaders must convince an approval board of upper managers to continue the project. Generally there are a number of “stages” or “phases,” each leading to a “gate meeting.” Each gate requires the completion of a number of documents intended to ensure various good business practices are met or regulations satisfied. A key tool of corporate governance as commonly practiced, the phase/gate model is built on the idea that executives need to ensure the organization’s project dollars are spent wisely.
Unfortunately, this means people far removed from the needs of a unit’s customers are making decisions that directly impact the satisfaction of those customers. Every gate model I’ve seen was extremely waterfall in nature, progressing through the project in traditional project management phases. Each phase was filled with the kind of “comprehensive documentation” the Agile Manifesto devalues, most of it guaranteed to quickly become out of date in research and development projects. Finally, the phase/gate effort was usually so burdensome, I have yet to be in a company where the model was followed religiously.
Many companies and some academics have attempted to create hybrid methods that apply a traditional stage-gate model to the overall project, and an Agile model to the development phases. My reading of this literature is that it shows a lack of holistic thinking. One 2012 journal article, for example, starts from the assumption that Agile was created specifically for control of team-level software development, ignoring roots in manufacturing and broader cross-functional applications detailed in this site. It goes on, however, to show how iterative planning has successfully been used in earlier planning phase-gates. I hope I have made clear in this site that Scrum can be applied to any kind of research and development. That includes iterative investigation of project feasibility!
For all these reasons, I debated for months whether to cover this topic at all. My simplest answer to a gate model is, “Don’t do it!” As you are about to see, FuSS covers all of the goals of corporate governance of projects that are valid in a decentralized, Agile organization. However, because I keep running into people trying to fit the square peg of Agile into the round holes of a waterfall gate system, and I have an answer for rounding the edges off, I felt compelled to give it some space here.
Some of the documentation required in waterfall gate models is valid. Any project needs some level of business justification, such as that captured in a project charter. Companies in highly regulated industries like medical devices, or those maintaining certifications from organizations like Underwriters Laboratories Inc. (UL), are required to keep and provide some documents in prescribed formats. Many times gates are instituted because projects are not identifying these business requirements.
Instead of telling people what documents must be completed by when in each project, however, I solve the needs by two means. One is the Agile Liaison role. As mentioned in its description, each business unit becomes aware of the project status and responsible for requesting the information it needs through that role throughout the project. This is done by creating epics or stories for required documentation, such as certification forms.
The second method is the “template program.” A generic set of projects and/or epics are created that identify all of the generic “must-have” documents and other business requirements. Each time a new program is created, its sponsor copies the template program and revises it into his or her new program. For details, see “Create a Template Program.”
I have failed to convince any client company to do this, but it is possible to flex an existing gate model such that it does not interfere with Agile organizations within. In fact, the 1986 article that introduced the term “scrum” to project work includes a gate model with four overlapping phases adapted from a sequential set of six. First used to develop a copier, the company later improved and spread the model. “Compared with that effort, a new product today requires one-half of the original total manpower,” the article says. “Fuji-Xerox has also reduced the product development cycle from 4 years to 24 months.”
Every gate model I have seen has phases along the lines of: “Propose, Initiate, Plan, Design, Develop, Test, Close.” Remember back in the topic introducing Agile, when I likened Agile to a series of mini-waterfalls? That is how I make the translation to an Agile model.
We already established that you have to initiate an Agile project like any other, so we’re going to leave that phase in. The models usually split this into two or more steps. The first amounts to preparing for the second, and in every case I have witnessed, the participants have ended up combining the two with the complicity of the approvers. I deal with that reality by combining the two, allowing requesters to prepare any way they define.
As shown under “The 30-Second Explanation,” all of the remaining steps repeat with each iteration in Agile. At some point the code or hardware revision is released to customers for testing, but we don’t want to hold up the team from starting the next iteration. As the Agile contracts section details, the iteration should have few if any defects, so the team sets aside some time for defect fixing and keeps going. From the aspect of the customer, though, the iteration does not end until UAT is done. We’ll keep a phase for that, but overlap it with the start of the next iteration. You’ve already seen in the release planning section how planning for that next iteration overlaps with the previous one. To meet your executives’ requirements for a gated approach, after initiation we’ll apply the remaining phases to each version release. In other words, for phase/gate purposes we treat each version release as a separate project, except it does not require another Initiation Phase. I assume here that you are doing multiple planning releases per version release, trusting you can figure out how to condense the steps below if not.
Here, then, are the phases and the deliverables required in each:
- Initiation (first Version Release only):
- Draft epics.
- Version plan.
- Project Charter.
Note: The remaining phases repeat for each version release.
- Planning:
- Release plan for the first planning release, including proposed epics.
- Architectural Runway (first planning release) or updated architecture documentation.
- Development:
- Accepted epics.
- Release reporting.
- Closing:
- UAT defects fixed.
- Project reporting required by the company.
For subsequent versions, the Planning phase overlaps the last planning release of the current version, and Closing overlaps the first planning release of the next version. Some examples will help you understand.
Let’s walk through the cycle with a couple of programs. In the table below, the first program gets approved before any planning is done, and the Planning gate is passed before development starts. The rows after that are three-month planning releases (PRs). After getting approval at the end of its Initiation phase, Program 1 requires two version releases (VRs). Each VR repeats the last three phase/gates. (The first VR requires four PRs, the first column, while the second needs only three.) The Initiation and first Planning phases for Program 2 kick in as Program 1 is finishing up. That way, as the teams finish one, they go directly into the new one with no loss of productivity.
This table illustrates the overlapping gate cycles for a company using quarterly planning releases:
|PR|Program 1|VR1|VR2|Program 2|
|2016-C|Development|||Planning (P2 VR1)|
This is a very idealized model, so let’s look at a couple of variations:
- In companies requiring a bill of materials (BOM) for sign-off, the first “Plan” phase in the program could last an entire VR, but the plan would still be developed through iterative PRs.
- Each VR results in a "revision" of the product: VR1 produces "Rev. A," usually just a digital prototype on which the BOM is based; VR2 produces "Rev. B"; and so on.
- Because of the length of hardware stress testing, manufacturing qualification testing, etc., the “Close” phase could require more than one PR as well.
- Shorter Cycles—Software and non-technical programs that can release deliverables every PR will be in a state of continuous overlap in which they:
- Develop the current Version Release (also the PR).
- Close the previous VR.
- Plan the next VR.
The BMW plant in Spartanburg, S.C., USA is fascinating for anyone interested in business processes. I took the tour expecting what I had seen in old films of car plants, each assembly line producing the same model over and over. Instead, at BMW the same line had different customizations one after the other, meeting the exact demands of each customer. The synchronization required with suppliers was more impressive to me than the amazing robots. One vendor supplied the doors in racks in the same order as the customized cars came down the line.
Techniques now considered “Agile” were in use in manufacturing long before the Manifesto was written. In this field “agility” is still a lower-case word for most, yet as I read through scientific journal articles on manufacturing and supply chain, I was struck by the degree to which the issues and responses directly parallel those in organizations choosing between waterfall and Agile project management:
- “Agility is a business-wide capability that embraces organizational structures, information systems, logistics processes, and, in particular, mindsets,” a marketing and logistics professor cited by other experts explained.
- An Agile manufacturing company “is capable not only of responding to changes reactively, but also creating actively further changes to the environment, and taking advantage of the new opportunities. Such capabilities are in general enabled by mutually enforcing flexible people, processes and technologies,” wrote a Nokia engineer comparing manufacturing and software agility.
- Three Indian researchers wrote, “Agility means using market knowledge and (a) virtual corporation to exploit profitable opportunities in a volatile marketplace.” 
- After leading a panel of supply chain veterans through a structured approach to identify the factors, those researchers concluded, “supply chain agility depends on customer satisfaction, quality improvement, cost minimization, delivery speed, new product introduction, service level improvement, and lead-time reduction.”
Around the time the Agile Manifesto was being created, supply chain experts were determining when to focus on “Lean” and when agility was more important. Some argue there is no difference. Lean principles originated in the same environment and time frame as methods now called “Agile.” As a journal article explained, “The primary focus and guiding principle of lean is the identification and elimination of waste from the process with respect to customer value.” Those familiar with Lean may already have recognized a lot of overlap between its practices and those described in this site.
The article goes on to say, “In the domain of software development, the types of waste can be interpreted as: extra features, waiting, task switching, extra processes, partially done work, movement, defects and unused employee creativity.” Given that list, I would go so far as to say FuSS not only complements Lean practices, it is a Lean practice! However, other Lean practices can be applied to both the implementation and improvement of your FuSS processes.
Different types of supply chains will likely have different points where Lean or agility matters more, even within the same chain. For example, a company that uses a standardized set of parts with fairly steady pull rates to build a range of products on demand would leverage Lean more for the parts, and agility more for the final assembly. Perhaps more likely are scenarios in which some portions of a multi-company supply chain lean more toward Lean while others are more Agile.
A 2011 study of plant and operations managers provided a concrete example of the demarcation line. It found that Just-in-Time purchasing, a Lean element that provides parts as needed to minimize inventory costs, is likely a precursor to manufacturing agility. However, Just-in-Time production was part of agility itself, and in fact might be an element of both Lean and agility. By the way, the study provided support for others that found manufacturing agility improved financial and marketing performance (sales, market share, etc.).
I bring up the parallels between manufacturing, supply chain, and software agility for a simple reason. No manufacturer should claim it cannot use Agile principles, because thousands already are!
Manufacturing and supply chain agility are so closely interrelated that I will follow researchers in lumping them together as agile manufacturing (AM). AM relies on a range of larger environmental strategies:
- Modularity—By designing products built from interchangeable parts, a new variant can quickly be customized by hardware, software and firmware engineers iteratively in close coordination with supply chain and manufacturing professionals.
- Extended Enterprises—When a Fuji-Xerox joint venture created a new copier in the 1980s, the self-organizing program team introduced high levels of collaboration with suppliers. “The FX-3500 team invited them to join the project at the very start (they eventually produced 90% of the parts for the model),” says the seminal Scrum article I’ve cited several times. “Each side regularly visited the other’s plants and kept the information channel open at all times.” In effect, AM firms treat suppliers and delivery vendors as units of the same company, with high levels of joint planning and design, development coordination, and progress tracking. Depending on the complexity of design, this may include either deeply interrelated functions with a sole-source supplier, or connections with multiple sources all of whom practice AM themselves.
- Anticipation of Change—AM places a far higher emphasis on predicting future customer needs (market requirements) than software firms typically do, because of the longer development cycles. Firms with high AM employ a number of techniques to specify and prepare for coming changes before those changes can impact them.
- Physical Flexibility—Assembly line layouts can be quickly reconfigured, and tools are versatile, allowing the firm to shift among product variants as often as needed with minimal cost.
- Workforce Flexibility—Hiring and training practices emphasize broad skill sets that allow workers to shift roles as needed. Whether the problem is people being out on a critical day or a quick turnaround to produce a different product variant, a lack of people with the right skills would translate directly into lost revenue. In AM, compensation is often based not on tasks accomplished but on skills learned: for example, a minimal base salary may be supplemented as workers complete formal training or cross-training within and across prescribed verticals and tiers to become more versatile.
- Continuous Improvement—I think it fair to say all of the best-known business process and quality improvement methods you can think of originated in the manufacturing world. Lean, Total Quality Management, Six Sigma, quality circles… I could go on and on, and do a bit under “Other Recommended Practices” below.
- Local Control and Empowerment—Decisions about what is best for the customers they serve are pushed down to local plants, and those plants are “full-stack” in the sense of having dedicated personnel in the roles needed to make and implement those decisions. The plant doesn’t have to wait to make the moves it needs while a centralized procurement department prioritizes those needs against every other plant’s; it just makes them. Another element is empowered teams similar or identical to those I have described in this site. Indeed, a significant percentage of the field studies I read for The SuddenTeams Program were conducted in manufacturing settings.
- Integrated Information Systems—Information tools transparently link sales, purchasing, finance, development, manufacturing, shipping, and other relevant departments to enable quick identification of, and response to, coming change. Each function can learn information it needs from others by looking at a screen instead of hunting down individuals, wasting productive time for seeker and provider both.
Each of these strategies directly matches, or fits perfectly with, the Agile Manifesto principles. Companies that produce tangible products certainly can spread Agile through the rest of the enterprise without harming their manufacturing processes, whether those are more Lean or more Agile. In the latter case, they already have internal examples to use and more reasons to make the change in their R&D and administrative functions. In the former (Lean-leaning) case, a chat with a consultant on manufacturing and supply chain agility should lead to a better bottom line.
So many technical techniques complement Full Stack Scrum that engineers can attend “Agile technical conferences” dedicated to those topics. These are outside my area of expertise, but I have been involved in their establishment at various clients. After you have FuSS up and running, any of these that are applicable to your deliverables would be logical next steps in your continuous improvement efforts:
- Six Sigma, ISO, and related quality or process improvement methods—Given the emphasis in FuSS on quality, I obviously support almost any relevant system for formally measuring and improving that quality. I say “almost” because some of those methods violate the “self-organizing” principles, which in turn can cause them to lower customer satisfaction by emphasizing standardized processes too highly. For example, after reading through online debates about Capability Maturity Model Integration (CMMI) and speaking with experts, I remain unconvinced that its two highest levels are compatible with team self-organization.
Temporary work groups like quality circles, kaizen teams, or action teams are great ways to address specific problems affecting FuSS organizations. Representatives from the Agile teams can either create stories to set aside time, or merely remove the hours for the temporary team from their individual Capacities.
- Continuous builds and automated testing—After software code is written, it must be compiled into a cohesive functional set in a language a computer can understand, called a “build.” Once a manual process, this now can be done automatically. After the builds reach a predetermined level of maturity, a version traditionally was turned over to a team of quality assurance (QA) engineers; more complex systems with various teams contributing often still do this. The QA team runs a large range of tests to ensure the new functionality is working as claimed and that all the pre-existing features still work. In earlier days all testing was manual, done by people at keyboards, so this required a “code freeze,” in which development stopped on a build that might be released to customers, followed by weeks or months of manual testing and bug fixing while development began on the next version. (In my experience, the process was never this clean: the “code freeze” was mushy, with new functionality sneaking into builds that were only supposed to fix bugs the testers were finding.)
Today applications can create the builds, and all testing that does not require hardware changes can be done by applications using “automated test scripts” based on test cases. There is no better way of ensuring defects are captured and fixed before release to customers than doing a build every night and running every system-wide test every night. In part this is because problems are found soon after they are created, reducing the time needed for cause analysis. It also prevents small problems from combining into much larger ones, or from creating cascade effects farther down the chain of computer actions. Freed from that burden, traditional testers can also do more “exploratory” testing, trying things your Customers do not expect; this prevents customers from finding those defects first and identifies potential improvements.
- Dev/Ops—Most companies split their engineers into development teams and operations teams. The former create new functionality; the latter install it and support customers. The Dev/Ops movement is trying to at least get these teams working more closely together. That way the development teams create fewer problems for Ops, and the knowledge that Ops gains about the system and customer needs is efficiently fed back into improvements in the system. In extreme cases the line is blurred to where one team does both. Dev/Ops conference topics largely overlap the Agile technical topics, and I consider combined Dev/Ops teams a logical “full-stack” form.
- Test-Driven Development (TDD)—Different from ATDD, this is an approach to the actual writing of software code. Just as Agile takes a small piece of functionality and delivers it “fully tested,” programmers using TDD write very small pieces of the larger functionality at a time, pieces requiring as little as five lines of code. They first write a technical version of a test case for that functionality, and then a rough draft of code that in most cases fails the test the first time the programmer runs it. The coder fixes the code until it passes, and in an important final step, edits the code to make it as simple and easy for other programmers to understand as possible. The process is repeated until the entire functionality is implemented—in FuSS terms, until the related user story’s Acceptance Criteria are met. The method is well-proven to significantly reduce defects and improve overall productivity.
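The nightly automated testing described under “Continuous builds and automated testing” can be sketched in a few lines. This is a minimal illustration using Python’s built-in unittest module; the apply_discount function and its behavior are invented for the example, standing in for any pre-existing feature a nightly build server would re-verify.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Pre-existing feature (hypothetical): discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class RegressionTests(unittest.TestCase):
    """Run every night so a defect is found the day it is introduced."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_changes_nothing(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# A nightly job would typically run the whole suite via a build tool;
# here we run it programmatically to show the pass/fail signal.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The value of the pattern is the unattended pass/fail signal: a failing night pinpoints the day (and usually the commit) that introduced the defect.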
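One TDD cycle can likewise be sketched briefly. Everything here (the initials function and its test) is hypothetical; the point is only the order of events: the test exists before the code it exercises, the code is written to pass it, and the final step is cleanup for readability.

```python
# Step 1: write a failing test for the next small slice of functionality.
def test_initials():
    assert initials("Ada Lovelace") == "A.L."
    assert initials("grace hopper") == "G.H."

# Step 2: write just enough code to pass the test, then (Step 3) refactor
# it until other programmers can read it easily.
def initials(full_name: str) -> str:
    """Return upper-case initials like 'A.L.' for a space-separated name."""
    return ".".join(part[0].upper() for part in full_name.split()) + "."

test_initials()  # passes silently once the code is correct
print(initials("Ada Lovelace"))  # A.L.
```

In real TDD the cycle repeats every few minutes, each pass adding another small, already-tested slice toward the story’s Acceptance Criteria.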
I often get asked how to “make time” to implement techniques like these. Simple: Write technical stories or epics to do so, and set aside a portion of every sprint for the effort. In more than one organization, I have gotten backing from executives to create a “Continuous Improvement” program in the portfolio hierarchy and tell Customers that 10% of our effort would be dedicated to such work—no arguments allowed. Each of the bullet points above could become a project within that program.
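As an illustration with made-up numbers, the 10% continuous-improvement reservation could be computed per team member like this (the function name and the 60-hour Capacity are assumptions for the sketch):

```python
# Illustrative sketch: reserving 10% of each member's sprint Capacity
# for Continuous Improvement stories, leaving the rest for Customer work.

IMPROVEMENT_SHARE = 0.10  # the "no arguments allowed" reservation

def split_capacity(total_hours: float) -> tuple[float, float]:
    """Return (hours for Customer stories, hours for improvement work)."""
    improvement = round(total_hours * IMPROVEMENT_SHARE, 1)
    return total_hours - improvement, improvement

# A member with a 60-hour sprint Capacity:
customer_hours, improvement_hours = split_capacity(60)
print(customer_hours, improvement_hours)  # 54.0 6.0
```

The same arithmetic works at team or program level; the key is that the reserved hours are removed from Capacity before sprint planning, not negotiated away story by story.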
When Customers gripe, and some do, I ask whether they put money away for retirement or for college for their kids. They all do. “Then you already believe in investing a small part of current resources toward improving the future,” I tell them. That usually ends the argument, and if not, I refer them to the aforementioned executives.
 Broughton 2011.
 Bottani 2010; numbering removed.
 Project Management Institute 2013.
 Kettunen 2009.
 Zhang, Higgins & Chen 2011.
 Ariely 2011.
 Gino & Pisano 2008.
 Senge 1990.
 For example, Abolishing Performance Appraisals: Why They Backfire and What to Do Instead, by Tom Coens and Mary Jenkins; and Stop the Leadership Malpractice: How to Replace the Typical Performance Appraisal by Wally Hauck.
 AAII 2016.
 Durré & Giot 2005.
 McKinsey & Company 2005.
 Hope & Fraser 2003.
 An average for all workers or all workers in a category, usually “loaded” with average benefits costs and sometimes overhead like office rental and administrative support.
 Keiningham, et al. 2007.
 Opelt, Gloger, Pfarl & Mittermayr 2013.
 Arbogast, Larman & Vodde 2016.
 Cooper 2016.
 Takeuchi & Nonaka 1986.
 Christopher 2000.
 Kettunen 2009.
 Agarwal, Shankar & Tiwari 2007.
 Inman, Sale, Green & Whitten 2011.
 Takeuchi & Nonaka 1986.