Sustainability
In my last post, Agile Development in the '80s, I talked about the team I was on in the early '80s that was, in many ways, a Kanban team. I was discussing this with a colleague the other day, and he pointed out that my team lacked one very important attribute of an agile team: sustainability.
In the intervening 30 years, I may have fallen prey to the "The Way We Were" effect ("What's too painful to remember, we simply choose to forget."). There were several aspects of the team that were not sustainable.
We would all come into work around 10 AM, work closely together until lunchtime, take a long lunch together during which we designed the product, then come back to the office and work until 10 PM. That was really too much togetherness; it left us no time alone.
In spite of its agility, it really was a death-march project, since we had a fixed deadline, and way too much work to do. And this meant we could not experiment, because there was no time in the schedule for failing. (See my earlier post, Failing so you can win.) There was even a point where the boss asked us to reduce the quality of our code so we could get it done faster. (I had a lot of trouble with this. I know how to write good code, but I don't know how to write code that is only 85% good.)
There were a lot of weird (and bad) interpersonal dynamics between some of the team members, and particularly between the boss and the rest of the team. And the 12-hour days were not sustainable. This really hit home for me when I realized that I had lived in Chicago for a year, and the only places I knew were two restaurants that were open late, and the 7-11 down the street from my apartment.
So why did we do it? I always think about Tracy Kidder's book, The Soul of a New Machine, where he talks about "signing up". This isn't like signing a piece of paper, but is the point where the team starts feeling that the project is their project, and they will do whatever is necessary to make it go. We were young, and we were launching a new product. Most of us had come from a systems programming background and now, instead of just maintaining the systems at a data center, we were Building Something New and Great. That was enough to keep us going for the year or so it took us to make the first release of the product, but it was not sustainable.
Getting teams engaged enough that they "sign up" is important, but they have to be able to work at a sustainable pace, or they will eventually burn out and disintegrate.
Agile development in the '80s
As with many innovations in the software business, if you look back in time, you can see glimpses of agile development long before it was "invented". Even though we did not know of the term "agile development" at the time, the team I worked with on my first project for a software vendor, 25 years ago, was more agile than many of the teams I see today.
I was the lead developer on a screen-based product that ran on MVS, a popular IBM mainframe operating system.
Our team consisted of the following people:
- Our boss, who was a subject matter expert and a well-known speaker in the industry, with lots of industry contacts.
- An experienced MVS systems programmer, with some subject-matter expertise.
- Another experienced systems programmer.
- A young programmer of modest experience, but who was extremely sharp.
- An experienced technical writer.
- A secretary.
We were all new to the company, and we all sat together in our own room, separated from the rest of the development organization. We had a four-person cubicle, where the three programmers and the technical writer sat. The secretary had her own desk, and the boss had his own office, just off the main room.
Since our product was screen based, we started out by designing the screens, printing out the designs, and taping them to the glass wall of our conference room. We wrote the bare minimum of code required to display the screens, and mocked them up, using simple scripts to drive them with dummy data. We let a few potential customers, as well as several people from the greater development organization, try out the demo, adjusting things based on their feedback.
As we developed the production code to run each screen, providing real system data instead of the canned data from the mockup, we would replace the script, and mark that screen done in the conference room. Occasionally we would need to reorganize the hierarchy of screens based on the feedback we got. When that happened, we rearranged the pieces of paper taped to our conference room window. We continued this process until we had a shippable product.
Although we were not officially using agile development, what we were doing had a lot in common with agile practices.
- Our boss served as our product owner. He had many years of subject-matter experience, and a lot of connections in the industry, so he had a good handle on what the industry wanted.
- We made changes in small increments, demonstrated them to stakeholders, and changed our direction based on their feedback.
- While we did not have an official backlog or burndown charts, the printed-out screens on the window of our conference room, clearly marked when they were done, served as both.
- We met every day to discuss our direction and the status of things. These were longer than 15-minute standups; they frequently happened over a long lunch, and would include discussions of the design of new features. One could think of the design discussions as parking lot items.
- We were all in the same room. The fact that the room was quiet, since we were isolated from the rest of the development organization, and that we were all just one cubicle wall away from each other, meant that one usually didn't need to get up to ask a question of a co-worker; one could just ask, and receive an answer through the cubicle wall.
What were we lacking?
- Mostly, the fact that we didn't use formal sprints. What we were doing was probably more akin to Kanban: a free developer would pick a screen off the wall to work on, and usually was not working on more than one screen at a time, so the work-in-progress limit was effectively one screen per developer.
- A formal done-done criterion. A screen was considered done when the developer showed it to the boss (product owner) and convinced him (in other words, demonstrated) that it was done. So what we were doing was actually compatible with agile development; we just didn't formalize it.
- Automated tests. Testing was done by the developers, but we did not have the capability to do automated testing of screens at the time, and I'm not sure we would have thought of that even if we had had the capability.
- Documentation was treated as a completely separate project, so screens were generally considered done long before their documentation was completed. On the good side, the technical writer attended all of our design sessions, so she was far more familiar with the product than writers typically are today, when we just send them documentation changes.
- We didn't have formal stories or burndown charts. The screens on the conference room window served as both.
- We didn't do any formal estimation like planning poker. We would look at the screens, guess at how hard it would be to implement them, and, with the boss's input, decide how to prioritize them.
Why did this work?
- The nature of the product, with its individual screens, made it very easy to create bite-sized stories.
- This was a new product, so there was no legacy code to get in the way of doing things the way we wanted to.
- This was a completely new development team, one that did not interface very much with the existing development organization, so we had no entrenched corporate culture to deal with.
- There were no existing build tools to deal with. The whole build environment was up for grabs, and we could develop whatever we needed to support us, and change it as we felt necessary.
Interestingly, the team became less agile once we got the first release out. We could no longer rely on our screen mockups taped to the wall to keep track of the work to be done, and how it was progressing. We were forced to change to the more traditional release-cycle-oriented model, and had to deal with the existing procedures and facilities for shipping products, and adopt the ways of the existing development groups. And we started having to do technical support as well as development, which interfered with our development schedule, just as it does today.
High-Performance Teams
Bob Schatz from Agile Infusion is visiting our facility this week, as he does from time to time.
(If you hire someone to train your developers in agile development, I highly recommend having them come back periodically. Teams forget some of what they learned, and as they come to understand more about agile development, they can ask better questions and improve their understanding even more.)
Bob was talking about high-performance teams, and he asked if we knew what it felt like to work in a team like that. The best analogies I can think of come from outside the software business:
- When I lived in Detroit, I used to have a lot of friends involved in SCCA car racing, and we used to get together and work on our cars. When you are working on a car with someone, and just as you realize you need a 15 mm wrench, your partner hands it to you, that is a high-performance team. He is watching what you are doing, he knows that the nut you want to tighten is a 15 mm, and he sees that you are going to be ready to tighten it in a moment, so with no words spoken, he has the wrench ready for you.
- Another example is a jazz band I used to go see in nightclubs around the Detroit area. They had two keyboard players, one who sang and played keyboards, and one who doubled on sax and keyboards. One night one of them was in the middle of a solo on the keyboard when a microphone stand started to fall over. He stopped playing and grabbed the stand, and the other keyboard player finished his solo, without missing a note.
I have been on emergency teams with a number of very impressive colleagues. Usually, we would divide the work up based on our areas of expertise, and then go off in our corners and work on our parts, periodically getting back together to check our status. It was a great experience, but it still wasn't a high-performance team, because we were not working closely together on the same thing, and had not been working that way over a long period of time.
This is one of the bad points about shifting resources around between teams frequently. It changes the team dynamics. Everyone has to learn how to work with the new person, or without the person who was pulled off the team. When teams are always in flux, it is difficult to foster the working relationships that result in high-performance teams.
Low-cost, scalable, corporate information radiators
Many teams and companies have found information radiators useful. These are displays that show information and statistics, such as burndown charts, open issues, top backlog stories, or days till product release. They are located in well-trafficked areas, so rather than having to look up the information, people can almost absorb it by osmosis as they walk by. And seeing the information each time they walk by makes it more likely that they will notice important changes.
Most of the information radiators reported in the literature have been projects done by single teams, or a few teams in small companies. When you try to scale it up to a company with a lot of teams, it becomes trickier: although each team needs displays customized to the team, each team should not have to come up with their own solution, taking up time that would be better spent on product development. And standardizing hardware makes support easier, as well as keeping individual teams from getting too extravagant. (The 55-inch commercial display used by panic.com's information radiator is tempting, but at over $3000, is a bit hard on the budget.)
What is needed is a standard way to set up information radiators that is not overly expensive, and does not take a lot of time to do. Here is a proposal for such a scheme.
A lot of companies have spare laptops and monitors, either because of upgrades or, in today's lagging economy, reductions in force. The monitors are frequently in the 21-inch range, which, while not as impressive as a 55-inch display, will do the job if placed on a counter or bookcase. The laptops do not need to be particularly powerful; they just need to be able to run a web browser. They can be running Windows, Linux, FreeBSD Unix, or Mac OS X; it doesn't matter.
Some companies may be nervous that laptops that sit out day and night running the information displays might be stolen after hours. If this is a concern, or if you do not have a surplus of laptops, Android-powered set-top boxes, which cost less than $100, could be used instead. (You will probably have to clear this with your local IT department, as many have rules about attaching non-standard equipment to their networks.)
Once you have the laptop and monitor set up, you just need a start-up script that brings up the browser, which has its home page set to a special URL for information radiators, perhaps something like radiator.example.com. Spare laptops could be set up this way in advance, so when someone needs an information radiator, you just hand them the laptop and a monitor, and they find an Ethernet connection and plug it in. This relieves the IT department of the burden of setting up the information radiator.
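To make this concrete, here is a minimal sketch of such a start-up script in Python. The browser binary name, its flags, and radiator.example.com are my assumptions; any browser with a kiosk mode would do.

```python
#!/usr/bin/env python3
"""Start-up sketch for a radiator laptop: launch a full-screen browser."""
import subprocess

# Assumed values: the binary may be "chromium-browser" or "google-chrome"
# on some systems, and radiator.example.com is the placeholder URL above.
RADIATOR_URL = "http://radiator.example.com/"

subprocess.run([
    "chromium",
    "--kiosk",         # full-screen, no browser chrome
    "--noerrdialogs",  # skip "restore pages?" prompts after a power cut
    RADIATOR_URL,
])
```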
The special URL points to a web server that supplies the pages for the displays. The server checks the IP address of the requester against a list of known IP addresses. If it does not recognize the IP address, it returns a page that displays the IP address, as well as who to contact to register the IP address. (It is assumed that the IP address will be static, since the information radiator is not moving around. If this is not the case, the registrar could, with a bit more effort, add its MAC address to the DHCP server to ensure that it remains the same.)
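As a sketch of the server side, here is roughly what that check might look like. I am using Python and Flask purely as an example; the registry contents and the contact address are made up.

```python
from flask import Flask, request

app = Flask(__name__)

# Known radiators: IP address -> (team name, reports to cycle through).
# In a real deployment this would live in a config file or small database.
REGISTRY = {
    "10.1.2.3": ("Team Redwood", ["burndown", "open_issues"]),
}

REGISTRAR = "radiator-admin@example.com"  # hypothetical contact address

@app.route("/")
def radiator():
    ip = request.remote_addr
    if ip not in REGISTRY:
        # Unknown radiator: show its IP address and whom to contact,
        # so whoever plugged it in can get it registered.
        return (f"<h1>Unregistered information radiator</h1>"
                f"<p>This machine's IP address is {ip}.</p>"
                f"<p>Contact {REGISTRAR} to have it registered.</p>")
    team, reports = REGISTRY[ip]
    # Cycling through the team's reports is sketched below.
    return f"<p>Next report for {team} goes here.</p>"
```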
When someone registers an information radiator, they indicate the team they are with and what reports they want. Most teams will want the same sort of reports, using data extracted from the sprint-tracking or issue-tracking system and customized for their team. The server will return web pages for the various reports. Each page will include a META REFRESH tag that will cause the page to refresh every 15 seconds, and a different page will be displayed each time, cycling through the reports registered for that IP address. The company can also insert additional pages, like the days till product launch, or the date of the company picnic.
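The cycling itself needs nothing fancier than remembering, per IP address, where each radiator is in its rotation. A minimal sketch, with hypothetical report names, that a handler like the one above could call for registered addresses:

```python
import itertools

REFRESH_SECONDS = 15

# Rotation state: IP address -> endless iterator over that radiator's reports.
_rotation: dict[str, itertools.cycle] = {}

def next_report_page(ip: str, reports: list[str], bodies: dict[str, str]) -> str:
    """Return HTML for the next report in this radiator's rotation.

    The META REFRESH tag makes the browser reload every REFRESH_SECONDS;
    each reload gets the next report in the cycle for that IP address.
    """
    if ip not in _rotation:
        _rotation[ip] = itertools.cycle(reports)
    report = next(_rotation[ip])
    return (f"<html><head>"
            f'<meta http-equiv="refresh" content="{REFRESH_SECONDS}">'
            f"<title>{report}</title></head>"
            f"<body>{bodies[report]}</body></html>")
```

Because the state is keyed by IP address, two radiators showing the same reports keep independent positions in the rotation.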
Although the pages are refreshing every 15 seconds, the reports will not actually be generated that often. Since the server knows what reports the various information radiators want for each team, it can pre-generate the reports at reasonable intervals and cache them. For example, if sprint hours are burned down in the daily standup, there is no need to generate the burndown report more than a few times a day (to accommodate teams with morning or afternoon standup schedules). On the other hand, a report of outstanding issues should probably be generated much more often.
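Here is a sketch of that caching; the report names, the maximum ages, and the generate callback are illustrative assumptions:

```python
import time

# How stale each kind of report may get before it is rebuilt (seconds).
# Burndowns only change around standup time; open issues change all day.
MAX_AGE = {
    "burndown": 4 * 60 * 60,  # a few times a day is plenty
    "open_issues": 5 * 60,    # every few minutes
}

# (team, report) -> (time generated, cached HTML)
_cache: dict[tuple[str, str], tuple[float, str]] = {}

def get_report(team: str, report: str, generate) -> str:
    """Serve (team, report) from cache, rebuilding only when it is stale.

    `generate` stands in for whatever actually pulls data out of the
    sprint- or issue-tracking system and renders it as HTML.
    """
    now = time.time()
    cached = _cache.get((team, report))
    if cached and now - cached[0] < MAX_AGE[report]:
        return cached[1]               # still fresh: serve the cached copy
    html = generate(team, report)      # stale or missing: rebuild
    _cache[(team, report)] = (now, html)
    return html
```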
If teams want to create their own reports, they can contribute them to the server, as long as they are parameterized so that other teams can use them.
This approach lets teams get the benefits of information radiators without a lot of expense or setup time, imposes some standardization without being onerous, and lets teams easily share custom reports with other teams that might find them useful.
If the volume of new information radiator requests becomes high enough, a web-based GUI could be developed, to let teams register their information radiators and select which reports they would like. Chances are, though, that doing so would be more work than just having someone manually register them.
Failing so you can win
It has long been known in engineering circles that much can be learned from failure. Claude Albert Claremont, in his 1937 book on bridge building, "Spanning Space," wrote:
The history of engineering is really the history of breakages, and of learning from those breakages. I was taught at college "the engineer learns most on the scrapheap."
In one of my past lives, I was involved in performance car rallying. This involved racing in beefed-up cars over forest logging roads. In the ’70s, there was a driver named John Buffum. His day job was running a car dealership in Vermont, but he was also the top rally driver in the U.S. and had factory support from British Leyland, who supplied him with Triumph TR-7s, like this one:
[Photo: John Buffum, Libre Racing © 1979, Lynn Grant]
He was very fast, but he crashed a lot. Other drivers nicknamed him “Stuff 'em Buffum,” because he stuffed his car into the ditch so often. Each time, British Leyland would ship him a new TR-7 for the next race.
As time went on, he stopped crashing, but he was still very fast. Since he had crashed so many times, he knew exactly what the car felt like when it was right on the edge, so he knew when to back off. Other drivers who hadn’t had the luxury of getting a new car every race didn’t have this experience, so they had to be more cautious, and were thus slower. Buffum went on to win 11 National Pro Rally Championship titles and 117 Pro Rallies.
Tony Dismukes, a martial artist whose blog, BJJ Contemplations, I follow, said this about practicing failure:
On the mats we have the opportunity to fail over and over and over again. This is the only way to learn the limits of our techniques and of ourselves. As Rener and Ryron Gracie are fond of saying: every technique can work some of the time, no technique works all the time. Only by testing our techniques to failure can we learn exactly when and how much we can rely on each one. Only by testing them to failure can we truly understand which details are crucial for success and why. Only by testing ourselves to failure can we understand exactly where our personal limitations are and begin to learn how to improve upon them.
Engineers, racers, and martial artists all see the benefits of learning from failure, but too often in software development we consider failure a luxury we cannot afford.
Much of this is because we are always under the gun, trying to develop software to meet some too-early deadline. This causes us to stick to things that worked in the past, rather than trying something new, something that, if it works out, could make the team much more efficient or, even if it fails, might teach us valuable lessons.
In his book, Kanban: Successful Evolutionary Change for Your Technology Business, David J. Anderson wrote:
You need slack to enable continuous improvement. You need to balance demand against throughput and limit the quantity of work-in-progress to enable slack.
Too often, we completely fill our release schedules with work, so that any failure that delays something by a sprint is a catastrophe. If we allowed ourselves enough slack to have experiments that failed from time to time, those failures would be made up for by the improved efficiency that comes from continuous improvement.