"Comments on Agile Methodologies"
Comments on Agile Methodologies.
by Alan Prosser
alprosser19@yahoo.com
Several months ago, I heard Martin Fowler speak at our Central Iowa Java User's Group about Agile Methods (see [r3]).
Then, a couple of months later, I heard Cara Taber of Thoughtworks give a talk, "Extreme Programming on Large J2EE Projects."
Some aspects sounded like things we did when I worked on Space Shuttle software, which was produced under one of the heaviest methodologies around, in a CMM Level 5 organization.
I knew that to create high quality software in a productive manner required tools and planned testing, for example.
I decided to investigate this subject further.
I thought first about comparing Agile to the CMM, but in my research I found that Mark C. Paulk[r5] has already done a good job of comparing XP specifically. I may have a comment or two on his analysis later.
One big difference is the focus. The Space Shuttle software team has (near) zero defects as its highest priority.[r7]
Agile methods have the highest priority "to satisfy the customer through early and continuous delivery of valuable software".[r1]
When reading Martin Fowler's paper,[r3] I noticed that many of the supposed disadvantages of heavy methodologies were things we did not do on the Space Shuttle software, and that the Agile solutions to them were often what we were already doing.
It then occurred to me that when he talks of heavy old methodologies, he is referring to immature heavy methodologies, which I will call IHM.
What I was familiar with was a more mature heavy methodology, or MHM.
I use maturity in the way the Software Engineering Institute does with its Capability Maturity Model.
The MHM organization where I worked was one of the first to achieve an SEI CMM Level 5 rating.
Part of becoming a mature organization is finding ways around the disadvantages of IHM.
I mentioned earlier that the highest priority of the Shuttle Software team was zero defects.
One of the managers defined zero defects as "fulfilling all requirements".
I had something pretty close to "and doing what it should do" on my whiteboard.
Here are some of the things they did:
- The Shuttle team was very people-oriented.
When IBM formed the team, they hired good people and treated them well.
The 1996 Fast Company article[r7] about it tells something of the "Software for Grown Ups" attitude.
This is very compatible with the people focus of Agile methods such as the 40 hours/week max of XP.
- Standards and processes were evaluated on a regular basis. I was there when many of these were developed, and we had weekly working groups for a long time before things stabilized to the point where they may seem unchanging.
Some of the current commentators on Agile methods seem to feel that having the process become stable after a few iterations of a single project may be a lot to hope for.
From what I have seen, I suspect that as organizations gain experience with Agile methodologies, they will find they can develop organization-level process documentation loose enough to satisfy Paulk's[r5] concerns about Integrated Software Management, and in a few years maybe those about Process Change Management too.
- They have about six different processes for Shuttle and Tools software, with different risk factors.
The software to fly the Shuttle was treated much differently than the menus developers used to log on to the system.
Processes were reviewed on a regular basis and overrides were possible in unusual circumstances where the process had not kept up with reality.
- The Shuttle team accumulated metrics over a long period of time, to the point where they could be very predictive of the behavior of the process for the higher-risk levels of software.
I know firsthand that this would not have been possible without tools to automate the measurement process.
You cannot get consistent metrics when collection is voluntary; you can get them much more easily when they simply fall out of the configuration management process.
I suspect that tools can be used for Agile methodologies too, and that eventually some useful metrics can be obtained relatively painlessly (see the metrics sketch after this list).
- The Shuttle team has annual deliveries of versions to the customer, with about an 18-month cycle of work on each major version.
The requirements for the actual "flying on the Shuttle" software had to be much more stable than the tools to make it happen.
This is admittedly a different circumstance than where Agile methods are discussed.
However, what they do with that time is not as different as it may seem at first.
Requirements, design, coding, and testing all go on simultaneously for the first year of the cycle.
NASA picks its priority features to implement, then the teams figure out a schedule for which iteration or release cycle each change is likely to land in.
Schedules may be updated weekly.
The mini-schedule for each change is then worked backward, so requirements for changes later in the schedule are not written until several iterations into development.
Sometimes some of the tools need information that will not be available until some decisions are made regarding some piece of payload or other hardware.
The Agile methods approach for picking what will go into the next iteration did not seem that much different (the scheduling sketch after this list illustrates the mechanics).
- Over the year-and-a-half cycle, six-week iterations were used for delivered Shuttle and Tools software, and weekly (or as-needed) iterations for other kinds of software (team use only). There was usually a functional set of software at each iteration, with the scheduled set of changes implemented (sound familiar?).
- There are restrictions on the kinds of changes that can be made in the last six months or so before delivery. This is due to the mission-critical nature of the software, and it allows for extensive full regression testing and independent verification.
It was common for minor requirements changes to be needed.
The high-level descriptions of changes were frozen, but there was a documented process so that the details needed to make them work could be changed.
Although the process was officially an overlapping waterfall model, in fact it could be mapped to a spiral: requirements and design could be sped up to catch the code (in the next release or patch) when necessary.
- There was a controlled process for delivering patches.
- Internal users or representatives of the customers were involved in every review of requirements, design, and code.
- The customer could review status of all changes on a weekly basis.
- Many of the developers would confer with each other, the users, and experts on a daily basis.
We did not sit in the same room, but would holler across the hall or walk over or run into each other.
Sometimes we would create a prototype or stubbed-off test case and show it to users or peers for feedback.
We could call a meeting if absolutely necessary.
- Anyone with too big an ego would not be able to survive the peer reviews, where the goal was to create the best software for the end product, not to have zero defects on the first try.
One would get written up for standards violations, including problems with comments, not just for errors.
Although fixing comments and other minor nitpicks might be optional, most developers would eventually code to the standards to avoid the attention. (I think the pair programming in XP could serve similarly.)
- Test specs were reviewed as part of code reviews.
The more experienced developers among us would be planning our tests while doing designs, and would develop the test cases at the same time as the code.
Some teams would review test results at the code review; some would hold a separate review.
There were independent verifiers for all software over a certain risk level. They developed their own tests.
- Creation of regression test suites was encouraged, and many functional areas built them.
We would use these to test new compilers and operating system upgrades, as well as running them at least once per major version (see the regression sketch after this list).
- Most people were cross-trained in multiple functional areas, often serving different functions in different areas.
The last year I was there, I was the primary developer for a couple of functional areas, backup for others, requirements analyst and verifier for another, and verifier for yet another.
I also moderated code reviews, worked on standards, was backup build coordinator, reset passwords and audited department dataset security.
- A Requirements Analyst would create requirements to the level of detail appropriate for the application, but the programmer/developer would do the design.
This gave the programmers more say than in many IHM shops.
Both groups were subject to reviews with the other, so requirements, design, and code all turned out to be collaborative efforts as often as not.
This became even more collaborative when we began using a modified OMT methodology to design a reusable C++ class library in the mid-1990s: the RA would maintain traditional-style requirements, and the developers would create and maintain the design as well as the code.
It was rare for one person to do the design and another to do the construction. This would be most likely when training someone new to a functional area. Note that for most enhancement type changes, the design was comments in the code.
- Everyone had to agree to the estimates for their work. Sometimes an experienced developer would estimate for someone else, but that person could revise the estimate later, with (sometimes internal) customer agreement. Most emphasis was put on milestones; effort was tracked mostly to help make future estimates better.
- There was someone responsible as the owner of each piece of code. This seems to contrast with the "everybody owns everything" attitude, but in fact, in the tools area, anyone could change something in another area if the change affected their own functional area (see the ownership sketch after this list). I wonder whether some kind of ownership of parts of the software might be needed for Agile methods to scale to large projects.
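A few sketches may make some of the points above concrete. To be clear, none of this is the actual Shuttle tooling, which I could not reproduce here anyway; these are minimal Python illustrations, and the file names, columns, paths, and numbers in them are my own inventions.

First, the metrics sketch: when measurement falls out of the configuration management process, consistency comes for free. This assumes a hypothetical CSV export of the CM change log with "module" and "change_type" columns.

    import csv
    from collections import Counter

    def change_metrics(log_path):
        # Tally total changes and defect fixes per module from a CM log export.
        changes = Counter()
        defects = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                changes[row["module"]] += 1
                if row["change_type"] == "defect":
                    defects[row["module"]] += 1
        for module in sorted(changes):
            rate = defects[module] / changes[module]
            print(f"{module}: {changes[module]} changes, "
                  f"{defects[module]} defect fixes ({rate:.0%})")

    change_metrics("cm_log.csv")  # hypothetical CM log export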
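Next, the scheduling sketch: priority changes are fitted into iterations, and each change's requirements are "backed up" one iteration ahead of its development. The greedy capacity rule and the example change names are invented for illustration.

    def schedule_changes(changes, capacity_per_iteration):
        # Assign changes (highest customer priority first) to iterations,
        # respecting a rough effort capacity per iteration.
        iterations = [[]]
        remaining = capacity_per_iteration
        for name, effort in changes:
            if effort > remaining and iterations[-1]:
                iterations.append([])  # start the next iteration
                remaining = capacity_per_iteration
            iterations[-1].append(name)
            remaining -= effort
        return iterations

    plan = schedule_changes(
        [("payload display", 4), ("nav filter tweak", 2), ("menu rework", 3)],
        capacity_per_iteration=5)
    for i, names in enumerate(plan, start=1):
        # Requirements are due one iteration before development starts:
        # the "backed up" mini-schedule.
        print(f"iteration {i}: {names} (requirements due in iteration {i - 1})")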
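Then the regression sketch: a golden-file harness that can be rerun against a new compiler or operating system build just as easily as against a new software version. The directory layout and program name are hypothetical.

    import subprocess
    from pathlib import Path

    def run_regression(case_dir, command):
        # Run every saved case and diff stdout against its golden file.
        # Assumed layout: case_dir holds pairs like foo.in / foo.expected.
        failures = 0
        for case in sorted(Path(case_dir).glob("*.in")):
            expected = case.with_suffix(".expected").read_text()
            with case.open() as stdin_file:
                result = subprocess.run(command, stdin=stdin_file,
                                        capture_output=True, text=True)
            if result.stdout != expected:
                failures += 1
                print(f"FAIL {case.name}")
        print(f"{failures} failure(s)")
        return failures

    # The same suite vets a compiler or OS upgrade: rebuild the program
    # under the new toolchain and rerun the unchanged cases.
    run_regression("regression_cases", ["./flight_tool"])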
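Finally, the ownership sketch: a simple area-to-owner map that lets anyone make a cross-area change while keeping the owner informed. The areas, paths, and names here are made up.

    # Hypothetical map of functional areas to source paths and owners.
    OWNERS = {
        "build_tools": {"owner": "alice", "paths": ("tools/build/",)},
        "test_autom":  {"owner": "bob",   "paths": ("tools/test/",)},
    }

    def review_needed(changed_files, author_area):
        # Flag files outside the author's functional area so the owning
        # area's developer can review; the change itself is still allowed.
        own_paths = OWNERS[author_area]["paths"]
        flags = []
        for path in changed_files:
            if path.startswith(own_paths):
                continue
            for info in OWNERS.values():
                if path.startswith(info["paths"]):
                    flags.append((path, info["owner"]))
        return flags

    for path, owner in review_needed(
            ["tools/test/driver.py"], author_area="build_tools"):
        print(f"{path}: notify {owner} for review")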
References
Note: some references are out of date.
- [r1] Manifesto for Agile Software Development. http://agilemanifesto.org/
- [r3] Martin Fowler, The New Methodology. http://www.martinfowler.com/articles/newMethodology.html
- The Agile Alliance. https://www.agilealliance.com/
- [r5] Mark C. Paulk, Extreme Programming From a CMM Perspective (July 2001). http://www.sei.cmu.edu/cmm/papers/xp-cmm-paper.pdf
- Alistair Cockburn, Agile Development Joins the "Would Be" Crowd (Cutter IT Journal, 2002). http://www.agilealliance.com/articles/articles/ACcitj0102.pdf
- [r7] They Write the Right Stuff, Fast Company Magazine (1996). https://www.fastcompany.com/28121/they-write-right-stuff
Change History
I reserve the right to make updates as I have more information and new ideas. I plan to note major changes here.
Initial version April 22, 2002
Some links fixed October 30, 2020
About the Author
Alan Prosser spent many years in the Flight Software Application Tools organization at IBM Federal Systems, Loral, and Lockheed Martin supporting the Onboard Software Contract to NASA for Space Shuttle Software in Houston, Texas.
He served on many control boards, wrote or supported applications for configuration management, automated code generation, automated testing, automated measurements, project management support, code and process standards and all around process improvement.
For the past several years, he has been working at EDS CRM Solutions Development in Des Moines, Iowa including 6 months as a Software Configuration Management Subject Matter Expert for the business unit.
He is currently self employed.
Drop Al a line at
alprosser19@yahoo.com with any comments.
© Alan Prosser 2002. Commercial use of anything copied from here requires written permission.