
Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site and will likely be dropped five years after the last post, in January 2023.

Showing posts with label project management.

Wednesday, September 21, 2016

Bad Trends and Sloppy Velocity

Read this: https://www.linkedin.com/pulse/story-points-evil-brad-black-in-the-market-

There are good quotes from Ron Jeffries on the worthlessness of story points. (I've heard this from other Agile Consultants, also.) Story Points are a political hack to make management stop measuring the future.

The future is very hard to measure. The difficulty in making predictions is one of the things which distinguishes the future from the past. There's entropy, and laws of thermodynamics, and random quantum events that make it hard to determine exactly what might happen next.

If Schrödinger's cat is alive, we'll deliver this feature in this sprint. If the cat is dead, the feature will be delayed. The unit test result is entangled with the photon that may (or may not) have killed the cat. If you check the unit tests, then the future is determined. If you don't check the unit test, the future can be in any state.

When project management becomes Velocity Dashboard Centered (VDC™), that's a hint that something may be wrong.

My suspicion on the cause of VDC?

Product Owners ("Management") may have given up on Agility, and want to make commitments to a schedule they made up on a whiteboard at an off-site meeting. Serious Commitments. The commitment has taken on a life of its own, and deliverable features aren't really being prioritized. The cadence or tempo has overtaken the actual deliverable.

It feels like the planning has slipped away from story-by-story details.

What can cause VDC?

I think it might be too many layers of management. The PO's boss (or boss's boss) has started dictating some kind of delivery schedule that is uncoupled from reality. The various bosses up in the stratosphere are writing checks their teams can't cash.

What can we do?

I don't know. It's hard to provide schooling up the food chain to the boss of the boss of the product owner. It's hard to explain to the scrum master that the story points don't much matter, since the stories exist independent of any numbering scheme.

The link above says that there's some value in ordering the stories; assigning random-ish point numbers somehow helps order them.

I reject the last part of this. The stories can be ordered without any numbers. The Agile manifesto is pretty clear on this point: talk about it. The points don't enhance the conversation. Push the story cards around on the board until you have something meaningful. Assigning numbers is silliness.

Actually. I think it's harmful.

Rolling numbers up to "senior" management isn't facilitating a conversation. It's summarizing things with empty numerosity. ("Numerosity"? Yes. Empty numerosity: applying numeric methods inappropriately. For example, averaging the day of the week on which it rains to discover something about Wednesday.)
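To see why that average is empty, here's a toy sketch (the rainfall data is invented) of the rainy-weekday example:

```python
# A toy illustration of "empty numerosity": the arithmetic runs fine, but the
# result answers no question, because weekday labels aren't quantities.
from statistics import mean

# Invented observations: weekday index (Mon=0 .. Sun=6) of each rainy day.
rainy_weekdays = [0, 1, 1, 3, 5, 6, 6]

print(mean(rainy_weekdays))  # 3.14... -- "it mostly rains on Wednesday-ish"?

# Sunday (6) and Monday (0) are adjacent in time but maximally far apart in
# the encoding; start the week on Sunday instead and the "average day"
# changes. Story-point rollups have the same flavor.
```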

The Best Part (TBP™)

Irreproducibility.

On Project X, we had a velocity of 50 Story Points per Sprint. On Project U, the "same" team -- someone quit and two new people were hired -- had a velocity of 100 Story Points per Sprint. Wow! Right?

Except. Of course, the numbers were inflated because the existing folks figured the new folks would take longer to get things done. But the new folks didn't take longer. And now the team is stuck calling a 3-point story a 5-point story because the new guys are calibrated to that new range of random numbers.

So what's comparable between the "same" team on two projects? It's not actually the same people. That's out.

We can try to pretend that the projects have the "same" technology (Java 1.6 v. Java 1.8) or the same CI/CD pipeline (Ant v. Maven, Hudson v. Jenkins) or even the same overall enterprise. But practically, there's nothing repeatable about any of it. It's just empty numerosity.

Sorry. I'm not sold, yet, on the value of Story Points.

Tuesday, August 6, 2013

How to Manage Risk

Also see "On Risk and Estimating and Agile Methods". This post is yet another angle on a common theme.

Orders of Ignorance and Risk Management.

Software risk management has two sides.  First, there's the classical meaning of risk; we'll call that "casino risk" because it covers genuinely random events.  This includes fire, flood, famine, conquest, war, pestilence, death, etc.  Actual risks.

The second meaning of risk is a load of malarkey.  It's a code word that includes two things: "bad management" and "ignorance".  Some things called project risks are just plain old bad management—generally driven by a nonexistent process for handling ignorance.  The events aren't random.  

There are five orders of ignorance, and each of them leads to project management problems.  None of these are "random events"; none of this is like casino gambling.  There aren't any odds; most of these things are certainties.

0th Order Ignorance: Things We Know.

There are two sides to the things we think we know about a project.  There are the things we know which are true, and things which are false.  Falsehoods come from at least two places:  we assumed something or we were actually lied to.  (Other choices, like illusions and hallucinations, are too creepy to pursue.)

Our assumptions aren't facts.  This sounds so obviously stupid, but projects get into trouble based on assumptions that were never checked to see if they were true or not.  Managers insist on doing "risk analysis" and then pad their project estimates with time and money instead of simply challenging their assumptions.

Some "assumptions" are explicit placeholders for facts to be found out later.  These formally documented assumptions are a different thing, they're 1st order ignorance, something we know we don't know.

Example.  The customer says they need an app to do [X].  There are 8 people in the department who are likely actors.  We assume the user population is 8.  With no fact-checking.  We don't put it in the plan as a formal, documented assumption; we just assume it.

Bottom line.  What are the odds that a plan is based on false knowledge?  This isn't casino gambling.  There aren't any odds that we assumed something, or odds that we were lied to.  This is simple fact-checking, simple bad management. Every unchecked fact is going to be false.

1st Order Ignorance: Things We Don't Know.

There are two ways to deal with things we don't know.  Make a guess and document this, or actually ask a question.

The formally documented guesses (usually called "assumptions") are the hallmark of software project plans.  Documents are often full of lists of assumptions on which the plan and associated estimates rest.

Each one of these "assumptions" is a question that—for some reason—couldn't be asked or couldn't be answered.  Some questions are "politically sensitive" and can't be asked.  Some questions require lots of research to develop an answer.

The answers, of course, change the project.  In most cases they change the project dramatically.  If they didn't have a big impact, we wouldn't spend any time documenting them so carefully, would we?

And all project managers are punished for making any changes.  We can't expand the scope without a lot of accusatory conversations where people keep repeating the original price back to us.  

The canonical line is something like "I thought this was only 1.8 million dollars, how can you change it now?" or "I've already committed to $750K, we can't change the price, something else has to be changed.  We have to work smarter not harder."  Bleah.

This doesn't involve casino-like odds of finding answers that will change the scope of the project.  We know we have questions.  We know we made guesses and documented them as "assumptions".  There were no odds; it was absolutely certain there would be changes.

Bottom line.  What are the odds that a plan is based on things we guessed at?  Typically, this is a fact of life: parts of the estimate are guesses.  

What are the odds that the real answer will be different from the guess?  This, too, is absolutely certain. It's merely a question of magnitude.

For first-order ignorance problems, we should create a contingency budget for each answer that will diverge from the guesses.  This isn't a book-making exercise; it's a list of alternate guesses (or assumptions).

It isn't enough to simply detail the assumptions.  We have to provide alternative answers and the associated costs when the assumptions turn out to be false.
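As a sketch of what that could look like, each documented assumption can carry its alternative answers and their costs, and the contingency budget falls out directly. Every question, answer, and dollar figure below is hypothetical:

```python
# A minimal sketch: first-order ignorance as data. Each assumption records
# the guess the estimate rests on, plus alternatives and their added costs.
from dataclasses import dataclass, field

@dataclass
class Assumption:
    question: str
    guess: str                      # what the estimate assumes
    alternatives: dict[str, float] = field(default_factory=dict)  # answer -> added cost

    def contingency(self) -> float:
        """Budget needed if the costliest alternative turns out to be true."""
        return max(self.alternatives.values(), default=0.0)

plan = [
    Assumption(
        question="How many users?",
        guess="8 (one department)",
        alternatives={"80 (the division)": 40_000.0, "800 (the enterprise)": 250_000.0},
    ),
    Assumption(
        question="Is the vendor feed stable?",
        guess="Yes",
        alternatives={"No; the format changes quarterly": 15_000.0},
    ),
]

print(f"Contingency budget: ${sum(a.contingency() for a in plan):,.0f}")
```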

2nd Order Ignorance: Things We Didn't Know to Ask.

If we didn't know to ask, what's that?  Is that a risk?  What are the odds we forgot to ask something?

Here's the canonical quote: "Are there any other unforeseen problems?"

What? If they're unforeseen, then, uhhh, we can't identify them now.

If we can identify them, then, uhhh, they're not unforeseen.

There aren't any "odds" of an unforeseen problem.  It's an absolute certainty that there will be unforeseen problems.

Remember, these are things we didn't know to ask.  Things that didn't make the list of "assumptions".  These are things that completely escaped our attention.

What will the impact be?  We have no way of knowing. No. Way. Of. Knowing.

We can't even put a contingency in place for these things.  We didn't know to ask.  So we don't know what it will cost for rework when we figure out what we should have asked.

All we can do here is use a good, transparent management process.  Each new piece of information -- each thing that's learned that we didn't know to ask -- will change scope, schedule, cost, deliverables, everything.

This isn't "casino risk".  There are no odds associated with this.  This is just change management. It's inevitable. Calling it a project risk is lying about it.

3rd Order Ignorance: No Process for Managing Ignorance.

When we have second-order ignorance (we didn't know to ask) there are two responses: an organized change-management process, or a leap down to 3rd order ignorance.  Third order ignorance slips from simply not knowing into denying that the level of knowledge changes through time.

When we learn something unexpected, we can either deny that it is something new, or we can expose it.  When a business analyst learns that the "simple" calculation involves a magical MS-Access database with no known author, magic numbers and no discernible calculations, this is going to change the scope of the work.  Or make it impossible to make progress until someone explains the MS-Access database.

Denying this kind of unexpected information is common; it's done by playing the management trump card of "schedule is sacred."  Once the schedule is sacred, all learning is either trivially denied, or learning turns into ways of shaving scope or quality to make the schedule.

3rd Order Ignorance means there's no change process and the "schedule is sacred".  If the only thing that matters is schedule, then buggy, useless software will be delivered on time and on budget.

What are the odds of 3rd order ignorance?  Either 1.0 or 0.0.  Either the organization has an effective change management process (in which case, we don't have 3rd order ignorance) or there will be problems in delivering software that works on time. 

Bottom Line.

Here's the summary of ignorance and mitigation.

0th order ignorance: do basic fact checking to validate your assumptions.

1st order ignorance: do contingency planning. Define specific contingencies around each specific unknown fact.  Don't just document an "assumption", plan for alternatives when the assumption is invalidated.

2nd order ignorance: have a change management process.

3rd order ignorance (i.e., no change management): stop using waterfall-style methodologies. Switch to Agile methods so that change and the management of ignorance become essential features of the overall process.

4th order ignorance is the state of not being aware that ignorance is one of the most significant driving forces behind project failure.  A symptom of 4th-order ignorance is conflating "risk analysis" for a project with "casino risks" (or "insurance risks.")  With rare exceptions, all project risk analysis is just ways of coping with bad management.

When there's 4th order ignorance, folks are told that it's helpful or meaningful to try and assign odds to the veracity of the facts, the presence of things which were forgotten, and the change management process itself.

Avoiding 4th order ignorance means recognizing that software project management "risks" are just bad management (with minor exceptions for fire, flood, famine, conquest, war, pestilence, and death.)

Here's how to manage risk:

  • Check the facts.
  • Plan specific contingencies.
  • Use Agile methods because of their built-in ability to manage change.

Tuesday, July 16, 2013

How Managers Say "No": The RDBMS Hegemony Example

Got an email looking for help in attempting to break through the RDBMS Hegemony. It's a little confusing, but this is the important part of how management says "no".
"Their response was nice but can you flush [sic] it out more"
[First: the word is "flesh": "flesh it out." Repeat after me: "Flesh it out," "Flesh it out," "Flesh it out." Flesh. Put flesh on the bones. No wonder your presentation went nowhere; either you, the manager, or both need help. English as a second language is only an excuse if you never read anything in English.]

There's a specific suggestion for this "more". But it indicates a profound failure to grasp the true nature of the problem. It amounts to a drowning person asking us to throw them a different colored brick. It's a brick! You want a life preserver! "No," they insist, "I want a brick to build steps to climb out."

Yes, RDBMS Hegemony is a real problem. I've talked about it before "Hadoop and SQL/Relational Hegemony". Others have noted it: "NoSQL and NewSQL overturning the relational database hegemony". You can read more concrete details in articles like this: "Introduction to Non-Relational Data Storage using Hbase".

RDBMS Hegemony is most visible when every single in-house project seems to involve the database. And some of those uses of the database are clearly inappropriate.

For example, trying to mash relatively free-form "documents" into an RDBMS is simple craziness. Documents—you know, the stuff created by word processors—are largely unstructured or at best semi-structured. For most RDBMS's, they're represented as Binary Large Objects (BLOBs). To make it possible to process them, you can decorate each document with "metadata" or tags and populate a bunch of RDBMS attributes. Which is fine for the first few queries. Then you realize you need more metadata. Then you need more flexible metadata. Then you need interrelated metadata to properly reflect the interrelationships among the documents. Maybe you flirt with a formal ontology. Then you eventually realize you really should have started with document storage, not a BLOB in an RDBMS.
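For illustration, here's a minimal sketch of the document-storage end state: the document body stays a plain file, and the metadata is a schemaless sidecar that can grow without migrations. The file layout and function names are hypothetical, not a product recommendation:

```python
# Sketch: documents as files, metadata as schemaless JSON sidecars.
# Adding new or interrelated metadata is just more keys -- no ALTER TABLE.
import json
from pathlib import Path

doc_dir = Path("documents")          # hypothetical storage root
doc_dir.mkdir(exist_ok=True)

def store(doc_id: str, body: bytes, **metadata) -> None:
    """Store the document body plus a JSON sidecar of arbitrary metadata."""
    (doc_dir / f"{doc_id}.bin").write_bytes(body)
    (doc_dir / f"{doc_id}.json").write_text(json.dumps(metadata))

def find(**criteria) -> list[str]:
    """Naive scan: ids of documents whose metadata matches all criteria."""
    return [
        meta_file.stem
        for meta_file in doc_dir.glob("*.json")
        if all(json.loads(meta_file.read_text()).get(k) == v
               for k, v in criteria.items())
    ]

store("m-042", b"...word-processor bytes...",
      author="pat", supersedes="m-017", status="draft")
print(find(author="pat"))  # ['m-042']
```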

Yes, some companies offer combo products that do both. The point is this: avoiding the RDBMS pitfall in the first place would have been a big time and money saver. Google Exists. The RDBMS is not the best choice for all problems.

The problem is this:
  • Getting away from RDBMS Hegemony requires management thinking and action.
  • Management thinking is a form of pain.
  • Management action is a form of pain. 
  • Managers hate pain.
In short, the only way to make progress away from the RDBMS is to create or expose existing pain. Or make it possible for the manager to avoid pain entirely.

Let's look at the various approaches.

Doing A "Presentation"

The email hinted at a conversation or presentation on the problem of RDBMS Hegemony. 
"I finally convinced my current client that RDBMS's are expensive in terms of adding another layer to the archtiecture [sic] and then trying to maintain it."
It's not clear from the email what the details of this conversation or presentation were, but it clearly involved the two key technical points (1) the RDBMS has specific use cases, and (2) not all applications fit those use cases.

However. Those two key technical points involve no real management pain.

Real pain comes from cost. And since the RDBMS license is usually site-wide, there's no obvious cost to the technology.

The labor cost for DBA support, similarly, is site-wide and already in the budget. So there's no obvious cost to the labor.

No cost means no pain. No pain means no change.

Asking a manager to think, however, causes actual pain. Managers want technical people to do the thinking for them.

Asking a manager to consider the future means they may have to take action in the future. That's potential pain. 

Either way, a management presentation on database hegemony is pure pain. No useful action will ever come from a simple, direct encapsulation of how the RDBMS is not really the universal data tool. Management said "no" by asking for more information.

We'll return to the "more information" part below.

It was good to start the conversation.

It's good to continue the conversation. But the specific request was silliness.

Exposing the Existing Pain

What's more important than a hypothetical conversation is showing how the RDBMS is causing pain right now. It's easier to convince managers of the hidden cost of the RDBMS by exposing existing actual pain in the current environment. And it has to be a level of pain that exceeds the pain of thinking and taking action.

What's most clear is a specific and avoidable labor cost. Ideally, this specific—and avoidable—labor cost will obviously be associated with something obviously database-related. It must be obvious or it won't yield a technology-related management understanding. If it's not obvious, management will say "no", by asking for more data; they'll claim it's people or process or measurement error.

The best place to look for avoidable labor is break-fix problem reports, bugs and enhancements. Another good source of avoidable cost is schema migrations: waiting for the DBA's to add columns to a table, or add tables to a database.

If you can point to specific trouble tickets that come from wrong use of an RDBMS, then you might be able to get a manager to think about it.

The Airtight Case

Your goal in breaking RDBMS Hegemony is to have a case that is "airtight". Ideally, so airtight that the manager in question sits up, takes notice, and demands that a project be created to rip out the database and save the company all that cost. Ideally, their action at the end of the presentation is to ask how long it will take to realize the savings.

Ideally.

It is actually pretty easy to make an airtight case. There are often a lot of trouble tickets and project delays due to overuse and misuse of the RDBMS.

However.

Few managers will actually agree to remove the RDBMS from an application that's limping along. Your case may be airtight, and compelling, and backed with solid financials, but that's rarely going to result in actual action.

"If it ain't broke, don't fix it," is often applied to projects with very high thresholds for broken. Very high.

This is another way management says "no". By claiming that the costs are acceptable or the risk of change is unacceptable. Even more farcical claims will often be made in favor of the status quo. They may ask for more cost data, but it's just an elaborate "no".

It's important to make the airtight case.

It's important to accept the "no" gracefully.

Management Rewards

When you look at the management reward structure, project managers and their ilk are happiest when they have a backlog of huge, long-running projects that involve no thinking and no action. Giant development efforts with stable requirements, unchallenging users, mature technology and staff who don't mind multiple-hour status meetings.

A manager with a huge long-running project feels valuable. When the requirements, people and technology are stable, then thinking is effectively prevented.

Suggesting that technology choices are not stable introduces thinking. Thinking is pain. The first response to pain is "no". Usually in the form of "get more data."

Making a technology choice may require that a manager facilitate a conversation which selects among competing technology choices. That involves action. And possible thinking.

Real Management Pain. The response? Some form of "no".

Worse. (And it does get worse.)

Technology selection often becomes highly political. The out-of-favor project managers won't get projects approved because of "risky technology." More Management Pain.

War story. Years ago, I watched the Big Strategic Initiative shot down in flames because it didn't have OS/370 as the platform. The "HIPPO" (Highest Paid Person's Opinion) was that Unix was "too new" and that meant risk. Unix predates OS/370 by many years. When it comes to politics, facts are secondary.

Since no manager wants to think about potential future pain, no manager is going to look outside the box. Indeed, they're often unwilling to look at the edge of the box. The worst are unwilling to admit there is a box.

The "risk" claim is usually used to say "no" to new technology. Or. To say "no" to going back to existing, well-established technology. Switching from database BLOBs to the underlying OS file system can turn into a bizzaro-world conversation where management is sure that the underlying OS file system is somehow less trustworthy than RDBMS BLOBs. The idea that the RDBMS is using the underlying file system for persistence isn't a compelling argument.

It's important to challenge technology choices for every new project every time.

It's necessary to accept the "no" gracefully.

The "stop using the database for everything" idea takes a while to sink in.

Proof Of Concept

The only way to avoid management pain (and the inaction that comes from pain avoidance) is to make the technology choice a fait accompli.

You have to actually build something that actually works and passes unit tests and everything.

Once you have something which works, the RDBMS "question" will have been answered. But—and this is very important—it will involve no management thought or action. By avoiding pain, you also default into a kind of management buy-in.

War Story

The vendors send us archives of spreadsheets. (Really.) We could unpack them and load them into the RDBMS. But. Sadly. The spreadsheets aren't consistent. We either have a constant schema-migration problem, adding yet another column for each spreadsheet, or we have to get rid of the RDBMS notion of a maximalist schema. We don't want the schema to be an "at most" definition; we need the schema to be an "at least" definition that tolerates irregularity.

It turns out that the RDBMS is utterly useless anyway. We're barely using any SQL features. The vendor data is read-only. We can't UPDATE, INSERT or DELETE under any circumstances. The delete action is really a ROLLBACK when we reject their file and a CREATE when they send us a new one.

We're not using any RDBMS features, either. We're not using long-running locks for our transactions; we're using low-level OS locks when creating and removing files. We're not auditing database actions; we're doing our own application logging on several levels.

All that's left are backups and restores. File system backups and restores. It turns out that a simple directory tree handles the vendor-supplied spreadsheet issue gracefully. No RDBMS used.

We had—of course—originally designed a lot of fancy RDBMS tables for loading up the vendor-supplied spreadsheets. Until we were confronted with reality and the inconsistent data formats.

We quietly stopped using the RDBMS for the vendor-supplied data. We wrote some libraries to read the spreadsheets directly. We wrote application code that had methods with names like "query" and "select" and "fetch" to give a SQL-like feel to the code.
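A minimal sketch of what such a wrapper can look like; the class, file layout, and column names here are hypothetical stand-ins, not the actual project code:

```python
# Sketch: a read-only, SQL-flavored facade over vendor files. Assumes each
# vendor spreadsheet has been unpacked to CSV with its own header row.
import csv
from pathlib import Path
from typing import Iterator

class SheetTable:
    """One vendor file, with an "at least" schema: extra columns tolerated."""

    def __init__(self, path: Path) -> None:
        self.path = path

    def select(self, **criteria) -> Iterator[dict]:
        """Yield rows matching the given column=value criteria."""
        with self.path.open(newline="") as source:
            for row in csv.DictReader(source):
                if all(row.get(col) == val for col, val in criteria.items()):
                    yield row

    def fetch(self, **criteria) -> list[dict]:
        """Materialize a select() -- the SQL-ish feel without the SQL."""
        return list(self.select(**criteria))

# Application code reads like database code, but it's just files.
invoices = SheetTable(Path("vendor_feeds/2013-07/invoices.csv"))  # hypothetical path
overdue = invoices.fetch(status="OVERDUE")
```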

Management didn't need to say "no" by asking for more information. They couldn't say no because (a) it was the right thing to do and (b) it was already done. It was cheaper to do it than to talk about doing it.

Failure To See The Problem

The original email continued to say this:
"how you can achieve RDBMS like behavior w/out an actual RDBMS"
What? Or perhaps: Why?

If you need RDBMS-like behavior, then you need an RDBMS. That request makes precious little sense as written. So. Let's dig around in the email for context clues to see what they really meant.
"consider limting [sic] it to
1) CREATE TABLE
2) INSERT
3) UPDATE
    An update requires a unique key. Let's limit the key to contain only 1 column.
4) DELETE
    A delete requires a unique key. Let's limit the key to contain only 1 column."
Oh. Apparently they really are totally fixated on SQL DML.

It appears that they're unable to conceive of anything outside the SQL DML box.

As noted in the above example, INSERT, UPDATE and DELETE are not generic, universal, always-present use cases. For a fairly broad number of "big data" applications, they're not really part of the problem.

The idea that SQL DML CRUD processing forms a core or foundational set of generic, universal, always-present use cases is part of their conceptual confusion. They're deeply inside the SQL box wondering how they can get rid of SQL.

Back to the drowning person metaphor. 

It's actually not like a drowning person asking for a different colored brick because they're building steps to walk out.

It's like a person who fell face down in a puddle claiming they're drowning in the first place. The brick vs. life preserver question isn't relevant. They need to stand up and look around. They're not drowning. They're not even in very deep water.

They've been lying face-down in the puddle so long, they think it's as wide as the ocean and as deep as a well. They've been down so long it looks like up.

Outside the SQL Box

To get outside the SQL box means to actually stop using SQL even for metaphoric conversations about data manipulation, persistence, transactions, auditing, security and anything that seems relevant to data processing.

To FLESH OUT ["flesh", the word is "flesh"] the conversation on breaking the SQL Hegemony, you can't use hypothetical hand-waving. You need tangible real-world requirements. You need something concrete, finite and specific so that you can have a head-to-head benchmark shootout (in principle) between an RDBMS and something not an RDBMS.

You may never actually build the RDBMS version for comparison. But you need to create good logging and measurement hooks around your first noSQL application. The kind of logging and measurement you'd use for a benchmark. The kind of logging and measurement that will prove it actually works outside the RDBMS. And it works well: reliably and inexpensively. 
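As a sketch, the hooks can start as simply as a timing decorator that logs every call; the function names below are hypothetical:

```python
# Sketch: benchmark-grade instrumentation as a side effect of normal operation.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("benchmark")

def measured(func):
    """Log elapsed time for each call -- raw material for the shootout."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            log.info("%s elapsed=%.6fs", func.__name__,
                     time.perf_counter() - start)
    return wrapper

@measured
def load_vendor_file(path: str) -> int:
    """Hypothetical stand-in for one noSQL ingest step."""
    with open(path) as source:
        return sum(1 for _ in source)
```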

This is entirely about asking for forgiveness instead of asking for permission.  

Managers can't give permission; it involves too much pain.

They can offer forgiveness because it requires neither thinking nor action.

Tuesday, June 18, 2013

The Small Class Large Class "Question"

This isn't really a question. Writing a few "large" omnibus classes is simply bad design.

There are several variations on the theme of principles of OO programming. None of them include "a few large omnibus classes with nebulous responsibilities."

Here's one set of principles: Class Responsibility Collaboration. Here's one summary of responsibility definition: "Ask yourselves what each class knows and what each class does".  Here's another: "A responsibility is anything that a class knows or does." from Class Responsibility Collaborator (CRC) Models.

This idea of responsibility defined as "Knows or Does" certainly seems to value focus over sprawling vagueness.

Here's another set of principles from Object-Oriented Design; these echo the SOLID Principles without the clever acronym.

Getting down to S: a single reason to change means that the class must be narrowly-focused. When there are a few large classes, then each large class has to be touched for more than one reason. By more than one developer.

Also, getting to O: open to extension, closed to modification requires extremely narrow focus. When this is done well, new features are added via adding subclasses and (possibly) changing an initialization to switch which Factory subclass is used.
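Here's a minimal sketch of that shape, with hypothetical class names (and in Python rather than Java):

```python
# Sketch of open/closed: a new feature arrives as a new subclass; the only
# "modification" is the one-line choice of which class to instantiate.
from abc import ABC, abstractmethod

class FeedParser(ABC):
    @abstractmethod
    def parse(self, raw: bytes) -> list[dict]: ...

class CSVFeedParser(FeedParser):
    def parse(self, raw: bytes) -> list[dict]:
        header, *rows = raw.decode().splitlines()
        names = header.split(",")
        return [dict(zip(names, row.split(","))) for row in rows]

class TabFeedParser(FeedParser):
    # The "new feature": added alongside, nothing above was edited.
    def parse(self, raw: bytes) -> list[dict]:
        header, *rows = raw.decode().splitlines()
        names = header.split("\t")
        return [dict(zip(names, row.split("\t"))) for row in rows]

parser: FeedParser = TabFeedParser()        # the initialization switch
print(parser.parse(b"id\tname\n1\tAlice"))  # [{'id': '1', 'name': 'Alice'}]
```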

But Why?

Why do people reject "lots of small classes"?

Reason 1. It's hard to trivially inspect a complex solution. I've had an argument similar to the one Beefarino alludes to.  In my case, it was a manager who simply didn't schedule the time to review the design in any depth.

Reason 2. Folks unfamiliar with common design patterns often see them as "over-engineered". Indeed, I've had programmers (real live Java programmers, paid to write Java code) who claimed that the java.util data structures (specifically Map, TreeMap and HashMap) were needless, since they could write all of that using only primitive arrays. And they did, painstakingly writing shabby code with endless loops, lookups and indexing garbage instead of simply using a Map.
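The same contrast, sketched in Python with made-up data (the original story was Java's java.util Map):

```python
# The "painstaking" version: parallel arrays plus a hand-rolled linear search.
symbols = ["GOOG", "IBM", "ORCL"]
prices = [540.11, 195.00, 32.50]       # invented figures

def lookup(symbol):
    for i in range(len(symbols)):      # loops, lookups, indexing garbage
        if symbols[i] == symbol:
            return prices[i]
    return None

# The library version: just use the mapping type that already exists.
price = dict(zip(symbols, prices))

assert lookup("IBM") == price["IBM"] == 195.00
```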

Reason 3. Some folks with a strong background in simple procedural programming reject class definitions in a vague, general way. Many good programmers work out ways to do encapsulation in languages like C, Fortran or COBOL via naming conventions or other extra-linguistic tricks.

They deeply understand procedural code and try to map their ideas of functions (or subroutines) and their notions of "encapsulation via naming conventions" onto OO design.

At one customer site, I knew there would be friction because the project manager was very interested in "code conventions" and "naming conventions". This was a little upsetting at the time. But I grew to realize that some folks haven't actually seen any open source code. They don't understand that there are established international, recognized conventions for most programming languages, and examples are available on the World Wide Web. Just download a popular package and read the source.

The "naming conventions" was particularly telling. The idea that Java packages (or Python packages and modules) provide distinct namespaces was not something that this manager understood. The idea that a class defines a scope was not really making much sense to them.

Also Suspicious

Another suspicious design feature is the "utility" package. It's rare (not impossible, but rare) for a class to truly be interpackagial in scope and have no proper home. The "java.util" package, for example, is a strange amalgamation of the collection data structures, national and cultural class definitions (calendars, currency, timezones, etc.), handy pattern abstractions, plus a few algorithms (priority queue, random).

Yes, these have "utility" in that they're useful. They apply broadly to many programming problems. But so does java.lang and java.io. The use of a vague and overly inclusive term like "util" is an abdication of design responsibility to focus on what's really being offered.

These things do not belong together in a sprawling unfocused package.

Nor does disparate functionality belong in a sprawling, unfocused class.

Education

The answer is a lot of education. It requires time and patience.

One of the best methods for education is code walkthroughs. This permits reviews of design patterns, and how the SOLID principles are followed (or not followed) by code under development.

Thursday, April 19, 2012

Should the CIO Know How to Code?

Read this Computerworld posting: Should the CIO know how to code?

The answer is "Yes."

The examples of "well-functioning non-technical CIOs" are people as rare as hen's teeth.  "These are leaders who know what they don't know. They are good at asking the right questions, probing for further insight, and then re-framing the answers in such a way that the business side will understand".

I'm sure there are people like this.  In the last 35 years, I've met very, very few.  Two actually.

Larry and Chuck are the two examples.

Larry knew what he didn't know.  He took the time to actually sit with actual developers and actually watch them work.  It was weird the first time he sat and watched you type.  But without deep knowledge, he couldn't be sure the projects would get done.  So he allocated an hour or more each day to sit with key developers and learn.

Chuck did essentially the same kind of thing.  He sat with each developer individually every single day.  He did not have all-hands meetings that lasted hours.  He did not have an "around the table" where everyone spent 20 minutes boring the entire rest of the team with irrelevant details.

Could they code?

Essentially, yes.  They looked at code over a developer's shoulder.  They participated in a form of "pair programming" where they watched code happen.  By themselves they couldn't code much.  As pair programmers, however, they could work with another programmer and get stuff done.

Thursday, February 16, 2012

The Estimation Charade

"In reality, most projects worth doing are not repetitions of previous things."

Thank you for that.  

If it has been done before -- same problem -- same technology -- then we should be able to clone that solution and avoid creating a software development project.  If there's something novel -- new problem -- new technology -- then we can't easily predict the whole sweep of software development effort.

The whole Estimation Charade is an artifact of the way accountants exercise control over the finances.  They require (with the force of law) that budgets be written in advance.  Often in advance of the requirements being known.  When we sit down to fabricate next year's budget, we're dooming some fraction of next year's projects to a scramble for funding leading to cancellation and failure.

Accountants further require that software development be arbitrarily partitioned into "capital" and "expense".  There's no rational distinction between the phases.  The nature and scope of the work don't change at all.  

Yet.  

Somehow, the accountants are happy because some capital budget has been spent (as planned 18 months ago) and now we're spending expense budget.  From an accounting perspective, some kind of capital asset has been created.  

Think of it.  Some lines of code are a capital asset.  Other lines of code are an expense.  

Someday, I'll have to ask some accountants to explain how I can tell which was which.

Tuesday, January 10, 2012

Innovation is the punctuation at the end of a string of failures

Read this in Forbes: "Innovation's Return on Failure: ROF".

Also, this: "The Necessity of Failure in Innovation (+ more on CDOs)".

This, too: "Why innovation efforts fail".

While we're at it: "Accepting Failure is Key to Good Overall Returns on High-Risk Development Programs".

I can't say enough about the value of "failure".  The big issue here is the label.

A project with a grand scope and strategic vision gets changed to make it smaller and more focused.  Did it "fail" to deliver on the original requirements?  Or did someone learn that the original grand scope was wrong?

A project that changes isn't failure.  It's just lessons learned.  Canceling, re-scoping, de-scoping, and otherwise modifying a project is what innovation looks like.  It should not be counted as a "failure".

A project "Death March" occurs because failure is not an option, and change will be labeled as failure.

Thursday, December 1, 2011

Agile "Religion" Issues

See this: Limitations of Agile Software Development; and this: The Agile "Religion" -- What?.  What's important is that the limitations of Agile are not limitations.  They're (mostly) intentional roadblocks to Agile.

Looking for "limitations" in the Agile approach misses the point of Agile in several important ways.
The most important problem with this list of "limitations" is that five of the six issues are simply anti-Agile positions that a company can take.

In addition to being anti-Agile, a company can be anti-Test Driven Development.  They can be Anti-Continuous Integration.  They can be Anti-NoSQL.  There are lots of steps a company can take to subvert any given development practice.  Taking a step against a practice does not reveal a limitation.

"1. A team of stars... it takes more than the average Joe to achieve agility".   This is not a specific step against agility.  I chalk this up to a project manager who really likes autocratic rule.  It's also possible that this is from a project manager that's deeply misanthropic.  Either way, the underlying assumption is that developers are somehow too dumb or disorganized to be trusted.

Agile only requires working toward a common goal.  I can't see how a project manager is an essential feature of working toward a common goal.  A manager may make things more clear or more efficient, but that's all.  Indeed, the "clarity" issue is emphasized in most Agile methods: a "Scrum Master" is part of the team specifically to foster clarity of purpose.

Further, some Agile methods require a Product Owner to clarify the team's direction.

"A team of stars" is emphatically not required.  The experience of folks working in Agile environments confirms this.  Real, working Agile teams really really are average.

"2. Fit with organizational culture".  This has nothing to do with Agile methods.  This is just a sweeping (and true) generalization about organizations.  An organization that refuses autonomy and refuses flexibility can't use Agile methods.  An organization that refuses to create a "Big Design Up Front" can't use a traditional waterfall method and must use Agile methods.

Organizational fit is not a limitation of Agile.  It's just a fact about people.

"3. Small team...Assuming that large projects tend to require large teams, this restriction naturally extends to project size."

The assumption simply contradicts Agile principles.  It's not a "limitation" at all.  Large projects (with large numbers of people) have a number of smaller teams.  I've seen projects with over a dozen parallel Agile teams.  This means that in addition to a dozen daily scrums, there's also a scrum-of-scrums by the scrum masters.

Throwing out the small team isn't a limitation of Agile.  It's a failure to understand Agile.  A project with many small teams works quite well.  It's not "religion".  It's experience.

A single large team has been shown (for the last few decades) to be expensive and risky.

"4. Collocated team...We can easily think of a number of situations where this limitation prevents using agile:"  These are not limitations of Agile, but outright refusals to follow Agile principles.  Specifically:

  • "Office space organized by departments" is not a limitation of Agile.  That's a symptom of an organization that refuses to be Agile.  See #2 above; this indicates a bad fit with the culture.  An organization that doesn't have space organized by department might have trouble executing a traditional waterfall method.
  • "Distributed environment" is not a limitation of Agile.  Phones work.  Skype works.
  • "Subcontracting... We have to acknowledge that there is no substitute for face-to-face".  Actually, subcontracting is irrelevant.  Further, subcontracting is not a synonym for a failure to be collocated.  When subcontractors are located remotely, phones still work.  Skype works better and is cheaper.  
"5. Where’s my methodology?"  This is hard to sort out, since it's full of errors.  Essentially, this appears to be a claim that a well-defined, documented processes is somehow essential to software development.  Experience over the last few decades is quite clear that the written processes and the work actually performed diverge a great deal.  Most of the time, what people do is not documented, and the documented process has no bearing on what people actually do.  A documented process -- in most cases -- appears irrelevant to the work actually done.

Agile is not chaos.  It's a change in the rules to de-emphasize unthinking adherence to a plan and replace this with focus on working software.  Well-organized software analysis, design, code and test still exist even without elaborately documented (and irrelevant) process definitions.

"6. Team ownership vs. individual accountability... how can we implement it since an organization’s performance-reward system assesses individual performance and rewards individuals, not teams...?"  Again, the assumption ("performance-reward system assesses individual performance") is simply a rejection of Agile principles.  It's not a limitation of Agile, it's an intentional step away from an Agile approach.  

If an organization insists on individual performance metrics, see #2.  The culture is simply antithetical to Agile. Agile still works; the organization, however, is taking active steps to subvert it.

Agile isn't a religion.  It doesn't suffer from hidden or ignored "limitations".

"But did we question the assumption that Agile was indeed superior to traditional methodologies?"  

The answer is "yes".  A thousand times yes.  The whole reason for Agile approaches is specifically and entirely because of folks questioning traditional methodologies.  Traditional command-and-control methodologies have a long history of not working out well for software development.  The Agile Manifesto is a result of examining the failures of traditional methods.

A traditional "waterfall" methodology works when there are few unknowns.  Construction projects, for example, rarely have the kinds of unknowns that software development has.  Construction usually involves well-known techniques applied to well-documented plans to produce a well-understood result.  Software development rarely involves so many well-known details.  Software development is 80% design and 20% construction.  And the design part involves 80% learning something new and 20% applying experience.

Agile is not Snake Oil.  It's not something to be taken on faith.  

The Agile community exists for exactly one reason.  Agile methods work.

Agile isn't a money-making product or service offering.  Agile -- itself -- is free.  Some folks try to leverage Agile techniques to sell supporting products or services, but Agile isn't an IBM or Oracle product.  There are no "backers".  There's no trail of money to see who profits from Agility.

Folks have been questioning "traditional" methodologies for years.  Why?  Because "traditional" waterfall methodologies are a crap-shoot.  Sometimes they work and sometimes they don't work.  The essential features of long term success are summarized in the Agile Manifesto.  Well-run projects all seem to have certain common features; the features of well-run projects form the basis for the Agile methods.

Thursday, November 24, 2011

Justification of Project Staffing

I really dislike being asked to plan a project.  It's hard to predict the future accurately.

In spite of the future being -- well -- the future, and utterly unknowable, we still have to have the following kinds of discussions.

Me: "It's probably going to take a team of six."

Customer: "We don't really have the budget for that.  You're going to have to provide a lot of justification for a team that big."

What's wrong with this picture?  Let's enumerate.
  1. Customer is paying me for my opinion based on my experience.  If they want to provide me with the answers, I have a way to save them a lot of money.  Write their own project plan with their own answers and leave me out of it.
  2. I've already provided all the justification there is.  I'm predicting the future here.  Software projects are not simple Rate-Time-Distance fourth-grade math problems.  They involve an unknown number of unknowns.  I can't provide a "lot" of justification because there isn't any indisputable basis for the prediction.
  3. I don't know the people. The customer -- typically -- hasn't hired them yet.  Since I don't know them, I don't know how "productive" they'll be.  They could hire a dozen n00bz who can't find their asses blindfolded even using both hands.  Or.  They could hire two singular geniuses who can knock the thing out in a weekend.  Or.  They could hire a half-dozen arrogant SOB's who refuse to follow my recommendations. 
  4. They're going to do whatever they want no matter what I say.  Seriously.  I could say "six".  They could argue that I should rewrite the plan to say "four" without changing the effort and duration.  Why ask me to change the plan?  A customer can only do what they know to be the right thing. 
Doing the Right Thing

Let's return to that last point.  A customer project manager can only do what they absolutely know is the right thing.  I can suggest all kinds of things.  If they're too new, too different, too disturbing, they're going to get ignored.

Indeed, since people have such a huge Confirmation Bias, it's very, very hard to introduce anything new.  A customer doesn't bring in consultants without having already sold the idea that a software development project is in the offing.  They justify spending a few thousand on consulting by establishing some overall, ball-park, big-picture budget and showing that the consulting fees are just a small fraction of the overall.

As consultants, we have to guess this overall, ball-park, big-picture budget accurately, or the project will be shut down.  If we guess too high, then the budget is out of control, or the scope isn't well-enough defined, or some other smell will stop all progress.  If we guess too low, then we have to lard on additional work to get back to the original concept.

Architectures, components and techniques all have to meet expectations. A customer that isn't familiar with test-driven development, for example, will have an endless supply of objections.  "It's unproven."  "We don't have the budget for all that testing."  "We're more comfortable with our existing process."

The final trump card is the passive aggressive "I'll have to see the detailed justification."  It means "Don't you dare."  But it sounds just like passive acceptance.

Since project managers can only do what they know is right, they'll find lots of ways of subverting the new and unfamiliar.

If they don't like the architecture, the first glitch or delay or problem will immediately lead to a change in direction to yank out the new and replace it with the familiar.

If they don't like a component, they'll find numerous great reasons to rework that part of the project to remove the offending component.

If they don't like a technique (e.g., Code Walk Throughs) they'll subvert it.  Either not schedule them.  Or cancel them because there are "more important things to do."  Or interrupt them to pull people out of them.

Overcoming the Confirmation Bias

I find the process of overcoming the confirmation bias to be tedious.  Some people like the one-on-one "influencing" role.  It takes patience and time to overcome the confirmation bias so that the customer is open to new ideas.  I just don't have the patience.  It's too much work to listen patiently to all the objections and slowly work through all the alternatives.

I've worked with folks who really relish this kind of thing.  Endless one-on-one meetings.  Lots of pre-meetings and post-meetings and reviews of drafts.  I suppose it's rewarding.  Sigh.

Thursday, September 29, 2011

The Politics of Estimating

Computerworld, September 12, page 10.

Microburst: IT Disasters
According to a study of 1,471 big IT projects, 15% turn out to be money pits, with cost overruns averaging 200%.

How is this a politically-charged statement?  We hear this kind of thing all the time.

As developers (or project leaders) we're failing to execute.

Right?

Hogwash.

An "overrun" is isomorphic to "badly justified" or "badly budgeted" or "oversold to executive sponsors".

An "overrun" can be a failure to use (or even permit) realistic estimates.  It may reflect an executive sponsor restating objectives to make the project large enough to justify it.  An overrun can mean anything.

Calling it an overrun is a way to label it as "failure to execute".

I prefer to call it a failure of vision (or whatever it is executive sponsors do).  It's more likely to be an under-estimate than it is to be an over-run.

After all, how many times have we been told to reduce an estimate?  How many times have folks gotten their "attaboys" and "attagirls" for "sharpening their pencils" and reducing the proposal to the smallest amount that the customer would approve?

Thursday, March 11, 2010

Great Lies: "Design" vs. "Construction"

In reflecting on Architecture, I realized that there are some profound differences between "real" architecture and software architecture.

One of the biggest differences is design.

In the earliest days, software was built by very small groups of very bright people. Alan Turing, Brian Kernighan, Dennis Ritchie, Steve Bourne, Ken Thompson, Guido van Rossum. (Okay, that last one says that even today, software is sometimes built by small groups of very bright people.) Overall architecture, both design and construction, were done by the same folks.

At some point (before I started in this business in the '70's) software development was being pushed "out" to ever larger groups of developers. The first attempts at this -- it appears -- didn't work out well. Not everyone who can write in a programming language can also design software that actually works reliably and predictably.

By the time I got my first job, the great lie had surfaced.

There are Designers who are distinct from Programmers.

The idea was to insert a few smart people into the vast sea of mediocre people. This is manifestly false. But, it's a handy lie to allow managers to attempt to build large, complex pieces of software using a larger but lower-skilled workforce.

Reasoning By Analogy

The reasoning probably goes like this. In the building trades there are architects, engineers and construction crews. In manufacturing, there are engineers and factory labor.

In these other areas, there's a clear distinction between design and construction.

Software must be the same. Right?

Wrong.

The analogy is fatally flawed because there is no "construction" in the creation of software. Software only has design. Writing code is -- essentially -- design work.

Architecture and Software Architecture

Spend time with architects and you realize that a good architect can (and often does) create a design that includes construction details: what fastenings to use, how to assemble things. The architect will build models with CAD tools, but also using foam board to help visualize the construction process as well as the final product.

In the software realm, you appear to have different degrees of detail: High Level Design, Detailed Design, Coding Specifications, Code.

High Level Design (or "Architecture") is the big picture of components and services; the mixture of purchased plus built; configuration vs. construction; adaptation vs. new development. That kind of thing. Essential for working out a budget and plan for buying stuff and building other stuff.

Usually, this is too high-level for a lot of people to code from. It's planning stuff. Analogous to a foam-board overview of a building.

Detailed Design -- I guess -- is some intermediate level of design where you provide some guidance to someone so they can write programming specifications. Some folks want this done in more formal UML or something to reveal parts of the software design. This is a murky work product because we don't have really formal standards for this. We can claim that UML is the equivalent of blueprints. But we don't know what level of detail we should reveal here.

When I have produced UML-centric designs, they're both "too technical" and "not detailed enough for coders". A critique I've never understood.

Program Specifications -- again, I'm guessing -- are for "coders" to write code from. To write such a thing, I have to visualize some code and describe that code in English.

Let's consider that slowly. To write programming specifications, I have to
  1. Visualize the code they're supposed to write.
  2. Describe that code in English.
Wouldn't it be simpler to just let me code it? It would certainly take less time.

Detailed Design Flaws

First, let me simplify things by mashing "Detailed Design" and "Specification" together, since they seem to be the same thing. A designer (me) has to reason out the classes required. Then the designer has to pick appropriate algorithms and data structures (HashMap vs. TreeMap). Then the designer has to either draw a UML picture or write an English narrative (or both) from which someone else can code the required class, data structure and algorithm. Since you can call this either name, the names don't seem to mean much.

I suppose there could be a pipeline from one design document at a high level to other designs at a low level. But if the low-level design is made difficult by errors in the high-level design, the high-level designer has to rework things. Why separate the work? I don't know.

When handing things to the coders, I've had several problems.
  1. They ignore the design and write stuff using primitive arrays because they didn't understand "Map", much less "HashMap" vs. "TreeMap". In which case, why write detailed design if they only ignore it? Remember, I provided specifications that were essentially line-of-code narrative. I named the classes and the API's.
  2. They complain about the design because they don't understand it, requiring rework to add explanatory details. I've gone beyond line-of-code narrative into remedial CS-101. I don't mind teaching (I prefer it) but not when there's a silly delivery deadline that can't be met because folks need to improve their skills.
  3. They find flaws in the design because I didn't actually write some experimental code to confirm each individual English sentence. Had I written the code first, then described it in English, the description would be completely correct. Since I didn't write the code first, the English description of what the code should be contained some errors (perhaps I failed to fully understand some nuance of an API). These are nuances I would have found had I actually written the code. So, error-free specifications require me to write the code first.
My Point is This.

If the design is detailed enough to code from -- and error free -- a designer must actually write the code first.

Indeed, the designer probably should simply have written the code.

Architecture Isn't Like That

Let's say we have a software design that's detailed enough to code from, and is completely free from egregious mistakes in understanding some API. Clearly, the designer verified each statement against the API. I'd argue that the best way to do this is to have the compiler check each assumption. Clearly, the best way to do this is to simply write the code.

"Wait," you say, "that's going too far."

Okay, you're right. Some parts of the processing do not require that level of care. However, some parts do. For instance, time-critical (or storage-critical) sections of algorithms with many edge cases require that the designer build and benchmark the alternatives to be sure they've picked the right algorithm and data structure.

Wait.

In order for the designer to have absolute certainty that the design will work, they have to build a copy that works before giving it to the coders.

In architecture or manufacturing, the construction part is expensive.

In software, the construction part does not exist. Once you have a detailed design that's error-free and meets the performance requirements, you're actually done. You've created "prototypes" that include all the required features. You've run them under production-like loads. You've subjected them to unit tests to be sure they work correctly (why benchmark something that's incorrect?)

There's nothing left to do except transition to production (or package for distribution.)

Software Design

There's no "detailed design" or "programming specifications" in software. That pipeline is crazy.

It's more helpful to think of it this way: there's "easy stuff" and "hard stuff".
  • Easy Stuff has well-understood design patterns, nothing tricky, heavy use of established API's. The things where the "architectural" design can be given to a programmer to complete the design by writing and testing some code. Database CRUD processing, reporting and analysis modules, bulk file processing, standard web form processing for data administration, etc.
  • Hard Stuff has stringent performance requirements, novel or difficult design patterns, new API's. The things where you have to do extensive design and prototyping work to resolve complex or interlocking issues. By the time there's a proven design, there's also code, and there's no reason for the designer to then write "specifications" for someone to reproduce the code.
In both cases, there are no "coders". Everyone's a designer. Some folks have one design strength ("easy stuff", well-known design patterns and API's) and other folks have a different design strength.

There is no "construction". All of software development is design. Some design is assembling well-known components into easily-visualized solutions. Other design is closer to the edge of the envelope, inventing something new.

Tuesday, February 23, 2010

Numerosity -- More Metrics without Meaning

Common Complaint: "This was the nth time that someone was up in arms that [X] was broken ... PL/SQL that ... has one function that is over 1,500 lines of [code]."

Not a good solution: "Find some way to measure 'yucky code'."

Continuing down a path of relatively low value, the question included this reference: "Using Metrics to Find Out if Your Code Base Will Stand the Test of Time," Aaron Erickson, Feb 18, 2010. The article is quite nice, but the question abuses it terribly.

For example: "It mentions cyclomatic complexity, efferent and afferent coupling. The article mentions some tools." Mentions? I believe the article defines cyclomatic complexity and gives examples of it's use.

Red Alert. There's no easy way to "measure" code smell. Stop trying.

How is this a path of low value? How can I say that proven metrics like cyclomatic complexity are of low value? How dare I?

Excessive Measurement

Here's why the question devolves into numerosity.

The initial problem is that a piece of code is actually breaking. Code that breaks repeatedly is costly: disrupted production, time to repair, etc.

What further metric do you need? It breaks. It costs. That's all you need to know. You can judge the cost in dollars. Everything else is numerosity.

A good quote from the article: "By providing visibility into the maintainability of your code base—and being proactive about reducing these risks—companies can significantly reduce spend on maintenance". The article is trying to help identify possible future maintenance.

The code in question is already known to be bad. What more information is needed?

What level of Cyclomatic Complexity is too high? Clearly, that piece of code was already too much. Do you need a Cyclomatic Complexity number to know it's broken? No, you have simple, direct labor cost that tells you it's broken. Everyone already agrees it's broken. What more is required?

First things first: It's already broken. Stop trying to measure. When the brakes have already failed, you don't need to measure hydraulic pressure in the brake lines. They've failed. Fix them.

The Magical Number

The best part is this. Here's a question that provides much insight into the practical use of Cyclomatic Complexity. http://stackoverflow.com/questions/20702/whats-your-a-good-limit-for-cyclomatic-complexity.

Some say 5, some say 10.

What does that mean? Clearly code with a cyclomatic complexity of 10 is twice as bad as a cyclomatic complexity of 5. Right? Or is the cost function relatively flat, and 10 is only 5% worse than 5? Or is the cost function exponential and 10 is 10 times worse than 5? Who knows? How do we interpret these numbers? What does each point of Cyclomatic complexity map to? (Other than if-statements.)
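
For what it's worth, the number is mostly made of decisions-plus-one. Here's a deliberately simplified counter using Python's ast module; real tools (mccabe, radon) count a few more node types, so treat this as an illustration of what the points are, not a drop-in metric.

    import ast
    import textwrap

    # Count one path through the code, plus one for each branching construct.
    BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source):
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCHES) for node in ast.walk(tree))

    SOURCE = textwrap.dedent("""
        def grade(score):
            if score >= 90:
                return "A"
            elif score >= 80:
                return "B"
            return "C"
    """)
    print(cyclomatic_complexity(SOURCE))  # 3: one path plus two if-tests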

Somehow both 5 and 10 are "acceptable" thresholds.

When folks ask how to use this to measure code smell, it means they're trying to replace thinking with counting. Always a bad policy.

Second Principle: If you want to find code smells, you have to read the code. When the brakes are mushy and ineffective, you don't need to start measuring hydraulic pressure in every car in the parking lot. You need to fix the brakes on the car that's already obviously in need of maintenance.

Management Initiative

Imagine this scenario. Someone decides that the CC threshold is 10. That means they now have to run some metrics tool and gather the CC for every piece of code. Now what?

Seriously. What will happen?

Some code will have a CC score of 11. Clearly unacceptable. Some will have a CC score of 300. Also unacceptable. You can't just randomly start reworking everything with CC > 10.

What will happen?

You prioritize. The modules with CC scores of 300 will be reworked first.

Guess what? You already knew they stank. You don't need a CC score to find the truly egregious modules. You already know. Ask anyone which modules are the worst. Everyone who reads the code on a regular basis knows exactly where the actual problems are.

Indeed, ask a manager. They know which modules are trouble. "Don't touch module [Y], it's a nightmare to get working again."

Third Principle: You already know everything you need to know. The hard part is taking action. Rework of existing code is something that managers are punished for. Rework is a failure mode. Ask any manager about fixing something that's rotten to the core but not actually failing in production. What do they say? Everyone -- absolutely everyone -- will say "if it ain't broke, don't fix it."

Failure to find and fix code smells is entirely a management problem. Metrics don't help.

Dream World

The numerosity dream is that there's some function that maps cyclomatic complexity to maintenance cost. In dollars. Does that mean this formula magically includes organization overheads, time lost in meetings, and process dumbness?

Okay. The sensible numerosity dream is that there's some function between cyclomatic complexity and effort to maintain in applied labor hours. That means the formula magically includes personal learning time, skill level of the developer, etc.

Okay. A more sensible numerosity dream is that there's some function between cyclomatic complexity and effort to maintain in standardized labor hours. Book hours. These have to be adjusted for the person and the organization. That means the formula magically includes factors for technology choices like language and IDE.

Why is it so hard to find any sensible prediction from specific cyclomatic complexity?

Look at previous attempts to measure software development. For example, COCOMO. Basic COCOMO has a nice R×T=D kind of formula. Actually it's E = a × K^b, but the idea is that you have a simple function with one independent variable (lines of code, K), one dependent variable (effort, E) and some constants (a, b). A nice Newtonian and Einsteinian model.

Move on to intermediate COCOMO and COCOMO II. At least 15 additional independent variables have shown up. And in COCOMO II, the number of independent variables is yet larger with yet more complex relationships.
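
As a sketch, assuming the organic-mode constants usually quoted from Boehm (a = 2.4, b = 1.05; take them as illustrative, not authoritative), Basic COCOMO really is a one-variable model. Intermediate COCOMO then multiplies the result by an effort-adjustment factor built from its 15 cost drivers, which is exactly where the independent variables start multiplying.

    # Basic COCOMO: effort in person-months from KLOC alone.
    def basic_effort(kloc, a=2.4, b=1.05):
        return a * kloc ** b

    # Intermediate COCOMO: the same curve, times the product of the
    # cost-driver ratings (the "effort adjustment factor"). The driver
    # values you'd plug in here are the 15 extra independent variables.
    def intermediate_effort(kloc, drivers):
        eaf = 1.0
        for rating in drivers:
            eaf *= rating
        return basic_effort(kloc) * eaf

    print(basic_effort(50))  # ~146 person-months for a 50 KLOC project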

Fourth Principle: Software development is a human endeavor. We're talking about human behavior. Measuring hydraulic pressure in the brake lines will never find the idiot mechanic who forgot to fill the reservoir.

Boehm called his book Software Engineering Economics. Note the parallel. Software engineering -- like economics -- is a dismal science. It has lots of things you can measure. Sadly, the human behavior factors create an unlimited number of independent variables.

Relative Values

Here's a sensible approach: "Code Review and Complexity". They used a relative jump in Cyclomatic Complexity to trigger an in-depth review.

Note that this happens at development time.

Once it's in production, no matter how smelly, it's unfixable. After all, if it got to production, "it ain't broke".
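
A minimal sketch of that development-time trigger, assuming you already get a complexity number per function from whatever tool you run. The 50% threshold is my invention, not the article's.

    # Flag a function for in-depth review when its cyclomatic complexity
    # jumps by more than a relative threshold between revisions.
    def needs_review(cc_before, cc_after, max_relative_jump=0.5):
        if cc_before <= 0:
            return cc_after > 1  # brand-new code: review anything branchy
        return (cc_after - cc_before) / float(cc_before) > max_relative_jump

    print(needs_review(6, 11))   # True: an 83% jump
    print(needs_review(10, 11))  # False: a 10% jump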

Bottom Lines
  1. You already know it's broken. The brakes failed. Stop measuring what you already know.
  2. You can only find smell by reading the code. Don't measure hydraulic pressure in every car: find cars with mushy brakes. Any measurement will be debated down to a subjective judgement. A CC threshold of 10 will have exceptions. Don't waste time creating a rule and then creating a lot of exceptions. Stop trying to use metrics as a way to avoid thinking about the code.
  3. You already know what else smells. The hard part is taking action. You don't need more metrics to tell you where the costs and risks already are. It's in production -- you have all the history you need. A review of trouble tickets is enough.
  4. It's a human enterprise. There are too many independent variables, stop trying to measure things you can't actually control. You need to find the idiot who didn't fill the brake fluid reservoir.

Thursday, January 28, 2010

Aristotle's Poetics and Project Management

It can be a fatal mistake to impose a story arc on a project.

Aristotle's Poetics is a commentary on drama, in which he identified two story arcs that are sure-fire hits: Big Man Brought Down, and Small Man Lifted Up. These are the standard "Change in Fortune" story lines that we like.

Most movies in the category called "westerns" have elements of both. When we look at a movie like "The Man Who Shot Liberty Valance", we see both changes in fortune intertwined in a single story.

Movies, further, have a well-defined three-act structure with an opening that introduces characters, context and the dramatic situations. Within the first minutes of the film, there will be some kind of initiating incident that clarifies the overall conflict and sets up the characters' goals. We can call this the narrative framework. Or a story design pattern.

Project Narrative

A "project" -- in principle -- has a narrative arc, much like a movie. Walker Royce (Project Management: A Unified Framework) breaks big waterfall-like projects into four acts (or "phases"):
  • Inception
  • Elaboration
  • Construction
  • Transition
Even if done in a spiral, instead of a waterfall, these are the acts in the narrative between the opening "Fade In:" and the closing credits.

In some cases, folks will try to impose this four-act structure on an otherwise Agile method. It's often a serious mistake to attempt to impose this convention of fiction on reality.

Things That Don't Exist

One of the most important parts of the narrative arc is "inception". Every story has a "beginning".

Projects, however, do not always have a clear beginning. They can have a "kick-off" meeting, but that's only a fictionalized beginning. Work began long before the kick-off meeting. Often, the kick-off is just one small part of Inception.

Some projects will have a well-defined narrative structure. Projects labeled "strategic", however, do not ever have this structure. They can't.

For large projects, something happened before "inception"; this is a real part of the project. The fiction is that the project somehow begins with the inception phase. This narrative framework is simply wrong; the folks that helped plan and execute inception know this thing that filmmakers call "back story". This pre-inception stuff is a first-class part of the project, even though it's not an official part of the narrative framework.

Even if you have an elaborate governance process for projects, there's a lot that happens before the first governance tollgate. In really dysfunctional organizations, there can be a two-tiered inception, where there's a big project to gather enough information to justify the project governance meeting to justify the rest of the work. The "rest of the work" -- the real project -- starts with an "inception" effort that's a complete falsification. It has to ignore (or at best summarize) the stuff that happened prior to inception.

The Price of Ignorance

The Narrative Arc of a project requires us to collect things into an official Inception or story setup. It absolutely requires that we discard things that happened before inception.

Here's what happens.

Users say they want something. "Automated customer name resolution". Something that does an immediate one-click credit check on prospective B2B e-commerce customers.

In order to justify the project, we do some preliminary work. We talk to our vendors and discover that business names are always ambiguous, and there's no such thing as one-click resolution. So we write sensible requirements. Not the user's original dream.

We have a kick-off meeting for a quick, three-sprint project. We have one user story that involves two multi-step alternative scenarios. We have some refactoring in the first sprint, then we'll build one scenario for the most-common case, then the other scenario for some obscure special cases.

When we get ready for release, the customer asks about the one-click thing.

"What one-click thing?" we ask.

"I always understood that this would be one-click," the customer says.

"Do you recall the project governance meetings, the signed-off requirements and the kick-off meeting? You were there. We walked through the story and the scenarios. There can't be one-click."

Communicate More? Hardly

What can be done to "prevent" this? Essentially, nothing.

The standard project narrative framework -- start, work, finish -- or perhaps inception, elaboration, construction, transition -- doesn't actually exist.

Stuff that happened "before" the start is part of the project. We can claim (or hope) that it doesn't exist, but it really does. We can claim that a project has a "start", but it doesn't. It sort of eases into being based on lots of preliminary conversations.

When the users asked for "one click", it was a result of several other conversations that happened before going to the business analyst to ask for the additional feature.

Indeed, the "one click" was a compromise (of sorts) reached between a bunch of non-IT people having a conversation prior to engaging the business analyst to start the process of starting the project. All of that back story really is part of the project, irrespective of the requirements of the project's standard narrative structure.

Bottom Line

Poetics don't apply to large, strategic projects. A project is a subtle and complex thing. There's no tidy narrative arc. Backstory matters; it can't be summarized out of existence with a kick-off slide show.

Thursday, December 3, 2009

The King Cnut School of Management

See this story of King Cnut ruling the waves.

The King Cnut School of Management is management by fiat. Declaring it so.

PM: "When will this transition to production?"

Me: "After the firewall and VM configuration."

PM: "So, can we say Thursday?"

Me: "You can say that, if you want, but you have no basis for that. The firewall hardware is sitting on the loading dock, and the RHEL VM's won't run Python 2.6 with the current SELinux settings. I have no basis for expecting this to be fixed in a week."

PM: "We can just pick a date, and then revise it."

Me: "Good plan. Pick a random date and complain when it's not met. While you're at it, hold the tide back for a few hours, too."

Thursday, November 19, 2009

On Risk and Estimating and Agile Methods

See The Question of Risk.

These are notes for a long, detailed rant on the value of Agile methods.

One specious argument against an Agile approach is the "risk management" question. In this case, however, it becomes "how much of a contingency budget should we write into the contract?" Which isn't really risk management.

Thursday, October 1, 2009

Agile Project Management

Got this question recently.
"Any suggestions on PM tools that meet the following considerations

1) Planning

2) Estimating

3) Tracking (allowing both PM input and developer input)

4) Reporting

5) Support both Agile and Waterfall projects

6) Releases

7) Bug fixes (probably just another type of backlog)"

Agile PM requires far less planning than you're used to.
  1. A "backlog" which is best done on a spreadsheet.
  2. Daily standup meetings which last no more than 15 minutes at the absolute longest.
And that's about it.

Let's look at these expectations in some detail. This is important because Agile PM is a wrenching change from waterfall PM.

Planning

There are two levels of detail in planning. The top level is the overall backlog. This is based on the "complete requirements" (hahaha, as if such a thing exists). You have an initial planning effort to decompose the "requirements" into a workable sequence of deliverables and sprints to build those deliverables. Don't over-plan -- things will change. Don't invest 120 man-hours of effort into a plan that the customer will invalidate with their first change request. Just decompose into something workable. Spend only a few days on this.

The most important thing is to prioritize. The backlog must always be kept in priority order. The most important things to do next are at the top of the backlog. At the end of every sprint, you review the priorities and change them so that the next thing you do is the absolutely most valuable thing you can do. At any time, you can stop work, and you have done something of significant value. At any time, you can review the next few sprints and describe precisely how valuable those sprints will be.
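
A spreadsheet is genuinely enough. If you want to picture it, here's a hypothetical backlog as a flat list, re-ranked at the end of every sprint; the story names and value scores are invented.

    # The backlog: one row per deliverable, kept in priority order.
    backlog = [
        {"story": "customer name resolution", "value": 80},
        {"story": "bulk file processing", "value": 95},
        {"story": "audit reporting", "value": 40},
    ]

    # End-of-sprint review: re-rank so the top of the list is always
    # the most valuable thing to do next.
    backlog.sort(key=lambda row: row["value"], reverse=True)

    for row in backlog:
        print(row["value"], row["story"])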

The micro level of detail is the next few deliverables. No more than four to six. Don't over-plan. Review the deliverables in the backlog, correcting, expanding, combining and refining as necessary to create something that will be of value. List the sprints to build those deliverables. Try to keep each sprint in the four week range. This is really hard to do at first, but after a while you develop a rhythm based on features to be built and skills of the team. You don't know enough going in, so don't over-plan. After the first few sprints you'll learn a lot about the business problem, the technology and the team.

Estimating

Rule 1: don't. Rule 2: the estimate is merely the burn rate (cost per sprint) times the number of sprints. Each sprint involves the whole team building something that *could* be put into production. A team of 5 with 4-week sprints costs 5*40*4 = 800 man-hours per sprint.

Each sprint, therefore, has a cost of 800 man-hours. Period. The overall project has S sprints. If the project runs more than a year, stop. Stop. The first year is all you can rationally estimate this way. Future years are just random numbers. A year is 5*40*50 = 10,000 man-hours.

Details don't matter because each customer change will invalidate all of your carefully planned schedules. Just use sprints and simple multiplies. It's *more* accurate since it reflects the actual level of unknowns.
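
The whole estimating model fits in a few lines. A sketch, using the numbers from the text (5 people, 40-hour weeks, 4-week sprints):

    def sprint_cost(team_size=5, hours_per_week=40, weeks_per_sprint=4):
        # The burn rate: man-hours per sprint.
        return team_size * hours_per_week * weeks_per_sprint

    def estimate(sprints):
        # The estimate: burn rate times the number of sprints.
        return sprints * sprint_cost()

    print(sprint_cost())  # 800 man-hours per sprint
    print(estimate(12))   # 9,600 man-hours for twelve sprints (~48 weeks);
                          # the text's 50-week year rounds up to 10,000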

What about "total cost"? First, define "total". When the project starts is the "complete requirements" (hahahaha, as if such a thing actually exists). Then, with each customer change, this changes. Further, half the requirements are merely "nice-to-haves". Since they're merely nice, they're low priority -- at the bottom of the backlog.

Since each sprint creates something deliverable, you can draw a line under any sprint, call it "done" and call that the "total cost". Any sprint. Any. There are as many different total costs as there are sprints, and all of them are right.

Tracking

I don't know what this is. I assume it's "tracking progress of tasks against a plan". Since the tasks are not planned at a low level of detail, there's nothing to "track".

You have a daily stand-up. People commit to do something that day. The next day you find out if they finished or didn't finish. This isn't a "tool thing". It's a conversation. Done in under 15 minutes.

Two things can happen during this brief conversation.

- Things are progressing as hoped. The sprint will include all hoped-for features.

- Things are not progressing as hoped. The sprint may not include some feature, or will include an incomplete implementation. The sprint will never have bugs -- quality is not sacrificial. Features are sacrificial.

There's no management intervention possible. The sprint will have what it will have. Nothing can change that. More people won't help. Technology changes won't help. Design changes won't help. You're mid-sprint. You can only finish the sprint.

AFTER the sprint is over, and you've updated the backlog and fixed the priorities, you might want to consider design changes or technology changes.

Reporting

What? To Whom? Each sprint is a deliverable. The report is "Done".

The backlog is a shared document that the users "own" and you use to assure that the next sprint is the next most important thing to do.

Support both Agile and Waterfall projects

Not possible. Incompatible at a fundamental level. You can't do both with one tool because you don't use tools for Agile projects. You just use spreadsheets.

Releases

Some sprints are release sprints. They're no different (from a management perspective) than development sprints. This is just CM.

Bug fixes

Probably just another type of backlog. Correct.