
Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site and will likely be dropped five years after the last post in Jan 2023.

Showing posts with label use case. Show all posts

Tuesday, December 5, 2017

Functional Requirements and Use Cases -- even for "simple" things

In the mailbag I found this nonsense, doomed to inevitable failure:
"As I get more serious about this data science stuff, it has become obvious that a windows machine is not the way to go. ...
Q1: What other things should I think about and consider while shopping for a new computer?
Q2: Are there issues w/ running VmWare and Windows 7 w/in VmWare on Ubuntu?
"
I've omitted many, many words (400 or so.)

Here are all of the functional requirements I could discern:
  • I would like to have 1 machine. I don't want a desktop and a laptop
  • Install VMware 
  • Install Windows 7 using VMware
Those were all of the functional requirements. The other 400 words were specifications. Nothing approached a use case beyond those three items: a single machine, VMware, and Windows under VMware. The laptop form factor, which seems to go without saying, might be a user story, but that's pushing it.

The "a windows machine is not the way to go" and "Install Windows 7" indicate a fairly serious problem. It is not the way to go and it's required. Both. This is doomed to inevitable failure. 

This is not the way to make a decision.

Q1. What other things should I think about? 

Just about every other thing. Start with use cases and functional requirements. Skip over specifications. (In general, never start with specifications because that's where you end: a list of useless numbers that don't bracket what you actually want to do.)

Use Cases Matter. Specifications Don't Matter.

Write down all the MBs and TBs you want. Without a use case, they're irrelevant noisy details. Throw the numbers away until you have a list of verbs. Things you will DO. 

With so few actual functional requirements, almost *any* computer (possibly including a Raspberry Pi 3) would pass the suite of acceptance test cases.

✅ One Machine.
✅ VMware.
✅ Windows.

After a lot more back-and-forth, I discerned one (or maybe two) additional functional requirement(s).
  • I have leo w/ java to gen html.
I know what Leo is in this context. I'm guessing the "java to gen html" is JRst. The lack of clarity is, of course, part of the problem here.

This requirement surfaced in the context of explaining to me why Windows was so important. Really. Windows was required to run two open-source apps. And. "a windows machine is not the way to go." Doomed. To. Inevitable. Failure.

Here's the only relevant functional requirement: run Leo and Java. And even then, there's a huge hole in this. Leo is Python-based. Docutils RST2HTML is Python-based. Why not simply use Leo and Python? What does Java have to do with anything?

Buy this: a Pi-top: https://www.sparkfun.com/products/13896

Q2. Are there issues w/ running...? 

Yes. Always. For everything you can possibly enumerate there are "issues". 

There. Are. Always. Issues.

Use Cases Matter.

Since you don't have any functional requirements or use cases, it's impossible to filter the issues and see if any of the known issues impact what you think you're going to do.

From what I was told, a Pi-top covers everything that's required. It's hard to be sure, of course, when the functional requirements are so vague. But there's no evidence that the Pi-top can't work to fill all of the stated functional requirements.

What To Do Next

It seems obvious, but the next step is to create a test plan. Actually, that was the first step. Since it wasn't done first, now it's the next step.

Write down the things you want to do. Make a list. Ideally a long list of things you will DO. Active voice. Verbs. Actions. Tasks. Activities. It's hard to emphasize this enough.

Then, when considering a computer, see if it can actually do those things. Test it against the requirements to see if it does what it's supposed to do. Among all the machines that pass the tests, you can then sort by price. (Or availability, or reputation, or cool stickers, whatever non-functional requirements seem relevant.)

The questions of TB and MB and processor clock speed mean nothing. Nothing. Find the cheapest (smallest) machine that does what you want. Don't find the machine with x MB and y TB of whatever.

Then there's this, "As I get more serious about this data science stuff", which seems little more than context. But it's really important. Indeed, it's essential.

If you're going to do machine learning, you don't really want to buy the necessary computer. You want to rent it for the hour or so each day you actually need it. It will be idle 23 of every 24 hours (96% of the time). Why buy that much horsepower which you are never going to use?

If you're going to log in to a server rented from a cloud computing vendor (Amazon AWS, Microsoft Azure, etc.), then you can probably get by with a tablet that runs SSH and a browser. A tablet with a cool keyboard and a little display rack can be very nice. https://panic.com/prompt/ and https://www.termius.com seem to be all that's required.



Without Use Cases, however, it's impossible to select a computer. Don't spend money without test cases.

Thursday, June 20, 2013

Automated Code Modernization: Don't Pave the Cowpaths

After talking about some experience with legacy modernization (or migration), I received information from Blue Phoenix about their approach to modernization.

Before talking about modernization, it's important to think about the following issue from two points of view.

Modernization can amount to nothing more than Paving the Cowpaths.

From a user viewpoint, "paving the cowpaths" means that the legacy usability issues have now been modernized without being fixed. The issues remain. A dumb business process is now implemented in a modern programming language. It's still a dumb business process. The modernization was strictly technical with no user-focused "value-add".

From a technical viewpoint, "paving the cowpaths" means that bad legacy design, bad legacy implementation and legacy platform quirks have now been modernized. A poorly-designed application in a legacy language has been modernized into a poorly-designed application in yet another language. Because of language differences, it may go from poorly-designed to really-poorly-designed.

The real underlying issue is how to avoid low-value modernization. How to avoid merely converting bad design and bad UX from one language to another.

Consider that it's possible to actually reduce the value of a legacy application through poorly-planned modernization. Converting quirks and bad design from one language to another will not magically make a legacy application "better". Converting quirky code to Java will merely canonize the quirks, obscuring the essential business value that was also encoded in the quirky legacy code.

Focus on Value

The fundamental modernization question is "Where's the Value?" Or, more specifically, "What part of this legacy is worth preserving?"

In some cases, it's not even completely clear what the legacy software really is. Old COBOL mainframe systems may contain hundreds (or thousands) of application programs, each of which does some very small thing.

While "Focus on Value" is essential, it's not clear how one achieves this. Here's a process I've used.

Step 1. Create a code and data inventory. 

This is essential for determining what parts of the legacy system have value. Blue Phoenix has "Legacy Indexing" for determining the current state of the application portfolio. Bravo. This is important.

I've done this analysis with Python. It's not difficult. Many organizations can provide a ZIP file with all of the legacy source and all of the legacy JCL (z/OS shell scripts). A few days of scanning can produce inventory summaries showing programs, files, inputs and outputs.

A suite of tools would probably be simpler than writing a JCL parser in Python.
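As a sketch of what that few days of scanning can look like, here is a minimal Python inventory scanner. Everything specific in it is an assumption for illustration: the `.cbl` file extension, the source layout, and the two regular expressions only cover the simplest forms of COBOL `SELECT ... ASSIGN TO` clauses and static `CALL` statements, nothing like a full parser.

```python
import re
from collections import defaultdict
from pathlib import Path

# Two common inventory clues in COBOL source (simplest forms only):
# file assignments and static subprogram calls.
ASSIGN_PAT = re.compile(r"SELECT\s+(\S+)\s+ASSIGN\s+TO\s+(\S+)", re.IGNORECASE)
CALL_PAT = re.compile(r'CALL\s+"([^"]+)"', re.IGNORECASE)

def scan_sources(root: str) -> dict:
    """Build a program -> {files, calls} inventory from a source tree."""
    inventory = defaultdict(lambda: {"files": [], "calls": []})
    for path in Path(root).glob("**/*.cbl"):
        text = path.read_text(errors="replace")
        entry = inventory[path.stem]
        entry["files"].extend(
            m.group(2).rstrip(".") for m in ASSIGN_PAT.finditer(text))
        entry["calls"].extend(m.group(1) for m in CALL_PAT.finditer(text))
    return dict(inventory)
```

Summaries of who reads which file, and who calls whom, fall out of this dictionary with a few more lines of Python.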

A large commercial operation will have all kinds of source checked into the repository. Some will be inexplicable. Some will have never been used. In some cases, there will be executable code that was not actually built from the source in the master source repository.

A recreational project (like HamCalc) reveals the same patterns of confusion as large multi-million dollar efforts. There are mystery programs which are probably never used; the code is available, but they don't appear in shell scripts or interactive menus. There are programs which have clear bugs and (apparently) never worked. There are programs with quirks; programs that work because of an undocumented "feature" of the language or platform.

Step 2. Capture the Data.

In most cases, the data is central: the legacy files or databases need to be preserved. The application code is often secondary. In most cases, the application code is almost worthless, and only the data matters. The application programs serve only as a definition of how to interpret and decode the data.

Blue Phoenix has Transition Bridge Services. Bravo. You'll be moving data from legacy to new (and the reverse, also.) We'll return to this "Build Bridges" below.

Regarding the data vs. application programming distinction, I need to repeat my observation: Legacy Code Is Largely Worthless. Some folks are married to legacy application code. The legacy code does stuff to the legacy files. It must be important, right?

"That's simple logic, you idiot," they say to me. "It's only logical that we need to preserve all the code to process all the data."

That's actually false. It's not simple logic. It's just wishful thinking.

When you actually read legacy code, you find that a significant fraction (something like 30%) is trivial recapitulation of SQL's "set" operations: SQL DML statements have an implied loop that operates on a set of data. Large amounts of legacy code merely recapitulates the implied loop. This is trivially true of legacy SQL applications with embedded SQL; explicit FETCH loops are very wordy. There's no sense in preserving this overhead if it can be avoided.

Programs which work with flat files always have long stretches of code that models SQL loops or Map-Reduce loops. There's no value in the loop management parts of these programs.
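The "implied loop" point can be made concrete. Here is a hedged illustration (the `orders` table and status codes are invented): a legacy-style explicit fetch loop, and the single DML statement whose implied loop does the same work.

```python
import sqlite3

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, "NEW"), (2, "NEW"), (3, "SHIPPED")])
    return conn

def close_new_orders_loop(conn):
    # Legacy style: an explicit fetch loop that re-implements
    # the implied loop SQL already provides.
    cur = conn.execute("SELECT id FROM orders WHERE status = 'NEW'")
    for (order_id,) in cur.fetchall():
        conn.execute("UPDATE orders SET status = 'OPEN' WHERE id = ?",
                     (order_id,))

def close_new_orders_set(conn):
    # Set style: one DML statement, no application loop at all.
    conn.execute("UPDATE orders SET status = 'OPEN' WHERE status = 'NEW'")
```

Both functions leave the table in the same state. The wordy version is the part of the legacy code base that deserves to be discarded, not translated.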

Another significant fraction is "utility" code that is not application-specific in any way. It's an application program that merely does a "CREATE TABLE XYZ(...) AS SELECT ....": a single line of SQL. There's no sense in preserving this through an "automated" tool, since it doesn't really do anything of value.

Also. The legacy code has usability issues. It doesn't precisely fit the business use cases. (Indeed, it probably hasn't fit the business use cases for decades.) Some parts of the legacy code base are more liability than asset and should be discarded in order to simplify, streamline or improve operations.

What's left?

The high value processing.

Step 3. Extract the Business Rules.

Once we've disposed of overheads, utility code, quirks, bad design, and wrong use cases, what's left is the real brass tacks. A few lines of code here and there will decode a one-character flag or indicator and determine the processing. This code is of value.

Note that this code will be disappointingly small compared to the total inventory. It will often be widely scattered. Bad copy-and-paste programming will lead to exact copies as well as near-miss copies. It may be opaque.

IF FLAG-2 = "B" THEN MOVE "R" TO FLAG-BC.

Seriously. What does this mean? This may turn out to be the secret behind paying bonus commissions to highly-valued sales associates. If this isn't preserved, the good folks will all quit en masse.

This is the "Business Rules" layer of a modern application design. These are the nuggets of high-value coding that we need to preserve.

These are things that must be redesigned when moving from the old database (or flat files) to the new database. These one character flag fields should not simply be preserved as a single character. They need to be understood.

The business rules should never be subject to automated translation. These bits of business-specific processing must always be reviewed by the users (or business owners) to be absolutely sure that each rule is (a) relevant and (b) covered by a complete suite of unit test cases.

The unique processing rules need to have modern, formal documentation. Minimally, the documentation must be in the form of unit test cases; English as a backup can be helpful.
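A sketch of what "documentation as unit test cases" might look like for the one-line flag rule above. The meanings attached to the codes here are invented for illustration; discovering the real meaning is the whole point of the analysis step.

```python
from typing import Optional
import unittest

def commission_code(flag_2: str) -> Optional[str]:
    """Return the commission override code for a sales flag.

    Legacy rule: when FLAG-2 is "B", move "R" to FLAG-BC.
    (Hypothetical reading: "B" = bonus-eligible, "R" = rate override.)
    """
    return "R" if flag_2 == "B" else None

class TestCommissionCode(unittest.TestCase):
    def test_bonus_flag_sets_override(self):
        self.assertEqual(commission_code("B"), "R")

    def test_other_flags_leave_no_override(self):
        self.assertIsNone(commission_code("A"))
        self.assertIsNone(commission_code(""))
```

Ten lines of test like this pin down the rule far better than a paragraph of English, and they survive the next modernization, too.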

Step 4. Build Bridges.

A modernization project is not a once-and-done operation.

I've been told that the IT department goal is to pick a long weekend, preferably a federal Monday holiday weekend (Labor Day is always popular), and do a massive one-time-only conversion on that weekend.

This is a terrible plan. It is doomed to failure.

A better plan is a phased coexistence. If a vendor (like Blue Phoenix) offers bridge services, then it's smarter and less risky to convert back and forth between legacy and new over and over again.

The policy is to convert early and convert often.

A good plan is the following.
  1. Modernize some set of features in the legacy quagmire of code. This should be a simple rewrite from scratch using the legacy code as a specification and the legacy files (or database) as an interface.
  2. Run in parallel to be sure the modern version works. Do frequent data conversions from old to new as part of this parallel test.
  3. At some point, simply stop converting from old to new and start using the new because it passes all the tests. Often, the new will have additional features or remove old bugs, so the users will be clamoring for it.
For particularly large and gnarly systems, all features cannot be modernized at once. There will be features that have not yet been modernized. This means that some portion of new data will be converted back to the legacy for processing.

The feature sets are prioritized by value. What's most important to the users? As each feature set is modernized, the remaining bits become less and less valuable. At some point, you get to the situation where you have a portfolio of unconverted code but no missing features. Since there are no more desirable legacy features to convert, the remaining code is -- by definition -- worthless.

The unconverted code is a net cost savings.

Automated Translation

Note that there is very little emphasis on automated translation of legacy code. The important work is uncovering the data and the processing rules that make the data usable. The important tools are inventory tools and data bridging tools.

Language survey tools will be helpful. Tools to look for file operations. Tools to look for places where a particular field of a record is used.

Automated translation will tend to pave all the cowpaths: good, bad and indifferent. Once the good features are located, a manual rewrite is just as efficient as automated translation.

Automated translation cannot capture meaning, identify use cases or write unit test cases. Thoughtful manual analysis of meaning, usability and unit tests is how the value of legacy code and data is preserved.

Thursday, June 28, 2012

How to Write Crummy Requirements

Here's an object lesson in bad requirements writing.

"Good" is defined as a nice simple and intuitive GUI interface. I would be able to just pick symbol from a pallette and put it somewhere and the software would automatically adjust the spacing.
Some problems.
  1. Noise words.  Phrases like "'Good' is defined as" don't provide any meaning.  The words "just" and "automatically" are approximately useless.  Here is the two-step test for noise words.  1.  Remove the word and see if the meaning changed.  2.  Put in the opposite and see if the meaning changed.  If you can't find a simple opposite, it's noise of some kind.  Often it's an empty tautology, but sometimes it's platitudinous buzzwords.
  2. Untestable requirements.  "nice, simple and intuitive" are unqualified and possibly untestable.  If it's untestable, then everything meets the criteria (or nothing does.)  Elegant.  State of the art.  Again, apply the reverse test:  try "horrid, complex and counter-intuitive" and see if you can find that component.  No?  Then it's untestable and has no place.
  3. Silliness.  "GUI".  It's 2012.  What non-GUI interfaces are left?  Oh right.  The GNU/Linux command line.  Apply the reverse test: try "non-GUI" and see if you can even locate a product.  Can't find the opposite?  Don't waste time writing it down.
What's left?  
pick symbol from a palette ... the software would ... adjust the spacing.
That's it.  That's the requirement.  35 words that mean "Drag-n-Drop equation editing".

I have other issues with requirements this poorly done.  One of my standard complaints is that no one has actually talked to actual users about their actual use cases.  In this case, I happen to know that the user did provide input.

Which brings up another important suggestion.
  • Don't listen to the users.
By that I mean "Don't passively listen to the users and blindly write down all the words they use.  They're often uninformed."  It's important to understand what they're talking about.  The best way to do this is to actually do their job briefly.  It's also important to provide demos, samples, mock-ups, prototypes or concrete examples.  It's 2012.  These things are inexpensive nowadays. 

In the olden days we used to carefully write down all the users words because it would take months to locate a module, negotiate a contract, take delivery, install, customize, integrate, configure and debug.  With that kind of overhead, all we could do was write down the words and hope we had a mutual understanding of the use case.  [That's a big reason for Agile methods, BTW:  writing down all the user's words and hoping just doesn't work.]

In 2012, you should be able to download, install and play with candidate modules in less time than it takes to write down all the user's words.  Often much less time.  In some cases, you can install something that works before you can get the users to schedule a meeting.

And that leads to another important suggestion.
  • Don't fantasize.
Some "Drag-n-Drop" requirements are simple fantasies that ignore the underlying (and complex) semantic issues.  In this specific example, equations aren't random piles of mathematical symbols.  They're fairly complex and have an important semantic structure.  Dragging a ∑ or a √ from a palette will be unsatisfying because the symbol's semantics are essential to how it's placed in the final typeset equation.

I've worked recently with some folks that are starting to look at Hypervideo.  This is often unpleasantly difficult to write requirements around because it seems like simple graphic tools would be all that's required.  A lesson learned from Hypertext editors (even good ones like XXE) is that "WYSIWYG" doesn't apply to semantically rich markup.  There are nesting and association relationships that are no fun to attempt to show visually.  At some point, you just want to edit the XML and be done with it.

Math typesetting has deep semantics.  LaTeX captures that semantic richness.  
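A small LaTeX example of that semantic point. The placement of the very same ∑ symbol depends on its role, which a palette drag-and-drop cannot know:

```latex
% Inline, the limits tuck beside the summation sign:
The total $\sum_{i=1}^{n} x_i$ appears compactly in running text,
% while in display mode, the same markup stacks the limits:
\[
    \sum_{i=1}^{n} x_i
\]
% The markup says what the symbol *means* (a sum over i from 1 to n);
% the typesetting decisions follow from that meaning.
```

The author writes the semantics; TeX decides the spacing. That is exactly the division of labor the palette fantasy gets backwards.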

It's often best to use something like LaTeXiT rather than waste time struggling with finding a Drag-n-Drop tool that has appropriate visual cues for the semantics.  The textual rules for LaTeX are simple and—most importantly—fit the mathematical meanings nicely.  It was invented by mathematicians for mathematicians.

Thursday, July 21, 2011

Spam Email Footers

I don't want the spamilicious email.  I'm trying to actually unsubscribe.

The footer says "If you are not the intended recipient, you are hereby notified that any dissemination, distribution or copying of any information contained in or attached to this communication is strictly prohibited. If you have received this message in error, please notify the sender immediately and delete the material from any computer."

I don't feel like the intended recipient because it's just irrelevant junk.  Perhaps you should not have disseminated, distributed, copied or sent me this.  Wouldn't that have been simpler? Keep it to yourself?

I also think I've received the message in error.  Since I don't want the damn thing. And that means that I have to delete it?  Why can't you stop sending it?  Wouldn't that be simpler for both of us?

Thursday, June 30, 2011

Implementing the Unsubscribe User Story

I've been unsubscribing from some junk email recently.

The user story is simple: As a not-very-interested person, I want to get off your dumb-ass mailing list so that I don't have to flag your crap as spam any more.

The implementations vary from good to evil.  Here's what I've found.

The best sites have an unsubscribe link that simply presents the facts -- you are unsubscribed.  I almost feel like re-subscribing to a site that handles this use case so well.

The first level of crap is a site which forces me to click an OK or Unsubscribe button to confirm that I really want to unsubscribe and wasn't clicking the tiny little links at the end of the message randomly.

The deeper level of "marketing" crap is a form that allows me to "configure my subscription settings".  This is done by some marketing genius who wanted to "offer additional value" rather than simply do what I asked.  This is a hateful (but not yet evil) practice.  I don't want to "configure" my settings.  I want out.

The third-from-worst is a form in which I must enter my email address.  What?  I have several email aliases that redirect to a common mailbox.  I have to -- what? -- guess which of the aliases was used?  This is pernicious because I can make a spelling mistake and they can continue to send me dunning email.  This fill-in-the-blanks unsubscribe is simply evil because it gives them plausible deniability when they continue to send me email.  It's now my fault that I didn't spell my address correctly.

The next-to-worst is a "mailto:" link that jumps into my emailer.  I have to -- what? -- fill in the magic word "Complete" somewhere?  You're kidding, right?  This is so 1980's-vintage listserv that I'm hoping these companies can be sued because they failed to actually unsubscribe folks.  Again, this gives the spammer a legitimate excuse because I failed to do the arcane step properly.

The worst is no link at all.  Just instructions explaining that an email must be sent with the magic word "Complete" or "Unsubscribe" in the subject or body.  Because I use aliases, this will probably not unsubscribe anything useful, but will only unsubscribe my outbound email address.  This is the worst kind of evil.  In a way, it meets the user story.  But only in a very, very oblique way.

Thursday, May 12, 2011

A Taxonomy of Use Case Errors

First, the definition. A use case describes an actor's interaction with a system to create business value. There are three parts: Actor, Interaction and Business Value.

1. Not Interactive.
1.1. The use case is just features and technical attributes with no actor interaction expressed.
1.2. The use case is just algorithms and processing with no connection to an actor or a goal.

2. No Business Value.
2.1. Incomplete
2.1.1. The use case focuses on sequential operations with no value or goal.
2.1.2. The use case simply follows existing precedent without supporting actual business goals. It "paves the cow path".
2.2. Non-Specific
2.2.1. The use case is a result of free-running imagination; it conflates "possibly" vs. "required". It contains descriptions of interactions which could happen or would be nice to happen.
2.3. Covers the Technology Only
2.3.1. The solution technology is conflated with the business problem. Words like "database" or "foreign key" or "error log" or other solution technology are central.
2.4. Contradictory
2.4.1. The use case goal contradicts other goals.
2.4.2. The use case sequence is inconsistent with the stated goal.

3. No Actor.

Tuesday, March 15, 2011

XBox Live -- Can't Unsubscribe

Here's a lack of a use case for you.

Someone -- fraudulently -- used my email address to subscribe to XBox live. I cannot remedy this. Apparently, neither can Microsoft.

I get spam from XBox. I change my passwords all over the place.

I go to the XBox live web site to cancel this fraudulent account. I can't. There's no place to do that. I cannot cancel the account because it can only be done through the XBox console. Except -- of course -- in the case of fraud, the email user doesn't have a console.

So I call the help desk. "Please remove my email from this account that fraudulently uses it." They can't. Absolutely can't. All I can do is route xbox.com email into the spam folder. That's it.

Nice help desk agent. Doing the best she can. But, she cannot find the email address and disconnect me from spam or XBox or XBox live. Someone at the console needs to do that.

How do we contact the person at the XBox Console? Can't send them email -- it goes to me!

Somehow, someone at Microsoft has to call "beezyNdetroit" on the phone (I guess) and break the bad news to them that they're fraudulently using one of my email addresses for their XBox spam.

Tuesday, June 29, 2010

Creating Complexity Where None Existed

I read a 482-word treatise that amounted to these four words "sales and delivery disagree".
A more useful summary is "Sales and Delivery have different views of the order".

It started out calling the standard sales-delivery differences a "Conflict" requiring "Resolution". The description was so hopelessly enmeshed in the conflict that it code-named sales and delivery as "Flintstones" and "Rubbles" as if they might see their names in the email and object. [Or -- what's more likely the case -- the author refused to see the forest for the drama among the trees.]

What?

Sales and delivery are in perpetual conflict and there is no "resolution" possible. I assume this "resolution" comes from living in a fantasy world where order-to-fulfillment and fulfillment-to-invoice processes somehow are able to agree at each step and the invoice always matches the order in every particular.

If this were actually true, either sales or delivery would be redundant and could be eliminated from the organization.

Fantastic Software

I'm guessing that someone fantasized about an order-to-invoice process and wrote software that didn't reflect any of the actual issues that occur when trying to deliver services. Now, of course, reality doesn't match the fantasy software and someone wants a "solution".

Part of finding that solution appears to be an effort to document (482 words!) this "drama" and "conflict" between sales and delivery.

Here's what I observed.
  1. Take a standard process of perfectly typical complexity and fantasize about it, writing completely useless software.
  2. Document the process as though it's a titanic struggle between two evil empires of vast and malicious sociopaths with innocent little IT stuck in the middle as these worlds collide. Assign code-names to sales and delivery to make the conflict seem larger and more important than it is.
  3. Start layering in yet more complexity regarding "conflict resolution algorithm" and other buzzwords that aren't part of the problem.
  4. Start researching these peripheral issues.
That makes a standard business process into something so complex that one could spend years doing nothing useful. A make-work project of epic proportions.

Wednesday, June 16, 2010

Adobe's Feckless Updater

Consider this dialog box.

The application was modified.

It can't be updated.

Why not just replace it? Replacing a modified application seems to be a perfectly sensible use case.

But no, rather than doing something useful, it shows a dialog box. I guess no one thought through this use case and asked what -- if anything -- the Actor actually cares about. Doing installs is not one of my goals as an actor. Managing installations is not one of my goals. I want to (a) read PDF's and (b) have everything else handled automatically.

Saturday, June 13, 2009

How to Derail Use Case Analysis: Focus on the Processes

It's easy to prevent successful use case analysis: make it into an exercise of defining lots of "processes" in excruciating detail.

First, ignore all "object" definitions.  All business domain entities -- and actors -- must be treated as second-class artifacts.

Second, define everything as a process.  A domain entity is just some stuff that must be mapped between processes.  Act like the entity doesn't really have independent existence.

Symptoms

You may be trying to do use case analysis, but if you have these symptoms, it might be time to step away from the process flows and ask what you're really doing.

There Are No Actors.  Well, actually, there's one actor: "user".  When all of your use cases have one actor, you've forgotten the users and their goals.  Stop writing the processes and take a step back.  Who are the users?  What are they trying to accomplish?  Where is their data?  When is it available?  What interactions with a system would make them happier and more productive?

Every Action Defines A New Class of Actors.  You have actors like content creators, content updater, content quality assurance, content refinement, content link checking, do this and do that.  Too many actors is easy to spot because the attributes and behaviors of all those actors are essentially identical.  In this example, they all edit content.

Each Use Case is a Wizard.  If each use case is a strictly sequential input of a data element followed by "click next to continue", you've taken away the actor's obligation to make decisions and take action on those decisions.  If you're lucky, you've got a use case for each individual goal the actor has.  More typically, you've overlooked a fair number of the actor's goals in your zeal of automating every step of one goal. 

You Need an "Overall Flow" or Sequence for the Use Cases.   If your use cases have to be exercised in one -- and only one -- order, you've taken away the actor's goals.

Collaboration

Use Case analysis describes the collaboration between actors and a system to create something of value.  If the system is described by wizards or modal dialogs that completely constrain the conversation to one where the system asks the actor for information, something's terribly wrong.

The point is to describe the system as a series of "interfaces", each of which has a use case.  The actors interact with the system through those interfaces.    The actor is free to gather information from the system, make decisions, and take action via the system.

War Story

The users had a legacy "application" that was a pile of SAS code that did some processing on the source data before reporting.

The use cases were -- essentially -- "1.  Actor runs this program  2. System does all this stuff."  The "all this stuff" was usually a lengthy, complex reverse engineering exercise trying to discern what the SAS code did.  

No mention of the business value.  No reason why.  And no room to implement a better process.

War Story

Analyst is pretty sure the user wants collaborative editing.  The analyst has a pretty good "epic" (not a proper user story, but a summary of a number of user stories) that describes creating, modifying and extracting from a collaboratively edited document.

The initial discussion led to every single verb somehow defining a separate actor.  In the original epic, there were exactly two actors, one who added or elided certain details for the benefit of another.

Later discussions led to a single "User" actor and the craziest patchwork of use cases.  Random "might be nice to have"s crept into the analysis, and the original "epic" was dropped.  No trace of it remained, making it very difficult to determine priorities.

War Story

Users had developed a complex work-around because they didn't have all the equipment they needed in their local office.  It involved mailing CD's from one office to another to prevent network bandwidth problems.  The business analysts wanted to capture this process, even though parts of it created no value.

It took a fair amount of work to get the analysts to stop documenting implementation details (mailing addresses, Fedex account numbers) and start documenting interactions and the business value that was created.  

Many process steps are physical moves and don't involve making information available for decision-making.  Those no-decision physical move steps should not be described in a use case.  Perhaps in an appendix, but they're incidental because they're just the current implementation.  A use case should capture the essence of the business value and how the actor uses the system to create that value.