By Alan Silverstein; last update: Apr 9, 2026 -- Email me at ajs@frii.com.
Contents:
Introduction
Software Productization
The Power of Vision
Contextual Inquiry
Automation
Human Factors / Gourmet Theory
Agile Programming
Payload Versus Scaffold
Documentation/Software Styles
Other Miscellaneous Philosophy
New material to merge (pending)
What is this? This is one person's distillation of key learnings and advice about how to do software engineering better.
This webpage is the result of me spending over 30 years working as a professional software engineer. I developed and supported a wide range of commercial applications from boot loaders to business software, including along the way: System administration, install/update, symbolic debuggers, ASIC design and test tools, and a great deal of software process engineering.
I received a lot of professional training paid for by my employer. I also looked constantly for ways to improve my skills, although I'm not an "academic" who reads a lot of theoretical material. So what you have here is advice from a crusty old geezer who learned a lot about "where to tap with a hammer" even without always knowing the quantum physics behind the machinery.
This long but still brief webpage cannot replace a lifetime's experience in the field. But I still feel the urge to try to articulate, all in one place, a summary of the best philosophies and methods I've incorporated. Many of the topics mentioned briefly here can lead you to much longer, and more precise, formulations of the same terms and concepts, if you're so inclined. "Take what you can use, and let the rest go by." -- Ken Kesey
Creating code can be easy or hard. Turning code into software by adding comments and other documentation is even harder. (Discussed in more detail in later sections.) "Productizing" software for long-term use and maintainability adds at least 2x the effort to just "ship a prototype."
Consider some major areas of possible investment effort:
Years ago I watched a series of videotapes called "The Power of Vision." I don't recall many of the details, but the author, Joel Barker, is apparently still around (see here). The key points I took away were:
Suggestions for vision leaders:
This is a method for identifying and accurately understanding product and market opportunities. It can be done very formally, but its terms and concepts are valuable even when used informally. The key concepts are:
Suggestions for questions that should be answered and communicated early in a major new software design project:
People tend to think of "manual" versus "automated" methods as being polar opposites. But actually there's a relatively smooth spectrum between the first-time (prototypical) execution of any task, and the 100% mechanical (without human intervention) implementation of the same action.
For example, writing down a recipe (a real one, or steps to follow while interacting with a computer system) might be considered "10% automation." Using the recipe is still mostly manual, but not completely reinvented each time.
One of the arts of software engineering is recognizing and striving for the best balance between manual and automated methods. Humans are very good at heuristic reasoning, while computers are very good at rote repetition and rule-following. Humans drift off mentally and make errors of inattentional blindness, while computers are so dumb they make errors of stupidity.
Software products can fail because they require too much detailed (or expert) attention from users, but also because they are overly complex and opaque, failing to instill a sense of end-user control and understanding. "Too clever is dumb." -- Ogden Nash
My suggestion is: Software product developers should consciously and routinely ask themselves what's being automated, how, and why; what tasks end users care about; and which tasks are better done more manually, rather than over-automated at an ultimately negative ROI.
Ideal GUIs don't just log what they do at the CLI level; they help users learn and visualize these details, and build their own well-commented scripts, so it's easy to jump from being a GUI to CLI user when the time is right.
Anomaly handling is an art form that's especially difficult to get right up front. So remain open and welcoming to end-user feedback about ways to improve it, usually arriving in the form of ill-articulated support requests like: "The system hung." "OK, what was the first specific sign of failure? Can you show me an error message?"
Consider these homilies: "No battle plan survives contact with the enemy," and, "Plans are nothing, planning is everything."
If you try to spell out a UI, architectural plan, etc, in great detail before writing any code, you are probably wasting your time, and others' too if you subject them to design reviews. It's more important to have a well-articulated and shared higher-level understanding of the project's vision, goals, end user tasks, and tradeoffs. Then as code development and alpha testing proceeds, the team should remain flexible/"agile" about "refactoring" existing code and documents based on new understandings.
The ideal goal is to have documentation and code in hand at every moment that best matches the statement, "this is exactly what we would have created from the beginning, if we knew then what we know now."
Of course in real life there are schedules, milestones, and soft/hard freeze/delivery dates. In a healthy software project, everyone including the management understands that refactoring is necessary, a continual striving for perfection balanced by various tradeoffs. Discussions are more about what to go back and change, and how soon, than whose fault it is that something wasn't understood well earlier, and how fouled up it all is now.
Finding new product disconnects early, and fixing them, should be seen as a glorious path to success, rather than an unsettling lack of stability... Just so long as the spiral continues upwards.
I find it useful to consciously label deliverables as payload or scaffolding (umbilicals). The former are documents or code features intended to be of long-lasting value to downstream users, including, say, someone writing client code around your library. The latter are features known to be temporary throwaways, including slides created for a team meeting and obsolete after the fact.
Beware however: Quite often a scaffold or prototype is thrown into production for lack of a better solution to a downstream problem. This is an argument for doing professional-quality work on almost anything you touch. Also there's no excuse for shortchanging your "internal" colleagues with inferior documents or tools just because "we'll never ship this to a customer."
I don't want to engage in any more long debates about style issues. I've been around that loop more times than I care to remember. It starts out fun, and it can be constructive or destructive, but is usually tiresome regardless.
But I do think it's important for everyone on a team to have an attitude of continual learning and improvement in their own skills and styles, sharing examples of best-practices, and being as egoless as possible.
That said, here are a few suggestions to consider:
But if you must err, make it be on the side of overcommunicating. Arrange your documentation and comments so it's easy for the reader to navigate rapidly and skip large sections not of present interest, while still locating the "meat" they're looking for. Consider: when they do need to dive deep to understand your code, what are you taking for granted that might not be obvious to them?
Your software should not look like an army of ants swarming across the screen or the page.
This practice also encourages you to improve code comments upon review, or even revise "bad" (embarrassing) code further before checking it in, rather than just apologizing for it.
File "change sets" (like in git, but not in CVS), with one checkin log message for all related files, are great for keeping multi-file changes together. But avoid becoming lazy about describing in detail the changes you made, across numerous files if necessary. Diff all related files into one message file, and still review all of your diffs to create the checkin message.
I find all of the following terms and concepts to be useful metaphors when working on software design and implementation. But I don't know how to categorize them further.
For highest S/N ratio in your interfaces, documentation, and software, avoid meaningless "differences" that do not convey useful information. While it's been said that, "a foolish consistency is the hobgoblin of little minds," I argue that consistency in engineering is not foolish. A healthy mindset, and a slight bit more upfront effort, produces products with optimal usability and mind-to-mind portability.
The prior advice about "one object for each name, one name for each object" is an example of avoiding meaningless differences.
When in doubt, format and punctuate the same as in English. For example, even a right-side comment that's not a complete sentence shouldn't trail off into space; end it with a period or colon. However, there are many cases where in software (really code) some unnatural practices, like vertical alignment, lead to greater readability.
"The expert never blames their tools." This is more than a sage observation about how people could/should behave. It also contains a nugget of advice that if your tools aren't serving you well -- or if you don't know how to use them to your best advantage -- it's worth taking some time to "sharpen the edges" or to find better tools.
In practice I often observe people turning out what I consider "crud" for documentation and software. If I inquire as to the root causes, ultimately it's because in their map of the world, even basic file editing is too hard. It's so onerous that they have a gap between what they can envision and what they can accomplish (at least with reasonable time and effort).
There's a great joy in continually learning and sharing new tricks of the trade: Better tools, and more powerful use of the tools that you use frequently.
But also, "tools alone are not the solution." As Tom DeMarco observed, quite often people (including managers) throw new tools at perceived problems when what's really needed is better goals, practices, and metrics. He called that "software laetrile" (a phony drug), and said: "People optimize whatever is measured, and hide their garbage elsewhere." So carefully choose the right metrics. "Make the right thing to do, the easiest thing to do."
Remember, your job is to help the reader rapidly get a fuzzy understanding of the big picture, then narrow in on a section of key interest, with good comprehension of the details there.
if (! foo) {error(...);}
else
{
<52 lines of turgid code>
}
Long ago the creator of the short-lived Canon Cat "modeless" personal computer visited HP. He made a strong case that modes are bad because they enable mode errors. Years later I realized that people can and do routinely deal well with modalities, such as switching between manual and automatic transmission vehicles. The key is that the modes must be so obvious or apparent that it's hard to get them wrong. Yet ideally the necessary mode switch becomes so natural that the user spends very little conscious effort paying attention to it.
My point is that it's a good idea for software designers to have in mind the concepts of modes and mode errors, and try to minimize them. Modes should be apparent, not hidden; and mode errors, especially frequent ones (if unavoidable), shouldn't be catastrophic, but instead well-handled as anomalies.
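To make the idea concrete, here's a minimal C sketch (all names and modes are hypothetical, invented for illustration): the current mode is made visible everywhere it matters, and a command issued in the wrong mode is reported as an ordinary, recoverable anomaly rather than a catastrophe.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical two-mode application: viewing versus editing. */
typedef enum { MODE_VIEW, MODE_EDIT } app_mode_t;

static app_mode_t current_mode = MODE_VIEW;

void set_mode(app_mode_t m) { current_mode = m; }

/* Make the mode apparent, e.g. in the prompt, so it's hard to get wrong. */
const char *mode_prompt(void)
{
    return current_mode == MODE_EDIT ? "EDIT> " : "view> ";
}

/* A mode error is reported, not catastrophic: the caller can tell the
 * user "switch to edit mode first" instead of corrupting anything. */
int do_delete(void)
{
    if (current_mode != MODE_EDIT)
        return -1;                 /* mode error, handled as an anomaly */
    /* ... perform the deletion here ... */
    return 0;
}
```

The design choice being illustrated: the mode exists, but it is apparent (the prompt), and the frequent mode error (deleting while viewing) is cheap and well-handled.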
But it can be painful to improve performance after the fact. So even while initially "getting the code right," you can wisely choose methods that are less likely to hurt performance later. For example, blindly copying math equations off a website into code, without considering performance killers (like unnecessary repeated calls of expensive subroutines), is not smart.
If you find yourself adding workarounds (like table lookups) due to poor performance, study and measure the code first. Is something dumb happening, perhaps even unintentionally, such as recomputing shared values unnecessarily, or failing to exit a loop before timeout?
I was taught this model as an interpersonal communication technique (for labeling one's expressions to reduce friction), starting with "facts." Not long afterward, I encountered exactly the same model in a human factors class; but there, starting with "intentions."
Although we only have five senses (or perhaps nine or more, see for example here), we are routinely overwhelmed by far more information than we can consciously process. (Also ref: "Consciousness is like a flashlight. Wherever you shine it in your own mind, you see light, and don't realize the immense cavern is mostly dark.") So we filter facts before we even start interpreting them.
This model is useful for sorting through complicated end user reactions and reports. "What are the facts?" "What's the root symptom of failure, the first anomaly report?" "Are they interpreting differently than I intended?" Etc.
There are a couple of cute generic sayings worth sharing here. One is about love relationships ending: "I will always love the false expectations I had of you." And the other is: "Have preferences, not expectations; make choices, not demands."
You don't get to prevent your end users from building expectations or making demands. The most you can do is be aware of the process, and try to manage it for the happiest outcome. This includes the advice to, "under-set expectations, and over-deliver on them."
The simplistic stages of grieving are:
(Of course people don't proceed smoothly through these stages in linear fashion, but the model's still useful.)
The point of explaining this here is to help you realize that when you share your creativity with others, and they don't love it as much as you do, it's natural to experience "grief" (even if you don't think of it as such) and to go through denial, bargaining, etc. Your goal as a professional engineer (even though it's contrary to human nature) is to humbly, rapidly, and (as much as you can) cheerfully accept feedback with thankful acceptance, secure in the knowledge that by improving your product (and yourself), you and everyone else will end up happier later.
Under TQC, every unexpected question, support request, or bug report is an opportunity to improve the product (including its documentation) to address a newly recognized disconnect. Say a downstream user visits you with a question about a GUI feature you added. Don't heckle them for not getting it; don't just answer their question; interview them for what were they thinking, trying to do, and where did they derail? What message wasn't clear, or documentation was missing?
Human tribal nature is to seek conformity under one or a few strong leaders, yet wise dissonance can ultimately be more important for group survival and success than, "doing what's always worked before."
...Bear in mind my advice about considering pre-filtering versus post-filtering. The key difference is what actually gets saved (anywhere) for later review/replay, versus never saved. For example, in a high-security environment they usually do 100% post-filtering of highly detailed "audit trails" (the formal name for logging), saving every scrap (no pre-filtering); the art then is how to do this efficiently in time, space, and post-filter searchability. Conversely, in our casual applications it's mostly OK to do a lot of pre-filtering and require a do-over with a "higher debug level" or equivalent to capture anomaly details.
Either way, filtering means both deciding WHAT to show/save (per above), and WHERE to send it. Generic message handling can involve a lot of features:
- Message type, to control filtering, and also for side-effects such as inserting a standard "ERROR" prefix string in error messages, etc. For rich examples, see the HW server: levity/aacs_server_hw/workspace/aacs_server/aacs_server.cpp, starting say with error_write(), although (a) I didn't go with a single point-of-call for all message types, instead multiple types that merge into common lower-level code, and (b) I wrapped the functions in macros like ERROR() for simpler calling.
- Printf with varargs (for caller convenience) versus direct "save this string".
- Application-specific side-effects like calling AAPL_FAIL for errors, or noting the system clock (timestamping) or caller ID/process/thread, etc. In the HW server, errors/warnings on the command client thread also record the command number.
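As a sketch only (the real error_write() code is richer, and all names here are invented for illustration), generic message handling with a type that drives both pre-filtering and side-effects might look like:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Message types, ordered by severity; the type controls filtering and
 * side-effects (here, just a standard "ERROR" prefix). */
typedef enum { MSG_DEBUG, MSG_INFO, MSG_WARN, MSG_ERROR } msg_type_t;

static msg_type_t filter_level = MSG_INFO;  /* pre-filter threshold */
static char last_message[256];              /* stands in for a real log sink */

static void message(msg_type_t type, const char *fmt, ...)
{
    va_list ap;
    char body[224];

    if (type < filter_level)
        return;                             /* pre-filtered: never saved */

    va_start(ap, fmt);
    vsnprintf(body, sizeof(body), fmt, ap); /* printf varargs for callers */
    va_end(ap);

    /* Side-effect driven by type: standard prefix on errors. A real
     * version would also timestamp, note thread/caller ID, etc. */
    snprintf(last_message, sizeof(last_message), "%s%s",
             type == MSG_ERROR ? "ERROR: " : "", body);
}

/* Wrapper macros for simpler calling, as with ERROR() in the HW server. */
#define ERROR(...) message(MSG_ERROR, __VA_ARGS__)
#define DEBUG(...) message(MSG_DEBUG, __VA_ARGS__)
```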
Thinking of (what some people are doing in a new project) as "just logging" is simplistic to the point of being misleading. When the end user does high-level task X, you could just record "X with these params" for both auditing and later playback. What you're really wanting, and taking for granted, is decompositional logging leading toward script/replay file creation. So it's worth putting some thought into how this works end-to-end, and what it actually does for your end user, including helping seduce them into expertise.
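Here's a tiny C sketch of the difference (all names hypothetical): a GUI handler records each user action as a commented, replayable script line, so the log teaches the CLI while also enabling playback, rather than leaving only a bare audit note.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Accumulated replay script; a real version would write to a file. */
static char replay_script[512];

/* The comment teaches; the command replays. */
static void record_action(const char *cli_cmd, const char *why)
{
    char line[256];
    snprintf(line, sizeof(line), "# %s\n%s\n", why, cli_cmd);
    strncat(replay_script, line,
            sizeof(replay_script) - strlen(replay_script) - 1);
}

/* A GUI button handler calls this alongside doing the real work. */
void on_load_button(const char *path)
{
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "load %s", path);
    /* ... actually load the file here ... */
    record_action(cmd, "user loaded a data file");
}
```

The end user can later read, edit, and rerun the script, easing their jump from GUI to CLI use.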
So Joe Developer looks at some source code and decides it's wrong and needs fixing. What's the most sinful thing they might do? It goes like this (all too often):

"I'll just copy and paste these 27 lines of text, comment out the old version, and edit a few characters here and there in the new version. Of course since I'm using a colorized editor (doesn't everyone?), I don't need to put a space after the comment marks, or put them all on the left margin either. And there's no need to explain why I commented out the old code either, *I* know what I'm doing. Seems to work? All done, check it in."
In my view, here's what they SHOULD do:
- Open the file for editing and make all necessary changes; test, review, test...
- While working, comment appropriately, with block comments and navigation comments as necessary; clearly delineated from the code, so I can mentally filter out and skim just comments or just code.
Most people are so laconic (telepathic, I suppose) that if they err at all, I'd rather they overcommunicate (at least as they see it). But also do a good job on the formatting so I can easily find and read the comments apart from the code!
And edit comments along with the code to keep them accurate. Think, "Two intertwined messages, one to the human reader and one to the compiler/interpreter."
- OK, now you're ready to do the checkin. Let the history manager keep the old version around -- that's its job. Don't hang onto old code unless there's a reason for it in the latest version.
- In the checkin log, it's appropriate to reference old versions -- although I have no idea how to do this SIMPLY and BRIEFLY in git versus CVS. Example:
Working file: ...
----------------------------
revision 2.281
date: 2013/01/11 23:42:57; author: ajs; state: Exp; lines: +27 -8
Oops, various improvements based on talking about neg-Q issues:
...
- in read_data_file() fix two problems created in 2.271 when
value fields switched to CSVs
...
----------------------------
- It's also reasonable to reference old versions of code in the new code, as a reminder, like this:
// The following warning is seen especially often, and in the original long
// form (see revision 1.190), it was confusing to users...
- Now in some cases you really do want to keep some unused code or comments around so they are highly visible in-context, without going to the history manager. Examples include:
+ rare-use print statements for debugging
+ alternate compilations for occasional testing
+ old code useful to understand the new code
For a single line, it's not too awful to just comment it out, for example:
# A crude way to test erf() at this point:
# while (<>) {chomp(); $x = erf($_); printf("erf(%f) = %23.20f\n", $_, $x);}
Later someone can uncomment the line to do a one-time test. For multiple lines, though, avoid just turning them into comments! This is very crude and marks you as a troglodyte.
In C or other compiled code, use #ifdefs (and include explanatory comments):
#ifdef SPECIAL_TEST
...
#endif
In interpreted code, other than in a build environment where scripts can contain conditional parts (like #ifdefs), you can still do better than commenting-out. For example in Perl (yes, PURPOSELY brief/terse in this case, so it looks weird):
# Turn on the following to rebuild config file:
if(0){
...
}
- A convention in C code for #ifdef'd-out code that's never to be used, just kept as documentation (or perhaps for future expansion), is this:
#ifdef notdef
...
#endif
Define "notdef" at your peril!
- Finally, it's great that git has modern history-manager features including directory versioning, change sets, tagging, and (gasp) private repositories. However, so far I'm not impressed with its ability to do simple things simply, like refer to an older version of a file. SCCS, RCS, and their network-wrapped ubertools like SoftCM and CVS might lack powerful features for unusual expert situations, but at least it was straightforward to think about the linear (or occasionally branched) history of any one source file.
...a summary of what I was referring to when I talked about "software productization." This is an area of great familiarity to me, having worked on many aspects of it for many years, but I realized it might be a fuzzy concept to a hardware group. I'll just list major areas of possible investment effort without going into great detail, but I can, if anyone wants.
- Solid software quality itself:
+ language and platform
+ clean/efficient architecture (refactoring when necessary)
+ common code/libraries
+ naming and parameterization
+ good/sufficient file, block, navigation, and inline comments
+ return code checking and anomaly handling
+ human factors, user interface, contextual inquiry
+ peer reviews/inspections
+ etc
- Documentation apart from software: End user and internal.
- Testing: Manual (recipes) and regression (automated daily).
- History manager (such as SoftCM, CVS, Git, ClearCase): Version control, branch/merge.
- Build processes: Multiple types of products from common source; repeatability; logging/diagnosis.
- Packaging and delivery: Includes install, update, downdate, remove, verify/manifest; extra credit for product relocatability.
- User/customer support including defect/enhancement tracking (such as WITS).
This is just from memory, hope I didn't leave anything out. :-)
Jokes:
"Good thing there are so many standards to choose from."
"Standard is better than better."
And yet more old material, from 1991-1998, to eventually merge above.
(Or, "Creating Mind-to-Mind Portable Software Without the Benefit of Telepathy")
By Alan Silverstein;
email me at ajs@frii.com
Last update: 981210 (tweaks 260403!)
What is this? It's a dessert topping and a floor wax, ...errr, It's a website and a seminar...
So what's it about? This document offers you lots of ideas for how to write software (and documents) with style.
"Style? What's that mean?" As a famous person once said, "We all have style, but few have class." (I forget who said this.) So maybe this sermon is really about writing classy software -- but "software with style" had a better ring to it. Also, while C++ has class(es), typical C++ programs deserve an F for style...
Anyway, right about now you might be wondering, "where does Alan get off, preaching to me about software with style?" Let me address that (very reasonable) concern with lesson #1:
Perfectionism != perfect.
I wrote this document because I'm a perfectionist and I want to share it around. This in no way means I'm perfect, or even necessarily better than someone else at software engineering. It does mean that I try, and I care, and I wish most everyone else I work with would try harder and care more too.
As another anonymous famous person once said: "Plans are nothing; planning is everything." Similarly, perfection is nothing, perfectionism is everything.
This document is based on my years of experience as a software engineer. (I started at HP in 1977.) Much of the content is debatable, and much of it is "fuzzy", but all of it works for me and I find it relevant.
Using concepts and practices such as I've set forth here, I've written a lot of pretty successful software, relatively quickly. OK OK, so most of it is obsolete today, but it had a low defect density while it survived, and people didn't hate using it or adopting it...
Maybe if there's any truth in the words that follow, you could get the same results?
Around 1991 I presented this seminar to eight UDL engineers in a bit over an hour and a half. Items that probably should be skipped for a shorter presentation are marked second-level.
If this material has value to others, it is not as specific pragmas or practices, but as a set of models, concepts, ideas, ideals, and philosophies. Perhaps to be effectively conveyed it must be presented orally, with animation and examples? Or maybe it suffices if people just think (and debate) about the issue at all?
Anyway, what follows are goals; no human being (including me) can live up to them all. So, in the immortal words of Ken Kesey, "take what you can use and let the rest go by."
For any semantics (meaning) to be accurately conveyed between entities, they must have context in common. In software this context includes the computer language, the overall design and goals of the application, the technical rationale for the design, abbreviations and naming conventions, etc.
Most writers assume too much context knowledge on the part of the reader, because in the process of creation they are deeply and intimately engrossed in the details. They "can't see the forest for the trees." Learning when to record your overview and assumptions in documents or comments, and how to make them digestible, is a fine art worth cultivating.
Successful software survives long enough to rot and need maintenance, even if it's initially "perfect".
When someone asks you a question, any question, about your code or document, consider it as useful feedback for how the work could be improved.
(Of course some people are induhviduals and some questions are stupid. But at least consider whether the question represents a valid general case.)
second-level
Be consistent so the real purpose, message, or benefit shines through and is maximally useful.
Be inconsistent when it shortens the overall size of the message! For example, "#endif /* FOO */" is brilliant when the "#if" is 10 lines back, but stupid and noisy when it's just one line back.
Offer them meaningful patterns that help fast, accurate comprehension.
To get them to do what you want, you must make the "right" path easiest to follow.
second-level
second-level
But interrupts carry a very high price.
If it's not urgent and it doesn't require a lot of back-and-forth, put down that telephone and use email!
If it's quick and/or you're in the mood to chat in person, stand quietly by the other person's desk until they are at a good interrupt point. Let them "push their stack," or even wave you off until later.
For example, for a Log() call, try to put the message to be logged on the same physical line as "Log", so a "grep Log" yields useful results.
(Unfortunately this often conflicts with readability in the context of the document. Well you can't have everything.)
When a co-worker asks a question, don't just give them a fish. If you can, teach them how to fish for themselves. They'll love you for it... Trust me.
"Why didn't you put a comment header on this file?"
"I thought it was obvious, and besides, it compiled and ran OK."
"Well great! It compiled once, and it's obvious, so can I throw away your source file now?"
Personal note: I hate software in which I must "play compiler" to infer the purpose or context of the algorithm.
Perhaps in an ideal world we'd view everything through a browser, and every connection would be a hyperlink? (And everyone would be very good at writing web pages, too?)
When the code compiles and runs you are only half way done (or less).
(Yes, of course, one person's "class" is another person's nightmare. At least put energy into caring about it. Discuss it with your peers. Beat their irrational neural nets into conformance with your own...)
Tradeoffs in clarity and ease of comprehension and maintenance.
Performance is easier to add than clarity.
(Alan finally treads on the thin ice...)
Put energy into choosing appropriate object names.
| Name | Meaning |
| --- | --- |
| iSwapLimp | pointer to integer which is the value of the limit (end + 1) of swap area |
| unMapChar | union that maps chars to something else |
| stFSNodepp_t | fileset node structure pointer pointer type: typedef struct FSNode ** stFSNodepp_t; |
| CHNULL | macro: #define CHNULL '\0' |
| CHNULLP | macro: #define CHNULLP ((char *) NULL) |
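For what it's worth, here's how a few of those names look in actual C declarations. Only the identifiers come from the table above; the FSNode contents and the union members are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

#define CHNULL  '\0'               /* macro: the null character */
#define CHNULLP ((char *) NULL)    /* macro: a null char pointer */

struct FSNode { int id; };               /* hypothetical fileset node */
typedef struct FSNode **stFSNodepp_t;    /* structure pointer pointer type */

union unMapChar {                        /* maps chars to something else */
    char ch;
    unsigned char uch;
};
```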
It's not necessary to explain every use of the language -- but do give general ideas of what's going on in tricky cases.
"Make it right before you make it faster."
(The ice gets thinner...)
"Oh no, let's not get into that religious debate again."
But, "believing themselves to be wise, they became fools."
So let's briefly revisit the debate...
"Software macho": "It was clear to me, it should be clear to you. What's your problem?"
Counter with: "If you don't document your work clearly enough, that's OK with me. You can be my living, breathing documentation. I assume I can come visit you as often as I find necessary... ?"
It's not the quantity of comments that deters the more proficient programmer. It's where you place the comments in the code. Block them before a fragment of code and try to keep inline comments to the right of code. The more proficient reader can concentrate on the code and bypass the comments. The comments are still available, however, for the neophyte, or as a reference.
"Who is this 'we' you keep referring to in your comments? I didn't write this trash, you did."
Humor has little place in software. A cute joke or a clever object name tires quickly when the code is revisited.
"Have you heard the self-referential joke about the guy who was giving a sermon... ?"
"Use the Force, Luke." If you have an SCM (software configuration management) system, like HMS or ClearCase, get obsolete files and text lines out of your current working set! Store them in past versions in the SCM, not in commented-out code.
In particular, avoid "I", "please", etc. Machines should be neutral, not chatty, and not over-friendly.
You can layer your documentation (for example, envelope it with an introduction) to make it palatable for various uses, such as where an ERS is now required, to reduce redundancy, and to decrease dead-end effort (scaffolding, umbilicals).
Developers must not allow any documentor to write end-user documentation about their work until the documentor is an expert on the work. (Mid-course corrections are easiest earliest.)
OK, that's it, all of it. Well, sort of. If I had hours and hours to lecture, I could go off on zillions of virtual hyperlinks that start in the distilled wisdom (wisdumb?) above. Like for instance, my "Guidelines For Designing Administrable Computer Services" (no longer web available).
Well anyway, thanks for listening. Maybe I'll run your code someday and think you're a genius. Maybe I'll have to read your source code, and I'll think you're a saint.
More raw text later:
...I came around to appreciating, even at times extolling, most of the virtues of rapid design and refactoring (agile programming). It admits that "no battle plan survives contact with the enemy," that we aren't initially the experts we think we are, that we have a lot to learn from end-user testing and feedback. It reverses the stodgy old HP model of not writing a line of code (or burning a PC board) until you've sent your ERS and IRS through enough phases of review and editing, then being afraid to deviate from the Gospel later when design issues inevitably show up. :-)
Of course any model can be abused or misapplied. A big problem with refactoring is the natural tension between perfectionists (like me) who don't know when to stop tinkering ("perfect is the enemy of done") and managers feeling perennial pressure to "paint it yellow and ship it." It's great if rapid and iterative prototyping allows you to test and refine your designs efficiently for maximum value, but it sucks when you're forced to ship "one prototype too soon" and then move on to something else.