Sometimes in large projects the requirements phase
is also separated into a user-view phase, producing
use cases or their equivalent, and a requirements
specification phase, in which the requirements embedded
in the use cases are recast as a features list. The
main justification for doing this is that the features
list embodies a checklist which can be used as a basis
for test plan development and acceptance testing. Also,
if a domain team produces the use cases and a separate
development team produces the requirements list, the
exercise forces the developers to digest and understand
the system requirements by expressing them in their
own terms.
A logical design partitions the system into conceptual
components and specifies their behavior. It is important
that as much information about the problem domain as
possible be reflected, and that minimal attention be paid
to performance, platform, or technology choices, unless
these are constants known ahead of time, or are key
choices that affect the functional capabilities.
Postponing these considerations leaves platform and
technology choices to implementation experts. The
important goal of the logical design phase is to combine
the knowledge of application experts and implementation
experts to produce a logical model of the system which
could be implemented and would work, though perhaps not
optimally.
If the system has transactional behavior, transaction
partitioning should be addressed in a logical sense
(what are the transactions?) but not in terms of
implementation (e.g., whether DBMS locking is to be used,
or whether an optimistic or checkout policy is implemented
instead). This does not mean that no thought should be
given to such issues; in fact, judgment about the likely
technical challenges of the alternative logical designs
is crucial to arriving at a design that can be built
within the project's budget. Pinning down specific
implementations is probably premature, however, since at
this point a prototype has not even been built, and in
any case the focus should be on making sure all the
application's functionality is addressed.
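By way of illustration, the following is a minimal sketch, with invented names, of how a logical design might record what a transaction does while leaving the concurrency mechanism open for the detailed design to decide later:

    /**
     * A transaction identified in the logical design: it records what
     * must happen atomically, not how atomicity is achieved. (All
     * names here are hypothetical, for illustration only.)
     */
    interface TransferFunds {
        /** Move the given amount between two accounts as one atomic action. */
        void execute(String fromAccount, String toAccount, long amountCents)
            throws TransferFailed;
    }

    /**
     * The concurrency mechanism is deferred behind its own interface;
     * the detailed design can later bind it to DBMS locking, an
     * optimistic policy, or a checkout scheme without touching the
     * logical definition of the transaction.
     */
    interface ConcurrencyPolicy {
        void begin();
        void commit() throws Conflict;
        void rollback();
    }

    class TransferFailed extends Exception {}
    class Conflict extends Exception {}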
A prototype should be developed during the logical
design phase if possible. If new technologies are
involved, as is almost inevitable nowadays, what are
their limitations? Do they perform as advertised? What
surprises do they have in store? Scale prototyping and
testing should also be performed in an investigatory
manner during this stage. Note that bugs such as memory
leaks in key third-party components may not show up
until the system is tested at scale.
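As a sketch of what such investigatory scale testing can look like, the harness below repeatedly exercises a component and samples heap usage after collection; a steady upward trend suggests a leak that a brief functional test would never reveal. The Parser class here is a hypothetical stand-in for a third-party component, given a deliberate leak so the probe has something to find:

    import java.util.*;

    // Hypothetical stand-in for a third-party component, with a
    // deliberate leak: a growing static cache that is never evicted.
    class Parser {
        static List cache = new ArrayList();
        void parse(String input) { cache.add(input); }
    }

    public class LeakProbe {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            for (int i = 1; i <= 100000; i++) {
                new Parser().parse("sample input " + i);
                if (i % 10000 == 0) {
                    System.gc();  // encourage collection before sampling the heap
                    long used = rt.totalMemory() - rt.freeMemory();
                    System.out.println("after " + i + " iterations: "
                                       + used + " bytes in use");
                }
            }
        }
    }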
The detailed design phase modifies the logical
design and produces a final detailed design,
which includes technology choices, specifies a system
architecture, meets all system performance goals,
and preserves all of the application functionality and
behavior specified in the logical design. If a database
is a component of the system, the schema that results
from the detailed design may be radically different
in key places from the one developed in the logical
design phase, although an effort should be made to use
identical terminology and not change things that do
not need to be changed. The detailed design process
should document all design decisions that require schema
changes, or in general any changes to the logical design,
and the reasons for the change. The project manager’s
challenge will be to again disseminate understanding
of the new design, which is replacing a logical design
that had achieved credibility and consensus. This is
the reason why all changes need to be well documented,
so that there is a clear migration path and the changes
do not seem radical or arbitrary.
If a features list approach is used, it is easy to separate
the project into builds, and make the detailed design
and implementation phases of each build iterative. The
system’s features can be analyzed for dependencies
and resource requirements, and assigned to project builds
based on these dependencies, critical path, and priority.
The minimum set of features for a testable bootstrap
system can then be determined. For each build, additional
features are added, and all tests are rerun, resulting
in a working system after each build with increasing
functionality and reliability. During each build, a
detailed design of each feature can be performed: the
packages and classes affected are identified, rough-cut
and then detailed updates to the specifications are
produced, possibly iteratively, and finally the feature
is implemented in code. I have seen this technique
work extremely well on many compiler and other
projects with which I have been involved, and it is
directly applicable to all kinds of systems.
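As a small illustration of the dependency analysis, the sketch below places each feature in the earliest build that follows all of its prerequisites; critical path and priority would refine the assignment further. The feature names and dependency table are invented for the example:

    import java.util.*;

    public class BuildPlanner {
        public static void main(String[] args) {
            Map deps = new HashMap();  // feature -> prerequisite features
            deps.put("login",     new String[] {});
            deps.put("accounts",  new String[] {"login"});
            deps.put("transfers", new String[] {"accounts"});
            deps.put("reports",   new String[] {"accounts"});

            Map build = new HashMap();  // feature -> Integer build number
            Iterator it = deps.keySet().iterator();
            while (it.hasNext()) {
                assign((String) it.next(), deps, build);
            }
            // e.g. {login=1, accounts=2, transfers=3, reports=3}
            System.out.println(build);
        }

        // Place a feature one build after its latest prerequisite.
        static int assign(String f, Map deps, Map build) {
            Integer done = (Integer) build.get(f);
            if (done != null) return done.intValue();
            String[] pre = (String[]) deps.get(f);
            int latest = 0;
            for (int i = 0; i < pre.length; i++) {
                int b = assign(pre[i], deps, build);
                if (b > latest) latest = b;
            }
            build.put(f, new Integer(latest + 1));
            return latest + 1;
        }
    }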
Once a detailed design for a build is agreed upon, the
implementation phase should make very few changes
to the system design, although some changes are inevitable.
It is critical for maintainability that all changes
be incorporated back into the design specifications.
Otherwise, the value of the system design will be lost
as soon as the system is released, and the only design
documentation will be the code. A system documented
only by its code is very hard for management to understand
and upgrade, outsource, or disseminate.
A Java-specific reason to incorporate changes back into
specifications is that JDK 1.2 introduces the concept
of package versioning. A package is viewed as a field-replaceable
unit, and besides its name has two identifying pieces
of information associated with it: its specification
version and its implementation version. Two versions
of a package that have the same specification version
are implemented according to the same specifications,
and should therefore be field-replaceable; the only
difference between them should be that one has bug fixes
which perhaps the other does not. A user might choose
one implementation version over another if the user
has instituted workarounds for certain known bugs; otherwise,
the latest implementation version should be the most
desired one. You can see that in order for this methodology
to work, there must exist a well-defined set of
specifications for every package, and those specifications
should have a version number associated with them.
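At run time these two numbers are exposed through the java.lang.Package class introduced in JDK 1.2; they are read from the Specification-Version and Implementation-Version attributes of the package's JAR manifest. A small sketch:

    // Querying the two version numbers of a loaded package at run time.
    // java.lang is used because it is always loaded; any package whose
    // JAR manifest carries the version attributes works the same way.
    public class VersionCheck {
        public static void main(String[] args) {
            Package pkg = Package.getPackage("java.lang");
            System.out.println("Package:      " + pkg.getName());
            System.out.println("Spec version: " + pkg.getSpecificationVersion());
            System.out.println("Impl version: " + pkg.getImplementationVersion());
            // isCompatibleWith() compares dotted specification version
            // numbers, so code can insist on a minimum specification level.
            System.out.println("Meets 1.1:    " + pkg.isCompatibleWith("1.1"));
        }
    }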
Some methodologies view final QA testing as a phase of
its own. This is a legitimate way of looking
at final testing. However, developers are still busy
during this period. They are not adding new functionality;
instead, they are responding to bug reports from the
QA group. The workflow is not any different, and all
feedback mechanisms for changes and notification must
still be in place. It is not clear, then, whether
distinguishing between development and final testing is
of much value, except to mark a clear cutoff point for
adding new features and to begin testing the packaging
and deployment
system. The QA group will likely make a “frozen” copy
of the project code and test it in complete isolation,
but testing of “frozen” code in this way still does
not remove the need for feedback to the developers, who
must subsequently fix the reported problems. In fact,
generating frozen releases
is part of the normal build process, even though it
may receive increased emphasis in the final build.
Throughout all these phases, continuity is essential.
A project that assigns domain analysis tasks to analysts
and then reassigns those analysts during the implementation
phase is operating with a severe handicap, if not doomed
to failure. Domain expertise must remain within the
project throughout its lifecycle. The dilemma is that
once up-front analysis is complete, the analysts have
less work, and their role becomes more passive. Often
this cannot be justified: these people are valuable
to the business and are needed elsewhere. A solution
that often works well is to keep a few domain experts
assigned full time, and give them the permanent role
of facilitator. In this capacity, they perform domain
analysis, and execute all change requests to requirements
specifications. They also develop user-oriented test
plans, and construct system documentation. Their role
therefore remains an active one, and their knowledge
about the application, and contacts within the organization,
can still be tapped when questions arise during development.
Extract from "Advanced Java Development for Enterprise Applications" written by Clifford J. Berg, published by Prentice Hall PTR, New Jersey, 1998.