It seems that every six months an entirely new class of products and APIs appears, designed to fill spaces left unaddressed by its predecessors. Many server vendors now make major enhancements to their core products more than once a year. This makes it hard to build a system, because there is the risk that the products are unreliable or have unanticipated behavior, or that advertised features are not quite ready.
Sometimes it is wise to forgo a new class of products or a new API until it has matured. The problem is, you never reach that state anymore. If something is so mature that it is no longer evolving, it is dead.
The problem is not that products evolve; it is the rate
of evolution that has sped up, as a result of the tremendous
growth of the Internet and the spread of computers to
mainstream and even home use.
The Java core API is a good example of
an evolving technology. The hunger for enhancements is
so great that people are usually willing to put up with
bugs in order to use the new features. Take the Swing API. When Swing was in beta 0.5, we already had clients using it for mission-critical applications. They could not be dissuaded. Their reasoning was that, even though it was buggy then, it would be stable by the time it was officially released, and they wanted to base their development on a standard. Swing
was also so much nicer than its predecessor, the
AWT, that staying with the AWT
was not even an option. One could say this is a special case because the AWT was flawed, but in fact it is not: in every corner of Internet technology you see this same constant outcropping of things so new and so different that they cannot be dismissed.
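To make the contrast concrete, here is a minimal sketch of a Swing program. It uses the javax.swing package names the API eventually settled on (during the betas described above the package was still com.sun.java.swing), and the class name is purely illustrative:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JOptionPane;

    // A minimal Swing application. Swing components are lightweight,
    // drawn in Java rather than by native peers as in the AWT, and
    // support a pluggable look-and-feel.
    public class HelloSwing {
        public static void main(String[] args) {
            final JFrame frame = new JFrame("Hello Swing");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            JButton button = new JButton("Press me");
            // Swing reuses the AWT delegation event model, so code
            // migrating from the AWT keeps its listener structure.
            button.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    JOptionPane.showMessageDialog(frame, "Pressed");
                }
            });
            frame.getContentPane().add(button);
            frame.pack();
            frame.setVisible(true);
        }
    }
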
Not all of these new inventions have staying power, however.
When “push” technology first appeared, it was touted as
the new wave that would drive our desktops from now on.
As it turned out, this technology's strength is in the
deployment of applications, as opposed to dynamic desktops.
In only one year, the entire perception of how this technology should be used turned completely around. It is a major challenge for a manager to sort the promising technologies from the doubtful ones, and to move cautiously yet without fear of using something new, because everything is new.
The viability and professionalism of the provider of a technology must also be considered when making a selection. A great many Java product companies are startups, or small organizations recently acquired by larger ones. Of course, all companies start small; in the mid-1980s Oracle was a fledgling company and an underdog, battling against mainframe CODASYL and IMS databases, so smallness itself should not be a handicap. It is a red flag, however. In a recent project, an object-oriented “blend” product was selected to provide an object-oriented layer for a relational database. Many vendors were evaluated, and since the Java binding of the ODMG standard for object-oriented databases was fairly new, not all vendors had incorporated it into their products. As a result, there was quite a bit of disparity between the models used by the different vendors. The vendor that was finally selected was persuaded to accelerate its schedule for certain feature enhancements of interest to the project, at the expense of others. Since it was a small vendor, and this was a large customer, convincing it was easy. The vendor made the changes, but when the product came into use, other unanticipated limitations were discovered. Luckily, this happened during the project's prototype stage, because the limitations necessitated reconsideration of the project's core architecture.

The constant change of technology is a major problem for staff training. It is virtually impossible to staff a project with people who have experience in all the things you want to use. I recall seeing a newspaper ad in February 1996 for a programmer with “2+ years of Java development experience.” Apparently the human resources person who wrote the ad was not aware that at the time there were perhaps 20 people in the world who fit that description, and they were very busy.

The cost of obtaining specialized talent can also reduce flexibility, if not increase risk. One project I consulted on had originally decided that Lotus Notes was the best technology for its application. Java was selected instead, because it was calculated that while Lotus Notes developers could be found, the cost of obtaining them in the required numbers would double project costs compared with hiring C++ programmers and training them in Java.

The incentive for tolerating these risk factors is that these new technologies promise lower-cost development, much wider deployment, greater end-user productivity and functionality, and greater adaptability. To get control of the risks so that these gains can be realized, a strategy is needed, with a new emphasis on rapid delivery driven by core capabilities. To this end, I propose three rules that make up this strategy. Surely you have heard these rules before, but I am putting them at the top of the list.
The first rule is, keep each component simple.
By keeping things simple, you can get quick turnaround in development; this is important so that you discover quickly whether the project is going to run into technological difficulties. Do not create unnecessary layers. Make sure each layer has a well-defined purpose that is easy to conceptualize. Otherwise, your best staff resources will
be spent creating advanced local architectures, instead
of understanding and applying the plethora of new technology.
Also, with things changing so rapidly, you don't want to invest too much effort in in-house designs that may become obsolete or be replaced by components available in the marketplace. True component-based software is finally appearing. Don't worry that staff won't be challenged: it is a tremendous challenge to understand, evaluate, assemble, and apply all these new technologies, even if custom programming is kept simple.
The second rule is, focus on the primary mission
of the application, and make that work really well.
Concentrate on what the application has to do most of
the time. Tune the system for that. A system that does everything equally well either is overfunded or in fact does everything in a mediocre manner. If the primary mission
is processing orders, design the system for that. If the
primary mission is checking patient records, design the
system for that. In large systems that are built in stages,
sometimes the wrong piece is built first and then drives
the rest of the design. For example, if the system must
interface to an external system, a data loading mechanism
might be built early on, and the system's primary features
added later. Unfortunately, when this is done, there is
a risk that the system will be tuned for the data loading
process, which is not the primary purpose of the system.
The result is a system that may load data well, but has
poor response time for calling up data—the primary mission
and success criterion of the system.
The third rule is, if using new technologies, build
a prototype. You cannot avoid using new technologies today,
and things are changing all the time. The best
way to deal with this unstable situation is to move quickly,
to avoid obsolescence, but prudently, in measured steps,
to manage the risk of using new features. Don’t commit
to an architecture without building a prototype that tests
the core mission-critical functionality using the new
technologies. That is the only way that technology limitations
will be exposed so that a successful full-scale architecture
can then be designed. Further, it takes someone with a
great deal of experience with these technologies to judge
where the risks might be, and what features prototype
development should focus on.
Extract from “Advanced Java Development for Enterprise Applications” by Clifford J. Berg. Prentice Hall PTR, New Jersey, 1998.