||NetWorker ||Subscribe ||




Network Computing
in The New Thin-Client Age
by Jerry Golick


Today's data networks are becoming a practitioner's nightmare - difficult to manage and costly to scale. But networkers can take a cue from the telephone and television systems, where the end-user's access devices are cheap, disposable and simple to operate. More than ever, thin makes sense, for companies as well as developers.

There are many other problems with current data communications networks. Ever-increasing amounts of finite resources are required for end-user support and training. Security is challenging, and accounting is virtually non-existent. And, perhaps most damaging of all, the evolution of network technology is causing end users an enormous amount of frustration as they try to keep up with a pace of change that seems to increase at an exponential rate.

It doesn't have to be this way, however. Consider some of the great global electronic communications networks, such as the telephone and television systems. By any measurement, these networks have proved that they can be deployed successfully across wide geographic and national boundaries - not to mention their ability to scale to ultra-large deployments. For example, current estimates place the number of telephone handsets in the world in excess of 4 billion.

Data networkers would do well to examine these networks as we try to scale our own relatively modest offerings. While the Internet has gotten the lion's share of attention, it is in fact orders of magnitude smaller than what will be required for true global networking.

While the large telecom networks have many common attributes, a recurring theme seems to run across the implementations - the access device employed by the end user is thin. It is cheap, disposable and easy to operate. As a stand-alone device, a radio, television or telephone has little value. It is only when connected to the network that this device allows an end user an incredible amount of leverage. And because the network connection occurs via an open interface, these devices can be offered by many vendors, which in turn promotes price competition and technological innovation.

In short, thin makes sense. This article is designed to show networkers and corporate managers the concepts, challenges and concerns associated with new approaches to thin-client computing, and the potential benefits that will result from successful deployment.

THE MYTHICAL NETWORK COMPUTER

Thin-client networking suffers, at least on the surface, from a lack of agreement on terminology and definitions. No sooner are the words "network computing" uttered than the debates begin on the relative merit of having computers without disk drives. These debates inevitably end up being somewhat digital in nature; that is, on-again/off-again. We hear that the network computer is dead, the network computer is thriving, the network computer may be resurrected, the network computer was resurrected but now is dying again, and so on.

Fortunately, we can leave these less-than-illuminating discussions behind. Thin-client networking is no longer focused on hardware, but rather on architecture: the architecture of building seamless network applications that maximize the networker's ability to manage the network while preserving the end users' autonomy to select the most suitable mix of hardware and software to meet their requirements. Networks that have been architected according to thin-client precepts will be easier to scale, offer better security and audit capabilities, and provide smoother migration paths to new technology.

THE THIN-CLIENT CONCEPT

In an architecture-centric approach to thin-client networking, the majority of application processing will be done via a server process rather than on the client. The words "client" and "server" refer to a software process, not to hardware devices. Thin-client networking is based on the concept that from the end-user perspective, the interface is the system. End users really do not care where processing takes place, or where data is resident, as long as the interface is fast, consistent, seamless and easy to use.
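The division of labor described above can be sketched in a few lines. The article prescribes no code, so this is an illustrative Python sketch under assumed names: the "server" is a software process that performs all application logic (here, a trivial sum), while the "client" process only forwards user input and renders the reply.

```python
# Illustrative thin-client sketch: client and server are software processes;
# all application logic runs server-side, the client only presents results.
import socket
import threading

def server(listener):
    """Application logic lives here: compute a result from the request."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()
        total = sum(int(n) for n in request.split(","))  # the actual processing
        conn.sendall(str(total).encode())

def thin_client(port, user_input):
    """The client only forwards input and renders the reply."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(user_input.encode())
        return conn.recv(1024).decode()  # presentation only, no computation

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # ephemeral port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=server, args=(listener,))
t.start()
reply = thin_client(port, "2,3,5")
t.join()
listener.close()
print(reply)  # the interface shows the answer; the client never computed it
```

From the user's chair the interface is the system: the reply appears locally, while the work happened in the server process.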

The groundwork for thin-client networking was laid in the early 1990s when the Gartner Group introduced its now-famous reference designs for client-server systems (see Figure 1). By defining an application in terms of presentation, application and data-access logic, it became possible to consider segmentation both within and between these modules. Early client-server systems focused on the Remote Data Access module. Good examples are systems built around Powersoft's PowerBuilder or Microsoft's Access. Because both the presentation and application logic were performed by the client process, these applications were dubbed "fat." The Distributed Presentation design was used to slap a graphical front-end onto an existing legacy application (sometimes called wallpaper or screen scraping). The Remote Presentation design was mostly used for integrating applications on different UNIX hosts via X-Windows. Shedletsky [1] developed a similar though more detailed set of designs.
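The three logic layers behind the Gartner designs can be made concrete with a small sketch. This Python fragment uses invented names and data; the point is only where the network "cut" falls: below the data-access layer yields a fat client (Remote Data Access), above the presentation layer yields a thin one (Remote/Distributed Presentation).

```python
# Illustrative decomposition into the three logic layers of Figure 1.
def data_access(key):
    # Data-access logic: fetch raw records (a fake table stands in for a DBMS).
    fake_table = {"acct-42": 1200}
    return fake_table[key]

def application(key):
    # Application logic: business rules applied to the raw data.
    balance = data_access(key)
    return balance * 1.05  # e.g., apply interest

def presentation(key):
    # Presentation logic: format the result for the end user.
    return f"Balance: ${application(key):.2f}"

print(presentation("acct-42"))
```

In a fat client all three functions run client-side; in a thin client only `presentation` does, and the other two layers execute in server processes.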

Note that the Distributed Function design, which best represented the intent of client-server, was perhaps the most difficult to implement due to the complexity of design and the cost of support and maintenance.

However, neither the Gartner Group nor Shedletsky was able to predict the impact that browser technology, Java and high-speed switched networking would have on our ability to deploy new applications. These are the fundamental building blocks, or components, of thin-client systems. (For the purpose of this article, a thin-client system is defined as a cooperative processing environment that primarily implements applications based on remote and distributed presentation designs.) This environment can be dynamically morphed into a distributed function design through the use of applets as required. As with any client-server system, the primary intent is to provide a single system image to the user.

Also note that there is no need for a network computer in this definition. Network, or diskless, computers might be used in a thin-client network, but they are not required. Thin-client networking is not about what you have, but how you use what you've got.

BENEFITS OF BEING THIN

The ability to leverage existing desktop hardware and software is perhaps the single greatest benefit of thin-client networks. Any desktop computer capable of running a browser can participate in a thin-client network. From Macs to OS/2, from Windows 3.1 to Windows 2000, and for a host of emerging platforms, thin-client networks make sense. By taking a thin approach, organizations can get off the vicious treadmill of constant desktop upgrading. New features and functionality will be delivered from servers via the network. When replacements are required, networkers will be able to choose the technology that makes the most sense. End users who require limited functionality may find a network computer to be sufficient. Other users may require more powerful machines for, say, local CAD/CAM processing.

In other words, thin-client networking expands choices instead of limiting them. Another primary benefit of thin-client networks is enhanced security and audit capability (see sidebar: "Feeling Secure in a Thin World").

Still not convinced? Then consider some of the other potential benefits of a thin-client network:

  • Lower support and distribution costs. Since the applications mostly run on servers, there are fewer machines to configure. Support costs are lowered since there is more consistency across the implementations.
  • Interface portability. User profiles and interface specifications can be maintained via a server process. This means that as the end user logs onto the system, his or her interface profile is downloaded in real time. The implication is that the user's personal interface is now available from any piece of hardware.
  • Faster Mean Time To Repair (MTTR). Today, the failure of desktop hardware can be catastrophic to the individual user. While some files may be kept on a corporate or departmental server, many of them are stored locally. These must be recovered before the user can continue to work. In the worst case, the entire local station may have to be rebuilt, reconfigured and restored. This is not the case in a thin-client network. All personal files can be kept on back-office servers. In the event of an end-user machine failure, simply replace the old machine with a new one, log on and continue to work. This reduced MTTR also implies a lower cost of outage when these failures occur.
  • Capacity planning. Networkers will be able to perform better capacity planning in a thin-client environment. It will be possible to measure and evaluate the actual work being performed. This data may be plotted so that trends can be predicted. Since the desktop hardware/software environment will become more stable, most of the planning will now be for the back-office engines and the network to deliver the presentation information. For example, the number of concurrent users, the applications being used, the duration of time spent in each application and the disk space utilization can all be measured on a daily basis. Over time, as usage increases, network managers will be able to predict the capacity that they will require to handle projected loads. As a side benefit, the fact that the same application (say, a word processor) does not have to be duplicated on every machine should reduce over-capacity requirements - which is to say, better utilization of existing equipment.
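The capacity-planning item above amounts to fitting a trend to measured usage and projecting it forward. The sketch below, with invented daily peak-user counts and a simple least-squares line (an assumption; real planners may prefer richer models), shows the mechanics.

```python
# Hedged capacity-planning sketch: fit a linear trend to daily peak
# concurrent-user measurements and project the load at a future day.
def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

days = [1, 2, 3, 4, 5]                    # measurement days (illustrative)
peak_users = [100, 110, 120, 130, 140]    # measured daily peaks (illustrative)
slope, intercept = linear_fit(days, peak_users)
projected = slope * 30 + intercept        # projected peak load at day 30
print(f"growth: {slope:.1f} users/day, projected day-30 peak: {projected:.0f}")
```

With growth concentrated in back-office servers rather than scattered desktops, a projection like this maps directly onto server and network provisioning decisions.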

It is unlikely, however, that there will be cost savings on hardware. This is one of the great myths of thin-client networking. Hardware costs, regardless of which architecture is chosen, stay about the same.

MIGRATION AND POLITICS

Thin-client networks are not an "all or nothing" proposition. Organizations will naturally be reluctant to give up their investment in both fat-client applications and local personal productivity software such as office suites. For thin-client networking to succeed, care must be taken in choosing a migration strategy. Happily, this migration can be phased in gradually at whatever pace is deemed appropriate by organizational requirements.

As mentioned previously, a primary advantage of a thin-client system is that it will leverage the existing infrastructure. In terms of application deployment, this means that new applications can be deployed in the existing desktop hardware/software environment. The payoff is achieved not through reduced hardware acquisition costs, but by increasing the useful lifetime of the existing equipment and lowering overall support costs. It is unclear at this point how many more years an organization will be able to squeeze out of desktop hardware, but if the life cycles of ultra-thin components such as telephones or VTxxx and/or 3270-type terminals are any indication, the savings may be dramatic. By extending the useful life cycle, depreciation costs are reduced, which should have a positive impact on both cost of ownership and payback period.

The migration strategy must also consider the political side of thin-client computing. Many end users may be reluctant to give up the perceived personal privacy associated with fat-client processing. In addition, certain individuals may be using their desktop machines for "inappropriate" applications. Depriving them of access to these applications is certain to promote a degree of animosity (undeserved, perhaps, but no less real) toward the thin-client concept. As a final political hurdle, there may be in-house technical support specialists who will feel threatened if the desktop environment is simplified. Product certification is not required to manage thin desktops; most of the work is done on the server side.

There are a number of ways to address these political issues. First and foremost is a commitment by senior management to migrate to a thin-client network. In part, this has already been accomplished by the current interest in intranets. The benefits listed above may also become selling points for these managers. With guidance from higher-ups, the end-user community can be sold on the thin-client concept. This may involve short seminars explaining how thin-client networks will provide a more stable, reliable and functional environment. In particular, managers should emphasize that thin-client systems will be less disruptive and intrusive. Interface portability is another strong selling point; this will allow users to work from multiple locations.

The tech-support specialists will be more difficult to convince. Much of their training and certification has been geared to highly complex desktop systems; Windows 2000, for instance, contains some 40 million lines of code. However, since the introduction of thin clients is intended to be gradual, there will be time for retraining and refocusing.

SELECTING THE RIGHT APPLICATIONS

The first candidates for thin-client systems will generally be existing mainframe legacy applications. Many of these applications can benefit from a reengineered interface, and thin clients make sense as graphical replacements for monochromatic, character-based terminal screens. These are relatively low-risk projects because the existing application can be maintained until the new system is ready.

Since browser technology is the primary thin-client implementation, all intranet-based applications may also be considered part of the migration. Many groupware applications such as Lotus Domino can be considered good examples of thin-client systems.

Local processing, if required, can be provided via dynamic applets. These allow application designers to implement systems based on the Distributed Function model if needed. The advantage of using such a model is that it can leverage desktop CPU cycles for custom transformations or presentation work. An added bonus is that applets can also be used to access an object broker such as OMG's CORBA or Microsoft's DCOM to provide a wide range of capabilities. These capabilities can include personal productivity applications, data analysis programs, or transactional enablers. They might be security agents, multimedia viewers, games or book readers. That's the beauty of using object technology. Application developers are no longer constrained by having to write to one platform; they can use whatever hardware/software technology makes sense.

The ability to dynamically deploy client software will assist in the implementation of transactional systems. While SQL may be the language of choice for analytical applications, it does not provide a clean interface to most transaction processing monitors (TPMs). In addition, most networked SQL offerings only provide a simple request-reply interaction model. However, the downloaded applets can provide a wide range of connectivity options and custom syntax as required. Further, if these applets are written in a neutral syntax language such as Java and implement standard object calls, designers will have a fully decoupled environment allowing them to modify both the clients and the servers at will.
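The decoupling argument above can be illustrated with a small sketch. The service and class names here are invented for illustration: because the "applet" codes only against a neutral interface (the standard object calls of the text), the back end can be swapped from an SQL-style store to a TPM-style monitor without touching client code.

```python
# Sketch of client/server decoupling via a neutral service interface.
from abc import ABC, abstractmethod

class OrderService(ABC):
    """Neutral interface; stands in for a standard object call."""
    @abstractmethod
    def submit(self, item, qty): ...

class SqlOrderService(OrderService):
    def submit(self, item, qty):
        return f"SQL insert: {qty} x {item}"       # request-reply style back end

class TpmOrderService(OrderService):
    def submit(self, item, qty):
        return f"TPM transaction: {qty} x {item}"  # transactional back end

def client_applet(service: OrderService):
    # The downloaded applet is written against the interface only,
    # never against a concrete server implementation.
    return service.submit("widget", 3)

for backend in (SqlOrderService(), TpmOrderService()):
    print(client_applet(backend))
```

Either side can now evolve independently, which is precisely the "modify both the clients and the servers at will" property claimed above.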

The message is clear: The wider the deployment, the greater the benefit of using a thin-client approach. Small, homogenous departmental systems may be better off sticking with fat clients. There are a wide variety of rapid application prototyping tools available that enable the development of applications using the fat-client model. The payoff for thin clients increases as the systems become larger; the cost differential for support and management vis-a-vis the traditional fat-client approach grows rapidly.

THIN-CLIENT CONFIGURATIONS

Thin-client networks can be implemented in a variety of configurations. The basic building blocks are the star and tier (see Figure 2). These two fundamental configurations can then be joined in a variety of hybrid network types.

Selection of a configuration will generally be dependent on the network design requirements and acceptable tradeoffs. Star configurations favor performance (by reducing the number of hops between the client and the server), while tiers reduce support costs and simplify implementations. However, star configurations generally require a "fatter" client since they must be configured for access to all possible servers. This is why thin-client networks tend to favor the use of tiered configurations.

In real-world settings, the two configurations are often joined together to form hybrid configurations. The third configuration displayed in Figure 2 shows a tier relationship between the client and level 2 server, and a star (or mesh) relationship between level 2 and 3 servers. This is a common configuration used in many organizations for access to the Internet via a proxy server or similar gateway. The advantage of this approach is that it allows the network manager a control point (i.e., the proxy server) and takes advantage of using a star topology for performance where possible. This type of configuration can be successfully applied to other applications such as database access, transaction processing and directories.
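A toy model makes the hybrid's "control point" property concrete. In this illustrative sketch (server names and the audit mechanism are assumptions, not from the article), every client request tiers through a single level-2 proxy, which fans out star-fashion to level-3 servers and can log or police traffic along the way.

```python
# Toy model of the hybrid configuration: tier from client to a level-2
# proxy, star from the proxy out to level-3 servers.
LEVEL3_SERVERS = {
    "db":  lambda q: f"db result for {q}",   # database server
    "txn": lambda q: f"txn result for {q}",  # transaction server
}

audit_log = []  # the control point can record every request

def proxy(service, query):
    """Level-2 server: single choke point for audit and access control."""
    audit_log.append((service, query))
    return LEVEL3_SERVERS[service](query)

print(proxy("db", "select balance"))
print(f"{len(audit_log)} request(s) audited")
```

Because all traffic crosses the proxy, accounting and security policy live in one place, while the star fan-out behind it keeps the server-to-server hop count low.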

The physical relationship between the client and the level 2 server deserves additional attention. In many organizations, desktop systems are connected via shared media LANs based on 802.3 or 802.5 protocols. While these are adequate for the burst-mode traffic of typical fat-client applications, they may not perform as well under the constant load offered by thin-client systems. As a rule of thumb, fat-client systems are throughput-based while thin-client systems, being more interactive, are response time-based. Dedicated bandwidth is preferable whenever possible.
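The response-time concern above can be quantified with a standard queueing approximation. This M/M/1 mean-residence-time formula is an assumption brought in for illustration (the article does not derive it): on shared media, rising utilization inflates interactive response time sharply even while throughput is still being delivered, which is why dedicated bandwidth suits thin clients.

```python
# M/M/1 approximation: mean residence time grows as 1/(1 - utilization).
def response_time(service_time, utilization):
    """Mean time in system for an M/M/1 queue (utilization < 1)."""
    return service_time / (1.0 - utilization)

SERVICE_TIME = 0.01  # 10 ms to serve one screen update (illustrative)
for util in (0.2, 0.5, 0.9):
    rt_ms = response_time(SERVICE_TIME, util) * 1000
    print(f"utilization {util:.0%}: mean response {rt_ms:.1f} ms")
```

At 90 percent utilization the mean response is eight times the 20 percent figure, which a throughput-oriented fat client barely notices but an interactive thin-client session feels on every keystroke.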

(As a corollary to the preceding paragraph, I have often wondered if Bob Metcalfe's original intention with Ethernet was to develop an alternative cabling scheme for terminal-host connectivity. If so, running thin clients over 802.3 will really have brought us full circle.)

The implication for networkers is consideration of switched LAN technology right down to the desktop. In an ideal world this would be B-ISDN (using ATM), but it is far more likely, and affordable, that switched 802.3/802.5 will be used in the short term. The deployment of this technology is one of the costs that should be associated with thin-client networks.

DEALING WITH PERSONAL PRODUCTIVITY

Some applications should always be kept local. Consider the calculator. No one wants to have to make a phone call to add a couple of numbers together. Many people carry a calculator with them (or have it embedded in another device, such as a watch) so they can have easy access to this function.

Much the same case could be made for word processing and spreadsheets. The widespread deployment of office suites such as Microsoft's Office reinforces this point. It seems that most end users require this function locally.

Can personal productivity be supplied by thin-client networks? Well, yes and no. Product offerings such as Microsoft's Windows NT Terminal Server and Citrix's WinFrame and MetaFrame can provide this functionality. These products allow multiple instances of Windows NT to be run on a single hardware platform. The sessions can be assigned to users who can then run NT applications. The keystrokes, screen images and mouse events are transmitted across the network via proprietary protocols. WinFrame currently supports only Windows NT 3.51 applications; the Microsoft product is required for Windows NT 4.0 applications. MetaFrame is an add-on to Terminal Server that extends its functionality by supporting multiple heterogeneous client types (e.g., Mac, X-Windows). It also provides numerous enhancements in management and security.

The implication is that an end user, regardless of desktop platform, can run any NT application as if it were native to his or her own environment. In other words, a Mac or X-Windows machine can give the illusion of running Microsoft Office locally.
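The remote-display idea behind these products can be sketched in miniature. This is an illustrative toy, not Citrix's or Microsoft's actual protocol: the client forwards raw input events, while the server holds all application state and returns rendered "frames" for the client to display.

```python
# Toy remote-display session: input events go up, screen frames come back.
class RemoteSession:
    """Server side: owns the application state, renders the screen."""
    def __init__(self):
        self.buffer = ""  # application state never leaves the server

    def handle_key(self, key):
        self.buffer += key                 # run the application logic
        return f"[screen] {self.buffer}"   # rendered frame for the client

session = RemoteSession()
frame = ""
for key in "hi":                # the client merely forwards keystrokes
    frame = session.handle_key(key)
print(frame)                    # the client's only job: display the frame
```

Production protocols add compression, frame differencing and graphics primitives, but the division of labor is the same: events up, pixels (or characters) down.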

But there are some problems. One concern is how well these products may scale, in particular Terminal Server with the MetaFrame addition. Petreley [2] has voiced a number of concerns over NT's ability to handle multiple terminal sessions and Office's ability to support more than one user concurrently on the same machine. He also questions Microsoft's commitment to the concept of multi-user support, which on the surface runs counter to Microsoft's primary marketing concepts.

Another concern with using these products is licensing costs. With Terminal Server, Microsoft requires that each end-user machine have a Windows NT Workstation license regardless of which operating system is actually being run. This means that any Mac, OS/2 or X-Windows device will require an NT Workstation license to access Microsoft's Terminal Server. Petreley estimates that the software costs of a Terminal Server/MetaFrame setup for 50 concurrent users would be in excess of $20,000, not including the cost of office suite software.

Are there other solutions? Two in particular, but you may not like either one. The first is the use of personal productivity software from companies other than Microsoft. The idea is that as long as the third-party software is compatible with Microsoft file formats, you can still interchange documents. A number of organizations make such alternative suites; Lotus SmartSuite is perhaps the best known. Packages are available for other operating platforms as well. It may also be possible to deploy thin local word processing/spreadsheet systems using Java, and then add functionality via applets as required. To date, there has been little success with this strategy since performance and functionality have been lacking. The primary problem with this solution is the vested interest most end users have in their training and comfort level with Microsoft applications; they are naturally reluctant to learn a new interface.

The second solution is to accept the inevitable and deploy personal productivity applications locally using the Windows/Office bundle. While this reduces some of the benefits of using a thin-client architecture, it does not negate them. Internally developed applications can still be employed using thin-client techniques.

There is a pressing need for a thin-client personal productivity suite that can maintain compatibility with the Microsoft file formats and offer a reasonable level of performance. Whoever develops such a package will have the world beating a path to his or her door.

THE FUTURE OF THIN

The potential of thin-client networking is unmistakable. What is less clear is how long it will take, how many mistakes will be made and what it will cost.

The convergence of digital technologies such as voice, video and data will drive the requirement for a wide variety of thin devices. We are beginning to see indications of this in the next generation of cell phones, TV set-top boxes and even video game consoles. The rapid acceptance of ultra-light devices such as the Palm Pilot and Windows CE palmtops is further evidence of this trend.

As the Java language, object brokers and the Internet continue to evolve and mature, they will collectively form an infrastructure where new functionality can be dynamically delivered as required. This may give birth to the concept of "just in time" applications.

However, there is an enormous vested interest in fat clients and fat-client computing. Today's reality is one of large Windows-based desktops running enormously large and complex operating systems and applications. Microsoft, Windows-based independent developers and much of the technical community have little motivation to change this environment; it is profitable and affords great job security. All indications are that the next versions of Windows and Office will continue this trend - larger machines, larger disk drives, larger memory, complexity upon complexity. At some point, though, all of this complexity is bound to collapse under its own weight.

Don't wait for this to happen to your organization. Start planning your thin-client networking migration today. The end result will be an infrastructure that is easier to manage, easier to scale, more productive and less costly to operate.

Finally, less is truly more.



The Association for Computing Machinery