Hosting: What is it and why should CIOs care?

Jan 5, 2000
Tim Landgrave

You've seen the press reports and analysts' predictions. Writers in the PC tabloids rave about the new business models that allow large companies to outsource their electronic mail, customer relationship management, or accounting software. Most analysts predict that the application service provider business (also known as hosting) will be anywhere from a $2 billion to a $10 billion business by 2003. But if you work for one of those companies that's certain it will never let its IT functions leave the four walls of the building, why should you care about software hosting at all?

Read the entire series
This is part 1 of a four-part series on issues that a CIO needs to consider when examining outsourcing or hosting.
What is "hosting"?
Before we go much further, it may help to define what we mean by hosting. For the purposes of this article (and the remaining articles in the series), we'll define hosting as "running all or part of an application in a shared, centralized data center." Wait a minute! You already do that yourself! Well, sort of.

For the last few years, most larger companies have been running mainframe or minicomputer applications as part of a distributed network that uses intelligent workstations (PCs running Windows) as a front end to the system. Doesn't this qualify as a hosted environment? Yes, by our definition it does. The problem arises when you consider how most companies have implemented microcomputer technology in the data center: file servers, application servers, and directory servers. These assets haven't been given the same attention, consideration, and investment as their minicomputer and mainframe brethren.

The best example of this comes from companies that have "bet the farm" on Windows NT 4.0. I've been in on hundreds of these installations over the past couple of years, and I can count on one hand the number of facilities where simple rules like restricting physical access, following change control procedures, and limiting administrator accounts are even documented, much less followed. I've heard countless CIOs and network administrators complain about NT 4.0's reliability, only to find that their standard answer to any problem is to reboot the server. In most cases, simply stopping and restarting a single service would have solved the problem with no system downtime. Most of these systems were installed by system engineers who've never set foot in a real data center; their concept of reliability is rebooting less than once a day. In fact, in installations where strict rules and procedures for installing, supporting, and maintaining Windows NT 4.0 are followed, companies achieve 99.999 percent uptime with ease. But most companies will never reach this level of reliability.
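
To make the point concrete: on NT 4.0, restarting just the failing service from a command prompt brings that one component back without touching the rest of the server. (The Print Spooler is used here purely as an example; substitute whichever service is actually misbehaving.)

    net stop Spooler
    net start Spooler

A server-wide reboot, by contrast, takes every application and every connected user down with it.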

Why can't most companies build reliable PC data centers?
Because they don't have the "data center" mentality for their distributed systems. But hosting companies do. CIOs and IT managers are turning to hosting companies to alleviate several major problems in their own organizations:
  1. The shortage of experienced engineers and developers makes it difficult for companies to hang on to some of their most valuable employees. They can generally give their top people compelling reasons to stay, but they soon discover that they don't have enough manpower to do more than maintain the status quo.
  2. The cost of deploying software to PCs is declining only modestly, and with the shortage of qualified developers needed to re-code one- or two-tier applications as true distributed applications, these costs aren't going away by themselves.
  3. The capabilities of most software packages far outstrip most companies' ability to install and support them. Very few companies use the collaboration features of products like Microsoft Exchange or Novell GroupWise; the effort involved in just getting people up to speed on e-mail and calendar functions, and in maintaining the system, leaves little time for innovation.

New companies that host applications on the microcomputer platform have the distinct advantage of building their data centers from scratch. They can design and build robust platforms with defined policies and procedures for issues like change control, backup, and problem escalation. Application Service Providers (ASPs) are counting on the availability of reliable, high-speed bandwidth to make their data center connections transparent to the customers who engage them. And with the increased emphasis on data warehousing and analysis of Web data, as well as the future need for reliable video storage and retrieval, the data storage requirements of most companies will increase rapidly. ASPs will be able to provide this storage at a fraction of the cost, since they'll be buying in larger quantities over longer terms.

It's not as easy as it looks
Since ASPs are going to build these new, shiny, robust data centers, we should just hand over the keys to our own data center and let them move our applications to their new platforms, right? Not so fast; it's not as easy as it looks. Companies building new platforms based on off-the-shelf PC technology have a lot of challenges ahead of them. Like the mainframes that preceded them, these new platforms will have to meet expectations of scalability, reliability, manageability, and interoperability. And meeting these requirements is a lot more difficult than it appears on the surface.

In the next three articles in this series, we'll look at the issues surrounding the ability to develop, install, and maintain a distributed platform based on off-the-shelf microcomputer hardware and software that can support thousands of users from different departments and/or companies.

So why should you, as a CIO, care about the issues ASPs will face in building their next-generation platforms? I can give you two very good reasons.

First, if you're going to hand over all or part of your data center operations to one of these firms, you should know that they've thought about and resolved these issues. Your reputation—and probably your job—is on the line if you send out a mission-critical application to a company that fails to deliver.

Second, and most importantly, you can use these issues as a guideline if you choose to "self-host." Self-hosting, in my opinion, will be the next big trend after CIOs try sending their work out to external hosting providers. Once some applications have been moved out of their own data centers, CIOs will have the time and labor necessary to re-architect their own data centers. Leaders who use this time wisely will be in a position a couple of years from now to take back the applications that they've farmed out to ASPs and run them on their own internal platforms—now redesigned to handle the load properly. Let the ASPs burn through their cash to find out the pitfalls of building and maintaining shared data centers, and then you can use the technology to create your own next-generation data center. Over the next three weeks, we'll show you how to gauge their progress.

What do you think?
Has your company chosen to go with an ASP? Has your experience been a positive one? Or maybe not so positive? We'd like to know.
