Building a scalable network
Jan 12, 2000
Tim Landgrave
When trying to build a data center that can support hundreds or thousands of users, the first issue most companies face is scalability. Application service providers (ASPs) have to deal not only with multiple users, but with multiple users from multiple companies.
If you use the services of an ASP, you want to be sure that your data is secure and that your performance won't diminish as the ASP adds customers. But even if you don't use an external hosting provider, you need to consider how to re-architect your internal systems to allow you to "self-host" your applications.
To be successful, you have to consider how you will deploy a scalable system.
Part two in a four-part series
In part one of this series, "Why Every CIO Should Care About Hosting," we examined how to make informed decisions about outsourcing. Check back to find out more information on hosting in the enterprise.
If bigger is better, then should you be taller or
wider?
Simply defined, scalability is the ability to add users to a
system without impacting the performance of the existing users. Strictly
speaking, you're not building a scalable system by adding different sets of
computers for each set of users you want to support. But with the current state
of software, hardware, and applications, this is sometimes the most
cost-effective choice.
If you're building for the long term, however, you have to decide how to manage scalability in one of two fundamental ways: with hardware or with software. Either way, you're trying to accommodate the development
of "Internet Scale" systems. You should certainly assume that at some point in
the future, you're going to open up your systems to your own prospects,
customers, and trading partners. This assumption demands that you consider how
to handle hundreds or thousands of users on the same system.
Scaling up: Build it with more hardware
With more manufacturers supplying symmetric multiprocessing (SMP) systems and with products like Microsoft Windows 2000 Datacenter Server to support them, many
companies will opt for the hardware scaling approach. Let's look at the
advantages and challenges of choosing a hardware scaling approach:
Advantages
- There's a single system to program and manage.
- Companies can consolidate multiple servers onto a single system.
- There are several high-volume, 8-way systems already available, with
32-way systems only months away.
Challenges
- Relying on a single, large system for production makes it more expensive to scale down to a starter or development system.
- There's a single point of failure for all applications.
- Although systems with more than eight processors will be available soon,
current systems may not yet be large enough to support the applications that
companies need.
Ultimately, you will reach a hardware limitation to
scalability that will require multiple servers. At this point, you'll need to
consider the best way to manage these servers with software.
Scaling out: Rely on software to manage multiple
servers
Using a software scaling methodology allows us to ride the PC
economics curve. As off-the-shelf systems continue to improve in performance and
reliability, it's a natural evolution for the software that manages these single
systems to become more aware of the systems around them. Let's consider the
advantages and challenges of implementing a software scaling
approach:
Advantages
- The system can be expanded in a simple and modular way by adding more
servers to the server farm.
- There's no single point of failure in the system.
- The hardware limitation to scalability has been removed.
- Linear scalability can be accomplished with predictable, incremental
costs.
- Development or starter systems can be easily implemented at a low
cost.
Challenges
- Managing multiple systems is difficult to do today.
- Developers are still learning how to write systems that perform well when distributed across multiple machines. (See the sidebar "Architecting a three-tier data center on Windows 2000" for more details.)
In order for the software scaling methodology to become the norm, operating system manufacturers have to accomplish four major goals:
- The software that manages the server farm must scale linearly with the
number of machines with minimal per-server overhead from the operating system.
- Server failures must be transparent to users of the system, with automatic
system software and application software recovery.
- Load balancing between the servers must be dynamic: as servers are dropped or added, the existing load is spread evenly across the remaining servers (see the sketch after this list).
- This needs to be accomplished using standard, off-the-shelf hardware in
order for the system to be affordable.
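The original article includes no code, but here is a minimal sketch of that dynamic load-balancing goal, written in Python with hypothetical names (ServerFarm, web01, and so on). It uses a simple consistent-hashing ring so that dropping or adding a server shifts only a proportional slice of the existing load; in a real farm this job belongs to the operating system's clustering and load-balancing services or to dedicated equipment in front of the servers, not to application code.

import bisect
import hashlib

class ServerFarm:
    """Toy consistent-hashing dispatcher for a server farm.

    Adding or dropping a server remaps only a fraction of the
    requests, which is the "dynamic load balancing" goal above.
    """

    def __init__(self, replicas=100):
        self.replicas = replicas   # virtual nodes per server smooth out the load
        self.ring = []             # sorted hash positions
        self.owner = {}            # hash position -> server name

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_server(self, name):
        for i in range(self.replicas):
            pos = self._hash(f"{name}:{i}")
            bisect.insort(self.ring, pos)
            self.owner[pos] = name

    def drop_server(self, name):
        # A failed or removed server's positions simply disappear;
        # its requests fall through to the next server on the ring.
        self.ring = [p for p in self.ring if self.owner[p] != name]
        self.owner = {p: s for p, s in self.owner.items() if s != name}

    def route(self, request_key):
        pos = self._hash(request_key)
        idx = bisect.bisect(self.ring, pos) % len(self.ring)
        return self.owner[self.ring[idx]]

farm = ServerFarm()
for server in ("web01", "web02", "web03"):
    farm.add_server(server)

print(farm.route("customer-42"))   # routed to one of the three servers
farm.drop_server("web02")          # simulate a failure
print(farm.route("customer-42"))   # still served, possibly by a different owner

The point of the sketch is the dispatch rule, not the implementation: because each server owns only a slice of the key space, removing or adding a machine moves a predictable share of the work instead of forcing a full redistribution.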
Which is
the right choice?
The fact is that your solution will involve some
combination of the two methods. As the cost of MIPS continues to drop and
demands on performance continue to rise, you'll continue to deploy better-performing hardware. But software to manage and maintain a distributed system
will be essential to keeping your data center operating over the long haul. And
just like the proprietary, hardware-based word processor and PBX before them,
the mainframes of the future will become software-based distributed
systems.
Now it's bigger, but is it any better?
Just because you can make it bigger doesn't mean you can add users and applications and keep the whole thing running. Next week in part three, we'll look at the issues surrounding managing the platform itself and the applications on the platform.
How have you solved your
scalability issues?
Have you used a combination of software and
hardware to produce a scalable network capable of handling your enterprise's
growth? Share your experiences by dropping us a
note.
Tim Landgrave is the founder and CEO of eAdvantage.
eAdvantage assists small to medium enterprises in the development of their
electronic business strategy.
Copyright © 1999-2000 TechRepublic, Inc.