With technology changing at such a pace, even well-known and well-used terms can quickly become antiquated. What was viable ten minutes ago may already be on its way to obsolescence.

This is very much the case with the terms Disaster Recovery and Business Continuity. For years, disaster recovery was one of the biggest topics for companies: the idea that if a natural disaster struck, or fire decimated a business, a remote location complete with workstations, backup files, and so on would be activated so that business could continue without too much delay.

Then, of course, came the next evolutionary step: managing business continuity rather than simply managing a disaster after the fact. With the advent of better backup technologies, multisite connectivity, and the cloud, what was once an archaic, manual process began to transform into a self-governing, self-reliant system, one that backs itself up to the ether auto-magically and eases the old fears of force-majeure scenarios.

However, the same technology that saves us time, energy, and heartache brings a new stress: the technology itself, and where and how it is deployed.

For companies, the question is twofold: is everything outsourced to data centers and cloud-enabled continuity services, or is it kept in-house, in the more traditional mold of hot sites, warm sites, and the like? In either case, infrastructure must be put in place.

For the data centers, telecoms, ISPs, and others that offer a path to business continuity, the need for continual upgrades is always present. After all, think of the data involved. Every company has data: financial and customer records, day-to-day files, the output of business processes, and more, all of which must be accounted for somewhere. And there is more than data. There are also the applications themselves, the programs that drive business processes and must remain accessible for a company to do business.

Take all of that data and those applications, and then consider the speed and connectivity required to access them. Data centers need the gear in hand for accessibility and throughput to maintain their SLAs and meet the expectations of their customers: the speed of business in its purest form.

The same scenario holds for businesses that maintain their own data centers. From large enterprises to utilities, and the list goes on, the same requirements for speed, power, and storage continue to grow.

In the end, the backup itself becomes the easy part; software can govern that. The real issue becomes everything else that supports it.
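To make the "software can govern that" point concrete, here is a minimal sketch of a self-running backup job in Python. The paths and naming scheme are illustrative assumptions, not any real product's layout; dedicated backup software does this at far greater scale, but the basic shape is the same: archive, checksum, store.

```python
import hashlib
import tarfile
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical locations -- substitute your own source tree and backup target.
SOURCE_DIR = Path("/srv/app-data")
BACKUP_DIR = Path("/mnt/backup")

def run_backup() -> Path:
    """Archive the source tree and record a checksum so the copy can be verified later."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = BACKUP_DIR / f"app-data-{stamp}.tar.gz"

    # Bundle everything under SOURCE_DIR into one compressed archive.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE_DIR, arcname=SOURCE_DIR.name)

    # Record a SHA-256 digest alongside the archive; a later restore test
    # can recompute it to confirm the copy has not silently corrupted.
    sha256 = hashlib.sha256()
    with archive.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    (archive.parent / (archive.name + ".sha256")).write_text(
        f"{sha256.hexdigest()}  {archive.name}\n"
    )
    return archive

if __name__ == "__main__":
    print(f"Backup written: {run_backup()}")
```

Any scheduler can run a job like this nightly; that is the easy part. Everything the sketch takes for granted, where BACKUP_DIR actually lives, how fast it can be reached, and what happens when it fails, is precisely the infrastructure question above.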

Oh, and BTW … what is backing up the backups?