To grasp the demands placed on data storage today, consider the sheer scale of SuperNAP7, a 407,000-square-foot datacenter outside of Las Vegas owned and operated by the locally based company Switch. Companies with enormous data storage demands, such as Google, make use of SuperNAP7. "The entire written works of man, from the dawn of time until today, in every single language, is the equivalent of 50 petabytes of data," says a Switch spokesperson. "Inside this one cage (about the size of a dozen refrigerators) is a little over 100 petabytes." The number of cabinets is staggering, and if that weren't enough, SuperNAP8 recently went operational, with SuperNAP9 under construction. Then consider that Google, which manages over one million servers, also utilizes nearly three dozen other datacenters across the world. And that's just one company.
According to a 2012 article in Forbes Magazine, "90% of the data in the world today was created in the last two years, and data volumes are rising faster than storage prices are declining and technology is improving." The article goes on to say that even with a 20% decline in storage cost, data storage for large enterprises will soon consume close to 20 percent of the typical IT budget.
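As a quick back-of-the-envelope illustration of that point, the short Python sketch below assumes data volumes growing 40 percent a year against a 20 percent annual decline in cost per terabyte; both rates and the starting figures are assumptions chosen for illustration, not numbers from the Forbes article.

```python
# Back-of-the-envelope model: if data grows faster than unit storage cost falls,
# total storage spend still rises year over year. Growth and cost-decline rates
# here are assumptions for illustration, not figures from the article.
data_tb = 1_000.0          # starting footprint, in terabytes (assumed)
cost_per_tb = 100.0        # starting cost per terabyte, in dollars (assumed)
data_growth = 0.40         # assumed 40% annual data growth
price_decline = 0.20       # 20% annual decline in cost per terabyte

spend = data_tb * cost_per_tb
print(f"Year 0: {data_tb:,.0f} TB at ${cost_per_tb:,.2f}/TB = ${spend:,.0f}")

for year in range(1, 6):
    data_tb *= 1 + data_growth
    cost_per_tb *= 1 - price_decline
    spend = data_tb * cost_per_tb
    print(f"Year {year}: {data_tb:,.0f} TB at ${cost_per_tb:,.2f}/TB = ${spend:,.0f}")

# Net effect: spend grows roughly 12% per year (1.40 * 0.80 = 1.12),
# even though each terabyte keeps getting cheaper.
```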
Now consider that the vast majority of enterprises must cope not only with exponentially increasing storage needs, but also with perpetually growing demands on the enterprise infrastructure itself. Users scattered across large enterprise conglomerates are connected by a complicated web of switches and routers. Because of the sheer size of today's enterprise, IT professionals spend an abundance of time configuring and managing a vast array of multi-generation technologies, typically one device at a time. Days are filled with repetitive tasks such as logging onto the basic building blocks of the profession: bare-metal servers, switches, and storage devices, where simple changes with significant operational implications are performed through arcane interfaces.
Across a large infrastructure of individually managed devices, these simple tasks add up quickly, causing occasional outages and internal friction, and leaving little time for new projects that could transform the business. The fact is, IT leaders must adopt a better way of managing their swelling enterprises or risk losing control of the most basic IT processes.
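To make the repetition concrete, here is a minimal sketch of that device-by-device workflow using the netmiko SSH library as one illustrative option; the hostnames, credentials, platform type, and the VLAN change itself are hypothetical placeholders.

```python
# A minimal sketch of the "one device at a time" workflow described above:
# the same small change is pushed to each switch individually over SSH.
from netmiko import ConnectHandler

SWITCHES = ["switch-01.example.com", "switch-02.example.com", "switch-03.example.com"]
VLAN_CONFIG = ["vlan 42", "name storage-replication"]   # hypothetical change

def apply_change(host):
    """Log in to a single switch and push the same small configuration change."""
    conn = ConnectHandler(
        device_type="cisco_ios",   # assumed platform for this sketch
        host=host,
        username="admin",
        password="changeme",
    )
    try:
        output = conn.send_config_set(VLAN_CONFIG)
        print(f"{host}:\n{output}")
    finally:
        conn.disconnect()

if __name__ == "__main__":
    # Every device is still touched individually -- the repetitive loop that
    # does not scale as the enterprise grows.
    for switch in SWITCHES:
        apply_change(switch)
```

Even scripted, the model is the same: one login, one change, one device at a time, multiplied across thousands of boxes.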
Speaking at Interop in 2013, Najam Ahmad, Director of Network Engineering at Facebook, said, "The days of managing networks through protocols and command-line interfaces are long gone. We feel it's the way to build networks. … CLI is dead, it's over. We want robots running the network and people building the robots."
Ahmad was speaking of SDN, or Software Defined Networking, which is based on the OpenFlow protocol and is being driven by industry giants such as HP. HP, in partnership with VMware, is delivering a network virtualization platform that gives customers an integrated approach to automating their physical and virtual network infrastructure. The solution promises a centralized view of the complete data center network, along with unified automation, visibility, and control, improving agility, monitoring, and troubleshooting. No longer will IT staff have to manage an endless array of switches and other network devices through the CLI.
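The idea behind this centralized approach is easiest to see in a toy model: a single controller owns the policy and programs flow rules into every switch, instead of an administrator touching each switch by hand. The sketch below is a simplified stand-in for that control-plane/data-plane split, not HP's, VMware's, or OpenFlow's actual APIs; the classes and rule format are invented for illustration.

```python
# Toy model of the SDN split: the controller (control plane) owns network-wide
# policy and programs many switches (data plane) with flow rules.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_dst: str      # e.g. a destination subnet
    action: str         # e.g. "forward:port2" or "drop"
    priority: int = 100

@dataclass
class Switch:
    """Data plane: holds a flow table and forwards packets; no local policy."""
    name: str
    flow_table: list[FlowRule] = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)   # highest priority first

@dataclass
class Controller:
    """Control plane: the single place where network-wide policy is defined."""
    switches: list[Switch] = field(default_factory=list)

    def push_policy(self, rule: FlowRule) -> None:
        # One change here reaches every device -- no per-switch CLI sessions.
        for sw in self.switches:
            sw.install(rule)

if __name__ == "__main__":
    switches = [Switch(f"edge-{i}") for i in range(1, 4)]
    controller = Controller(switches)
    controller.push_policy(FlowRule(match_dst="10.20.0.0/16", action="forward:port2"))
    controller.push_policy(FlowRule(match_dst="0.0.0.0/0", action="drop", priority=10))
    for sw in switches:
        print(sw.name, [(r.match_dst, r.action) for r in sw.flow_table])
```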
There is a similar trend now in the data storage industry: SDS, or Software Defined Storage. A good example is EMC's ViPR, which, according to EMC, is a lightweight, software-only solution that transforms existing storage into a simple, extensible, and open platform. Like other Software Defined Systems solutions, ViPR decouples the control plane from the data plane. Rather than being locked into the rigid boundaries of a physical RAID, ViPR virtualizes storage from physical arrays, conglomerating them into a single pool of storage that still retains the individual characteristics and value of each array. According to EMC, "This enables administrators to define automated, policy-based Virtual Storage Pools and deliver them to users instantly through a self-service catalog with centrally managed storage resources." If this sounds like server and client virtualization, you are correct.
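In the same spirit, the short sketch below models the pooling concept: heterogeneous physical arrays are grouped into policy-based virtual pools, and capacity is provisioned against a policy rather than a named array. It is a toy illustration of the idea only, not ViPR's actual interface; the array names, tiers, and sizes are made up.

```python
# Toy model of software-defined storage pooling: arrays with different
# characteristics are grouped into virtual pools, and capacity is provisioned
# by policy rather than against a specific physical device.
from dataclasses import dataclass

@dataclass
class PhysicalArray:
    name: str
    tier: str           # e.g. "flash" or "sata"
    free_gb: int

class VirtualStoragePool:
    """Control plane for storage: policy decides placement, not the requester."""
    def __init__(self, policy_tier: str, arrays: list[PhysicalArray]):
        self.policy_tier = policy_tier
        self.arrays = [a for a in arrays if a.tier == policy_tier]

    def provision(self, size_gb: int) -> str:
        # Pick any array in the pool with room; the consumer never names one.
        for array in self.arrays:
            if array.free_gb >= size_gb:
                array.free_gb -= size_gb
                return f"{size_gb} GB provisioned from {array.name} ({self.policy_tier} pool)"
        raise RuntimeError(f"{self.policy_tier} pool exhausted")

if __name__ == "__main__":
    arrays = [
        PhysicalArray("array-flash-01", "flash", 5_000),
        PhysicalArray("array-sata-07", "sata", 20_000),
        PhysicalArray("array-sata-02", "sata", 50_000),
    ]
    gold = VirtualStoragePool("flash", arrays)
    bronze = VirtualStoragePool("sata", arrays)
    print(gold.provision(500))      # a self-service request against a policy, not a device
    print(bronze.provision(10_000))
```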
This transformation from a hardware model to a software one makes perfect sense, just as it did in the case of server virtualization. Hardware is by its very nature rigid and inflexible, and given the brevity of today's IT hardware product life cycles, coupled with the constant innovation and introduction of new technologies, it makes little sense for organizations to lock themselves into a specific hardware architecture for five years or more. In addition, the very act of purchasing hardware results in the overprovisioning of resources, as enterprise managers find themselves buying for "tomorrow" rather than for today. The time has come to bring the level of agility currently found in compute virtualization to data storage and other facets of IT as well. Enterprises must be flexible and efficient, building a deep level of elasticity into every layer of the IT process in order to maximize performance delivery while minimizing costs. Although Software Defined Systems architecture is still in its infancy, it is a concept that is past due, as IT managers now weigh not only cost and performance in the purchase decision, but management time as well.
Posted on November 21, 2013