There have been significant changes in data center infrastructure over the past 15 years. With the advent of server virtualization technologies (particularly the first release of VMware ESX Server in 2001), the trend toward hardware abstraction has only accelerated. Today, virtualization encompasses every layer of data center infrastructure, from servers to network and storage. This new converged and hyper-converged infrastructure design allows for significantly more cost-effective, agile, and robust infrastructure, but it also requires new methods of design and management.
In the past, data center infrastructure was composed of the same basic components that it is today: server, network, and storage components all came together to form a platform on which applications and data reside. Standard design practices revolved around deploying physical servers to host a single application or a small number of applications. Servers generally did not share storage resources, and the network layer was entirely separate from the server/application layer, with neither having much knowledge of what was happening on the other. Once an application or its data resided in a particular location on the infrastructure, moving it for performance or efficiency gains took significant effort and planning.
As computing power continued to increase in tandem with significant gains in storage capacity and network speed, hosting a single application on one physical server became inefficient from both a computing-resources perspective and a power-consumption perspective. For all but the heaviest application loads, it became apparent that a physical server hosting a single application sat idle 99% of the time. Given the cost of a mid-range rack-mount server and the electricity required to operate it over its useful lifetime, this was plainly an inefficient way to conduct operations.
Vendors began introducing server virtualization platforms that allowed organizations to run application servers as virtual machines on physical server platforms. No longer were expensive servers sitting idle most of the time; a single physical machine could now host multiple application servers, providing significantly better use of computing resources, more efficient use of power, and easier management. This shift also made it much easier to deploy new application servers as demand increased. Organizations were no longer bound by purchasing new hardware and the lengthy amount of time required to deploy and configure it. The time required to provision a new application server went from days or weeks to minutes or hours.
Soon storage resources were brought into the virtualized arena. Storage moved away from being siloed on individual servers to large networked arrays called Storage Area Networks (SANs). These SANs allowed large amounts of storage to be aggregated, managed centrally, and presented to virtualization or application servers as a single block of storage. This change also made highly available clusters of virtualization servers significantly easier to deploy. No longer did an organization have to choose which applications were worth the significant sums required for clustering solutions; it became possible for all or most of an organization's critical systems to enjoy the benefits of high-availability clustering.
The last major aspect of the data center to enjoy the benefits of virtualization has been the network layer. With the advent of network virtualization and software-defined networking, the network too can be managed as a flexible entity whose resources are dynamically allocated as demand requires. The network is no longer simply a standalone medium on which the rest of the data center infrastructure resides; it is a fully integrated part of the data center fabric whose resources must be properly monitored and distributed.
Today, all of these entities form a unified computing fabric on which to serve the critical applications and data an organization depends on. Instead of each infrastructure component being a standalone computing resource that hosts one or a small number of applications, components are grouped together as a single system whose resources can be presented to the application layer in a dynamic, easy-to-allocate fashion. However, this new paradigm brings its own set of management challenges.
With server, storage, and network functioning as a single converged system, an enormous number of factors can impact the performance of the application layer, and vice versa. When infrastructure resources are pooled together and treated as commodities in a marketplace, it can become very difficult to determine performance bottlenecks or plan for future capacity. On the surface, this may not seem to be the case, as some of the challenges are the same as they have always been. Do I have enough CPU? Do I have enough RAM? Is my disk I/O overburdened? Is my network over capacity? These are questions IT admins and engineers have had to struggle with since the days of classic data center architecture. In the past, these questions could be focused on individual systems on a case-by-case basis.
Today, the picture is not so clear. The same questions remain, but they apply not just to the physical aspects of your infrastructure but also to the virtual. Do my VMs have too much or too little vCPU? Do my VMs have too much or too little vRAM? Is my disk I/O overburdened, and if so, is the problem at the VM or the physical layer? Is my network over capacity or underutilized? If the network is saturated at a specific point, which system is causing it, and how can I redistribute resources to alleviate it? At first it may seem that only a few extra data points have been added, but that only scratches the surface. Even a mid-sized system composed of a handful of virtualization servers, a few dozen virtual machines and their applications, the storage layer, and the network layer quickly yields thousands of performance metrics that must be plotted, tracked, and continually optimized to keep the system functioning properly. Old methods of human-driven capacity management and planning can no longer keep up with the myriad calculations and decisions required to maintain this desired state, or to predict where a system will be in the future given past trends in growth and performance demand.
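To make the scale of the problem concrete, the sketch below walks a small, entirely hypothetical inventory of hosts and VMs: it counts the metric series such an environment generates and applies a crude rule of thumb to one of the questions above, namely whether a disk bottleneck originates at the VM or the physical layer. The names, thresholds, and sample readings are invented for illustration; a real environment would pull these values from the hypervisor's monitoring APIs.

```python
# A minimal sketch (all inventory figures, names, thresholds, and readings are
# invented) of two tasks implied above: counting how many metric series even a
# modest environment produces, and deciding whether elevated disk latency
# originates at the VM layer or the physical/host layer.

from statistics import mean

# Hypothetical inventory: 4 hosts, 12 VMs per host, ~20 metrics per object
# (CPU, RAM, disk I/O, network, etc.).
HOSTS = 4
VMS_PER_HOST = 12
METRICS_PER_OBJECT = 20

total_series = (HOSTS + HOSTS * VMS_PER_HOST) * METRICS_PER_OBJECT
print(f"Metric series to track: {total_series}")  # 1,040 -- already over a thousand

# Hypothetical disk-latency samples (ms) for the VMs on a single host.
vm_disk_latency_ms = {
    "vm-app-01": 4.1,
    "vm-app-02": 3.8,
    "vm-db-01": 38.5,   # one outlier
    "vm-web-01": 4.4,
}
HOST_DATASTORE_LATENCY_MS = 5.2   # latency observed at the host's datastore
LATENCY_THRESHOLD_MS = 20.0

hot_vms = [vm for vm, ms in vm_disk_latency_ms.items() if ms > LATENCY_THRESHOLD_MS]

if HOST_DATASTORE_LATENCY_MS > LATENCY_THRESHOLD_MS:
    # Every VM on the host shares the slow datastore: likely a physical-layer issue.
    print("Disk contention appears to be at the physical/datastore layer.")
elif hot_vms:
    # Only specific VMs are affected: likely a noisy neighbor or mis-sized guest.
    print(f"Disk contention appears to originate at the VM layer: {hot_vms}")
else:
    print(f"Disk latency healthy (mean {mean(vm_disk_latency_ms.values()):.1f} ms).")
```

Multiply this kind of check across every host, datastore, network path, and time window, and the case for automating it becomes clear.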
Addressing this gap is the newest trend in data center operations management. Married to the unified computing fabrics of today's data centers are operations management suites that collect real-time data on all aspects of the infrastructure. By bringing powerful predictive algorithms to bear, these suites monitor a multitude of data points and take autonomous actions in real time to ensure the performance and efficiency of a system, or make recommendations to IT teams on actions that will keep the system healthy. This keeps teams from having to track down issues as they arise and frees them from the break/fix cycle. It also allows organizations to make even more efficient use of their hardware investments.
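The predictive piece of these suites can be illustrated in miniature. The sketch below fits a simple linear trend to hypothetical monthly utilization figures and estimates how much headroom remains before a cluster needs more capacity; it is an assumption-laden toy, not the algorithm any particular product uses.

```python
# A minimal sketch of trend-based capacity forecasting. The monthly cluster
# memory-utilization figures and the 85% action threshold are invented; real
# operations suites use far richer models, but the underlying idea (project
# the trend, act before the resource is exhausted) is the same.

# Hypothetical monthly average memory utilization of a cluster, in percent.
history = [52.0, 54.5, 57.0, 58.5, 61.0, 63.5, 65.0, 68.0]
CAPACITY_LIMIT = 85.0  # percent at which we want to have already acted

# Ordinary least-squares fit of utilization against month index 0..n-1.
n = len(history)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(history) / n
slope = (
    sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    / sum((x - x_mean) ** 2 for x in xs)
)
intercept = y_mean - slope * x_mean

if slope <= 0:
    print("Utilization is flat or shrinking; no capacity action recommended.")
else:
    # Month index at which the fitted trend crosses the limit, then the
    # remaining headroom measured from the most recent sample.
    months_to_limit = (CAPACITY_LIMIT - intercept) / slope
    months_remaining = months_to_limit - (n - 1)
    print(
        f"Utilization growing ~{slope:.1f} points/month; "
        f"about {months_remaining:.1f} months of headroom remain. "
        "Recommend adding capacity or rebalancing workloads."
    )
```

A production suite would run this kind of projection continuously across every resource pool and feed the results into automated placement, right-sizing, and purchasing decisions.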
This trend is only likely to continue into the future, as we see the line between server, storage, and network continue to blur. And while the consolidation and unification of these systems continue to simplify the task of delivering applications and data to end users, the complexity on the back end only increases, as does the need for talented professionals to design, implement and manage them.
Written By:
John Belcher
Former MetroStar Principal Infrastructure Engineer