I’m an Infrastructure Architect. No, that doesn’t mean that I design highways and bridges. I design the major systems found in data centers: servers, storage and networks. These things have changed dramatically over the past few years. I remember when (don’t worry, I promise not to wax on about punch cards) processors had a single core, RAM was measured in megabytes, ten megabit Ethernet was the standard, and nobody had a SAN. We installed server operating systems from a stack of floppies, and a few hundred servers was almost a life’s work.
Yes, things have changed, and for good reason. A closer look at the old data center revealed that all those expensive processors, memory sticks and disk drives were spending much of their time and capacity doing nothing other than consuming power. Perhaps this wasn't a huge problem back then, when a few hundred servers were enough to run a good-sized enterprise. Nowadays, you're likely to have thousands of servers running in your data center, and that sort of waste would be unacceptable, both in hardware costs and in wasted energy.
Enter virtualization and shared storage. Looking back, these are the two technologies that have had the biggest effect on that old data center. Those, and the advances in industry standard architecture (you know, x86 and x64-based processors) like multi-core processors, huge memory sticks, and PCI Express. These technologies have allowed computing workloads to be consolidated onto fewer physical servers and fewer disk drives, helping to decrease idle time and idle storage, and therefore reduce wasted hardware and power.
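To put rough numbers on that consolidation effect, here's a back-of-the-envelope sketch. Every figure in it (server count, utilization levels, headroom factor) is an illustrative assumption, not a measurement:

```python
import math

# Back-of-the-envelope server consolidation math.
# All inputs below are illustrative assumptions, not measured data.
physical_servers = 200     # legacy one-application-per-box servers
avg_utilization = 0.10     # typical pre-virtualization CPU utilization
target_utilization = 0.70  # comfortable ceiling on a virtualized host
headroom = 1.25            # spare capacity for failover and bursts

# Total useful work, expressed in "fully busy server" equivalents.
useful_work = physical_servers * avg_utilization  # 20.0

# Hosts needed to carry that work at the target utilization, with headroom.
hosts_needed = math.ceil(useful_work / target_utilization * headroom)

print(f"{physical_servers} lightly loaded servers -> {hosts_needed} virtualized hosts")
# -> 200 lightly loaded servers -> 36 virtualized hosts
```

Even with generous headroom, a fleet of lightly loaded boxes collapses onto a handful of well-utilized hosts, which is exactly where the hardware and power savings come from.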
At the same time, business has grown ever more dependent on IT, and as a result, computing requirements have only grown. The number of processors and disk drives in the data center has grown tremendously, held in check only by the same technologies driving ever-higher levels of virtualization and density. Though the data center is more efficient, we can't say it's become less expensive. I remember the promise of ROI in SAN storage and virtualization, and to be sure, there is some, but the hawkers of these products seem to have done a pretty good job keeping a good portion of that ROI for themselves.
This seems to be a big part of my job. I look for ROI wherever it can be found, among the choices of servers, storage and networking. How can we reduce the cost per virtual machine and per gigabyte of storage, while improving service levels and the ability to deliver infrastructure to meet the needs of the business? And it's not just an exercise in cost savings for the business; it's potentially all-out war on our very way of life. The cloud has arrived on the scene, threatening to take away our computing workloads, and with them, our jobs. To compete with the cloud, we've got to fight tooth and nail on cost and value.
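As a concrete sketch of the kind of unit-cost comparison that drives these decisions, here's a minimal example. Every dollar figure, ratio, and amortization period below is a hypothetical assumption for illustration, not real pricing:

```python
# Hypothetical unit-cost math for a virtualization host and a shared array.
# Every figure below is an assumption for illustration only.
host_capex = 25_000.00         # server hardware, amortized over 3 years
hypervisor_license = 7_000.00  # per-host licensing over the same period
power_cooling = 4_500.00       # 3 years of power and cooling
vms_per_host = 40              # consolidation ratio achieved

cost_per_vm = (host_capex + hypervisor_license + power_cooling) / vms_per_host

storage_capex = 60_000.00      # shared array, amortized over 3 years
usable_gb = 50_000             # usable capacity after RAID and spares

cost_per_gb = storage_capex / usable_gb

print(f"Cost per VM over 3 years: ${cost_per_vm:,.2f}")   # $912.50
print(f"Cost per usable GB:       ${cost_per_gb:.2f}")    # $1.20
```

Crude as it is, this is the yardstick: any design change, and any public cloud offering, can be measured against the same cost per VM and cost per gigabyte.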
Competition is a good thing, and with the public cloud comes the private cloud. The private cloud will put the data center through a new round of automation, density, and growth. That growth demands scalable design. In some cases, extreme, push-the-envelope design.
This site will try to cover the pieces of data center design, including servers and blades, storage, networking, data protection (backup and replication), converged infrastructure, virtualization, and cloud. Stay tuned as we dive into these topics, with an eye toward performance, scalability, and cost effectiveness.