Not all computing tasks are created equal, and the same goes for computing systems. Corporations and other organizations are best served by systems designed to perform particular jobs, or flexible enough to adapt to them — matching the system to the task is one of the core principles of engineering. In techie jargon, these are “workload optimized systems.”
Consider some real-world scenarios: An application for managing resource planning in a factory requires different computing capabilities than one for analyzing the turnover rate among mobile phone customers and sending them custom-crafted renewal offers in real time. A Web-based social networking tool is very different from an accounting program. And a system for monitoring the use patterns on an electricity grid has very little in common with a program for recording online banking transactions.
Could a single computing setup handle all of these tasks? Perhaps. But would it do a good job on each? No way.
Adjust and Improve
It’s possible to make adjustments at every layer in the stack of technology to improve its ability to handle particular tasks. That includes everything on the hardware side, from semiconductors to server computers, and on the software side, from operating systems to databases.
This approach to designing computing systems isn’t just an option; given the complexity of the systems we build and the tasks we expect them to complete, it’s now a necessity. That’s because the laws of physics have made it more difficult to continue to increase the performance of today’s computer chips in conventional ways even as the demands on computing are exploding.
Increasingly, in order to manage this imbalance between demands and performance capabilities, data processing jobs will be broken up into pieces that will be parceled out to different processor cores. At the same time, all of the layers of technology in a system, from chips to software, will be tightly integrated with one another in order to maximize performance.
Workload optimization takes two primary forms. In the first, by pre-loading software and adding extra memory or processing power, we can build specialized systems from the bottom up for particular tasks, such as business analytics or information archiving. These specialized systems require minimal setup and minimal management on the part of the user, which lets them work more like information appliances than like tools demanding constant user oversight and involvement.
The second form responds to an increasing need for flexibility. Because computing systems are now tasked with such a wide variety of workloads, many must be capable of adapting dynamically as new demands are placed on them.
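As a toy illustration of that kind of dynamic adaptation — a simplified sketch under assumed names, not a production design or anything described in the article — the worker pool below grows itself as the backlog of pending work rises:

```python
import queue
import threading

class AdaptivePool:
    """Illustrative worker pool that adds capacity as demand rises."""

    def __init__(self, max_workers=8):
        self.tasks = queue.Queue()
        self.max_workers = max_workers
        self.workers = []
        self.lock = threading.Lock()

    def submit(self, fn, *args):
        self.tasks.put((fn, args))
        self._adapt()

    def _adapt(self):
        # Start another worker whenever the backlog outpaces the
        # current worker count, up to a fixed ceiling.
        with self.lock:
            if (self.tasks.qsize() > len(self.workers)
                    and len(self.workers) < self.max_workers):
                t = threading.Thread(target=self._run, daemon=True)
                t.start()
                self.workers.append(t)

    def _run(self):
        while True:
            fn, args = self.tasks.get()
            fn(*args)
            self.tasks.task_done()
```

Real systems adapt along many more dimensions — memory, storage, network bandwidth — but the principle is the same: capacity follows demand, rather than being fixed at installation time.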
The Move to Integrated Service Management
The data center operators of today use sophisticated software to manage individual servers, clusters of similar servers, and networks of storage devices. They also manage functions that cut across different types of machines, such as security.
Managing these separate functions may sound straightforward, but it is actually quite complex, and only a relative handful of systems today can holistically manage all of the equipment, networks and software in the data center — plus the facility’s cooling and air-circulation systems. The most sophisticated organizations are only just beginning to manage elements of the extended enterprise from a central console — everything from smartphones to sensor networks. This combination of visibility, control and automation, called “integrated service management,” is the wave of the future.
Look five years into the crystal ball and the people running the data centers of tomorrow’s organizations will be able to handle applications by tapping into their entire array of processing power, memory, data storage, and networking capacity on the fly. It will be possible to connect any one system to another, integrating whole units and processes.
In the very near future, we will work with and think of computing resources not as discrete boxes or even racks of like assets, but as a portfolio of LEGO-like components. We’ve seen this transformation take place gradually over the past decade in the software world with so-called service-oriented architectures. Now it is happening, piece by piece, in the world of computer hardware.