Optimizing Cloud Resources: The Requirements and the User
Your applications, your users, and your business all rely on your data center and your cloud architecture. Here are a few good ways to optimize these critical resources.
September 9, 2015
In a cloud environment, administrators still use physical resources to deliver workloads to endpoints. These resources may be located at a nearby data center or somewhere offsite. The most important thing to remember is that these resources are finite and must be carefully monitored and managed. As mentioned earlier, poor resource provisioning results in a Band-Aid effect, where administrators simply pump more RAM, storage or bandwidth into an environment without ever fixing the original issue: improper cloud resource balancing.
When deploying a cloud-ready data center, engineers must plan out their environment and properly size and balance their resources. This means understanding the following components:
Current user count. The only way to properly size and balance a system is to establish how many users will be accessing the cloud infrastructure from day one. This could be a department, a corporate division or an entire branch office. By understanding the immediate needs of the cloud, administrators can plan for baseline requirements. Once the user count is established, proper resource provisioning can begin. Here, RAM, CPU, storage and WAN requirements are calculated based on the number of users accessing the environment at any given time and the workloads they will be launching (a rough sizing sketch follows this list).
Future user count. One of the most important planning phases in any cloud environment is forecasting future usage. This means working directly with business stakeholders to understand organizational plans for growth and expansion. If administrators know an acquisition is around the corner, they can size the cloud environment for that growth. This could mean keeping a spare blade chassis available for more users, or having additional resources ready for a spike in user count. It also means planning for capacity. For example, if a cloud-based storage controller is purchased only for today's demands, future usage spikes could cripple performance for every user trying to access the workload. When forecasting for the future, it's important to size every component in the cloud environment accordingly. That way, as user counts increase, administrators can spread the additional users evenly across the available resources.
WAN requirements. The ability to deliver workloads quickly and efficiently over the WAN is crucial to the success of a cloud deployment. Special considerations must be made depending on the environment. Some organizations will run multiple links into their cloud environment for load balancing and high availability. Although each environment has its own needs, a good set of best practices can be followed for each site type (also captured in a small lookup sketch after this list):
Major cloud data center: This is a central cloud computing environment housing major infrastructure components. Hundreds or even thousands of users would connect to this type of environment. It can host major workload operations, with workers from all over the world connecting to receive their data. The requirements here are very high bandwidth and very low latency.
Recommendations: MPLS, optical circuits, or carrier Ethernet services.
Branch cloud data center: This is usually a smaller, but still sizeable, cloud environment. This infrastructure would house secondary, but still vital, cloud systems. Here, administrators may be working with a few cloud-delivered workloads that need to be distributed to a smaller number of users. In this type of data center, requirements call for moderate bandwidth availability, with a possible need for low latency.
Recommendations: MPLS or a carrier Ethernet service.
Small cloud data center for DR or testing: This is a small cloud data center with only a few components. Small distributed data centers are often used for testing and development, or for smaller disaster recovery (DR) purposes. Requirements in this environment call for low bandwidth, but may still include low latency and the option for mobility.
Recommendations: MPLS over T1/DSL, broadband wireless options, or Internet VPNs.
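To make the sizing exercise concrete, here is a minimal Python sketch of the kind of back-of-the-envelope math described above. The per-user figures, growth rate and headroom are illustrative assumptions rather than vendor guidance, and the class and function names are hypothetical.

```python
# Hypothetical sizing sketch: per-user footprints, growth rate and headroom
# below are illustrative assumptions, not vendor or product guidance.

from dataclasses import dataclass


@dataclass
class UserProfile:
    """Per-user resource footprint for one workload type (assumed values)."""
    ram_gb: float
    vcpu: float
    storage_gb: float
    wan_kbps: float


def size_environment(current_users: int, profile: UserProfile,
                     annual_growth: float = 0.20, years: int = 3,
                     headroom: float = 0.25) -> dict:
    """Estimate total resource needs from the current user count, a simple
    growth forecast, and a safety headroom for usage spikes."""
    projected_users = current_users * (1 + annual_growth) ** years
    factor = projected_users * (1 + headroom)
    return {
        "projected_users": round(projected_users),
        "ram_gb": round(profile.ram_gb * factor),
        "vcpu": round(profile.vcpu * factor),
        "storage_tb": round(profile.storage_gb * factor / 1024, 1),
        "wan_mbps": round(profile.wan_kbps * factor / 1000, 1),
    }


if __name__ == "__main__":
    # Example: 500 knowledge workers running a virtual desktop workload.
    vdi = UserProfile(ram_gb=2, vcpu=0.5, storage_gb=40, wan_kbps=150)
    print(size_environment(current_users=500, profile=vdi))
```

The point of the sketch is the shape of the calculation, not the numbers: plug in measured per-user consumption and a growth forecast agreed with the business, and revisit the headroom once real usage data is available.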
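As a companion, a small lookup sketch can capture the site-type tiers above as data, so the link choice is codified in tooling rather than tribal knowledge. The tier labels and dictionary keys here are assumptions made for illustration.

```python
# Hypothetical lookup of the site types and link recommendations described
# above; bandwidth/latency tiers are descriptive labels, not SLA numbers.

SITE_PROFILES = {
    "major": {
        "bandwidth": "very high",
        "latency": "very low",
        "links": ["MPLS", "optical circuits", "carrier Ethernet"],
    },
    "branch": {
        "bandwidth": "moderate",
        "latency": "low (possible need)",
        "links": ["MPLS", "carrier Ethernet"],
    },
    "dr_test": {
        "bandwidth": "low",
        "latency": "low (optional), mobility-friendly",
        "links": ["MPLS over T1/DSL", "broadband wireless", "Internet VPN"],
    },
}


def recommend_links(site_type: str) -> list:
    """Return the candidate WAN services for a given site type."""
    try:
        return SITE_PROFILES[site_type]["links"]
    except KeyError:
        raise ValueError(f"Unknown site type: {site_type!r}") from None


if __name__ == "__main__":
    print(recommend_links("branch"))  # ['MPLS', 'carrier Ethernet']
```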
Remember, your data center must adapt to new kinds of technologies, including mobility, consumerization and now IoT. It's critical to create interconnected environments capable of sharing resources to help the user, and the business, be as productive as possible. New kinds of link aggregation services, user optimizations and even virtual technologies are directly impacting how we control major data center sites as well as remote branch locations. The key point to understand is that it's becoming easier to control these environments. Cloud computing distributes data; it's up to the administrator and the data center to properly control that data and optimize its delivery.
Be ready for user spikes. Be ready for new challenges around workload and application delivery. Most of all, be ready for a new kind of cloud architecture designed to optimize resources and the overall user experience. We're moving toward an age where automation and orchestration drive many data center and cloud components, but you must still properly plan and align your physical resources. Poor data center resource utilization can take down even the best cloud strategy. Ensure that your technology solutions and your business are always aligned.