Living On the Edge With Fog Computing
As more and more user devices come online, storing and delivering data at the “edge” is quickly growing in adoption
December 25, 2014
I know, it’s another buzz term. It just feels like we haven’t had one in a while. That said, something interesting is happening in the cloud world. We’re seeing more users access an ever-larger set of information. Services around streaming, content delivery, and even caching are becoming very popular. But how do you deliver such large workloads efficiently to users located all over the world?
Cisco recently joined forces with Akamai to create a truly powerful distributed computing platform. In fact, Akamai’s network is one of the world’s largest distributed-computing platforms, responsible for serving between 15 and 20 percent of all web traffic. But let’s look beyond this partnership and examine the current user, cloud, and delivery model. The modern data center is fast becoming the home of nearly everything. We now have services billed as “Everything-as-a-Service,” and an enormous amount of information is being transferred via the cloud. On top of that, we have the idea of the Internet of Everything, where anything you require can be delivered through the cloud. This is where Fog Computing comes in.
Bringing the “edge” closer to the user. Content delivery is huge. With so much more emphasis on cloud computing, more data than ever is being pushed down to the end user. Take a look at this report from Cisco:
We’re already in the zettabyte era. Global cloud traffic crossed the zettabyte threshold in 2013, and by 2018 it will represent 76 percent of total data center traffic, more than three-quarters of everything moving through the data center. With that in mind, edge computing strives to reduce the bandwidth we use and cut the latency of our content. By placing information on the servers closest to the user, we’re able to deliver rich content quickly.
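To make that concrete, here’s a minimal sketch of the kind of “nearest node wins” routing an edge platform relies on. Everything in it, the node names, coordinates, and the great-circle-distance heuristic, is an illustrative assumption rather than any CDN’s actual selection logic (real platforms also weigh latency, load, and link cost):

```python
# Minimal sketch: steer a user to the geographically nearest edge node.
# Node names, coordinates, and the distance heuristic are illustrative
# assumptions, not any vendor's real selection algorithm.
from math import radians, sin, cos, asin, sqrt

EDGE_NODES = {
    "us-east":  (40.7, -74.0),   # (latitude, longitude)
    "eu-west":  (53.3, -6.3),
    "ap-south": (19.1, 72.9),
}

def great_circle_km(a, b):
    """Approximate distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_edge(user_location):
    """Return the edge node closest to the user; that node serves the cached content."""
    return min(EDGE_NODES, key=lambda name: great_circle_km(user_location, EDGE_NODES[name]))

# A user in London gets steered to the eu-west edge instead of a distant origin.
print(nearest_edge((51.5, -0.1)))  # -> eu-west
```

The point isn’t the math; it’s that the request never has to travel all the way back to a single, far-away origin.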
Creating geographical distribution. So much data, so many data points. Information and data analytics are becoming crucial for organizations to understand both their business and the consumer. With edge/Fog computing, big data and analytics can be processed faster, with better results. Complex data engines no longer have to pull large data sets across the WAN. Instead, they can work against these edge (Fog) systems directly, making real-time analytics a reality on a truly distributed scale, as the sketch below illustrates.
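Here’s that idea as a hedged sketch. The function names, site layout, and summary format are my own assumptions for illustration, not any fog platform’s API; the point is that each edge site reduces its raw readings locally, and only tiny summaries cross the WAN:

```python
# Sketch: aggregate at the edge, ship only summaries over the WAN.
# Data, site layout, and summary format are assumptions for illustration.

def edge_summary(readings):
    """Runs on an edge (Fog) node: reduce raw readings to a small summary."""
    return {"count": len(readings), "total": sum(readings)}

def core_average(summaries):
    """Runs in the core data center: combines summaries, never raw data."""
    count = sum(s["count"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return total / count if count else 0.0

# Raw sensor readings never leave their sites...
site_a = [21.0, 22.5, 23.1]
site_b = [19.8, 20.2]

# ...only two small dictionaries traverse the WAN to the core.
print(core_average([edge_summary(site_a), edge_summary(site_b)]))  # -> ~21.32
```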
Support for mobility and “Everything-as-a-Service.” With so many devices connecting to the cloud and the modern data center, administrators are tasked with creating true efficiency. By building a Fog computing platform, you’re able to improve user performance and better address security and privacy concerns. By controlling data at the edge, Fog computing integrates core cloud services with those of a truly distributed data center platform.
Adoption is already happening. I bet you can think of a few examples already. Big data platforms love the concept of Fog computing because it accelerates their ability to process data. On the same note, consumer services love the edge as well. Folks like Netflix openly embrace this model, and they’re not the only ones. Companies like Facebook, Twitter, AMD, Adobe, ESPN, Blizzard, and Trend Micro all use Fog computing and edge services to deliver rich content to their users. As more users connect to the cloud and request larger, richer content, using the edge for fast delivery will make complete sense.
The truth of the matter is that cloud services and user device numbers are only going to increase. As the modern data center becomes even more distributed, organizations will have to find ways to deliver large amounts of data very quickly. Edge networking and Fog computing can help bring that data much closer to both the organization and the end user.