Ensuring Performance for Real-time Media Packet Processing in OpenStack – Part 1
It’s been a while since I last posted, but now I am back to talk more about Sonus’ innovation for real-time communications (RTC) in the Cloud. This time I want to discuss a subject our customers tell us is vital to the success of their network evolution - the ability to ensure high performance for real-time media packet processing in a virtual, Cloud deployment.
While the OpenStack and OPNFV communities have done a good job of defining options to address requirements related to carrier-grade performance, scalability, and reliability, we find ourselves at a place and time where this is still a “work-in-progress.” Many service providers are concerned that their migration of real-time communications from purpose-built hardware to virtual, cloud-based deployment models comes with a performance cost. In particular, for virtual network functions (VNFs) that demand high throughput, high availability, and low latency for real-time media packet processing, it is true that the industry is still in the early stages of determining the best solutions to overcome this performance cost.
So where are we today and what happens next?
At Sonus, we have looked into this subject extensively, and we believe there need to be solutions that accelerate the handling of network traffic and ensure deterministic behavior of real-time communications media traffic with no packet loss. With these, it will be possible to deliver against the high expectations of the largest communication service providers (CSPs).
In this blog I will share our thoughts on accelerating the handling of real-time media packets, and save our recommendations on OpenStack configurations for deterministic behavior for my next blog.
Accelerating the handling of real-time media packets is a simple construct - get a packet off the network into a Virtual Machine (VM), process it as quickly as possible and get it back out again on the network. But of course it is not that simple in the real world.
I will start by saying that the default Open vSwitch (OVS) technique, which takes packets from the NIC and passes them to the VMs through the Linux tap devices created for each port, only works well for applications with low-throughput traffic rates. For applications with a high throughput of media packets, solutions like SR-IOV or DPDK-accelerated Open vSwitch with multi-queue vhost-user support are needed to provide fast packet processing.
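To illustrate the multi-queue vhost-user option, the sketch below shows one way to request multiple queue pairs for a guest on an OVS-DPDK-enabled compute node. The image name, interface name, and queue count are placeholders, not part of any specific deployment.

```shell
# Enable multi-queue for guests booted from this image (image name is a
# placeholder); Nova then creates vhost-user ports with multiple queue pairs.
openstack image set --property hw_vif_multiqueue_enabled=true my-vnf-image

# Inside the guest, spread the queues across vCPUs (eth0 and the queue
# count of 4 are examples; match the count to the guest's vCPUs).
ethtool -L eth0 combined 4
```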
And yet both of these solutions still have some limitations.
With SR-IOV, we get a solution that virtualizes the NIC’s physical function into multiple virtual functions and uses either MAC or MAC+VLAN to segregate the incoming traffic to each virtual function. One issue with SR-IOV is that it does not police incoming packets, so they are not rate-limited and this must be handled in the guest. Unfortunately, SR-IOV also provides no security for incoming packets, so this too is left to the responsibility of the VNF itself, which incurs additional processing. In addition, because SR-IOV is hardware dependent, live migration is not yet supported.
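To make this concrete, here is a rough sketch of how a virtual function is handed to a VM in OpenStack. The device, physical network, flavor, and image names are illustrative assumptions; the compute node must already expose the NIC’s virtual functions in its whitelist.

```shell
# nova.conf on the compute node: whitelist the NIC's virtual functions
# (device and physical network names are examples).
#   pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2" }

# Create a port that binds directly to a virtual function.
openstack port create --network sriov-net --vnic-type direct sriov-port

# Boot the VNF with that port; traffic bypasses the host vSwitch entirely,
# so rate-limiting and filtering fall to the guest, as noted above.
openstack server create --flavor vnf.large --image my-vnf-image \
    --nic port-id=$(openstack port show sriov-port -f value -c id) my-vnf
```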
On the other hand, DPDK-accelerated Open vSwitch improves both how packets are read from the interface and how they are delivered to a specific VM. While this is excellent news, even this solution has limits, especially for RTC, where most packets are only 64 bytes, meaning there is significant per-packet processing overhead. What this means is there is a trade-off of high CPU utilization for VNF flexibility. The good news is that some of this high CPU utilization can be mitigated by using a dedicated CPU list and huge pages reserved for OVS-DPDK. Lastly, although a bit costlier, separate NICs can be used to segregate VNF management traffic from actual customer traffic.
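The host-side tuning mentioned above can be sketched roughly as follows. The CPU masks, memory sizes, and flavor name are examples only and must be matched to the actual NUMA layout of the compute node.

```shell
# Reserve 1 GB huge pages at boot (kernel command line, example sizing):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16

# Give OVS-DPDK its own huge-page memory and a dedicated PMD core mask
# (values are examples; pick cores on the NIC's NUMA node).
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Back the guest with huge pages and pinned vCPUs so it does not contend
# with the PMD threads (flavor name is a placeholder).
openstack flavor set vnf.large --property hw:mem_page_size=large \
    --property hw:cpu_policy=dedicated
```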
Testing in labs and real-world proof of concept (POC) trials are happening now with these solutions, and with the knowledge gained we look forward to providing innovative solutions that ensure performance at scale for media packet handling in our customers’ virtual, Cloud networks.
Stay tuned for my next blog to read more about how to configure OpenStack to ensure that real-time media packets are handled in a deterministic manner, eliminating packet loss and latency.