Is Monetizing OTT Content a new flavour of the same old?

Today Korea Telecom announced they will be using Ericsson's Mobile Cloud Accelerator (MCA), an announcement that can be read in multiple sources including Azi Ronen's Broadband Traffic Management Blog (here). Following KT's tradition of adopting highly innovative technologies, now encouraged by the huge LTE growth in that country, they are the first operator to use the MCA solution, which promises to achieve a better Quality of Experience (QoE) by combining caching platforms with traffic prioritization in the access network. What I find most interesting about the MCA is of course the technical detail behind that combination of caching and prioritization, but even more importantly how Ericsson is marketing (and selling) it as a means of monetizing OTT content. Let us describe the particularities of this in the next lines.

Content Caching

There are many Content Delivery/Distribution Network (CDN) solutions and providers in the market, with huge data centres for storing content providers' popular information and delivering it with high availability and high performance thanks to distributed networks and techniques like smart load balancing. For example, an Over-The-Top (OTT) provider like Netflix could store the most popular Warner Brothers movies in CDN data centres, so that the cached content is delivered directly from highly efficient data centres to the subscribers requesting it over AT&T's or Telefonica's networks, resulting in a faster service and therefore a higher QoE. Ericsson pre-integrates one of the most popular CDN platforms, from Akamai Technologies, Inc., in the MCA solution.
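The basic mechanism is easy to picture: content requested once is kept close to the subscriber, so subsequent requests skip the trip to the origin. Below is a toy sketch of that idea in Python, with a small LRU edge cache and made-up latency numbers; it is illustrative only, and real CDNs such as Akamai's use far more sophisticated placement and eviction strategies.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy CDN edge node: serve from cache on a hit, fetch from origin on a miss."""

    def __init__(self, capacity=3, origin_latency_ms=120, edge_latency_ms=15):
        self.capacity = capacity
        self.origin_latency_ms = origin_latency_ms  # illustrative numbers
        self.edge_latency_ms = edge_latency_ms
        self.store = OrderedDict()                  # LRU: least recently used first

    def get(self, content_id):
        """Return (content, latency_ms) for a request."""
        if content_id in self.store:
            self.store.move_to_end(content_id)      # cache hit: refresh LRU order
            return self.store[content_id], self.edge_latency_ms
        content = f"<bytes of {content_id}>"        # simulate an origin fetch
        self.store[content_id] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)          # evict least recently used
        return content, self.origin_latency_ms

cache = EdgeCache()
_, first = cache.get("movie-42")    # miss: served from the origin (120 ms)
_, second = cache.get("movie-42")   # hit: served from the edge (15 ms)
print(first, second)
```

The point of the sketch is the latency difference between the two calls: the popular title pays the origin cost once, and every later viewer gets it at edge speed.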

Traffic Prioritization

Traffic prioritization, on the other hand, is a Policy Management and Enforcement (PCRF/PCEF) technique, typically used by operators in the core network nodes to ensure that premium content (for premium subscribers) gets the maximum available bandwidth, while less valuable content is delivered on the remaining bandwidth, or "best effort". Different priorities are typically set in the PCRF platforms and enforced in the PCEF elements (e.g. DPIs or the actual traffic gateways such as the GGSN or P-GW) according to the services defined by the operators. The prioritization can be based on subscriber profiles (e.g. subscribers paying more for a better priority in the bandwidth allocation), on the actual traffic (e.g. an ordered ranking of protocols or applications), or on a combination of both, the latter being the most typical scenario. The result is a guaranteed QoE for the premium traffic and/or subscribers at all times, while the rest of the subscribers get a variable QoE depending on the time of day, network capacity, and any congestion at peak times.
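The combination of subscriber profile and detected application can be sketched as a simple rule table plus a greedy bandwidth allocator. This is only a conceptual illustration of the PCRF/PCEF split described above; the tier names, priority values, and capacity figures are made up for the example and are not 3GPP-defined values.

```python
# Illustrative PCRF-style rules: (subscriber tier, application) -> priority.
# Lower number = higher priority; anything unmatched falls to best effort.
PRIORITY_RULES = {
    ("premium", "video"): 1,
    ("premium", "web"):   2,
    ("basic",   "video"): 3,
    ("basic",   "web"):   4,
}

def classify(session):
    """The 'PCRF' step: pick a priority for a session."""
    key = (session["subscriber_tier"], session["application"])
    return PRIORITY_RULES.get(key, 5)   # unknown traffic: best effort

def allocate(sessions, capacity_mbps):
    """The 'PCEF' step: serve sessions in priority order; lower priorities
    share whatever bandwidth remains."""
    allocation, remaining = {}, capacity_mbps
    for s in sorted(sessions, key=classify):
        granted = min(s["demand_mbps"], remaining)
        allocation[s["id"]] = granted
        remaining -= granted
    return allocation

sessions = [
    {"id": "A", "subscriber_tier": "basic",   "application": "video", "demand_mbps": 6},
    {"id": "B", "subscriber_tier": "premium", "application": "video", "demand_mbps": 8},
    {"id": "C", "subscriber_tier": "basic",   "application": "web",   "demand_mbps": 4},
]
print(allocate(sessions, capacity_mbps=10))  # {'B': 8, 'A': 2, 'C': 0}
```

Under congestion (10 Mbps of capacity against 18 Mbps of demand) the premium video session gets its full 8 Mbps, while the basic sessions squeeze into the remainder, which is exactly the variable QoE for non-premium traffic described above.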

Multiple other techniques exist for improving QoE in operators' networks and ensuring optimal management of the increasing OTT traffic, including video optimization. Today Light Reading published an interesting piece about the evolution of this topic (here).

Monetizing OTT services

Monetizing OTT services has been an obsession for most operators in modern networks, since some of these providers are running highly successful businesses using the operators' networks as free transport to deliver content and services to end users. Applications like WhatsApp or Skype let subscribers communicate by text, voice, and video without, in most cases, paying a premium to the operators. Portals like Netflix provide video on demand in the same way. It is difficult to charge for and control this traffic separately in the operator's premises even with the most advanced Deep Packet Inspection (DPI) systems and Policy Management nodes, and the operators are losing revenue on their own services to these OTTs. Ericsson's approach with the MCA offers a different monetization target instead, allowing operators to sell prioritization to the actual content providers as a means of ensuring a high QoE when the subscriber is loading their content. As commented in my previous article "Three short stories on today's Mobile Networks Performance", research by the University of Massachusetts Amherst and Akamai Technologies shows that users start abandoning videos that do not load within 2 seconds, and the abandonment rate grows with higher latencies. The situation is the same with web pages; an infographic from Strangeloop Networks can be found below. According to Ericsson's math during the MCA presentations, a single second shaved off the loading times of popular content on Amazon or Netflix could represent a billion-dollar gain at the end of the year, so there is your business case.
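To make the abandonment effect concrete, here is a back-of-the-envelope sketch of the relationship the UMass Amherst/Akamai study describes: little abandonment below a roughly 2-second startup delay, then near-linear growth with every extra second. The per-second rate used here is an illustrative approximation, not the study's exact figure.

```python
def abandonment_rate(startup_delay_s, threshold_s=2.0, rate_per_s=0.06):
    """Approximate share of viewers abandoning a video.

    Assumes (roughly following the UMass/Akamai finding) that abandonment
    starts after ~2 s of startup delay and grows close to linearly with
    each additional second; rate_per_s is an illustrative value.
    """
    extra_delay = max(0.0, startup_delay_s - threshold_s)
    return min(1.0, extra_delay * rate_per_s)

for delay in (1, 2, 5, 10):
    print(f"{delay:>2} s startup -> ~{abandonment_rate(delay):.0%} abandon")
```

Even with these rough numbers, a content provider can translate every second of delay into a percentage of lost viewers, and from there into lost revenue, which is precisely the business case Ericsson is pitching to them.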

Solutions like the MCA represent an interesting attempt to improve OTT monetization in operators' networks. The driver for adopting such solutions is clearly the combination of improved QoE for the most popular content and the additional revenue from content providers' deals with the operator. We will have to wait and see whether this approach succeeds... we could be asking KT soon.

A. Rodriguez

My hands-on experience with NFV for the EPC


I have written many times about the Network Functions Virtualization (NFV) and Software Defined Networking (SDN) revolutions for telecom operators, but last week I had the chance to attend a hands-on workshop with some of the most advanced vendors in the core network virtualization field. I was able to test the products, ask all the questions I wanted, and get a real feel for the vendors' and operators' opinions on the subject. Despite the trip to Silicon Valley in sunny California, the beautiful San Francisco sights, and the unavoidable visit to the technology giants (e.g. Google in Mountain View CA, Apple in Cupertino CA, Oracle in Redwood Shores CA, etc.), my objective was to do a reality check on the NFV trend, which I will try to share with you.

What is ON with NFV:

The advantages of NFV for CSPs are obvious; as previously commented in my article "The Network Functions Virtualization, a different kind of animal", these include: using COTS hardware, flexible automatic scaling and HA based on software, licensing cost reductions as a consequence of unified software domains, signalling load reduction, and pure IT software-based maintenance and operation, among others. The operators are all well aware of this, either on their own initiative or because of the NFV/SDN vendors' sales efforts, and that is why most of them are researching the technology and have already run trials (e.g. Telefonica, AT&T, NTT, Vodafone, Verizon, Orange, Optus, Telecom Italia, T-Mobile, to name a few I know of).

According to the information seen these days, the ETSI ISG standards for NFV will most likely be released around October 2014, which should unify the different approaches in the market today. In the meantime the vendors seem to be taking different paths, such as virtualizing the current core network nodes one by one (e.g. virtual S-GW, virtual P-GW, virtual MME, etc.) or virtualizing the functions required in the core (e.g. virtual attach & register, database, bearer handling, policy, etc.). If you think that NFV for the core, or the Evolved Packet Core (EPC), is moving slowly and that tier-1 operators will wait years to test these technologies, you had better think again. Many products are available now, and some mavericks in the industry are already betting hard on the change.

In terms of the actual products, these already deliver on some of the promises mentioned. I saw software-based virtual EPCs running and handling test traffic with functionality equivalent to the traditional core, while reducing signalling messages and showing impressive flexibility in flow logic and scaling. I also saw OpenStack-based orchestration working, and APIs connecting to the operators' OSS/BSS. Some HA capabilities are also quite innovative, like methods for managing SCTP flows when a virtual machine goes down and another takes over. All of this was running on standard blades, or bare metal, at a ridiculously low cost compared to the current traditional solutions.
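The scaling flexibility I saw boils down to a control loop: the orchestrator watches the load on each instance of a virtual network function and adds or removes instances accordingly. The sketch below shows one plausible shape of that decision logic; the thresholds and instance limits are made-up example values, not taken from any vendor's product.

```python
def desired_instances(current, load_per_instance,
                      high=0.8, low=0.3, min_instances=2, max_instances=16):
    """Decide how many instances of a virtual EPC function to run.

    Scale out when average load per instance is high, scale in when it is
    low, and never drop below two instances so a single VM failure cannot
    take the function down. All thresholds here are illustrative.
    """
    if load_per_instance > high and current < max_instances:
        return current + 1
    if load_per_instance < low and current > min_instances:
        return current - 1
    return current

print(desired_instances(4, 0.9))  # 5: scale out under load
print(desired_instances(4, 0.2))  # 3: scale in when idle
print(desired_instances(2, 0.1))  # 2: never below the HA floor
```

In a real deployment the orchestrator (e.g. one built on OpenStack) would run a loop like this continuously, spawning or retiring VMs and rebalancing sessions, which is what makes the software-based HA and elasticity of these products so different from dimensioning a traditional hardware core.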

What is OFF with NFV:

As you would expect, the current NFV solutions are not all roses. The bad news is the lack of maturity seen in most of the solutions, typical of a young and revolutionary technology.

Automatic scaling is not yet mastered, and neither are the management and monitoring capabilities. Some solutions are still unable to match the performance of traditional cores when activating Deep Packet Inspection (DPI) up to layer 7, although this is being optimized with virtual DPIs now. There are also challenges in handling distributed NoSQL databases for things like HA. Standards support is not complete either, as most of the solutions do not yet cover 3GPP Release 12, to name one example. The policy and charging features are very limited, often relying on external solutions, which potentially undermines the improved performance. There also seems to be a lack of security features in the products, among other limitations.

These challenges, combined with the additional costs a new EPC represents (no operator intends to fully replace the current core network yet), the fear of mentality change in the different areas of the carrier, and the lack of knowledge of the NFV/SDN details and possible use cases, are currently blocking adoption of the technology. An interesting article on this is available in Light Reading (here), and it reflects what I heard from some operators in the field.


What is coming for NFV:

Luckily for us, some intelligent carriers are solving those challenges by having a vision of the future today. Some operators in the US are considering interesting use cases, like portable EPCs for special events in highly congested areas (imagine installing a virtual core network next to the radios around the stadium on Super Bowl day, reducing congestion and improving the QoE); they are already testing this as you read this article. Other carriers in the UK and Japan are considering dedicated core networks for M2M traffic based on NFV. The NFV start-ups, including vendors like Cyan, Connectem, and Affirmed, among many others, are improving their products, making them more robust and solving the challenges they face. Some big vendors, including Ericsson, Juniper, and Alcatel-Lucent, among many others, are also perfecting their NFV offers to enter the game.

As soon as we start seeing production deployments in the field, and I anticipate this will happen very soon based on what I saw, other operators will join the trend and learn from the competition. This is the future of telecoms.

A. Rodriguez