Is Monetizing OTT Content a new flavour of the same old?

Today Korea Telecom (KT) announced it will be using Ericsson’s Mobile Cloud Accelerator (MCA), news that can be read in multiple sources including Azi Ronen’s Broadband Traffic Management Blog (here). Following KT’s tradition of adopting highly innovative technologies early, now encouraged by the huge LTE growth in that country, it becomes the first operator to use the MCA, a solution that promises a better Quality of Experience (QoE) by combining caching platforms with traffic prioritization in the access network. What I find most interesting about the MCA is of course the technical detail of that combination of caching and prioritization, but even more so how Ericsson is marketing (and selling) it as a means of monetizing OTT content. Let us walk through the particulars in the lines that follow.

Content Caching

There are many Content Delivery/Distribution Network (CDN) solutions and providers in the market, operating huge data centres that store content providers’ popular material and deliver it with high availability and performance thanks to distributed networks and techniques like smart load balancing. For example, an Over-The-Top (OTT) provider like Netflix could store the most popular Warner Brothers movies in CDN data centres, so that subscribers requesting them over AT&T’s or Telefonica’s networks are served directly from those highly efficient data centres, resulting in a faster service and a higher QoE. In the MCA solution, Ericsson pre-integrates one of the most popular CDN platforms, from Akamai Technologies, Inc.
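
To make the caching idea concrete, here is a minimal sketch of the kind of time-to-live (TTL) cache an edge node relies on. The class name and content identifier are hypothetical; real CDN nodes layer consistent hashing, cache-control headers, and origin failover on top of this basic pattern.

```python
import time

class EdgeCache:
    """A minimal in-network content cache with time-based expiry.

    Illustrative sketch only; not an actual CDN vendor API.
    """

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}  # content_id -> (payload, fetch_timestamp)

    def get(self, content_id, fetch_from_origin):
        entry = self.store.get(content_id)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]  # cache hit: served locally from the edge
        payload = fetch_from_origin(content_id)  # cache miss: go to origin
        self.store[content_id] = (payload, time.time())
        return payload

# Popular titles are fetched once from the origin, then served locally:
cache = EdgeCache(ttl_seconds=600)
movie = cache.get("wb-title-42", lambda cid: f"<bytes of {cid}>")
```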

Traffic Prioritization

Traffic prioritization, on the other hand, is a Policy Management and Enforcement (PCRF/PCEF) technique, typically used by operators in the core network to ensure premium content (for premium subscribers) gets the maximum available bandwidth, while less valuable content is delivered on the remaining bandwidth, or “best-effort”. Priorities are typically defined in the PCRF and enforced in the PCEF elements (e.g. DPIs or the actual traffic gateways such as the GGSN or P-GW) according to the services defined by the operator. Prioritization can be based on subscriber profiles (e.g. subscribers paying more for better priority in bandwidth allocation), on the traffic itself (e.g. a ranking of protocols or applications), or on a combination of both, the latter being the most typical scenario. The result is a guaranteed QoE for premium traffic and/or subscribers at all times, while the remaining subscribers get a variable QoE depending on the time of day, network capacity, and any congestion at peak times.
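
As an illustration of how profile-based and traffic-based priorities combine, here is a toy policy function. The priority values and application classes are assumptions for the example, not actual PCRF configuration; in 3GPP terms these priorities would map to QCI/ARP values pushed by the PCRF and enforced by the PCEF.

```python
from dataclasses import dataclass

# Hypothetical application priorities, lower number = served first.
APP_PRIORITY = {"voip": 1, "video": 2, "web": 3, "p2p": 4}

@dataclass
class Subscriber:
    msisdn: str
    premium: bool

def priority_for(subscriber, detected_app):
    """Combine the subscriber profile with the traffic class detected
    (e.g. by DPI) into a single enforcement priority."""
    base = APP_PRIORITY.get(detected_app, 5)
    # Premium subscribers get a one-level boost, capped at the top level.
    return base - 1 if subscriber.premium and base > 1 else base

# A premium subscriber's video outranks a standard subscriber's video:
gold = Subscriber("46700000001", premium=True)
basic = Subscriber("46700000002", premium=False)
assert priority_for(gold, "video") < priority_for(basic, "video")
```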

Multiple other techniques exist for improving QoE in operators’ networks and ensuring optimal management of the growing OTT traffic, including video optimization. Today Light Reading published an interesting piece about the evolution of this topic (here).

Monetizing OTT services

Monetizing OTT services has become an obsession for most operators, since some of these providers run highly successful businesses using the operators’ networks as free transport for delivering content and services to end users. Applications like WhatsApp or Skype let subscribers communicate by text, voice, and video without, in most cases, paying the operator a premium; portals like Netflix provide video on demand in the same way. Even with the most advanced Deep Packet Inspection (DPI) systems and Policy Management nodes it is difficult to charge for and control this traffic separately on operator premises, and operators are losing revenue on their own services to these OTTs. Ericsson’s approach with the MCA targets a different monetization model instead, allowing operators to sell prioritization to the content providers themselves as a means of ensuring high QoE when subscribers load their content. As commented in my previous article “Three short stories on today’s Mobile Networks Performance”, research by the University of Massachusetts Amherst and Akamai Technologies shows users start abandoning videos that do not load within 2 seconds, and the abandonment rate grows with latency. The situation is the same with web pages, as shown in an infographic from Strangeloop Networks. According to Ericsson’s math during the MCA presentations, a single second shaved off the loading time of popular content at Amazon or Netflix could represent a billion-dollar gain at the end of the year, so there is your business case.
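
To put a rough number on that abandonment curve, here is a toy model assuming a linear abandonment rate beyond the two-second threshold the study describes. The 6%-per-second slope is an assumed round figure for illustration, not a number from the study.

```python
# Rough illustration of the UMass Amherst / Akamai finding: viewers start
# abandoning a video after ~2 s of startup delay, and the abandonment
# rate grows with every extra second of waiting.
def viewers_remaining(startup_delay_s, per_second_abandon=0.06):
    excess = max(0.0, startup_delay_s - 2.0)  # no loss under 2 seconds
    return max(0.0, 1.0 - per_second_abandon * excess)

for delay in (1, 2, 5, 10):
    print(f"{delay:>2} s startup delay -> "
          f"{viewers_remaining(delay):.0%} of viewers remain")
```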

Solutions like the MCA represent an interesting attempt to improve OTT monetization in operators’ networks. The driver for adopting them is clearly the combination of improved QoE for the most popular content and an additional revenue stream from the content providers’ deals with the operator. We will have to wait and see whether this approach succeeds… we may be asking KT soon.

A. Rodriguez

Three short stories on today’s Mobile Networks Performance

Ensuring network quality for an optimal end-user experience is often a challenging task for mobile network operators. While the carriers’ engineers tune the systems for the most efficient usage under the required load, they might be degrading the subscribers’ service under particular conditions, depending on the applications in use, the coverage and access technologies available at a given location, or even the not-always-optimal policies used for access technology selection.

Evolved QoE – Application Performance who?

Nowadays, delivering quality services to mobile subscribers has evolved beyond traditional network availability and quality. Today’s users demand sufficient performance for each type of application they use, which leads to profile-based traffic modelling and increases the complexity of Quality of Experience (QoE) evaluation for the carriers. Evaluating QoE is hard for operators; as published by the GSA and Ericsson this month (here): “A 2012 study from the University of Massachusetts Amherst and Akamai Technologies found that internet users start abandoning attempts to view online videos if they do not load properly within two seconds. As time goes on, the rate at which viewers give up on a given video increases”, and “with the rise of mobile-broadband and smartphone usage over the past few years, the meaning of user experience has changed dramatically”.

What used to be measured as coverage and bandwidth capacity now extends to per-application performance and end-user experience, involving signal coverage maps, latency analysis, QoS, security features, and loading speeds for web pages, online multimedia content (e.g. HD video), and apps, among others. As explained and exemplified in a recent Ericsson whitepaper on network performance (here): “Network performance cannot be generalized because the only true measurement of performance is the experience of those who use it.”, “App coverage is one way we describe this performance. It refers to the proportion of a network’s coverage that has sufficient ability to run a particular app at an acceptable quality level. For example, an area that has 95 percent coverage for voice calls, may have 70 percent coverage for streaming music and only 20 percent coverage for streaming HD video. A consumer’s usage patterns will determine their preferred type of coverage.”
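
The app coverage concept boils down to a simple calculation over measured throughput samples: the share of locations whose throughput meets a given application’s needs. The sketch below assumes illustrative per-application Mbps thresholds (mine, not Ericsson’s), and with a single set of samples yields a different coverage figure per app, echoing the 95/70/20 pattern quoted above.

```python
# Assumed minimum throughput each application needs to run acceptably.
APP_NEEDS_MBPS = {"voice": 0.1, "music_streaming": 0.3, "hd_video": 5.0}

def app_coverage(throughput_samples_mbps, app):
    """Fraction of measured locations that can sustain the given app."""
    need = APP_NEEDS_MBPS[app]
    ok = sum(1 for t in throughput_samples_mbps if t >= need)
    return ok / len(throughput_samples_mbps)

# Hypothetical drive-test throughput samples (Mbps) across one area:
samples = [0.05, 0.2, 0.4, 1.0, 2.0, 4.0, 6.0, 8.0, 0.3, 5.5]
for app in APP_NEEDS_MBPS:
    print(f"{app}: {app_coverage(samples, app):.0%}")
```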

Indoor small cells – Please mind the gap between the macro and small-cell platforms

Evolved small cells for indoor installations are coming to fill the coverage gap between the macro networks (4G/LTE, 3G, 2G, etc.) and existing small-cell technologies (pico and femto cells, etc.). Ericsson recently announced a new solution called the Radio Dot System (here), which is, according to them, “The most cost-effective, no-compromise solution to indoor coverage challenges”. It is well known that operators struggle to cover indoor areas and buildings cost-effectively, even though more than 70% of traffic is generated there. The solution is ultra-small, light, scalable, and fast to deploy, and relies on an Ethernet connection to integrate with the existing mobile network.

Although Ericsson’s solution should not be available before next year, we can expect similar solutions to appear on the market in the near future. This trend will potentially aim to take over part of the usage currently carried on WiFi, which most users prefer for indoor communications.

Smart access network selection – The seamless cellular and WiFi access marriage

A recent report from 4G Americas (here) analyses the role of WiFi in current mobile data services, and the methods for overcoming the challenges that arise from integration and mobility between cellular and WiFi technologies. As they state: “with smartphone adoption continuing to rise and the increasing prevalence of bandwidth-intensive services such as streaming video, the limited licensed spectrum resources of existing cellular networks are as constrained as ever. Wi-Fi, and its associated unlicensed spectrum, presents an attractive option for mobile operators – but improved Wi-Fi/cellular network interworking is needed for carriers to make optimal use of Wi-Fi.”

The so-called interworking between traditional mobile access technologies and WiFi networks must be seamless and transparent to end users. Service continuity must be assured when a subscriber moves, for example, from 4G/LTE coverage into a WiFi-covered zone and back, using methods like an automatic offload policy. Different approaches are currently used for this interworking, such as session continuity, client-based mobility, and network-based mobility. One of the most popular and widely accepted, also standardized by the 3GPP, is the network-based Access Network Discovery and Selection Function (ANDSF), which is already supported by most WiFi devices and network elements, including policy managers and specific network gateways. Other innovations have been made available to address seamless interworking, in standards like Hotspot 2.0 and seamless SIM-based authentication.

As commented in my previous post “The top 10 fast facts you should know about LTE today”, 5G will be a combination of access technologies jointly fulfilling the requirements of the future. In these scenarios seamless network selection and mobility become even more important, beyond the classical offload cases, and 4G Americas and vendors like Ericsson point out some particular issues: premature WiFi selection (shifting access technology while coverage is still too weak due to distance), unhealthy choices (offloading traffic to already overloaded systems), lower capabilities (offloading to alternative technologies with lower-performing networks), and ping-pong effects (frequent access technology switching due to mobility, degrading QoE). The sketch below illustrates how a selection policy can guard against some of these.
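
This is a minimal, assumed-threshold sketch of an ANDSF-style selection rule; real ANDSF policies carry far richer criteria (location, time of day, per-APN preferences). The dBm and load values are illustrative operator settings, not standardized figures.

```python
# Toy access selection policy addressing three of the issues above:
# premature Wi-Fi selection, unhealthy choices, and ping-pong.
WIFI_ENTER_DBM = -65  # only offload when Wi-Fi is comfortably strong
WIFI_EXIT_DBM = -75   # only leave Wi-Fi when it is clearly degraded

def select_access(current, wifi_rssi_dbm, wifi_load_pct):
    if current != "wifi":
        # Premature-selection and unhealthy-choice guards: require a
        # strong signal AND a lightly loaded access point before joining.
        if wifi_rssi_dbm >= WIFI_ENTER_DBM and wifi_load_pct < 70:
            return "wifi"
        return "lte"
    # Ping-pong guard: the lower exit threshold (hysteresis) keeps the
    # device on Wi-Fi through small signal fluctuations at the edge.
    return "wifi" if wifi_rssi_dbm >= WIFI_EXIT_DBM else "lte"

assert select_access("lte", -70, 10) == "lte"    # not strong enough to join
assert select_access("wifi", -70, 10) == "wifi"  # but strong enough to stay
```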

A. Rodriguez

My hands-on experience with NFV for the EPC

I have written many times about the Network Functions Virtualization (NFV) and Software Defined Networking (SDN) revolutions for telecom operators, but last week I had the chance to attend a hands-on workshop with some of the most advanced vendors in the core network virtualization field. I was able to test the products, ask all the questions I wanted, and get a real feel for the vendors’ and operators’ opinions on the subject. Beyond the trip to Silicon Valley in sunny California, the beautiful San Francisco sights, and the unavoidable visits to the technology giants (e.g. Google in Mountain View, Apple in Cupertino, Oracle in Redwood Shores), my objective was a reality check on the NFV trend, which I will try to share with you.

What is ON with NFV:

The advantages of NFV for CSPs are obvious, as previously commented in my article “The Network Functions Virtualization, a different kind of animal”; these include: using COTS hardware, flexible automatic scaling and HA in software, licensing cost reductions from unified software domains, signalling load reduction, and pure IT-style software maintenance and operation, among others. The operators are all well aware of this, either on their own initiative or through the NFV/SDN vendors’ sales efforts, and that is why most of them are researching the technology and have already run trials (e.g. Telefonica, AT&T, NTT, Vodafone, Verizon, Orange, Optus, Telecom Italia, and T-Mobile, to name a few I know of).

According to the information presented these days, the release of the ETSI ISG standards for NFV will most likely happen around October 2014, which should unify the different approaches in the market today. In the meantime, vendors seem to be taking different paths, such as virtualizing the current core network nodes one by one (e.g. virtual S-GW, virtual P-GW, virtual MME) or virtualizing the functions required in the core (e.g. virtual attach & register, database, bearer handling, policy). If you think NFV for the core, or the Evolved Packet Core (EPC), is moving slowly and that tier-1 operators will wait years to test these technologies, you had better think again. Many products are available now, and some mavericks in the industry are already betting hard on the change.

In terms of actual products, these already deliver on some of the promises mentioned. I saw software-based virtual EPCs running and handling test traffic with functionality equivalent to the traditional core, while reducing signalling messages and offering impressive flexibility in flow logic and scaling. I also saw OpenStack-based orchestration working, and APIs connecting to the operators’ OSS/BSS. Some HA capabilities are also quite innovative, like methods for managing SCTP flows when one virtual machine goes down and another takes over. All of this was running on standard blades, or bare metal, at a fraction of the cost of current traditional solutions.
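
To give a feel for the elasticity logic, here is a toy scale-out/scale-in decision of the kind an orchestrator applies to bearer-handling instances. The session thresholds are assumptions, and the spawn/retire helpers mentioned in the comments are placeholders for real orchestration calls (e.g. via OpenStack), not actual API names.

```python
def rebalance(vm_sessions, max_per_vm=100_000, min_per_vm=20_000):
    """Decide whether the virtual EPC pool should grow or shrink,
    given the current session count on each VM (assumed thresholds)."""
    avg = sum(vm_sessions) / len(vm_sessions)
    if avg > 0.8 * max_per_vm:
        return "scale_out"  # e.g. spawn a new bearer-handling instance
    if len(vm_sessions) > 1 and avg < min_per_vm:
        return "scale_in"   # e.g. drain and retire an instance
    return "steady"

print(rebalance([95_000, 90_000]))  # -> scale_out
print(rebalance([10_000, 8_000]))   # -> scale_in
```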

What is OFF with NFV:

As you would expect, the current NFV solutions are not all roses. The bad news is the lack of maturity seen in most of the solutions, typical of young, revolutionary technologies.

Automatic scaling is not yet mastered, nor are management and monitoring capabilities. Some solutions still cannot match the performance of traditional cores when Deep Packet Inspection (DPI) up to layer 7 is activated, although this is being optimized with virtual DPIs now. There are also challenges in handling distributed NoSQL databases for things like HA. Standards support is still incomplete too; most solutions do not yet cover 3GPP Release 12, to name one example. Policy and charging features are very limited, often relying on external solutions, which can undercut the improved performance. And there seems to be a lack of security features in the products, among other limitations.

These challenges, combined with the additional cost a new EPC represents (no operator intends to fully replace its current core network yet), the fear of a mentality change across the carrier’s departments, and the lack of knowledge of NFV/SDN details and possible use cases, are currently blocking adoption of the technology. An interesting article on this is available at Light Reading (here), and it reflects what I heard from some operators in the field.

What is coming for NFV:

Luckily for us, some intelligent carriers are solving those challenges by envisioning the future today. Some operators in the US are considering interesting use cases, like portable EPCs for special events in highly congested areas (imagine installing a virtual core network next to the radios around the stadium on Super Bowl day, reducing congestion and improving QoE); they are already testing this as you read this article. Other carriers in the UK and Japan are considering dedicated core networks for M2M-type traffic based on NFV. The NFV start-ups, including vendors like Cyan, Connectem, and Affirmed, among many others, are improving their products, making them more robust and solving the challenges faced. Some big vendors, including Ericsson, Juniper, and Alcatel-Lucent, among many others, are also refining their NFV offerings to enter the game.

As soon as we start seeing production deployments in the field, and I anticipate this will happen very soon based on what I saw, other operators will join the trend and learn from the competition. This is the future of telecoms.

A. Rodriguez

The top 10 fast facts you should know about LTE today

1. LTE is the fastest deployment technology ever:

According to a report released by the Global mobile Suppliers Association (GSA) a few days ago, “LTE is the fastest developing mobile system technology ever” (here). It took LTE less than 4 years to reach the number of deployments that took 3G more than 6 years.

Consider that NTT Docomo launched the first commercial 3G network in May 2001, and by December 2007 a total of 190 3G networks were operating in 40 countries. Now consider that TeliaSonera launched the first commercial LTE network in December 2009, and by July 2013 a total of 194 LTE networks were operating around the world, according to the GSA. The quick calculation below makes the difference in pace concrete.
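
This is a rough sketch using the figures above, approximating launch months to the first of the month:

```python
from datetime import date

# Deployment pace implied by the GSA figures quoted above.
milestones = {
    "3G": (190, date(2001, 5, 1), date(2007, 12, 1)),
    "LTE": (194, date(2009, 12, 1), date(2013, 7, 1)),
}

for name, (networks, start, end) in milestones.items():
    years = (end - start).days / 365.25
    print(f"{name}: {networks} networks in {years:.1f} years "
          f"(~{networks / years:.0f} launches per year)")
# 3G reached ~29 launches/year over 6.6 years; LTE ~54/year over 3.6.
```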

2. LTE-Advanced is real and keeps coming:

Ericsson announced this week the first deployment of carrier aggregation on a commercial LTE-Advanced network, using the 1800MHz and 900MHz spectrum bands (here). With this, Telstra, the Australian operator, joins South Korea and Russia among the first LTE-Advanced deployments in the world.

According to Telstra’s Executive Director of Networks, Mike Wright: “Telstra’s LTE subscriber numbers are growing dramatically, with nearly 3 million subscribers currently using the LTE network, up from 2.1 million six months ago. The capacity, higher data speeds and efficiencies provided by LTE-Advanced will help manage growth in data traffic as more customers choose our network…”

3. Total LTE subscriptions worldwide are expected to reach 1.36 billion by the end of 2018:

According to a recent GSA report, the total number of LTE subscriptions worldwide is expected to reach around 1.36 billion by the end of 2018. The growth rate, particularly strong this year, is the result of the deployments carried out during 2013 by operators including Verizon Wireless, SK Telecom, NTT Docomo, Everything Everywhere, and Vodafone Germany, all of which sped up LTE rollouts, device penetration, and services this year.

4. LTE 1800 is the key band for roaming:

Over 43% of commercially launched LTE networks use the 1800MHz spectrum band, according to the GSA. A recent report from Informa Telecoms & Media comments: “The adoption of the 1800MHz band for LTE has exploded over the last year, as mobile operators are attracted by the band’s unique set of advantages, such as widespread availability, excellent coverage and the possibility of reusing existing network assets. Coupled with strong support from LTE-device manufacturers, these benefits make 1800MHz an ideal band for LTE services, and a strong candidate to provide a globally harmonized roaming solution for LTE.” (here).

If you have an LTE device that can operate in both band 3 (1800MHz) and band 7 (2.6GHz), you could potentially use it in at least 61 countries today (81% of the countries where LTE is commercially available). At least 363 LTE devices have been announced with the capability to operate in bands 3 and 7.

5. Today almost 1,000 LTE devices are 3GPP Category 3, and 40 are Category 4… and counting:

Information from the GSA indicates 948 LTE user devices are confirmed to comply with the 3GPP Category 3 definition. LTE Category 4 implies higher peak rates, up to 150Mbps downlink and 50Mbps uplink, and 40 LTE user devices are already confirmed to support it. These devices include dongles, routers, hotspots, smartphones, and embedded modules, and the Category 4 numbers will keep growing.

6. Almost 60 LTE TDD networks are commercially deployed or being planned:

Long-Term Evolution Time-Division Duplex (LTE TDD) offers flexibility for asymmetric spectrum, an advantage for operators, especially considering the limitations of wireless spectrum capacity and the increasing demand expected in the future. According to the GSA, 18 commercial LTE TDD networks exist around the world today (details can be seen in the LTE table in my previous post “The European race for Wireless Spectrum”), and 9 networks combine LTE FDD and TDD for cost reduction and increased capacity. An additional 41 LTE TDD networks are currently in deployment or planned. Many operators are also running trials and studies as this technology becomes more popular.

7. Small cells will become a critical technology in the future:

As commented in my previous post “The European race for Wireless Spectrum”, small cells (together with Wi-Fi) will become a critical ally of the macro networks in serving the increasing usage demand. The networks of the (short- and mid-term) future will most likely combine LTE networks and small cells, offloading traffic when required to ensure an optimal quality of experience (QoE).

8. Huawei and Ericsson are dominating the LTE infrastructure market:

A report released by Informa Telecoms & Media this week, based on data provided and validated by different vendors, estimates Huawei has been awarded 40% of the LTE infrastructure contracts in the world and Ericsson another 34%. The runner-up is NSN with 17%, and others like ALU, ZTE, Samsung, and NEC account for the remaining 9% of the allocated contracts. The report states the contracts are being awarded to Huawei and Ericsson mainly on the strength of their technology, pricing, support, and managed-services capabilities.

9. More smartphones, more video, and a lot more mobile traffic usage:

Different reports from the GSA and this year’s Ericsson Mobility Report comment on the growing trend of mobile data usage. Around 50% of phones sold in the first quarter of 2013 were smartphones, up from 40% across the full year 2012. Mobile data traffic doubled from the first quarter of 2012 to the first quarter of 2013, and, driven mainly by video, it is expected to grow another 12 times by 2018, with LTE as the main access technology for these services. Online video is the main contributor to mobile traffic usage; the GSA estimates around 100 hours of video are uploaded every minute today, with YouTube the most used service. The quick calculation below shows what a twelvefold increase implies per year.
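
For perspective, a twelvefold increase over five years implies the following compound annual growth rate (taking 2013 as the baseline year is my assumption):

```python
# Implied compound annual growth rate of the "12x by 2018" forecast.
growth_factor, years = 12, 5           # 2013 -> 2018
cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")     # ~64% per year
```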

10. Transition to 5G will take place from around 2020:

A whitepaper published by Ericsson and supported by the GSA analyses the status of 5G research, its standardization process, and the technical challenges it will have to face before being ready for the market (here). From this report we can highlight: “…a much wider variety of devices, services and challenges than those accommodated by today’s mobile-broadband systems will have to be addressed (for 5G). Due to this diversity, the 5G system will not be a single technology but rather a combination of integrated RATs, including evolved versions of LTE and HSPA, as well as specialized RATs for specific use cases, which will jointly fulfil the requirements of the future. The research required for the development of 5G is now well underway. The recently founded European METiS (Mobile and wireless communications Enablers for the Twenty-twenty information Society) project is aimed at developing the fundamental concepts of the 5G system and aligning industry views.”

A. Rodriguez