My hands-on experience with NFV for the EPC


I have written many times about the Network Functions Virtualization (NFV) and Software Defined Networking (SDN) revolutions for telecom operators, but last week I had the chance to attend a hands-on workshop with some of the most advanced vendors in the core network virtualization field. I was able to test the products, ask all the questions I wanted, and get a real feel for the vendors' and operators' opinions on the subject. Despite the trip to Silicon Valley in sunny California, the beautiful San Francisco sights, and the unavoidable visit to the technology giants (e.g. Google in Mountain View CA, Apple in Cupertino CA, Oracle in Redwood Shores CA, etc.), my objective was to do a reality check on the NFV trend, which I will try to share with you.

What is ON with NFV:

The advantages of NFV for CSPs are obvious; as previously discussed in my article “The Network Functions Virtualization, a different kind of animal”, they include: using COTS hardware, software-based flexible automatic scaling and high availability (HA), licensing cost reductions as a consequence of unified software domains, signalling load reduction, and pure IT-style software maintenance and operation, among others. The operators are all well aware of this, either on their own initiative or because of the NFV/SDN vendors' sales efforts, and that is the reason why most of them are researching the technology and have already run trials (e.g. Telefonica, AT&T, NTT, Vodafone, Verizon, Orange, Optus, Telecom Italia, T-Mobile, to name a few I know of).

According to the information I saw during those days, the release of the ETSI ISG standards for NFV will most likely happen around October 2014, and this should unify the different approaches in the market today. In the meantime the vendors seem to be taking different paths, like virtualizing the current core network nodes one by one (e.g. virtual S-GW, virtual P-GW, virtual MME, etc.), or virtualizing the functions required in the core (e.g. virtual attach & register, database, bearer handling, policy, etc.). If you think NFV for the core, or the Evolved Packet Core (EPC), is moving slowly and that tier-1 operators will wait years before testing these technologies, you had better think again. Many products are available now, and some mavericks in the industry are already betting hard on the change.

In terms of the actual products, these already deliver some of the promises mentioned above. I was able to see software-based virtual EPCs running and handling test traffic with functionality equivalent to the traditional core, while reducing the signalling messages and offering impressive flexibility in the flow logic and scaling. I also saw OpenStack-based orchestration working, and APIs connecting to the operators' OSS/BSS. Some HA capabilities are also quite innovative, like methods for managing the SCTP flows when a virtual machine goes down and another takes over. All of this was running on standard blades, or bare metal, at a fraction of the cost of the current traditional solutions.
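
To give a flavour of that HA mechanism, below is a minimal, purely illustrative Python sketch of the kind of logic involved: a monitor detects that a virtual machine is down and hands its SCTP associations over to a standby instance. All class and function names are hypothetical, and this is not any vendor's actual implementation.

```python
# Illustrative sketch only: a toy model of VM failure detection and SCTP
# association takeover in a virtual EPC. All names are hypothetical.

import time
from dataclasses import dataclass, field


@dataclass
class VirtualMachine:
    name: str
    alive: bool = True
    # SCTP associations currently anchored on this VM (e.g. S1-MME links to eNodeBs)
    sctp_associations: list = field(default_factory=list)


def heartbeat_ok(vm: VirtualMachine) -> bool:
    """Stand-in for a real liveness check (e.g. heartbeats via the orchestrator)."""
    return vm.alive


def take_over(failed: VirtualMachine, standby: VirtualMachine) -> None:
    """Move the failed VM's SCTP associations to the standby instance."""
    standby.sctp_associations.extend(failed.sctp_associations)
    failed.sctp_associations.clear()
    print(f"{standby.name} took over {len(standby.sctp_associations)} "
          f"associations from {failed.name}")


def monitor(active: VirtualMachine, standby: VirtualMachine, interval_s: float = 1.0) -> None:
    """Naive monitoring loop: poll the active VM and fail over once it stops responding."""
    while True:
        if not heartbeat_ok(active):
            take_over(active, standby)
            break
        time.sleep(interval_s)


if __name__ == "__main__":
    vmme_1 = VirtualMachine("vMME-1", sctp_associations=["eNB-001", "eNB-002", "eNB-003"])
    vmme_2 = VirtualMachine("vMME-2")
    vmme_1.alive = False   # simulate the VM going down
    monitor(vmme_1, vmme_2)
```

In a real product the takeover would of course have to preserve the SCTP state and re-home the peers transparently, which is exactly the kind of detail the vendors were demonstrating.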

What is OFF with NFV:

As you would expect, the current NFV solutions are not all roses. The bad news is the lack of maturity seen in most of the solutions, typical of young and revolutionary technologies.

Automatic scaling is not yet mastered, and neither are the management and monitoring capabilities. Some solutions are still not able to match the performance of the traditional cores when activating Deep Packet Inspection (DPI) up to layer 7, which is now being optimized with virtual DPIs. Some challenges are also seen when handling distributed NoSQL databases for things like HA. Standards support is not complete either, as most of the solutions still do not cover 3GPP Release 12, to name one example. The policy and charging features are very limited, often relying on external solutions, which can offset the performance gains. There also seems to be a lack of security features in the products, among other limitations.
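
To illustrate why automatic scaling is not trivial, here is a minimal, hypothetical Python sketch of naive threshold-based scaling driven by signalling load. The thresholds and the policy are assumptions made up for the example, and a loop this simple can easily oscillate or lag behind real traffic, which is part of what the products still have to master.

```python
# Illustrative sketch only: naive threshold-based scaling of vEPC instances
# driven by signalling load (transactions per second). Thresholds and policy
# are hypothetical, not taken from any product.

SCALE_OUT_TPS = 8000   # add an instance above this average TPS per instance
SCALE_IN_TPS = 2000    # remove an instance below this average TPS per instance
MIN_INSTANCES = 2      # never scale below the HA minimum


def desired_instances(current_instances: int, total_tps: float) -> int:
    """Return the instance count a naive threshold policy would ask for."""
    tps_per_instance = total_tps / max(current_instances, 1)
    if tps_per_instance > SCALE_OUT_TPS:
        return current_instances + 1
    if tps_per_instance < SCALE_IN_TPS and current_instances > MIN_INSTANCES:
        return current_instances - 1
    return current_instances


# Example: a traffic spike followed by a quiet period.
instances = 2
for tps in [5000, 12000, 20000, 25000, 9000, 3000]:
    instances = desired_instances(instances, tps)
    print(f"total TPS={tps:>6}  ->  {instances} instances")
```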

These challenges, combined with the fact that a new EPC represents additional cost (no operator intends to fully replace its current core network yet), the fear of a mentality change in the different areas of the carrier, and the lack of knowledge of the NFV/SDN details and possible use cases, are currently blocking adoption of the technology. An interesting article on this is available in Light Reading (here), and it reflects what I heard from some operators in the field.


What is coming for NFV:

Luckily for us, some intelligent carriers are solving those challenges by having a vision of the future today. Some operators in the US are thinking about interesting use cases, like portable EPCs for special events in highly congested areas (e.g. imagine installing a virtual core network next to the radios around the stadium on Super Bowl day, reducing congestion and improving the QoE); they are already testing this as you read this article. Other carriers in the UK and Japan are thinking about dedicated core networks for M2M-type traffic based on NFV. The NFV start-ups, including vendors like Cyan, Connectem, and Affirmed, among many others, are improving their products, making them more robust and solving the challenges faced. Some big vendors, including Ericsson, Juniper, and Alcatel-Lucent, among many others, are also perfecting their NFV offerings to enter the game.

As soon as we start seeing production deployments in the field, and I anticipate this will happen very soon based on what I saw, other operators will join the trend and learn from the competition. This is the future of telecoms.

A. Rodriguez

Behind the operators’ technical scenes on the new iPhones, new Android, or the new Microsoft deal


As we get closer to Apple's traditional yearly event next week, where new iPhone devices (most likely an iPhone 5S, an iPhone 5C, and possibly a new iPad?) are to be announced (here), Google has announced in parallel the name of their new Android OS version (Android 4.4 KitKat) (here). If you live in this world, you should also know by now that Microsoft bought Nokia's mobile phone division, according to this week's announcement (here). The battle of mobile devices and OSs is as interesting as it has been every year over the last decade, and it has implications for everyone involved in the industry at all possible layers, including of course the subscribers, but also the Communication Service Providers (CSPs) or operators. Whether you work for an operator, a vendor, or a consultancy, or are just a technology fan, you should know off the top of your head the current market share of the mobile OSs, or the split between the Android OS versions installed on handsets, among other stats and facts. I give you a few below to feed your knowledge hunger.

[Charts: mobile OS market share, mobile device market share, and Android version distribution]

A few years back, when Blackberry was booming in the smartphone market with its (by then) innovative products, the operators learned many technical lessons from it the hard way. The network engineers, who until then had focused only on delivering enough bandwidth to supply the subscribers' traffic demand, saw how the push messages and always-communicating nature of the Blackberries boosted the number of sessions established in the networks while keeping the same PDP context established. This, for example, increased the Transactions Per Second (TPS) in the signalling plane of their network elements to values never seen before, leading to service downtime and traffic outages, and forcing a huge change in the scaling and sizing paradigms and methods for the networks. The mass introduction of Blackberry devices multiplied the TPS for the same traffic, and the technical effects were also seen in other areas: for example, after a maintenance window all the devices re-connected to the operator at the same time, leading to transaction bursts, among many other examples I am sure any operator's network engineer from that time can provide. All of this translated into massive revenue losses, which are usually the main trigger for immediate changes in the operators' methods. As the years passed and more and more devices appeared with this same always-connected or always-communicating behaviour (e.g. pretty much all of the smartphones today), the operators adjusted all of their systems and methods to ensure no problems were seen in this regard: improving the sizing and scaling techniques, applying Policy Management and Enforcement tools (PCRF/PCEF), deploying signalling routers or traffic control agents, or simply adjusting profiles and timers for more efficient session handling, among other methods.
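
To put a rough number on that effect, here is a purely illustrative back-of-the-envelope sketch in Python; the subscriber count and per-device transaction rates are assumptions for the example, not measured figures.

```python
# Illustrative back-of-the-envelope only: how always-communicating devices can
# multiply the signalling transactions per second (TPS) for the same number of
# established PDP contexts. All figures are made up for the example.

subscribers = 1_000_000            # attached subscribers, each with one PDP context

# Assumed rate for an "idle" legacy device: a handful of signalling
# transactions per hour (attach/detach, paging, occasional bearer activity).
legacy_transactions_per_hour = 4

# Assumed rate for an always-communicating device whose push/keep-alive traffic
# keeps tearing down and re-establishing signalling sessions.
push_transactions_per_hour = 60

legacy_tps = subscribers * legacy_transactions_per_hour / 3600
push_tps = subscribers * push_transactions_per_hour / 3600

print(f"legacy signalling load : {legacy_tps:,.0f} TPS")
print(f"push-device load       : {push_tps:,.0f} TPS")
print(f"multiplier             : {push_tps / legacy_tps:.0f}x for the same PDP contexts")
```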

Similar challenges have been seen when OS updates are made available for smartphones, since a big part of the subscriber base tends to accept the update as soon as the notification is received, typically around the same time, again creating an unexpected increase in traffic (this time in both bandwidth and TPS) in the networks. Situations like these are also seen during important events, like the football World Cup finals, and most operators today scale and size their systems with these events in mind.
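
As a hedged illustration of that kind of event-driven dimensioning, the small sketch below applies an assumed take-up rate, burst window and safety margin on top of the normal busy hour; none of the figures come from a real operator.

```python
# Illustrative sketch only: sizing signalling capacity for a burst event such as
# an OS update accepted by a large share of subscribers at roughly the same time.
# Every figure, including the safety margin, is an assumption for the example.

busy_hour_tps = 15_000             # assumed normal busy-hour signalling load

subscribers = 1_000_000
update_take_up = 0.30              # share of subscribers accepting the update quickly
extra_transactions_per_update = 6  # re-attach, bearer re-establishment, app sync, ...
burst_window_s = 600               # most acceptances concentrated in ~10 minutes

burst_tps = subscribers * update_take_up * extra_transactions_per_update / burst_window_s
safety_margin = 1.3                # engineering headroom on top of the estimate

required_tps = (busy_hour_tps + burst_tps) * safety_margin
print(f"estimated burst load : {burst_tps:,.0f} TPS on top of the busy hour")
print(f"dimensioning target  : {required_tps:,.0f} TPS")
```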

In other words, in today's world the communication between the marketing and engineering teams of an operator is more important than ever. Announcements like the ones from Apple next week and the ones from Google must be closely monitored, as they represent new challenges and, do not get me wrong, also new opportunities for business. For example, subscribers are free to connect unlocked devices to the networks for browsing, or to select which OS to download and install on their handsets, and this modifies the traffic and usage patterns seen in the networks. We saw this with the introduction of e-readers and tablets, and we will keep seeing it in the future with new devices like smart watches, smart glasses, etc. This represents challenges in meeting the changing demands, but also new opportunities to monetize the new network usage profiles. The role of Business Intelligence (BI) and analytics platforms is becoming critical. Extending these towards more intelligent models, like predictive analytics for the performance of the networks and systems on top of the actual business indicators, is and will be key to the operators' efficiency and the profitability of the telecom business.

A. Rodriguez