Understanding NFV in 6 videos


If the adage says a picture is worth a thousand words, then a video should be worth a million. In today’s post I offer you a quick way to fully understand Network Functions Virtualization (NFV), Software Defined Networking (SDN), and some related trends through six short videos, ranging from the very basics of virtualization and cloud concepts to the depths of the architectures proposed today for NFV deployments.

What the heck are virtualization and cloud about?

A short and self-explanatory video from John Qualls, President and CEO of Bluelock, covering the very basics of the data centre’s transition towards virtualized models.

What is the difference between NFV and SDN?

This great summary from Prayson Pate, Chief Technologist at Overture Networks, highlights the differences and similarities between NFV and SDN, and how the two complement each other in the telecoms industry.

Let us talk about the architecture

Now that the basics are established, we can look at the overall architecture. Take a look at these diagrams from HP and Intel, which show the main components involved.

HP SDN architecture

Intel SDN NFV

So, wait a minute, what is that thing they call OpenFlow?

The following video from Jimmy Ray Purser, technical host for Cisco TechWise and BizWise TV, explains OpenFlow in a quick and straightforward way.

What about OpenStack?

This piece from Rackspace, featuring Niki Acosta and Scott Sanchez, offers a great summary of OpenStack, its origins, and its place in the industry.

Now, what are the challenges faced and some real cases for the carriers?

Now that the concepts are clear and defined, we can study a couple of real use-case scenarios in carriers’ networks and architectures, as well as methods for addressing the challenges faced in the NFV evolution. In the following video Tom Nolle, Chief Architect for CloudNFV, introduces Charlie Ashton, VP of Marketing and US Business Development at 6Wind, and Martin Taylor, CTO at Metaswitch Networks, covering use cases such as the Evolved Packet Core (EPC) and Session Border Controllers (SBC) based on NFV.

Wrapping up, where are the vendors and the operators at with NFV?

The following pitch features Barry Hill, VP of Sales & Marketing at Connectem Inc., at the IBM Smart Camp 2013 hosted in Silicon Valley. It summarizes the market opportunity for NFV, their specific solution for the operators’ EPC, and the carriers’ current status with the technology.

Although the ETSI ISG for NFV will most likely not publish its standards for another year, NFV is already a reality, and vendors and operators alike are working on it in one way or another. Whether you are just starting to explore this trend or have mastered it already, I hope these videos gave you something you did not know before.

A. Rodriguez


Three short stories on today’s Mobile Networks Performance

Ensuring network quality for an optimal end-user experience is often a challenging task for mobile network operators. While the carriers’ engineers tune their systems for the most efficient usage under the required load, the quality of the subscribers’ service can suffer under particular conditions, depending on the applications being used, the coverage and access technologies available in a given location, or even the not-always-optimal policies used for access-technology selection.

Evolved QoE – Application Performance who?

Nowadays, delivering quality services to mobile subscribers has evolved beyond traditional network availability and quality. Today’s users demand sufficient performance for each type of application they use, leading to profile-based modelling of the traffic and increasing the complexity of Quality of Experience (QoE) evaluation for the carriers. Evaluating QoE is hard for operators, as the GSA and Ericsson published this month (here): “A 2012 study from the University of Massachusetts Amherst and Akamai Technologies found that internet users start abandoning attempts to view online videos if they do not load properly within two seconds. As time goes on, the rate at which viewers give up on a given video increases”; “with the rise of mobile-broadband and smartphone usage over the past few years, the meaning of user experience has changed dramatically”.


What used to be measured in terms of coverage and bandwidth capacity now extends to per-application performance and end-user experience, involving signal coverage maps, latency analysis, QoS, security features, and loading speed for web pages, online multimedia content (e.g. HD video), and apps, among others. As explained and exemplified in a recent Ericsson white paper on network performance (here): “Network performance cannot be generalized because the only true measurement of performance is the experience of those who use it.”; “App coverage is one way we describe this performance. It refers to the proportion of a network’s coverage that has sufficient ability to run a particular app at an acceptable quality level. For example, an area that has 95 percent coverage for voice calls, may have 70 percent coverage for streaming music and only 20 percent coverage for streaming HD video. A consumer’s usage patterns will determine their preferred type of coverage”.


Indoor small cells – Please mind the gap between the macro and small cells platforms

Evolved small cells for indoor installations are coming to fill the coverage gap between the macro networks (e.g. 4G/LTE, 3G, 2G) and small-cell technologies (e.g. pico and femto cells). Ericsson recently announced a new solution in this space, the Radio Dot System (here), which according to them is “The most cost-effective, no-compromise solution to indoor coverage challenges”. It is well known that operators struggle to cover indoor areas and buildings in a cost-effective manner, even though more than 70% of traffic is generated in this domain. The solution is ultra-small, light, scalable, and fast to deploy, and it relies on an Ethernet connection to integrate with the existing mobile network.

Although Ericsson’s solution will not be available until next year, we can expect to see similar solutions in the market in the near future. This trend will likely aim to take over part of the current usage of WiFi technologies, which most users prefer for indoor communications.


Smart access network selection – The seamless cellular and WiFi access marriage

A recent report from 4G Americas (here) analyses the role of WiFi technology in current mobile data services, and the methods for overcoming the challenges that appear as a result of integration and mobility between cellular technologies and WiFi. As they state: “with smartphone adoption continuing to rise and the increasing prevalence of bandwidth-intensive services such as streaming video, the limited licensed spectrum resources of existing cellular networks are as constrained as ever. Wi-Fi, and its associated unlicensed spectrum, presents an attractive option for mobile operators – but improved Wi-Fi/cellular network interworking is needed for carriers to make optimal use of Wi-Fi.”


The so-called interworking between traditional mobile access technologies and WiFi networks must be seamless and transparent to the end users. Service continuity must be assured when a subscriber moves, for example, from 4G/LTE coverage to a WiFi-covered zone and back, using methods such as automatic offload policies. Different methods are currently used for this interworking, such as session continuity, client-based mobility, and network-based mobility. One of the most popular and widely accepted, also standardized by the 3GPP, is the network-based Access Network Discovery and Selection Function (ANDSF), which is already supported by most WiFi devices and network elements, including Policy Managers and specific network gateways. Other innovations address the seamless interworking issues in standards like Hotspot 2.0 and seamless SIM-based authentication.
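In essence, ANDSF lets the operator push prioritized selection rules to the device, which the device evaluates against its current context. The sketch below is a loose illustration of that idea, not the actual 3GPP policy format; the rule fields, locations, and thresholds are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    """One ANDSF-style rule: a validity area plus an ordered access preference.
    Lower priority value wins, as in prioritized operator policies."""
    priority: int
    location: str              # illustrative validity area, e.g. "office"
    preferred_access: list     # access technologies in order of preference

def select_access(rules, current_location, available_access):
    """Return the preferred access from the highest-priority rule that is
    valid at the current location and offers an available technology."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.location == current_location:
            for access in rule.preferred_access:
                if access in available_access:
                    return access
    return "LTE"  # fall back to cellular when no rule matches

rules = [
    PolicyRule(priority=1, location="office", preferred_access=["WiFi", "LTE"]),
    PolicyRule(priority=2, location="street", preferred_access=["LTE", "3G"]),
]
print(select_access(rules, "office", {"WiFi", "LTE"}))  # WiFi
print(select_access(rules, "street", {"LTE", "3G"}))    # LTE
```

The real ANDSF policies carry far richer validity conditions (time of day, PLMN, geo areas), but the prioritized-rule evaluation shown here is the core mechanism.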


As commented in my previous post “The top 10 fast facts you should know about LTE today”, 5G will be a combination of access technologies jointly fulfilling the requirements of the future. In these scenarios, seamless network selection and mobility become even more important, beyond the classical offload scenarios, and 4G Americas and vendors like Ericsson point out some particular issues: premature WiFi selection (shifting access technology while coverage is still too weak due to distance), unhealthy choices (offloading traffic to already overloaded systems), lower capabilities (offloading to less capable networks), and ping-pong effects (frequent access-technology shifts due to mobility, hurting the QoE).
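The ping-pong effect in particular is classically damped with hysteresis: only switch when the candidate network is better by a margin, sustained over several measurements. A minimal sketch of that idea follows; the margin and dwell values are purely illustrative, not taken from any standard or vendor.

```python
class HysteresisSelector:
    """Recommend an access switch only when the candidate network beats the
    current one by a signal margin (dB) for several consecutive samples,
    damping rapid back-and-forth (ping-pong) switching."""

    def __init__(self, margin_db=6, dwell_samples=3):
        self.margin_db = margin_db
        self.dwell_samples = dwell_samples
        self.better_count = 0

    def update(self, current_rssi_dbm, candidate_rssi_dbm):
        """Feed one measurement pair; return True when a switch is warranted."""
        if candidate_rssi_dbm >= current_rssi_dbm + self.margin_db:
            self.better_count += 1
        else:
            self.better_count = 0  # streak broken: start counting again
        return self.better_count >= self.dwell_samples

sel = HysteresisSelector(margin_db=6, dwell_samples=3)
samples = [(-80, -72), (-80, -71), (-80, -70), (-80, -69)]
decisions = [sel.update(c, w) for c, w in samples]
print(decisions)  # [False, False, True, True]
```

A single strong-but-brief WiFi reading never triggers a switch here, which is exactly the behaviour that prevents ping-pong at cell edges.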

A. Rodriguez

My hands-on experience with NFV for the EPC


I have written many times about the Network Functions Virtualization (NFV) and Software Defined Networks (SDN) revolutions for telecom operators, but last week I had the chance to attend a hands-on workshop with some of the most advanced vendors in the core-network virtualization field. I was able to test the products, ask all the questions I wanted, and get a real feel for the vendors’ and operators’ opinions on the subject. Despite the trip to Silicon Valley in sunny California, the beautiful San Francisco sights, and the unavoidable visits to the technology giants (e.g. Google in Mountain View CA, Apple in Cupertino CA, Oracle in Redwood Shores CA), my objective was to do a reality check on the NFV trend, which I will try to share with you.

What is ON with NFV:

The advantages of NFV for the CSPs are obvious. As previously commented in my article “The Network Functions Virtualization, a different kind of animal”, they include: using COTS hardware, flexible automatic scaling and HA based on software, licensing-cost reduction as a consequence of unified software domains, signalling-load reduction, and pure IT-style software-based maintenance and operation, among others. The operators are all well aware of this, either on their own initiative or because of the NFV/SDN vendors’ sales efforts, and that is why most of them are researching the technology and have already run trials (e.g. Telefonica, AT&T, NTT, Vodafone, Verizon, Orange, Optus, Telecom Italia, T-Mobile, to name a few I know of).

According to the information seen these days, the release of the ETSI ISG standards for NFV will most likely happen around October 2014, and this should unify the different approaches in the market today. In the meantime, vendors seem to be taking different paths: virtualizing the current core network nodes one by one (e.g. virtual S-GW, virtual P-GW, virtual MME), or virtualizing the functions required in the core (e.g. virtual attach & register, database, bearer handling, policy). If you think NFV for the core, or the Evolved Packet Core (EPC), is moving slowly and that tier-1 operators will wait years to test these technologies, you had better think again. Many products are available now, and some mavericks in the industry are already betting hard on the change.

In terms of actual products, these already deliver some of the promises mentioned. I was able to see software-based virtual EPCs running and handling test traffic with functionality equivalent to the traditional core, while reducing the signalling messages and showing impressive flexibility in flow logic and scaling. I also saw OpenStack-based orchestration working, and APIs connecting to the operators’ OSS/BSS. Some HA capabilities are also quite innovative, such as methods for managing the SCTP flows when one virtual machine goes down and another takes over. All of this was running on standard blades, or bare metal, at a fraction of the cost of the current traditional solutions.
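The automatic scaling I saw was essentially a control loop over load metrics: when the signalling load per instance crosses a threshold, the orchestrator spins VMs up or down. The sketch below illustrates that decision logic only; the thresholds, TPS figures, and HA minimum are my own illustrative choices, not any vendor’s values.

```python
def target_instances(total_tps, instances, scale_out_tps=8000,
                     scale_in_tps=3000, min_instances=2):
    """Decide how many VM instances a virtualized core function should run,
    given the total signalling load (TPS) and per-instance thresholds."""
    avg_tps = total_tps / instances
    if avg_tps > scale_out_tps:
        # scale out: enough instances so average load fits under the threshold
        return max(instances, -(-total_tps // scale_out_tps))  # ceiling division
    if avg_tps < scale_in_tps and instances > min_instances:
        # scale in gradually, but never below the HA minimum
        return max(min_instances, instances - 1)
    return instances  # load sits in the deadband: no change

print(target_instances(total_tps=40000, instances=4))  # overloaded -> 5
print(target_instances(total_tps=4000, instances=4))   # underused  -> 3
```

Real orchestrators add cooldown timers and health checks around this loop so that a transient burst does not trigger churn, but the threshold-plus-deadband core is the same.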

What is OFF with NFV:

As you would expect, the current NFV solutions are not all roses. The bad news is the lack of maturity seen in most of the solutions, typical of new and revolutionary technologies.

Automatic scaling is not yet mastered, and neither are the management and monitoring capabilities. Some solutions still cannot match the performance of traditional cores when activating Deep Packet Inspection (DPI) up to layer 7, though this is now being optimized with virtual DPIs. Challenges also appear when handling distributed NoSQL databases for things like HA. Standards support is not complete either; most solutions do not yet cover 3GPP Release 12, to name one example. The policy and charging features are very limited, often relying on external solutions, which potentially offsets the improved performance. And there seems to be a lack of security features in the products, among other limitations.

These challenges, combined with the fact that a new EPC represents additional costs (no operator intends to fully replace its current core network yet), the fear of the mentality change required across the carrier’s departments, and the lack of knowledge of NFV/SDN details and possible use cases, are currently blocking adoption of the technology. An interesting article on this is available at Light Reading (here), and it reflects what I sensed from some operators in the field.


What is coming for NFV:

Luckily for us, some intelligent carriers are solving those challenges by envisioning the future today. Some operators in the US are considering interesting use cases, such as portable EPCs for special events in highly congested areas (imagine installing a virtual core network next to the radios around the stadium on Super Bowl day, reducing congestion and improving the QoE); they are already testing this as you read this article. Other carriers in the UK and Japan are considering dedicated core networks for M2M-type traffic based on NFV. The NFV start-ups, including vendors like Cyan, Connectem, and Affirmed, among many others, are improving their products, making them more robust and solving the challenges faced. Some big vendors, including Ericsson, Juniper, and Alcatel-Lucent, among many others, are also perfecting their NFV offerings to enter the game.

As soon as we start seeing production deployments in the field (and judging by what I saw, I anticipate this will happen very soon), other operators will join the trend and learn from the competition. This is the future of telecoms.

A. Rodriguez

Behind the operators’ technical scenes on the new iPhones, new Android, or the new Microsoft deal


As we get closer to Apple’s traditional yearly event next week, where new iPhone devices (most likely an iPhone 5S, an iPhone 5C, and possibly a new iPad?) are to be announced (here), Google has announced in parallel the name of its new Android OS version (Android 4.4 KitKat) (here). If you live in this world, you should also know by now that Microsoft has bought Nokia’s mobile phone division, per this week’s announcement (here). The battle of mobile devices and OSes is as interesting as in any year of the last decade, and it has implications for everyone involved in the industry, at every possible layer: the subscribers, of course, but also the Communication Service Providers (CSPs), or operators. Whether you work for an operator, a vendor, or a consultancy, or are just a technology fan, you should know off the top of your head the current market share of each mobile OS, or the split between the Android versions installed on handsets, among other stats and facts. I give you a few below to feed your knowledge hunger.

[Charts: mobile OS market share, mobile device share, and Android version distribution]

A few years back, when Blackberry was booming in the smartphone market with its then-innovative products, operators learned many technical lessons the hard way. Network engineers, until then focused only on delivering enough bandwidth to meet the subscribers’ traffic demand, watched the push messages and always-communicating nature of the Blackberries boost the number of sessions established in the networks while the same PDP context remained established. This increased the Transactions Per Second (TPS) in the signalling plane of their network elements to values never seen before, leading to service downtimes and traffic outages, and forcing a huge change in the scaling and sizing paradigms and methods for the networks. The mass introduction of Blackberry devices multiplied the TPS for the same traffic volume, and the technical effects were seen in other areas too: for example, after maintenance in the networks, all the devices would reconnect to the operator at the same time, producing transaction bursts, among many other examples that any operator network engineer from that time can surely provide. All of this translated into massive revenue losses, which are usually the main trigger for immediate changes in the operators’ methods. As the years passed and more and more devices with this same always-connected, always-communicating behaviour appeared (pretty much all smartphones today), the operators adjusted their systems and methods to ensure no such problems recurred: improving the sizing and scaling techniques, applying Policy Management and Enforcement tools (PCRF/PCEF), signalling routers, traffic-control agents, or simply adjusting profiles and timers for more efficient session handling, among other methods.
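One of the timer-level remedies for those post-maintenance reconnection bursts is simply spreading re-attach attempts over a randomized back-off window, so the signalling plane never sees the whole fleet at once. The toy simulation below illustrates the effect; the device counts and window lengths are purely illustrative.

```python
import random

def peak_tps(num_devices, window_s, seed=42):
    """Spread each device's re-attach uniformly over a back-off window of
    `window_s` seconds and return the worst-case attempts in any one second."""
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    per_second = [0] * window_s
    for _ in range(num_devices):
        per_second[rng.randrange(window_s)] += 1
    return max(per_second)

devices = 100_000
print(peak_tps(devices, window_s=1))    # no back-off: 100000 attach attempts in one second
print(peak_tps(devices, window_s=300))  # a 5-minute window flattens the burst to a few hundred
```

The same idea (randomized jitter before reconnecting) is what keeps synchronized client populations, from handsets to IoT fleets, from hammering a freshly restored network element.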

Similar challenges arise when OS updates become available for the smartphones, since a large share of subscribers tend to accept the update as soon as the notification is received, typically around the same time, again creating an unexpected traffic increase (this time in both bandwidth and TPS) in the networks. Similar situations occur during important events, like football World Cup finals, and most operators today scale and size their systems with these events in mind.

In other words, in today’s world the communication between an operator’s marketing and engineering teams is more important than ever. Announcements like the ones from Apple next week and the ones from Google must be closely monitored, as they represent new challenges and, do not get me wrong, also new business opportunities. For example, subscribers are now free to connect unlocked devices to the networks for browsing, or to select which OS to download and install on their handsets, and this modifies the traffic and usage patterns seen in the networks. We saw this with the introduction of e-books and tablets, and we will keep seeing it with new devices like smart watches, smart glasses, etc. This represents challenges in meeting the changing demands, but also new opportunities to monetize the new network-usage profiles. The role of Business Intelligence (BI) and analytics platforms is becoming critical. Extending these towards more intelligent models, such as predictive analytics covering network and system performance alongside the actual business indicators, is and will be key to the operators’ efficiency and the profitability of the telecom business.

A. Rodriguez