As the mobile communications industry makes its way to Barcelona for the 2015 GSMA Mobile World Congress, it’s hard to believe that an industry serving almost as many cellular phones as there are people on the planet had its humble beginnings on the pages of The Saturday Evening Post on July 28, 1945. Back then, in an interview titled ‘Phone Me by Air,’ FCC commissioner E. K. Jett discussed the future of wireless communications and described frequency reuse within a small area, the main element of cellular radio.
While the concept of spectrum reuse endures, our modern cellular network looks nothing like the Mobile Telephone System (MTS) launched a year later, on June 17, 1946. Today, 26% of all mobile connections are “smart” connections, and that share is forecast to reach 54% by 2019.
‘Smart’ phones have become personal entertainment hubs providing voice, data and video services. The one commonality among these services is that they all traverse the network as IP packets – to the tune of 30.3 exabytes of data!
The challenge for mobile operators is understanding the performance of the modern IP networks that carry traffic and services to consumers reliably and consistently. That alone is challenging, but today’s consumer behaviors are also changing rapidly. What’s ‘hot’ today may quickly be replaced by some other trend tomorrow.
Service assurance in modern IP networks requires a new approach. Traditional workflow structures — session trace and decode, single subscriber views, manual processes and reactive approaches — will no longer work efficiently in today’s complex IP networks. Subscriber expectations are high and service providers need to develop and implement new, modern workflows that significantly reduce the time between problem discovery, fix and verification.
Today’s service delivery environments are very complex. Delivering a service requires multiple processes and network elements before the service can ever begin to flow to the customer. These processes include steps such as connecting to the network, getting routed to the correct servers, and authenticating with the network and service. Each of these steps can require multiple routers, switches and servers, all of which must work in unison to successfully enable a service. If any one device in the service creation chain fails, the entire service fails, even when that device’s only role is directing traffic.
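The dependency described above can be sketched in a few lines. This is a hypothetical model (the step names and `deliver_service` function are illustrative, not from any vendor): the service succeeds only if every element in the chain does, so a single failure anywhere, even in a box that merely directs traffic, fails the whole service.

```python
# Illustrative sketch of a service-creation chain: each step depends
# on the steps before it, so one failed element fails the service.

def deliver_service(steps):
    """Run each step in order; the service succeeds only if all do."""
    for name, check in steps:
        if not check():
            return f"service failed at: {name}"
    return "service delivered"

steps = [
    ("attach to network",   lambda: True),   # e.g. radio access
    ("route to app server", lambda: True),   # e.g. routers, switches
    ("authenticate",        lambda: False),  # one failing element...
]
print(deliver_service(steps))                # ...fails the entire service
```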
To successfully deploy and operate these new digital services, the network operations team must have visibility into the end-to-end service environment, and have that visibility in real time. To address the complexity of these IP networks, service providers are investing in solutions that provide early warning capabilities, that can scale to support millions of subscribers, and that can efficiently handle all IP services.
Gone are the days of knitting together point diagnostics, logs and system files in an attempt to get a “big picture” view of the network and services. Today’s monitoring systems are able to provide operators with an end-to-end, real-time view of network and service performance. This real-time view allows operators to see a failure or outage as it develops — much the way a smoke detector can provide early warning of a fire.
As a problem begins to develop, these new monitoring systems are able to proactively notify key personnel that issues are developing, alert them to the nature of the problem, and provide the forensic data required to address and resolve the issue.
To truly understand how a network or service is performing, there is only one place to get the real-time data required — the system needs to examine the actual packets that traverse the network, as they traverse the network. On the surface, this appears to be a daunting task, especially for a service provider whose network will handle billions of IP transactions. However, the key is in how the monitoring solution handles the volume of traffic.
There are two traditional approaches to handling packet flow data for performance, service and application monitoring: full packet storage and packet slicing.
Full packet storage requires the system to copy and save every packet that crosses the wire. This method is very inefficient: it retains payload data that provides no value for performance monitoring, and it demands a significant investment in storage.
Simple packet slicing is a “one-size-fits-all” approach: every packet is sliced at exactly the same point. As a result, useless details are sometimes retained while useful details are lost.
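The trade-off is easy to see in a toy sketch. The snap length and header sizes below are illustrative assumptions, not any vendor’s defaults: a packet with long headers loses useful header bytes past the cut, while a packet with short headers retains payload bytes that add nothing to performance monitoring.

```python
# "One-size-fits-all" slicing: keep the first N bytes of every packet,
# regardless of where that packet's headers actually end.

SLICE_LEN = 64  # fixed snap length (illustrative)

def slice_packet(packet: bytes, snap_len: int = SLICE_LEN) -> bytes:
    """Truncate every packet at the same fixed offset."""
    return packet[:snap_len]

# A packet whose headers run 72 bytes (e.g. IPv4 + TCP with options)
# loses its last 8 header bytes...
long_headers = bytes(72) + b"payload"
assert len(slice_packet(long_headers)) == 64

# ...while a packet with 40-byte headers keeps 24 bytes of payload
# that are useless for performance monitoring.
short_headers = bytes(40) + bytes(200)
assert len(slice_packet(short_headers)) == 64
```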
NetScout has developed an innovative third approach called Adaptive Session Intelligence (ASI). With NetScout’s ASI, service providers have access to the most efficient real-time traffic flow capture and analysis platform. NetScout allows an operator to capture and store only the packet metadata needed for market, operational and cyber intelligence and insight. This allows operators to understand subscriber behavior by device, location, service or community, and optimize resources based on that behavior. NetScout captures the signal and discards the noise, making traffic flow data practical and affordable for big data applications.
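NetScout does not publish ASI’s internals, but the general idea of metadata-only capture can be sketched generically. The `FlowRecord` type and the IPv4/TCP parsing below are illustrative assumptions: the headers are parsed for the fields an analyst needs, and the payload is never stored.

```python
import struct
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Flow metadata retained for analysis; the payload is discarded."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    total_bytes: int

def extract_metadata(ip_packet: bytes) -> FlowRecord:
    """Parse IPv4/TCP headers and keep only flow metadata."""
    ihl = (ip_packet[0] & 0x0F) * 4                  # IPv4 header length
    src = ".".join(str(b) for b in ip_packet[12:16])
    dst = ".".join(str(b) for b in ip_packet[16:20])
    sport, dport = struct.unpack("!HH", ip_packet[ihl:ihl + 4])
    return FlowRecord(src, dst, sport, dport, len(ip_packet))

# A 1040-byte packet collapses to a few dozen bytes of metadata.
pkt = (bytes([0x45]) + bytes(11)            # IPv4: version/IHL, etc.
       + bytes([10, 0, 0, 1])               # source address
       + bytes([192, 168, 1, 1])            # destination address
       + struct.pack("!HH", 443, 51000)     # TCP source/dest ports
       + bytes(16)                          # rest of TCP header
       + bytes(1000))                       # payload (discarded)
print(extract_metadata(pkt))
```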
Senior leaders from NetScout Systems will be in attendance at the Mobile World Congress 2015 in Barcelona, Spain, from March 2 to 5, at the Fira Gran Via, Hall 6, Booth #6C20, to demonstrate how ASI technology is allowing service providers the “Confidence to Innovate in an Untethered World.”