Are You Up to the Challenge of Software Defined Networking?

Today’s successful enterprise requires a ‘fast and flat’ network and ‘composable’ infrastructure that can provide the business agility to quickly spin up compute and storage resources to deliver applications wherever and whenever they are required. Software Defined Networking (SDN), combined with public cloud infrastructure and evolving orchestration tools, holds out just such a promise. The new VMware Cloud (VMC) running on AWS is a good example of such flexibility: VMC delivers a complete Software-Defined Data Center (SDDC) stack running VMware vSphere, VSAN and NSX on hardware inside Amazon data centers. And according to a 2017 survey of over 1,000 IT professionals, 85 percent of enterprises already have a multi-cloud strategy in place.

[Image: Multi-cloud strategies]

Yet the abstraction of storage, compute and networking, whether on premises or in the cloud, transforms the provisioning and delivery of business services and applications. The adoption of SDN and public cloud infrastructure will change how we configure and consume business services in ways we are only beginning to realize. Alongside the undeniable benefits, where are the ‘gotchas’ inherent in composable infrastructure? What might they be, and how do we anticipate the Software Defined Networking challenges that are bound to emerge?

"As more workloads move to the cloud, cybersecurity professionals are increasingly realizing the complications to protect these workloads. The top three security control challenges SOCs are struggling with are visibility into infrastructure security (43 percent), compliance (38 percent), and setting consistent security policies across cloud and on-premises environments (35 percent)." - 2018 Cloud Security Report, CyberSecurity Insiders

Spin Up Whenever and Wherever You Want, but…

The biggest challenge facing enterprises in the pursuit of truly cost-effective, high-performance and secure infrastructure is achieving continuous, end-to-end network visibility. A disruptive approach to the entire application provisioning, networking and infrastructure ecosystem naturally requires a different approach to system monitoring. Lack of seamless, real-time visibility makes it difficult (if not impossible) to assure applications, optimize performance or secure the infrastructure.

Just as business operations have become dependent on the availability of high-performance applications, service assurance monitoring has become increasingly complicated. Current monitoring solutions are fragmented and piecemeal, leading to different data sources, different levels of granularity and questions of relevance. What are the right Key Performance Indicators (KPIs)? For whom?

[Image: Complex network systems]

There is little, if any, automatic coordination between different monitoring groups and tool sets. Relying on server logs, on the flood of alerts and potential Indicators of Compromise (IoCs) from NextGen Firewalls or SIEMs, or on piecemeal byte-code instrumentation from current Application Performance Management (APM) tools provides neither complete nor continuous monitoring. Such a strategy only becomes riskier as the complexity of the network and the scale of enterprise applications increase. And monitoring workloads that can be readily distributed across hybrid architectures is only the beginning of the real visibility challenge.

Achieving complete and continuous visibility is complicated by the evolution of business applications themselves, from monolithic, server-centric code to transitory, distributed microservices. Existing APM tools focus by design on application inter-process communications within a server instance. Microservices, by contrast, communicate across networks using lightweight APIs, which itself adds more traffic to the network.
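
As a rough illustration of that inter-service traffic, the short sketch below (Python, with hypothetical service names and endpoint) shows one microservice calling another over a plain HTTP API. The call, its latency and its failure modes all live on the network, outside the reach of instrumentation that only watches inside a single server instance.

```python
# Minimal sketch (hypothetical service name and URL): an "orders" microservice calling
# an "inventory" microservice over a lightweight HTTP API. This traffic crosses the
# network and is invisible to in-process, server-centric APM instrumentation.
import time
import urllib.request

INVENTORY_URL = "http://inventory.internal:8080/v1/stock/12345"  # hypothetical endpoint

start = time.monotonic()
try:
    with urllib.request.urlopen(INVENTORY_URL, timeout=2) as resp:
        body = resp.read()
    print(f"inventory call: {len(body)} bytes in {(time.monotonic() - start) * 1000:.1f} ms")
except OSError as exc:
    # A timeout or connection error here may be a network issue, not an application bug.
    print(f"inventory call failed after {(time.monotonic() - start) * 1000:.1f} ms: {exc}")
```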

[Image: Microservices]

The ease of automatically spinning up virtual machines and applications running in temporary, largely opaque containers adds yet more layers of complexity. This model increases the risk of performance degradation or failure due to hard-to-pinpoint load and latency errors, communication problems, and logic or sheer scale creating timeout issues across extended infrastructure. Naturally, any application is dependent upon the reliability and performance of all its microservice components. The metaphor that comes to mind is the ‘weakest link in the chain’.
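
To make the ‘weakest link’ point concrete, here is a toy back-of-the-envelope sketch with entirely made-up hop latencies: every hop may look tolerable on its own, yet the chain as a whole exceeds the caller's end-to-end timeout. In production you only get a table like this if something is actually measuring every hop.

```python
# Toy illustration (made-up numbers): per-hop latencies along a chain of microservices
# add up, so a request can time out end to end even when most hops look healthy.
hop_latency_ms = {
    "edge-gateway": 12,
    "auth-service": 25,
    "orders-service": 40,
    "inventory-service": 180,  # one degraded hop drags down the whole chain
    "pricing-service": 30,
}
end_to_end_timeout_ms = 250

total = sum(hop_latency_ms.values())
print(f"end-to-end latency: {total} ms (caller timeout: {end_to_end_timeout_ms} ms)")
if total > end_to_end_timeout_ms:
    worst = max(hop_latency_ms, key=hop_latency_ms.get)
    print(f"request times out; slowest hop is {worst} at {hop_latency_ms[worst]} ms")
```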

The new architectures tend to result in networks segmented into East/West traffic (within an application) and North/South traffic (across VMs, across clouds, between infrastructures). Just as distributed workloads initiated by microservices are difficult to monitor with server-centric APM tools, North/South traffic presents its own set of visibility challenges.

Applications naturally need to access databases, and these communications now typically traverse the North/South network. A microservice, likely running in a container, could be doing database I/O across an extended network on premises, hosted by a third party, or in a public cloud. Many applications may be accessing that database, and any one business service (e.g., a customer service portal) could rely on multiple databases. Without end-to-end visibility across these networks, it becomes virtually impossible to identify problems and meet service SLAs.
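
As a simple sketch of what that spread looks like (hypothetical hosts and ports, standard library only), the snippet below times a bare TCP connect from a service to each of the databases a single business service might depend on. Even before any query is issued, much of the "database time" is really network time.

```python
# Minimal sketch (hypothetical hosts): one business service depending on databases that
# sit on-prem, at a third party and in a public cloud. A bare TCP connect already shows
# how much latency the North/South network contributes before any SQL runs.
import socket
import time

DATABASES = {
    "orders-db (on-prem)": ("db01.corp.example", 5432),
    "profile-db (third party)": ("profiles.partner.example", 5432),
    "catalog-db (public cloud)": ("catalog.cloud.example", 5432),
}

for name, (host, port) in DATABASES.items():
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=2):
            pass
        print(f"{name}: connect took {(time.monotonic() - start) * 1000:.1f} ms")
    except OSError as exc:
        print(f"{name}: unreachable after {(time.monotonic() - start) * 1000:.1f} ms ({exc})")
```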

[Image: East/West traffic]

There are other application dependencies (themselves not ‘containerizable’) that typically must traverse increasingly distributed and complex networks: for example, access to Active Directory (LDAP) or to Internet services such as DNS. With current monitoring solutions, these new SDN-enabled configurations can leave only limited windows into traffic and application behavior.
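
DNS is an easy one to illustrate. The sketch below (hypothetical names) simply times name resolution for a few service and directory endpoints; that latency is paid on every fresh lookup, and neither the application code nor a server-centric APM agent accounts for it.

```python
# Rough sketch: timing DNS resolution for a few (hypothetical) dependency names.
# Slow or failing lookups show up as application slowness with no application cause.
import socket
import time

NAMES = ["inventory.internal", "ldap.corp.example", "api.partner.example"]

for name in NAMES:
    start = time.monotonic()
    try:
        addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(name, None)})
        print(f"{name}: {addrs} in {(time.monotonic() - start) * 1000:.1f} ms")
    except socket.gaierror as exc:
        print(f"{name}: resolution failed after {(time.monotonic() - start) * 1000:.1f} ms ({exc})")
```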

Nailing down the real source of a problem becomes difficult and time-consuming. Point-solution tools might rule out potential areas of disruption but struggle to pinpoint the true source of the problem.

Overcoming Software Defined Network Challenges

Meaningful end-to-end visibility on today’s complex networks must be based on network traffic data. Only wire-data can serve as the real source of ‘truth’: insight that is complete and not constrained by application pathway or by diverse, hybrid infrastructure, whether internal or third-party. End-to-end wire-data visibility encompasses unobstructed views into the dependencies spanning the network, servers, service enablers, databases and applications.

It becomes especially challenging (and valuable) to accurately understand the user experience or to collect the critical data needed to assess, redesign (re-factor) and optimize applications and networks. Getting to actionable data requires going beyond packet instrumentation. Actionable intelligence on application behavior and service dependencies, and the ability to pinpoint the root cause of failures, require extracting smart data.

[Image: ASI Model Continuous Visibility]

Smart data enables an in-depth understanding of application and system performance issues that is independent of the source code. Smart data based on real traffic delivers real-time and historic telemetry of all system components, including physical and virtual networks, n-tier applications, workloads, protocols, servers, databases, users and devices.
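
This post does not spell out ASI's internals, so the following is purely a conceptual toy rather than the product: it sketches the general idea of reducing per-transaction observations taken from the wire into compact, per-service KPIs (counts, errors, response-time percentiles) that can be kept both in real time and historically.

```python
# Conceptual toy only (not NETSCOUT ASI). Illustrates the general "smart data" idea:
# reduce per-transaction wire observations to compact per-service KPIs.
from statistics import quantiles

# Hypothetical per-transaction records as observed on the wire:
# (service, response_time_ms, status_code)
observed = [
    ("inventory", 42, 200), ("inventory", 55, 200), ("inventory", 610, 503),
    ("pricing", 18, 200), ("pricing", 22, 200), ("pricing", 25, 200),
]

kpis = {}
for service, rt_ms, status in observed:
    entry = kpis.setdefault(service, {"count": 0, "errors": 0, "rts": []})
    entry["count"] += 1
    entry["errors"] += status >= 500
    entry["rts"].append(rt_ms)

for service, entry in kpis.items():
    p95 = quantiles(entry["rts"], n=20)[-1]  # rough 95th-percentile response time
    print(f"{service}: {entry['count']} txns, {entry['errors']} errors, p95 ~ {p95:.0f} ms")
```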

Augmenting VMCloud Visibility

The new VMware Cloud (VMC) exemplifies the flexibility of SDN. VMC allows networking and firewalling configurations to be done within a single framework. This makes life much easier for IT operations, as they don’t have to define one set of processes for on-prem configurations and issues and another for the public cloud, such as AWS. NETSCOUT’s Adaptive Service Intelligence™ (ASI) enabled smart data helps teams understand application dependencies, and hence what security policies should be established. It goes further, automatically and consistently providing the analytical framework and the context by which you can measure performance KPIs mapped to SLAs and security indicators mapped to cyber threats.
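
As a hypothetical illustration of the "dependencies inform policy" idea (this is not the VMC/NSX API and not ASI output, just a toy), observed service-to-service flows can be compared against the dependencies you expect, yielding a candidate allow-list plus a short list of flows worth reviewing before a policy is locked down.

```python
# Conceptual toy only: hypothetical services and flows, not a VMC/NSX or ASI interface.
# Observed flows (wire data) are compared with expected dependencies to draft a policy.
observed_flows = [
    ("web-frontend", "orders-service", 443),
    ("orders-service", "orders-db", 5432),
    ("orders-service", "inventory-service", 443),
    ("web-frontend", "orders-db", 5432),  # unexpected: frontend talking to the DB directly
]

expected_edges = {
    ("web-frontend", "orders-service"),
    ("orders-service", "orders-db"),
    ("orders-service", "inventory-service"),
}

for src, dst, port in sorted(set(observed_flows)):
    verdict = "allow" if (src, dst) in expected_edges else "review"
    print(f"{verdict:6}  {src} -> {dst} : {port}")
```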

With smart, continuous visibility you can know, and document, that you are meeting performance SLAs and security KPIs. Only by extending smart monitoring to all the traffic traversing the SDN, from microservices running in containers across private, public and hybrid environments, can the business:

  • Fully “illuminate” the data center so that all application and microservice communications to and from it can be monitored;
  • Understand all the changing application interdependencies that inform higher performance and a superior user experience;
  • Better baseline resource usage for more accurate, flexible and cost-effective infrastructure design and planning, whether in public cloud or hybrid environments;
  • Allow the accurate assessment, redesign (re-factoring) and optimization of existing applications.

Smart Visibility into Network, Applications, Dependencies, and Security

NETSCOUT’s vSTREAM allows you to illuminate your entire infrastructure: on-prem, private cloud and public cloud. The foundation for end-to-end visibility is smart data, powered by NETSCOUT’s Adaptive Service Intelligence™ (ASI) technology. With smart data, it is possible to analyze performance, traffic indicators, load and failures, as well as to offer contextual workflows to quickly triage and find the root cause of application performance degradations.

Wire-data is the foundation of NETSCOUT’s smart data: highly scalable metadata that delivers real-time and historic telemetry of all system components, including physical and virtual networks, n-tier applications, workloads, protocols, servers, databases, users and devices. Since every action and transaction is encapsulated in the wire-data that traverses hybrid cloud and multi-cloud environments, it offers the best vantage point for end-to-end visibility. Moreover, smart data based on wire-data enables an in-depth understanding of application and system performance issues that is independent of the source code and requires no agents or byte-code instrumentation.

This Blog Post was authored by Ray Krug and Arabella Hallawell.