Realizing the Full Value of DevOps
The ultimate business promise of DevOps, together with agile development and continuous deployment, is faster IT workflow and applications that deliver stronger customer engagement. DevOps is not only a process but a culture, one that ties together workflows across development and operations to respond quickly to business demands. Not surprisingly, adoption of the DevOps model continues to grow within the enterprise. Puppet and DORA’s sixth annual “2017 State of DevOps Report” shows a steady increase in respondents who work on DevOps teams, from 16% in 2014 to 27% in 2017. Virtually every enterprise is at least exploring DevOps, whether in specific lines of business, digital services groups or centers of excellence.
Yet significant challenges remain to realizing the full business value of DevOps. The greater complexity and scale inherent in modern enterprise applications put reliability and usability at risk.
In his seminal book “The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win,” Gene Kim described The Three Ways of DevOps: understand the entire system and increase flow; shorten and amplify feedback loops; and, based on that data, learn and improve. It might be said that businesses are only in the early stages of The Three Ways. They are building new systems (spanning refactored or lift-and-shift applications, microservices and hybrid cloud environments) and increasing flow (more, and more frequent, deployments), but organizations have yet to establish the complete system visibility needed to assure application availability and performance. As a result, DevOps teams struggle to achieve meaningful feedback loops and reduce mean time to know (MTTK). This lack of visibility and failure to attain timely insights are, in turn, restricting business growth.
The Current Landscape
What empowers the DevOps transformation, from an IT organization's point of view, is the ability to fail fast, fail often, and learn. When DevOps is adopted to its full capability, the business can not only meet customer needs but also deliver differentiation. It means Dev must increase the frequency of releases and Ops must be more responsive. It means moving from a monolithic code base to microservices to deliver application features. And it means going beyond byte code instrumentation to using smart data for system-level telemetry and situational awareness, so everything runs reliably and confidently on hybrid cloud or multi-cloud infrastructure.
At the same time, undeniable cost efficiencies continue to drive the adoption of hybrid cloud and multi-cloud infrastructures. According to a 2017 survey of over 1,000 IT professionals:
- 85 percent of enterprises have a multi-cloud strategy, up from 82 percent in 2016;
- Cloud users are running applications in an average of 1.8 public clouds and 2.3 private clouds;
- And they are experimenting with an additional 1.8 public clouds and 2.1 private clouds.
Blind Spots – Increasing Risks
Application performance management (APM) tools designed for byte code, server-centric applications have not evolved to support dynamic microservices architectures and workloads running in the cloud. With a microservices architecture, an application is built as independent components, each running an application process as a service. These services communicate through well-defined interfaces using lightweight APIs. Existing APM tools, by design, focus on application inter-process communications within a server instance. This limitation creates critical ‘blind spots’ along the service delivery path spanning data centers and clouds. Combine inadequate visibility (limited by silo-specific tools and data) with a continued explosion in dependencies, and it is only a matter of time before DevOps teams hit a wall when trying to assure application availability and performance, and maintain a delightful customer experience.
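To make the microservices pattern described above concrete, here is a minimal sketch of two independent "services" communicating over a lightweight HTTP API. The inventory service, its endpoint and its payload are invented for illustration, not taken from any real system:

```python
# Minimal sketch: two processes ("services") talking over a lightweight HTTP API.
# The inventory service, endpoint and payload are hypothetical examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each service runs its own process and exposes a well-defined interface.
        body = json.dumps({"sku": "A-100", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging in the sketch
        pass

def start_inventory_service():
    # Port 0 asks the OS for an ephemeral port.
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_stock(port, timeout=2.0):
    # A second service (say, order processing) calls inventory over HTTP.
    with urlopen(f"http://127.0.0.1:{port}/stock/A-100", timeout=timeout) as resp:
        return json.loads(resp.read())

server = start_inventory_service()
print(check_stock(server.server_port))  # -> {'sku': 'A-100', 'in_stock': 42}
server.shutdown()
```

Note that the call crosses a process (and in production, a network) boundary, which is exactly where server-centric APM instrumentation loses sight of the transaction.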
Microservices add more traffic and increase the risk of application performance degradation due to:
- Load, latency and errors
- Communication problems
- Scale or logic creating timeout issues
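As a hypothetical sketch of the timeout problem in the list above, an inter-service call can be bounded so that a slow or failed dependency degrades gracefully rather than stalling the caller (the URL and fallback value are invented for the example):

```python
# Sketch: guarding an inter-service call against latency, errors and timeouts.
# The service URL and the empty-list fallback are hypothetical.
import socket
from urllib.error import URLError
from urllib.request import urlopen

def get_recommendations(url, timeout=0.5):
    """Call a downstream service, but never let its latency stall the caller."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (URLError, socket.timeout):
        # Connection errors, HTTP 5xx responses (HTTPError is a URLError
        # subclass) and read timeouts all land here; degrade gracefully
        # instead of propagating the failure upstream.
        return b"[]"  # empty fallback payload

# A call to an unreachable service fails fast and falls back:
print(get_recommendations("http://127.0.0.1:9/recs"))  # -> b'[]'
```

The catch, as the surrounding text argues, is that these silent fallbacks are invisible to server-centric tools unless the traffic itself is observed.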
Relying on server logs, or on piecemeal byte code instrumentation from current APM tools, will only get riskier as the complexity (the ‘number of moving parts’) and scale of enterprise applications increase. The interdependencies of microservices can result in cascading performance or availability issues that are difficult to anticipate, let alone trace to a root cause. The business risks from these ‘blind spots’ are aggravated as organizations migrate more and more workloads to the cloud (public or private), deploy more applications in short-lived containers, and the applications themselves become more complex and interdependent.
As the continuous deployment pipeline grows and the frequency of releases accelerates (a fundamental tenet of The Three Ways), IT professionals must spend more time and effort managing microservices complexity. A failed microservice may be small with respect to its code, but its impact on the application performance experienced by the end user can be huge. Without system-level telemetry and common situational awareness, DevOps teams run the risk of becoming a bottleneck that restricts the flow of high-performance, reliable services to lines of business, and ultimately to customers.
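One common defense against the cascading failures described above is a circuit breaker: after repeated failures, calls to the unhealthy dependency are short-circuited so its degradation does not propagate. This is a minimal illustrative sketch, with thresholds and names invented for the example rather than taken from any specific library:

```python
# Sketch of a minimal circuit breaker. After max_failures consecutive
# failures the breaker "opens" and fails fast, so one slow or broken
# microservice does not cascade through its callers.
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, fallback):
        if self.failures >= self.max_failures:
            return fallback()          # open: fail fast, skip the downstream call
        try:
            result = fn()
            self.failures = 0          # closed: a success resets the counter
            return result
        except Exception:
            self.failures += 1
            return fallback()

breaker = CircuitBreaker(max_failures=3)

def flaky_service():
    raise TimeoutError("downstream too slow")

for _ in range(5):
    print(breaker.call(flaky_service, fallback=lambda: "cached response"))
# prints "cached response" five times; after the third failure the
# downstream service is no longer called at all
```

A production breaker would also re-close after a cool-down period; the point of the sketch is that without telemetry, an open breaker silently hides the very degradation DevOps teams need to see.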
For IT organizations embracing DevOps, even more problematic are byte code-based, server-centric tools with restricted visibility into hybrid cloud and multi-cloud environments. Getting a clear, direct line of sight into application behavior and service dependencies, and pinpointing the root cause of failures, is not only difficult in real time; it also becomes challenging to collect the critical data needed to accurately understand user experience or to assess, redesign (re-factor) and optimize applications.
IT – DevOps or not – will simply find it increasingly difficult to operate effectively and efficiently. It will become harder and harder to:
- Increase resources to adequately monitor the development/deployment/operations stack;
- Pinpoint/fix dynamic service-delivery bottlenecks, minimize disruption, and reduce (or maintain) MTTR;
- Cost effectively deploy reliable applications across physical, virtual, hybrid and multi-cloud environments;
- Ultimately, deliver higher-quality, better-performing applications and services.
In short, byte code, server-centric application performance management tools can’t keep up with the pace of change and the DevOps promise. Transforming DevOps requires system-level telemetry and continuous learning and improvement, using smart data and smarter analytics.
The Value of Visibility
The burden of assuring application performance and responsiveness in a more complex and dynamic environment is shared among Dev, QA and Ops. These teams require pervasive visibility, integrated into their IT best practices, to become more responsive to customers and business demands. That visibility needs to be based on system-level telemetry to empower the DevOps organization to be more agile and efficient and to help the business achieve market differentiation. Visibility encompasses telemetry of load, latency and failure metrics for application and service delivery systems, along with unobstructed views into the dependencies spanning the network, servers, service enablers, databases and applications. This insight accelerates the continuous planning, delivery, integration, testing, and deployment pipeline.
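The load, latency and failure metrics mentioned above can be sketched as a tiny in-process aggregator. This is an illustrative toy, not NETSCOUT's implementation; the class and method names are invented for the example:

```python
# Sketch: collecting load, latency and failure telemetry per service.
# A toy in-process aggregator; names and structure are illustrative only.
import time
from collections import defaultdict

class Telemetry:
    def __init__(self):
        self.requests = defaultdict(int)    # load: request count per service
        self.failures = defaultdict(int)    # failure count per service
        self.latencies = defaultdict(list)  # latency samples in seconds

    def observe(self, service, fn):
        """Run fn on behalf of a service, recording load, latency and failures."""
        self.requests[service] += 1
        start = time.perf_counter()
        try:
            return fn()
        except Exception:
            self.failures[service] += 1
            raise
        finally:
            self.latencies[service].append(time.perf_counter() - start)

    def p95_latency(self, service):
        # Nearest-rank 95th percentile over the recorded samples.
        samples = sorted(self.latencies[service])
        return samples[int(0.95 * (len(samples) - 1))]

telemetry = Telemetry()
telemetry.observe("checkout", lambda: time.sleep(0.01))
print(telemetry.requests["checkout"])  # -> 1
```

Real telemetry pipelines derive these same three signals continuously and system-wide; the value, as the text argues, comes from correlating them across every tier of the delivery path rather than within one process.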
The foundation for system-level telemetry is smart data, powered by NETSCOUT’s Adaptive Service Intelligence™ (ASI) technology. With smart data, it is possible to analyze performance, traffic indicators, load and failures, and to offer contextual workflows that quickly triage and find the root cause of application performance degradations. Wire data is the foundation of NETSCOUT’s smart data: highly scalable metadata that delivers real-time and historical telemetry on all system components, including physical and virtual networks, n-tier applications, workloads, protocols, servers, databases, users, and devices. Since every action and transaction that traverses hybrid cloud and multi-cloud environments is encapsulated in wire data, it offers the best vantage point for end-to-end visibility. Moreover, smart data based on wire data enables an in-depth understanding of application and system performance issues that is independent of source code, with no need for agents or byte code instrumentation. NETSCOUT has the only highly scalable DevOps performance monitoring solution that continuously collects, normalizes, correlates, organizes, and analyzes large volumes of wire data in a system-contextual fashion.
See how NETSCOUT’s Smart Data, powered by ASI technology, allows DevOps and security teams to get powerful insights when they need them most, move faster when tackling big problems, and continuously demonstrate agility when lines of business demand an outstanding customer experience in a complex and changing digital environment.