Improving Performance and Service Delivery Dominate Top Concerns in Hybrid Cloud

NETSCOUT and IEEE Cloud Computing Probe Cloud Architects


Most companies have begun migrating workloads to the cloud. But did you know that speeding up operations, optimizing costs, and improving performance and service delivery are the top goals of IT professionals?

That’s just one of several key findings of a new survey conducted by the IEEE Computer Society on behalf of NETSCOUT. The organization surveyed 303 IT professionals from January to March 2018, and found that more than half (56%) have already started migrating workloads to the cloud, while another 15% will start the migration process within the next 12 months.

When asked to identify the primary business goals for migrating to the cloud, the most common response was IT operations speed and agility, cited by 61% of respondents. That was followed by optimized costs (shifting CapEx to OpEx), at 56%, and improved performance and service delivery to business customers, mentioned by 49%.

The survey results confirm the importance of cloud migration to organizations and the widespread use of migration resources already in place.

The adoption of cloud services entered the mainstream several years ago with enterprises first adopting the cloud for software-as-a-service (SaaS) offerings such as CRM, and then moving workloads to cloud service provider (CSP) data centers to compute close to where their data is stored or to provide extra capacity and improved flexibility, says Cliff Grossner, senior research director and advisor, Cloud and Data Center Research Practice, at IHS Markit.

“In recent years, CSPs have developed specialized hardware, such as Google’s Tensor Processing Unit, for artificial intelligence and machine learning, and enterprises are migrating workloads to access specialized compute instances,” Grossner says.

The promise of the cloud has been unwavering since its inception, Grossner says: providing a means for enterprises to more quickly turn up or throttle back the IT infrastructure delivering applications, improve application performance with the latest infrastructure run by highly skilled personnel, and enable consistent functionality connecting cloud service provider data centers with on-premises data centers and end users across the globe via multi-clouds.

Security and compliance is the top challenge in adopting and managing hybrid cloud environments, cited by 62% of respondents. The next most common challenges are preventing data loss (51%) and minimizing service downtime (47%).

Security breaches and downtime are critical issues for enterprises, Grossner says, with significant consequences when they occur. “Enterprises have made significant investments in protecting their on-premises data centers and need to be certain they are not exposed to new unmitigated attack vectors when migrating to the cloud,” he says. “Until they feel off-premises public cloud security is on-par, many enterprises will opt to use a hybrid approach, keeping their sensitive data in private data centers.”

The top applications targeted for migration to the cloud are Web services (84% of respondents), and unified communications and collaboration (61%).

In-house developed Web services applications are a natural target for migrating to the cloud, Grossner says. “They are architected using a traditional three-tier architecture, where each tier can be scaled independently,” he says. “Many enterprises migrate Web servers to the cloud, allowing them to scale very rapidly as demand changes while keeping the back-end logic on premises.”
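The independent scaling Grossner describes can be made concrete with a small sketch: the web tier's replica count is derived from demand alone, while the on-premises back end is sized separately. The capacity figures and function names below are illustrative assumptions, not part of the survey or any NETSCOUT product:

```python
import math

def web_tier_replicas(requests_per_sec, capacity_per_replica=200, minimum=2):
    """Size only the cloud-hosted web tier for current demand.

    The per-replica capacity and the floor of two replicas are hypothetical.
    The application and database tiers are sized independently, which is
    what makes the three-tier split attractive for partial cloud migration.
    """
    return max(minimum, math.ceil(requests_per_sec / capacity_per_replica))
```

At 1,000 requests per second this yields five replicas; when demand falls away, the count drops back to the floor of two without touching the back-end tiers.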

Unified communications and collaboration applications such as Microsoft Office 365 are often consumed as SaaS, Grossner says, as enterprises are offloading the burden of maintaining on-premises software that can demand frequent installations of updates and patches.

The applications less likely to be migrated to the cloud are human resources management and proprietary enterprise applications, according to the survey.

Respondents were asked about their performance monitoring strategy for applications and workloads migrated to the cloud. Nearly 40% said the best strategy is to implement methods for pervasive visibility of traffic flows on-premises and in private and public clouds, while about one quarter said the best strategy is to conduct active synthetic service performance testing for SaaS environments.
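A minimal version of such an active synthetic test can be sketched as a timed probe plus a service-level check. The SLO thresholds and the percentile arithmetic here are illustrative assumptions, not the survey's or NETSCOUT's methodology:

```python
import time
import urllib.request

def timed_check(url, timeout=5.0):
    """One synthetic transaction: fetch the URL and time it."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except OSError:  # covers URLError/HTTPError and timeouts
        ok = False
    return ok, time.monotonic() - start

def evaluate(samples, latency_slo_s=0.5, success_slo=0.99):
    """Summarize (ok, latency) samples against hypothetical SLO targets."""
    latencies = sorted(lat for _, lat in samples)
    success_rate = sum(ok for ok, _ in samples) / len(samples)
    p95 = latencies[max(0, int(0.95 * len(latencies)) - 1)]
    return {
        "success_rate": success_rate,
        "p95_latency_s": p95,
        "meets_slo": success_rate >= success_slo and p95 <= latency_slo_s,
    }
```

In practice `timed_check` would run on a schedule against each SaaS endpoint, with the accumulated samples fed to `evaluate` at the end of every reporting window.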

For enterprises that have adopted a DevOps process for continuous delivery, monitoring application performance in the cloud is a key concern.

“DevOps teams run the risk of restricting the overall flow of the value stream to customers,” says Ron Lifton, senior solutions marketing manager at NetScout Systems. “Bytecode-based, server-centric application performance management technologies can’t keep pace with delivering innovation and new experiences quickly for customers.”

As such, moving applications to the cloud requires a DevOps transformation to reduce risk, Lifton says. “Risk is best managed by having more information; more importantly, the right information that comes from end-to-end system-level telemetry,” he says. “An application performance management solution for the hybrid cloud that uses smart data, powered by the acquisition and transformation of traffic flow data, provides system-level telemetry and unobstructed visibility anywhere along the service delivery path.”

Such a solution allows DevOps to fully understand the inherent complexities of application workloads in hybrid cloud environments and not compromise user experience, Lifton says.

When survey respondents were asked where they see gaps or shortcomings in maintaining the visibility they need to deliver cloud services before, during, and after migration, 45% cited a lack of correlation and situational awareness across disparate tools. Another shortcoming was cloud provider platforms not being sufficient to meet service assurance needs, cited by 40%.

“Previously you had to rely on incomplete data like bytecode instrumentation or piecemeal instrumentation because of APM limitations or narrowly focused platform-specific monitoring,” Lifton says. “Unfortunately, the continuous deployment pipeline is often hindered by APM or silo-specific monitoring challenges, putting operations at risk of becoming a bottleneck to deploying services in the cloud and negatively impacting customer experience.”

NETSCOUT offers Adaptive Service Intelligence (ASI) technology that allows IT organizations to acquire smart data and get visibility into the deepest parts of the network and applications, on-premises and in cloud environments.

“While the usual DevOps mantra is to accomplish more with fewer resources, using a system-level telemetry platform complements software development automation by accelerating deployments. Real-time and continuous monitoring of traffic flow data allows for a common situational awareness and an effective analytics feedback loop,” Lifton says.

Respondents were also asked to identify the key performance indicators for DevOps in hybrid cloud environments. Application reliability, availability, and responsiveness in production environments was the most common response (59%).
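That leading KPI triple can be given concrete, if simplified, definitions: availability from downtime in a reporting window, reliability as the success ratio, and responsiveness as the mean latency of successful requests. These formulas are illustrative assumptions, not how any particular tool computes them:

```python
def availability_pct(downtime_minutes, window_days=30):
    """Classic availability percentage for one reporting window."""
    total_minutes = window_days * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def reliability_and_responsiveness(events):
    """events: list of (ok: bool, latency_s: float) observations.

    Returns (reliability, responsiveness_s): the fraction of successful
    requests, and the mean latency of the successful ones.
    """
    successes = [lat for ok, lat in events if ok]
    reliability = len(successes) / len(events)
    responsiveness = sum(successes) / len(successes) if successes else float("inf")
    return reliability, responsiveness
```

For example, 43.2 minutes of downtime in a 30-day window corresponds to the familiar "three nines" (99.9%) availability figure.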

“Bytecode-based APM tools have a feedback loop constrained by server-centric application telemetry, and as a result DevOps have service delivery ‘blind spots’ in the production environment,” Lifton says. “This is a real issue for teams responsible for delivering specific app functions and who are on the front line to continuously deliver and support them.”

With migration to the cloud has come a paradigm shift from server-centric to workload-centric performance management, Lifton says. Application performance management has now tipped to operations because of cloud migration and the need for a superior feedback loop based on workload-centric system-level telemetry.

“As the continuous deployment pipeline grows, the burden of optimizing application performance and assuring service delivery increases for operations,” Lifton says. “IT professionals must spend more time and effort managing service complexity. If a function fails from a software perspective, it can be huge in terms of application performance degradation the customer will experience.”

Therefore, protecting the continuous deployment pipeline and assuring application reliability, availability, and responsiveness requires smart data to gain big-picture, infrastructure-wide visibility and an understanding of service dependencies in a dynamic environment.

You can learn more about NETSCOUT’s Smart Data-driven approach to DevOps here.