Explore the area of big data Application Performance Management (APM) and why enterprises need it. APM is not a new discipline, but it is a new best practice for big data: adopting an application-first approach to guarantee full-stack performance, maximize utilization of cluster resources, and minimize the TCO of the infrastructure. For architects, it means that the big data architecture has to be designed to meet new business needs for speed, reliability, and cost-effectiveness, as well as align with architecture standards for performance, scalability, and availability.
Evolution of APM
Big data is evolving from experimental projects into a mission-critical data platform offering a range of big data applications. Enterprises look to these big data applications (e.g., ETL offload, business intelligence, analytics, machine learning, and IoT) to drive strategic business value.
As big data applications move to production, performance expectations also need to be production-grade. The business needs answers in seconds, not hours; hardware and resources need to be continuously optimized for cost; and deadlines/SLAs need to be guaranteed. This means that APM needs to become a strategic component of a big data architecture in order to eliminate the risks and costs associated with poor performance, availability, and scalability.
Current Challenges
The fundamental problem is that the big data stack is complex because of its distributed nature: infrastructure, storage, and compute are spread across many layers, components, and heterogeneous technologies. This problem exists regardless of the specific architecture (i.e., traditional, streaming analytics, Lambda, Kappa, or Unified). From the perspective of the production big data platform and the applications that run on it, everything must run like clockwork: ETL jobs must happen at fixed intervals; users expect dashboards to be up to date in real time; user-facing data products must work constantly. But from the perspective of the underlying platform, the application is not an isolated job, but rather a set of processing steps threaded through the big data stack. For example, a fraud detection application (i.e., a data consumer) would comprise a chain of many systems, such as Spark SQL, Spark Streaming, HDFS, MapReduce, and Kafka, as well as many processing steps within each system. As a result, managing performance and utilization across all these layers is exponentially complex.
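To make that chain concrete, here is a minimal PySpark Structured Streaming sketch of one leg of such a pipeline: transactions are consumed from Kafka, aggregated with Spark SQL functions, and flagged windows are written to HDFS. The broker address, topic name, schema, and flagging threshold are all illustrative assumptions, not details of a real fraud system:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("fraud-detection").getOrCreate()

# Assumed transaction schema for illustration.
schema = (StructType()
          .add("card_id", StringType())
          .add("amount", DoubleType())
          .add("event_time", TimestampType()))

# Ingest transactions from Kafka (hypothetical broker and topic).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "transactions")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("t"))
          .select("t.*"))

# Toy rule: flag cards with unusually high spend per 1-minute window.
suspicious = (events
              .withWatermark("event_time", "5 minutes")
              .groupBy(window("event_time", "1 minute"), "card_id")
              .sum("amount")
              .filter(col("sum(amount)") > 10000))

# Persist flagged windows to HDFS for downstream review.
query = (suspicious.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "hdfs:///fraud/flags")
         .option("checkpointLocation", "hdfs:///fraud/checkpoints")
         .start())
```

Even in this toy version, a single "application" already spans a message bus, a streaming engine, a SQL optimizer, and a distributed filesystem; each hop is a separate place where performance can degrade.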
This complexity makes it very hard to implement Application Performance Management services that provide a single view to manage performance and utilization across the full stack. In particular, there is no rationalized instrumentation across the stack to enable a holistic approach to guaranteeing performance and maximizing utilization. Instead, performance and utilization information is scattered across disjointed metrics, buried in logs, or spread across performance monitoring/management tools that provide only an incomplete infrastructure view as opposed to a full-stack view.
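As a rough illustration of that fragmentation, the sketch below (assuming a Spark history server and a YARN ResourceManager at hypothetical addresses) pulls stage metrics and queue placement for the same application from two unrelated REST endpoints; correlating them into one view is left entirely to the operator:

```python
import requests

# The Spark REST API knows about stages and tasks...
SPARK_API = "http://spark-history:18080/api/v1"  # hypothetical host
app_id = "application_1700000000000_0042"        # illustrative app id

stages = requests.get(f"{SPARK_API}/applications/{app_id}/stages",
                      timeout=30).json()
# Stages with more than 10 minutes of cumulative task time.
slow = [s for s in stages if s.get("executorRunTime", 0) > 600_000]

# ...while the YARN ResourceManager knows about containers and queues.
YARN_API = "http://resourcemanager:8088/ws/v1"   # hypothetical host
app = requests.get(f"{YARN_API}/cluster/apps/{app_id}",
                   timeout=30).json()["app"]

# Stitching the two views together is manual work.
print(f"{app['name']} (queue={app['queue']}): {len(slow)} slow stages")
```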
Challenges for Architects
As a result, the process of planning, operationalizing, and scaling performance and utilization across applications, systems, and infrastructure is not production-ready. This challenge is called out in Gartner’s March 2017 Market Guide for Hadoop Operations Providers, which states: “scaling Hadoop from small, pilot projects to large-scale production clusters involves a steep learning curve in terms of operational know-how that many enterprises are unprepared for.”
The lack of production readiness spans multiple areas across business units, developers, and operations. At its core, it makes it impossible to implement a multi-tenant cluster model, where a small Ops team needs to support a large number of applications, business units, and blended workloads that mix SLA-bound jobs with ad-hoc data discovery. Ultimately, this hampers big data adoption and the realization of business value.
Architect’s Checklist
The architect should play a pivotal role in ensuring that the big data platform is designed for production: that it will meet the needs of the business within time and budget constraints, and that the architecture will adapt to new business needs as they evolve. The architect’s checklist can be used in planning, operationalizing, and scaling the big data platform in order to manage performance, utilization, and cost:
- What types of applications is the business trying to build and deploy?
- Will applications be SLA-bound or ad-hoc? How will workloads be prioritized cost-effectively? (A queue-routing sketch follows this checklist.)
- Which systems are best suited for the applications (e.g., Spark, Hadoop, Kafka)?
- Which architecture approach is best suited (e.g., Lambda or Kappa)? Will the cluster be on-premises, in the cloud, or hybrid?
- How many concurrent users must the same cluster support without running out of resources? How many applications need to run on the same cluster within 24 hours? How will throughput be optimized?
- How should storage be tiered?
- How many nodes will the cluster need? What infrastructure capabilities need to be in place to ensure scalability, low latency, and performance, including compute, storage, and network capabilities?
- What data governance policies need to be in place?
- How will dev, QA, and production be staged?
- How much will the cluster cost to run? How will the business be charged back?
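As one concrete example for the workload-prioritization question above, here is a minimal sketch of routing an SLA-bound Spark job to a dedicated YARN queue. The queue name is hypothetical; the queue itself would be defined by the cluster admin in the scheduler configuration (e.g., capacity-scheduler.xml for the Capacity Scheduler):

```python
from pyspark.sql import SparkSession

# Route an SLA-bound ETL job to a dedicated, guaranteed-capacity YARN queue.
# "sla_etl" is an assumed queue name for illustration.
spark = (SparkSession.builder
         .appName("nightly-etl")
         .config("spark.yarn.queue", "sla_etl")              # Spark-on-YARN property
         .config("spark.dynamicAllocation.enabled", "true")  # release idle executors
         .getOrCreate())

# ... job logic would go here ...

spark.stop()
```

The same routing can be done at submit time with spark-submit --queue sla_etl; ad-hoc discovery work would go to a separate, lower-guarantee queue so that it cannot starve SLA-bound jobs.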
The operationalizing phase can be broken into four stages. The staged approach helps to gradually shape and scale the successful implementation and ROI of big data applications.
For each stage, the following set of key questions applies:
- What are the SLAs for applications? How can they be guaranteed?
- What are the latency targets for applications? How will they be met?
- How will Ops be able to support business units and users in a multi-tenant cluster? How will developers be able to monitor applications in a self-service fashion? How will Ops troubleshoot issues?
- When do users typically log in and out? How frequently?
- Do different groups of users behave differently? How do activity profiles of users change over time?
- How do I track costs in a multi-tenant cluster? How do I assign them to projects, business units, users, applications, etc.? (A cost-aggregation sketch follows this list.)
- How will data governance policies be enforced?
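For the cost-tracking question, a minimal chargeback sketch might aggregate the per-application memorySeconds and vcoreSeconds that the YARN ResourceManager REST API reports, grouped by queue. The ResourceManager address and the unit rates below are assumptions for illustration:

```python
import requests
from collections import defaultdict

RM_URL = "http://resourcemanager:8088"  # assumed ResourceManager address

# Fetch finished applications from the YARN ResourceManager REST API.
resp = requests.get(f"{RM_URL}/ws/v1/cluster/apps",
                    params={"states": "FINISHED"}, timeout=30).json()
apps = (resp.get("apps") or {}).get("app", [])

# YARN reports memorySeconds (MB-seconds) and vcoreSeconds per application;
# aggregate them by queue as a rough chargeback proxy.
usage = defaultdict(lambda: {"mb_sec": 0, "vcore_sec": 0})
for app in apps:
    q = usage[app["queue"]]
    q["mb_sec"] += app["memorySeconds"]
    q["vcore_sec"] += app["vcoreSeconds"]

# Hypothetical unit rates; a real chargeback model would be negotiated
# with the business units sharing the cluster.
MB_SEC_RATE = 1e-9
VCORE_SEC_RATE = 1e-5
for queue, u in sorted(usage.items()):
    cost = u["mb_sec"] * MB_SEC_RATE + u["vcore_sec"] * VCORE_SEC_RATE
    print(f"{queue}: ~${cost:,.2f}")
```

Aggregating by the "user" field instead of "queue" would give a per-user view; either way, the raw usage numbers come straight from the platform, so the contentious part is only the rates.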
Conclusion
Big Data is hard. With so many new technologies and emerging layers, the big data stack is exceedingly complex. The only way to properly navigate this complexity is to take an application-centric approach. But this approach alone isn’t enough, as it’s difficult to get clear and consistent insight into the performance of your applications. APM is the answer.
To properly roll out APM, you’ll need to do thorough planning. Once you’ve made your way through the above checklists, you’ll be ready to execute APM and see better returns on your big data investments.