Large enterprises usually accumulate fragmented data in department-centric silos or customer service systems. When cross-functional data are required for advanced analytics or consolidated storage, traditional enterprise systems fail to deliver because they are not well integrated. Apache Hadoop addresses this problem: silo-stored data are continuously moved into a central, multi-server Hadoop cluster capable of storing terabytes or petabytes of business data. Additionally, the MapR distribution of Hadoop can greatly enhance the processing of a wide variety of data types at high speed. In other words, Hadoop with a MapR implementation truly addresses the big data characteristics of volume, velocity, variety, and veracity.
For some years now, HP has been facing steep competition from other hardware vendors. While contemplating more efficient methods for big data analytics across their six data centers, and for enhanced customer experiences, they conceived of a data lake that would integrate the data funnels from different silos into a single repository for advanced analytics. HP's basic mission in redesigning their enterprise data management was to provide integrated big data analytics across the enterprise network and to enhance the consumer engagement experience in order to increase sales. HP narrowed their choice of Hadoop vendor down to MapR because they found it was the only Hadoop distribution that offered scalability, low downtime, easy maintenance, and knowledgeable customer support. Keith Dornbusch, Manager of HP's Big Data Services, points out:
"The MapR technology was top-notch, particularly their performance and high availability features. Plus, our working relationship with the vendor was excellent. MapR's responsiveness, service, and support were second to none. And MapR addresses our long-term needs by being a reliable Hadoop distribution geared towards the enterprise."
Hadoop with MapR implementation at HP
The enterprise data management architecture has been built around the concept of a data lake: a single, centralized platform that stores data derived from different customer touch points and internal departments. Paul Westerman, Senior Director of IT Big Data Solutions at HP, says, "The real value is breaking down silos and bringing the data together. With Hadoop, you can drop files in and the data is there. It's a different type of development environment." The HP Hadoop clusters are distributed over 320 servers with dual Intel processors. Each server can store about 20 terabytes of data and provides 128 gigabytes of RAM, making the environment ideal for big data storage and processing.
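Taking the published figures at face value (320 servers, roughly 20 TB of disk and 128 GB of RAM per server), a quick back-of-the-envelope calculation gives the aggregate capacity of such a cluster. This is an illustrative sketch based only on the numbers quoted above:

```python
# Back-of-the-envelope sizing for the HP Hadoop cluster described above.
# Per-server figures come from the case study; totals are simple products.

SERVERS = 320
DISK_TB_PER_SERVER = 20      # ~20 TB of storage per server
RAM_GB_PER_SERVER = 128      # 128 GB of RAM per server

total_disk_pb = SERVERS * DISK_TB_PER_SERVER / 1000   # terabytes -> petabytes
total_ram_tb = SERVERS * RAM_GB_PER_SERVER / 1000     # gigabytes -> terabytes

print(f"Raw disk across the cluster: {total_disk_pb:.1f} PB")
print(f"Aggregate RAM across the cluster: {total_ram_tb:.1f} TB")
```

Note that this is raw capacity: Hadoop distributions typically replicate each block (three copies by default), so the usable space would be roughly a third of the raw figure.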
The benefits of HPs Hadoop environment
Flexible file formats
MapR ensures data of any type can be moved from silos to Hadoop for quick processing.
Scalability
With data growing at a rate of about 50% every year, HP is well positioned to scale up their data storage thanks to Hadoop.
Zero downtime
Built-in disaster recovery and fail-over between clusters ensure high uptime.
Cost benefits
Hadoop provides massive data storage at low cost, and MapR ensures quick processing, which saves further cost.
Monitoring customer activity through telemetry data
This machine-generated data can monitor customer experience with products and send feedback back to HP.