Red Hat announced its big data direction and solutions earlier this week, aiming at enterprise requirements for scalable and reliable infrastructure to run analytics workloads. The company also announced it would contribute the Red Hat Storage Hadoop plug-in to the Apache Hadoop open-source community, turning Red Hat Storage into a Hadoop-compatible file system for big data environments.
Red Hat says its big data infrastructure and application platforms are well suited for enterprises running open hybrid cloud environments.
Ashish Nadkarni, research director at IDC, said Red Hat was uniquely positioned to serve the enterprise big data market, which IDC expects to grow from US$6bn in 2011 to $23.8bn in 2016.
“Red Hat is one of the very few infrastructure providers that can deliver a comprehensive big data solution because of the breadth of its infrastructure solutions and application platforms for on-premises or cloud delivery models,” Nadkarni said.
Many enterprises use public cloud infrastructure, such as Amazon Web Services, for the development, proof-of-concept and pre-production phases of their big data projects. The workloads are then moved to private clouds to scale the analytics against larger production data sets. An open hybrid cloud environment lets enterprises transfer workloads from the public cloud into their private cloud without re-tooling their applications.
Red Hat is engaged in the open cloud community through participation in projects like OpenStack and OpenShift Origin.
The company offers multiple solutions for managing enterprise big data workloads. Its big data direction focuses on three primary areas: extending its product portfolio with enhanced enterprise-class infrastructure solutions, building out its application platforms, and partnering with data analytics vendors and integrators.
The company said most big data implementations run on Linux, and an enterprise version of the operating system is one of Red Hat's core products.
The aforementioned Red Hat Storage is built on Red Hat Enterprise Linux. In a big data deployment, Red Hat Storage Servers pool commodity servers into a cost-effective, scalable storage infrastructure.
The Red Hat Storage plug-in for Hadoop, which will make the storage system available to the Hadoop community later this year, promises to provide enterprise storage features while maintaining application programming interface (API) compatibility and local data access.
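To make the API-compatibility claim concrete, here is a minimal Java sketch of how a Hadoop application would read a file from such a Hadoop-compatible file system. The glusterfs:// scheme and the GlusterFileSystem implementation class are assumptions for illustration, not confirmed details of Red Hat's plug-in; the point is that code written against Hadoop's standard FileSystem API stays the same, with only the configuration and URI changing.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsCompatDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // A Hadoop-compatible file system registers its implementation class
        // under fs.<scheme>.impl; both names below are illustrative assumptions.
        conf.set("fs.glusterfs.impl",
                "org.apache.hadoop.fs.glusterfs.GlusterFileSystem");

        // Application code is identical to HDFS usage; only the URI differs.
        FileSystem fs = FileSystem.get(
                URI.create("glusterfs://storage-server:9000/"), conf);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/data/events.log"))))) {
            System.out.println(in.readLine());
        }
    }
}
```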
Red Hat Enterprise Virtualization is also integrated with Red Hat Storage, enabling virtual servers to access the shared storage pool that the storage system creates.
Finally, Red Hat JBoss Middleware enables the creation and integration of big data-driven applications that can interact with technologies like Hadoop or MongoDB.
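As a simplified illustration of that kind of data-driven application logic, the short Java sketch below queries MongoDB through the official Java driver (the 3.8+ client API is assumed, and the database and collection names are invented for the example). In a real deployment, logic like this would typically live inside a JBoss-hosted service rather than a standalone program.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MongoQueryDemo {
    public static void main(String[] args) {
        // Connection string, database and collection names are illustrative.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events = client
                    .getDatabase("analytics")
                    .getCollection("events");

            // Count the documents that match a simple filter.
            long errors = events.countDocuments(new Document("level", "error"));
            System.out.println("error events: " + errors);
        }
    }
}
```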