So, we did it again! Another rapidly growing open source project is now formally supported and packaged in the MapR Distribution including Apache Hadoop. This time the project is Apache Storm.
I must say, the Storm project is special to us, given that we were among the first to champion it two years ago. Our own Ted Dunning mentored the Storm community on its recent path to Apache Top Level Project status. Furthermore, Storm is associated with real-time processing—one of the core strengths of the MapR platform—which offers features such as a random read-write file system and the option to use an NFS-based spout. Not surprisingly, we already have customers running Storm on MapR in production.
So why did we include Apache Storm when MapR already supports Spark Streaming in its distribution? Simply put, it is about giving users deployment flexibility and freedom of choice. Until now, MapR has supported several SQL-on-Hadoop technologies, multiple NoSQL technologies, and multiple machine learning libraries, but offered only a single option for stream processing. We decided to change that and give users a choice for stream processing as well. MapR foresees strong traction and a range of use cases for streaming applications on Hadoop built with Storm, Spark Streaming, or both.
For those of you who are new to Hadoop, Apache Storm is a stream processing framework that lets you process continuous data streams in real time, so you can gain online insights from your data. We have published additional content on Storm that covers some of the use cases MapR customers have already implemented with it. In addition, there is a video that explains how Storm and Spark Streaming differ, and what you should consider before picking your technology.
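To make the processing model concrete: a Storm application is a topology of spouts (tuple sources) and bolts (tuple processors). The sketch below mimics that dataflow in plain Java with a hypothetical word-count pipeline; it is illustrative only and does not use the real Storm API (TopologyBuilder, IRichSpout, IRichBolt), which lives in the storm-core library.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the spout/bolt dataflow, NOT the actual Storm API.
public class StreamSketch {

    // A "spout" is a source of tuples. Here a fixed list of sentences
    // stands in for a live stream (which could, on MapR, be read from an
    // NFS-mounted file).
    static List<String> sentenceSpout() {
        return List.of("the cow jumped over the moon",
                       "the man went to the store");
    }

    // A "bolt" consumes tuples and emits results downstream. This one
    // splits each sentence into words and tallies a running count.
    static Map<String, Integer> wordCountBolt(List<String> sentences) {
        Map<String, Integer> counts = new HashMap<>();
        for (String sentence : sentences) {
            for (String word : sentence.split("\\s+")) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = wordCountBolt(sentenceSpout());
        System.out.println(counts.get("the"));
    }
}
```

In a real Storm topology, each spout and bolt runs as parallel tasks across the cluster and tuples flow between them continuously, rather than in a single batch pass as in this sketch.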
To sum it up, Apache Storm has moved out of MapR forum-based support and into mainstream product support. Storm (version 0.9.3) joins Spark Streaming on the MapR Distribution including Apache Hadoop, enabling new users as well as existing MapR customers to expand their stream processing use cases on Hadoop.