New York Hadoop User Group, September 2015 - Apache Drill
New York, NY
Monday, September 28, 2015
The New York Hadoop User Group is a community for those using or interested in Apache Hadoop, related projects, and big data in general.


Rethinking SQL for Big Data with Apache Drill

Jim Scott

Learn the basics of Apache Drill, from installing the tool to running your first query within minutes. The demo will show how to install and set up Drill instantly and start getting value out of data on your own computer. Drill can query data in any format and from many data sources, including JSON, CSV, HBase, and even MongoDB. The talk will also cover interesting use cases and code snippets for querying different data formats and data sources in a self-service fashion, without the pain of creating centralized schemas or metadata stores. This is a hands-on demo, so if you want to follow along, we will provide instructions for downloading and installing the tool prior to the meetup.
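As a taste of the self-service querying described above, a Drill query can run directly against a raw file with no schema defined up front. A minimal sketch, using Drill's built-in `dfs` storage plugin (the file path and column names below are hypothetical):

```sql
-- Query a local JSON file directly via Drill's dfs plugin;
-- no table definition, schema, or metadata store is required.
-- The path and the name/city fields are illustrative.
SELECT t.name, t.city
FROM dfs.`/tmp/customers.json` AS t
WHERE t.city = 'New York'
LIMIT 10;
```

Drill infers the structure from the data itself at query time, which is what makes the "no centralized schema" workflow in the talk possible.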


Jim Scott

Jim Scott drives enterprise architecture and strategy at MapR. He is a cofounder of the Chicago Hadoop Users Group and has helped build the Hadoop community in Chicago for the past four years. He has implemented Hadoop at three different companies, supporting a variety of enterprise use cases: managing points of interest for mapping applications, online transaction processing in advertising, and full data center monitoring and general data processing. Prior to MapR, Jim was SVP of Information Technology and Operations at SPINS, the leading provider of retail consumer insights, analytics reporting, and consulting services for the natural, organic, and specialty products industry. He also served as Lead Engineer/Architect at dotomi, one of the world's largest and most diversified digital marketing companies. Before dotomi, Jim held several architect positions with companies such as aircell, NAVTEQ, Classified Ventures, Innobean, Imagitek, and Dow Chemical, where his work with high-throughput computing was a precursor to more standardized big data concepts like Hadoop.