2nd Annual Global Big Data Conference
Santa Clara, CA
Monday, September 8 – Wednesday, September 10, 2014
The Global Big Data Conference is a fast-paced, vendor-agnostic, three-day event offering a technical overview of the Big Data landscape. It is targeted at both technical and non-technical people who want to understand the emerging world of Big Data, with a specific focus on Hadoop, NoSQL, and Machine Learning.


Having Problems Scaling Your SQL Database? Getting Started with HBase Application Development (Tutorial)

Sridhar Reddy

September 8, 2014 1:00pm-5:00pm

HBase lets you build big data applications that scale your database workloads. This tutorial will help you jump-start your HBase development. It starts with a quick overview of HBase, its data model, and its architecture, and then dives directly into code to show you how to build HBase applications. Along the way, we will also cover schema design and advanced concepts such as transactions.

This tutorial will cover: 

  • An introduction to the HBase data model and architecture
  • Setting up a Sandbox (a one-node cluster on your laptop) [hands-on]
  • Using the HBase shell to create tables and insert data [hands-on]
  • Using the basic Java API to perform CRUD operations on HBase tables [hands-on] (see the sketch after this list)
  • Understanding how data flows for writes and reads
  • Schema design concepts for rowkey design
  • Using advanced Java APIs to perform scans and transactions [hands-on]
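
To give a feel for the kind of code the tutorial works through, here is a minimal sketch of CRUD operations using the HBase 0.98-era Java client API. The table name (user_profiles), column family (cf), and rowkeys are hypothetical, and the sketch assumes an hbase-site.xml on the classpath pointing at your Sandbox:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseCrudSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
            HTable table = new HTable(conf, "user_profiles"); // hypothetical table with column family "cf"

            // Create/update: a Put writes one or more cells under a single rowkey
            Put put = new Put(Bytes.toBytes("user#1001"));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("email"), Bytes.toBytes("alice@example.com"));
            table.put(put);

            // Read: a Get fetches cells for one rowkey
            Result result = table.get(new Get(Bytes.toBytes("user#1001")));
            System.out.println(Bytes.toString(
                result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("email"))));

            // Scan: iterate rows in rowkey order over a start/stop key range
            ResultScanner scanner = table.getScanner(
                new Scan(Bytes.toBytes("user#"), Bytes.toBytes("user$")));
            for (Result row : scanner) {
                System.out.println(Bytes.toString(row.getRow()));
            }
            scanner.close();

            // Delete: remove all cells for a rowkey
            table.delete(new Delete(Bytes.toBytes("user#1001")));
            table.close();
        }
    }

Because HBase stores rows sorted by rowkey, the scan's start and stop keys define a contiguous range; this is why the rowkey design concepts covered in the schema design portion matter so much for read patterns.
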
Big Data Case Studies Across Industries and Functions

Nitin Bandugula

September 8, 2014 3:00pm

Hadoop is emerging as a source of competitive advantage as enterprises seek to leverage fast-growing data sources to increase revenue, decrease costs, and mitigate risk. In this session, hear about real-world applications from Fortune 500 enterprises as well as emerging Web 2.0 companies that have successfully leveraged this new data platform to gain major operational advantages, change how their organizations compete, and realize 10x performance gains at 1/10 the cost.

Using Apache Drill

Jim Scott

September 9, 2014 4:10pm

Apache Drill brings the power of standard ANSI SQL:2003 to your desktop and your clusters. It is like AWK for Hadoop. Drill supports querying schemaless systems such as HBase, Cassandra, and MongoDB, and standard JDBC and ODBC APIs let you use Drill from your own applications. Leveraging an efficient columnar storage format, an optimistic execution engine, and a cache-conscious memory layout, Apache Drill is blazing fast. Coordination, query planning, optimization, scheduling, and execution are all distributed across the nodes in a system to maximize parallelization. This presentation includes live demonstrations.
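
For the JDBC path mentioned above, a minimal sketch is shown below. It assumes a Drillbit running locally (for example, via drill-embedded) with the Drill JDBC driver on the classpath, and queries the sample employee.json data set that ships with Drill's classpath (cp) storage plugin; the exact connection URL depends on your deployment:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DrillJdbcSketch {
        public static void main(String[] args) throws Exception {
            // "zk=local" targets a local/embedded Drillbit; a cluster deployment
            // would instead point at its ZooKeeper quorum, e.g. jdbc:drill:zk=zk1:2181,zk2:2181
            try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT full_name, position_title FROM cp.`employee.json` LIMIT 5")) {
                while (rs.next()) {
                    System.out.println(rs.getString("full_name")
                        + " - " + rs.getString("position_title"));
                }
            }
        }
    }

Note that the query runs standard SQL directly against a JSON file with no schema declared up front, which is exactly the schemaless querying the session highlights.
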



Sridhar Reddy

Sridhar is a Director of Professional Services at MapR Technologies. He has over 20 years of experience working with Java and Java EE across many roles in the software development life cycle, including design, development, management, training, and technology evangelism. Prior to MapR, Sridhar managed the Java Platform development team at PayPal, where he led a team of Java developers building the next generation of the company's Java platform. Before PayPal, Sridhar spent over 10 years as a Technology Evangelist at Sun Microsystems, where he increased awareness and adoption of Java technology in the worldwide developer community. While at Sun, he also managed the JavaOne Hands-On Labs as well as Sun Tech Days, a worldwide developer conference. Sridhar holds a BS in Mechanical Engineering from Osmania University in India and an MS in Computer Science from the Florida Institute of Technology.

Nitin Bandugula

Nitin Bandugula, Sr. Product Manager at MapR, is responsible for strategy and go-to-market for MapR Worldwide Services, covering Professional Services, Education, and Support Services. Nitin draws on his experience in engineering, management consulting, and marketing to bring new big data services and solutions to the Hadoop market. Nitin has a master's degree in Computer Science and holds an MBA from the Johnson School at Cornell University.

Jim Scott

Jim drives enterprise architecture and strategy at MapR. He is a cofounder of the Chicago Hadoop Users Group and has helped build the Hadoop community in Chicago for the past four years. He has implemented Hadoop at three different companies, supporting a variety of enterprise use cases, from managing points of interest for mapping applications, to online transaction processing in advertising, to full data center monitoring and general data processing. Prior to MapR, Jim was SVP of Information Technology and Operations at SPINS, the leading provider of retail consumer insights, analytics reporting, and consulting services for the Natural, Organic and Specialty Products industry. He also served as Lead Engineer/Architect for Dotomi, one of the world's largest and most diversified digital marketing companies. Before Dotomi, Jim held several architect positions at companies such as Aircell, NAVTEQ, Classified Ventures, Innobean, Imagitek, and Dow Chemical, where his work with high-throughput computing was a precursor to more standardized big data concepts like Hadoop.