Can I reduce the time to value for my business users on Hadoop data?
How can I run SQL on semi-structured data types?
How do I create and manage schemas for my data when the applications are changing fast?
What types of distributed-systems problems do I have to solve when moving beyond traditional MPP scale to Hadoop scale?
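To make the second question concrete, here is a minimal sketch of the flatten-then-query approach that SQL-on-Hadoop engines generalize. It is not Apache Drill itself (Drill queries semi-structured data in place, schema-on-read, without a separate flattening step); the sample records, the `flatten` helper, and the column choices are all hypothetical, and sqlite3 stands in for a distributed engine purely for illustration:

```python
import sqlite3

# Semi-structured records: the fields vary from record to record
# (hypothetical sample data -- note "bob" has no "geo" field).
records = [
    {"name": "alice", "clicks": 3, "geo": {"city": "SF"}},
    {"name": "bob", "clicks": 7},
]

def flatten(rec):
    # Map a schema-free record onto a fixed set of columns,
    # tolerating missing or nested fields.
    return (
        rec.get("name"),
        rec.get("clicks", 0),
        rec.get("geo", {}).get("city"),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, clicks INTEGER, city TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [flatten(r) for r in records])

# Standard SQL now works over the originally schema-free data.
rows = conn.execute("SELECT name, clicks FROM events WHERE clicks > 5").fetchall()
print(rows)  # [('bob', 7)]
```

The pain point the abstract alludes to is exactly the `flatten` step: when applications change fast, that hand-maintained mapping breaks, which is why engines like Drill aim to discover structure at query time instead.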
Overall, a new way of thinking is needed to bring end-to-end agility to BI/analytics environments operating on Hadoop/NoSQL data. Along with the table-stakes requirement of supporting a broad ecosystem of SQL tools, close attention must be paid to new requirements such as working with flexible, fast-changing data models and semi-structured data, and achieving low latencies at 'big data' scale. This session will cover how Apache Drill is pursuing the audacious goal of bringing instant, self-service SQL natively to Hadoop/NoSQL data without compromising either the flexibility of Hadoop/NoSQL systems or the low latency required for a BI/analytics experience. It covers the exciting architectural challenges the Apache Drill community is working on, the progress made so far, and the roadmap ahead.