Big Data Everywhere - Sydney
Sydney, Australia
Wednesday, June 10, 2015

Big Data Everywhere is an all-day Hadoop conference aimed at helping you understand the technical, business and practical aspects of Hadoop and related technologies.

Talks

How to Move Your Business to Big Data: The Next Generation Enterprise Architecture

Jim Scott

In this session, I’ll explain how to move your business to the next level by implementing a next-generation enterprise architecture. I’ll lay out which products fit into this architecture to fulfil the needs of your business, as well as the benefits these components deliver, both independently and together. With this architecture, log shipping becomes a thing of the past: the architectural simplification removes work that most businesses currently have to do just to function and drive revenue, and it is one less thing for employees to worry about in the wee hours of the night. By storing data in a distributed file system where it is created, and processing it in place, you can eliminate the transport technologies otherwise required and simplify the application architectures of many enterprise applications, because the enterprise architecture itself solves these problems.
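
As a rough illustration of the process-in-place idea described above (not taken from the talk itself), the following sketch assumes a Spark cluster co-located with a distributed file system such as HDFS; the paths, schema, and field names are hypothetical. Applications write their events straight to the shared file system and an analytics job reads them in place, with no separate log-shipping pipeline.

    # Minimal sketch: processing data in place on a distributed file system.
    # Assumptions: a Spark cluster co-located with HDFS; paths and fields are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("process-in-place").getOrCreate()

    # Applications write JSON events directly to the shared file system,
    # so there is no separate log-shipping step before analysis.
    events = spark.read.json("hdfs:///data/events/2015/06/10/")

    # Process the data where it lives, e.g. count events per application and level.
    summary = events.groupBy("app", "level").agg(F.count("*").alias("events"))
    summary.write.mode("overwrite").parquet("hdfs:///data/summaries/2015-06-10/")

    spark.stop()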
Cisco IT's Hadoop Journey

Get insights into Cisco’s own Hadoop journey and how Cisco implemented a highly agile enterprise Hadoop platform, delivering multiple use cases including ETL migration, content management, smart analytics and compliance. Presenter: Virendra Singh, Architect, IT Architecture
Big Data: Enabling data agility at scale

The idea of agility isn't new. However, amidst the Big Data hype, the core promise seems to have been forgotten. As Big Data paradigms transition from the peak of inflated expectations to the trough of disillusionment, organisations need to be clear about the business benefits Big Data can deliver. This session revisits the core promise of Big Data and explains why organisations need to invest in building Big Data capability sustainably whilst demonstrating business benefits early and frequently. Presenter: Vivek Pradhan, GM, Victoria at Servian
What Is A Data Lake, Anyway?

Data Lake discussions are everywhere right now - and to read some of the commentary is to believe that the Data Lake is almost the prototypical use case for the Hadoop technology stack. But there are far fewer actual, referenceable Data Lake implementations than there are Hadoop deployments, and even fewer documented best practices that tell you how you might actually go about building one. So if the Data Lake is more architectural concept than physical reality in most organisations today, now seems like a good time to ask: What is a Data Lake anyway? What do we want it to be? What do we want it not to be? In this presentation, we will address these questions and present a conceptual architectural model for the Data Lake. Presenter: Alec Gardner, GM, Advanced Analytics, Teradata ANZ

Speakers

Jim Scott

Jim Scott drives enterprise architecture and strategy at MapR. He is the cofounder of the Chicago Hadoop Users Group and has helped build the Hadoop community in Chicago over the past four years. He has implemented Hadoop at three different companies, supporting a variety of enterprise use cases, from managing points of interest for mapping applications, to online transaction processing in advertising, to full data center monitoring and general data processing. Prior to MapR, Jim was SVP of Information Technology and Operations at SPINS, the leading provider of retail consumer insights, analytics reporting and consulting services for the Natural, Organic and Specialty Products industry. Additionally, he served as Lead Engineer/Architect for dotomi, one of the world’s largest and most diversified digital marketing companies. Prior to dotomi, Jim held several architect positions with companies such as aircell, NAVTEQ, Classified Ventures, Innobean, Imagitek, and Dow Chemical, where his work with high-throughput computing was a precursor to more standardized big data concepts like Hadoop.