We’re incredibly excited to announce our partnership with IBM. We worked closely with the IBM Cloud Data Services group to ensure that Looker can be flexibly deployed in Bluemix or on premises, providing data exploration and visualization capabilities for managed DB2, dashDB, and even managed Spark via Spark SQL.
In this post I’ll share how Looker, by leaving data where it lives in IBM’s suite of cloud data services, directly leverages the power of IBM’s compute infrastructure and provides managed data access across an organization. I’ll also discuss how to build complex analyses, like a Salesforce sales conversion funnel, in hours using IBM’s Simple Data Pipe and Looker Blocks.
The IBM integration we’ll focus on in this post is Looker on dashDB, IBM’s fully managed cloud data warehouse service. dashDB’s hosted MPP database architecture is massively scalable and performant, which makes it an ideal data source for Looker. As part of the IBM ecosystem, it also supports advanced Netezza analytic functions, geospatial functions, and an R integration for statistical analysis, enabling advanced functionality without having to move data out of the database. All of these features can be directly leveraged by Looker for data exploration and visualization. And when connecting Looker to dashDB, you don’t have to worry about data latency: because queries run directly against the warehouse, results are always as fresh as the data itself.
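Because Looker generates SQL that runs in the database, these in-database analytic functions can be surfaced directly in a LookML model. A minimal sketch, using a standard SQL regression aggregate that DB2-family databases support; the view, table, and column names here are hypothetical illustrations, not definitions from an actual Looker Block:

```lookml
view: opportunity_analytics {
  sql_table_name: salesforce.opportunity ;;  # hypothetical table name

  # Push the statistical computation down into dashDB:
  # the slope of deal amount vs. days-to-close is computed
  # in-database, so no data leaves the warehouse.
  measure: amount_vs_cycle_slope {
    type: number
    sql: REGR_SLOPE(${TABLE}.amount, ${TABLE}.days_to_close) ;;
  }
}
```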
Oftentimes, warehousing data from disparate sources can be quite cumbersome and time consuming, taking weeks or months to consolidate data into a queryable format. Simple Data Pipe makes it easy to create a centralized data warehouse in dashDB. Popular data sources like Stripe and Salesforce can be brought into IBM’s ecosystem via dashDB at the push of a button. The Simple Data Pipe moves JSON from the source API into Bluemix’s Cloudant NoSQL service, then uses Bluemix’s DataWorks to convert the Cloudant JSON into a relational format that dashDB can understand. This service provides table-level access to the data, which can be transformed and explored with Looker. Looker Blocks bring almost immediate value to this data, enabling commonly sought-after metrics and reporting for familiar datasets like Salesforce to be implemented via a LookML model.
Let’s consider the Salesforce integration. Once the Salesforce data has been moved to dashDB, we’ll see expected tables such as Leads and Opportunities. Looker Blocks provide a starting point for getting value from these tables. For example, by providing counts and status metrics from each of these sources, Blocks let us easily construct a sales funnel to observe conversion rates at each stage. We can then slice this funnel by team or individual to compare and evaluate performance.
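A funnel like this boils down to filtered counts over the opportunity stage. A minimal LookML sketch of the idea; the table name, `stage_name` column, and `"Closed Won"` value are hypothetical stand-ins, not the actual Block definitions:

```lookml
view: opportunity {
  sql_table_name: salesforce.opportunity ;;  # hypothetical table name

  dimension: stage_name {
    type: string
    sql: ${TABLE}.stage_name ;;
  }

  # Total opportunities entering the funnel
  measure: count {
    type: count
  }

  # Opportunities that reached the closed-won stage
  measure: won_count {
    type: count
    filters: {
      field: stage_name
      value: "Closed Won"
    }
  }

  # Top-to-bottom conversion rate for the funnel
  measure: win_rate {
    type: number
    value_format_name: percent_1
    sql: 1.0 * ${won_count} / NULLIF(${count}, 0) ;;
  }
}
```

Grouping `win_rate` by an owner or team dimension then gives the per-rep and per-team comparison described above.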
By adding in the date timeframes included in the Looker Block, we can also look at trending counts over time, letting us report on historical pipeline and analyze trends like seasonality.
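In LookML, those date timeframes typically come from a `dimension_group`, which expands one timestamp column into a family of date dimensions. A minimal sketch, assuming a hypothetical `created_date` column:

```lookml
dimension_group: created {
  type: time
  timeframes: [date, week, month, quarter, year]
  sql: ${TABLE}.created_date ;;
}
```

Grouping lead or opportunity counts by `created_month` or `created_quarter` yields exactly the trend-over-time reporting described above.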
We can then build on Looker Blocks to customize the model with our own specific metrics. For example, we might want to observe the demographics of our current accounts so we can better target future prospects. It’s as easy as defining new custom dimensions for vertical and company size from our custom Salesforce fields and selecting the count measure on the Account table.
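Those custom dimensions might look like the following sketch. The column names (`vertical__c`, `number_of_employees`) and the tier boundaries are illustrative assumptions about the Salesforce custom fields, not fields from the source:

```lookml
view: account {
  sql_table_name: salesforce.account ;;  # hypothetical table name

  # Custom Salesforce field surfaced as a dimension
  dimension: vertical {
    type: string
    sql: ${TABLE}.vertical__c ;;
  }

  # Bucket raw employee counts into company-size tiers
  dimension: company_size_tier {
    type: tier
    tiers: [50, 200, 1000, 5000]
    style: integer
    sql: ${TABLE}.number_of_employees ;;
  }

  measure: count {
    type: count
  }
}
```

Grouping the account count by `vertical` and `company_size_tier` gives the demographic breakdown described above.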
dashDB’s powerful MPP architecture supports a function-rich SQL dialect that Looker can directly leverage as an in-database BI platform. With Looker Blocks on top of a dashDB warehouse fed by the Simple Data Pipe, insights can be quickly realized through a LookML model that provides a starting point for reporting on sources like Salesforce and Stripe. Additional metrics can easily be added to supplement analysis, and entirely new data sources can be brought into dashDB and modeled with Looker for extended visualization and exploration. The joint IBM and Looker solution makes data quickly and easily available across an organization.