Andrew Montalenti
CTO - Parse.ly

Parse.ly recently introduced their Data Pipeline product. Building on the analytics infrastructure expertise they’ve developed by processing over 50 billion user events per month, Parse.ly is now making its fully-managed pipeline available as a service for developers. Specifically, their open source recipes for streaming data compatible with Redshift and BigQuery provide an easy way to get started with Looker. Andrew Montalenti, CTO of Parse.ly, details the joint solution.

Making Analytics Work for Content

Parse.ly is an analytics platform designed to make it easy for anyone to access and understand their digital audiences. Whether you need real-time or historical insights about your content, the Parse.ly dashboards and APIs help teams monitor, promote, and improve based on data.

Thousands of typically data-averse editors, marketers, and content creators have adopted the intuitive dashboards because we've removed the complexity and jargon around digital analytics. Plus, we've helped product teams rapidly develop features like content recommendations to drive higher on-site engagement through a simple API.


The real-time overview screens within Parse.ly's web and mobile dashboards.

Parse.ly has over 170 customers, including media companies like TechCrunch, Slate, and Mashable, and brands like Artsy and Ben & Jerry's.

Questions No Vendor Dashboard Can Answer

With our dashboard covering the basic questions for the entire organization, our customers became more sophisticated and started asking questions specific to their business. So, we introduced Data Pipeline: a new service from Parse.ly that provides clean and enriched raw event data collected from your sites and apps via a fully-managed service.

We handle data collection on infrastructure that has already been scaled for over 700 top-traffic websites and 500 million monthly unique visitors. We also take care of enriching the raw events with useful information like geolocation and device categorization, leaving you with something immediately ready for analysis.

When Parse.ly decided to branch out from our core dashboard offering to provide raw data access, we knew we wanted to partner with a business intelligence platform that shared our company's philosophy of "analytics for everyone". Looker completely embraces this philosophy and was an easy choice as a launch partner for Data Pipeline.

[Image: the Parse.ly Data Pipeline data flow.]

  • Are you a media company or digital publisher?

If you answered "yes", you now have your ideal data source for Looker: a clean, enriched raw data source for metrics like unique visitors, pageviews, sessions, time spent, video starts, video watch time, and more. Just integrate Parse.ly via our standard JavaScript tracker and SDKs, and the data will simply flow.

  • Are you a B2B or B2C marketer?

As more marketers invest in content marketing, data on content and audience engagement has become key to understanding and improving their strategy and proving their value.

If you're a B2B or B2C company that has been investing heavily in your website, knowledge base, online resources, public documentation, and blog, and you want to unify audience data from all of these sources to get a complete picture of your content marketing efforts, Parse.ly can help. You simply follow our standard integration instructions, and data will flow.

Getting the Data Pipeline to work with Looker

Event data is delivered to you in raw form via a fully-managed AWS Kinesis Stream (for real-time data) and AWS S3 Bucket (for historical). From there, you have two great options to load it into a Looker-compatible SQL database, while fully controlling your extract, transform and load (ETL) process:

  • Near-Real-Time Bulk Loads: Run a cron job or similar to issue a Redshift COPY command from your S3 bucket, or a BigQuery load job from the same bucket. This gets data into your warehouse with latencies as low as 15 minutes from the time of data arrival.
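As a rough sketch of the Redshift option, a scheduled job might build a COPY statement like the one below and execute it with any PostgreSQL-compatible driver (such as psycopg2, since Redshift speaks the PostgreSQL wire protocol). The table, bucket, and IAM role names here are hypothetical placeholders, not Parse.ly-defined values:

```python
def build_copy_sql(table, bucket, prefix, iam_role):
    """Construct a Redshift COPY command for JSON event files in S3.

    All names passed in (table, bucket, IAM role) are illustrative
    placeholders; substitute the values for your own warehouse and
    the S3 bucket your pipeline delivers to.
    """
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS JSON 'auto' GZIP;"
    )


if __name__ == "__main__":
    # A cron job would build this statement and run it against
    # Redshift on a schedule (e.g. every 15 minutes).
    sql = build_copy_sql(
        table="parsely_rawdata",
        bucket="my-parsely-bucket",
        prefix="events/",
        iam_role="arn:aws:iam::123456789012:role/redshift-copy",
    )
    print(sql)
```

Wrapping the statement in a small builder like this makes it easy to parameterize per environment and to test the SQL without a live cluster.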

  • Real-Time Streaming Writes: Spin up a long-lived process in your favorite language (we recommend Python) that consumes data from your Kinesis stream and does streaming writes to either a Kinesis Firehose Stream configured to point to your Redshift instance or to BigQuery's streaming write API. This provides the fastest latencies possible; for Google, this can be sub-minute latency.
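A minimal sketch of the streaming option in Python might look like the following. The stream names are hypothetical, and a production consumer would use the Kinesis Client Library (or equivalent) for checkpointing and multi-shard support; this sketch reads a single shard with boto3 and forwards records to a Firehose delivery stream pointed at Redshift:

```python
def batch_records(records, max_batch=500):
    """Split records into chunks that respect Firehose's
    put_record_batch limit of 500 records per call."""
    for i in range(0, len(records), max_batch):
        yield records[i:i + max_batch]


def consume_and_forward(stream_name, delivery_stream):
    """Long-running loop: read events from a Kinesis stream and
    forward them to a Firehose delivery stream.

    Error handling, checkpointing, and multi-shard fan-out are
    omitted for brevity.
    """
    import time
    import boto3  # imported here so the pure helper above stays testable

    kinesis = boto3.client("kinesis")
    firehose = boto3.client("firehose")

    # Read from the first shard only, starting at the stream tip.
    shard_id = kinesis.describe_stream(StreamName=stream_name)[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    while True:
        resp = kinesis.get_records(ShardIterator=iterator, Limit=1000)
        iterator = resp["NextShardIterator"]
        records = [{"Data": r["Data"]} for r in resp["Records"]]
        for batch in batch_records(records):
            firehose.put_record_batch(
                DeliveryStreamName=delivery_stream, Records=batch)
        time.sleep(1)  # stay under Kinesis per-shard read limits
```

For BigQuery, the same loop shape applies, with the Firehose calls replaced by calls to BigQuery's streaming insert API.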

Our product has standard schemas for our event records, and we have these defined using Redshift and BigQuery DDL as well. Better still, our partnership with Looker means we've built a Looker Block atop this standard schema, so most of the basic LookML modeling work is already done for you. You don't have to spend time deciphering attributes or learning how to properly structure a query to count unique visitors or sessions; you can just explore your data and start getting answers.

[Image: an example Looker dashboard built with the Looker Block for Parse.ly.]

Above is an example Looker dashboard built from Parse.ly's Data Pipeline and the Looker Block for Parse.ly. The customer receives the streaming data (via Kinesis/S3) and loads it into Amazon Redshift or Google BigQuery, while maintaining full control over the ETL process. Using the Parse.ly Block, Looker queries the standard column names and types that are common to Parse.ly raw events. This provides an easy starting point for your exploration, from which the LookML model can be further developed to support analyses unique to your company.

Want to get started with Parse.ly and Looker?

Refer to the Looker Discourse site for more details on the Looker Block for Parse.ly. To access the Block, reach out to your assigned Looker analyst, or request a Looker demo and trial.

To learn more and start using the Parse.ly Data Pipeline, sign up for an account today.
