<![CDATA[Looker | Blog]]> discover.looker.com Copyright 2019 2019-07-22T23:10:33-07:00 <![CDATA[Putting the Hack in Hackathon: The Winners of Looker_Hack : London]]> http://looker.com/blog/putting-the-hack-in-hackathon Note: This is one part of a two-part series on Looker_Hack : London, hosted in May 2019. This post is focused on highlighting the winning projects and the rockstar developers that built them. The second post is about the tools we used to make the Hackathon such a success, which you can find on our Looker Engineering Blog.

Following the success of our first Hackathon last October at JOIN 2018, we wanted a way to show some #LoveLookerLove to the folks across the Atlantic. To do this, we decided to fly out to the UK, meet more of our innovative Looker customers, and host our first Hackathon event in London.

Why A Hackathon?

Hackathons are a great way to quickly build projects and solutions that can help teams meet specific business goals. For Looker specifically, hackathons provide an avenue for our engineers, designers, and product managers to meet developers and understand how they’re building on the Looker platform.

Our hope is that attendees leave these hackathons getting more enjoyment and benefit out of using Looker, and that everyone learns more about the Looker API and the powerful integrations and customizations that can be built on the platform.

Our Favorite Hacks

After a day of collaboration, a panel of three judges scored the day’s hacks and presentations on four main criteria: ambition, execution, coolness, and impact.

The winners for Best Hack and Nearly the Best Hack were then announced and received some amazing hardware to proudly display in their office, at home, or over their fireplace.

Best Hack: Acrotrend / Yoti

The Best Hack at Looker_Hack : London went to an application that allows users to ask quick follow-up questions of a dashboard in plain English. Leveraging Looker’s Embed SDK to embed a dashboard into their application and Looker’s core API to extract Looker model information, the Acrotrend/Yoti team generated a lexicon that extracted semantic meaning from the English questions. Using this to generate queries (again through the Looker API), the winning team was then able to visualize those results using their own custom visualizations.
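As a rough illustration of that flow, here is a toy lexicon-based parser. Everything below (the word-to-field mappings and field names) is hypothetical; the actual hack built its lexicon from model metadata fetched via the Looker API.

```python
# A toy lexicon mapping English words to (hypothetical) Looker model fields.
LEXICON = {
    "revenue": "orders.total_revenue",
    "sales": "orders.total_revenue",
    "country": "users.country",
    "month": "orders.created_month",
}

def question_to_query(question):
    """Extract model fields mentioned in a plain-English question."""
    words = question.lower().replace("?", "").split()
    fields = []
    for word in words:
        field = LEXICON.get(word)
        if field and field not in fields:  # dedupe synonyms like revenue/sales
            fields.append(field)
    return {"fields": fields}

print(question_to_query("What was revenue by country last month?"))
```

The resulting field list could then be handed to a query-running API call, which is the step the winning team performed through the Looker API.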

Nearly the Best Hack: Farfetch

The award for Nearly the Best Hack went to the team that built a custom Data Action through Looker’s Action Hub. The data action sent data to a Google spreadsheet, which in turn was the backing data source for a Google presentation. This awesome team of hackers demonstrated near-instant updates to a slide deck that could then be used to present on the state of their business.

(Very) Nearly the Best Hack: Turner

Because of an incredibly tight race, we also awarded a Nearly the Best Hack to a team who used Looker’s API to extract results from two data sources that shared a nearly common field, performed fuzzy matching using a Jupyter notebook and a generated Python SDK, and pushed the matched data back into Looker to visualize.
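A minimal sketch of what that fuzzy-matching step might look like, using Python's standard difflib. The sample values and the similarity threshold are invented for illustration; the team's actual notebook pulled its rows through Looker's Python SDK.

```python
from difflib import SequenceMatcher

def best_match(value, candidates, threshold=0.6):
    """Return the candidate most similar to value, if above threshold."""
    scored = [(SequenceMatcher(None, value.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

# Two "nearly common" join fields from different data sources (made up).
source_a = ["The Big Game", "Evening News"]
source_b = ["Big Game, The", "The Evening News (HD)"]

for title in source_a:
    print(title, "->", best_match(title, source_b))
```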

What impressed us about their hack was that the team itself represented two different departments within their organization, and the hack was built to solve a real problem affecting some of their business goals. In just under a day, this team came together and, leveraging the Looker platform, made it easier for both departments to answer critical business questions.

Future Hacks

We’ve had a blast hosting and being a part of our Looker Hackathons. Not only have the completed projects impressed and inspired us, but they have also encouraged us to continue iterating on their successes for Hackathons yet to come.

Our next Hackathon is scheduled for November 4th, 2019 in San Francisco to coincide with JOIN 2019. We look forward to sharing more updates about this and future Looker hackathons and hope to see you in attendance at one in the near future!

<![CDATA[The Rise of the Multi-Cloud Data Platform]]> http://looker.com/blog/future-of-multi-cloud-for-looker Cloud infrastructure is now mainstream, with more than 80% of all enterprise workloads expected to be cloud-based in 2020. But it’s not just cloud that’s the new norm: businesses are now using more than one cloud provider, often deploying several solutions at once. Businesses are rapidly embracing multi-cloud for their business intelligence systems.

In this dynamic, evolving world it’s vital that data teams are able to choose the data platforms and deployment methods that work best without lock-in and with the flexibility to pivot and change approaches as necessary.

Multi-cloud refers to the use of multiple cloud providers to supply infrastructure, applications, and key business functions. Multi-cloud goes beyond “hybrid cloud” deployments to include private infrastructure, IaaS, SaaS, and other new approaches. According to Gartner¹, “most organizations have already adopted multiple cloud computing providers for different applications and use cases.”

Benefits of Multi-Cloud for Organizations

Organizations and the data teams that support them are increasingly choosing multi-cloud for BI for the following reasons:

  • Access to capabilities: Not every provider has the same features, and who’s winning the feature race changes frequently.
  • Avoiding vendor lock-in: Leverage the strengths of more than one vendor and avoid costly, time-consuming migrations.
  • Cost mitigation: Multi-cloud keeps vendors competitive and pricing low.
  • Private cloud needs: Some organizations have business requirements for private cloud. In these cases, a multi-cloud approach improves flexibility and allows for hybrid cloud where necessary.

Historically, business intelligence tools have been built as single-vendor architectures, with data store and analytics tightly linked. As a result, changing an organization’s business intelligence tools or databases resulted in costly migrations, reworking of business logic, and months of effort.

The modern Looker Data Platform is different. Looker’s in-database architecture supports a wide range of databases and SQL dialects. With Looker as a multi-cloud data platform, you can deliver data where and when it’s needed, without being locked into a single interface and with the ability to go far beyond simple reports or dashboards. And Looker hosting is designed to meet the unique needs of your business, in a way that’s best for you.

“It’s really helpful to have an analytics platform that’s SOC2 compliant, meets GDPR standards, has all of the privacy we need, and that we can deploy into a closed system or on VPN networks—on-prem or in the cloud.”

Looker’s Database Agnostic Approach

Looker’s innovative in-database architecture leverages the power of your database investment to run queries. Looker speaks to your database via Java Database Connectivity (JDBC) and communicates in a SQL dialect your database understands. Because every database is different and SQL dialects vary, Looker’s multi-cloud platform supports more than 50 distinct SQL dialects, and we regularly add new ones as technology evolves. This means that if your database speaks SQL, Looker probably supports it, and if it can be reached via JDBC, Looker can communicate with it. With Looker, you’re never locked into the choice of a database you don’t want.

Looker simplifies database migrations, too. If your organization is modernizing its data infrastructure, retiring old systems, or simply adding new database technology, Looker’s database-agnostic approach makes adapting your analytics platform easy. When you connect to a new database, Looker automatically switches to the SQL dialect spoken by the new system. With only minor changes, your existing data models and business logic can be reused on your new database.
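To make the idea of dialect switching concrete, here is a toy sketch of dialect-aware SQL generation. The dialect rules and the query below are invented for illustration and are not Looker's actual implementation; the point is that one logical query can be rendered for whichever dialect the target database speaks.

```python
# Per-dialect rendering of "truncate a timestamp to month" (illustrative only).
DATE_TRUNC = {
    "postgres": "DATE_TRUNC('month', {col})",
    "bigquery": "DATE_TRUNC({col}, MONTH)",
    "mysql": "DATE_FORMAT({col}, '%Y-%m-01')",
}

def render_monthly_revenue(dialect, table="orders"):
    """Render the same logical monthly-revenue query for a given dialect."""
    trunc = DATE_TRUNC[dialect].format(col="created_at")
    return (
        f"SELECT {trunc} AS month, SUM(amount) AS revenue "
        f"FROM {table} GROUP BY 1"
    )

print(render_monthly_revenue("postgres"))
print(render_monthly_revenue("bigquery"))
```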

With Looker, you can connect to more than one database or data store at once, too. Looker supports multiple JDBC connections to the databases of your choice. Using Looker’s ability to merge query results, you can produce insights from data spread across multiple data environments.
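Conceptually, merging results behaves like an inner join performed outside any single database. Here is a minimal sketch, with made-up rows standing in for result sets returned by two different connections:

```python
def merge_results(left, right, key):
    """Inner-join two lists of row dicts on a common key column."""
    index = {row[key]: row for row in right}
    merged = []
    for row in left:
        match = index.get(row[key])
        if match is not None:
            merged.append({**row, **match})  # combine columns from both rows
    return merged

# Hypothetical result sets from two separate connections.
warehouse_rows = [{"sku": "A1", "revenue": 120}, {"sku": "B2", "revenue": 80}]
crm_rows = [{"sku": "A1", "region": "EMEA"}]

print(merge_results(warehouse_rows, crm_rows, "sku"))
# [{'sku': 'A1', 'revenue': 120, 'region': 'EMEA'}]
```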

Hosting How You Like It

Looker hosts and manages Looker deployments for the vast majority of our customers. The Looker Cloud, our virtual private cloud (VPC) environment, is secure, scalable, and provides excellent system availability to Looker users.

As a Looker customer, you can choose the underlying cloud provider in which your Looker instance is hosted. Looker instances today can be hosted in our VPC on AWS (Amazon) or GCP (Google). Customers can even self-host on private infrastructure if necessary. In the near term, Looker will expand Looker Cloud hosting choices to additional providers.

By providing a choice in hosting providers, Looker helps you leverage the cloud provider you prefer, and address complex issues such as data sovereignty (you can have us host in the region you prefer).

Deliver Data Where It’s Needed

Too often data tools lock users into a single way of viewing and using data. Older report-and-dashboard analytics approaches have limited the value organizations can extract from data.

Looker is an open system that lets you deliver data where it’s needed, to people and systems throughout your multi-cloud environment. From right within your Looker instance you can take immediate action on data and connect users to the insights they need, directly in their workflows. And unlike traditional, limited BI tools, Looker offers a powerful API that lets you automate data delivery on a cloud data platform: using the API, you can extract data directly into Amazon S3 buckets, email .csv files, or send text-message alerts to users.

“For organizations looking at multi-cloud and hybrid cloud management challenges, continue to consider Looker as a solution to provide consistent analytics across data environments.”
—Hyoun Park, CEO and Principal Analyst at Amalgam Insights

Learn More About Our Multi-Cloud Data Platform

A live demonstration of the Looker Data Platform is a great way to learn more about the power of the modern data architecture, see how Looker works, and discuss your approach to multi-cloud with one of our experts. Scheduling a demo is easy — try your data with Looker now.

Attending the annual JOIN data conference is another great way to learn more. At JOIN you can learn more about Looker’s cloud partners, including AWS, GCP, and Snowflake, and hear from data specialists using Looker in their own cloud and multi-cloud deployments. It’s a great way to learn tips and tricks of navigating the new multi-cloud world.

1 Gartner Technology Insight for Multicloud Computing, Lydia Leong, 16 August 2018

<![CDATA[Growing with Data at Indigo Ag]]> http://looker.com/blog/growing-with-data-at-indigo-ag

When I think about what it means to have a data-driven culture, I think of an organization where the use of data is championed anytime someone asks a question or makes a decision. If data is used to track progress, provide transparency, and measure success — to me — that is a true data culture.

Data + Indigo Ag

From the beginning, Indigo Ag was conceived as a data-driven company. With a commitment to improve grower profitability, environmental sustainability, and consumer health through the use of natural microbiology and digital technologies, the problems we’re striving to solve require complex, systematic changes that can only be accomplished through the use of data.

Some of the ways we’re doing this today include:

Indigo Marketplace

Indigo Marketplace, a digital platform for buying and selling grain, enables growers to receive premium prices for producing high-quality crops more sustainably, and buyers to source grain with a range of characteristics.

Digital Agronomy

We combine data from remote sensing technology (moisture probes, drones, satellites, etc.) with data from each farm to provide individual insights directly back to each grower. This provides a holistic view of not only what is going on in their farm, but also how that farm compares in aggregate to every other farm on Earth.

Microbial Product Pipeline

Seeking to improve a plant’s natural microbial makeup, we identify and sequence thousands of endophytes, using an approach called “focused sourcing.” Indigo scientists leverage sophisticated genomic sequencing and computational bioinformatics to catalog and assemble a world-class database of genomic information from these microbes. We apply algorithms and machine learning to this database to predict which microbes are most beneficial to the plant’s health.

To help drive these efforts, we leverage the Looker platform and currently have:

  • 22 active developers writing LookML
  • explores across 14 departments
  • 530 Looker users
  • 200+ queries run each week, on average, by users who spend at least one hour in the platform each week.

Some of our most popular dashboards showcase supply and demand within the Indigo Marketplace, grower profitability, and marketplace bid quality.

Building A Data Culture At Indigo Ag

My entire career has been working in data and using data to help businesses and people answer complex questions. Since joining the Indigo team, I’ve had the opportunity to dive into agriculture data — which is the hardest data I’ve ever had to wrangle. At Indigo, it’s important to me to provide everyone with equal access to data to help them succeed and ensure better results for our customers.

As the primary liaison between end business users and the Indigo data platform, the Business Intelligence Platform Team has made building our data culture here at Indigo a key focus. Some of the ways we’re driving platform adoption are through onboarding programs, workshops to further education, and continued maintenance and policing of the Looker platform.

Onboarding Programs

The BI Team works with hiring managers to scorecard analyst positions, gather requirements for an onboarding deliverable, and then constructs a tailored onboarding program for the newly hired analyst. This program is designed to educate analysts in the use of Looker, address skill-set gaps through training, and expose analysts to relevant data all in the context of the deliverable.

Training Workshops and Data Education

Another responsibility of the BI team is to provide ongoing data education and communication for all business and operations units. The BI Team communicates newly available data within the UDP, manages shared documents outlining where data is available, and facilitates data-related workshops.

Platform Maintenance and Policing

The BI Team is responsible for managing the Looker platform. This means reviewing PRs, curating explores, and encouraging the use of best practices. A well-managed Looker platform should provide intuitive, easy-to-use explores, well-defined developer areas, shared spaces that are easily navigated by their audiences, and documentation for everything.

Continuing To Grow And Learn With Data

Among all organizations, there are common misconceptions about how to use data to make business decisions, which leads to challenges when trying to develop a data culture. For instance, if your organization wants to succeed with data as a whole, gatekeeping or siloing the data should be avoided. As long as it is in accordance with a privacy-by-design data access structure, relevant, vetted, and smart data should be accessible throughout an organization.

In addition to these, the three biggest misconceptions I’ve come across in my career are:

Bad Data

There is no such thing as bad data, only poor analysis or a lack of transparency. Bad data is often the product of poor processes, but even data generated from a poor process can be used to highlight where the process is breaking down.

“Give Me All The Data”

This is a common request that typically means there needs to be a conversation with the stakeholder around “What are we trying to answer?” That conversation allows us to provide “smart” data: only the relevant data required to answer the question.

Proliferation Of Data Sets

This one is expected, but it’s always worth mentioning. Without a robust reporting platform, users feel the need to replicate data outside of source systems, which leads to shadow systems and trusted data sets living outside of those source systems.

By removing hurdles to using data and maintaining a culture of transparency whenever possible, you can begin to spark a culture across the organization where questions and decisions are based on data.

Join the conversation and share your own insights about data culture and data adoption in the Looker Community.

<![CDATA[Analyzing Customer Behavior With A New Looker Marketing Block]]> http://looker.com/blog/analyzing-customer-behavior-with-a-new-looker-marketing-block A Japanese version of this article is available here.

As individuals living in the digital era, we leave a massive trail of data through each click and pageview. Marketing teams cast a wide net to capture this data in an effort to understand their audience and provide a better customer experience, but analyzing that data, much less taking action on it, has been a challenge to say the least.

This is why we’re excited to announce the latest addition to the Looker Blocks directory: Customer Experience Analytics by KARTE, a turnkey suite of dashboards for understanding user behavior trends (campaign effectiveness, Net Promoter Score, etc.) and seamlessly taking action on them through the KARTE service.

What Is KARTE?

KARTE is a customer experience platform from Plaid, Inc., a Japanese technology startup building a suite of products to personalize the customer experience. KARTE customers can trigger actions and events that optimize customer experience by analyzing user behavior in real time. Customers can opt to pipe event and pageview data into a Plaid-provided BigQuery environment through the KARTE Datahub offering, a huge leg up for marketers who want to get their hands dirty in their quest for unique, actionable insights. This also means that all Looker customers can take advantage of this Looker Block without having to set up a new data warehouse environment themselves.

What’s Included With The User Experience Analytics Block?

First of all, let’s talk about the dashboards — probably something a lot of you will see right away. As of today, there are three dashboards — Web Access Analytics, Pageview Funnel, and NPS Overview — to get you started.

Web Access Analytics

This dashboard covers a comprehensive list of standard web metrics, including time series tracking of sessions, bounce rates, and what OSs customers are using to view your website. It’s something a digital marketing manager would view to get their day started, and can be made into a scheduled report that goes straight into your inbox, just in time for your morning commute or first cup of coffee.


Pageview Funnel

This dashboard provides insights into how your customers are moving through your website, and where they’re dropping off. Are your customers putting items into their cart only to never actually check out? Are most of your customers dropping off at a particular link? Or perhaps there’s a massively popular piece of content people navigate to. Understanding these behaviors will help you identify any bottlenecks in your UX, guiding you on a path to remediating — or doubling down — on what and how you present your content.
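The funnel math behind a dashboard like this is simple to sketch. The step names and visitor counts below are invented for illustration:

```python
def drop_off_rates(steps):
    """For each funnel transition, return (from_step, to_step, continuation_rate)."""
    rates = []
    for (name_a, count_a), (name_b, count_b) in zip(steps, steps[1:]):
        rates.append((name_a, name_b, count_b / count_a))
    return rates

# Made-up pageview funnel: visitors remaining at each step.
funnel = [
    ("Product page", 10000),
    ("Add to cart", 3200),
    ("Checkout", 1500),
    ("Purchase", 900),
]

for src, dst, rate in drop_off_rates(funnel):
    print(f"{src} -> {dst}: {rate:.0%} continued")
```

A transition with an unusually low continuation rate is exactly the kind of bottleneck the dashboard surfaces.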



Net Promoter Score (NPS) Overview

This dashboard shows you trends in your NPS, a critical metric for gauging your customers’ loyalty and increasing engagement at every level. We’ve added tiles covering results across the most important attributes so you can dive deeper into visitor-level insights right away by drilling into any point that piques your interest.
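For reference, NPS itself is straightforward to compute from 0-10 survey responses: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch with made-up responses:

```python
def nps(scores):
    """Compute Net Promoter Score (-100..100) from a list of 0-10 scores."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 3, 10]  # 4 promoters, 2 detractors
print(nps(responses))  # 25.0
```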


Tools Alone Won’t Make You Successful — It’s How You Use Them

Dashboards are a great starting point for understanding and monitoring your customer behavior from a high level at all times. However, it’s only when you get deeper into the details of those behaviors that you’re able to act on these trends. That’s why we’ve added a seamless linkback to KARTE’s UI from user-level data.

Every result in this Looker Block has a drill through into row-level details, and we’ve set them up so that they always have a User ID field linked to the KARTE application. This means that once you identify a sudden change, say an uptick in detractors in your NPS Overview dashboard, you can easily pull up data on each individual detractor to see what’s causing them to respond with low scores. An additional KARTE feature also enables pageview playbacks, showing you exactly what the customers saw and how they navigate each page in replay, putting you into your customers' shoes.



There are other ways you could add this into your daily workflow. For instance, you might take this drill through to the next level by creating a new visitor list that you can use for retargeting or direct outreach through email. Or perhaps you want to set up an alert that notifies you when your NPS dips below a certain threshold.

With Looker, you can take templated dashboards and reports (like the ones discussed in this article) to kickstart your analytics, then quickly and easily extend your existing business logic to tailor the data exploration experience and curate insights for your team without having to start from scratch.

If you’re interested in learning more about understanding your customers better, reach out to our team. We look forward to hearing from you!

<![CDATA[Why a Visualization is Worth a Thousand Data Points]]> http://looker.com/blog/why-a-visualization-is-worth-a-thousand-data-points We talk about data visualization (or “viz”) a lot at Looker. We believe in its power to tell stories, to help people see patterns and take action accordingly. In other words, visualization matters—a lot.

In the age of big data, visualization is one of the most powerful tools in your arsenal as an analyst or data enthusiast. It can so drastically affect the way we interpret and understand information that it may seem like magic; when, in fact, it works so well because of the way the brain processes large amounts of information. Get it right, and you supercharge your efficiency, your ability to draw connections, and your communication.

Data Visualization Makes You More Efficient

Understanding the scope of data available today illuminates why visualization makes you more efficient, and how necessary it is for modern business.

For example, this picture, taken on my cell phone, occupies 2 MB of storage space.

Image taken in the Sierra Nevada

Now, suppose that 2 MB could be represented by an inch (~2.5 cm) of physical space. If we laid out all the data in the world (33 zettabytes, according to the International Data Corporation) in a line, it could wrap around our planet about 9.5 million times.

It’s unlikely you’re dealing with all the data in the world at your day job, but chances are the amount of data your company collects is still impressive.

At Looker, we have roughly 15.5 TB of data (or 4.8 marathons’ worth, keeping with our earlier analogy). One particular table contained 20,687,442,124 rows of data at the time of writing this post. Making sense of a table like that is a pretty tall order.

What I can make sense of is seeing those data points plotted on a chart, where trends are rendered visible and unfathomable amounts of data suddenly become digestible. Why is that?

The Difference a Visualization Makes

When I look at a table I can remember a few data points at a time, but certainly not all of them. Numbers isolated from their broader contexts are hard to make meaning out of.

Graphs, on the other hand, give us the gift of seeing a representation of all the values at once, where we can easily and rapidly compare them. They take us out of the deluge of details so we can understand the bigger picture.

To see this in action, look at the data below about revenue in the movie industry. Spend some time examining the default table visualization. Then, try out different visualizations by clicking on the white icons in the Visualization tab and experience the effect for yourself.

The ability to see the big picture in this way illuminates a dataset’s essential characteristics. When we’re aware of the general landscape, identifying outliers is easy. Aberrations can be seen and addressed efficiently.

An easily visible outlier

It’s not all about outliers, though...

Data Visualization Helps You Make Connections

Relationships across data points matter—that’s where patterns emerge. We can draw connections between data points in a visualization by paying attention to characteristics like their relative areas, lengths, and positions.

Communicating these connections can effect significant change. Such was the story of Jon Snow, who used data visualization to draw attention to a major issue of his time.

In addition to his remarkable fictitious life in Westeros, Jon Snow lived in London in the mid-1800s. He created one of the most famous examples of a data map ever known. This is an adaptation of it:

Courtesy of Robin Wilson

This is a visualization of the severe cholera outbreak that rocked London in 1854. Upon reviewing hospital records and talking to the sick, Snow, a physician, began to suspect that an infected water pump on Broad St. was to blame.

This went against the accepted theory of the day that disease spread through “bad air,” so he needed proof.

Snow created a dot map to illustrate the proximity of cases (red) in relation to the water pumps (blue) and convinced officials to remove the Broad St. pump handle. The outbreak quickly stopped.

This is a perfect example of the power of visualization to save lives. If you know of other noteworthy stories, let me know, and I might feature them in a future blog post.

Data Visualization Helps You Communicate Clearly

Jon Snow’s map solidified his suspicion of the Broad St. pump as the source of disease. It also helped him communicate that finding clearly to the people who could do something about it.

Even if you aren’t pioneering the germ theory of disease, you can still harness visualizations to tell stories with data and help others see what you see. To learn more, check out our eBook, “The Art of Telling Stories with Data.”

Let’s put this storytelling idea into practice.

Imagine you work in city planning, and you need to determine whether planning for more apartment buildings or more single family homes makes sense for your municipality. You look at this data table showing homeownership and rental rates in the US over time. A trend is discernible, but it’s not that clear or compelling—it doesn’t make a great story.

By contrast, here’s the same data as a column chart:

Looking at this visualization, it’s obvious that 2008 saw rental rates increase sharply while homeownership rates fell. It’s also easy to understand how the impact of the financial crisis has stretched itself over time, causing homeownership rates to drop and the percentage of renters to rise. Based on this visual, you could make a sound case for planning more apartment buildings.

This visualization that ranked health supplements based on their popularity and evidence of their effectiveness for particular conditions demonstrates another practical application of data. With this information, you could communicate with your family why you’d rather take Vitamin D on a regular basis instead of Vitamin E, which has been shown to be dangerous in large amounts.

You can even use visualization to justify moving to a different state to pursue a career as a professional dancer, based on the concentration of jobs available in the field; or simply to add color to your daydreams about that life (hello, Nevada).

The potential applications of communicating through visualizations feel endless. For more fascinating examples of how visualization can help us communicate, check out David McCandless’s TED Talk on “The Beauty of Data Visualization.”

In Conclusion

To recap, visualization matters a lot because it can help you:

  1. Quickly make sense of overwhelming amounts of information
  2. Draw connections with far-reaching implications, as in the example of Jon Snow
  3. Communicate clearly and tell stories

Good visualization is so important that we decided to dedicate a blog series to it. Look out for upcoming posts on viz-centric topics such as:

  • How to choose the best graph or chart for your data
  • Designing dashboards for UX/UI
  • The problem with pie charts

Until next time,

Jill Hardy
Content Strategist, Customer Education

<![CDATA[Creating Actionable Customer Segmentation Models]]> http://looker.com/blog/creating-actionable-customer-segmentation-models What is customer segmentation?

Customer segmentation is a way to split customers into groups based on certain characteristics that those customers share. All customers share a common need for your product or service, but beyond that, there are distinct demographic differences (e.g., age, gender), and customers tend to have additional socio-economic, lifestyle, or other behavioral differences that can be useful to the organization.

What type of information is used in customer segmentation?

Any information you can acquire about individuals can be used to create a customer segmentation. Direct-to-consumer brands and B2B companies are at a distinct advantage because of the amount of information they can obtain about their customers just from their transaction data alone.

Basic data types typically include:

  • Geography (billing info, shipping info (if applicable), browser info)
  • Product(s)/Service(s) purchased
  • How customers found you (referring URL and/or campaign info, promo codes)
  • Device used (device type, brand (if mobile), browser)
  • If this is a customer’s first purchase
  • Payment method

Beyond these basics, companies may choose to collect more information as part of the selling or checkout process that can augment their customer data, such as:

  • Reason for purchase
  • Marketing or advertising channel that drove purchase*
  • Intended use: business, personal, self-consumption, gift, etc.
  • Company industry segment
  • Job title
  • Age/Gender

*Important Note:
This has become more common, especially with direct-to-consumer businesses trying to assess their marketing efficacy and offer another viewpoint besides last-click in Google Analytics. There is always a healthy margin of error applied to data reported in this way from a customer, but it certainly indicates what they believe to be the most memorable or important reason for their purchase. Daasity has built out specific logic for processing this information, along with other data, to help determine the most likely marketing channel responsible for purchases.

From here, there is the opportunity to either infer additional attributes or purchase additional attributes. Inferring attributes means you have already collected data that results in a strong correlation to another attribute. For example, you might infer gender from name.

The other option is to purchase data and append it to your customers’ existing profile data. Companies like Experian and Acxiom have significant amounts of purchase data from credit card transactions, as well as demographic data that they have mapped to certain behaviors. They have strong match rates and can provide additional data (referred to as 3rd-party data), such as:

  • Estimated household income
  • Presence of children
  • Homeownership
  • Amount of spend in your company category or other retail categories
  • Lifestyle or behavioral interests
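
Mechanically, appending purchased attributes is a left join on a match key. A minimal sketch, assuming hypothetical field names (customer_id, email, est_household_income, has_children):

```python
# Existing first-party profiles (field names are illustrative).
customers = [
    {"customer_id": 1, "email": "a@example.com"},
    {"customer_id": 2, "email": "b@example.com"},
]

# Attributes returned by a data provider, keyed on the match field.
purchased = {
    "a@example.com": {"est_household_income": "100-150k", "has_children": True},
}

def append_attributes(customers, purchased, match_key="email"):
    """Left-join purchased attributes onto profiles; unmatched customers keep None."""
    enriched = []
    for c in customers:
        extra = purchased.get(c[match_key], {})
        enriched.append({
            **c,
            "est_household_income": extra.get("est_household_income"),
            "has_children": extra.get("has_children"),
        })
    return enriched
```

In practice this join happens in your warehouse, but the shape is the same: every customer row survives, and third-party fields are null wherever the provider had no match.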

6 types of customer segmentation models

Common customer segmentation models range from simple to very complex and can be used for a variety of business reasons. Common segmentations include:

  1. Demographic
    At a bare minimum, many companies identify gender to create and deliver content based on that customer segment. Similarly, parental status is another important segment and can be derived from purchase details, asking more information from customers, or acquiring the data from a 3rd party.

  2. Recency, Frequency, Monetary (RFM)
    RFM is a method often used in direct mail segmentation, where you identify customers based on the recency of their last purchase, the total number of purchases they have made (frequency), and the amount they have spent (monetary). This is often used to identify your High-Value Customers (HVCs).

  3. High-Value Customer (HVCs)
    Based on an RFM segmentation, any business, regardless of sector or industry, will want to know where its HVCs come from and what characteristics they share so it can acquire more of them.

  4. Customer Status
    At a minimum, most companies bucket customers into active and lapsed, based on the last time a customer made a purchase or engaged with you. For typical non-luxury products, active customers are those who have purchased within the most recent 12 months; lapsed customers are those who have not. Customers may be bucketed even further based on how long they have held that status, or on other characteristics.

  5. Behavioral
    Past observed behaviors can be indicative of future actions, such as purchasing for certain occasions or events, purchasing from certain brands, or significant life events like moving, getting married, or having a baby. It’s also important to consider the reasons a customer purchases your product/service and how those reasons could change throughout the year(s) as their needs change.

  6. Psychographic
    Psychographic customer segmentation tends to involve softer measures such as attitudes, beliefs, or even personality traits. For example, survey questions that probe how much someone agrees or disagrees with a statement are typically seeking to classify their attitudes or perspectives towards certain beliefs that are important to your brand.
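
The RFM model described above (item 2) reduces to three aggregates per customer. A minimal sketch with hypothetical order records and field names:

```python
from datetime import date

# Hypothetical order history; field names and values are illustrative.
orders = [
    {"customer_id": 1, "order_date": date(2019, 5, 1), "amount": 120.0},
    {"customer_id": 1, "order_date": date(2019, 6, 1), "amount": 80.0},
    {"customer_id": 2, "order_date": date(2018, 1, 15), "amount": 40.0},
]

def rfm(orders, as_of):
    """Recency (days since last order), frequency (order count), monetary (total spend)."""
    out = {}
    for o in orders:
        r = out.setdefault(o["customer_id"],
                           {"recency": None, "frequency": 0, "monetary": 0.0})
        r["frequency"] += 1
        r["monetary"] += o["amount"]
        days = (as_of - o["order_date"]).days
        if r["recency"] is None or days < r["recency"]:
            r["recency"] = days
    return out

scores = rfm(orders, as_of=date(2019, 7, 1))
```

From these raw values, HVCs are typically defined by ranking or bucketing each dimension (e.g., top quintile of monetary with recency under 90 days) against thresholds that fit your business.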

5 benefits of customer segmentation

There are several benefits to implementing customer segmentation, including informing your marketing and promotion strategies, improving budget efficiency, guiding product development, and delivering relevant content to your customers or prospective customers. Let’s look at each benefit in a bit more depth.

  1. Marketing Strategy
    Customer segmentation can help inform your overall marketing strategy and messaging. As you learn the attributes of your best customers, how they are alike, and what is important to them, you can leverage that information in messaging, creative development, and channel selection.

  2. Promotion Strategy
    A broad customer segmentation scheme can improve your overall promotion strategy (e.g., “our customers are deal seekers, therefore we should offer frequent deals”) as well as promotions targeted at specific segments. You may find that certain cohorts of customers don’t require discounts when you use certain messaging, saving you from having to offer a discount to those groups at all.

  3. Budget Efficiency
    Most companies do not have unlimited marketing budgets, so being precise about how and where you spend is important. You could, as an example, target similar customers to segments of high value or those most likely to convert to get the most return from your marketing investment.

  4. Product Development
    The more customers you acquire, the more you learn about what is important to them, what features they want, and which customers are the most valuable. Your company can use these insights to prioritize product features that appeal to the most customers, to high-value customers, or to whichever segments make sense for your industry.

  5. Customers Demand Relevance
    Whether it’s D2C, B2B, Millennials, or Gen Z, there seems to be a study on every possible group of customers stating that relevant content is important to them. These customers are more likely to respond, buy, and feel connected to a brand when provided with relevant content. By performing some level of segmentation, you can ensure that the messages you deliver via email, on site, through digital ads, or by other methods are targeted and relevant to the individual seeing them. It may seem counter-intuitive in an age of hyper-vigilance about data privacy to use so many pieces of data this way, but with so many marketing messages coming at people today, no one has time for something that isn’t relevant to them.

How to make customer segmentation actionable

To make your customer segmentation actionable, first, you must start with a goal in mind. As previously mentioned, segmentation can be simple, complex, or anything in between — and you aren’t limited to one set of segments. With the ease and accessibility of data today, you can devise different customer segments for different purposes.

The amount of information that can be obtained from various sources is endless, but it’s only valuable if you can act on it. This requires questioning, being curious, and analyzing the data you have. As you find treasures buried in that data, design a test to confirm each one is in fact a useful finding.

Examples of customer segmentation

Target has perhaps the most famous story of using customer segmentation, analytics, and marketing techniques to increase their share of wallet with pregnant women. In 2012, the incredible story broke of Target accidentally informing a young woman’s father that she was, in fact, pregnant, before she had broken the news to him herself.

Once a customer has a child, his or her purchase patterns and basket contents change: diapers and other baby products start appearing consistently. That is a whole segment of customers: people who’ve just had babies. Add gender and you have women who have just had babies. As Target’s analysts evaluated this segment’s history, they saw purchase patterns emerge as markers of the pregnancy’s milestones. From there, they presumably built predictive models that classified customers as they hit those markers and flagged them as likely pregnant. Target then marketed very specifically to these women, with highly targeted ads and direct mail for baby items, baby clothes, and supplies. When one young woman received a mailer addressed to her, her father was astonished at how foolish and careless Target could be...until he found out that his daughter was indeed expecting, and Target knew before he did.

This example is extreme but memorable. Segmentation can be employed using knowledge of your customers, knowledge of your business, common sense, and perhaps a few creative variations, even if you don’t have a Target-sized team of data scientists poring over the data.

An easy way to use segmentation and start collecting data for immediate results is through email campaigns. Let’s say you are planning a campaign series and want to learn how different customer groups react to various messaging and offers. You have a healthy database of emails that includes a mixture of customers and non-customers. Using the code below, you can split your list into non-customers and customer groups based on recency of last purchase: 0-3 months, 3-6 months, 6-12 months, and >12 months.

view: customer_recency {
  derived_table: {
    sql:
      WITH last_order AS (
        SELECT customer_id, MAX(order_date) AS last_order_date
        FROM order
        GROUP BY customer_id
      )
      SELECT
        c.customer_id,
        CASE
          WHEN DATEDIFF(day, CAST(lo.last_order_date AS DATE), CAST(current_timestamp AS DATE)) BETWEEN 0 AND 90 THEN '1: 0-3 Months Active'
          WHEN DATEDIFF(day, CAST(lo.last_order_date AS DATE), CAST(current_timestamp AS DATE)) BETWEEN 91 AND 180 THEN '2: 3-6 Months Active'
          WHEN DATEDIFF(day, CAST(lo.last_order_date AS DATE), CAST(current_timestamp AS DATE)) BETWEEN 181 AND 365 THEN '3: 6-12 Months Active'
          WHEN DATEDIFF(day, CAST(lo.last_order_date AS DATE), CAST(current_timestamp AS DATE)) > 365 THEN '4: 12+ Months Lapsed'
          ELSE 'Non-Customer'
        END AS customer_recency_group
      FROM customer c
      LEFT JOIN last_order lo ON c.customer_id = lo.customer_id ;;
  }

  dimension: customer_id {
    primary_key: yes
    sql: ${TABLE}.customer_id ;;
  }

  dimension: customer_recency_group {
    type: string
    sql: ${TABLE}.customer_recency_group ;;
  }

  measure: num_customers {
    type: count
  }
}

You can then evaluate the performance of each group against sent content to determine if there are specific messages that resonate more.
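
Once each recipient is tagged with a recency group, that evaluation is a simple rate per group. A sketch with made-up numbers and hypothetical field names:

```python
# Hypothetical campaign results per recency group; the numbers are made up.
results = [
    {"group": "1: 0-3 Months Active", "sent": 1000, "converted": 50},
    {"group": "4: 12+ Months Lapsed", "sent": 1000, "converted": 5},
    {"group": "Non-Customer",         "sent": 2000, "converted": 20},
]

def conversion_rates(results):
    """Conversion rate per segment, for comparing how each message performed."""
    return {r["group"]: r["converted"] / r["sent"] for r in results}
```

Comparing these rates across message variants tells you which content resonates with which group, and where a lapsed segment may warrant a different offer entirely.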

Accomplish more with actionable customer segmentation models

Customer segmentation is an important tool for any business aiming to grow revenue, repeat rates, share of wallet, and profitability. Segmentation does not have to be incredibly complex or expensive, and it can be easily accomplished using a Looker dashboard with readily available transaction or demographic data. Customer segmentation benefits both your customers and your organization: your customers feel more connected to your brand because they’ve received relevant content, and in turn your company should see increased positive results.

Subscribe to the Looker blog below to stay up to date on trending topics in big data, data analytics stories, product news, and more pieces from our Daasity team.

<![CDATA[It’s time to hire a Data Product Manager]]> http://looker.com/blog/its-time-to-hire-a-data-product-manager http://looker.com/blog/its-time-to-hire-a-data-product-manager Every business has data. Most businesses use their data. But not enough use their data well.

Why? In large part, it’s because as the supply of data has exploded and the demand for access has grown, the existing systems simply couldn’t keep up with the squeeze.

Data analysts were trained to be a service department. Request comes in. Analyst prioritizes and responds. Requestor has a follow-up. Analyst helps. In a world of small data and limited data usage, it works great.

But in today’s world, this unleveraged model doesn’t scale. There are too many requests for the analysts available, and there’s too much data for analysts to remain familiar with all of it. What’s needed instead are data products. And to build successful data products, you need new skills on your team.

The situation isn’t that different from how software engineering evolved in the recent past. Companies realized that engineers should build and maintain reusable products—customer-facing apps, CRMs, CMSs, and ERPs—rather than just hand-coding responses to one-off requests. And to do that effectively, they needed partners who could shepherd the product through its entire lifecycle: research, scope definition, prototyping, early releases, refinement, user testing, launch, bug fixes, and upgrades.

Enter the product manager.

Businesses face a similar challenge today when it comes to data. Taking full advantage of the benefits of today’s data-rich landscape requires a shift from a service model to a product model. But too many companies are still leaving data-product-building to data engineers and data analysts.

Those roles are critical, of course, but it’s time to embrace data product managers. Building effective products is simply impossible unless someone takes the time to understand which teams will use the data and how, to research how to meet each team’s unique data needs, to test out different solutions, to observe and onboard people with the data tool, and to gather feedback and iterate on the data product.

Too many data professionals continue with a service mindset, building the data tool they think teams need or exactly what business users request. Then they wash their hands of it and move on. They may not have time or motivation to find a champion for the tool, teach people how to use it, answer questions as they come up day-to-day, or field requests for improvements.

And too often, data products built this way fail. Products are built with good intentions, but they’re quickly abandoned because they’re too hard for business users to use, or don’t actually meet the intended need. Or worse, they may present authoritative-seeming answers that are just plain wrong because without an analyst constantly sanity-checking the data, bad data may slip through.

I can’t count the number of times I’ve seen people take bad data and run with it because “the tool told me this was the answer.” Whether it’s filtering some critical data incorrectly, or misunderstanding what a metric actually means, or taking a user’s request too literally, the impact is the same. And while it’s easy for data professionals to blame users, the reality is, when you’re building products, it’s your job to think about error handling and how to prevent problems proactively.

That’s why deeply understanding the product’s intended usage upfront is so important—and why having a data product manager is such a key advance. In fact, on the very best data teams, it’s not just a product manager, but a host of other new roles who collaborate to make data products great.

A data translator works on the business side, but has a deep understanding of what the data means and how it’s structured. She might be engaged to teach the product manager and data analysts about the data. A data ambassador—the person on a team who uses data most often and most comfortably—might be tasked with coming up with an onboarding and feedback plan for a new data product. And having an executive as a data champion is a crucial lever for getting people to give new data products a try.

But if you’re only going to make one change to the way you approach data, the data product manager is the place to start. She can take responsibility for the entire lifecycle of the way that data is used (or not) within a department or company. And that focus won’t just improve your data culture, it’ll also make your data teams more efficient.

Freeing data analysts and engineers from the need to squeeze product functions into their already packed schedule lets them focus on their job: building tools well and efficiently. And it makes sure that they’re building the right things and anticipating all the possible ways people will use their tools.

That’s the only way that companies will get the full value of their data. Because in today’s environment, with so much data and so many people clamoring for access, the service model of yesterday simply isn’t feasible. The transition to a world of data products is happening; the question is whether we’ll take the lessons from other disciplines and expand our teams’ competencies now, or whether we’ll stumble along with the status quo and learn the lesson the slow way.

<![CDATA[The Podium - June 14th, 2019]]> http://looker.com/blog/the-podium-june-14 http://looker.com/blog/the-podium-june-14 Greetings Earthlings,

Exciting times here at Looker! If you haven’t already, check out our blog post to learn more about the big news. As for the good ol’ Looker Community, though? It’s been nothing but business as usual, and I’ve got a classic round-up of questions, answers, and cool stuff for you all today.

As always, there are some especially neat projects going on that I’d like to highlight. First, there’s a brand new category for community discussion: Data Adoption. We want to provide a space to discuss best practices for building data-driven company cultures and share your experiences driving data adoption. Check out the inaugural prompt, which includes an opportunity to win some Looker swag. Share your secrets, adoption gurus!

Second, Seema from our Customer Experience team has an opportunity to help raise money for Girls Who Code, an organization that’s working to close the gender gap in technology. Learn how you can help by simply giving Looker a 5-minute review here.

Lastly, for those of you in San Francisco and New York City, there’s another Office Hours rolling around on June 20th at 3:30-5:00pm local time. The subject is Powerful Data Drilling, so go ahead and drill into this link to learn more and register.

Izzy Miller
Community Manager, Looker

Questions and Answers

Font Awesome Icons

I actually didn’t know that Looker shipped with font-awesome built in until @simon_onfido shared an example of its use here. You’ll find the fa-hippo icon on every dashboard + dimension I build from now on.
Looker Community Discussion — Font Awesome Icons

Running Total with Pivot

@Ski_Blanchard was working with a pivoted dataset and wanted to create an overall running total for every pivot, while keeping the result set pivoted to do Quarter over Quarter analysis. I took a stab, but @DaanIF came through with the real deal solution.
Looker Community Discussion — Running total with pivot

Two Measures, Two Filters

@Florian_Tourneur wanted to calculate the difference between two measures but filter each measure independently. A few solutions were provided, including @DaanIF with another piece of Looker Expressions magic, and @fabio shared the simplest solution — using custom fields. Give it a Look!
Looker Community Discussion — Two Measures, Two Filters

Community Spotlight - Rui Zhang

Our guest this week is @Rui_Zhang. Hey Rui!

What’s your name?

Rui Zhang (RAY JAWN)

What do you do for work?

I’m a data scientist at Pike13, Inc. We’re a cloud-based client management solution. I do everything data-related, including maintaining the data warehouse, writing ETL, developing the reporting function in our product, managing Looker, and providing internal reporting.

We use Looker both internally and externally for our customers. I create reports and dashboards for each team in our company to get the data they need easily. We are currently working on embedding Looker into our product so we can provide a better experience for our customers.

You’ve used Looker for a pretty long time — since 2015. What’s your best “back in my day” reference?

Back in my day, we needed to specify if the field type was int or decimal. Now we can just use "type: number" for any type of number fields and then use value_format to format numbers.

Most useful Looker feature?

Templated Filters and Liquid Parameters! We were able to do a lot of customization with them.

What’s your favorite word to describe data?

“Not wrong”

People come to me saying the data is wrong a lot. I have to tell them, it’s not “Wrong”, it depends on how you define it, and what you are comparing it to. Definition and context are key when you are talking about data.

What’s the best meal you’ve ever had and why?

The first meal Mom prepared when I hadn’t been home for a long time.

What’s the coolest fact you know?

China only has one timezone, no daylight savings time, and it NEVER CHANGES (at least it hasn’t yet). It’s really cool for building applications and running analytics. And yes, I’m originally from China.

I get a terrible headache when a region changes its time zone or daylight saving rules, and then the change lands in certain libraries/packages/gems/databases, but not in others.

Do you have a nerdy data/SQL joke?

I did a SQL101 in our company, and a coworker slacked me —
A SQL query walks into a bar, approaches two tables, and asks, “Can I join you?”

Another one I heard from a friend —
3 SQL databases walked into a NoSQL bar. A little while later, they walked out because they couldn’t find a table.

If you could share a piece of advice for those just starting out with Looker or data analytics in general, what would it be?

Looker has the best documentation and forum (discourse). You can start there.

Community Knowledge Share

Inline HTML Sparkline Graphs

The previous longtime resource for creating inline sparkline charts was deprecated in March. Luckily, @svickers found and shared not one but two viable replacements, both free and loaded with features. In the words of @jeffrey.martinez, “Scott, you’re a real one!”
Looker Community Discussion — Creating Custom Vis via HTML

How To Trick BigQuery Into Running... Big Queries

The marvelous magical Mr. Mintz did battle with BigQuery’s automatic data scanning & resource allocation engine. In a CROSS JOIN edge case, the ‘cost’ estimate will be so off that BQ won’t allocate nearly enough resources to the job, resulting in an error. Click through to see how he beat the system to get his query to run — it’s clever.
Looker Community Discussion — Resources Exceeded During Query Workaround

Motion Charts

There are some really interesting speculations and ideation going on in this thread around animated visualizations. Check out the Custom Visualization sandbox, look at @bens’ example, or add your own thoughts.
Looker Community Discussion — Motion Charts

Join the Conversation

That’s a wrap! Head over to the Community to post your thoughts and tell Rui her SQL jokes made you laugh. Also, in the spirit of community, let me know what you’d like to see in the coming Podiums! Any fun ideas, topics you want to learn about, people you want to see featured in the spotlight — you name it and I’ll make it happen.

Over and out!

<![CDATA[Looker to Join Google Cloud]]> http://looker.com/blog/looker-to-join-google-cloud http://looker.com/blog/looker-to-join-google-cloud In tech, most progress is incremental. But occasionally an existing market is up-ended and an entirely new approach causes a complete transformation of how humans can solve a problem. Looker and Google Cloud are at the heart of such a sea change in data analytics.

This is why I’m incredibly proud to announce some big news today: Looker has entered into an agreement to be acquired by Google. Pending customary closing conditions, Looker will join Google Cloud.

Joining Google Cloud means our ability to affect this mega change is greatly accelerated. The combination of Google Cloud’s BigQuery and associated data infrastructure and Looker’s platform for innovative data solutions will reinvent what it means to solve business problems with data at an entirely different scale and value point. Together, we’ll have better reach, more resources, and the brightest minds in both Analytics and Cloud Infrastructure working together to build an exciting path forward for our customers and partners. The mission that we undertook as Looker is a giant one -- and with Google Cloud, our ability to complete that mission is significantly enhanced.

The decision to make a change is never easy. But Google Cloud and Looker have been very close partners for over four years, and during that time we’ve continued to not only see technology synergies, but also similarities in culture, approach to customer problems, sales motions, and passion to solve large data challenges. When Thomas Kurian, Google Cloud’s new CEO, approached us to become a cornerstone of his new path forward, a light bulb immediately went off for Lloyd and me. Google Cloud and Looker are a natural fit together.

For customers and partners, it’s important to know that today’s announcement solidifies our commitment, and Google Cloud’s, to multi-cloud. Looker customers can expect continued support of all cloud databases, including Amazon Redshift, Azure SQL, Snowflake, Oracle, Microsoft SQL Server, Teradata, and more. Looker partners can expect to continue to work with us as they have before. And, most importantly, our award-winning support experience, which we call the Department of Customer Love (DCL), will not only continue delivering an exceptional chat support experience but will also be bolstered by the additional resources and global presence of the Google Cloud team.

I believe this is an awesome outcome for employees, our customers, and investors. But it’s only a step along a much longer path. This is not, by any means, the end for Looker, but simply the closing of our first chapter with many more to come. We have only just started together on this mission, and I look forward to continuing to build Looker within Google Cloud -- creating value for thousands of additional customers across the globe.

You can get more details by reading Thomas Kurian’s thoughts on the Google Cloud blog.

<![CDATA[Will the last BI vendor please turn out the lights]]> http://looker.com/blog/will-the-last-bi-vendor-please-turn-out-the-lights http://looker.com/blog/will-the-last-bi-vendor-please-turn-out-the-lights As someone who believes strongly in the potential for business intelligence (BI) to empower people and transform organizations, I have an important concern to share with every like-minded BI pro: we aren’t being disrupted; the disruption has already happened. Despite increasingly powerful dashboards and data exploration tools, traditional approaches to business intelligence are struggling to meet the expectations of the modern data-driven workforce.

Fortunately the core tenets of BI — that combining data makes it more valuable and that people are more powerful with data — are alive and well. What has changed is both the volume and diversity of data, as well as the expectations of end-users who depend on it. The result is that data demands are increasingly being satisfied with tools that simply aren't BI. This should excite us all as it presents a huge opportunity to rethink our data strategies and heighten the impact of our teams. Examining three mega-trends influencing the BI market tells us that the future looks very different and helps point the way forward.

Trend 1. The data driven workforce has arrived.

It should be abundantly clear that everyone is comfortable with data nowadays. It’s embedded in our personal lives in all sorts of ways: the reviews when we shop online, fitness tracking when we jog down the street, movie recommendations when we chill on the couch, social apps feeding us the latest personalized news, and so on. This pervasiveness of data translates to the workplace as well, because everyone needs data to get their job done. Not just traditional analysts. Not just quantitative marketers. Not just growth hackers. Everyone, including factory floor workers, pizza delivery drivers, and even school teachers, gets more powerful with data.

And, although it pains my heart to say it, these modern data consumers don't expect classic BI reports, or even fancy natural-language enabled dashboards. They expect the data to come to them tailored in an interface designed specifically for the task at hand, ideally integrated into a tool they are already familiar with.

Of course we still need great BI tools for our analysts and data jockeys. But we must also be aware that there are a rapidly increasing number of data-enabled workers who view the idea of using dashboards the same way that you or I might view using a rotary phone. And that’s okay. I’ll claim that the future of BI doesn’t look much like a BI application at all, and that you don’t need everyone to become familiar with analyst tools to have an insight-driven business. People aren’t going to go to BI; BI has to go to the people. This is already happening in a big way.

Trend 2. The proliferation of SaaS applications.

SaaS adoption has absolutely exploded. Think for a moment about the number of SaaS applications you've used today. I'm writing this post at 10am and I've already used Namely, Paylocity, DataDog, JIRA, and about 10 other tools — all of which I love, all of which are essentially polished user interfaces on top of rows of data.

There is an awesome purpose-built SaaS app for almost every problem you could encounter at a company. Looker uses about 140 SaaS apps to run our business, and we’re not remotely unique. Mary Meeker’s recent Internet Trends report found that the average enterprise is using about 1,000 SaaS applications, and that number is trending upward.

While this is incredibly powerful from the perspective of the average employee, it’s a nightmare challenge for IT departments and data analysts. We BI pros fundamentally believe in the power of connected data. 1,000 SaaS apps means 1,000 little data silos that should be stitched together. But let's be honest, is that really what's happening in our organizations today?

According to a McKinsey Digital study, only 1 percent of all the data created in the past two years has been analyzed. This should come as no surprise. You aren’t alone if you feel like your data team spends 90% of their time trying to wrangle data and keep the chaos at bay. But no amount of workbook automation, SQL enablement training, or ETL code is capable of wrangling the explosion of data volume and complexity. And there is no going back; this is the world we live in (someone in your company has probably signed up for a new SaaS app while you’ve been reading this blog post). Fortunately, there is hope for data teams in the form of more powerful infrastructure.

Trend 3. Modern data infrastructure.

Modern massively parallel processing (MPP) data warehouses (e.g. Google Cloud BigQuery, Snowflake, Amazon Redshift Spectrum) have made step-function improvements in just the past few years. They can hold immense amounts of data, query it all in seconds, and even do advanced analytics directly in the database, all at a price that is bafflingly cheap compared to last-generation technologies.

Looker is not traditional BI and we took a huge bet on MPP databases early on. We predicted that they would lead to fundamental changes in data infrastructure. In a world where we can dump as much data as we want into one place, query it fast, and pay pennies for the privilege, whole steps in the traditional data engineering workflow can be simplified.

Rather than doing the heavy lifting of creating aggregate tables, massaging data, and data prep outside of the database, we do much of it in-database by transforming data when it's queried. And since queries can be directly executed against the data warehouse, that data is more fresh, more detailed, and more trustworthy. It’s better data for less effort. This significantly improves the lives of data engineers because they can spend less of their time building and maintaining data pipelines and more time doing what they really want: empowering end-users with data-driven experiences.

So what does this all mean for the future of BI?

One thing remains constant: the analyst is still the hero. It is their knowledge and passion for data that will provide the deepest insights to businesses.

But in a world where everyone — not just analysts — depends on tailored data apps to get their work done, where data complexity and volume is increasing at an incredible rate, and where data infrastructure is exponentially more powerful, I believe the analyst has a new set of responsibilities. In addition to being the hero who wrangles and interprets data with their own powerful set of tools, they must also learn to empower others with data in new and unique ways. This means taking a fundamentally different approach to BI:

  • New infrastructure that connects SaaS apps and departmental data with a BI fabric that flexes and scales with the immense demands of the modern data ecosystem.
  • New governance that provides a single source of truth for business data. One which is unified, trusted, fresh, and comprehensive.
  • New purpose-built data experiences that go beyond dashboards, reports, and embedded visualizations to provide actionable insights delivered when and where they’re needed.

In doing so, analysts can unlock a new frontier in which people from all backgrounds, in any type of company or department, including those who traditionally have not worked in data, will be empowered to work smarter. In this world everyone, each in their own way, becomes a data hero.


Curious how these three BI trends are impacting actual companies? Check back here in a few weeks as Nick will detail some real-world scenarios.

<![CDATA[Dresner Wisdom of Crowds Affirms the Evolution of Business Intelligence]]> http://looker.com/blog/dresner-evolution-of-business-intelligence http://looker.com/blog/dresner-evolution-of-business-intelligence The business intelligence (BI) landscape is quite different than when I began my career in the late ’90s. That was the age of centralized, IT-led projects, enterprise data warehouses, and on-premises deployments. Today we see decentralized, business-oriented environments, vast data lakes (and a number of “data swamps”), and a multitude of cloud-based solutions.

This continuous evolution keeps my line of work fun and interesting. But it’s easy to lose perspective of just how much the BI and analytics industry has changed. The Wisdom of Crowds Business Intelligence Market Study, published last week by Dresner Advisory Services, is a helpful guide to track the evolution of the BI space.

In this year’s edition of the study, Dresner identifies a number of unmistakable trends that reaffirm a shift in the way we’re used to thinking about BI.

For example, the report shows that Executive Management and Operations are the leading functions driving BI within an organization. At the same time, the data makes it clear that the influence of the IT organization has steadily declined over the past six years.

In my opinion, this development has positive and negative implications. Empowering the business functions to work with data without support from IT experts has long been a goal of BI. But diminishing the role of IT results in a lack of standards and safeguards, leading to poorly managed and insufficiently governed environments that undermine decision-making.

On the topic of governance, Dresner’s research finds that “success with BI correlates directly and powerfully to an organization’s state of data. Organizations that view data as ‘truth’ are more than 80 percent likely to be ‘successful’.”

The Wisdom of Crowds study identifies another encouraging trend. The audience for BI is expanding. According to the study, executives and middle managers remain the most likely targeted users of BI. However, in 2018 they observed “a significant increase in the targeting of customers, individual contributors, and to a lesser extent, line managers.”

In fact, Dresner noticed that “the least successful BI organizations take executive focus to the extreme”, and that successful organizations are more likely to put a high emphasis on individual contributors.

This is an important development that indicates that the value of BI is transcending the boardroom, reaching people who are not executives, but whose jobs can be significantly enhanced with data.

This also means that we need to reassess how we serve people who need data. Dashboards are ideal for executive management, but may not be as useful for the retail store associate, or the food delivery driver, or the warehouse manager. The evolution of BI means recognizing that many people don’t live in reports and dashboards.

This year’s Wisdom of Crowds report found a strong correlation between success with BI and an organization’s ability to take action on insights. According to Dresner, “more than 70 percent of organizations with closed-loop processes are completely successful all or some of the time.”

This finding further supports the idea that we must go beyond reports and dashboards. Success with BI now involves supporting closed-loop processes that seamlessly integrate with business workflows, reaching people in the applications they use every day.

This kind of shift in culture requires committed leadership. The Dresner report found that “fewer than 15 percent of respondent organizations have a chief data officer and only about 10 percent have a chief analytics officer”, but it also found that organizations with a CDO or CAO fare better than those without, across all objectives/achievements evaluated.

It’s clear that there is an important opportunity for data leaders within organizations to take an enormously influential role in defining and driving data strategy. Data is an asset and should be treated as such. In time, I expect we’ll see the Chief Data/Analytics Officer role become as prominent and necessary as the Chief Financial Officer.

Lastly, no market study would be complete without vendor evaluations. In this year’s report, Looker was once again ranked as an overall leader in Dresner’s Customer Experience and Vendor Credibility models. According to Dresner, Looker “scores significantly above the overall sample for virtually all measures and is best in class for sales business practices, technical support professionalism, and responsiveness. It maintains a perfect recommend score.”

The evolution of business intelligence continues its swift pace. It’s thrilling to think about the possibilities. At Looker, we are proud to lead this evolution and are delighted to be recognized in the Wisdom of Crowds BI Market Study. I invite you to download a copy of the study and to contact us to learn more about Looker and how we can help your organization achieve success with BI.

<![CDATA[Why Your Company Needs a Self-Service BI Tool]]> http://looker.com/blog/why-your-company-needs-self-service-bi-tool http://looker.com/blog/why-your-company-needs-self-service-bi-tool Technology is continually and rapidly changing. For organizations today, this means that in order to stay ahead of the pack and get real-time answers to pressing questions, data management is a necessity.

From sales reports to company or industry trends, analyzing raw data has become central to promoting growth and improving decision-making. Big data is the future, and self-service business intelligence can give your organization data on-demand and the ability to quickly connect analysis to action.

What is Self-Service Business Intelligence?

Self-service business intelligence refers to a set of tools that help companies manage their data, transforming raw numbers into streamlined reports that improve company-wide decision-making. These tools allow companies to promote collaboration across multiple departments and to utilize ad hoc querying.

As a branch of data analytics, self-service business intelligence reporting gives organizations the opportunity to interpret data in a meaningful, easy-to-read way. A flexible BI software program helps users understand data projections and other analyses -- even if they don’t have a background in analytics.

Benefits of a Self-Service BI Tool

If you’ve ever looked at raw data and felt overwhelmed, business intelligence reporting can help you make sense of your data without relying on outside analysts. It empowers you to take vital information and draw smart conclusions from it on-the-fly.

With BI reporting, you can harness the power of data on-demand. Anyone from advisors to C-suite executives to department heads and team members can navigate data with ease, allowing everyone to make sense of the data and make informed decisions that impact the company as a whole.

BI tools give decision-makers, not just trained analysts, the freedom to spend more time actually analyzing data, instead of trying to find ways to organize it in a sensible way. Using self-service BI as a springboard, those making decisions can use data to spot trends and identify areas where the company can improve operations. For example, a retail company can use data to easily create BI reports that pinpoint:

  • Which products are selling
  • What time of day sales reach their peak
  • Which marketing efforts are working, and
  • Where improvements can be made within a company’s workflow.

Challenges of a Self-Service BI Tool

Just as every business is unique, so too are the challenges organizations may face when using a self-service BI solution. Depending on the size and ability of your company, self-service BI may create complications, such as wasted resources or a reduced ability to make fast, informed decisions. This can occur if your company selects a BI toolset that relies too heavily on individual user decision-making. A well-organized data foundation, by contrast, gives you and your team more freedom to take action on your data-based findings.

Additionally, when choosing a self-service BI software suite, it’s important to read the fine print. Not all BI tools will keep your business compliant. While data can help you make informed decisions, you’ll still need to bolster that data with good ol’ fashioned research. A BI platform can help you leverage data in the face of industry-wide changes, but it can’t make the right decisions for your business.

Learn about Looker’s specific capabilities with data governance and overcoming data challenges in our whitepaper, "Best Self-Service BI Strategy For Your Business".

How Do I Know If I Need Self-Service Business Intelligence Reporting?

Large and small companies alike can benefit from business intelligence reporting. The reality is, data is everywhere -- people just need an easier way to make sense of it.

Self-service business intelligence reporting may be right for your company if:

  • You want to improve decision making
  • Data seems to become old too quickly
  • Your business lacks proper data visualization, making it difficult for others within the company to see the impact of your findings
  • Your team has a backlog of ad hoc report requests
  • You have difficulty spotting trends in company operations
  • Executives want to become more data-focused

The Future of Business is Data-Driven

In a data-driven business landscape, it’s vital to be able to transform your data into actionable information at scale. A self-service BI software platform can help gather sales data, trend reports, and more -- giving you the tools you need to compile that information in a crisp, concise way that makes sense of complicated figures. Combining data visualization and the ability to carry out ad hoc querying, self-service BI can help your company move forward with the most up-to-date data available.

Click here to learn more about Looker’s self-service BI capabilities or reach out to our teams to get started with your free trial of the Looker platform!

<![CDATA[Augmenting Snowflake Data Sharing with Looker]]> http://looker.com/blog/augmenting-snowflake-data-sharing-with-looker http://looker.com/blog/augmenting-snowflake-data-sharing-with-looker Businesses today are finding it increasingly valuable to share data beyond their own enterprise data warehouse — both inside and outside their organizations. Some are using data to surface key insights between departments or between companies, while others are using it to help monetize their data. While historically data sharing has involved the copying and/or movement of large data sets, data warehouse vendors like Snowflake have made it easier than ever to share data between organizations or groups of organizations.

While the ability to share data with ease has increased, mechanisms for sharing the accompanying business logic and modes of analysis alongside the data itself are still lacking. This business logic is extremely valuable because it contains a consistent interpretation of the data, designed by your data experts, that simplifies analyses and unifies metrics between data consumers.

In this blog, you’ll get a better understanding of how the Looker project import feature uniquely complements data sharing, allowing for the simultaneous sharing of business logic (and data models) in a way that is governed, scalable, and efficient.

Sharing data in Snowflake

Where data sharing has historically been manual, repetitive, and highly technical, cloud-based data warehouse Snowflake’s sharing is different. Built around a metadata architecture and the decoupling of the storage and compute aspects of a data warehouse, Snowflake makes sharing simple, secure, and nearly instantaneous.

Rather than being copied, data shared with Snowflake is made available in-place. This means it’s always the latest data and never needs updating. Snowflake data sharing also avoids the arduous complexity of data copying and sending copies via FTP that is so familiar to data teams. However, while sharing data is made simpler in this way, analysis of that data can still remain complex.

Enter the need for shared data and business logic.

Sharing data AND business logic

With Looker, sharing governed business logic can be accomplished in a centralized, reusable, and scalable manner. These capabilities paired alongside modern data warehouses like Snowflake change how the sharing of data and business logic can be done, as noted in this great blog by our friends at Hashpath.

One key to sharing business logic alongside data sharing is the use of Looker’s unique project import feature. Because business logic in Looker is codified via LookML, it can be collaborated upon, shared, and controlled. With project import, data sharing is augmented by a simple process for the distribution of data models, metrics, and other key aspects of data analysis. If you’re interested in learning more about the details of project import, check out this blog from Looker’s own Kevin Marr.

Project Import + Snowflake

When combined with Snowflake data sharing, project import is a great way to share data and business logic simultaneously. Allowing data teams to build one-to-one or one-to-many sharing processes facilitates shared model development in a distributed, scalable manner. In addition, project import ensures that everyone who is sharing data is also speaking and leveraging the same language of shared logic, based on the expertise of those who developed that logic.

For data consumers, project import alongside data sharing simplifies the modeling and analysis of data. With project import, Looker users can share models and parts of models with other organizations in a highly flexible manner, allowing for sharing that’s customizable while remaining easy to control and track.

Project import also allows for the use of pre-built data models — or Looker Blocks — to simplify the analysis of a wide range of pre-modeled data. Blocks like those from Heap are already built to simplify the analysis of data shared using Snowflake data sharing. Other existing blocks include models for Salesforce data, Google AdWords, AWS admin data, in addition to blocks for specific analytics use cases such as user behavior analysis by Braze, retail sales forecasting by BigSquid, marketing agency analytics by Improvado, cohorting, forecasting, log analysis, and more. With project import and Looker Blocks, analysts can leverage the work of others to simplify and speed up their own efforts, and data consumers can more easily discover and share insights to fuel their business decisions.

Learn More

Check out our documentation to learn more about project import, the power of Looker + Snowflake, or chat with the Looker Community to discuss your thoughts and questions about blocks, project import, and the sharing of business logic.

<![CDATA[Fostering Diversity, Equity, and Inclusion in Tech: Why I Joined Looker]]> http://looker.com/blog/why-kelli-joined-looker http://looker.com/blog/why-kelli-joined-looker As someone who’s been a Looker user for over three years, I've come to know Looker as an awesome, game-changing product. More recently, I’ve found that Looker the company is filled with genuine, caring, and passionate individuals who naturally treat each other how I've always wanted to be treated at work.

Authenticity & Belonging

Throughout my career journey, I’ve found that great cultures focus on ensuring that their workplaces foster a sense of belonging. This has become increasingly apparent in recent years, with talent flocking to and staying at companies where they can be their true selves without having to separate who they are at home from who they are at work. Additionally, I’ve seen that people gravitate to companies where respect and transparency are shared not only among their peers and managers, but from leadership teams as well.

The Role That DEI Plays

The concepts underneath the diversity, equity, and inclusion umbrella are different, yet all equally important to a company’s culture. Organizations can be strong or weak across any combination of these three areas and will often claim victory when they’ve checked one off the list. But to be truly authentic, I believe tech companies must commit to being strong across diversity, equity, and inclusion together.

Demographic diversity, for example, is necessary, but far from sufficient. If we are not ensuring that our teams are also inclusive and equitable, simply being a demographically diverse team won’t make our experiences any better. As a female who identifies as LGBTQ+ and is now part of a 50/50 gender-balanced executive team at Looker, I am eager to help continue building in this area with the knowledge and learnings I’ve gotten throughout my journey.

From an external customer perspective, smart organizations are also working to mirror the demographics of their workforce to the demographics of their user base. This allows product teams to identify with all different types of personas and create amazing customer experiences. At Looker, this is even more critical, as our users are those in every function across companies, all working towards a shared goal.

I encourage organizations to share their views and their progress externally as often as possible. Committing to publishing and sharing where you are currently, even if it's uncomfortable, helps ensure progress is being made around diversity, equity, and inclusion in tech. I've been fortunate to have been at prior companies like Intuit and Hired that have taken this path, and Looker has already made a tremendous amount of progress across these areas.

Why I Love The Dreaded Role Of “Human Resources”

To support a business optimally through hypergrowth, proactive companies are looking to grow their teams in responsible ways that constantly blend their talent strategies with their business levers. Because of this, our roles within the people organization (aka Human Resources) are very unique. People teams can provide a holistic view of the most optimal ways to intersect the best strategies through rapid evolution.

It’s a role I believe is never complete and can never be mastered — which is part of what makes it so exciting. Just as humans and businesses continue to change and evolve, so do the roles and responsibilities of those in People/HR roles. We must continue to listen to others, be open to learning and being proven wrong, and continue pushing the envelope as we evolve our thinking and practices around talent and DE&I.

Looker Is My Title

I could not be more excited about joining Looker because Lookers already have these mindsets. Thanks to the founding team, the importance of people has been present at Looker since the beginning. As Looker has scaled, company values have helped to solidify a sense of humility, openness, and genuine caring in our offices all around the world. That’s special, and it’s something that Lookers are working hard to protect and grow.

I am personally excited to help connect the work that has and continues today across culture and diversity, equity, and inclusion together through one holistic People & Inclusion strategy that has the ability to expand as we scale. I look forward to Lookers being thought partners and thought leaders within the external tech community, while also continuing to improve, learn and challenge ourselves internally.

The work in promoting and living values across diversity, equity, and inclusion in the tech industry never ends - it just evolves. And it is with that mindset that I am thrilled to move forward and join the Looker team.

<![CDATA[GDPR - One Year On ]]> http://looker.com/blog/gdpr-one-year-on http://looker.com/blog/gdpr-one-year-on GDPR was the four-letter acronym you couldn’t get away from last year. And while the regulatory deadline of 25 May 2018 has come and gone, the impact of GDPR continues to be a key topic of conversation for many CIOs; drives for compliance and the fear of fines and reputational damage remain front of mind.

Despite the GDPR now being in full force, many are still on the journey to compliance. Getting to a place where you’re confident there’s no data sprawl, everyone’s singing from the same data ‘hymn sheet’ and there’s one single source of truth has been — and still is — a significant challenge for many enterprises.

While still a business challenge, GDPR should be viewed as just another market condition, and shouldn’t be seen as a barrier to creating a data-driven culture across an organisation. Rather, it should be positioned as a regulation driving data empowerment, so long as there is tech in place to enable compliant practices. Fostering this environment isn’t without its challenges, though, particularly for enterprises balancing complex data architectures, such as hybrid and multi-cloud environments.

Indeed, in the first nine months since the GDPR was implemented, we’ve already seen over €75m worth of fines issued, and over 144,000 complaints from individuals and 89,000 breach notifications in the European Economic Area, as of May 2019.

Given the number of incidents and fines reported recently, one might consider this number fairly low, compared to some of the media rhetoric in the build-up to May 25, 2018. This may be because many organisations reported significant data breaches just prior to the GDPR deadline — meaning they were only subjected to the maximum fines in place before the legislation came into play (a maximum fine of £500,000 from the Information Commissioner’s Office in the UK, for example). It’s likely also influenced by regulators pursuing straightforward cases as a means to reinforce the importance of data protection compliance. We’re therefore anticipating that the second year of GDPR ‘in action’ will see an uptick in investigations, new guidance, and enforcement measures taken against organisations found not to be compliant.

Maintaining GDPR compliance

For that reason, it has never been more important for organisations to review their data handling and security processes regularly, ensuring policies and processes put in place prior to 25th May 2018 are still being carried out properly.

With access to data storage becoming so inexpensive, easy and accessible in recent years, the instinct has been for businesses to hoard any and all data they can get their hands on. In many cases, this has generated results in the form of new insights that never would have been uncovered otherwise.

However, this has also resulted in businesses housing huge volumes of data, some of which isn’t being used at all, and the rest of which is often duplicated across many locations. This ‘data sprawl’ makes it hard for enterprises to even understand what exactly they’re storing, let alone where it is, how it’s being accessed or how to respond to data subject access or deletion requests. This sprawl can potentially increase risk to the business and to individuals.

Organisations seeking to achieve GDPR compliance may have tackled this issue prior to the deadline, but they’ll need to ensure the right strategies, processes and technologies are in place to maintain this position moving forwards.

Three guiding principles for GDPR compliance

The likes of GDPR, and other privacy regulations on the horizon, aren’t going anywhere — so here are three guiding principles you can adopt as part of your overall data strategy to help drive long-term compliance:

Data governance
Data governance involves the people, processes, and technologies required to create a consistent and proper handling of an organisation’s data across the business. Companies must maintain current documentation of their data supply chain from time of collection to erasure, such as data flow maps and data inventories.

Privacy by design and data retention
Today, the average enterprise relies on over a thousand cloud software applications, from sales CRM to ERP and marketing tech, each of which requires access to real-time data. Yet many lack a platform that can take this data, analyse it in the context of where it resides, and deliver impactful, insightful information in one centralised location, avoiding such data sprawl.

Centralising an organisation’s data is the most efficient way of documenting where data lives and how it is used, while providing the capability to substantially increase data analysis effectiveness and speed.

Monitoring and auditing
Monitoring data and data access is critical to GDPR compliance. This ties to who has access to personal data and why the data has been collected and will be used by your organisation. Once you’ve set controls about who — both internal users and third-party vendors — can have data access and why, you can then monitor for unauthorised access and make sure individuals are not improperly accessing or misusing personal data.

Following these practices will support your ability to deliver regular business insights while maintaining compliance with data protection legislation. While it may sound ambitious, centralised data, a single source of truth, and regulatory compliance are all simultaneously achievable with the right platform in place.

Register for our Webinar on Data in the Age of GDPR for more insight on GDPR and what to expect with data privacy and regulation in 2019.

<![CDATA[The Podium - May 22nd, 2019]]> http://looker.com/blog/the-podium-may-22 http://looker.com/blog/the-podium-may-22 Ohayo gozaimasu!

The Podium is a bit tardy this time around, but with a great reason! I’ve been busy getting adjusted to my new habitat of Tokyo, Japan — Looker’s global push is well underway, and I’m thrilled to get to help spark the Looker Community in a brand new country.

Speaking of global Looker-ing, last week’s first-ever Looker_Hack: London was nothing short of fantastic! Massive kudos to everyone who attended and hacked away at an awesome project. Keep your eyes peeled for videos and write-ups of the projects, and make sure to tune into the next Podium for a special Japanese edition.

Izzy Miller
Community Manager, Looker

Questions and Answers

Accounting for missing data in a model

Ianterrellamx chose a meaty subject for their first post! I won’t try to describe it since it took me a few reads to fully understand the problem, but Pan_Sun and bens hopped right in with some suggestions. Ian came back with a solution using Native Derived Tables, too. I feel smarter just having read the thread :)
Looker Community Discussion — Accounting for missing data in a model

Sorting on multiple fields

This one’s a little simpler, but always good to know. Cristian was hoping to sort on multiple columns — in this case, it’s as simple as shift+clicking on the fields you want to sort by. You can keep clicking to reverse the order too, just like a normal sort.
Looker Community Discussion — Sorting on multiple fields

Community Spotlight - Menashe Hamm

Our guest this week is @menashe. Welcome, Menashe!

What do you do for work?

I’m a data analyst at a DIY platform with tens of millions of monthly visits.

What’s your secret skill or talent?

I can walk and chew gum — at the same time!

What’s the coolest thing you’ve ever done with Looker?

Most of the coolness in Looker is in its features: I just use them. Probably the coolest thing I’ve done — this was for my previous employer — was to plot high-accuracy location readings on a map to see what paths people took.

Most useful Looker feature?

Ephemeral (non-persistent) derived tables with Liquid. Looker’s filters and parameters, coupled with the _in_query, _is_selected, date_start, and date_end Liquid variables, allow me to control the SQL in the table for faster, accurate querying.

Milkshake or ice cream sundae?

Sundae. But do I really need to choose?

What’s your biggest Looker feature request?

I think that would have to be better date filtering. Right now, getting dates between an absolute date and a relative date is difficult or at least unintuitive for the typical user. Users mention this relatively often, which is why I’m prioritizing it as my biggest request. If I can mention another, though, it’s x-axis annotations.

What’s the question I forgot to ask that you’re dying to answer?

Favorite mathematical proof? Quite possibly this one.

Community Knowledge Share

Update Snowflake database with Looker Action

JeffHuth posted a gorgeous tutorial on setting up a database write-back action for Snowflake. It’s fantastically written and a great tutorial on building out any kind of custom action!
Looker Community Discussion — Update snowflake database with Looker Action

Date comparison block

Bencannon built an excellent block for doing period-over-period date comparisons. It’s very well written and works great; really well done, Bencannon. And shout out to Bens for porting it over to MSSQL so quickly!
Looker Community Discussion — Date comparison block

Join the Conversation

There you have it! Head over to the Community to ask any Looker questions to the cool cats that hang out there. Or, share your comments about any specific discussions and tips that have helped you out lately for a chance to get featured in the next Podium.

And as always, send in your questions for the next Spotlight feature (I’m running out of questions, so it’s about to get real middle school truth-or-dare-y up in here if I don’t get some new ones quick!)

<![CDATA[Conversion Funnel: How to Build, Analyze & Optimize]]> http://looker.com/blog/conversion-funnel-optimization http://looker.com/blog/conversion-funnel-optimization What Is A Conversion Funnel?

A conversion funnel, also referred to as a site funnel, is the path to purchase in an eCommerce store or site. In some ways, it can be compared to the traditional marketing funnel, but unlike a traditional funnel, most of the steps occur on your own site.

Generally, there are five steps in a site funnel for any given website. You can add additional steps depending on your particular website and what makes it unique.

Five Steps In A Site Purchase Funnel

  1. Site Visit - A customer arrives at your site
  2. Product View - Customer views a specific product page for more detail
  3. Add To Cart - Customer shows a strong preference for a product by adding it to their cart
  4. Enter Checkout - Customer shows strong intent that they will purchase your product
  5. Purchase - Customer goes through the entire checkout process and completes the transaction.

Many web traffic measurement tools will offer a breakdown of these five steps. For example, in Google Analytics, you can find the site funnel in the Shopping Behavior area of the eCommerce section.

Although five steps may seem simple, a myriad of factors influence the customer journey from the initial site visit to a completed purchase. The reality is, the vast majority of visitors will not complete the transaction. However, there are ways to improve these odds and drive more purchases.

If eCommerce transactions are not your first priority, a similar framework can be applied to whatever your website goals are. Perhaps gaining more sales leads is a higher priority, in which case a typical lead-generation funnel would be:

  1. Site Visit
  2. View content
  3. Request to learn more
  4. Enter contact info

You can create a goal of ‘acquiring leads’ in Google Analytics or another web traffic tool. Adding steps to the goal will allow you to see the goal flow much like a conversion funnel.

The Importance Of A Conversion Funnel Analysis

A conversion funnel analysis is a powerful tool any company can use to improve their business. By evaluating each of the stages mentioned above, you can begin to understand how many of your site visitors are progressing through each of the stages in the funnel. To make your analysis more valuable, you may also want to evaluate the funnel flow based on consumer groups, the device used to visit, or other segments that make sense for your business.

Once you’ve completed your first conversion funnel analysis, you will have a baseline measurement to benchmark against and use to improve and optimize your site experience.
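To make that baseline concrete, here is a minimal sketch in Python of computing step-over-step and overall conversion rates for the five funnel stages described above. Every visitor count below is hypothetical:

```python
# Hypothetical visitor counts for each of the five funnel stages.
funnel = [
    ("Site Visit", 10000),
    ("Product View", 4500),
    ("Add To Cart", 1200),
    ("Enter Checkout", 700),
    ("Purchase", 450),
]

def funnel_report(stages):
    """Return (stage, count, step conversion %, overall conversion %) rows."""
    top = stages[0][1]
    rows = []
    prev = top
    for name, count in stages:
        step_rate = count / prev * 100      # conversion from the previous stage
        overall_rate = count / top * 100    # conversion from the initial visit
        rows.append((name, count, round(step_rate, 1), round(overall_rate, 1)))
        prev = count
    return rows

for name, count, step, overall in funnel_report(funnel):
    print(f"{name:15} {count:6}  step: {step:5.1f}%  overall: {overall:5.1f}%")
```

Comparing these baseline rates over time, or across segments such as device type, shows where the biggest drop-offs occur and where optimization effort is likely to pay off.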

Considering More Than Just Conversions

If looked at in a vacuum, optimizing within the funnel based on a single metric can seem relatively easy. However, increasing one metric could have long-term negative impacts on the other KPIs of your business. While conversions, transactions, and leads are the ultimate goal, there are other factors that are important to understand when driving towards success. Metrics such as revenue per visitor, average order value, and gross margin (and, for leads, lead quality) are useful and important to understand, since they all can affect the outcomes of funnel-related actions.

A great example of this is a hyper-focus on lead gen. You may find a system that consistently improves your lead generation volume, but if those new leads never become customers, the optimization loses its value against the overall goal of generating more customers for the organization.

How Do You Optimize A Conversion Funnel?

Depending on your business and website, there are countless changes that could result in improved performance in your conversion funnel. That being said, you should keep in mind that conversion funnel optimization is an ongoing effort and something that is never truly ‘completed’.

To conduct a conversion funnel analysis, you need to have:

  • a specific area you’d like to focus on improving
  • a hypothesis that could improve the performance of this part of the funnel, and
  • an ability to A/B test the hypothesis

Since not all hypotheses will generate positive results, the A/B testing component is critically important to guide the decisions and strategies that lead to fully-implemented changes.
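To illustrate why the A/B testing component matters, here is a minimal sketch of evaluating an A/B test result with a standard two-proportion z-test. The conversion counts are hypothetical, and a real experiment would also plan sample size and test duration up front:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion experiment.

    Returns the z statistic; |z| > 1.96 is significant at ~95% confidence.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: variant B's new checkout page vs. control A.
z = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests rolling out B; otherwise keep testing
```

Only changes that clear a significance threshold like this should graduate from test to fully-implemented change; everything else is a learning, not a decision.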

Two areas on your site that could improve conversions or increase lead generation are product/service pages and the checkout experience.

  • Product Pages
    Having pages specifically for your products allows you to test hypotheses about elements of the page that affect site conversions and leads, including questions like:
    • Is the Request Info or Add to Cart button prominent on all devices?
    • Are there product reviews or testimonials that could be added to the product page?
    • Is there certain information (e.g., product size, trial details) that could be made available on the page?
    • Are there videos that could be embedded on the page that showcase the product in use?
  • Checkout Experience
    A seamless checkout experience, especially on mobile, is absolutely necessary for improved site conversions. Consider tactics to remove friction and improve conversions, such as:
    • Implementing mobile wallets like Apple Pay
    • Adding alternative payment methods such as PayPal, Venmo, or Affirm financing
    • Assessing whether all the data collected during the checkout process is required or if there are parts that could be eliminated
    • Reducing the number of pages in the checkout flow by removing data collection or redesigning the page

Continuing To Build, Analyze & Optimize Your Conversion Funnel

To recap how to start converting more site visitors:

  1. Identify the goal (transactions, sales leads, etc.)
  2. Measure your baseline for each of the five conversion funnel steps
  3. Brainstorm hypotheses and ways to improve the customer experience
  4. A/B test your hypothesis
  5. Roll out positive test results to all users

Improving your overall site experience and conversion funnel is an ongoing process with many possible solutions. But at the end of the day, it’s important to remember that the leads in your funnel are real human beings, so each part of the conversion funnel you improve means improving a consumer’s experience. By keeping up with consumer preferences and expectations, the changes you make — no matter how big or small — can add up to long-term improvements for your entire organization.

<![CDATA[Steps for Getting Started with Data-Driven Marketing]]> http://looker.com/blog/steps-for-data-driven-marketing http://looker.com/blog/steps-for-data-driven-marketing No matter your business size or industry, doing more of what’s working and less of what’s not is a no-brainer. With data-driven marketing, you can not only uncover insights on your prospect and customer behaviors, but you can further leverage those insights to inform the strategies that impact the success of your business.

Check out these steps for getting started with data-driven marketing at your organization:

Step 1 - Define Your Goals

Setting your team up for success starts with defining the goals and outcomes you want to achieve. Whether you’re looking to increase the number of new customers acquired, increase website visits, or grow marketing-generated revenue, setting specific and measurable goals will help your team map out the strategies that will lead to a successful outcome.

When developing a plan for the entire team, build your strategy around marketing channels in which actions can be attributed back to the goal. Additionally, consider the available budget when mapping out the allocation of resources to help guide effective goal-related actions rather than inefficient activities. For example, a social media campaign produces directly measurable data points, while a billboard campaign’s impact can only be estimated.

Step 2 - Identify The Methods of Tracking

Once the goals have been set, use them to outline the questions you need answers to. Questions like:

  • What should we measure?
  • Do we have that data?
  • If yes — how do we go about accessing it?
  • How do we turn that data into insights we can act on?

This will help you determine the data you need to measure so you can track goal performance over the period of time in which you aim to achieve it.

Step 3 - Analysis, Attribution, and A/B Testing

With goals, metrics, and tracking methods set, campaigns contributing to the overall goals can commence. As data gets collected from these campaigns, your team(s) can begin to analyze the outcomes of their strategies in action.

During this time, the data may reveal an underperforming campaign that is not meeting the predetermined benchmark of success metrics. While removing the dead weight and continuing forward without this activity may seem like the most efficient, cost-effective option, taking time to assess the data may uncover variations to the campaign that save you from needing to remove it at all.

Attribution Assesses What’s Driving Results

Understanding which efforts are actually contributing to the overall goals is an important factor for any successful analytics-driven marketing initiative. To be sure that the methods you use to assign attribution align with the goals of the organization as a whole, keep in mind that:

  1. The attribution model(s) you use should be well understood and easy to communicate by those who use them.
  2. You should use multiple attribution methods only as needed.
  3. An attribution model is a way to set benchmarks and performance standards, and it is not how to track the outcome of every marketing dollar spent.
  4. If the model isn’t making the jobs of people who use it easier, it’s time to reassess.

Some of the different approaches to attribution include:

  • First Touch Attribution
    First touch attribution credits all of the success from an achieved outcome to the first campaign it interacted with. However, this method only provides limited visibility into the other activities that may have affected the positive outcome, leaving your team with little information to strategize from when formulating new questions to ask of the data.

  • Last Touch Attribution
    Last touch attribution assigns all the credit of a successful outcome to the final campaign associated with it. While this method is easy to implement, especially when re-engaging with existing accounts in your database, its long-term effects limit your ability to attribute marketing efforts to new leads in the funnel, which can cause bigger challenges down the road.

  • Multi-touch Attribution
    Much like it sounds, multi-touch attribution spreads the credit across all campaigns. You can do this through linear distribution, which gives each campaign an equal cut of the credit from a successful outcome. Another way to do this is with weighted distribution, in which your teams can assign credit to the involved campaigns based on factors like time and campaign position as it relates to goal conversion.
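The three attribution approaches above can be sketched in a few lines of code. This is a simplified illustration, assuming a hypothetical ordered list of campaign touches that preceded a single conversion:

```python
# Ordered campaign touches for one hypothetical converted lead.
touches = ["paid_search", "social_ad", "email", "retargeting"]

def first_touch(touches):
    """All credit goes to the first campaign the lead interacted with."""
    return {touches[0]: 1.0}

def last_touch(touches):
    """All credit goes to the final campaign before conversion."""
    return {touches[-1]: 1.0}

def linear_multi_touch(touches):
    """Linear distribution: every campaign gets an equal cut of the credit."""
    share = 1.0 / len(touches)
    return {t: share for t in touches}

def weighted_multi_touch(touches, weights):
    """Weighted distribution: credit assigned by position; weights sum to 1."""
    return {t: w for t, w in zip(touches, weights)}

print(first_touch(touches))         # all credit to paid_search
print(last_touch(touches))          # all credit to retargeting
print(linear_multi_touch(touches))  # 25% each
print(weighted_multi_touch(touches, [0.4, 0.1, 0.1, 0.4]))  # U-shaped weighting
```

Running each model over the same set of conversions makes it easy to compare how credit (and therefore budget) would shift under each approach.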

By understanding the various factors contributing to the outcomes needed to achieve the set goals, your teams can begin to iterate on the tactics, tests, or campaigns themselves that are generating the best results. Proper attribution allows you to continue refining the questions you want answers to, leading to more specific measures of success that can be used to drive more positive campaign outcomes.

Test Strategically

A/B tests are experiments based on differentiated campaign variables, which is why they are some of the most fun and most tricky parts of any analytics-driven marketing initiative. While they encourage creative and critical thinking, A/B tests must be developed with the overall learning goal in mind. From colors to wording, page flows, and heatmaps, developing and deploying A/B tests will help your teams generate better insights from variables in a given campaign.

Step 4 - Measure And Share Results

Whether your tests run for days, weeks, or an extended amount of time, assessing the results of these tests and the effect of your sample size on the results will allow your teams to measure the impact of their activities against the strategies tied to the overall goal. Sharing the results of these intentional marketing efforts with the entire organization will help educate and demonstrate the impact data-driven marketing initiatives can have on the business’s bottom line.

Go Further With Data-Driven Marketing

Download the Analytics-Driven Marketing for Action ebook for case-study examples of what marketing analytics in action looks like, or reach out to our teams to learn more about how Looker can help realize data-driven marketing at your organization.

<![CDATA[Stepping Out of The Data Silo - The Evolution of Digital Marketing Metrics and Analytics]]> http://looker.com/blog/evolution-digital-marketing-metrics-analytics http://looker.com/blog/evolution-digital-marketing-metrics-analytics Anyone tackling digital marketing today knows that they are responsible for finding the right balance of art, science, and magic to maximize the value of their marketing spend. It stands to reason, then, that the more insight that can be gained from digital marketing analytics and the more timely these insights can be made available, the better decisions one can make when allocating where, when, and how much — or little — to invest in digital marketing spend.

How Digital Marketing Metrics Were First Measured

Early on in digital marketing, success was measured by the number of eyes that could be attributed to a given ad. If you could get an ad in front of an increasing number of people — or even better, get them to click on it — the campaign was considered a success. Metrics like Number of Impressions, Cost per Thousand Impressions (or CPM), Clicks, and Cost Per Click were among the key metrics used for determining the effectiveness of campaigns and accuracy of marketing KPIs. However, it soon became apparent that there was not always a direct correlation between increasing the number of clicks and impacting the bottom line.
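For reference, those early metrics are simple arithmetic. A quick sketch using hypothetical campaign numbers:

```python
def cpm(spend, impressions):
    """Cost per thousand impressions."""
    return spend / impressions * 1000

def cpc(spend, clicks):
    """Cost per click."""
    return spend / clicks

def ctr(clicks, impressions):
    """Click-through rate, as a percentage."""
    return clicks / impressions * 100

# Hypothetical campaign: $500 spent, 250k impressions, 1,250 clicks.
spend, impressions, clicks = 500.00, 250_000, 1_250
print(f"CPM: ${cpm(spend, impressions):.2f}")   # $2.00
print(f"CPC: ${cpc(spend, clicks):.2f}")        # $0.40
print(f"CTR: {ctr(clicks, impressions):.2f}%")  # 0.50%
```

These numbers describe ad delivery, not business outcomes — which is exactly the gap the rest of this section is about.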

To complicate things further, the number of digital marketing channels continued to increase to include Google Ads, Facebook, Yahoo, YouTube, and so on — each with its own completely separate set of metrics. To get the full picture of the data, digital marketing teams were left with the choice to either work with disparate silos of data, or go through the painful process of manually combining data into spreadsheets.

To make better use of digital marketing data, today’s marketers need to be able to look at more than just impressions and clicks. Marketers need to know what happens after a person clicks on an ad and understand what — if any — value that provides to the business overall.

Digital Marketing Attribution Changes for Sales and Marketing

Ideally, any action a person takes following a click on a digital marketing ad will lead that person further into your business’s funnel. Measuring the impact of that action as you get further into the funnel can vary based on your sales cycle. For a standard business-to-consumer website, this isn’t too difficult to do with the correct attribution. But for more complex sales cycles where a click may result in a lead being created in a CRM, followed by a product trial, then a purchase, etc., tracking attribution can quickly go from easy to confusing.

When done correctly, the ability to measure the success of a campaign throughout the entire customer journey with data attribution is priceless. Realizing this, Google and Salesforce teamed up to give marketers a unified view of their Google Marketing Platform products alongside Salesforce. In addition, Google created products such as GA360 to help minimize siloed views of data for better analysis. Because of these advancements, it’s become possible for marketers to focus on metrics that show them how their digital marketing efforts are affecting the bottom line and driving success for the organization throughout the entire customer journey.

Centralizing Digital Marketing Metrics, Attribution, and Analyses

Another challenge associated with digital marketing is the ability to aggregate data in a way that makes analysis seamless yet still reliable. Many times, moving data can result in lost time or information, neither of which help marketers understand how their efforts are or are not working. Using Google BigQuery is one way to solve this challenge.

By moving data from the Google Marketing Platform into BigQuery, marketers have the ability to perform complex analyses that may not have been possible with other solutions. And when you combine this data with data from mission-critical systems beyond Salesforce, the insights that can be gained are limited only by the imagination. This results in being able to better answer questions such as:

  • Which campaigns result in customers with the highest Customer Support requirements?
  • Which campaigns result in customers with the highest renewal rates?
  • How does the effectiveness of our Google Ad spend compare to what we are spending on other advertising platforms?

Digital Marketing Analytics with Looker

Just as data needs to help tell a full story, metrics need to do more than report — they need to provide insight and drive action.

To empower marketers with the necessary data to drive those actions, Looker’s pre-built applications, made for specific marketing use cases such as Digital Marketing, allow for analyses and reporting across digital marketing sources such as Google, Facebook, Bing, Pinterest, and LinkedIn. With one centralized location to assess active campaigns, conversion rates, and ad spend, marketers can make real-time decisions based on the most effective methods and messages to help improve upon their marketing approach and strategies.

Increase The Impact of your Digital Marketing Efforts

While every organization’s digital marketing strategy is different, by having ready access to the digital marketing metrics that impact the business as a whole, real dollar ROI can be accurately assigned to the campaigns that work and help shed light on those that don’t.

Download the Analytics-Driven Marketing for Action ebook or reach out to our team to learn more about increasing the impact of your digital marketing efforts.

<![CDATA[Data of Thrones: Direwolves, Unique Deaths, and Word Trends... Oh my!]]> http://looker.com/blog/data-of-thrones-vi-more-insights-game-of-thrones http://looker.com/blog/data-of-thrones-vi-more-insights-game-of-thrones Like Hot Pie excitedly sharing his new cooking tips with Arya, we’re eager to share some more insights from the #dataofthrones that have been surfaced in the weeks since introducing the Game of Thrones data dashboard and the Gender of Thrones analysis.

So whether you’re still processing the most recent events of this season, in denial about Game of Thrones already being halfway over, or are here for all-things data — you’re in the right place. Check out these GoT insights and share yours with us on the Community and social media #dataofthrones.

Ghost, where have you been?

Almost out of nowhere, season eight blessed us with the on-screen return of Ghost. This happy but sudden occurrence had us wondering — when was the last time we saw a direwolf on Game of Thrones? We consulted the data and got all the feels.

Explore here

Deaths by house throughout the series

Looker’s own Rufus Holmes shared this cool Look comparing deaths by house season-over-season. If you filter on only houses Baratheon and Stark, there seems to be a pattern of one season with major deaths for house Stark, followed by a season with more deaths in house Baratheon.

Curious — I wonder if that will carry on into the end of the series... what do you think?

Explore here

Common and uncommon word frequency

Community Member Haley Baldwin found some interesting trends associated to word frequency in Game of Thrones. As Haley shared:

“The upward trend of NORTH and DRAGON, peaking in season 7, makes a lot of sense and was interesting to see! Dragons became really important to the plot last season and there was also a lot of chanting “King in the North!” in Winterfell.”

Explore here

Said with honor

We already thought Brienne was one of the most — if not the most — honorable GoT characters, but we consulted the data to confirm. The conclusion — Brienne = just as amazing as we thought.

Explore here

The evolution of one-time-only GoT deaths

In addition to word frequency trends, Haley shared these interesting results about GoT deaths throughout the series:

“When you look at the types of death in Game of Thrones, it’s clear that most named characters died by some form of stabbing. I wanted to find out which types of death had just one victim. In this table you can see that the most unique deaths happened in Season 1 and they’ve been declining since then. The violence definitely hasn’t decreased overall, but it seems that named characters have become less likely to die in unique ways.”

Explore here

Who will (or already did) battle karma before the end of Game of Thrones?

Right before the season eight premiere, Looker’s Sloane Ansell whipped up this word cloud of the remaining GoT characters who’ve killed throughout the series yet still had not been cut down themselves. If you made bets or predictions for the final season, I hope you had a chance to consult this data...

Explore here

Explore the Game of Thrones data

Ready to dive in yourself? Start exploring our Game of Thrones dashboard and be sure to share what you find!

Disclaimer: Game of Thrones belongs to HBO and is not affiliated with Looker in any way.