Looker Blog (discover.looker.com)

Powering the Greater Good with Better Data
http://looker.com/blog/powering-the-greater-good-with-better-data

Looker has been embedded in its community since the very beginning. That’s actually why we’re headquartered in Santa Cruz, CA. Our founder and CTO, Lloyd Tabb, was a long-time Santa Cruz resident who’d raised his family there and taught middle school in the community. So when he was starting his own company, he didn’t want to start it “over the hill” in Silicon Valley, but right in his own community.

And as Looker has expanded to offices in New York, Dublin, London, San Francisco and beyond, Lookers have found ways to give back to the communities they live and work in. From packing backpacks for school children who are living in homeless shelters in NY to cleaning up the river in Santa Cruz and learning CPR in Dublin, Lookers are passionate about contributing back to their communities.

But as the company has grown, we’ve been thinking about how we can give back in a more formal way, and we’ve asked ourselves what we can uniquely contribute. Not surprisingly, we realized that Looker is uniquely qualified to provide reliable, self-service access to data. And while not every nonprofit needs that, plenty of charitable organizations are struggling with the same data chaos as their for-profit peers.

That’s why we’re so excited to announce Looker for Good, our way of giving back to our communities and the groups that enrich them.

Looker Pledges 1%

The first component of Looker for Good is that Looker is joining Pledge 1%. We’re pledging 1% of our product to charity, as well as the employee time needed to help the charities that receive Looker be successful.

We’re thrilled to announce that the first recipient, Accion, is already standing up their Looker deployment and getting value from it. Accion is an amazing organization that’s been providing microfinance loans to small businesses all across America since 1991, and we’re so excited to help them get more value from their data with Looker.

And we are already looking for the next nonprofits to give Looker to, so if you work for a nonprofit or know of one that could benefit from free Looker, we’d love for you to nominate them here.

Nonprofit Discounts

The next component of Looker for Good is focused on making Looker accessible to all nonprofits. We’ve announced significant discounts off Looker’s list price for every nonprofit, whether they need a small deployment or a huge one. You can see all the details here.

Training Future Analysts

The final part of Looker for Good is that we’re offering free Looker deployments to educators who are interested in using Looker to train the next generation of analysts. If you’re an educator who’s interested, please don’t hesitate to reach out so we can discuss your use case.

We’re so proud of the work that Lookers are already doing to care for their communities. And we can’t wait to build out Looker for Good as a new channel for using our unique abilities to power the greater good with better data.

2018-10-11T00:00:00-07:00
The Platform for Data
http://looker.com/blog/platform-for-data

For decades organizations have been using data to better understand trends or events that are happening in and around their business. Today, business systems are generating far more actionable data than ever before – data that becomes even more valuable when intelligently integrated together. But the explosion of SaaS applications has made things far more complex. Organizations that previously had under 100 applications often have 10x that number today. Having a SaaS solution for every problem is great, but each of those solutions brings more data.

Unfortunately, the analytic data toolchain hasn’t kept up. It’s broken. The mess of point “self-service” tools that were designed to operate on narrow sets of siloed data has been cobbled together to create Frankenstacks – technology science projects that are painful to operate and nearly impossible to maintain. And every new data project often means recreating those complicated stacks from scratch. The Business Intelligence promise of “self-service, but only for a single piece of the puzzle” has failed us.

It’s time to rethink how data projects get done. On one hand, we have a wealth of valuable business data being generated by every application and system in our companies. On the other hand, we have no common way to create value out of that data. We’re constantly starting from scratch. What’s required is a common surface for that data that can be quickly shaped into more specific data applications that meet our core data needs. For example, the Revenue team needs a data application to help them understand price optimization. The Marketing team needs a data application to help them understand attribution and focus on ad spend. The IT team needs data applications to help them understand the myriad of event data that their systems generate.

Creating a common surface on which to empower these organizations is the key. To modernize the data stack – and greatly simplify the data supply chain to build value out of data in our organizations more quickly – a new platform for data is required.

The Platform for Data.

The idea of a platform is proven – build the underlying infrastructure pieces and make it extensible with application development components to allow the creation of "end applications" to solve business problems more quickly. It’s time to bring this idea to the data world.

Our industry has historically moved away from platforms in favor of one-size-fits-all BI dashboard tools. We now have a mess of point tools that make it harder to solve end-user business problems. Generic applications for data problems are not nearly as valuable as a common surface for data that can be formed and melded together to solve dozens of problems and is more closely aligned to how different business teams work.

The explosion in data volume and complexity has made a Platform for Data even more critical.

The more data you have, the more valuable it gets if it can be integrated across all business operations. Now that the movement of data to the cloud is the norm rather than the exception, we have untold amounts of data sitting in warehouses and lakes waiting to be piped through our daily workflows. BI grew up around the idea of small data extractions, but now we have new fast databases that give us the opportunity to solve much bigger, broader problems.

We’re also seeing the modern workforce hungry for data in areas traditionally ignored by point tools. The movement away from generic dashboards is well underway. In the past sales data was for the Sales team, systems data was for the Operations team, and ad data was for the Marketing team. Now, we see that bringing the sales data together with marketing data has huge value... and that's just the start. Companies want a more complete and integrated modern solution that goes beyond BI and includes specific applications like Customer Success, Marketing Attribution and Event Monitoring. It’s a way to work in data and operationalize your business around accurate, current data.

So, Looker started to build this platform. We’re organizing it around three big ideas:

Core Services: If you want to retain flexibility, you have to assume you’ll need to be looking at all of the raw source data, not just subsets for a specific application. Then you need to bring together and rationalize the hodgepodge of data products that are needed to make a solution: data preparation, data integration, embellishment, governance, security, caching, visualization, access, etc. Take what currently requires five different products to accomplish and build it into a single layer.

Extensibility: Integration is the key – how do X and Y come together to mean Z? Take those core services and put them together in an open, web-native architecture. Then build development environments around that integration, develop a language, and make every part extensible and accessible to other systems. The goal needs to be 110% API coverage – meaning all of the core service functionality, and more, must be available through these APIs.

Applications: In the world of SaaS you can't just build the platform, you have to build the first big applications. For Looker our first big application was BI, but that quickly evolved to address more specific functions, like Marketing, Event Analytics or Customer Success. To achieve true adoption of the platform, we also knew we had to embrace the people who will build third party applications on the platform and give them tools they want to use to deliver business value to their end users. In the world of SaaS, time-to-value is everything, so providing the applications on top is critical.

Our vision is simple: to build a platform for data that easily integrates all of a business's data and then allows it to be melded to specific work processes in a way where users can do more with it and solve higher-value problems. It gives today’s workforce what they want: the ability to work in data rather than just viewing it. At JOIN today we introduced Looker 6 – the next evolution of the Platform for Data. When you build a platform, you don't even know what people are going to build on it later. Five years from now, we’ll be surprised at what people will create on the platform. Today is the beginning of boundless possibilities.

2018-10-10T06:01:00-07:00
6 Reasons To Love Looker 6
http://looker.com/blog/looker-6

This week, we are proud to announce the launch of Looker 6.

Looker 6 takes the analytics platform to a new level with a robust set of new features, greater extensibility than ever before, and an application-focused approach designed to provide users with more value, faster.

Whether you’re a Looker customer or considering the benefits of a self-service analytics platform for your organization, here’s a brief look at some of the reasons to be excited about Looker 6.

1. Advanced Analyst and Model Developer Tools.

Analysts and developers are going to love how the Looker 6 platform now builds upon the core services with a suite of open, web-native features designed to make building impactful data applications easier. And why should software developers have all the fun? Looker 6 provides a model development environment that includes branching, folders, version control, code sharing, and code validation capabilities.

2. Customizability and Extensibility.

Looker 6 includes version 3.1 of the Looker API and a number of extensibility features that allow analysts to better meet the growing demands of data users. With Looker 6, embedded analytics can be customized to better match the look and feel users expect. Custom visualizations are more sophisticated, helping data teams build easier, more intuitive dashboards so users can find valuable new insights, faster. And thanks to our expanded network of technology partners, Looker 6 can be seamlessly integrated with tools such as AI/ML services.

3. Looker Applications.

Applications offer easy plug-and-play analytics for specific use cases, making it easy to tap into the hidden value of your data. Structured enough to help users perform day-to-day tasks, applications sit within the larger Looker platform, supporting strategic cross-functional analytic needs. And announced with Looker 6 are public beta programs for applications, focused on digital marketing and web analytics.

4. Content Discovery and Easier Data Exploration.

Along with improved formatting options and greater flexibility for report scheduling, Looker 6 also includes custom fields, giving all types of users more powerful self-service exploration capabilities for lightweight ad-hoc data analysis. And you can help users explore even sensitive data in Looker; the Looker 6 platform is SOC 2 Type 2 compliant.

5. Advanced Administrative Controls.

If you’re the admin trying to control costs or improve governance, Looker 6 has a new way to understand how, when, and by whom your instance is being used. Because this data is pre-modeled, you can use it the same way you use any data in Looker -- not just for cost control or governance, but to encourage a data-driven approach to business among your users based on their habits.

6. Encouraged Focus on Analytics Value.

With the launch of Looker 6, we are continuing to improve upon our already outstanding customer satisfaction scores. With the support, services, and other assistance we provide to our customers, owning and using Looker is now easier than ever. And as the ecosystem of Looker consulting partners continues to grow, we’re ready to help you with every aspect of Looker 6 deployment, management, and customization.

We’re just getting warmed up…

The launch of Looker 6 doesn’t stop here with these 6 feature sets. In 2019, you’ll see local language versions of Looker 6, fully integrated workflows connected to 3rd party applications and platforms, more ways to take action on data from within Looker, and many more tools built on the Looker platform. Get all the details of what’s new with Looker 6 or see it for yourself by requesting a demo.

2018-10-10T06:00:00-07:00
Looker Achieves SOC 2 Type 2 Compliance
http://looker.com/blog/looker-achieves-soc-2-type-2-compliance

Looker remains committed to continually improving its security and compliance practice. In September of 2018, our Service Organization Control 2 Type 2 Report for the Looker Cloud Hosted Data Platform became available for customers and prospects. The SOC 2 Type 2 assessment was conducted by independent auditors, The Cadence Group, who specialize in compliance across multiple industries.

The Type 2 report addresses service organization security controls that relate to operations and compliance, as outlined by the AICPA’s Trust Services criteria. The report includes management’s description of Looker’s trust services and controls, as well as Cadence’s opinion of the suitability of Looker’s system design and the operating effectiveness of the controls, in relation to availability, security, and confidentiality.

While our SOC 2 Type 1 report, released in February of 2018, was a "test of design," showing that specific security controls were in place at a specific date in time, our Type 2 report is a much more rigorous "test of operating effectiveness," evaluated over a period of six months. A company that has achieved SOC 2 Type 2 certification has proven that its system is designed to keep its clients’ sensitive data secure, and that the controls it relies on are operating effectively.

By implementing the data security controls necessary to achieve SOC 2 Type 2 certification, Looker continues to build on the trust that customers and prospects have in the Looker Data Platform. To provide further reassurance that the Looker platform is secure and highly available and that customer data remains confidential, we will renew our SOC 2 Type 2 certification every six months, beginning in Spring, 2019.

In addition to our ongoing SOC 2 efforts, Looker's compliance team is continually pursuing other opportunities to make the hosted Looker platform secure and trustworthy, including pursuing and achieving ISO 27001 compliance, self-assessing against the Cloud Security Alliance's cloud security assurance program, demonstrating that Looker handles customer data in accordance with the HIPAA data security standards and the PCI-DSS (Payment Card Industry Data Security Standard), and ensuring that the Looker platform is aligned with GDPR data privacy obligations.

2018-10-08T06:00:00-07:00
How To Use Predictive Analytics and Forecasting To Save Your Company Money
http://looker.com/blog/how-to-save-money-with-predictive-analytics-forecasting

Predictive analytics and forecasting can save your company considerable amounts of money, especially when it comes to sales forecasting. For B2B companies, accurate sales forecasts can be the competitive advantage that keeps the business running smoothly, while inaccurate ones can be quite costly. By using forecasting analytics that leverage the latest technology and are grounded in data, businesses can significantly reduce costs.

Using Forecasting Analytics With Your Data

To help you get started, here are the steps you should take when setting up a predictive analytics and forecasting model for your sales pipeline (a minimal code sketch follows the list):

  1. Align with your business operations team on the outcome you’d like to optimize for. In this case, closing deals.
  2. Define the data sets to use as input to the model.
  3. Centralize all the relevant data in a single place.
  4. Define a training data set that includes ample successful and unsuccessful outcomes.
  5. Define a small testing data set to evaluate the accuracy of the model you create. Make sure the testing set is different from the training set.
  6. Apply machine learning, statistical, clustering, and other predictive analytics methods.
  7. Run the model on the testing data set and compare its predictions to actual results to judge the accuracy of your model.
  8. Review results with major stakeholders and incorporate the forecasting model into business operations.
  9. Continue to train and monitor the model.
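
Here is a minimal sketch of steps 4 through 7 using pandas and scikit-learn. The file name, column names, and feature list are illustrative assumptions rather than anything from this post, and a real pipeline would need feature engineering and richer evaluation.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Step 3: all relevant data centralized in one place (here, a single file).
deals = pd.read_csv("opportunities.csv")

features = ["deal_size", "days_in_stage", "num_meetings"]  # hypothetical numeric columns
X, y = deals[features], deals["won"]  # "won" = 1 for closed-won, 0 for lost

# Steps 4-5: separate training and testing sets, each containing both outcomes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Step 6: fit a simple predictive model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Step 7: compare predictions against actual outcomes on held-out data.
print("Forecast accuracy:", accuracy_score(y_test, model.predict(X_test)))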

predictive_analytics

What Opportunities Can Predictive Analytics and Forecasting Find?

If you follow the steps above, you should be able to confidently create sales forecasts based on data and be able to compare them with other sales forecasting techniques. By leveraging data and predictive analytics methods, you can dramatically increase the accuracy of sales forecasts, and can even approximate how accurate the model is. And because it’s all based on data, you can feel confident sharing your forecasts and demonstrating your process for getting there.

Predictive analytics and forecasting is a big step forward for many companies, but for some, it only scratches the surface. Similar strategies can be utilized to:

  • Help your Sales team identify and focus on the best opportunities
  • Highlight the optimal times for an Account Executive to engage with a prospect
  • Inform Sales which content is best suited for specific prospects

With so much potential to save money and create new opportunities, it’s clear why predictive analytics is gaining more and more traction in the market today.

2018-09-25T06:00:00-07:00
Actionable Call Measurement Metrics for Inside Sales
http://looker.com/blog/actionable-call-measurement-metrics

Performance optimization in the world of inside sales and call analytics often focuses on vanity metrics. Vanity metrics are surface-level data points that are good for comparison, but don't necessarily lead to actionable insights. For example, reports on the number of calls made or connected can give you a good sense of rep performance, but provide little insight into why the numbers may be the way they are. You may notice a lower call volume due to seasonality or tracking error. By focusing on why the number of calls seems low or high, you’re able to make better-informed decisions on how to optimize or bring awareness to the sales process.

Using Looker, our inside sales team is able to view and analyze data beyond vanity metrics, and instead focus on actionable metrics – those that drive behavior, growth, and learning.

What Are Call Analytics?

Call analytics encompass a set of analyses used to optimize how Sales reps process leads. Marketing and Demand Generation go through painstaking effort to produce leads, and having an analytical process in place ensures visibility and accountability between Sales and Marketing. Similarly, these systems help Sales Managers create expectations for their reps, and provide insight into whether and how those expectations are being met.

Here are some phone call tracking metrics and call analytics dashboards we use to evaluate our Inbound Sales Development (SDR) team, starting with simple call counts for comparison and moving into more complex efficiency rates that impact behavior and sales effectiveness.

Our Inbound team is primarily responsible for responding to Marketing-generated leads, qualifying them, and routing them to our Sales team to initiate the sales cycle. Let’s take a look at the top call metrics we use to report on sales effectiveness, and how we can make those metrics more actionable.

Number of Calls Made

metrics

Number of calls made is the most common metric I hear when discussing inside sales. Many sales managers think that putting a goal around this metric is sufficient to set their team up for success. The thinking is that it provides an activity number to work toward, and can be a good lens into the level of effort individual reps are putting forth.

But stopping there is a mistake. While it's necessary to set expectations, this metric doesn't tell you much about efficacy and doesn’t give any opportunities for true optimization. Is the rep actually speaking to anyone, or are they just logging sales calls? Is SDR 1 really more productive, or are they burning themselves out?

Managers may be impressed by SDR 1's high call volume, and direct the rest of their team to achieve similar results. But more analysis is required to truly optimize the sales team’s process and performance.

Number of Calls Connected

metrics

Here's the same view of calls made, adding in how many of those calls are connected – that is, how many of each rep’s phone calls result in conversations with prospects. A rep’s goal is to have as many conversations as possible to initiate the sales cycle, and this metric, together with calls made, starts to give a fuller picture of each rep’s efforts.

A common characteristic of vanity metrics is that they can be “gamed,” created or influenced in ways that are disingenuous. Calls made falls into this category, since reps could theoretically log calls after hours when there’s a very low chance their prospect picks up. The measure of connected calls is much tougher to game, since it requires a conversation occurring between the rep and the prospect, and therefore gives a better level of insight into a rep’s effectiveness.

We're starting to get a bit more info, but this alone is still not super actionable at a glance. One quick addition to this dashboard will start to point us in the right direction.

Connected Call Rate

metrics

Connected Rate gives a good apples to apples comparison of efficiency. This is a simple calculation of Connected Calls ÷ Calls made, and provides a more complete view into each rep’s process.
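
As a concrete, purely illustrative example of the calculation, the sketch below computes Connected Call Rate per rep from two counts; the rep names and numbers are made up, not taken from the dashboards shown here.

# Connected Call Rate = Connected Calls ÷ Calls Made, per rep.
# The counts below are illustrative, not from the dashboards in this post.
calls_made = {"SDR 1": 520, "SDR 5": 310, "SDR 7": 295}
calls_connected = {"SDR 1": 58, "SDR 5": 61, "SDR 7": 54}

for rep, made in calls_made.items():
    rate = calls_connected[rep] / made
    print(f"{rep}: {rate:.1%} connected call rate")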

What are SDR 5 and SDR 7 doing differently to connect with more people? You can interview reps for anecdotal thoughts, or do further quantitative analysis on other factors that could affect this rate – for example, what day and time are reps making calls? Leadership should tease out those best practices and share them with the rest of the team to increase everyone's efficiency.

SDR 1 is making the most calls, but could benefit from 5 & 7's strategies around connecting with more people. Similarly, 5 & 7 have something to learn about increasing call volume. Action items exist for each rep here, as well as for inside sales leadership.

Now we can shift a bit to focus on results. Typically, the main indicator of success for reps is the number of Meetings they book for the Sales team. But only focusing on results lacks a certain amount of context – just like only focusing on calls made – so I'll skip that view and compare results alongside activities.

Calls Connected & Meetings Booked

metrics

Above is a dashboard of Connected Calls next to a count of Meetings Booked. Again, this is helpful for comparison purposes, but is not ideal to take action from.

The addition of another metric to this dashboard makes a major difference.

Meeting Booked Rate from Connected Calls

metrics

The green line shows Meeting Booked Rate, which is the percent of Connected Calls that led to Meetings Booked. This is created by dividing Meeting Booked count by Connected Call count. Just like the report with Call Connected Rate, we now have a good point of reference to gauge sales efficiency.

SDR 1 and SDR 8 have a similar quantity of Meetings Booked, but SDR 8 is twice as efficient. What can SDR 1 learn from 8's phone techniques to improve their meetings booked rate, and what can 8 learn about increasing call quantity? This kind of analysis allows team members to learn from each other's personal processes and consequently improves the output of the team.

Inside sales leadership should use this information to inspire their coaching and training sessions. Sit in on call sessions to understand strategies and techniques, and be sure that the team continues to build on the existing foundation and improve the process.

Meeting Booked Rate from Calls

metrics

Back to the Calls metrics – here's a look at the overall percent of calls that turn into meetings (Meetings Booked ÷ Calls Made). This can be useful for a Manager to set or reset expectations about how many calls reps should be making to hit their meeting goals, and is another way to evaluate rep efficiency.

This gives similar information as the Meeting Booked Rate from Connected Calls Dashboard in a slightly different view. These percents can be a bit tough to interpret, so flipping them around may lead to more actionable insights.

Calls per Meeting & Connected Calls per Meeting

metrics

In this dashboard, we see that SDR 1 is making 29 calls for every meeting booked, while SDR 5 is making 16.9 calls per meeting booked.

Similarly, SDR 1 gets a meeting from every 5.3 people he speaks with, while SDR 5 gets a meeting from every 4 people she speaks with.

This is a good way for reps to understand their efficiency and sales effectiveness, and it helps dictate their day-to-day planning and forecasting. If SDR 2 has a goal of booking 2 Meetings per day, she can set activity targets of ~56 calls and ~10 connections per day to get there.
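
The arithmetic behind those activity targets is simple; here is a small sketch of working backward from a meetings goal, using illustrative per-meeting ratios rather than SDR 2's actual figures.

# Working backward from a meetings goal to daily activity targets.
# The per-meeting ratios are illustrative, not SDR 2's actual numbers.
meetings_goal_per_day = 2
calls_per_meeting = 28       # historical Calls Made ÷ Meetings Booked
connections_per_meeting = 5  # historical Connected Calls ÷ Meetings Booked

daily_call_target = meetings_goal_per_day * calls_per_meeting              # ~56 calls
daily_connection_target = meetings_goal_per_day * connections_per_meeting  # ~10 connections
print(daily_call_target, daily_connection_target)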

Taken at a team-wide aggregate level (below), this is great analysis for the inside sales team to do headcount and lead flow planning. Starting with the goals around Meetings Booked, management can make informed decisions on how many reps to staff based on the inside sales goals. This kind of predictability is essential for scale and growth.

metrics

It's great to monitor effort and performance through the lens of data, and it's even better to utilize your call measurement and tracking analysis to influence action and behavior.

Challenge yourself and your team to find and focus on these actionable inside sales metrics. This removes a lot of the guesswork from staffing and workflows, and ensures that individuals, managers, and teams know how they can continue to improve and what they need to do to hit their goals.

2018-09-20T07:00:00-07:00
JOIN 2018 Hackathon
http://looker.com/blog/join-2018-hackathon

Now more than ever, the impact developers are having on organizations and product users is inspiring future innovations across the tech community. At Looker, I am continually excited by the vibrant, growing community of smart, accomplished analysts and developers who are using Looker to make analytics great for their own users and organizations. The scope and diversity of their accomplishments - enhancing and improving Looker - are a significant part of why Looker has gained popularity in the tech industry.

In response to the growing network of developers and analysts using Looker, I am thrilled to announce the JOIN 2018 Hackathon event. In conjunction with JOIN 2018, this hackathon is a place to meet fellow developers, Looker Engineers, and members of the Looker partner community to build new, interesting things and expand your Looker skills.

Come spend time with fellow developers building products, models, and dashboards on the Looker platform for a chance to win “Best Hack” and “Fan Favorite” awards. We’ll have technical representatives from Looker along with our technology partners to assist with technical questions, as well as open Looker instances and some pre-modeled data sets to work off of.

A few additional notes before you attend:

  • Please bring your own laptop
  • You’re welcome to use your own data + Looker instance for the event
  • You’re encouraged to begin building beforehand, so as to make the most of the day with your peers

Join us on Tuesday, October 9th from 2:00pm-8:00pm at the San Francisco Palace of Fine Arts (3301 Lyon St., San Francisco, CA 94123) for the Hackathon event!

Sign up here, and I hope to see you there!

2018-09-19T08:00:00-07:00
An Analyst’s Guide to Data Virtualization
http://looker.com/blog/an-analysts-guide-to-data-virtualization

Imagine that everything you wrote had to be written on a typewriter. Any typos meant getting out the Wite-Out, and any larger edits meant retyping the whole page. Now compare that to our reality today, where we have word processors that allow us to edit and update our work instantly. By virtualizing the process of typing, we unlocked much more efficiency and solved huge editing and cleaning pain points. What word processors have done for the written word, data virtualization does for the world of data.

In Why Data Virtualization is an Analytics Game Changer, we provide an introduction to data virtualization, share the key pains it alleviates for data teams and analysts, and take a look at the data virtualization landscape. While the white paper takes a deep dive into several real world examples where data virtualization alleviates problems, this post aims to focus on where data virtualization fits into your technology stack and how data flows through it.

Where Data Virtualization Fits In

Data goes on a journey as it passes through your tech stack. Throughout this journey, data is joined and transformed in different ways. One way data virtualization can help you is to virtualize the data before you send it to your warehouse. This opens up the power of Virtual Events, in which you can define, erase, and update events retroactively without touching your data and codebase. For example, you may add a new campaign landing page to your website to drive new signups. With Virtual Events, you can define those signups to be a “campaign signup” event a week or two later to see all the data for that event. If the campaign becomes less important in later months and you want to make those campaign-specific signups part of “all signups” instead, you can go in and redefine that “campaign signup” event to be a general “signup” event, without any lost data or impenetrable naming conventions.

After you collect and virtualize behavioral data on your website, you can push that data into your data warehouse. Then in your business intelligence tool, you can virtualize your customer behavioral data, along with the rest of the data you’ve replicated in your warehouse. With modeled data in your warehouse, you can generate more powerful visualizations, use it for performing ad hoc analysis, feed it to advanced learning tools, and get powerful action and data out of the warehouse.

These types of virtualization work extremely well in tandem, and virtualization at different layers quite often complements what happens at the others. For example, at Heap we see some of our customers:

  • Use Heap for collecting and virtualizing behavioral data on their website
  • Push Heap data downstream into their data warehouse where it sits alongside other sources of data
  • Use SQL Views to further model that data in their warehouse for specific analytical applications
  • Use Looker as a visualization and insights layer, and use LookML to model their warehouse data to make it business-focused for use within Looker and other applications

Data virtualization is a technological trend that every analyst should be aware of. There are many benefits to virtualizing data, and it can make a big impact to an analyst’s job at several different parts of the data flow within their tech stack.

To get more in depth with data virtualization and why it’s an analytics game changer, check out Heap’s whitepaper.

2018-09-19T07:00:00-07:00
How to Measure Event ROI & Impact
http://looker.com/blog/how-to-measure-event-roi-and-impact

Analyzing offline marketing activities is crucial to understanding the return on investment (ROI) and impact of these tactics. In particular, offline events like trade shows and road shows are notoriously difficult to track, measure, and calculate ROI for. Doing so requires:

  • Extensive tracking of online and offline marketing efforts
  • Mapping and attributing these efforts to offline conversions
  • Tying in qualitative data to quantitative data to understand what & why

And, last but not least, we also need to pare down the plethora of data available to the few key performance indicators (KPIs) that measure the influence of offline events to multiple points along the buyer’s journey.

So when we decided to take JOIN on the road for the first time, we knew we needed to create a killer event management dashboard. For JOIN The Tour, we wanted to make sure we were on the right track. As a road show that required a ton of resources from a huge team of people, we had to be able to justify JOIN The Tour as a worthwhile investment for the company.

Our goal was to understand the data so that we could determine which tactics to pivot and optimize and which tactics to continue to expand.

Ultimately, what we found gave us insight into how to continue driving the value of offline events in our overall marketing strategy, how to understand the behavior and habits of different regions, and how to plan and market future events.

Step 1: Determine offline events success metrics and KPIs

roi_impact

While we wanted to know just about everything about how these events were performing, the key to getting the insights we needed required the right metrics.

Before we started creating the event management dashboard, we first needed to determine what success would look like for an event.

  • After much discussion within the team and with stakeholders, we focused on:
    • Number of registrants
    • Number of attendees
    • Number of no-shows
    • Attrition rate (the percentage of registrants who did not attend; see the sketch after this list)
    • Event ROI
  • Audience was also important to understanding the success of an offline event. To analyze audiences, we decided on geography of the event and size of company as our segments.
  • To help us better calculate event ROI, we used pipeline created or influenced to give us a dollar value for offline events.
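
Here is a small, purely illustrative sketch of how those success metrics roll up per city; the city names, counts, and costs are made up, and the pipeline-based return figure is a simplification of the attribution discussion later in this post.

# Per-city rollup of the event success metrics listed above (illustrative numbers).
events = {
    "New York": {"registrants": 420, "attendees": 287, "cost": 35000, "pipeline": 190000},
    "London":   {"registrants": 310, "attendees": 224, "cost": 28000, "pipeline": 120000},
}

for city, e in events.items():
    no_shows = e["registrants"] - e["attendees"]
    attrition_rate = no_shows / e["registrants"]   # share of registrants who did not attend
    roi = (e["pipeline"] - e["cost"]) / e["cost"]  # simple pipeline-based return multiple
    print(f"{city}: {no_shows} no-shows, {attrition_rate:.0%} attrition, {roi:.1f}x return")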

Step 2: Event success metrics to include in your dashboard

Calculate event registration by city

roi_impact

Choosing the right place for your event is as important as the event itself. The goal of JOIN The Tour was to bring the JOIN conference experience to cities all around the world. However, we had to face the challenge of making sure there was interest to attend an event like ours in the cities we chose.

Registration volume and each city’s share of registrations were the event metrics we chose to measure demand. These metrics also helped us plan resources, as cities with a high volume of registrations had a higher likelihood of requiring more staff, budget, and time to plan and execute.

Calculate event registration by company size

roi_impact

Along with geography, company size played a key role in how we planned and staffed the event. We wanted to make sure that each attendee could talk to someone from Looker who was intimately familiar with the unique data challenges and needs specific to their company’s size and industry.

Monitoring registration by company size also allowed us to plan how to best staff each city’s event, so that we could be sure each attendee could connect with a Looker who could best address their needs.

Calculate event attendance and attrition by city

roi_impact

Since it was the first time we did an event series of this magnitude, we wanted to make sure we kept a close eye on attendance in each of the JOIN The Tour cities. After the day of the event, we closely monitored the number of attendees vs. non-attendees and attrition rate to determine an event’s success.

This report gave us insight into how likely registrants are to attend another event. Once we layered in qualitative data specific to the culture of each city, we were able to come up with plans for running a truly successful event in each city and improving the chances of a strong turnout.

Calculate pipeline attribution models for event ROI

roi_impact

Finally, we get to the holy grail of offline event marketing: event ROI. Here at Looker, we examine a tactic’s impact through both first-touch and multi-touch attribution. Some tactics have a much stronger influence when they are the first interaction someone has with Looker (first-touch), while other tactics are better suited for those that may already be familiar with Looker (multi-touch).

To understand the ROI of offline events immediately after they occurred, we analyzed the potential pipeline generated through these events by comparing first-touch attribution against multi-touch attribution. This gave us insight on how best to tailor the event experience based on the likelihood of whether or not someone had interacted with Looker previously.
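
To make the comparison concrete, here is a small, hypothetical sketch of first-touch versus evenly weighted multi-touch attribution for a single opportunity; the touchpoints and dollar amount are invented, and real multi-touch models often use more sophisticated weightings.

# First-touch vs. evenly weighted multi-touch attribution for one opportunity.
# Touchpoints and pipeline amount are illustrative.
touchpoints = ["JOIN The Tour", "Webinar", "Whitepaper download"]
pipeline = 90_000

first_touch = {tp: 0 for tp in touchpoints}
first_touch[touchpoints[0]] = pipeline  # 100% of the credit goes to the first interaction

multi_touch = {tp: pipeline / len(touchpoints) for tp in touchpoints}  # equal credit to each touch

print("First-touch:", first_touch)
print("Multi-touch:", multi_touch)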

Step 3: Gather key learnings for offline marketing strategies

The final product – our event ROI dashboard – was empowering. One of the best feelings is knowing the data you are analyzing is correct and that you can take direct action from it.

Thanks to our JOIN The Tour dashboard, we now understand the percentage of people who will show up at an event, making future event planning easier. The attrition rate also prompted us to change our outreach to figure out why a registrant didn’t attend.

One of the most important parts of creating this dashboard was how we made sure to understand the attribution of an event to pipeline based on different models. Using the 5 Tips for Growth Attribution Modeling That Actually Work, we compared different attribution models so we could better understand the differences in influence and impact of events in each city.

Our attribution metrics became one of our major success markers. We uncovered regions where events could have a high impact on brand awareness and engagement. We are also now better able to plan effectively for future events, since we can confidently allot more event budget to specific regions and have the data to back it up.

2018-09-12T06:00:00-07:00
Must-Have KPIs for Your Marketing Dashboards
http://looker.com/blog/must-have-kpis-for-your-marketing-dashboards

Over the last few years I have really come to appreciate the value of data for driving and aligning what marketing does within an organization. In the past, I would use data to let me know what I needed to do to improve my tactical efforts and show other people whether or not something was successful. Since so few marketers leveraged data in their projects, it was an “ego thing” for me: I called it my ‘data stick.’ I have since learned that data is best used as a collaborative tool rather than a tool for showing off.

Let me share some of the best ways I have found to leverage data for improving your programs and the company in general. I have some marketing metrics dashboard examples for you, but first we need to think about who we are sharing with, why they need the data, and what they need.

Marketing KPIs for Company

Let us first think about the data you want to share with the broadest group of people in your organization: the company key performance indicator (KPI) Dashboard.

You want to share a metric that’s both easy to understand and highlights whether or not marketing is delivering on its goals or if there are difficulties in reaching those goals. You want the metric in context with other company metrics, so people can understand the marketing-to-sales hand-off within the funnel. You may need to give context as to how your goals are calculated, so understand your cohorts and assumptions (e.g., first touch leads, new sales meetings set, and marketing qualified leads delivered to sales).

The main marketing KPI is the metric that is the most clearly owned by marketing. It is likely leads captured, or form fill-outs. It may be the number of sales opportunities or pipeline dollars delivered this quarter. Just be clear and consistent - it is ok to have simple explanations and context setting.

kpis

This is a single KPI for a company dashboard. If leads are the one thing that is most important and most within marketing’s control, this single KPI is the best thing to show to the entire company.

Marketing KPIs for Marketing Teams

The next group you want to have high-level KPI tracking for is your marketing team itself. They want to see KPI visualizations that show how they, as a group, are achieving their goals. So, it is far more detailed than what you show the entire company in the company KPI dashboard.

You will want this marketing dashboard to reflect achievement to goal by subgroup, region, segment, and channel. For example, at Looker we look at the achievement to goal by acquisition medium (e.g., online, trade shows, email), business segment (e.g., small and mid-sized businesses, enterprise), region (USA, EMEA, APAC), and subregion (UK, France, Eastern US, Western US, etc.).

This detail allows me as the team leader to help divert funds and resources from areas that are overperforming to areas that are underperforming. It also helps to provide the team with a unified goal, so we are working together—not against each other, and it makes it easy for our team to show other groups how our efforts are working at a detail level. We make these dashboards available to all, so that people have an idea of what we do and know we are not hiding anything.

If we are doing well, great! If we aren’t, we can dig in together to uncover what is going wrong. It also allows me to easily highlight to other people in the organization if we have a future pipeline issue. Most of the time when there is an issue, it is an issue with how the data is being captured, tagged, and tracked. On rare occasions, we’ve found that we have just had a hard time generating interest in a region, and I can talk to other teams with data and less emotion.

Return on Investment (ROI) Dashboard

A note on tracking ROI: it depends on your business.

If you are in a mature company, you want to be able to show ROI as much as possible, because you probably have restricted resources and need to show the return on what you do. If you are in a growth-stage company, I would suggest looking at a rolling 12-18 month window of ROI and not looking at it every day or week. This is because you have limited data, so it is not smooth. You probably also have marketing efforts that are too young to look good and profitable. Looking at ROI too early will cause you to stop doing something that might just take a while to turn into won customers.

Getting into the details: KPI performance for segments

OK, now that you have your best KPI dashboards for sharing and tracking how the larger marketing effort is doing, you want to dig into some more detailed data so you and your team can figure out how to optimize programs and tactics. Just like the team KPI tracking above, it depends on your programs and how your company does business.

For any established or growing concerns, you will want to drill into a specific audience and understand how different regions, business units, or segments perform.

At Looker, we are now at the point where we need to know how marketing efforts in a specific region are working. With sales teams serving only specific geographic regions, the ability to round-robin leads and opportunities is becoming more difficult. So, like the team KPI tracking, we have regional KPI tracking dashboards.

kpis

This report shows the number of first touch leads and opportunities created in the last 2 months, by tactical medium, for Emerging Regional Markets across 2 product lines. This is a lot of information in a simple format that is very easy for anyone to absorb, showing how marketing efforts are impacting two products in a specific region.

You need to determine what is most important for your company and team. What are the sales territories that need attention? What business lines need specific tracking? What segment of customers has reached a size that can show trends?

Acquisition channel dashboards

One of the best ways for marketing program managers to optimize their programs is to have a dashboard specific to each program. By analyzing data specific to a program, a marketing manager can confidently make changes. These dashboards are not viewed by many people within an organization, because they are at such a detailed level that few people really need to understand the assumptions, metrics, goals, and achievements behind them. But they will be invaluable if each of your managers is looking at data and improving their area. All of those small optimizations add up to big improvements on the team dashboards. To learn more about dashboards for programs, read how one of our Demand Generation Managers has built her Demand Gen dashboard.

kpis

This report is part of a marketing program dashboard. It compares lead volume week by week for vendors in our affiliate program. It allows the program manager to track and look for issues quickly and easily.

This small portfolio of marketing dashboards, whether they are visual dashboards or dashboards of tabular data, is key to the success of your marketing team. You can uncover issues early through the consistent tracking of KPIs, and you can celebrate success.

2018-08-30T06:00:00-07:00
Fantasy Football: Using Data to Guide your Draft Strategy
http://looker.com/blog/data-driven-fantasy-football-draft-strategy

When it comes to preparing your draft board, it’s important to find value in your picks. Fantasy leagues are rarely won in rounds 1 and 2. It's critical to find players in the later rounds that can contribute to your team's success. It goes without saying that team management (waivers, weekly rosters, injury management, etc.) is also mission-critical, but let’s just focus on the draft for now. So how can we uncover value on draft night?

Well, I always think of value in two ways:

  1. Holistic value. How do you value different positions? How do you value building a well-rounded team vs. taking the best player available? In other words, how do you approach your draft? Having some overall philosophies on your approach will go a long way come draft night.

  2. Pick value. Some people think of this in terms of finding “sleepers.” Sure, finding guys in the late rounds that have breakout seasons is fantastic, but it doesn’t have to be so boom or bust. How can you ensure you’re finding value at every level of the draft, from your top picks to your last?

Let’s start with positional value. In real life, the QB is the most important position on the field. On a fantasy roster, I’d argue the contrary. If we look at the top 10 fantasy QBs last year, aside from Russell Wilson at #1, there are marginal differences in performance among the rest (granted, Aaron Rodgers was injured). Cam Newton at #2 totaled 299.5 points*, while Ben Roethlisberger at #10 had 260.7 points, roughly a 39-point difference. Over the course of 16 games, that’s 2.4 points per game. Furthermore, there were 22 QBs who had over 200 points on the season. Comparing that to RB, the difference between #1 and #10 was 161 points (10 per game), and only 9 RBs totaled 200 points or more. Parlay that with the fact that you only need to select one QB (in most leagues) and multiple players at other positions, and the moral of the story is: don’t value your QBs too highly.

Now let’s determine value between RBs and WRs. Again, the popular belief is that the NFL has become a passing league, and therefore WRs should see more action on gameday. That’s not wrong, but it doesn’t necessarily translate to fantasy value. Most teams run offensive sets of 1 RB, 3 WR, 1 TE or 1 RB, 2 WR, 2 TE as their base. Even if the passing game accounts for 60%+ of a team’s snaps, that is spread across multiple WRs on the field at once, whereas the sole RB on the field will get all touches on running plays. Let’s see a breakdown from last season:

fantasy

Notice a difference? On average, receivers were targeted on 11% of the 44 snaps they played per game. That’s roughly 4.8 targets per game (targets don’t mean catches). RBs, on the other hand, see action on 40% of their snaps, roughly 13 touches per game. As you can imagine, this provides RBs with more opportunities to make something of their involvement in a game. Don’t take my word for it though... take a look at average fantasy points per 100 snaps between the two positions:
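
Here is the simple per-game arithmetic behind that comparison, using the averages cited above; the RB snaps-per-game figure is back-calculated from the touch numbers in this post rather than taken directly from it.

# Per-game arithmetic behind the WR vs. RB usage comparison (2017 averages cited above).
wr_snaps_per_game = 44
wr_target_rate = 0.11
wr_targets_per_game = wr_snaps_per_game * wr_target_rate  # ~4.8 targets per game

rb_touches_per_game = 13
rb_touch_rate = 0.40
implied_rb_snaps = rb_touches_per_game / rb_touch_rate  # ~32.5 snaps per game (back-calculated)

print(f"WR targets per game: {wr_targets_per_game:.1f}")
print(f"Implied RB snaps per game: {implied_rb_snaps:.1f}")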

fantasy

Okay, so core philosophy number two: Take RBs early. In addition to taking them early, try to favor “bell cow” RBs. These are guys who rarely split carries, and average in the low to high 20s in touches per game. There were only 8 of them last year, so like I said... take RBs early. Even at the league average of 4.1 yards per carry (which these guys all exceed), that’s 80-100 yards before touchdowns or receptions.

fantasy

There’s definitely some of you reading this right now saying, “Well I’m in a PPR or Half PPR league, so I’m disregarding this.” Fair, but this also comes down to scarcity. As we see above, there are only so many productive RBs to go around. Most teams have one, if any, draftable RB you can start, whereas they usually have 2 receivers worth drafting. Also, there isn’t as much of a drop-off in receptions per game for receivers as there is in touches per game for RBs. Last year, #1 ranked WR Antonio Brown averaged 7 receptions per game. #36 ranked WR Allen Hurns averaged 4 receptions per game. That’s a difference of 1.5 points per game in half-PPR leagues.

Alright, so we’ve discussed some holistic value approaches when it comes to how you view positions on your draft board. Now let’s talk about how we can find value in each pick. I’ll point out some things to look for, but be sure to reference this draft board for insights deep into the later rounds on draft night. Let’s first take a look at a projected draft pick vs. total points distribution for this year compared to last year’s draft pick vs. total points distribution outcome:

fantasy

It’s pretty funny that the relationship is never as directly correlated as you think it’s going to be. So how do we find the DeMarco Murray (see above) of the 2018 draft class? Drafted late, if at all, Murray was the 23rd ranked RB scoring 152 fantasy points on the season. Well, Murray averaged 15 touches per game (above the league average) and 3 targets per game. This almost puts him in the bell cow category, mentioned earlier. It also means Murray made the most of his snaps. One area to focus on when finding value is points per snap. If you look at the top projected players this season, there's one clear outlier:

fantasy

If you haven’t spotted it yet, it’s Alvin Kamara. As a RB who split time last season, he was incredibly efficient per snap. Enough to land him as the #6 overall projected player in 2018. Will the Saints increase his snaps as a result? Will increased snaps have diminishing returns? Probably yes to both, but still an incredible value. So who are some lower-projected guys with high efficiency per snap?

fantasy

Here are some players getting picked in the 5th round or later with some pretty incredible efficiency. By this time, full-time starters are gone. If you need to grab a backup or #2, you might as well grab the most efficient.

Another core philosophy I have when deciding between two players in a draft is consistency. Boom or bust players will leave you frustrated more often than not. Below is a prime example. This is T.Y. Hilton's yards, touchdowns, and running point total from 2017. Yes, Andrew Luck wasn’t playing QB, but his production was incredibly inconsistent regardless. He is currently projected to get drafted around pick 25 (Round 3).

fantasy

Meanwhile, Doug Baldwin, currently projected to fall 7 picks later at pick 32 (Round 4) had a different story:

fantasy

You can see his production was much more consistent. With guys like this, you know what you’re getting week in and week out. This not only helps with scoring but with your overall team management. Reduce your Sunday morning stress of Googling “who to start” websites by drafting consistency.
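
One simple way to quantify that consistency, which is an addition of mine rather than something from the post, is to compare the week-to-week standard deviation of two players with similar averages: same total output, very different reliability. The weekly point totals below are invented for illustration.

# Comparing weekly consistency: similar season averages, different volatility.
from statistics import mean, stdev

boom_or_bust = [3, 28, 5, 31, 2, 24, 6, 27]
steady = [15, 17, 14, 18, 16, 15, 17, 14]

for name, weeks in [("Boom-or-bust", boom_or_bust), ("Steady", steady)]:
    print(f"{name}: avg {mean(weeks):.1f} pts/week, stdev {stdev(weeks):.1f}")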

To compound on this, I also look at how players pace over the course of a season. Meaning, do they start off great and then trail off? Do they take a few weeks to get going? Again, fantasy seasons should be treated in full, not week by week. I find it interesting that Todd Gurley is the undisputed #1 pick for most leagues, while Ezekiel Elliott is projected to go #3, even #4 behind Le’Veon Bell, in most leagues. If you look at last year, after 8 weeks, right before Elliott’s suspension:

fantasy

He was matching pace with Gurley going into the second half of the season and was outpacing Le’Veon Bell through the first half of the season. In fact, by week 9, Elliott had 18 more points than Bell on the season. If you are picking in the 3, 4, even 5 range and Elliott becomes available, it's a sure-fire pick in my mind. I will conclude that statement by saying I am a diehard Dallas Cowboys fan.

That’s all I have. To recap:

  1. Have some core overall philosophies entering your draft.
  2. Don’t value QBs too high.
  3. Draft RBs early and aim for “bell cows”, even if they are on bad teams.
  4. When it comes to later rounds, look for efficiency or high points per snap.
  5. When comparing players, look for consistency and pace throughout the season. Boom or bust players make weekly management a nightmare.

*All player data is from Fantasydata.com and covers the 2017-18 season; 2018 projections are from Fantasydata.com and 4for4.

2018-08-28T00:00:00-07:00
For ease of use and speed to insights, Shinesty Chooses Looker and Panoply
http://looker.com/blog/shinesty-chooses-looker-and-panoply

Shinesty is one of the quirkiest brands on the web. The company sells clothing, costumes and general fun for those who want to stand out at a party. One look at their website, Instagram feed or Facebook page and you’ll see, this brand has a fun edge.

shinesty

Shinesty is also a Looker and Panoply customer - and uses this data stack to visualize and coalesce data from dozens of sources.

In a recent interview with Bob Vermeulen, Shinesty’s Director of CRM, Bob shares his background, his role at Shinesty and how he’s pushing data intelligence at the small but data-rich internet retailer.

Tell me more about yourself and Shinesty!

I’ve been at Shinesty for a few months. My background is in big data and running direct marketing analytics for 20+ years. Shinesty is about four years old and we’re growing really fast. In 2018, the Boulder-based company realized they had an increasing need for business intelligence insights.

What is data and business intelligence like at Shinesty?

When I was first brought on, Shinesty used RJMetrics, which is a great solution for, say, a 5-20 person company that just needs basic information about their sales - but that tool doesn’t have the flexibility we needed for this stage in our business. For instance, we were limited to only being able to access our own database and were confined to the given data model. As a result, as a BI pro, I’m severely limited: I can’t write back to the tables, and you can’t make calculations or changes to the data model without jumping through hoops with their developer team. In that reality, my requests get put into a queue - with the speed of business these days, I needed another solution.

For example, if I’m running an analysis and I put a request in the queue - by the time the issue is executed it’s been one to two weeks and we’ve moved on to wanting other types of insights/reports.

So, Shinesty knew they needed a different solution and I was brought on board to help that transition. We’ve gone down the path of choosing a three-vendor stack (at least for the time being). We have data integration and warehousing with Panoply, supplemental ingestion via Fivetran, and Looker as our data platform.

There were a few things that initially attracted us to our chosen data solution.

  1. The vast number of connectors available into the data warehouse
  2. Integration with an easy-to-use, fast data visualization solution in Looker
  3. The ability to add data and metrics quickly

Tell me about your data sources.

Our data needs started on the marketing side with ads and web analytics and now that we have a bonafide data stack in place, we’re adding new data sources all the time (operations, merchandising, app performance, etc.). In Panoply, we have data from:

  • Shopify
  • Facebook
  • Klaviyo
  • Bing
  • Netsuite
  • Delighted
  • AdWords
  • ZenDesk
  • Google Analytics

What are the goals for your data stack?

Our biggest goal is democratizing data - giving dynamic dashboards, insights, and the ability to explore data to as many Shinesty employees as possible. We’re a small yet nimble company and we have goals we need to meet - and putting data into the hands of those daily decision makers is huge for us. After that it’s speed to insights and the flexibility to do more complex analyses.

Now that we have more data integrated, we can use insights to better target and mix our paid budgets. Soon, we will better understand our various customer segments across touchpoints, and use predictive analytics so we can be even more forward-looking.

Before, we had basic analytics, but now we can track which channels are performing best - and whether people coming through those channels are lower or higher lifetime value targets for Shinesty - so we can invest or course correct accordingly.

In the near future, we’ll start tracking which inputs yield the highest-value customers for us and which signals predict product success. These are things we could never imagine with our old solution. We’d also like to create propensity models, add those to our emails, and customize our web pages based on each buyer’s profile. But to do any of these things, we need a flexible data environment - which we now have with Panoply and Looker - that allows us to plug in web tools/services without involving IT or change requests.

What metrics or formulas do you track on an ongoing basis?

Before, our business channels and marketing spends were largely tracked at a high level. More detailed analytics had to be pieced together from one spreadsheet model to the next.

As such, we pulled metrics from the tools themselves (such as Google Analytics and Facebook) and imported them into spreadsheets. Now, we’re looking at 75% of the same numbers, but in an automated, scalable way, so our product and channel managers can spend far more of their time making smarter decisions because they’re not copying and pasting data into spreadsheets.

Which channels perform best for Shinesty for attracting customers?

We’ve developed a strong mix at this point. Though I’ll say the nature of our products and marketing biases us toward more visual channels (social, email, etc.). With the data, we can see the inflection points of which channels are attracting the right customers right now. In general, our focus is on entertaining our customers, so that when they’re interested in purchasing a product we haven’t burned them out.

We’re excited to dig into the new data we’ve brought together and see what else we can do with it.

]]>
2018-08-21T06:00:00-07:00
<![CDATA[JOIN 2018: A Look Inside]]> http://looker.com/blog/join-2018-sneak-peek http://looker.com/blog/join-2018-sneak-peek On October 9th-11th in San Francisco, the data-driven community will gather to present and discuss trends, approaches, and tips for the smarter use of data and analytics at JOIN 2018.

This year, JOIN is shaping up to be the biggest and best yet -- a truly educational data conference with the best of last year’s events paired with new and important data topics. There’s a lot to look forward to this year. Here’s a little sneak peek of what will be happening this October in San Francisco:

What’s coming back

Conversations, not just seminars
Like last year, this year’s JOIN provides ample opportunity to not just hear sessions from experts riding the latest wave of analytics, but also to meet them and hold meaningful conversations. Speak with all kinds of data people at the forefront of the data ecosystem, and bring their ideas and guidance back to what you do every day.

Lessons from the Trenches
This year’s JOIN will include folks from Stack Overflow, Turner Broadcasting, the Girl Scouts, WeWork, 451 Research, and a lot more. These are the visionaries making decisions with data at their organizations every day, and pushing the boundaries of what it means to have a data-driven culture.

Hands-on labs
JOIN 2018 features hands-on labs with topics ranging from introductory data exploration to integrating data science workflows. Sit in on labs this year and learn more about topics like table calculations and building environments that encourage data exploration.

What’s new

Data for Good Sessions
We believe it’s important that we use the power of data to make a difference, so this year at JOIN, there will be a focus on how organizations can improve their world with data, not just their businesses. There will be sessions that are specifically focused on security, privacy, and compliance, and activities that allow attendees to give support directly to the larger community.

We hope that everyone will leave the conference having had the opportunity to find new ways to make the world a better place with data.

A Deeper Dive into the Pipeline
The data pipeline can be complicated. New data sources are everywhere, and ETL, virtualization, and databases are improving every day. All of these improvements are exciting, but they can make it even harder to choose the tech stack that’s right for your business.

At this year’s JOIN, companies like Amazon Web Services, Fivetran, Google Cloud, Panoply, Segment, and Snowflake will be sharing the latest updates in their technology as well as how to integrate different solutions to build the right stack for any company. These deep dives will uncover what’s possible with the latest improvements in this space and share stories from real-world implementations that you can apply to your own stack.

New Looker Product Updates and Best Practices
Our goal is to build the best product we can to make you, our customers, successful, and we are looking forward to sharing some exciting new updates to the Looker platform to do just that.

But the excitement goes beyond the main stage - there will be deep dive sessions with the product managers and engineers who are building these features, and workshops on how to get the most out of every part of Looker - both new and existing.

Save Your Spot Today

We are very excited for this year’s JOIN and we hope you are too.

Register today on the JOIN 2018 website.

We can’t wait to see you in San Francisco!

]]>
2018-08-16T06:00:00-07:00
<![CDATA[7 Questions to Take Your Marketplaces & eCommerce Analytics to the Next Level]]> http://looker.com/blog/7-questions-marketplaces-ecommerce-analytics http://looker.com/blog/7-questions-marketplaces-ecommerce-analytics Competition as an eCommerce or marketplace business is fierce. Consumer expectations increase a little with every new website or app that launches: more options, the ability to customize, lower pricing, and seamlessly speedy service. How do you keep up with the competition, let alone exceed expectations, to stand out?

Here are 7 questions the best companies, like Amazon, Deliveroo, and Etsy, ask themselves:

1. Where are your customers coming from?

There are a few things you should know when paying to acquire customers. You need to know where they’re coming from and how much it costs to bring them to your store. To which channel can you attribute these acquisitions? Exactly how many customers were acquired, and at what cost?

With the right tools, you can build your own cross-programme metrics that empower you to take control of customer attribution claims, and any charges that may ensue.

SmugMug, a premium photo-sharing service, discovered that the numbers from one of their vendors directly contradicted their own data. The vendor claimed to be the source of a certain category of new subscribers, but SmugMug had concrete data to prove those subscribers had started a SmugMug trial before seeing their first ad from the source in question.

Once you have a source, you can then look at the number of new customers different sources are delivering to your business. You can also group your most valuable customers—those who spend a lot, make frequent purchases, buy products with high margins, and/or rarely return—and see where they’re coming from and how much you spent on the source(s). Together, this information equips you to make better, more informed marketing decisions.

2. What are your most valuable referral sources for customers?

Whether it’s providing rewards through referral programmes or offering discounts for friends, every e-commerce business employs various guerilla marketing tactics.

Ask yourself: Are your referral tactics working? Or are people gaming the system? Who are your referrers and what is their customer value? What are the characteristics of referred customers versus organically sourced customers?

HotelTonight, an app for booking hotel rooms from your phone, has a program that gives referrers a discount on their next hotel stay. An analyst at HotelTonight observed there were people who referred a lot of customers and collected the reward but had never used the app themselves. He wondered, “Are we getting gamed?” When he investigated further, he discovered their programme was working in an unexpected, but very beneficial, way. It turns out there are professions—taxi drivers, flight attendants, airport information staff, etc.—whose members are often asked for last-minute hotel advice. These individuals never used the HotelTonight app for paid stays, but got a lot of credit for free hotel stays by driving HotelTonight business.

3. Who are your most valuable customers, and what is their lifetime value?

You can use your transactional data to calculate what a customer is worth to you, so you can focus your best efforts on those with the highest lifetime value. To calculate customer lifetime value, take lifetime revenue less the cost of acquiring and maintaining the business. Somebody who already buys a lot from you may be a great person to target with specialised offers—for example, those that build brand loyalty with minimal impact on margins—saving your discounts for customers you’re hoping to win over.
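
As a rough sketch of that arithmetic in SQL, assuming hypothetical orders and acquisition_costs tables (one acquisition-cost row per customer; all column names are illustrative):

  SELECT
    o.customer_id,
    SUM(o.order_revenue)
      - SUM(o.cost_to_serve)
      - COALESCE(MAX(a.acquisition_cost), 0) AS lifetime_value
  FROM orders o
  LEFT JOIN acquisition_costs a
    ON a.customer_id = o.customer_id
  GROUP BY o.customer_id
  ORDER BY lifetime_value DESC;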

Things get even more interesting if you add referrals to the mix. You can calculate the lifetime number of referrals for a customer, calculate the value of all those referrals, then determine the average value of a referral for the customer. Afterwards, you can build different classes of referrers and create offers that incentivise each one appropriately.
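
Continuing the earlier sketch, a hypothetical referrals table joined to those lifetime values (here materialized as customer_lifetime_value) would give each referrer’s lifetime referral count and average referral value:

  SELECT
    r.referrer_customer_id,
    COUNT(*) AS lifetime_referrals,
    SUM(c.lifetime_value) AS total_referred_value,
    AVG(c.lifetime_value) AS avg_value_per_referral
  FROM referrals r
  JOIN customer_lifetime_value c
    ON c.customer_id = r.referred_customer_id
  GROUP BY r.referrer_customer_id
  ORDER BY avg_value_per_referral DESC;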

4. How are customers engaging on your site?

In addition to third-party data sources and first-party transaction data, online retailers can capture event-level data that describes customer interactions in great detail. By combining granular event data, consolidated across all digital sources, it’s possible to roll up your data into sessions for a very interesting view of customer behavior. How often do people visit and not purchase? What percentage of customers bounced? Does the percentage change over time? How did people from one referral source engage compared with people from another source?
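
One common way to roll raw events into sessions is to start a new session whenever a user has been inactive for more than some cutoff (30 minutes is typical). A minimal sketch, assuming a hypothetical events table with user_id and event_timestamp columns and Postgres-style interval arithmetic:

  WITH flagged AS (
    SELECT
      user_id,
      event_timestamp,
      CASE
        WHEN LAG(event_timestamp) OVER w IS NULL
          OR event_timestamp - LAG(event_timestamp) OVER w > INTERVAL '30 minutes'
        THEN 1 ELSE 0
      END AS is_new_session
    FROM events
    WINDOW w AS (PARTITION BY user_id ORDER BY event_timestamp)
  )
  SELECT
    user_id,
    event_timestamp,
    SUM(is_new_session) OVER (PARTITION BY user_id ORDER BY event_timestamp) AS session_number
  FROM flagged;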

Lyst is revolutionizing the way people shop by connecting millions of consumers to the world’s leading fashion designers and stores all in one place. At the heart of Lyst is an enormous amount of data, generated by tracking implemented on their website to better understand their customers' online behaviour.

By tracking everything, from a .gif loading in the background to the products a visitor browses, the Lyst team gains insight into how their customers navigate the site, and can then optimize it to create a painless shopping and checkout experience.

Analysis of event data can suggest opportunities to improve your site or create new tools for better engagement. It can help you determine which prospects are worth something, so you can remarket to them in a way that will delight them. It will reveal the sources that drive the behaviours you want, so you can take action in just the right way, with the right level of resources.

5. How can you turn help requests into happier customers?

Customer Support requests are chock full of information about friction points in your customer experience. Unfortunately, this info is often trapped in single solution tools and company bottlenecks.

The Dollar Shave Club customer service team improves customer experience by tracking the volume of help tickets. For example, if 25% of tickets are unresolved at the end of the day, the team can add an extra shift or otherwise reallocate resources to make sure a bigger backup doesn’t occur. Customers stay happy because their issues are resolved quickly and the Dollar Shave Club team never gets too deep in the weeds.
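
A metric like that is simple to compute once tickets live in the warehouse. Here is a sketch, assuming a hypothetical help_tickets table with created_at and status columns (a production version would snapshot status at the end of each day rather than use current status):

  SELECT
    DATE(created_at) AS ticket_date,
    COUNT(*) AS tickets_opened,
    SUM(CASE WHEN status <> 'resolved' THEN 1 ELSE 0 END) AS tickets_unresolved,
    100.0 * SUM(CASE WHEN status <> 'resolved' THEN 1 ELSE 0 END) / COUNT(*) AS pct_unresolved
  FROM help_tickets
  GROUP BY DATE(created_at)
  ORDER BY ticket_date;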

The customer service team also tracks the relationship between help requests and churn. By connecting all their company data, they were able to see how shipping errors were leading to lost customers and recommend improvements to the shipping process.

6. Where can you be more efficient?

For each SKU, you can compute “days on hand”—available inventory divided by average daily sales—to see how fast the warehouse will clear out of a product and to understand how big your buffer is. For available inventory, you track how many units of the SKU were in inventory at multiple points in time to understand how fast inventory is going down or how fast it’s increasing as it’s restocked. For average daily sales, you can collect sales data over a relevant period of time and divide by the number of days.
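
As a sketch, days on hand per SKU could be computed from hypothetical inventory_snapshot and order_lines tables, using a trailing 30 days of sales for the daily average (date arithmetic shown in Postgres style):

  SELECT
    i.sku,
    i.units_on_hand,
    SUM(ol.quantity) / 30.0 AS avg_daily_units_sold,
    i.units_on_hand / (SUM(ol.quantity) / 30.0) AS days_on_hand
  FROM inventory_snapshot i
  JOIN order_lines ol
    ON ol.sku = i.sku
   AND ol.order_date >= CURRENT_DATE - INTERVAL '30 days'
  GROUP BY i.sku, i.units_on_hand
  ORDER BY days_on_hand;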

ThredUp, an online consignment shop specialising in “practically new” clothing, runs their business on data. ThredUp tracks and analyses the variables that impact operational efficiency and overall productivity. For example, ThredUp inventory managers evaluate performance and units per hour against metrics for each processing station in its distribution center, enabling them to quickly make changes to improve outcomes. In addition, the company generates regular productivity reports, maintains a dashboard of activity, and distributes weekly awards in its warehouse to create incentives for employees.

7. How do I scale growth for both sides of the marketplace in balance?

Marketplaces face the added trick of needing to balance customers on multiple sides of the interaction: those interested in providing a service and those interested in paying for a service. Both sides come to the table expecting to walk away with something they need—be it payment or a good, say a uniquely handcrafted gift or a ride between locations—and both sides expect the process to be much easier by going through the marketplace. Are people coming back on both sides?

Deliveroo, an award-winning, environmentally friendly delivery service, is on a mission to deliver the best selection of food in the fastest time possible and with the best customer service. Data is key to making sure Deliveroo balances supply and demand across hungry diners, restaurant orders, and delivery jobs. Everyone at Deliveroo has access to data, allowing employees to go beyond asking the basic question of ‘what happened’ to asking ‘why’ certain things happened.

By looking at how regional and seasonal differences impact orders, Deliveroo has been able to speed up the time it takes for a customer to get their meal, leading to higher customer satisfaction and repeat purchases. Deliveroo then shares these insights with its restaurant partners, helping them identify prime spots for new locations in areas where consumers are hungry for their food but previously had to travel farther or wait longer to get it, opening up new revenue streams for those partners.

The repeat customers and thriving restaurants are providing lots of work and earning opportunity for Deliveroo riders, thus growing all sides of the marketplace.

Summary

With what feels like new competition popping up every day and so much data to try and make sense of, it can be easy to lose sight of what matters -- your customers. We hope these questions help you take a step back, think about the big picture, and really focus on the questions that matter most.

Enjoyed our tips for getting the most out of your data?
Follow us on Twitter and LinkedIn for more data tips, stories, and news.

]]>
2018-08-06T00:00:00-07:00
<![CDATA[Women of Data: Anicia Santos, Sales Engineering Manager at Looker]]> http://looker.com/blog/women-of-data-anicia-santos http://looker.com/blog/women-of-data-anicia-santos We are very excited to introduce our very own Anicia Santos as the next member of the Women of Data series.

Anicia spent the first ten years of her life on the island of Saipan, and then moved to Colorado, where she graduated from college. Afterwards, she took a road trip to San Francisco and never left.

Anicia is a passionate advocate for inclusion, diversity, and equity in tech, and loves mentoring new sales engineers as a Sales Engineering Manager at Looker.

What’s your background and how did it lead you to get into a career in data?

I was pre-med in college, and in some of my lab courses, I did data analysis. I also helped my mom, who is a pediatrician in private practice, collect and analyze data on the different growth patterns of children raised in altitude. Most of that just required basic Excel skills.

I ended up not wanting to pursue medicine, so I looked for any other job that would take me. Turns out, working at a recruiting agency wasn’t a good fit for me, but then I lucked into a job at a college athletics recruiting software company called CaptainU.

While I was working in a sales operations job, I was constantly asking our CFO and our data analyst to pull different data points from our database. Our CFO eventually asked me if I wanted to learn SQL. They both taught me some SQL and enough VBA to lock up my MacBook for hours trying to run different macros. Having access to all that data, being able to see the impact I had on the business, and discovering insights on my own was empowering. I was hooked. After a while, I took over data analysis for the entire company.

What has been the biggest surprise in your career?

As a woman who struggles with impostor syndrome, I am constantly surprised that people will ask for my advice and benefit from following it.

How do you think individuals can use data to advance their ideas or careers?

Data can help strengthen our narratives. When you want to push for a new idea or show how you're making an impact, lead with your story, but complement that story with data.

What are some of the biggest challenges in leading today? And how are you thinking about dealing with those challenges?

I can’t speak to everyone’s experience, but something I have been thinking about a lot lately is how I can best support very different people on my team. I value the unique backgrounds and strengths that each person on my team brings, and I want them to feel that. In sales engineering, we are trying to create repeatable success, so sometimes people make the mistake of seeking out carbon copies of team members who have performed well in the job. But I find that we execute better and can work more collaboratively as a team when we seek out different people and value those differences. For now, I am just trying to learn as much as I can about my team members–what motivates them, what they struggle with, and what they like most about themselves.

What advice would you give to other women who are interested in pursuing a similar career path to yours?

There is probably an easier way to do it than the way I did. I didn’t know anything about the tech industry when I graduated. I didn’t even know that software engineering was a thing. I only found out about sales engineering a couple years ago. But know this: if you are passionate about teaching people about things you love and convincing them to love those things too, you could make for a great sales engineer. So find what you love, learn everything you can about it, and then go find the company who has built a business model around that thing.

“The experience of mentoring another person, or even just helping them in a tight spot, will teach you a lot about yourself and how you can improve in the areas you’re struggling.”
What can women in the workplace do today to help build the foundation for successful careers?

The experience of mentoring another person, or even just helping them in a tight spot, will teach you a lot about yourself and how you can improve in the areas you’re struggling. When we mentor others, we often have an easier time praising their successes and strengths and helping them grow. I think that’s because we don’t have the baggage and impostor syndrome that comes along with trying to objectively judge those same characteristics in ourselves.

When I think about my own strengths and weaknesses, I can get caught up pretty easily in guilt over my confidence in some areas and then start feeling bad about my deficiencies. However, if I think about the new sales engineers that I mentor and their strengths and weaknesses, I can easily see those without baggage and figure out how to help my mentees emphasize their strengths and strategize on how to compensate for or improve their weaknesses. That, in turn, helps me re-frame my self-evaluation to make it more constructive.

Do you think that data can help build a more diverse and equal workplace?

I don’t think you can lead with data when trying to build a more diverse and equal workplace. We have run the numbers up and down to show that a more inclusive workplace is a more successful one. Where data needs to play a part now is in measuring our progress. If you are doing a training, instituting a new program, or changing a process, you need to think about the effect you want that to have and on whom, and then you need to measure that effect. Like any other strategic business goal, you need to hold yourself accountable to making real change with goals and metrics. Data helps us do that.

]]>
2018-08-02T06:00:00-07:00
<![CDATA[Accessible Data Science with BigQuery Machine Learning + Looker]]> http://looker.com/blog/data-science-with-bigquery-machine-learning-looker http://looker.com/blog/data-science-with-bigquery-machine-learning-looker At Google Cloud Next ‘18 today, Google took a step toward more accessible machine learning with the announcement of a new feature for Google BigQuery called BigQuery Machine Learning (BQML). BQML is a fully managed service that makes it easier for data scientists to build and train machine learning models in BigQuery using SQL syntax.

The Traditional Data Science Workflow

Most organizations have failed to realize the value of predictive analytics because the data science workflow requires a lot of resources, and the largest resource consumption often has little to do with the actual discipline of data science or the creation of machine learning models.

A typical data science workflow can look like this:

  1. Generate hypothesis & define features -- define the relevant attributes (features) of the data that the data scientist believes can be used to predict future behavior.
  2. Prepare training dataset -- from the hypothesis, build a training dataset and move it into the data science environment to feed the model.
  3. Build model in data science environment -- build a model in R or Python and use the features within the training dataset to predict behavior.
  4. Validate the model -- compare how accurately the model predicted real-world behavior and make adjustments until the model generalizes well enough to be run on real-world data.
  5. Export the model -- once the model can be productionized, move its outputs back into a dedicated data warehouse so its insights can be surfaced to business users.

You might guess that the most important steps, building and validating the model, take up most of a data scientist’s time. However, the breakdown of time actually looks like this:

[Chart: how data scientists actually spend their time, with data preparation and cleaning taking the largest share]

Frequently the most interesting portion of a data scientist’s job (really their core competency as data scientists)—analyzing and interpreting data—is only a small fraction of their day-to-day responsibilities. Much more of their time is spent munging and cleaning dirty data. In fact, “dirty data” was by far the biggest barrier faced by respondents in Kaggle’s 2017 “State of ML and Data Science” Survey.

And this is because data environments within many companies are messy. Data is strewn across various tools and departments, so data scientists spend a vast amount of time simply preparing the dataset for their analysis and moving that data into a place where they can do their work.

Google, one of the leaders in AI and machine learning, is leveraging its BigQuery database solution to help address this problem.

The Google BigQuery ML Advantage

With BigQuery Machine Learning, data scientists can now build machine learning (ML) models directly where their data lives, in Google BigQuery, which eliminates the need to move the data to another data science environment for certain types of predictive models.

Data scientists will still want to leverage dedicated data science environments such as R-Studio and Jupyter Notebooks for more complex analyses. However, for common types of linear and logistic regression models, a data scientist can dramatically reduce time spent moving and consolidating data by iterating on their machine learning models directly in BigQuery.
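
For example, training and evaluating a simple churn classifier in BQML is just SQL. This is a minimal sketch; the project, dataset, table, and column names are hypothetical:

  -- Train a logistic regression model directly in BigQuery
  CREATE OR REPLACE MODEL `my_project.analytics.churn_model`
  OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
  SELECT
    days_since_last_order,
    lifetime_orders,
    lifetime_revenue,
    churned
  FROM `my_project.analytics.customer_features`
  WHERE split = 'train';

  -- Check how well the model generalizes on held-out rows
  SELECT *
  FROM ML.EVALUATE(
    MODEL `my_project.analytics.churn_model`,
    (SELECT days_since_last_order, lifetime_orders, lifetime_revenue, churned
     FROM `my_project.analytics.customer_features`
     WHERE split = 'test'));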

A New Workflow with BQML + Looker

Once the model has been built and is ready for testing, a data scientist must ensure that the outputs of the model are piped back into the database and made available to business users. Traditionally, this step might require pushing the data back into a data warehouse or setting up a new data pipeline to bring the data scientist’s work closer to the broader organization.

With Looker on top of BigQuery, this step is eliminated. Because the data never leaves BigQuery, data scientists can easily unlock the value of this final step by pushing the output of their models to end users immediately, through the same methods already being employed on top of BigQuery.

Now, with BQML + Looker, the workflow for data science looks like this:

  1. Define features -- define the relevant attributes (features) of the data that the data scientist believes can be used to predict future behavior.
  2. Build training dataset in Looker -- the data scientist can rely on existing business logic and pre-cleaned data to define features in the LookML model.
  3. Build model within Google BigQuery -- the data scientist selects any set of those features to iterate on an ML model directly in BigQuery. BQML objects can be defined inside of Looker with a cadence for retraining.
  4. Operationalize via Looker -- predictive objects can instantly be used anywhere in the Looker platform, for operational or analytical use cases (see the sketch below).
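
As a sketch of that last step, the model’s predictions can be queried with ML.PREDICT and surfaced through Looker, for example as the SQL behind a LookML derived table (the model and column names are the same hypothetical ones used above):

  SELECT
    customer_id,
    predicted_churned,
    predicted_churned_probs
  FROM ML.PREDICT(
    MODEL `my_project.analytics.churn_model`,
    (SELECT customer_id, days_since_last_order, lifetime_orders, lifetime_revenue
     FROM `my_project.analytics.customer_features`));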

Connecting directly with Google BQML reduces complexity for data scientists by eliminating the need to move the outputs of predictive models back into the database for use, while also improving time-to-value for business users, allowing them to operationalize the outputs of predictive models to make better decisions every day.

We believe the future of data lies in amplifying the capabilities of everyone, from data scientists to analysts, to deliver more value and insights to their organizations, and we’re proud to work with Google to make this vision a reality.

Want to learn more about how Looker improves the data science workflow? Visit our data science solutions page and learn about how Stack Overflow uses Looker to increase the efficiency of their data science workflows.

Want to understand how to use Looker to leverage your Google Cloud platform? Visit our Google ecosystem page to learn more about Looker’s integration with Google BigQuery.

Ready to see Looker and BQML in action? Request a demo to see the benefits of Google BigQuery and Looker on your data.

]]>
2018-07-25T00:00:00-07:00
<![CDATA[Women of Data: Cara Baestlein, Data Scientist at Snowplow]]> http://looker.com/blog/women-of-data-cara-baestlein-snowplow http://looker.com/blog/women-of-data-cara-baestlein-snowplow Cara Baestlein is a Data Scientist at London-based Snowplow Analytics. Before getting her MSc in Economics from University College London, Cara received her undergraduate degree from Edinburgh University, where she realized her interest in Data Science while spending a year abroad at Columbia University.

After university, Cara worked in finance, consulting, and startups, gaining the experience she now brings to her work helping companies of all sizes design and implement the data collection they need to build data-driven cultures at their organization.

Hi, Cara! Can you tell us a bit about your background and how it lead you to get into a career in data?

I studied economics at university, first in Edinburgh and then at UCL. During an exchange year at Columbia University I took an introductory course in Computer Science, and that really awakened my interest in applying all the statistical techniques I was learning in Econometrics classes to real-world data. For my Masters thesis at UCL, I applied Vector Autoregressive models to estimate the effect of Quantitative Easing during the recent financial crisis. And after graduating, I participated in a research project using machine learning to determine the sentiment of the German population towards refugees based on Twitter feeds. What I enjoyed most was putting the economic theories I was learning to the test.

But while in the economic setting you mostly have to rely on data sets prepared by others, with large lags and imperfect assumptions, in the digital analytics world data can easily be collected in real time. As part of Snowplow’s Professional Services team, I get to help clients around the world not only design and implement what data they want to collect, but also to use the data most effectively to answer the questions that make the difference to their business.

What advice would you give to other women who are interested in pursuing a similar career path to yours?

Find out what you are really passionate about, what you would be happy spending every day doing, as opposed to a job that you like the idea of more than the actual day-to-day work. And keep an open mind about what your ideal job might look like, whether it be what industry you will work in, what location, or what size of company. You might surprise yourself and end up enjoying a job you never set out to get.

What has been the biggest surprise in your career?

A real surprise to me was that business culture seems to be a bigger blocker in using data effectively than a lack of sophisticated tools to do so. The technologies available today really enable even the smallest of companies to use data effectively to drive their decision making. Yet a lack of understanding and trust in these technologies often means they are not adopted as widely as you would expect.

“Especially at the beginning of a career I think it is really important to feel like you are growing through your work. It can be scary always reaching slightly further than what you feel comfortable with, but in my experience that simply is the fastest way to learn.”
What are some of the biggest challenges in leading today?

Technology is evolving rapidly, and while this provides wonderful new opportunities for growth, it also presents new challenges. Leadership needs to keep up with the progress of technology, adapting and responding to new ways of working and doing business to continually be able to attract happy customers and highly skilled, enthusiastic employees.

What can women do today to help build the foundation for successful careers?

I think it can be tremendously valuable to continually identify new areas you are curious about and invest some time into learning new skills. Especially at the beginning of a career I think it is really important to feel like you are growing through your work. It can be scary always reaching slightly further than what you feel comfortable with, but in my experience that simply is the fastest way to learn.

Do you think that data can help build a more diverse and equal workplace?

Definitely, I can think of two reasons in particular: firstly, data allows companies to evaluate strategies and performance more objectively, free from human biases or stereotypes. Secondly, because the field of data analytics is a relatively new industry, particularly with regards to digital data, the expected persona and career path of a great data scientist haven’t been solidified yet. And so it’s really more about your passion and aptitude for the job as opposed to what university you went to or where you worked before.

How do you think individuals can use data to advance their ideas or careers?

I see data and the insights derived from it as another tool under our belt to help us make better decisions. It allows us to understand the problems we face agnostically, and tackle them more efficiently. I feel like it brings this whole new dimension to the way we work today: we can test and evaluate our strategies, practices and assumptions in a way that was previously only accessible to scientists running experiments in labs. We can now run experiments in the real world, whether it’s A/B testing new website features or evaluating the performance of an algorithm recommending products to customers in real time.

]]>
2018-06-28T06:00:00-07:00
<![CDATA[Announcing the Looker User Guide: Your resource for getting the most out of Looker]]> http://looker.com/blog/announcing-the-looker-user-guide http://looker.com/blog/announcing-the-looker-user-guide Whether you are brand new to Looker, or an expert in LookML, Looker now has a single destination to find all of the content, training and resources you need...the Looker User Guide!

Here is a breakdown of everything you have access to.

User Guide Homepage

If you are new to Looker, start here.

The User Guide homepage is broken into three goal-based pathways. Collectively, they will guide you through all you need to know to get started with Looker.

The Pathways

View: Learn how to use Looker to view dashboards, reports and more.

Build: Learn how to pivot and filter data, create custom visualizations, and share dashboards with your stakeholders.

Develop: Learn how to connect to your database and get started writing LookML, our SQL-derived development language.

Once you’ve selected your pathway, you're free to browse through Looker's curated content and learn more about the aspects of Looker that interest you most.

Help Center

This is where you’ll find answers to the most commonly asked Looker questions. Our Help Center is full of troubleshooting advice, how-to guides and assorted best practices. Type in your question for thorough answers courtesy of Looker's team of service experts: The Department of Customer Love.

Documentation

Your one-stop authoritative reference for using Looker and understanding the technical details.

Closely aligned with releases and product updates, this is where you’ll find articles, in-depth video tutorials, and useful information on everything from administration functions to writing LookML and organizing content.

User Forums

Looker User Forums are your direct connection to the ever-growing Looker community of data lovers. An excellent resource for users of all skill levels, the forums let you join conversations with other users, ask questions, and share your learnings. A User Forum is a great place to learn with your peers and help others solve data problems.

Training

Our new online learning platform is the place to go for accessing live and self-paced training courses. Register for any of the free eLearning courses for on-demand videos and self-service learning. Or, if you prefer, join one of our weekly live webinars or in-person classroom trainings.

Not sure where to look? Type your question into the User Guide search box and see where you can find applicable content across resources.

Go ahead, dive in.

Find what you need or learn something new with the Looker User Guide.

If you have thoughts on ways we can improve this experience, don’t hesitate to reach out. Email me directly at Shanann.Monaghan@looker.com and we can chat.

]]>
2018-06-14T06:00:00-07:00
<![CDATA[Women of Data: Liz Hartmann, Data Analytics Lead at Segment]]> http://looker.com/blog/women-of-data-liz-hartmann-segment http://looker.com/blog/women-of-data-liz-hartmann-segment Liz Hartmann is the Data Analytics Lead at Segment. She was born and raised on the East Coast but has been living in California since graduating from Cornell University.

Liz previously worked at Dropbox, Freeman, Sullivan & Company (acquired by Nexant) and Acumen, LLC. When she isn’t nerding out over data, she volunteers at the local YMCA teaching a group fitness class.

Hi Liz! Can you share a bit about your background? How did it lead you to a career in data?

I was a bit of a math nerd growing up, but I studied public policy in college. I was interested in working in the non-profit sector, but in the end, I couldn’t resist the pull of data analytics. It was actually in the process of writing my honors thesis (exploring factors that affected the progression of relationships among low-income couples) that I figured out how much I liked working with data. I used Stata (statistical programming software) to clean up, analyze, and summarize the data I needed for my thesis. Once I graduated, I decided to dive into data analysis as a career and I’ve never looked back.

What advice would you give to other women who are interested in pursuing a similar career path to yours?

Go for it! Data is only becoming more and more abundant in the age of all things tech, and it’s not going to analyze itself!

What has been the biggest surprise in your career?

Honestly, it’s been getting into tech! I spent the first four years of my career working in consulting – one year analyzing Medicare and Medicaid data for the federal government and three years analyzing residential and commercial electricity usage data for utility companies like PG&E. I learned a ton about all facets of data analysis in both of those jobs and, by switching industries, I also realized that the skills I had acquired were useful across many different types of data.

When I was ready for a change, I just happened upon the listing to start up HR analytics at Dropbox. Focusing on internal employee data seemed like a great role for me given my experience in data analysis and my undergraduate background studying social policy and demographics. Luckily for me, Dropbox agreed, and I’ve been in analytics for tech companies ever since.

If you had told me when I graduated with a degree in public policy that one day I would be working for Silicon Valley tech companies, I would have laughed at you - but here I am! I think it's a good reminder that your degree doesn’t necessarily have to dictate your career for the rest of your life.

What advice would you give women about building the foundation for a successful career?

Speak up for yourself. There is nothing wrong with or arrogant about advocating for yourself, and I think most women (and men!) should be doing it more.

Don’t forget to take care of yourself. Yes, it might seem like working 100-hour weeks will pay off in the long run, but that isn’t sustainable for anyone and you will eventually burn out. You can work hard but still take time to recharge.

What are some of the biggest challenges in leading today and how are you thinking about dealing with those challenges?

As someone who is fairly new to leading at a startup, I would say one of my biggest challenges is learning to let go and delegate things that I used to do myself. I think Molly Graham (most recently the VP of Operations at the Chan Zuckerberg Initiative) put it best when she talked about needing to “give away your Legos.” Instinctively, you may want to hold on to all the balls you’ve been keeping in the air as your company grows, but in order to make a bigger impact and zoom out a little bit, you need to share your Legos.

Do you think that data can help build a more diverse and equal workplace?

Yes, I do! A great example is workplace diversity statistics. In recent years a growing number of public and private companies have been releasing diversity stats about their workforces. I think this transparency is amazing because it holds companies accountable, not just to their employees or their stakeholders but to the greater public. Segment hasn’t released our data yet because we are a bit small for that, but we track diversity data internally and use it to inform decisions around hiring initiatives.

How do you think individuals can use data to advance their ideas or careers?

As the mousepad I’m using at this exact moment says, “Without data, you’re just another person with an opinion.” Regardless of your team or role in a company, you can use data to better understand your baseline, set quantitative goals and work towards those goals. Along the way you can keep tabs on the important metrics so you can keep doing the things that are helping and iterate and change the things that aren’t.

]]>
2018-06-08T06:00:00-07:00
<![CDATA[Why centralized data access is key for your organization becoming 'GDPR ready']]> http://looker.com/blog/why-centralized-data-access-is-key-for-gdpr http://looker.com/blog/why-centralized-data-access-is-key-for-gdpr After the better part of two years of preparation, debate and conjecture across the technology industry, today, the General Data Protection Regulation (GDPR) is finally upon us.

In the past, the impact of this type of regulatory change would have been confined to the IT and data teams. Nowadays, however, nearly everyone handles data. From customer communications to employee records and beyond, much of this information will qualify as personal data. This means, according to the GDPR, that data must be controlled, used based on published commitments, secured, and ‘deletable’.

Yet, for many companies, allowing access to data has typically required copying, exporting, and extracting data – which leaves a trail of personal data across any number of laptops, servers and systems, both inside the company and with third parties.

Tackling data sprawl

Once data is disconnected from the central source, people begin to rely on the types of decentralized storage “systems” mentioned above. “Oh, I have that list of email addresses on my laptop.” They’re then left with disparate data ‘swamps’ that are impossible to search and even harder to manage and protect.

From the perspective of IT, it’s one thing to control one highly guarded fortress. It’s another challenge entirely when you don’t know how many fortresses exist, what data is inside, how it’s used, how many keys have been copied, or who has access to them. This is the challenge Chief Privacy and Data Protection Officers are being presented with. It’s a problem we need to tackle as an industry – or many will fall victim to GDPR’s potentially severe penalties, or to a loss of customer trust.

This is an issue that requires a long-term solution – and it cannot be solved by a one-time, CIO-led data swamp cleanup. If data analysis tools encourage “data sprawl” -- extracting data and moving it into ‘data workbooks’ for analysis -- the problem will recur. So even after CIOs and IT teams have transformed their data swamps into clean and organized data lakes, their analysis tools start the problem all over again – creating a never-ending spiral of pain.

Why you need a single access point for your data

That’s why any long-term solution has to address the root of the problem. Businesses need a single access point for their data. They need to see who has accessed it and what they’ve done, all in one centralized, managed, secure place.

Introducing this kind of system immediately cuts down the number of steps required to start examining data and delving into whether it’s actually useful or not. Analytics can happen faster, and without encouraging data sprawl. Additionally, such a platform leverages the world-class security of today’s most advanced databases, giving administrators control over and insight into who’s accessing data and how long it’s cached for.

The role of centralization

Looker is a centralized, flexible data platform that leaves your data in your database. This means that employees no longer need to extract data in order to analyze it. They can interpret it and act on it directly, accessing only the data they need to answer their immediate questions, while still retaining the ability to ask more.

This makes it possible to develop a long-term data governance and analysis strategy in which analysts can still provide their organization with game-changing business insights while maintaining compliance with regulation. An easier process. Cleaner data. And GDPR ready. That’s the modern approach to analytics your data-led business should consider.

]]>
2018-05-25T00:00:00-07:00