Looker Blog | discover.looker.com | Copyright 2019

The Next Wave of Data: Top Takeaways from Barcelona
http://looker.com/blog/top-takeaways-from-barcelona-event

The demand for access to actionable data has never been higher.

The challenge for data professionals lies in meeting this demand while ensuring that:

  • Access to data is managed in a secure, governed, and scalable way.
  • Data tools and technologies are actually adopted by the business.

To delve into this, we recently hosted a meetup in Barcelona, a thriving hub for data-driven organizations. We brought together some of the brightest regional minds in data from CIS Consulting, Looker, Snowflake, Fivetran, King, Marfeel, and more.

Here are the core themes we observed:

Three Key Considerations for your Data Tech Stack

Traditional data infrastructures have evolved significantly. The rise of cloud-based solutions has given organizations virtually unlimited capacity to store, process, and analyze their ever-growing volumes of data.

Because of this, companies can now tap their data's real potential with the help of new and reimagined data solutions.

Data Integration

There is a new approach to ingesting data from multiple sources which involves extracting, loading, and then transforming data (ELT approach) as opposed to traditional ETL.

This new blueprint for data integration tools (which leaves the transformation stage until last) allows engineers to create a more flexible stack. With it, they can easily apply changes to the business model at a later stage — saving time and money.
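As a sketch of the difference, in an ELT pipeline the load step writes source data to the warehouse untouched, and the transformation is just a view that can be rewritten later without re-ingesting anything. The snippet below uses Snowflake-style semi-structured syntax, and the table and column names are hypothetical:

```sql
-- Load step: raw payloads land in the warehouse as-is.
CREATE TABLE raw.orders_raw (
  payload   VARIANT,      -- the untouched source record
  loaded_at TIMESTAMP
);

-- Transform step: defined (and redefined) after the fact, in SQL.
CREATE VIEW analytics.orders AS
SELECT
  payload:order_id::STRING AS order_id,
  payload:amount::NUMBER   AS amount,
  loaded_at
FROM raw.orders_raw;
```

If the business model changes, only the view changes; in a classic ETL pipeline the same change would mean rebuilding the transformation and re-loading the data.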

Data Storage

Data storage, like data integration, has also evolved. Modern tech stacks need a database solution that can support organizations as they scale, handling increasing volumes of data without compromising on reliability or performance.

Security and data protection must exist in every aspect of a cloud warehouse architecture. Whereas warehousing used to be complex and inflexible, newer cloud-based databases have turned the industry on its head.

By separating storage from computing, both data providers and consumers are now able to share live data concurrently in a secure and managed environment.

Business Intelligence

Today’s data analytics platforms need to fit into the workflow of the entire company and provide a single version of the truth. By doing this, organizations enable their business users to carry out the analytics they require on a daily basis.

This approach to analytics, where quality information is easily accessed, means that high-functioning and efficient teams can make critical data-based decisions.

Cost Saving and Growth Aiding

Most businesses are actively looking for ways to reduce unnecessary costs and drive growth. Understanding the new data technologies available can be the key to a data-driven strategy. This strategy enables lowered operational costs, improved profit margins, and — ultimately — a competitive advantage in the market.

King, the mobile gaming giant that brought Candy Crush to the world, implemented an incident management process that reduced the operational cost of incidents by 70%. The team highlighted how Looker has been pivotal in the process by providing the ability to run root cause analysis and anomaly detection on vast amounts of data.

In general, there is a significant inefficiency in IT staff and analysts spending endless hours on laborious and time-consuming “data cleansing” tasks.

Implementing a data stack that removes the complexities of data preparation and transformation allows these teams to focus their efforts on creating real value for the enterprise and data consumers.

This reallocation of brainpower to focus on data-driven strategy is often the key to driving creativity and unlocking new market potential.

In the End, It’s About Culture

Organizations are focused on creating and embracing a data-driven culture as a part of their core data initiatives. Driving adoption is the key to unlocking true business value from a data stack.

For adoption to happen, the value of technology must be communicated and shared across the company. Utilizing a data stack with agile technologies that grows with the company, inspires new ideas, and unifies all business departments makes this value obvious.

Once a culture achieves data appreciation — and there is a desire to shift the culture to revolve around data — being “data-driven” can go beyond theoretical.

When guided by the use of the right technologies, it can become one of the operational foundations of the company.

See what else happened at the Barcelona Meetup here and check out future events with Looker near you.

Looker Lets You Choose What Works Best For Your Data
http://looker.com/blog/looker-supports-multi-cloud-solutions

Looker prides itself on helping customers choose the data stack that best serves their specific needs. Looker’s unique architecture lets our customers take advantage of public, private, hybrid, and multi-cloud environments—along with the features and benefits each provides.

To continue expanding our multi-cloud offering, we’re excited to be announcing the following features and capabilities that will provide greater choice for our customers.

Looker now:

  • Supports more databases than ever, with 50+ SQL dialects supported (including new additions such as Actian Avalanche, BlinkDB, Mongo, Vector, and more).
  • Supports OAuth with Snowflake to improve data governance and control.
  • Has achieved SOC 2 Type 1 compliance when hosted on Google Cloud Platform.
  • Continues to support data export (via the Action Hub) to cloud platforms such as Amazon S3, Azure storage, DigitalOcean, Google Cloud Storage, and more.

Also, beginning in November 2019, Looker will allow customers the choice to have their instances hosted on Amazon Web Services (AWS) or Google Cloud Platform (GCP), with plans to offer Azure hosting in early 2020.

And, as always, you can self-host.

We continue to invest in capabilities that enhance our multi-cloud approach—as well as improving interoperability with a wide ecosystem of technologies.

Expanded choice of databases and dialects

Looker speaks to databases in the language they understand—SQL. But because every database is different and speaks a slightly different dialect of SQL, creating universally applicable SQL queries is virtually impossible.

So, Looker takes a different approach. Our platform abstracts your query from the underlying SQL dialect and allows data teams to write a query once, leaving the SQL creation to Looker.

Looker now speaks 50+ different dialects of SQL, including those of the most popular modern database and data warehouse technologies. The latest database integrations from Looker include Actian Avalanche, BlinkDB, Mongo, and Vector.
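To make the abstraction concrete, here is a minimal LookML sketch; the connection name, table, and fields are hypothetical. Each field is defined once, and Looker generates SQL in whichever dialect the named connection speaks:

```lookml
# Model file: point the model at a connection. Swapping the connection
# (e.g. to a different warehouse) is the only change needed here.
connection: "snowflake_prod"

explore: orders {}

# View file: fields are defined once, independent of SQL dialect.
view: orders {
  sql_table_name: analytics.orders ;;

  dimension: status {
    sql: ${TABLE}.status ;;
  }

  measure: total_revenue {
    type: sum
    sql: ${TABLE}.revenue ;;
  }
}
```

Querying `total_revenue` by `status` against this model produces a dialect-appropriate `SELECT ... SUM(...) GROUP BY ...` without the analyst writing any SQL.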

“We are proud to partner with Looker to provide our customers powerful modern data infrastructure on premises or in the cloud environment of their choosing. Together we’re helping our customers realize the true value of their data virtually anywhere and at any scale.”
— Jason Wakeam, VP Business Development and Alliances, MemSQL

Supporting multiple databases and their SQL dialects has direct business value. Looker customers can choose the database that best suits their data needs—and Looker’s support for a wide range of databases can simplify migration.

One organization using Looker with two different databases as they migrate between enterprise data warehouses (EDWs) is HR tech company Namely. By leveraging this mix of technologies, they’re continuing to bring intuitive, powerful HR tools to midsize companies.

“At Namely, data security and privacy are extremely important to us, and so is the database we choose. With Looker, we don’t need to rewrite all our queries to make them work with a new database. Looker helps us focus on putting data in the hands of users, wherever it’s located.”
— Jessica Ray, Sr. Product Manager, Reporting & Analytics, Namely

Simplified authentication and control for Snowflake users

In addition to supporting new SQL dialects, Looker is continually updating and improving how it supports databases. As a part of this announcement, we now support OAuth with Snowflake to help our customers using Snowflake authenticate and authorize data access between the two systems.

OAuth is an open-standard protocol that allows supported clients authorized access to Snowflake without sharing or storing user login credentials.
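On the Snowflake side, this kind of integration is typically enabled by creating a security integration that registers the partner client. The sketch below is illustrative, so check Snowflake’s documentation for the exact syntax your account requires:

```sql
-- Illustrative: register Looker as an OAuth client in Snowflake.
CREATE SECURITY INTEGRATION looker_oauth
  TYPE = OAUTH
  ENABLED = TRUE
  OAUTH_CLIENT = LOOKER;
```

Once enabled, Looker users authenticate to Snowflake with their own credentials via OAuth, so queries run under each user's Snowflake role rather than a shared service account.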

SOC 2 Type 1 compliant

Looker hosts and manages Looker deployments for the vast majority of our customers. As a Looker customer, you can now choose which underlying cloud provider hosts your Looker instance.

Looker-hosted instances have historically been deployed on Looker’s virtual private cloud (VPC) on Amazon Web Services (AWS). Beginning in November, customers can choose between hosting on AWS or Google Cloud Platform (GCP).

Customers can even self-host on private infrastructure if necessary. Looker plans to support hosting on additional cloud providers soon, with Azure hosting planned for early 2020.

“While we deepen the integration of Looker into Google Cloud Platform (GCP), customers will continue to benefit from Looker’s multi-cloud functionality and its ability to bring together data from SaaS applications like Salesforce, Marketo, and Zendesk, as well as traditional data sources. This empowers companies to create a cohesive layer built on any cloud database, including Amazon Redshift, Azure SQL, Snowflake, Oracle, Microsoft SQL Server, or Teradata, as well as on other Public Clouds, and in on-premise data centers.”
— Thomas Kurian, CEO, Google Cloud

Earlier this year, Looker achieved Service Organization Control 2 Type 1 (SOC 2 Type 1) certification for our hosting environment on GCP. Looker already maintains a SOC 2 Type 2 report for Looker Cloud instances hosted on Amazon Web Services (AWS).

The SOC 2 report for GCP demonstrates Looker’s commitment to security, availability, and confidentiality in our hosted production environments.

This report includes design and operating effectiveness tests for our existing hosted environment. The report also provides information on our practices — ranging from vulnerability management to endpoint protection — that affirm your information has appropriate safeguards in place.

Data where you need it, even across clouds

Looker continues to make it easy to operationalize data and insights using Looker Actions. Actions allow Looker users to deliver data directly into workflows using connected systems such as Slack or Jira.

Actions also allow users to deliver data between clouds, with query results dropped into a range of cloud storage “buckets” for use within those clouds. These pre-integrations include out-of-the-box data delivery into Azure, Google Cloud Storage, Amazon S3, DigitalOcean Storage, and other systems.

Using Looker Actions to deliver query results between clouds allows customers to leverage features that are available in specific clouds.

For example, suppose you use Google BigQuery as your enterprise data warehouse but want to use Amazon SageMaker for machine learning. Looker can automatically deliver training, validation, or test data sets into an Amazon S3 bucket on a schedule you define, for use by your data science team.

In other words, Looker Actions enable the use of cloud features across a range of cloud providers.

Looker supports your cloud strategy

Every organization’s data environment is as unique as its business. Success requires an analytics platform that supports your unique data stack.

Looker can do that.

With Looker, you’re able to build on cloud technologies regardless of the cloud provider. You can consolidate into a single cloud, or leverage the benefits of multi-cloud quickly, securely, and in a consistent, governed manner.

Whatever your strategy may include, Looker is here to help you do more with your data.

Experience platform freedom. Request a demo or proof of concept to see it in action.

Looker Achieves SOC 2 Type 1 Certification for Google Cloud
http://looker.com/blog/looker-soc-2-type-1-certification

Looker is excited to announce new hosting options for our customers hosted in the Looker Cloud. Beginning in November, Looker customers can choose to have their instances hosted on either Amazon Web Services (AWS) or Google Cloud Platform (GCP), with plans to expand these hosting options to include Microsoft Azure in early 2020.

Customers who choose to have Looker host their environments can focus on data and insights while avoiding unnecessary infrastructure management burden. Looker-hosted customers receive benefits that include system performance monitoring, managed upgrades, backups and recovery, and more.

As we continue to support a choice of clouds, our customers’ growing multi-cloud needs, and a range of hosting options, our commitment to security and compliance best practices remains strong. In June 2019, Looker achieved Service Organization Control 2 (SOC 2) Type 1 for Looker Cloud hosted on GCP, well before hosting on GCP becomes generally available. The SOC 2 report demonstrates Looker’s commitment to the principles of security, availability, and confidentiality in our hosted production environments.

Looker already maintains a SOC 2 Type 2 report for Looker Cloud instances hosted on Amazon Web Services (AWS). This report includes tests of design and operating effectiveness for our existing hosted environment as well as information on our practices, ranging from vulnerability management to endpoint protection, providing assurance that your information has appropriate safeguards in place.

Looker has also chosen to transition to a Kubernetes architecture, which provides more hosting flexibility than traditional virtual environments. The SOC 2 Type 1 for GCP report covers tests of design for our new Kubernetes-based infrastructure on GCP. Now that we have our SOC 2 Type 1 for GCP, we will test it, along with our AWS infrastructure, during our Type 2 audit at the beginning of next year.

If you’re interested in learning more about Looker’s approach to security, compliance, and the security responsibility we share with our customers, check out our security page or reach out to our team to speak directly with a Looker expert and see the platform for data in action.

Building with Looker: What I Learned During My Internship
http://looker.com/blog/building-with-looker-during-my-internship

If there’s one thing I learned during my time at Looker, it’s that the Looker platform is really, really cool. Being able to answer any business question with a few clicks is a powerful ability, especially when everyone is enabled to do so.

Connor, our resident Looker Marketing Analytics Manager (and my mentor), knows our platform and how to leverage data with it exceptionally well. He's so knowledgeable that marketers often recruit him when they are building out their own explores or dashboards.

He realized that many of the marketing team’s data requests are similar. However, there wasn’t one single dashboard he could point folks to so they could find and use the metrics they were looking for.

My project was born to help solve this bottleneck and enable the marketing team to easily find and pull the metrics they needed—when they needed them.

There’s more to marketing than just SWOT

To kick off my project, I interviewed people within the Demand Generation team to learn more about their business and how each of their jobs impacted the marketing team and beyond. It was a great way to learn the ins and outs of marketing and how teams impact Looker as a whole. It was especially fun to learn how intricate the Marketing org is.

I took a Fundamentals of Marketing class at Cal Poly, but I didn’t realize just how much more I had to learn. I made a joke with Connor and Brenda, my manager, that there is way more to marketing than what I learned in school.

One of the main things I learned in that class was how to create a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis. But my time at Looker taught me there is so much more to marketing than just SWOT analyses.

The beauty of the unknown was that it gave me so much room to learn and grow. I learned the different parts of the marketing funnel, how Marketing and Sales intertwine, and the marketing metrics used to measure all of this in Looker (which was the best part).

Did someone say HTML?

After conducting all the necessary interviews, it was time to get started on my dashboard. I began by compiling a list of all common data requests, such as the number of Marketing Qualified Leads (MQLs) in a quarter and Stage 1 to Stage 2 opportunity conversions.

From there, I started to create Looker explores based on these common requests. As I was building them out, I would occasionally notice I had missed adding a filter and would have to go back and edit my work.

When I told my mentor about these instances and asked him how to check my work best, he replied “Do you see why we need your project? It’s not just you that’s missing a filter here and there.”

Hearing this was so comforting and made working on this project an even better learning experience. I began to realize that my project would have a huge impact once it was completed.

Once my explores were built and pre-filtered for common marketing metrics, I began working on the layout of the dashboard. One thing I definitely wanted to include was Text Tiles.

You can include anything on a Text Tile—from a header for your dashboard to a button that links out to a website or explore. To make the most of Looker’s Text Tiles, I needed to format them using HTML, which Looker then renders on the dashboard.

Since I’d only taken a handful of coding classes in school, I worked through learning HTML and getting the formatting right to display what I wanted. It took some trial and error, but referring to an internal Looker dashboard created to guide anyone diving into the world of Text Tiles helped a ton.
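As an illustration of the kind of markup a Text Tile can hold, here is a hypothetical header plus a button-style link to a pre-filtered explore (the URL and styling are placeholders, and Looker may restrict which tags and attributes it renders):

```html
<h2 style="text-align: center;">Marketing Metrics Hub</h2>
<p style="text-align: center;">
  <a href="https://example.looker.com/explore/marketing/mqls"
     style="background-color: #4285f4; color: #fff; padding: 8px 16px;
            border-radius: 4px; text-decoration: none;">
    MQLs by Quarter
  </a>
</p>
```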


How to use the dashboard

So how will marketers at Looker use this dashboard?

Say you’re a new member of the Marketing team at Looker. In your role, you need to track how many Stage 1 opportunities convert to Stage 2 opportunities, but you’re still learning how to use Looker to find your answer. Or perhaps you know how to make an explore, but you’re not completely positive that the data you’re pulling is correct.

With this dashboard, you can read through, find exactly what you’re looking for, click on a button that takes you to an already created, pre-filtered explore, and then add that explore to your own dashboard. This will enable everyone—in marketing and beyond—to use this information to learn and build dashboards with the metrics they need.

What comes next?

The best (and possibly most exciting) thing about my project is that it can continue to be iterated on. While my dashboard specifically focuses on top-of-the-funnel metrics, Looker lets you add middle- and bottom-of-the-funnel metrics or any other metrics you may need for successful data analysis.

Everyone, regardless of technical expertise, should feel empowered to build their own explores and dashboards. I’m proud that my project will enable folks to do just that. I hope that everyone will be able to make the most of their data with dashboards like the one I built.

A Collective Impact: Interns of Looker
http://looker.com/blog/interns-of-looker

Uncharted Territory

Larry Tran walks into orientation with the confidence of familiarity.

This summer, he has created a dashboard to “guide and give pulse to how EMEA Marketing, Sales, and Sales Development reps are doing throughout the year or quarter” — a project that has given him insight into the challenge of international business.

He spends his mornings on calls with the European offices and feels his time is valued — that his peers recognize the challenge of his task and appreciate his efforts to build a globally beneficial tool.

“[Europe] is very different from the US... different territories within Europe have different needs and cultures. Thus, I created a dashboard specific to each of their needs that helps guide decision making and gives insight into how they’re doing.”

This is his second summer interning at Looker — an opportunity that came with an abrupt change in perceived expectations.

At Cal Poly, the path of a BIS major is clearly defined. Business Information Systems majors are ushered towards internships, and eventual employment, with the Big Four — major accounting and consulting firms partnered with the university. Larry has watched his brother’s future take shape through these opportunities and is excited to do the same.

A recruiter, however, tells him he is “not as focused on BIS as other candidates” because of his interest in marketing. He finds Looker at a career fair — he knows nothing about the company but they are excited by his BIS background and curiosity for other fields.

Now on his second summer interning for Looker, he is thankful he leaned into what spurred his interests.

“Without being told no, I would not have ended up here.”

Empowered By Looker

For Saloni Agrawal, the Looker internship comes with unexpected revelations.

As an intern in the Department of Customer Love (DCL), Looker’s customer support team, she has two projects. The first is her thesis — an ongoing project hosted by DCL meant to provide lessons in storytelling through Looker to develop customer empathy. For Saloni’s thesis, she takes Airbnb data and uses Looker to discover the optimal travel options depending on location, season, and price. The second is to revamp a section of the DCL ramp for new hires. “DCL is evolving and scaling rapidly,” Saloni explains, “so we have to develop a new onboarding system by expanding on the current one to pass on information without it falling through the cracks.”

Saloni initially found Looker through a tour set up by the University of California, Santa Cruz Information Systems Management Association. There she was invited to ask questions to a panel of recent graduates working at Looker who gave her insight into the culture and work available. As fate would have it, she found Looker at a career fair soon after and applied for the internship program. “I didn’t think I had the experience necessary... but I wanted to work here.”

She remembers being engulfed in the positive energy and excitement of her mentor and manager as they burst through the door at orientation, ready to sweep her away for introductions. From the moment she meets her team, she knows this is where she wants to be.

“The other DCL interns have been amazing. When we get stuck, we ask each other first and are able to help each other find a solution... I’ve learned so much from them. I’ve found friends within them and gained a new perspective on life because of the conversations we have had. Truly happy I was able to meet them.”

Looker has been an opportunity for Saloni to realize her potential. She was unsure what she wanted to do but Looker gave her the opportunity to explore her options. She explains, “I want to be a powerhouse but sometimes I don’t know how to do that. Looker is right there, next to you, while you’re figuring it out.”

Making An Impact

As an intern on the security operations team, Roy is building an automated system that responds to alerts and notifies security of unexpected behavior. “We're snapshotting a Kubernetes container and using a malware detection algorithm I designed to alert to potential threats.” He explains, “We also developed a pre-approved whitelist that analyzes the changes in the snapshotted container and alerts if some event occurs in the container that is not on the whitelist.”

Not only has Roy built this system from scratch, but it also implements a patent-pending idea from his manager, making Roy among the first to ever work with the concept.

Having interned at three companies prior to Looker, he recognizes “what makes Looker unique is the culture.” Roy discusses his experience through his interactions with his fellow employees. Every day he enters the building, he is warmly welcomed with what he describes as “vibrant happiness.” His workday is defined by community and collaboration and a desire to help each other so that everyone can experience success.

Roy is a Ph.D. candidate at UCSC. As an undergrad, he wanted to study business, but a single class became a pivotal moment in his education — a computer science class where he built a replica of Battlestar Galactica. This is where he fell in love with his trade.

Now Roy finds himself in the position to shape both the company and data security as a whole. “I want my work to impact Looker and keep our systems secure, however, if I build a tool that could help other companies' systems, I'd be more than honored to have that wide of an impact in the security community... To contribute to a more systems secure world is something I pride myself in.”

Redefining Success

As an engineering intern, Grace Lin’s project requires her to go into the backend of the Looker platform to shorten the time it takes for Looker to connect with a client’s database. She explains, “Even if it takes a second, those seconds add up.”

“Looker is representative of the reason that I'm interested in tech. Coming into college and not really knowing what I wanted to do, I chose computer science because I saw that a lot of real change was being made through technology. I didn't know what I wanted to do but wanted to be able to make some kind of positive impact. I think it's really beautiful and special how Looker enables others to utilize tech.”

Handshake, a career network for college students and recent grads, introduces her to the Looker Internship Program. The vetting process is not what she expects. “Applying to technical positions is very impersonal... with Looker, it was the complete opposite. They carry their values from HR and recruiting to the rest of the company.”

To Grace, Looker is “perfect in that it combines computer science with data science applications.” She looks forward to learning through problem-solving as a software engineer. More than that, she enjoys her team for their desire to help her grow.

Situated between her managers’ workspace and the co-founder’s office, Grace’s desk is a center for collaboration. With the position of her workspace, she gains inspiration from observing this environment, especially the interactions of Looker Co-Founder and Chief Technical Officer, Lloyd Tabb. “I can see how much he loves this company... He’s coding every day and talking to people — always helping them be the best they can be.”

Grace’s intern experience allows her to partake in a truer definition of success — becoming a person who works toward compassionate understanding to further others and, thus, the company as a whole.

A note from the author

I spent most of the time leading up to this internship worried about whether or not this was the right fit for me.

Performance and writing are my passions — pursuits I momentarily felt I was giving up.

But I’ve already touched on my experiences, so I’ll be brief.

It took five minutes to change that.

Five minutes of talking to other interns, meeting them for the first time and realizing they were nervous too. For their own reasons, but nervous nonetheless.

There is not a single experience that you are alone in — my peers see that. We have been built up from that foundation — supported and cultivated in an environment teaching us to recognize success as something we experience together.

It does not matter if you’re a musician, a lawyer, or an intern, Looker helps you build the mindset to continue to find success in all endeavors.

Data Storytelling in Action: Using Data to Guide your Fantasy Football Draft Strategy (Part II)
http://looker.com/blog/data-driven-fantasy-football-draft-strategy-part-ii

Last year, I wrote a post in preparation for the 2018 Fantasy Football season about two concepts to keep in mind when approaching your draft: Holistic Value and Pick Value.

This year, I’ll expand on these concepts while introducing new ways to prepare for your draft. You can find all of this information on this interactive dashboard.

One mistake I’ve made in the past (and I’m sure I’m not alone) was overly focusing on the top of the draft at the expense of my picks in later rounds.

I’ve found that it’s just as important to round out the bottom of your roster as it is to draft your stars at the top.

You can learn from my mistakes. I’m going to lay out the exact strategies I now use to prepare for every stage of the draft.

Here are some concepts to be aware of that I’ll discuss throughout the post.

Key Concepts

Round Composition

Round composition is the strategy of considering multiple rounds at a time to understand the makeup of players to be drafted by position — rather than solely focusing on a single pick or a single round. You can then match that strategy to your team’s draft needs.

While this sounds simple, many people default to looking pick-by-pick or round-by-round instead of considering the bigger picture.

For example, in rounds three and four, there are twelve Running Backs (RBs) to only seven Wide Receivers (WRs). Even if you need a RB, it may make sense to still target the best available receiver in the third round knowing there will be plenty more RBs to grab in the fourth.
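The round-composition tally described above is simple to compute. A sketch in Python, using a hypothetical two-team board rather than real rankings:

```python
from collections import Counter

def round_composition(board, first_round, last_round, teams=12):
    """Count positions drafted across a span of rounds.

    `board` is a list of (player, position) tuples in projected draft
    order for a league with `teams` picks per round.
    """
    start = (first_round - 1) * teams
    end = last_round * teams
    return Counter(pos for _, pos in board[start:end])

# Hypothetical two-team board for illustration (2 picks per round).
board = [
    ("RB one", "RB"), ("RB two", "RB"),    # round 1
    ("WR one", "WR"), ("RB three", "RB"),  # round 2
    ("WR two", "WR"), ("TE one", "TE"),    # round 3
]
print(round_composition(board, 2, 3, teams=2))
# Counter({'WR': 2, 'RB': 1, 'TE': 1})
```

Run against a real mock-draft board with `teams=12`, the same call surfaces imbalances like the twelve-RB-to-seven-WR split noted above.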

Player Trajectory

Each player’s year-over-year trajectory is something I always like to keep in mind. Simply put, is a player expected to take a leap forward — or take a step back — this year? You want to watch out for guys in line for a significant regression, especially with respect to their draft position.

Regression usually comes when a player is aging, injured, or coming off a “fluke” year that may be hard to repeat. The goal is to mitigate risk by avoiding players with any of the above.

Looking at the first couple of rounds, you’ll see all the top guys are expected to regress a little (which makes sense considering they are all coming off of spectacular seasons).

But if you look at the Projected Points Per ADP vs. 2018, it's only a small regression compared to their overall value. Players like Nick Chubb, Odell Beckham Jr., and Dalvin Cook are other examples of guys trending in the right direction.

Player Point Composition

The composition of how a player scores fantasy points is crucial in later rounds. For example, the point composition of a top-five RB in 2018 looked like this:

This tells me to look for RBs who also catch a lot of balls. This is a great benchmark for finding RBs in later rounds who match this composition in hopes of finding a breakout star.
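As a rough sketch of how such a composition can be computed, assuming common scoring rules (0.1 point per yard, 6 per touchdown, 1 per reception in full PPR); the season line below is hypothetical, not a real player’s:

```python
def point_composition(rush_yds, rec_yds, receptions, tds, ppr=1.0):
    """Break a fantasy point total into its sources, as fractions."""
    parts = {
        "rushing yards": rush_yds * 0.1,
        "receiving yards": rec_yds * 0.1,
        "receptions": receptions * ppr,
        "touchdowns": tds * 6,
    }
    total = sum(parts.values())
    return {source: round(pts / total, 3) for source, pts in parts.items()}

# Hypothetical pass-catching RB season: 1,000 rushing yards,
# 500 receiving yards, 50 catches, 10 total touchdowns.
print(point_composition(1000, 500, 50, 10))
# {'rushing yards': 0.385, 'receiving yards': 0.192,
#  'receptions': 0.192, 'touchdowns': 0.231}
```

A back whose receptions and receiving yards together account for a third or more of his points fits the pass-catching profile worth hunting for in the later rounds.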

Okay, now onto the round breakdowns where you can see these concepts in action.

Premier Rounds (Rounds 1-4)

These are the rounds that should determine your weekly starters. Remember, it's about total points, not the team that looks best on paper when the draft ends.

Running Backs

Last year I recommended spending your premium picks on RBs, specifically targeting “bell cow” RBs who average more than twenty attempts per game.

Well, one year changes a lot.

Last year there was only one RB (Ezekiel Elliott) who averaged more than twenty rushes per game, compared to eight backs the year before. This year, zero RBs are expected to have that volume.

With that information, it makes sense to explore RBs with more involvement in the passing game. To do so, I brought in a new metric this year to evaluate RBs: receiving targets per game.

Now our list is starting to resemble mock draft RB rankings:

I like that Ezekiel Elliott is falling a little, allowing someone with a mid-round pick to steal him (assuming he ends his contract dispute). Other guys to watch out for are Le’Veon Bell (coming off a year of not playing), David Johnson, and Nick Chubb. Joe Mixon was surprisingly high on this list as well (fifth).

RBs are plentiful in rounds three and four, but WRs are not, so I recommend taking the best player available (or a WR) in the third and waiting for RBs like Chris Carson, Mark Ingram, or Josh Jacobs in the fourth.

If somehow Damien Williams falls to you, I like him as well. All are expected to improve from last year’s performance (Jacobs is a rookie).

Wide Receivers

I couldn’t believe it when I looked at the top forty projected players and saw only fourteen receivers. Contrary to my belief in prioritizing RBs early, it looks like WRs should be the focus this year (aside from the top four or five picks).

Let’s take a quick look at our star WR point composition. The 22% on touchdowns stands out to me. Focus on finding WRs who don’t rely on big yardage every week and are projected to score touchdowns instead.

Last year, the guys with high touchdown counts were DeAndre Hopkins, Davante Adams, Tyreek Hill, and Antonio Brown.

This year, Davante Adams and DeAndre Hopkins are both expected to reach double digits again and should be taken very early, with JuJu Smith-Schuster and Odell Beckham Jr. joining Tyreek Hill and Antonio Brown in the next tier.

A sleeper who seems to be falling in drafts is Amari Cooper, the only WR expected to increase his average points per game this year.

Tight Ends

Travis Kelce is a safe pick, and George Kittle is acceptable as well, but I think there are TEs you can find in the later rounds that will be just fine.


Quarterbacks

You don’t need to focus on QBs in the early rounds. Take Patrick Mahomes if you want, but generally, you can wait.

Later Rounds (Round 5-10)

While you’ll still be rounding out your roster with starters in these rounds, it's important to think about your premier bench players here as well. These are players that you can mix into your lineup throughout the year or who may become starters if other players get hurt. Quality depth is important.

Running Backs

The mix of RBs and WRs is more even in these rounds, and people will likely draft the rest of the starting QBs:

As you can see, guys like Sony Michel and Philip Lindsay are expected to be near double digits in projected touchdown volume. Philip Lindsay also had ten touchdowns last year.

Overall, the RBs in these rounds are projected to average points per game similar to the WRs (strengthening the argument for prioritizing WRs early).

An interesting player from a point composition standpoint is Kenyan Drake. Out of all the RBs in the later rounds, his point composition most resembles the star players since he is involved in the passing game.

Watch out for him as a sneaky sleeper.

Wide Receivers

WRs in the later rounds can be tricky as many of them won’t be the number one option on their teams. If you can find players who are the number one option, they could have good value this late.

If not, look for players with a high trajectory from the previous year, and receivers expected to get a lot of yards (or who play for teams with good offenses).

It looks like many of the WRs in the later rounds are expected to improve. Guys like Will Fuller, Curtis Samuel, and Christian Kirk stand out. For players who are likely to have the most targets on their team, I like Julian Edelman, Kenny Golladay, and D.J. Moore.

If you are targeting players with high touchdown projections, Calvin Ridley, Cooper Kupp and Mike Williams are good picks.

Another note on D.J. Moore: he is expected to improve his points per game by two points. This makes sense now that he is in his second year and likely the number one receiver for Carolina. Could be a great late-round pick.

Tight Ends

I like to depend a lot on previous-year stats when I look at Tight Ends. If you take a look at last year’s stats for the Tight Ends projected to go in rounds five through ten, Zach Ertz is a clear front-runner. If he is hanging around, it makes sense to grab him. Eric Ebron also stands out with his fourteen touchdowns from a year ago.

Generally speaking, from Zach Ertz down to David Njoku, it's only a 2.2 point difference in projected points per game over the five-round span. Don’t stress too much about TEs as all of the ones listed here will be serviceable. Focus on rounding out your WR and RB starters before grabbing one of them.


Quarterbacks

Similar to TEs, I think there are plenty of QBs to go around. There are usually more starting-quality QBs than there are teams in your league.

Again, looking at projected points per game, there is only a one-point difference between Deshaun Watson and Cam Newton.

With this in mind, let’s try and find guys projected to make a leap forward this year:

Baker Mayfield and Carson Wentz stand out. I think Baker Mayfield is in for a big season. Kyler Murray is a rookie, hence his expected improvement. And lastly, similar to finding RBs who catch the ball this year, people seem to be gravitating towards QBs who can run.

I don’t necessarily hate this strategy, but let's go back to the composition of the top five QBs last year:

Only 5% of points came from rushing. I’d be careful about going all-in on that strategy. Guys like Baker Mayfield, Matt Ryan, and Carson Wentz stick out as potential top-five QBs this year, in my opinion.

Happy drafting!

In conclusion, make sure you are looking at your draft holistically (two, three, or even four rounds at a time) to understand when to draft best-available, when to draft for team needs, and where you may find value at different positions.

Use this dashboard to scout out which players are expected to be available in different stages of the draft. Additionally, make sure you’re looking at both last year’s stats and this year’s projections when picking players. You want players expected to improve, not regress.

Finally, have fun...after all, it’s only fantasy football.

<![CDATA[Your Data, Integrated: Embed Looker in Your Tools & Bring Analytics into Your Team’s Everyday Workflows]]> http://looker.com/blog/private-embedding-examples-workflow http://looker.com/blog/private-embedding-examples-workflow One of Looker’s coolest capabilities is embedded analytics: bringing data to people in the tools where they already spend their time. People can get their answers in the same window, without having to go somewhere else or ask anyone for help.

And data analysts are freed up from a ton of everyday data requests — hello, time for awesome projects!

You can empower your organization this way too. Embedding Looker dashboards, Explores, and Looks in your internal tools and applications for analytics is so easy we explained how to do it in five steps.

Embedded reports come with the Looker security you expect since they’re presented through an iframe (not passed into the tool).

Sounds great in theory, but what are people actually doing with embedded reports? How might you integrate them into your teams’ workflows?

Here’s some inspiration to get your wires firing. Below are two really effective examples of embedding Looker. Get ready for some sparks!

Embedded Analytics in Zendesk

Our beloved Department of Customer Love (if you’ve ever talked with Looker chat support, you’ve met them) spends its days in Zendesk. Chats move quickly, and the team needs information in real time to be as helpful to Looker customers as possible.

To that end, our experts embedded data from Looker in the Zendesk side panel to give our support team some context about the folks they’re chatting with.

They can quickly get information about the version of Looker that someone is running, which types of users are active in their instance, and other details that provide helpful context.

If you’ve ever wondered why our DCL team is so great, now you know the secret: not only are they a highly technical team of fun-loving people, their awesomeness is fortified by data.

Embedded Analytics in Salesforce

We also embedded Looker in our Salesforce instance. Check it out:

The embedded bar chart at the bottom gives our customer success team a customer’s license information right on an account page. This way, the team can quickly see if an account is over-deployed and reach out if necessary.

Where Will You Embed?

These are just two of the many possibilities of privately embedding Looker. Compare different platforms for embedded analytics by downloading our whitepaper.

Which tools in your company would benefit from embedded data?

Get the process rolling today with our 5-step "cheat sheet."

Jill Hardy
Content Strategist, Customer Education

<![CDATA[5 Tips to Make Your Next Dashboard Your Best Yet (Designing Dashboards for UX/UI)]]> http://looker.com/blog/dashboard-ux-ui http://looker.com/blog/dashboard-ux-ui Today I’m going to describe five principles that will help you create dashboards that serve the people who count, rather than just serving up data.

The principles are:

  • finding your dashboard’s “big idea”
  • getting buy-in with a wireframe
  • ensuring clarity
  • keeping it simple
  • creating a good flow of information

I like to think of the first two as the research phase because they take place before I start developing my dashboard. And I think of the last three as the creation phase, since I’m thinking about them as I build.

A clear dashboard that focuses on a central theme speaks for itself. You’ll spend less time explaining the dashboard, and data-driven decisions can be made more easily because the right information is readily accessible.

Sounds like a solid way to work, doesn’t it?

Well then, let’s get started.

The Research Phase

What’s the Big Idea?

Knowing what you want to convey is the foundation of building an amazing dashboard. What’s the big idea behind your dashboard? Its raison d’être?

The best way to get these answers is by having a conversation with your audience. It’s crucial to understand who the audience is, what they hope to accomplish with this data, and the actions they will take based on the information they see.

For instance, an executive making strategic business decisions needs different information than an operations manager who is keeping things running smoothly on a day-to-day basis.

Both have a goal. Asking your audience what they’re hoping to get out of their dashboard is the first step in making it happen. Your conversation might go something like this:

You: What’s your role here at Housing Inc.?

Audience: I’m a housing development manager. I oversee the development process from conception to ribbon-cutting.

You: Great, thanks! I understand you want some information about the housing markets in California. What specifically do you want to get out of this dashboard?

Audience: Oh, thanks for asking. I want something that will help me determine where to pursue our next development.

You: Good to know. What kind of information helps you determine that?

Audience: I need to see trends in the marketplace… what are the local rents like for different unit sizes? How much is property selling for? What’s the median income for the area? Are there affordable housing requirements in the local area? How long is the typical permitting process?

It would be great if we could set all of this data up and then I could change which location I’m looking at to compare different markets.

You: If you had this information in front of you, what would you do with it? What action would you take?

Audience: If I saw something that looked promising, I would pick up the phone and start making calls to people in the area to get our process rolling.

You: If you had access to all of that information from different areas, would it be enough to pick up the phone? Is there anything else you might need?

Audience: You know, another thing that can have a surprisingly significant influence on the decision to build is parking. If there is ample parking in the area, meaning we don’t have to build it ourselves, we’ll save a ton of money. I’d like to see what parking is like in each area as well.

You: Good to know — I’ll make sure we get you the parking information too. Thanks for the chat today!

Easy, right?

Even if you aren’t talking to a housing development manager, this method of drilling into the details and making sure that the dashboard will be actionable applies universally.

To help, we put together a guide about how to talk to your audience, complete with suggested questions and a space to take notes. You can download it here.

Get Buy-in with a Wireframe

Not only will these conversations ensure your dashboard is useful, they’re also a way to get buy-in from your audience before you begin. That means you lessen the chances of your audience changing their requirements after you spend time and energy building your dashboard.

To be extra sure you’re delivering what your audience wants, create a wireframe. A wireframe of a dashboard represents what it will look like when it’s finished. Include which types of visualizations will represent the data.

Simply drawing on a piece of paper can do the trick. The point is to give your audience a preview of what the dashboard will look like and work with them to refine it — before you create it.

Once you have agreement on the purpose of the dashboard and its content, it’s time to start building. As you do so, keep in mind the principles of clarity, simplicity, and flow.

The Creation Phase

Make it Clear

Clarity ensures viewers understand what the content of a dashboard means.

Use descriptive titles, labels, and notes to make it clear what people are looking at. They should know what each number and visualization is saying without having to ask. The dashboard pictured below exemplifies the use of these features.

Use your audience’s lingo as you title visualizations and add descriptions.

For instance, business users aren’t as familiar with your data as you are, and they won’t know how to translate database column names into business definitions.

As a data analyst, you can bridge that gap by using language that business users are acquainted with.

Make it Simple

Everything on the dashboard should have a purpose. Think about the “big idea,” the action your audience wants to take after seeing this information. Does every tile support and inform that action? If you find one that doesn’t, show no mercy — remove it.

Ideally, you’ll provide viewers the option to drill into more detail if they’re curious.

The sales manager’s dashboard shown below exemplifies simplicity. Every tile answers a piece of the question, “How is my team performing?”

At the top are actual numbers against quota. Underneath that are person-by-person numbers, where I can see how everyone is doing at booking meetings.

With this visualization, I can pinpoint who might have strategies to share with everyone, as well as who might need them.

Now, notice what isn’t here:

  • Extraneous text: There is no Y-axis label on the two-column charts because it’s obvious what the numbers mean.
  • Too many decimal places: The value_format parameter is used to limit the number of decimal places shown.
  • A rainbow: The color scheme here is simple. For instance, the Total Won ACV chart stands out immediately because it’s displayed in purple, while everything else is blue or green.

If you’re a LookML developer, you have some extra tools to simplify your dashboards. You can use drills and linking to provide details that fall outside of the scope of the dashboard.

Note: If you’re creating embedded dashboards with Looker, be extra careful to make sure that your audience can access any links or drills you provide. Embed users typically have very specific permission sets — they may not be able to drill, for instance, so be sure to create a smooth experience by providing content that is within their permissions.

Let it Flow

Flow means the dashboard presents a steady, sensible, well-organized stream of information. It’s all about where you put which content.

Positioning Content

Think of your dashboard like a news website — you want the headlines at the top. People can then scroll for more details.

Take advantage of the way people read information. For English readers, that pattern is top left, then right, then down and to the left (if your audience reads a different language, follow their common reading pattern).

In the example below, the Campaign Performance section is the headline. If someone were curious for more detail, they could look to the Profit Analysis section that starts below it.

Use margins and titles to frame the visualizations and break up the story into sections (keeping each section to about a page whenever possible).

Lastly, notice the alignment of the tiles. Standardizing the sizes of the tiles and neatly lining them up keeps your dashboard easy to read.

Colors in Embedded Dashboards

If you’re embedding, you can customize the colors of your dashboards to match the app or webpage where they appear. This maintains flow within your product, as in the example below.

You can even create a customized theme to apply across all of your embedded content.

If you customize the theme of your embedded dashboards:

  • Create a high contrast between the text and background colors so that the text is easily readable.
  • Don’t get too carried away with colors. Playing is fun, but remember that the data is the star of the show. The point of having a dashboard is for your customers to take action on the information that they see.


Well, there you have it — all my best dashboarding tips in one blog post. Which one surprised you the most?

Can you think of a dashboard you already created that could be made even better by implementing some of these ideas?

Know of any helpful tips I missed?

Tell us about it in the Community.

Until next time,
Jill Hardy
Content Strategist, Customer Education

<![CDATA[onPeak + Looker: Increasing Hotel Bookings with Automated Workflows ]]> http://looker.com/blog/onpeak-workflow-automation http://looker.com/blog/onpeak-workflow-automation At onPeak, we manage hotel negotiation and reservation processes for clients that organize meetings, trade shows, and large conferences. We are focused on providing excellent hotel booking experiences for our attendees.

To do this, we needed a solution that gave everyone in the company access to accurate data. We implemented Looker as our company-wide data solution in 2017.

As we implemented Looker, we realized it was so much more than a BI tool. Once we got the foundation in place, we could automate entire workflows and fully capitalize on our data. Here are some examples of how we did just that.

The pre-Looker manual workflow

One of our top priorities is ensuring that our room blocks are full and that both our clients and the hotels are happy. Naturally, we wanted to reach people who had registered for an event but hadn’t booked their room yet. Identifying these people was difficult because our registration and booking data lived in separate databases.

To create our email lists, we needed to tie these two sources together. Unfortunately, this required one of my analysts to manually pull data from both systems, stitch the CSVs together, dedupe the bad data, and reconcile everything in Excel.

We would then email the Excel file to the marketing department, who would manually upload it to their system and add it to a campaign.

Automating the flow with Looker

Step 1: Make the list

Because Looker transforms data at query time, we were able to dump our somewhat raw data into Amazon Aurora. We then joined our two data sets together to see which email addresses existed in the registration table but not in the hotel bookings table.
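A minimal sketch of that set-difference logic, with made-up table shapes and field names (the real query ran in Looker against Aurora, so this is only an illustration):

```python
# Hypothetical sketch: find registrants whose email has no matching
# hotel booking. Field names and records are illustrative, not
# onPeak's actual schema.

def registrants_without_bookings(registrations, bookings):
    """Return registration rows whose email appears in no booking."""
    booked = {b["email"].strip().lower() for b in bookings}
    return [r for r in registrations
            if r["email"].strip().lower() not in booked]

registrations = [
    {"email": "Ana@example.com", "event": "expo-2019"},
    {"email": "bo@example.com", "event": "expo-2019"},
]
bookings = [{"email": "ana@example.com", "hotel": "Grand"}]

print(registrants_without_bookings(registrations, bookings))
# → [{'email': 'bo@example.com', 'event': 'expo-2019'}]
```

In SQL terms this is a left join from registrations to bookings that keeps only rows where the booking side is null.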

Step 2: Send the data to marketing

We knew the data that came out of Looker, rather than our manual process, was going to be up-to-date and accurate. That gave us (and our marketing team) the confidence to automate the next few steps. We scheduled the data to be sent to an S3 bucket every night. From there, we set up a trigger in Salesforce Marketing Cloud to fetch new results every morning.

Step 3: Send emails

The marketing team was then able to automate their process and set up campaigns with parameterized emails that ran off the automated data. Once the list came in, we sent registrants a tokenized email with promotional information about the hotels available for their event.


I am thrilled with this new automated workflow because it gives back hours of analyst time to our teams every week. But the real winners were our customers (and by extension, our business). By automating this workflow, we were able to increase the percentage of email recipients that booked a hotel room by 50%, resulting in increased revenue for us — and our customers.

Stay up to date on trending topics in big data, data analytics stories, product news, and more by subscribing to the Looker blog.

<![CDATA[3 API Tools to Delight Your Embed Customers]]> http://looker.com/blog/api-tools-inspiration http://looker.com/blog/api-tools-inspiration In this post, I’m going to show you three API tools to make an interface that will wow your customers—while also being fun to use.

I probably don’t need to tell you that great user experience can dramatically grow your business and solidify your position as a trustworthy brand.

In fact, according to this Forrester study1, implementing a better user experience has the potential to raise conversion rates by 400%.

That’s an inspirational metric.

And since inspiration is what this post is all about, I gathered some truly awesome examples of what you can do with the Looker API. Whether they shine a light on the exact “oomph” you want to add to your Looker experience or simply illuminate possibilities, I hope you enjoy them.

Custom Filter Bar

The dashboard below uses icons to put a picturesque spin on filtering.

How’d we do it? The filter icons aren’t a part of the dashboard; they’re part of the website where it’s embedded. The iframe that houses the dashboard “listens” to the filter bar and automatically updates when someone clicks a filter icon.

You can get the code here to try it out yourself.

Report Selector for Embed Users

This example allows embedded users to switch between different reports easily.


The “Select Report” section is set up with an API call to Looker that grabs a list of reports from a designated folder. Adding or removing anything from that folder automatically updates what the user sees. No extra development work is required to push new content out to customers.

Put the report in the right place, et voilà!
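As a rough sketch of that pattern, the snippet below turns a folder listing into dropdown options; the payload shape and field names here are assumptions for illustration, not the exact Looker API response.

```python
# Illustrative sketch of the report selector: fetch the contents of one
# designated folder via the API, then build menu options from it.
# The payload below stands in for an API response; its keys are assumed.

def report_options(folder_dashboards):
    """Build (title, id) pairs for a dropdown, sorted by title."""
    return sorted((d["title"], d["id"]) for d in folder_dashboards)

payload = [
    {"id": "42", "title": "Weekly Sales"},
    {"id": "7", "title": "Churn Overview"},
]

for title, dash_id in report_options(payload):
    print(f"{title} (dashboard {dash_id})")
```

Because the menu is rebuilt from the folder listing on every load, adding or removing a dashboard in that folder updates what users see with no code change.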

Gotta have it for yourself? The resources in our GitHub repo will get you started.

Create a Living Data Dictionary

Why build a data dictionary? For context.

Context makes the difference between something you quickly throw in the trash (a simple stick figure portrait) and something you treasure (a simple stick figure portrait...that your kid drew).

In the world of data analysis, it’s the difference between your customer glossing over a report they don’t understand and finding an insight that helps their business grow so much they start recommending your product.

But maybe your dictionary “customer” is internal.

In that case, context is the difference between your colleague coming to you for help picking fields for the report they’re building and knowing they can check a reference whenever needed—leaving you time to work on some of those stunning visuals we covered in the previous section.

The definitions are dynamically pulled from your LookML model and include the data type, description of the metric, and the associated SQL parameter.

You can even style it as you please with CSS and HTML. That’s right, your dictionary will always be fresh and looking great—and all you have to do is set it up once.
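As a rough sketch of how such entries might be rendered, assuming field metadata of the kind described (name, data type, description, SQL); the keys and output format are illustrative, not Looker's actual API shape:

```python
# Hypothetical sketch of rendering a data dictionary entry from field
# metadata. The dict keys mimic the information the post describes
# (name, type, description, SQL) and are assumptions for illustration.

def dictionary_entry(field):
    """Format one field's metadata as a readable dictionary line."""
    return (f"{field['name']} ({field['type']}): "
            f"{field['description']} [sql: {field['sql']}]")

fields = [
    {"name": "order_total", "type": "number",
     "description": "Total order value in USD",
     "sql": "${TABLE}.order_total"},
]

for f in fields:
    print(dictionary_entry(f))
```

Since the entries are generated from the model rather than maintained by hand, the rendered dictionary stays in sync as the model changes.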


Interested? Get the code here.

Bonus Eye Candy: Gorgeous API-powered Visualizations

What’s more enticing in a dashboard than sleek, highly interactive visualizations? Tacos.

But also—finding those visualizations on a page boasting seamless tab navigation with a one-click dark mode! Yep, all of this beauty is possible with the API.

Which is almost as good as street tacos. Almost.

There’s more to the “how” of this gorgeous website than I can divulge in a blog post, but you can register for an interactive demo of the website on this page to check it out for yourself.


Which tool are you going to try first? Let us know how it goes in the Community API discussion.

Until next time,

Jill Hardy
Content Strategist, Customer Education

1 The Six Steps For Justifying Better UX Business Case: The Digital Customer Experience Improvement Playbook, Forrester Research, Inc., December 28, 2016

<![CDATA[Click Attribution: Types of Models & Attribution Strategy]]> http://looker.com/blog/click-attribution-and-its-models http://looker.com/blog/click-attribution-and-its-models What is click attribution?

Click attribution is a way to determine what sources or campaigns are driving the most results for online companies. Many people like click attribution because it is trackable back to its site, email, or source, and click-through links can be programmed to include several attributes. Click attribution also allows people to see the relative performance of different messages, executions, or marketing techniques. Plus, it is a strong signal of intent or interest. Whatever content was clicked, you can assume it was compelling enough to incite action by that user.

What types of click attribution models are there?

The most common click attribution models are first-click attribution, last-click attribution, and linear attribution. There can be many variations of attribution algorithms that assign different values based on the type of transaction and channels involved. These three attribution models are common and not proprietary or algorithmic, so they are a great introduction to attribution.

What is first-click attribution?

First-click attribution is a model that assigns 100% of the credit for a sale to the first channel a user clicked through. Some customers convert on their very first interaction with a company, but many will have at least two interactions on their journey to purchase. The first-click attribution model rewards the marketing channels or activities that introduce customers to the brand.

What is last-click attribution?

Last-click attribution is a model that assigns 100% of the credit for a sale to the last known channel that a user clicked through. In effect, it is an extreme time-decay model: rather than merely weighting the most recent channel a user touched more heavily, it gives that channel all the credit. Last-click attribution is common among companies regardless of their web analytics platform.

What is linear attribution?

Linear attribution splits the credit for a sale or action into equal parts, depending on how many touchpoints were measured over the course of the customer’s purchase journey. If the user had four marketing channel interactions that ultimately resulted in a sale, each channel would be assigned 25% of the credit.
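As a concrete sketch, the three models above can be expressed in a few lines; the journey and channel names are made up for illustration.

```python
# Sketch of first-click, last-click, and linear attribution applied to
# one customer's ordered list of clicked channels. The journey below is
# an invented example, not real campaign data.

def attribute(touchpoints, model):
    """Split credit (summing to 1.0) for one conversion across channels."""
    credit = {ch: 0.0 for ch in touchpoints}
    if model == "first_click":
        credit[touchpoints[0]] += 1.0
    elif model == "last_click":
        credit[touchpoints[-1]] += 1.0
    elif model == "linear":
        share = 1.0 / len(touchpoints)
        for ch in touchpoints:
            credit[ch] += share
    return credit

journey = ["podcast", "display", "paid_social", "email"]
print(attribute(journey, "linear"))
# → {'podcast': 0.25, 'display': 0.25, 'paid_social': 0.25, 'email': 0.25}
```

Running the same journey through `first_click` would hand the full credit to `podcast`, and `last_click` would hand it to `email`, which is exactly why the choice of model changes which channels look valuable.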

How to choose click attribution models

Most companies will choose one attribution model to use in standard reporting, and often this is last-click attribution. Last-click attribution will favor channels or marketing activities that are lower in the funnel, meaning that the customer is ready to make a purchase rather than being in their discovery or shopping phase.

When evaluating the results of last-click attribution, companies should consider their entire marketing mix and targeting strategies. The truest measure for last-click attribution is an email or text channel. Almost immediately upon receiving these messages, customers or clients either do or don’t take action. Other channels, such as search, paid social, podcasts, etc. are likely driving one another. How your attribution rules are configured can make a difference in the end result of which channels or activities get ‘credit’ for the conversion.

Click attribution example

Company A runs a marketing campaign that includes paid social ads, podcast ads, and online display banners. The customer hears a podcast ad and is curious, so they look up the company in a search browser and visit the website to learn more, but they do not make a purchase. After visiting the website, the customer begins receiving online display banners and paid social ads advertising the company and product. They later hear another podcast ad the following week and note that there is a promo code offered for a discount. Later that day, the customer clicks a paid social ad, shops on the site, and at check out they enter the promo code from the podcast before submitting their order.

Depending on how the attribution rules are configured, this order could be attributed in two ways: it would either be classified as Paid Social, since that was the last channel in which a click occurred, or as Podcast, since that channel supplied the associated promo code. Ultimately, the order can only be attributed to one channel. Which channel do you think should be deemed responsible for driving this purchase?

This example may seem complex, but in reality, it is a simple one. It does not include further complicating factors like marketplace sales or brick-and-mortar purchases.

It’s for this reason that understanding attribution is both art and science. There are many algorithms available on the market and countless companies trying to crack the code of perfectly accurate tracking, but none of them can account for every piece of information or every touchpoint a consumer has. This is why comparing first-click and last-click attribution models is a good place to start. Google Analytics is great for this too, because its default suite includes both first-click and last-click models. With it, you can easily compare sales measured both ways, side by side, across multiple channels.

Additional ways to improve attribution

As the example above shows, promotional codes are another method for improving attribution. They’re often used as a measurement and attribution tactic for social influencers, on podcasts, radio, TV, and in direct mail. A great way to add an additional attribution layer is to ask customers what caused them to purchase or how they learned about the company. By introducing this one question, you can gain a better understanding of which interaction the customer found most memorable.

Combining all of these data sources to draw insights using a marketing analytics platform will give you a good idea of how your marketing activities are performing. Ultimately, you will have a range of performance depending on which data sources you have. Understanding which activities are upper funnel (introducing your brand to new potential customers) and which are lower funnel (capturing the sale from someone ready to purchase) will further help you determine what the corresponding metrics should be.

At the end of the day, there is no silver bullet to having the perfect attribution model. By collecting as much data as possible and considering the role your media mix plays in a customer's path to purchase, you can optimize your marketing spend for customer conversion based on what your optimal channel mix looks like.

Check out these tips to learn more about creating an effective attribution model.

Learn more with Daasity + Looker

Daasity has approached attribution analysis in multiple ways in our direct-to-consumer (D2C) Analytics Suite, which integrates seamlessly with Looker. The data model can use additional data beyond Google Analytics to prioritize attributes such as specific promo code usage or post-checkout survey results, and to map orders to marketing channels. Using that data mapping with Looker to visualize results, users can slice and dice data by initial order marketing channel to better determine financial metric targets.

Additionally, the D2C Analytics Suite allows users to easily view results by first click, last-click, and ad platform (view + click) in one simple graph to help gauge results.

Daasity and Looker continue to find ways to make it easier for eCommerce and D2C brands to access and see the data they need to inform strategies and tactics for growth.

For more information, visit www.daasity.com and subscribe to the Looker blog to stay up to date on future how-to’s, best practices, and data stories.

<![CDATA[How Milk Bar is Driving Data Adoption with Looker]]> http://looker.com/blog/driving-data-adoption-at-milk-bar http://looker.com/blog/driving-data-adoption-at-milk-bar

Data-driven desserts

“Wait, why does a bakery need a data engineer?” I get that a lot. I’m a data team of one at Milk Bar, the popular dessert brand by chef Christina Tosi, of Chef’s Table and MasterChef fame. In addition to sampling literally every cake, cookie, or truffle that our R&D team sends over to our office, I’m responsible for wrangling information across our omni-channel business. Like any modern retail company, we rely on strong business intelligence and a data-driven culture to open new stores, launch new products, streamline operations, and unify our customer experience online and in store.

Looker allows my colleagues to engage with the ever-growing pool of data generated by our business. Every week at Milk Bar, our leaders discuss a scheduled performance report on the health of our company, store managers receive updates on the previous day of sales, our demand planner pulls sales volumes for forecasting, and our marketers run various analyses to guide their spend. I’m proud to say that about 75% of our 30 users are actively using Looker in a given week.

But it wasn’t always like this!

Preheating the oven

In the past, Milk Bar operated like most restaurants or food companies, which lag far behind when it comes to analytics. Our finance team was the conduit for information, and every request required pulling reports from multiple systems and stitching them together in Excel. We were in the 7th level of shared sheet hell. It was a time-consuming process and discouraged people from asking questions or hunting down data to inform their decisions.

As a one-person team, I knew that I would need to build a platform where my colleagues could answer data questions on their own. I didn’t want to tuck data away in databases that required SQL queries and become the new bottleneck for information. At the same time, I knew that even the best self-serve platform wouldn’t be enough to cultivate a data-driven mindset across the company. I was certain I would need to develop a culture of data seeking and exploring at Milk Bar.

Getting a taste for data

When it comes to rolling out new tools, engineers and developers tend to have a terrible habit. We’ll spend days or weeks building a tool or a feature and only spend a few hours on documentation, training, and communication. If we want to drive data adoption and encourage a data-driven mindset for our companies, that has to change. We have to become like Steve Jobs, convincing people why they should buy an iPad when they already own a phone and a computer. That requires real empathy for our users and some persuasive skills. If you build it, they might come, but they probably won’t unless you’ve stepped into their shoes and answered, “Why should I use this?” and “Okay, so how do I use this?”

Practical steps for driving data adoption

Here are some practical steps that I’ve taken at Milk Bar to help our company drive data adoption. None of these ideas are a silver bullet on their own, but cumulatively, they teach, nudge, and encourage our users to develop a more data-driven mindset.

Train early and often

Milk Bar is a small company, so I can afford to train every new Looker user in group sessions. I hold beginner and advanced training sessions to make sure each person with a license knows how to use the tool, including Browse and Explore. I aim to train new employees during their first week of work (even if it means sinking an hour of my day in a one-on-one session). Why? I want Looker’s data platform to be ingrained in their habits and workflows from day one.

I can’t stress this enough — if you teach someone to use a tool and they don’t get it or they later find it frustrating, they’ll look for another way to answer their questions. Before you know it, their alternate path to a solution will become a habit. Good luck getting them back to your tool!

During training, I emphasize that anyone can reach out to me for a follow-up training session, ask me to check their work, or sit down together and build a query. I want the barrier to entry to be so low that it would be strange to seek out the same information any other way.

Develop data explorers, not data consumers

We are an Explore-first company, and we encourage our users to self-serve their own requests. Dashboards are great for busy people, but dashboards primarily describe, not diagnose. Additionally, dashboard-heavy instances usually require an army of analysts making tweaks and changes for users who don’t feel empowered to take responsibility for their own questions. I want my colleagues to be active users of Looker, not passive ones, and I’ve noticed that individuals spend a lot more time in Looker when they feel empowered to answer their own questions.

Find and support your data ambassadors

Each team has 1-2 people who have organically developed into Looker power users. These people are ambassadors for Looker across the company, and the word-of-mouth credibility they provide is powerful. Identifying these folks is critically important to continued data adoption. Here at Milk Bar, I make sure I support these people well by meeting with them quarterly to gather feature requests and concerns. By supporting them in this way, I can continue to point new Looker users in the direction of the power users on their teams so they know they have someone else to go to with questions.

Communicate updates thoroughly

I add each new user to a contact group that I frequently email with release notes: new explores, dashboards/looks, examples of cool things other Looker users are doing, etc. I send these emails monthly. These are all updates I should document anyway, so my thought is, why not email them out as an extra touchpoint for people who have forgotten about or lost interest in Looker? Who knows — maybe the new explore I released will suddenly make Looker relevant to one of my colleague’s jobs when it previously was not.

Consider a weekly performance report

Our weekly performance report goes out to most people in the corporate office every Monday afternoon. This helps reinforce Looker as the single source of truth for performance metrics, and is a natural invitation for people to go deeper in Looker when they have questions about the weekly report.

Collect feedback (and automate it!)

People won’t give much unprompted feedback unless their ability to do their job is significantly hampered. Your users simply don’t have time to explain their minor frustrations or confusion to you. Instead, they’ll probably just stop using your platform. To avoid this, you have to ask them. One way I did this was by setting up an Airflow job that uses i__looker and the Looker API to send inactive and active users different email templates asking for feedback on their experience. This is just one example of how to gather user input, and yours doesn’t have to be that technical. If you’re looking for a simple way to begin collecting feedback, start by setting a recurring calendar reminder to contact new users and ask about their experience.
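The core of that automation is just segmenting users by recency and picking a template. Here is a rough, hypothetical Python sketch (the emails, dates, template names, and 30-day threshold are all assumptions; a real job would pull last-activity dates from i__looker via the Looker API instead of a hard-coded dict):

```python
from datetime import date, timedelta

# Assumption: a user is "inactive" after 30 days without running a query.
INACTIVE_AFTER = timedelta(days=30)

def pick_templates(users, today):
    """Assign each user a feedback-email template based on recency.

    users: maps email -> date of the user's last query
           (None if the user has never run a query).
    Returns a dict mapping email -> template name.
    """
    assignments = {}
    for email, last_query in users.items():
        if last_query is None or today - last_query > INACTIVE_AFTER:
            assignments[email] = "inactive_feedback_template"
        else:
            assignments[email] = "active_feedback_template"
    return assignments

# Hypothetical user activity data.
users = {
    "a@example.com": date(2019, 9, 1),   # queried recently
    "b@example.com": date(2019, 6, 1),   # months of silence
    "c@example.com": None,               # never logged a query
}
print(pick_templates(users, today=date(2019, 9, 15)))
```

In an Airflow deployment, a function like this would sit between a task that fetches activity data and a task that actually sends the emails.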

Be a user

I try to spend some time in Looker exploring and answering my own questions. Sometimes, this helps me find bugs before my users do! Other times, I find interesting insights that I can pass on to my stakeholders: insights they may not have known how to pull or may not have thought to investigate. It’s a lot easier to provide good user experiences if you take some time to be a user yourself.

Start making your company more data-driven

I’m a firm believer that to have real success with any product, you have to:

  • Remind people it exists and demonstrate how it makes their lives easier
  • Understand the frustrations of active and inactive users and fix them
  • Create a community of users who can help and support each other

I believe the same goes for driving data adoption at your company by using Looker. It’s important to be an engineer or an analyst, but don’t forget to be a product manager too. Remember that good things come when you train, demo, document, and communicate! You’ll find that the extra effort has a compounding effect on data adoption and (hopefully) the success of your business.

Join the conversation and share your own insights about data culture and data adoption in the Looker Community.

<![CDATA[Why We Built a Data Culture at Fivetran]]> http://looker.com/blog/why-we-built-a-data-culture-at-fivetran http://looker.com/blog/why-we-built-a-data-culture-at-fivetran

At Fivetran, we build technology that centralizes data from different applications into data warehouses, so enabling organizations to be data-driven is an essential part of our mission.

But what does it mean to have a data-driven culture?

My teammates think of this in a few ways:

"A data-driven culture means that individuals are tuned in to start thinking about problems and their solutions with a data-focused mindset. Enabled by a data-driven culture, users across the organization can ask questions like: What can the data tell me about this problem?, What hypotheses can I test with the data we have available? and How will the changes I’m going to implement affect the data we’re collecting and other people who are leveraging this data?"

Christine Ndege, Solutions Engineer/Data Analyst at Fivetran

"Creating a data-driven culture is something that requires buy-in from across the organization. Not only does it mean making decisions based on evidence and analysis, but it also requires team members’ hard work to populate and provide the data behind this analysis, and thus is truly a whole team effort."

Ryan Muething, Data Analyst at Fivetran

To me, being data-driven has more than one advantage. Generally, when decisions are made based on facts and not best guesses, important discussions happen naturally. People ask: Is this data showing what we expected? If so, they know they’ve confirmed their suspicions. More often than not, however, the initial result is surprising and unexpected. It reveals things you weren’t aware of, both positive and negative.

Why a data-driven culture?

With numerous interpretations of what a data-driven culture is, you may be wondering why an organization would strive to be data-driven.

Think about the decisions teams within an organization make every day. Product teams are continuously iterating to deliver value to customers. Account reps are tracking their actions against quarterly targets. Marketing teams are building their go-to-market strategies. Finance teams are determining quarterly budgets — and the list goes on.

At any given time, all of these teams are leveraging data to measure, adjust, and deliver on their goals. While everyone works rapidly to help the organization succeed, the data they’re using can influence results.

The difference a data-driven culture makes is that when everyone makes business decisions based on the same data, confidence in the decision-making process increases. Product teams can prioritize development based on the same data that marketing teams use to inform go-to-market strategies. Finance teams can be sure that their budgets are rooted in the same data that the sales teams use to forecast their pipelines — and so on.

Data-driven with Looker

At Fivetran, we build connectors that deliver ready-to-query data into cloud warehouses. To do this, we utilize Looker as our centralized data hub. All of our data, including (but not limited to) data from our product, sales, engineering, support, operations, and marketing departments, is centralized in a data warehouse and modeled for access in Looker. Many of us utilize scheduled reports in addition to daily queries to stay on top of alerts and changes.

The mission of my team is to ensure that our BI layer is a truly useful single source of truth for all of our teams. For instance, we use Looker to highlight progress towards company goals during every companywide meeting, allowing us to give teams across Fivetran a window into different departments' activities and successes.

Acknowledging challenges

Most of the time, the hardest part about continuing to build on our data culture is simply getting people started with Looker. Once people learn how to use it, our team doesn’t need to do much to keep their momentum going.

What is important, however, is the accuracy of the data and the data models. If folks start seeing that things are incorrect as they begin using Looker data regularly, they may start questioning how much they should trust the tool. It’s crucial that the information we provide via Looker is accurate and consistent with data from any other sources people may be using, so that confidence and trust can continue to grow throughout users’ experiences with data and with Looker.

Building on the foundation

We’ve found that having a general understanding of what questions teams are attempting to answer is a good place to start when looking to encourage data-driven decision-making. Speaking to team members and understanding what they want to track helps our team deliver value to them.

A great way to continue building on this is to get people excited about finding answers and going deeper into the data. Instead of delivering explores and dashboards to answer questions for them, holding individuals accountable for meeting objectives and goals encourages them to be data-driven and track their progress more closely. In addition to this, providing regular opportunities for anyone to ask questions and learn in real-time helps to build on that excitement and trust in the platform. At Fivetran, we hold Looker office hours three times a week to help people get started, learn how to set up their first few dashboards, and do complex merged explores and offsets.

The point of establishing a strong data culture is to drive data usage, so making sure your data model is accurate and easy to understand is key. If the only people who can use Looker to build reports are folks with a background in SQL, the non-technical majority is not going to be able to utilize the explore functionalities at all, which denies people useful insights. To create a data culture that everyone can be a part of, make things simple and provide explores that aren’t visibly complicated — i.e., the data model behind the explores can be complicated, but it should remain hidden from the user. For the user, it should just work.

Join the conversation and share your own insights about data culture and data adoption in the Looker Community.

<![CDATA[Women of Data: June He, Data Scientist at Datatonic]]> http://looker.com/blog/women-of-data-june-he http://looker.com/blog/women-of-data-june-he We’re thrilled to introduce our latest woman of data, June He! Since earning her B.S. in Mathematics/Economics from UCLA and her MSc degree in Smart Cities and Urban Analytics from University College London, June now works at Datatonic as a Data Scientist.

Hi, June! Can you tell us a bit about your background and how it led you to a career in data?

Throughout my school years, I enjoyed many different subjects including math, social sciences, and design. During my college internships, I also found interest in things like civil service and UX design. Upon graduation, I decided I would be really happy if my work ticked the boxes of:

  • math and quantitative analysis
  • sense-making and story-telling
  • dynamic and innovative environment

After that realization, it felt natural to end up at a machine learning start-up as a data scientist, where I am very happy!

What advice would you give to other women who are interested in pursuing a similar career path to yours?

If you find it interesting, go for it! Be proactive and find out what people in the industry are doing.

Personally, I think that meetups are a great, effective resource if you want to break into a field, especially if you don’t know anyone in the field already. And if you do take that step and sign up for a meetup, don’t be put off if you find that you’re one of the few women in attendance, because — since the industry is currently male-dominated — you very well might be.

I used to fear that I might not be “nerdy” enough for a career in data science because I didn’t grow up loving Star Wars or something — and look where I am now! Whatever your reservation may be, don’t let it stop you. This field is much too fun to let self-consciousness stop you from pursuing your interests. There is no one “archetype” (gender, background, personality, etc.) for the perfect data scientist, and that same thinking can be applied to other fields as well, so don’t hold back!

Do you think that data can help build a diverse workplace?

Yes, absolutely. Data is an empowering tool and credential that is more democratic than the alternatives. While access to STEM education still has a long way to go before it is equal for everyone, anyone with the ability to utilize data has the opportunity to amplify their voice.

Despite its objective appearance, however, data — particularly the interpretation and utilization of data — is hardly without bias. If anything, data can expose, reinforce, and amplify biases. It’s for this reason that it is so important that those that work in data collectively make an effort to mitigate bias and promote its use as a tool for learning and empowerment.

What is one of the most impactful ways you see data affecting the workplace today?

Today's widespread harnessing of data, for example through effective and democratized machine learning, has created many new jobs, many of which exist to automate other jobs. This ironic phenomenon is not new: past technological advancements have led to the same thing, and working in data science simply makes it more acute.

While I don't have an answer as to where it may bring society next, I think we as data practitioners should continue to think about and discuss this more.

If you could tell your younger self one thing based on the experiences that have led you here today, what would you say?

Apply agile methodology to life! I used to be obsessed with having a perfect blueprint before action, be it writing code or choosing a career. Now, I've learned that it makes more sense to try and test things quickly and iterate with the feedback and learnings you harvest along the way.

<![CDATA[How Systems of Insight Will Transform BI & Analytics]]> http://looker.com/blog/system-of-insight-bi http://looker.com/blog/system-of-insight-bi What do you think of when you hear the term “business intelligence”? I suspect most of us immediately see visions of reports and dashboards filled with bar charts, line graphs, maps, and those much maligned pie charts. Spend just a few minutes walking through a BI tradeshow and that’s exactly what you’ll see on the screens of the vendors exhibiting at the event. Much like cat pictures and the internet, the concept of reports has become inseparable from the understanding of BI.

The problem with this understanding is it limits the potential impact of data and the insights it can produce. If the purpose of BI is to produce a report or a dashboard, that means additional steps are required before that data delivers meaningful value to an organization. Someone has to take that dashboard, draw some conclusions from it, and take a corresponding action that has an impact — presumably a positive one — on the business. Without this action, insights from BI are useless.

Companies need more. To fully harness the power of information, we need BI solutions that deliver more than just reports and dashboards. Solutions that let us experience data in new ways by infusing it into everyday business functions. These “data-driven experiences” can be tailored to specific operational workflows, like automatically presenting a discount offer to a customer that’s likely to churn, automatically adjusting bids for under- or over-performing online ads, or using natural language to ask about inventory levels in Slack and ordering additional units based on the answer.

Forrester came up with a new term for these systems that help companies move beyond reports and dashboards and can close the loop between data and action. They’re called “systems of insight” and Forrester describes them as an evolution of BI and analytics that can “harness digital insights — new, actionable knowledge that can be implemented in software — and apply these insights consistently to turn data into action.”1

Looker’s modern data platform was designed to enable companies to deliver data-driven experiences that seamlessly integrate data into business workflows. In the 2019 Forrester Wave™: Enterprise Business Intelligence Platforms (Vendor Managed)2, Looker was recognized as a “Strong Performer” and received the highest possible score in the category of Systems of Insight.

For us, this recognition reaffirms our conviction that the future of business intelligence involves more than feeding charts and graphs to a data analyst. Our product vision (which, incidentally, also received Forrester’s highest possible score) is driven by the understanding that business professionals don’t live in a BI tool, and that a modern data & analytics solution should reach us through the applications we already use.

There is an enormous opportunity to transform the way companies use data. If we can think of data as more than just something we analyze, and instead see it as an integral and actionable component of a business process, we can help organizations become vastly more efficient, productive, and successful.

And then we’ll all have more time to look at cat pictures on the internet.

1 The Anatomy Of A System Of Insight, Forrester Research, Inc., January 5, 2018

2 The Forrester Wave™: Enterprise BI Platforms (Vendor-Managed), Q3 2019, Forrester Research, Inc., July 29, 2019

<![CDATA[How to Choose the Best Chart or Graph for your Data]]> http://looker.com/blog/different-types-graphs-charts-uses http://looker.com/blog/different-types-graphs-charts-uses The quality of our lives is determined by the choices we make. Sometimes the choices are simple, like which flavor of ice cream to indulge in. Let’s say the choices are vanilla ice cream, chocolate ice cream, and strawberry sorbet. Sweetness is great, but on a hot day, vanilla and chocolate lack that light and fruity feeling I want.

Here’s how I analyzed my ice cream choice:

Question: Which flavor of ice cream should I get?

Goal: I want the flavor to complement this hot summer day, and to me, that means something light and fruity.

Outcome: Strawberry sorbet

Yum. But what about the more serious choices, like deciding how much income to squirrel away for retirement?

The same process still works:

Question: How much of my income should I save?

Goal: I want to enjoy a good life post-retirement. Research suggests that means saving at least 20%.

Outcome: I’ll save at least 20%.

Each choice can be broken down with the framework of “question, goal, outcome.” In this framework, you have a question and a goal that you’re trying to achieve. Your goal motivates how you choose between options to get to the outcome.

Choosing the best chart or graph for your data is similar, in that the outcome depends on your goal. You can even use the same “question, goal, outcome” framework. I’ll provide some examples of choosing a chart with this framework further on.

For now, let’s focus on the “goal” part of the framework as it relates to displaying data. We see most visualizations as fulfilling one of four main objectives:

  1. Showing how values compare to each other
  2. Showing how the data is distributed
  3. Showing how the data is composed
  4. Showing how values relate to one another

The challenge of choosing the right visualization lies in finding the goal beneath your data question. Once you identify the goal, choosing the right chart becomes easier, particularly when you have a reference like the one below. (Feel free to print it out and keep it nearby.)

Types of Charts & Graphs

This points you in the right direction, but there are multiple charts in each category. How’s an analyst to choose? Familiarizing yourself with the nuances of each graph will help.
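The goal-to-chart step can even be encoded directly. Here is a small, illustrative Python lookup based on the four goals above (the mapping mirrors the chart guidance in this post, but the function and structure are just a sketch, not part of any product):

```python
# Illustrative mapping from the four visualization goals to candidate charts.
CHART_GUIDE = {
    "comparison": ["table", "column", "grouped column", "bar",
                   "line", "overlay line"],
    "distribution": ["box plot", "column histogram", "scatter plot"],
    "composition": ["donut", "pie", "area", "stacked bar",
                    "stacked percent", "stacked column"],
    "relationship": ["bubble chart", "heat map"],
}

def suggest_charts(goal):
    """Return candidate chart types for one of the four goals."""
    key = goal.strip().lower()
    if key not in CHART_GUIDE:
        raise ValueError(f"unknown goal {goal!r}; "
                         f"expected one of {sorted(CHART_GUIDE)}")
    return CHART_GUIDE[key]

print(suggest_charts("Distribution"))  # ['box plot', 'column histogram', 'scatter plot']
```

The lookup only narrows the field to a category; the nuances described for each chart type below still determine the final pick.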

How to Choose the Best Chart for Your Data

What’s the best chart to show comparison?

Comparison questions ask how different values or attributes within the data compare to each other.

Tables help you compare exact values to one another. Column and bar charts showcase comparisons across different categories, while line charts excel at showing trends over time.

  • Column Chart: Comparisons with fewer categories and short names (for display and readability purposes).
  • Grouped Column: Direct comparisons of multiple data series per category. Keep in mind that in a grouped column chart, it becomes more difficult to compare a single series across categories.
  • Bar Chart: Comparisons with a large number of categories, since stacking category names vertically makes them easier to read. Bar charts are also great for displaying negative numbers.
  • Line Chart: Showing trends in continuous data over time.
  • Overlay Line: Showing trends in continuous data over time with multiple dimensions.
  • Table: Displaying exact values; not ideal for finding trends or comparing data sets.

What’s the best chart to show distribution?

Distribution questions seek to understand how individual data points are distributed within the broader data set.

Box plots show distribution based on a statistical summary, while column histograms are great for finding the frequency of an occurrence. Scatter plots are best for showing distribution in large data sets.

  • Box Plot: Showing how data is distributed based on a five-number statistical summary. A small “box” indicates that most of the data falls within a consistent range, while a larger box indicates the data is more widely distributed.
  • Column Histogram: Showing distribution of variables, plotting quantitative data, and identifying the frequency of something occurring within a bucketed range of values. This differs from a column chart because histograms have quantitative data on both axes, rather than relating information about categories.
  • Scatter Plot: Showing how data is distributed with two variables; especially good for large data sets and for quickly identifying specific data points that are outliers.

What’s the best chart to show composition?

Composition questions ask what general features are present in the data set.

Donut and pie charts are great choices to show composition when simple proportions are useful. Area charts put the composition of data within the context of trends over time. Stacked bar, percent, and column charts show an overview of the data’s composition.

  • Donut Chart: Examining part-to-whole relationships when simple proportions provide meaningful information and pivots for multiple categories are needed.
  • Pie Chart: Examining part-to-whole relationships when simple proportions provide meaningful information.
  • Area Chart: Showing trends in continuous data over time in the context of part-to-whole relationships. Area charts are not ideal for distinguishing the trends of individual data sets over time because the overlapping colors can prevent the trend from being easily seen.
  • Stacked Bar: Showing an overview of the composition of data among many categories, or when working with time.
  • Stacked Percent: Times when the pure composition of data is the message you want to deliver and exact values aren’t necessary. This chart also works well when comparing the proportional contributions of different categories.
  • Stacked Column: Showing an overview of the composition of data. Avoid having too many segments in a column. If you want to compare the same segment across bars, use a plain column chart instead.

What’s the best chart to show a relationship between values?

Questions in this category ask how values and attributes relate to each other.

Bubble charts and heat maps can help you quickly identify relationships between data points.

  • Bubble Chart: Showing the relationship between data points with three variables.
  • Heat Map: Quickly relating information across a larger data set of exact values.

Identifying the Goal Beneath the Question

Now you have references to help you choose between chart types. These resources are most powerful when you understand what motivates those choices in the first place.

What’s your data question really asking for? Is the answer to compare data, look at its distribution, examine its composition, or show a particular relationship between data points?

You can recognize that goal using our “question, goal, outcome” framework from the top of the post.

For the sake of putting this framework into action, I’ll don the hat of a marketing analyst creating visualizations for my colleagues. The morning is dedicated to coffee and acquisition metrics, essentially answering questions like: “Are we gaining enough customers? Where are they coming from?”

Question: How many new users are we acquiring every day?

Goal: Compare values (number of users) over time (days)

Outcome: A line chart

A dip in traffic on the weekends is expected and indeed appears in the data. Now I want to dive into this traffic further and see where these users are coming from.

Question: What channels are these new users coming from?

Goal: Display the composition of the data (which channel source users came from) over time (still comparing the number of new users across days).

Outcome: An area chart

Looks like the organic search category currently accounts for the largest number of users, meaning that we're doing well in search engine rankings thanks to our efforts with search engine optimization. We're getting a lot of traffic from referrers, too, so I want to look at that more closely.

Question: Which referrers are driving the most traffic to our website?

Goal: Compare values (number of sessions) across categories (referrers).

Outcome: A bar chart

The partner website drives the most traffic by a long shot, followed by a stellar blog post. But how does the traffic break down between desktop and mobile devices?

Question: Which referrers tend to drive more traffic to our website from desktops, and which ones tend to drive more traffic from mobile devices?

Goal: Comparing values (number of sessions) across categories (referrers) and looking at composition within each bar (mobile vs. web traffic).

Outcome: A stacked bar chart

Question: How does the traffic from mobile and desktop stack up across referrers?

Goal: Comparing values (number of sessions) across categories (referrers) in multiple dimensions (mobile and desktop).

Outcome: A grouped bar chart

That should be enough to get the team started on the acquisition metrics.

Another area of marketing that generates interesting questions is behavior. Knowing what people are up to will help determine the best times and places to reach out to them.

Question: What time of day sees the highest number of users on our website?

Goal: Compare values (number of sessions) over time (hours) across multiple dimensions (days).

Outcome: Overlay line chart

Looks like noon to four o’clock are our most popular times by far. Which pages are our visitors looking at in the afternoon?

Question: Which landing pages are driving the most engagement by channel?

Goal: Look at the relationship between channels and landing pages to see how the different combinations influence average session duration.

Outcome: Heat map

Interesting: sessions coming from paid search and direct traffic show the most engagement.

Of course, after understanding visitor behavior better, a marketer’s eye naturally turns toward conversion. How do we get our visitors to become customers?

Questions in the conversion realm can be broken down like this:

Question: Where do we have opportunities to drive more traffic to high-performing web pages?

Goal: Show the relationship between values (conversion rates and number of sessions) to help pinpoint pages with high conversion rates that could be better promoted.

Outcome: Scatterplot

In this example, the dot in the upper-left corner represents an opportunity to promote a page: it has an extremely high conversion rate compared to the rest but barely sees any traffic.

This framework works for data across all industries, not just marketing.

Try it out for yourself. Take the next data question you encounter and write it out, identify what the goal behind the question is, and use the guidance above to choose the best chart.
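If you build visualization defaults into your own tooling, the goal-to-chart mapping above can even be codified as a small lookup. This is only an illustrative sketch: the goal keywords and the `suggest_chart` helper are hypothetical, not part of any Looker API.

```python
# Illustrative mapping from an analysis goal to a sensible default chart.
# The goal categories mirror the walkthrough above; all names are hypothetical.
CHART_FOR_GOAL = {
    ("values", "time"): "line",
    ("composition", "time"): "area",
    ("values", "categories"): "bar",
    ("composition", "categories"): "stacked bar",
    ("values", "categories", "multi-dimension"): "grouped bar",
    ("values", "time", "multi-dimension"): "overlay line",
    ("relationship", "categories"): "heat map",
    ("relationship", "values"): "scatterplot",
}

def suggest_chart(*goal):
    """Return a default chart type for a goal tuple, falling back to a table."""
    return CHART_FOR_GOAL.get(tuple(goal), "table")

# "Compare values over time" suggests a line chart.
print(suggest_chart("values", "time"))
```

The fallback to a plain table is deliberate: when a goal doesn't fit a known pattern, showing the raw numbers is safer than picking a misleading chart.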


Which chart you use impacts how people understand your data and what decisions they make based on that understanding. Happily, you now have some great tools in your pocket to help guide your choices.

Choosing the right chart is one part of the bigger story of communicating with data—but the colors you choose and the way you construct a dashboard also matter. To that end, keep an eye out for the next installment in our data visualization series on designing dashboards for UX/UI.

In the meantime, check out our eBook, “The Art of Telling Stories with Data.”


Jill Hardy
Content Strategist, Customer Education

<![CDATA[Putting the Hack in Hackathon: The Winners of Looker_Hack : London]]> http://looker.com/blog/putting-the-hack-in-hackathon http://looker.com/blog/putting-the-hack-in-hackathon Note: This is one part of a two-part series on Looker_Hack : London, hosted in May 2019. This post is focused on highlighting the winning projects and the rockstar developers that built them. The second post is about the tools we used to make the Hackathon such a success, which you can find on our Looker Engineering Blog.

Following the success of our first Hackathon last October at JOIN 2018, we wanted a way to show some #LoveLookerLove to the folks across the Atlantic. To do this, we decided to fly out to the UK, meet more of our innovative Looker customers, and host our first Hackathon event in London.

Why A Hackathon?

Hackathons are a great way to quickly build projects and solutions that can help teams meet specific business goals. For Looker specifically, hackathons provide an avenue for our engineers, designers, and product managers to meet developers and understand how they’re building on the Looker platform.

Our hope is that attendees get more enjoyment and benefit out of using Looker, and that everyone learns more about the Looker API and the powerful integrations and customizations that can be built on the platform.

Our Favorite Hacks

After a day of collaboration, a panel of three judges scored the day’s hacks and presentations on four main criteria: ambition, execution, coolness, and impact.

The winners for Best Hack and Nearly the Best Hack were then announced and awarded some amazing hardware to proudly display in their office, at home, or over their fireplace.

Best Hack: Acrotrend / Yoti

The Best Hack at Looker_Hack : London went to an application that allows users to ask quick follow-up questions of a dashboard in plain English. Leveraging Looker’s Embed SDK to embed a dashboard into their application and Looker’s core API to extract Looker model information, the Acrotrend/Yoti team generated a lexicon, which they used to extract semantic meaning from the English questions. Using this to generate queries (again through the Looker API), the winning team was then able to visualize those results using their own custom visualizations.
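The winning team’s code wasn’t published, but the lexicon approach (map question words to model fields, then build a query) can be sketched in miniature. Everything below is hypothetical, including the field names:

```python
# Hypothetical lexicon mapping English words to Looker-style field names.
LEXICON = {
    "users": {"measure": "users.count"},
    "country": {"dimension": "users.country"},
    "orders": {"measure": "orders.count"},
    "day": {"dimension": "orders.created_date"},
}

def parse_question(question):
    """Turn a plain-English question into a query spec via lexicon lookup."""
    spec = {"measures": [], "dimensions": []}
    for word in question.lower().replace("?", "").split():
        entry = LEXICON.get(word)
        if not entry:
            continue
        if "measure" in entry:
            spec["measures"].append(entry["measure"])
        if "dimension" in entry:
            spec["dimensions"].append(entry["dimension"])
    return spec

# "How many users by country?" -> a count of users split by country.
print(parse_question("How many users by country?"))
```

In a real application, a spec like this would be handed to the Looker API to run the query; the sketch stops at the parsing step.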

Nearly the Best Hack: Farfetch

The award for Nearly the Best Hack went to the team that built a custom Data Action through Looker’s Action Hub. The data action sent data to a Google spreadsheet, which in turn was the backing data source for a Google presentation. This awesome team of hackers demonstrated nearly instant updates to a slide deck that then had the ability to be used to present on the state of their business.

(Very) Nearly the Best Hack: Turner

Because of an incredibly tight race, we also awarded a Nearly the Best Hack to a team who extracted results via Looker’s API from two data sources that shared a nearly common field, performed fuzzy matching in a Jupyter notebook using a generated Python SDK, and pushed the matched data back into Looker to visualize.

What impressed us about their hack was that the team itself represented two different departments within their organization, and this hack was built to solve a real problem that affected some of their business goals. In just under a day, this team was able to come together and, leveraging the Looker platform, was able to make it easier for both departments within the organization to answer critical business questions.
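For the curious, the fuzzy-matching step of a hack like this can be approximated with Python’s standard library alone. The record values below are invented, and this is only a sketch of the idea, not the team’s actual notebook:

```python
import difflib

def fuzzy_join(left_keys, right_keys, cutoff=0.6):
    """Map each key on the left to its closest match on the right, if any."""
    mapping = {}
    for key in left_keys:
        matches = difflib.get_close_matches(key, right_keys, n=1, cutoff=cutoff)
        if matches:
            mapping[key] = matches[0]
    return mapping

# Two systems spell the same entities slightly differently.
crm = ["Acme Ltd", "Globex Corp"]
billing = ["Acme Limited", "Globex Corporation", "Initech"]
print(fuzzy_join(crm, billing))
```

The `cutoff` parameter controls how dissimilar two values can be and still count as a match; tuning it is usually the bulk of the work on real data.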

Future Hacks

We’ve had a blast hosting and being a part of our Looker Hackathons. Not only have the completed projects impressed and inspired us, but they have also encouraged us to continue iterating on their successes for Hackathons yet to come.

Our next Hackathon is scheduled for November 4th, 2019 in San Francisco to coincide with JOIN 2019. We look forward to sharing more updates about this and future Looker hackathons and hope to see you in attendance at one in the near future!

<![CDATA[Growing with Data at Indigo Ag]]> http://looker.com/blog/growing-with-data-at-indigo-ag http://looker.com/blog/growing-with-data-at-indigo-ag

When I think about what it means to have a data-driven culture, I think of an organization where the use of data is championed anytime someone asks a question or makes a decision. If data is used to track progress, provide transparency, and measure success — to me — that is a true data culture.

Data + Indigo Ag

From the beginning, Indigo Ag was conceived as a data-driven company. With a commitment to improve grower profitability, environmental sustainability, and consumer health through the use of natural microbiology and digital technologies, the problems we’re striving to solve require complex, systematic changes that can only be accomplished through the use of data.

Some of the ways we’re doing this today include —

Indigo Marketplace

Indigo Marketplace, a digital platform for buying and selling grain, enables growers to receive premium prices for producing high-quality crops more sustainably, and buyers to source grain with a range of characteristics.

Digital Agronomy

We combine data from remote sensing technology (moisture probes, drones, satellites, etc.) with data from each farm to provide individual insights directly back to each grower. This provides a holistic view not only of what is going on in their farm, but also of how that farm compares, in aggregate, to every other farm on Earth.

Microbial Product Pipeline

Seeking to improve a plant’s natural microbial makeup, we identify and sequence thousands of endophytes, using an approach called “focused sourcing.” Indigo scientists leverage sophisticated genomic sequencing and computational bioinformatics to catalog and assemble a world-class database of genomic information from these microbes. We apply algorithms and machine learning to this database to predict which microbes are most beneficial to the plant’s health.

To help drive these efforts, we leverage the Looker platform and currently have:

  • 22 active developers writing LookML
  • Explores across 14 departments
  • 530 Looker users
  • 200+ users, on average, who query the platform for at least one hour each week

Some of our most popular dashboards showcase supply and demand within the Indigo Marketplace, grower profitability, and marketplace bid quality.

Building a Data Culture at Indigo Ag

My entire career has been working in data and using data to help businesses and people answer complex questions. Since joining the Indigo team, I’ve had the opportunity to dive into agriculture data — which is the hardest data I’ve ever had to wrangle. At Indigo, it’s important to me to provide everyone with equal access to data to help them succeed and ensure better results for our customers.

As the primary liaison between business users and the Indigo data platform, the Business Intelligence Platform Team has made building our data culture a key focus. Some of the ways we’re driving platform adoption are through onboarding programs, workshops to further education, and continued maintenance and policing of the Looker platform.

Onboarding Programs

The BI Team works with hiring managers to scorecard analyst positions, gather requirements for an onboarding deliverable, and then constructs a tailored onboarding program for the newly hired analyst. This program is designed to educate analysts in the use of Looker, address skill-set gaps through training, and expose analysts to relevant data all in the context of the deliverable.

Training Workshops and Data Education

Another responsibility of the BI team is to provide ongoing data education and communication for all business and operations units. The BI Team communicates newly available data within the UDP, manages shared documents outlining where data is available, and facilitates data-related workshops.

Platform Maintenance and Policing

The BI Team is responsible for managing the Looker platform. This means reviewing PRs, curating explores, and encouraging the use of best practices. A well-managed Looker platform should provide intuitive, easy-to-use explores, well-defined developer areas, shared spaces that audiences can navigate easily, and documentation for everything.

Continuing to Grow and Learn with Data

Among all organizations, there are common misconceptions about how to use data to make business decisions, which leads to challenges when trying to develop a data culture. For instance, if your organization wants to succeed with data as a whole, gatekeeping or siloing the data should be avoided. As long as it is in accordance with a privacy-by-design data access structure, relevant, vetted, and smart data should be accessible throughout an organization.

In addition to these, the three biggest misconceptions I’ve come across in my career are:

Bad Data

There is no such thing as bad data, only poor analysis or poor transparency. Bad data is often the product of poor processes, but even data generated by a poor process can be used to highlight where that process is breaking down.

“Give Me All the Data”

This is a common request, and it typically means there needs to be a conversation with the stakeholder about what we are actually trying to answer. That conversation allows us to provide ‘smart’ data: only the relevant data required to answer the question.

Proliferation of Data Sets

This one is expected, but it’s always worth mentioning. Without a robust reporting platform, users feel compelled to replicate data outside of source systems, which leads to shadow systems and supposedly trusted data sets living outside of those systems.

By removing hurdles to using data and maintaining transparency wherever possible, organizations can begin to spark a culture where questions and decisions are based on data.

Join the conversation and share your own insights about data culture and data adoption in the Looker Community.

<![CDATA[The Rise of the Multi-Cloud Data Platform]]> http://looker.com/blog/future-of-multi-cloud-for-looker http://looker.com/blog/future-of-multi-cloud-for-looker Cloud infrastructure is now mainstream, with more than 80% of all enterprise workloads expected to be cloud-based in 2020. But it’s not just cloud that’s the new norm: businesses are now using more than one cloud provider, often deploying several solutions at once. Businesses are rapidly embracing multi-cloud for their business intelligence systems.

In this dynamic, evolving world it’s vital that data teams are able to choose the data platforms and deployment methods that work best without lock-in and with the flexibility to pivot and change approaches as necessary.

Multi-cloud refers to the use of multiple cloud providers to supply infrastructure, applications, and key business functions. Multi-cloud goes beyond “hybrid cloud” deployments to include private infrastructure, IaaS, SaaS, and other new approaches. According to Gartner1, “most organizations have already adopted multiple cloud computing providers for different applications and use cases.”

Benefits of Multi-Cloud for Organizations

Organizations and the data teams that support them are increasingly choosing multi-cloud for BI for the following reasons:

  • Access to capabilities: Not every provider has the same features, and who’s winning the feature race changes frequently.
  • Avoiding vendor lock-in: Leverage the strengths of more than one vendor and avoid costly, time-consuming migrations.
  • Cost mitigation: Multi-cloud keeps vendors competitive and pricing low.
  • Private cloud needs: Some organizations have business requirements for private cloud. In these cases, a multi-cloud approach improves flexibility and allows for hybrid cloud where necessary.

Historically, business intelligence tools have been built as single-vendor architectures, with data store and analytics tightly linked. As a result, changing an organization’s business intelligence tools or databases resulted in costly migrations, reworking of business logic, and months of effort.

The modern Looker Data Platform is different. Looker’s in-database architecture supports a wide range of databases and SQL dialects. With Looker as a multi-cloud data platform, you can deliver data where and when it’s needed, without being locked into a single interface and with the ability to go far beyond simple reports or dashboards. And Looker hosting is designed to meet the unique needs of your business, in a way that’s best for you.

“It’s really helpful to have an analytics platform that’s SOC2 compliant, meets GDPR standards, has all of the privacy we need, and that we can deploy into a closed system or on VPN networks—on-prem or in the cloud.”

Looker’s Database Agnostic Approach

Looker’s innovative in-database architecture leverages the power of your database investment to run queries. Looker speaks to your database via Java Database Connectivity (JDBC) in a SQL dialect your database understands. Because every database is different and SQL dialects vary, Looker’s multi-cloud platform supports more than 50 distinct dialects of SQL, and we regularly add new dialects as technology evolves. This means that if your database speaks SQL, Looker probably supports it, and if it can be reached via JDBC, Looker can communicate with it. With Looker, you’re never locked into a database you don’t want.

Looker simplifies database migrations, too. If your organization is modernizing its data infrastructure, retiring old systems, or simply adding new database technology, Looker’s database-agnostic approach makes adapting your analytics platform easy. When you connect to a new database, Looker automatically switches to the SQL dialect spoken by the new system. With only minor changes, your existing data models and business logic can be reused with your new database.
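To illustrate what dialect awareness means in practice, the same logical operation renders as different SQL per dialect. This is a simplified sketch of the idea, not Looker’s actual implementation:

```python
# Illustrative only: one logical operation ("truncate a timestamp to month")
# expressed in three real SQL dialects. The rendering helper is hypothetical.
MONTH_TRUNC = {
    "postgres": "DATE_TRUNC('month', {col})",
    "bigquery": "DATE_TRUNC({col}, MONTH)",
    "mysql": "DATE_FORMAT({col}, '%Y-%m-01')",
}

def render_month(dialect, col):
    """Render a month-truncation expression for the given dialect."""
    return MONTH_TRUNC[dialect].format(col=col)

print(render_month("bigquery", "orders.created_at"))
```

Keeping the business logic (truncate to month) separate from the dialect-specific rendering is what lets the same model follow you across databases.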

With Looker, you can connect to more than one database or data store at once, too. Looker supports multiple JDBC connections to the databases of your choice. Using Looker’s ability to merge query results, you can produce insights from data spread across multiple data environments.
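Mechanically, merged results behave like a keyed join performed outside the databases. A minimal sketch, assuming each query returns its rows as a list of dicts sharing a key column (the row shapes here are invented):

```python
def merge_results(rows_a, rows_b, key):
    """Join two query result sets on a shared key column."""
    index = {row[key]: row for row in rows_b}
    merged = []
    for row in rows_a:
        match = index.get(row[key])
        if match:
            combined = dict(row)
            combined.update(match)
            merged.append(combined)
    return merged

# One query from the warehouse, one from an application database.
warehouse = [{"user_id": 1, "lifetime_value": 120.0}]
app_db = [{"user_id": 1, "plan": "pro"}, {"user_id": 2, "plan": "free"}]
print(merge_results(warehouse, app_db, "user_id"))
```

The point of the sketch is simply that neither database needs to know about the other; each answers its own query, and the combination happens afterward.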

Hosting How You Like It

Looker hosts and manages Looker deployments for the vast majority of our customers. The Looker Cloud, our virtual private cloud (VPC) environment, is secure, scalable, and provides excellent system availability to Looker users.

As a Looker customer, you can choose the underlying cloud provider in which your Looker instance is hosted. Looker instances today can be hosted in our VPC on AWS (Amazon) or GCP (Google). Customers can even self-host on private infrastructure if necessary. In the near term, Looker will expand Looker Cloud hosting choices to additional providers.

By providing a choice in hosting providers, Looker helps you leverage the cloud provider you prefer, and address complex issues such as data sovereignty (you can have us host in the region you prefer).

Deliver Data Where It’s Needed

Too often data tools lock users into a single way of viewing and using data. Older report-and-dashboard analytics approaches have limited the value organizations can extract from data.

Looker is an open system that lets you deliver data where it’s needed, to people and systems throughout your multi-cloud environment. From within your Looker instance you can take immediate action on data and connect users to the insights they need, directly in their workflows. And unlike traditional, limited BI tools, Looker offers a powerful API for automating data delivery on a cloud data platform: using the API, you can extract data directly into Amazon S3 buckets, email .csv files, or send text-message alerts to users.
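Whatever the destination (an S3 bucket, an email attachment, a text alert), the last mile of automated delivery usually amounts to serializing query rows. A stdlib-only sketch with an invented row shape:

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize query result rows (a list of dicts) to a CSV string."""
    if not rows:
        return ""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

rows = [{"date": "2019-09-01", "sessions": 412}]
print(rows_to_csv(rows))
```

From here, the string could be uploaded, attached to an email, or handed to any other delivery channel; the serialization step is the same in every case.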

“For organizations looking at multi-cloud and hybrid cloud management challenges, continue to consider Looker as a solution to provide consistent analytics across data environments.”
—Hyoun Park, CEO and Principal Analyst at Amalgam Insights

Learn More About Our Multi-Cloud Data Platform

A live demonstration of the Looker Data Platform is a great way to learn more about the power of the modern data architecture, see how Looker works, and discuss your approach to multi-cloud with one of our experts. Scheduling a demo is easy — try your data with Looker now.

Attending the annual JOIN data conference is another great way to learn more. At JOIN you can learn more about Looker’s cloud partners, including AWS, GCP, and Snowflake, and hear from data specialists using Looker in their own cloud and multi-cloud deployments. It’s a great way to learn tips and tricks of navigating the new multi-cloud world.

1 Gartner Technology Insight for Multicloud Computing, Lydia Leong, 16 August 2018

<![CDATA[Analyzing Customer Behavior With A New Looker Marketing Block]]> http://looker.com/blog/analyzing-customer-behavior-with-a-new-looker-marketing-block http://looker.com/blog/analyzing-customer-behavior-with-a-new-looker-marketing-block The Japanese version of this article is available here.

As individuals living in the digital era, we leave a massive trail of data through each click and pageview. Marketing teams cast a wide net to capture this data in an effort to understand their audience and provide a better customer experience, but analyzing that data, much less taking action on it, has been a challenge to say the least.

This is why we’re excited to announce the latest addition to the Looker Blocks directory: Customer Experience Analytics by KARTE, a turnkey suite of dashboards for understanding user behavior trends (campaign effectiveness, Net Promoter Score, etc.) and seamlessly taking action on them through the KARTE service.

What Is KARTE?

KARTE is a customer experience platform from Plaid, Inc., a Japanese technology startup building a suite of products to enhance personalization in the customer experience. KARTE customers can trigger actions and events that optimize the customer experience by analyzing user behavior in real time. Customers can opt to pipe event and pageview data into a Plaid-provided BigQuery environment through the KARTE Datahub offering, a huge leg up for marketers who want to get their hands dirty in their quest for unique, actionable insights. This also means that Looker customers can take advantage of this Looker Block without having to set up a new data warehouse environment themselves.

What’s Included With The User Experience Analytics Block?

First of all, let’s talk about the dashboards, since they’re probably the first thing many of you will see. As of today, there are three dashboards — Web Access Analytics, Pageview Funnel, and NPS Overview — to get you started.

Web Access Analytics

This dashboard covers a comprehensive list of standard web metrics, including time series tracking of sessions, bounce rates, and what OSs customers are using to view your website. It’s something a digital marketing manager would view to get their day started, and can be made into a scheduled report that goes straight into your inbox, just in time for your morning commute or first cup of coffee.


Pageview Funnel

This dashboard provides insights into how your customers are moving through your website, and where they’re dropping off. Are your customers putting items into their cart only to never actually check out? Are most of your customers dropping off at a particular link? Or perhaps there’s a massively popular piece of content people navigate to. Understanding these behaviors will help you identify any bottlenecks in your UX, guiding you on a path to remediating — or doubling down — on what and how you present your content.
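Under the hood, a pageview funnel is just ordered step counts and the drop-off between them. A hypothetical sketch with invented step names:

```python
def funnel_dropoff(steps):
    """Given ordered (step, count) pairs, compute drop-off between steps."""
    report = []
    for (name_a, n_a), (name_b, n_b) in zip(steps, steps[1:]):
        lost = 1 - n_b / n_a if n_a else 0.0
        report.append((f"{name_a} -> {name_b}", round(lost, 3)))
    return report

# 60% of visitors never add to cart; 70% of carts never reach checkout.
steps = [("product page", 1000), ("cart", 400), ("checkout", 120)]
print(funnel_dropoff(steps))
```

The largest drop-off percentage points at the step most worth investigating, which is exactly the question this dashboard is built to answer visually.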



Net Promoter Score (NPS) Overview

This dashboard shows you trends in your NPS, a critical metric for gauging your customers’ loyalty in order to increase engagement at every level. We’ve added tiles covering results by the most important attributes, so you can dive deeper into visitor-level insights right away by drilling into any point that piques your interest.
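NPS itself is straightforward to compute: the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 through 6). A quick sketch:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 3 promoters, 1 passive, 1 detractor out of 5 responses.
print(nps([10, 9, 8, 6, 10]))  # -> 40
```

Note that passives (scores of 7 or 8) count toward the denominator but neither add to nor subtract from the score, which is why NPS can range from -100 to 100.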


Tools Alone Won’t Make You Successful — It’s How You Use Them

Dashboards are a great starting point for understanding and monitoring your customer behavior at a high level. But it’s only when you get deeper into the details of those behaviors that you can act on the trends. That’s why we’ve added a seamless linkback to KARTE’s UI from user-level data.

Every result in this Looker Block has a drill-through into row-level details, and we’ve set them up so that they always have a User ID field linked to the KARTE application. This means that once you identify a sudden change, say an uptick in detractors in your NPS Overview dashboard, you can easily pull up data on each individual detractor to see what’s causing them to respond with low scores. An additional KARTE feature also enables pageview playbacks, showing you exactly what the customers saw and how they navigated each page in replay, putting you into your customers' shoes.



There are other ways you could add this to your daily workflow. For instance, you might take this drill-through to the next level by creating a new visitor list to use for retargeting or direct outreach through email. Or perhaps you want to set up an alert that notifies you when your NPS dips below a certain threshold.

With Looker, you can take templated dashboards and reports (like the ones discussed in this article) to kickstart your analytics, then easily extend your existing business logic to tailor the data exploration experience and curate insights for your team, all without having to start from scratch.

If you’re interested in learning more about understanding your customers better, reach out to our team. We look forward to hearing from you!