6 ways to integrate a data strategy into your product workflow
https://looker.com/blog/6-ways-to-integrate-a-data-strategy-into-your-product-workflow

Even though most product and engineering organizations today are collecting tons of data, most only use a fraction of what is available. This is often due to competing priorities or a lack of implementation experience in data strategy.

Looker and Atlassian are working together to make data-driven product development workflows a reality.

There’s no silver bullet when it comes to making data-backed decisions the norm, but there are steps you can take to shift the culture of your team, or even your company.

1: Expand your definition of data

Data from instrumented products is increasingly valuable to product teams because it shows how users interact with the product directly. But data from the product itself isn’t the only source of information you can use to improve your product and workstreams.

Data from issue tracking tools can provide valuable insight into historical development performance and can help you plan for future development cycles.
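
For instance, once issue data is synced to a warehouse, you can compute development metrics directly. A minimal sketch, assuming a hypothetical jira_issues table (column names are illustrative):

-- Average days from issue creation to resolution, by month resolved
SELECT DATE_TRUNC('month', resolved_at)            AS month_resolved,
       COUNT(*)                                    AS issues_resolved,
       AVG(DATEDIFF(day, created_at, resolved_at)) AS avg_cycle_time_days
FROM jira_issues
WHERE resolved_at IS NOT NULL
GROUP BY 1
ORDER BY 1;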

Check out the new Looker Blocks that make it easy to get up and running with Jira analysis.

2: Broaden your focus with data from other departments

The most successful product analytics often incorporate a wide range of sources beyond traditional product data to uncover insights into product effectiveness and the broader customer experience.

Marketing, finance, and sales data can provide valuable insight to a product team. Not only does this new data provide a fuller picture of how the product is received, purchased, and used, but it can also determine the larger impact of product development choices. For example, you can see the impact of a new feature on revenue, the impact of a bug on customer retention or the impact of a marketing campaign on the types of new users.

Services like Fivetran make data consolidation with Jira easy and scalable.

Your data should be representative of your company as a whole, not just your department or function because at the end of the day, that is how your customer experiences your product and brand. By looking at the full picture, you can get a much richer and more accurate understanding of your product and your users.

3: Reveal your metrics that matter

Once you have the data you need, keep your product team focused on the task of building great products by developing clear, team-wide definitions of key performance indicators you can all agree upon. This includes not only product performance but customer engagement as well.

Find your metrics that matter, then break those larger metrics down into levers that each individual on the team can begin to influence. Monthly active users is a great North Star metric but is difficult for an individual to influence directly. Increasing usage of a sticky feature, on the other hand, is a measurable goal an individual can work towards.
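
As a sketch of what that decomposition can look like in SQL (assuming a hypothetical events table; the feature event name is illustrative):

-- North Star: monthly active users (hard for one person to move directly)
SELECT DATE_TRUNC('month', event_at) AS activity_month,
       COUNT(DISTINCT user_id)       AS monthly_active_users
FROM events
GROUP BY 1;

-- Lever: weekly usage of one sticky feature (a goal an individual can own)
SELECT DATE_TRUNC('week', event_at) AS activity_week,
       COUNT(DISTINCT user_id)      AS sticky_feature_users
FROM events
WHERE event_name = 'saved_report_run'  -- hypothetical feature event
GROUP BY 1;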

4: Apply potent metrics to every conversation

Once you have actionable metrics, make them a conscious priority for your team. Talk about these metrics in meetings and bring them to the forefront of group conversations.

In addition to in-person conversations, your product team likely uses a wide range of tools in your day-to-day workflows. Email, chat, and project management software all play a role in how we coordinate, track, and communicate. Set up alerts in these tools so the whole team knows when certain thresholds are passed or when unexpected events occur so you can act quickly.

Check out the new Looker integration with Hipchat which allows you to push data directly to the place where your team is talking.

Just having metrics isn’t enough. To make truly data-driven decisions, metrics need to be routinely baked into every conversation.

5: Shrink the distance between insight and action

Data ensures you’re informed when making decisions, but the larger the distance between the information and the action, the harder it is to keep the two connected.

Ideally, this distance would be almost invisible. If you have product issues to file, make sure you can do so right from your data tools without delay. If you are describing an insight in a Confluence page, insert that visualization directly so the reader can see the live data in context.

Making sure the data is available when and where it’s needed is crucial to building a data-driven organization. See the new Looker Actions that allow product managers to update Jira tickets directly from where they are analyzing data in Looker.

6: Treat data analytics development as if it were your own product

Data is only powerful when metrics remain relevant, which is why it’s so important to enable reporting on a single version of the truth.

Your data tool must change and grow with your business, and to do this, you can pull best practices from other development processes you already know and employ. Rely on sprint cycles to build new analytics tools and use version control to continually improve analytic outcomes, encourage collaboration, and pivot data development quickly when necessary. Learn more about how to use Bitbucket for version control in Looker here.

You can check out all of our apps in the Marketplace.

Women of Data: Interview with Margaret Rosas, Looker’s Director of the Department of Customer Love
https://looker.com/blog/women-of-data-margaret-rosas-looker

This week, we are very excited to share the perspective of Looker’s very own Margaret Rosas. Since joining Looker over five years ago, Margaret has worn many hats: from release manager to chat analyst to community organizer. Today, Margaret heads up Looker’s Department of Customer Love, the global group of chat support analysts.

In addition to scaling Looker’s DCL, Margaret is a long time pillar of her local community. She connects her two passions - entrepreneurship and technology - by helping to promote burgeoning talent across the Santa Cruz Community with organizations like TechRaising and Santa Cruz Works.

Margaret, can you tell us a bit about your background and how it led you to get into a career in data?
I’m an accidental technologist, but a native data nerd. Growing up I didn’t gravitate to technology so much, but I was always asking questions about how many people did what, what is popular in different locations, and how did the tabloids justify what their headlines claimed. I wanted to see the data long before I ever knew what the word meant.

But I didn’t connect computers to my data questions until I learned about this little thing called the internet. I was awestruck by the potential to connect people to each other - the potential of the human network had me at hello world. I started a mad dash to learn how to code and learn every internet protocol I could because I was suddenly determined to be an internet pioneer.

What advice would you give to other women who are interested in pursuing a similar career path to yours?
Understand what drives and motivates you to contribute your best self. Be a sponge for learning new technologies. Learn SQL, it’s the language of data and you will want that foundation no matter how your career unfolds.

What can women in the workplace do today to help build the foundation for successful careers?
Confidence can be hard to come by, but it is essential to leadership. If you don’t feel confident, dig in and figure out how to build your confidence. It might mean learning something new, developing expertise or simply creating affirmations to psych yourself up!

“Leading with the data can shine a light on imbalance and inequities to build a better workplace.”

What has been the biggest surprise in your career?
The fact that I ended up in technology still surprises me. I can remember how I shunned the Computer Engineering program at GW. I didn’t think about why until much later, but I vividly remember walking through the CS buildings and not seeing a single woman student or professor. In reflection, I think I felt greater confidence in the Business school because it wasn’t lacking in gender diversity. It was my passion for the internet that made me completely disregard comfort levels and set my sights on all things internet.

What are some of the biggest challenges in leading today? How are you thinking about dealing with those challenges?
I’ve had to do a lot of work in understanding when and where to use emotion. Emotion can provide fuel to accomplish great things (both anger and passion are great instigators). However, they can also be terribly distracting and ultimately debilitating when used to lead. I know it’s cliche, but this is where data can be the great equalizer. My great challenge in the coming year is to lead with data while also staying true to my emotions.

Do you think that data can help build a more diverse and equal workplace? How so?
Science tells us that diverse ecosystems are the most successful, they flourish while less diverse ecosystems flounder. Leading with the data can shine a light on imbalance and inequities to build a better workplace.

How do you think individuals can use data to advance their ideas or careers?
Gut instincts and hunches are great things to tune into. But they become even more powerful when you are able to support them with data. Develop a strong partnership between your gut instincts and your data savvy to advance your ideas.

Data of Women: Education and Literacy Around the World
https://looker.com/blog/data-of-women-education-and-literacy-around-the-world

In the spirit of International Women’s Day, we wanted to learn more about how women around the world are being prepared for future opportunities.

To learn more about this, we turned to the World Bank’s World Development Indicators dataset and pulled some metrics around School Enrollment and Literacy by gender, income, and location around the world.

School Enrollment
An education is key to the pursuit of upwards mobility and personal agency. To measure this, we analyzed what percent of the female population are enrolled in school, by country and region.

According to the World Bank’s definition of literacy, female literacy can predict the quality or preparedness of the future female labor force and can also be used as a proxy for the effectiveness of the education system. When looking at the effectiveness of an education system, the quality of education needs to be taken into account; simply having access to an education is not enough to ensure future success.

For both literacy and school enrollment, we obtained metrics from countries around the world over a period of 5 years from 2012-2016. Because we were not able to obtain complete data from all countries, not all countries in the world are represented in all parts of the dataset. We will make note of the missing pieces of data as we move through the findings.

In general, the dataset proved many common assumptions, but in unpacking these numbers, we found that there was more depth to be discovered in the data.

Access to Education by Gender


When looking at youth education, the data shows that developing regions - such as Sub-Saharan Africa and South Asia - have the highest percentage of youths not enrolled in school.

When looking at youth education by gender, the data shows that in the regions with higher youth enrollment overall (on the left of the visualization above), there is a slightly greater percentage of the female population than the male population in school. However, in the regions with the largest portion of children out of school (to the right of the above visualization), the percentage of girls in school is much lower than the percentage of boys. In other words, as the overall level of youth enrollment decreases, the enrollment of girls decreases disproportionately more than that of boys.

Another way to look at educational access is the Gender Parity Index (GPI) - a metric used by UNESCO to measure access to education for girls vs. boys in a given country. The Gender Parity Index compares the number of girls in school to the number of boys, with 1 indicating parity (equal access to education), values below 1 skewing toward boys, and values above 1 skewing toward girls.
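
As a concrete sketch, GPI can be computed directly from enrollment ratios (assuming a hypothetical school_enrollment table; names are illustrative):

-- GPI = female gross enrollment ratio / male gross enrollment ratio
-- 1.0 = parity; < 1 skews toward boys; > 1 skews toward girls
SELECT country,
       female_enrollment_ratio / NULLIF(male_enrollment_ratio, 0) AS gender_parity_index
FROM school_enrollment
WHERE year = 2016;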

Below is the GPI of Gross Primary and Secondary School enrollment across the globe.


Not all countries report this data, but of those that do, the same regional problems that are highlighted above become extremely clear - the gap between boys and girls is highest in North Africa and parts of the Middle East and South Asia.

But there is good news here too - look at all that green! We took a look at this data broken down by country to get a better idea of what the Gender Parity Index looks like across all countries:


Above, we look at a country by country comparison, which shows things are actually looking pretty equal. There are inequalities on both sides of the spectrum, but overall, it’s great to see that the availability of education is close to equal in so many places around the world.

Literacy Rate by Gender

UNESCO argues that literacy is a key factor in an individual's ability to take part in the labor market and succeed in society. With this in mind, we wanted to get a better understanding of how women fare compared to men when it comes to literacy.

Please note: This dataset did not include literacy rates for North America (USA, Canada and Bermuda) so the region was not included in the following analysis.


Looking at the above visualization, we see a clear literacy drop in certain regions. We also see that as overall literacy rates drop in a region, the literacy rates of women drop much more than those of men.

But let’s dig into this a bit more....


Another way we can slice this data is by income levels across all countries. When broken down by income tier, we can see that, overall, the literacy disparity increases substantially as income decreases.

But this pattern is not the case for all regions...


Another indicator in this data set is the Gross National Income (GNI) per capita for each country, averaged across regions. This figure shows the wealth per person in a country or region, but it is not representative of what the average person in that country or region earns.

One might assume that as GNI per capita increases, the rate of literacy would also increase, and while that is mostly true, there is a notable flip in the data.

As you can see in the visualization above, the Middle East and North Africa region is essentially tied for the second highest overall GNI per capita, but its literacy rates for women trail far behind that of other regions. And on the other hand, the Latin American and Caribbean region is far behind the first three in GNI per capita, but has high and relatively equal literacy rates.

This flip of expectations could be the sign of a priority difference between the Middle East and North Africa and Latin American and Caribbean regions. This bodes well for the future of women in the Latin American and Caribbean region, as it shows a much closer to equal investment in both male and female education.


So how are women around the world being prepared for the future?

In developing regions and the lowest income groups of all countries, women and girls are still facing a large gap in both educational availability and quality, with both school enrollment and literacy rates falling behind their male counterparts. But this issue has not gone unnoticed. Organizations like UNESCO are specifically investing in improving education around the world in order to bring the opportunity gap to a close. This data proves the need for that investment and it is encouraging to see it prioritized globally.

Overall, we were actually pleasantly surprised by much of this data. The actual percentage of Children Out of School was much lower than we had originally expected for both boys and girls. The Gender Parity Index findings and evidence of cultural shifts like those in the Latin America and Caribbean region were also promising. A global prioritization of quality education is clear in this data, and while there is still much work to be done, the numbers show how much has been invested in this cause to date.

There are many other factors about female life that this data does not cover, and we are eager to dive into more of that data in the coming weeks. Stay tuned for our next post about women in the United States based on data from the US Census.

Thanks for reading!

Women of Data: Interview with Amy Anthony, Director of Data Operations at SendGrid
https://looker.com/blog/women-of-data-amy-anthony-sendgrid

This month we are excited to launch Looker’s Women of Data interview series, which spotlights women leaders in the data field and shares their stories and advice.

We are thrilled to kick things off with a longtime member of the Looker community, Amy Anthony. Amy is the Director of Enterprise Data Operations at SendGrid, a Denver, Colorado-based customer communication platform for transactional and marketing email.


What’s your background and how did it lead you to where you are today?
I fell into software implementation consulting, and the easiest way for me to learn the front end of the software was to learn SQL and run queries to see how the data flowed through the backend and then compare to the front end software. That led to my interest in data and ultimately my career focusing on data.

What advice would you give to other women who are interested in pursuing a similar career in data?
If you have the drive and desire, you can do it, even if it's not your originally intended career. There's a lot to be said for experience and hands-on learning to grow in your desired career path, outside of traditional training. And remember, you have a perspective and voice that is diverse, so don't be afraid to speak up.

What do you think women in the workplace can do today to help build the foundation for successful careers?
Find a mentor. Cross-train in as many areas as you can to help widen your perspective. Ask questions; that's how you'll learn. If you value how someone manages, presents themselves, or interacts with others, spend time with them so you can learn.

“There's a lot to be said for experience and hands-on learning to grow in your desired career path, outside of traditional training. And remember, you have a perspective and voice that is diverse, so don't be afraid to speak up.”

What has been the biggest surprise in your career?
I think the fact that I love data the way that I do. I love the pipeline of sourcing data, transforming data, making data accessible and presenting it in a way that people can geek out on analyzing it. I love the challenge of staying current with data frameworks and methodologies and the challenge of finding ways to get the right data in front of the right person as fast as possible. I love that when you have governed, transformed business data you really can use it to learn from the past and make improvements for the present and future.

What are some of the biggest challenges in leading today?
The investment it takes to truly be a data-driven company.

How are you thinking about dealing with those challenges?
Continued investment in data governance, user adoption, educating on data best practices in providing self-service analytics, and continually finding ways to get data into the hands of our customers faster.

Do you think that data can help build a more diverse and equal workplace?
Yes. Data allows for insights, which is simply someone's ability to see something in a particular way and present and display the data effectively. The more people that do this well, the greater opportunity it opens for people to excel in this space, regardless of age, gender and/or background.

How do you think individuals can use data to advance their ideas or careers?
Without data you're just another person with an opinion (W. Edwards Deming). But when you know your business and you know how to use data effectively to explain and analyze your business, then you have just equipped yourself to successfully answer questions and solve problems.

This is a part of the Women of Data, Data of Women series. Check out this post to learn more.

Announcing Women of Data and the Data of Women
https://looker.com/blog/women-of-data-and-data-of-women

I love watching the change that happens in companies when everyone discovers how easy it is to access and understand their business data. And it’s not just because these people can now make better decisions, it’s also because it clears the way for everyone to contribute their ideas and participate in solutions. No longer can the HiPPO (highest paid person’s opinion) or the loudest, most aggressive person in the room dominate the decision-making. Everyone can diagnose the issues they are seeing and share their ideas as well. This can often bring a discussion back to where it needs to be: how to make the business more successful.

In the end, a more data-driven culture also creates a more inclusive culture, allowing many different perspectives to be heard. And since March is National Women’s History Month, we are excited to announce a new project highlighting these diverse perspectives: Women of Data and the Data of Women.

First, Women of Data will be a series of interviews spotlighting the growing group of female leaders in the data space who are building data cultures at their organizations. Second, Data of Women will be a series of posts examining data about the lives of women today - in the workplace, in society, and around the world. We will be looking at trends and sharing interesting findings about what is actually happening and where we can work together to improve the lives of women around the world.

We are kicking this off with new posts every week in March, and we will continue to highlight the Women of Data and the Data of Women throughout the year. We hope you enjoy them!

Dashboard Confessional: I’m addicted to my Demand Generation Program Dashboard
https://looker.com/blog/dashboard-confessional-demand-generation

Companies experiencing high growth tend to set aggressive quarterly goals focused on generating new leads.

Ensuring your company maintains lead quality while growing lead volume can be a tricky balancing act.

For many, maintaining a cost-effective lead budget while scaling programs can be a challenge. Without the ability to calculate ROI and cost-per-lead on a daily or weekly basis, it's hard to know where to invest and where to cut back.

Looker has provided an operational way to look at program performance and growth that has proven to be a real game-changer.

As a Program Manager at Looker, I regularly use the dashboard shown below. It's helped me to grow the program 5x over the last couple years, in a cost-effective way. I am strangely addicted to it and find myself checking it and gazing at it multiple times a day.

Digging into the Dashboard

From a top-level perspective, we keep a close eye on how far we are through the quarter, as we are on quarterly targets from a planning and execution standpoint.


Next, we have the aggregate volume of leads per week, with a line graph of the conversion rate. This is helpful for knowing how the program is doing on a weekly basis.

If there is a dip one week, I usually check in with our SDR manager to see if there is an operational issue in delivering the leads or extra enablement needed on our end.


The cohort analysis shows which vendors are responsible for the majority of weekly lead growth. This is helpful in knowing which vendors to push or raise PPC bids on.


Taking a step back, we have a quarterly view of which vendors are doing the heavy lifting through Donut Multiples. Naturally, the goal is to diversify the vendors we use so that if one under-delivers, the program and the SDRs’ meeting goals don’t suffer.


Lastly, there is a quarterly accumulation of all of the metrics into one table. Here I can check the number of total leads, meetings, and velocity (the average time it takes a lead to go from entering the database to becoming a booked meeting for sales). I can look at individual conversion rates, and if they seem to drop, I investigate why that might be happening.
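
A minimal sketch of that velocity calculation, assuming a hypothetical leads table with creation and meeting timestamps:

-- Velocity: average days from entering the database to a booked meeting
SELECT DATE_TRUNC('quarter', created_at)                 AS lead_quarter,
       COUNT(*)                                          AS total_leads,
       COUNT(meeting_booked_at)                          AS meetings,
       AVG(DATEDIFF(day, created_at, meeting_booked_at)) AS avg_velocity_days
FROM leads
GROUP BY 1
ORDER BY 1;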


Long Term Growth

Over the past couple years, this program has seen an enormous amount of growth, and I attribute all of that growth to the meticulousness of this dashboard.



We use this dashboard operationally and to review our tactics at the end of each quarter. By constantly monitoring the pulse of the data - and taking a step back when we need to - we are able to use our spend to the best of our ability, and ultimately do our part to drive the business forward.

2018 Marks the Debut of Looker in the Gartner Magic Quadrant for Analytics and Business Intelligence Platforms
https://looker.com/blog/2018-gartner-magic-quadrant-for-analytics-and-business-intelligence-platforms

At Looker, we are honored to debut in the 2018 Gartner Magic Quadrant for Analytics and Business Intelligence Platforms. Gartner is a leading IT research and advisory firm that helps businesses of all sizes evaluate technology and make informed decisions. Being included in the report at all is a pretty big deal and being the only new entrant this year is something we’re really proud of.

I encourage you to download and review the complimentary report here.

Personally, I’m excited that Looker is now recognized as one of the biggest names in BI and analytics... I believe this is a clear validation of Looker’s unique capabilities and our innovative approach to flexible, agile analytics built on a data platform that takes advantage of the power of today’s technology.

Analytics Evolved

It’s important to note that Looker is more than just a business intelligence tool – we’re focused on bringing people together and connecting you to your data, so you can make more informed choices, share and explore insights, and dramatically improve your business. With Looker your analysis is flexible, extensible, reusable, sharable, and scalable – and I believe our satisfied, successful customers are a significant part of why we’re included in the Gartner Magic Quadrant this year.

Since its inception, Looker has focused on providing exceptional support for our users, enabling new users of the Looker platform to come up to speed faster and to make better informed, more authoritative decisions based on the freshest, most complete data available. The 2018 Gartner Magic Quadrant for Analytics and BI Platforms report reflects these values.

At Looker we continue to evolve our product to provide fresher, more impactful analysis you can trust. I believe this inclusion is a recognition of Looker’s technology and of our mission as a data platform. If you’re interested in learning more, you can download this valuable report at no cost and learn more about why Looker is so proud to be included with the top companies in BI and analytics.

Looker Achieves SOC2 Type 1 Compliance
https://looker.com/blog/looker-achieves-soc2-type-1-compliance

Looker is continuously advancing and making improvements to its security programs, policies, and procedures. Today, we are pleased to announce that our SOC2 Type 1 Report for the Looker Cloud Hosted Data Platform is complete and available for customers and prospects. The assessment was conducted by independent auditors, The Cadence Group, who specialize in compliance across multiple industries.

The SOC 2 report includes management’s description of Looker’s trust services and controls as well as Cadence’s opinion of Looker’s system design.

Looker is committed to implementing all necessary security controls, and to ensuring that our customers and prospects trust the Looker Data Platform. To that end, we are already working on our SOC2 Type 2 report to confirm the operational effectiveness of our controls.

Analytics for All: A Smarter Way to Work with Data
https://looker.com/blog/analytics-for-all

Today, Looker is announcing the availability of new features that will make it easy for everyone in an organization to have access to the freshest data in order to get rapid, clear answers to nearly any question.

Historically, businesses have struggled to get value out of their data. In a 2015 report, “Digital Insights Are the New Currency of Business,” Forrester found that only 29% of businesses are connecting analysis to action.

How are organizations today missing out on getting the full advantage of the insights hidden in their data?

There’s a general understanding that data hides information of great value, but collecting data is only half of the equation.

In order to get better answers, you need to ask better questions. The best questions are asked by people with deep experience with the business -- and while these people are frequently experts in their area, they’re not usually experts in analytics. So, who are they?

The answer is simple: they’re the data consumers in your company.

Everyone is a data consumer

Data pervades our everyday lives so much that we sometimes forget that we’re pretty good at understanding data already.

We all consume data from many sources including:

  • The smartwatches on our wrists that show us how far we’ve walked
  • The speedometers in our cars that help us monitor how fast we’re driving
  • The scores for our favorite sports teams that allow us to see how they’re performing

Data allows us to understand things about ourselves that were previously hard to know, such as if we’re getting healthier or lazier, if we’re following the rules of the road, or if we should cheer or cry after a sporting event.

And when we understand the answers hidden in data, we want to ask more questions.

For example...

  • If you’ve been counting your steps for a while, you might also start to monitor your eating habits or sleep patterns
  • If you’re interested in travel time, you might also invest in a GPS that tells you precisely when you’ll reach your destination
  • If you really like sports statistics, you just might start to read sports analyses like this

And ultimately the more questions you ask of your data, the better answers you get. The better answers you get, the more information you have to improve your life.

How do we better focus on business data?

The question remains: if it’s so simple to get more out of your data, why are companies still struggling to get value out of the massive amount of data they’ve been collecting?


Businesses often have to make a hard choice between empowering their data team or empowering data consumers.


Looker fundamentally changes the way data teams and data consumers work, by helping them work together.

The data consumers bring their questions, expertise, and understanding of the context that generates the data. The data teams bring data that’s fresh from the source and vetted with analysis in Looker, and they work hard to make sure data consumers can keep asking questions.

Working together, they get better answers to their questions and change the direction of their business.

How is Looker contributing to an understanding of data?

Looker continues to invest in our best-in-class data platform that makes it even easier for data consumers to understand their data and ask more questions of it.

To really understand data, users need the right context, ways to communicate with their peers that make sense, and connections between answers, insights, and their existing workflows. And today, Looker is taking dramatic leaps forward for business users on all of these fronts.


To get the answers they need, data consumers need to be able to jump right into analysis at any given time. After all, it’s data they’re familiar with -- analytics tools should help them immediately understand what’s important and what’s not.

That’s why we’re proud to announce new context-relevant homepages in Looker that make it easier for data consumers to:

  • Understand what data people on their team are using to make better decisions
  • Be alerted by critical announcements from their data team about new data sources (and new questions they could be asking)
  • Manage their own data content better

Our new homepages are intelligent and use a sophisticated algorithm to determine the most relevant data for each user. We believe these new homepages will make it even easier for users to develop an understanding of their data.


Context alone, of course, isn’t enough. We also need to communicate our findings with our peers to review and revise our answers.

Critically, the most important medium that a data consumer uses to communicate with others is visualization. And we wanted to ensure that the number of ways data consumers can visualize data is unlimited, so we’re launching a whole custom visualization library, and a workflow that will enable companies to add their own custom visualizations.

We’ve also made it easier than ever to send important data from Looker to your colleagues via email, Slack, or virtually any other medium of communication, with a redesigned scheduler that makes data delivery even easier for everyone.


The final piece of the puzzle is connecting data consumers to the data sources that they need to analyze their data. And we’ve found that the best way to do this at Looker is to connect data consumers to data experts everywhere, through our blocks and applications, built in conjunction with the best companies in the business.

Maybe you use Google Adwords to run ad campaigns. What if you could immediately deploy analysis built by the Google Adwords team to analyze your data? Or maybe you use Atlassian to run your organization. What if you could deploy workflows on JIRA instantly, and automatically from Looker?

And once you’re connected with the best analysis from the best analysts in the industry, you can use our new data actions to make your analysis immediately actionable. Update records in Salesforce or JIRA, or send a discount code to your users through Twilio, all at the push of a button in Looker.

A more informed world

We believe that when your whole organization can ask better questions, you can get even better answers. And when you get better answers, you can make better decisions.

And one of those decisions just might change the direction of your business.

Super Bowl LII: A Closer Look at the Matchup
https://looker.com/blog/super-bowl-2018-predictions-analysis

As Super Bowl Sunday 2018 approaches, the matchup between the New England Patriots and Philadelphia Eagles is generating lots of different storylines. Tom Brady and Bill Belichick are going for their astonishing sixth championship together, which would tie the all-time team record held by the Pittsburgh Steelers. The Eagles are trying to overcome a devastating injury to MVP-hopeful Carson Wentz with the incredible redemption story of Nick Foles and three straight wins as an underdog. And of course, Sunday will mark a rematch of the great Super Bowl XXXIX matchup, 13 years later.

While there are many different angles to try and figure out who has an edge in the matchup, we wanted to take a closer look at what the numbers tell us by exploring NFL game and player data from Armchair Analysis.

Overall Team Comparison

At the surface level, the two teams’ stats are remarkably similar (and it’s clear why both have dominated the league). Despite early-season struggles by New England and the late-season injury to Wentz, both teams ended the regular season at 13-3 and as the #1 seed in their conference. You’ll notice a slight dip in production for Philadelphia after Week 14 when Foles took over, but that rust was shaken off emphatically with two impressive postseason wins over the Falcons and Vikings.

Both teams boast high-powered offenses with similar point totals and scoring margins of +187 (NE) and +198 (PHI). The Patriots hold an advantage on total offensive yards and touchdowns, while the Eagles have the edge on defensive yards allowed and interceptions.

There’s a reason these teams were able to make it to Minnesota, and the margins between them are slim.

Advantage: NE on Offense, PHI on Defense

Games Against Common Opponents


Zooming in a bit from the statistics above, it’s always interesting to see how two teams fared against common opponents when dissecting a matchup. In the 2017 season, the Eagles and Patriots shared five of them: Atlanta, Carolina, Denver, Kansas City, and the Los Angeles Chargers.

Both teams defeated the Falcons, with the Patriots winning much more comfortably. They both dominated the Denver Broncos, with the Eagles victorious by a field goal more. The LA Chargers proved a tough opponent for both teams, but both the Eagles and Patriots were able to eke out wins.

The Kansas City Chiefs might be watching Sunday’s game wondering what could have been as they were the only team that was able to defeat Philadelphia and New England this season, including a shocking 15-point victory over the Patriots in Week 1.

The only outlier was against the Carolina Panthers. While the Eagles handled their business with a 5-point victory, the Patriots found themselves on the wrong end of a last-second field-goal loss.

Take it with a grain of salt, but in this five-game sample the Eagles performed slightly better.

Advantage: Philadelphia

Quarterback Matchup


There’s a reason why 12 of the past 20 Super Bowl MVPs have gone to the winning quarterback. It’s the most important position in football and you have to figure that the play of Nick Foles and Tom Brady is going to be a huge factor in the end result.

Since this game is being played in a dome at U.S. Bank Stadium and both teams play outdoors at their home fields, we’ve filtered this comparison to only look at their performance in indoor games (Foles has 13 in his career vs. 27 for Brady).

When looking at passer rating, completion percentage, passing yards per game, and passing touchdowns per game, Brady holds an edge over Foles in all four categories.

But these stats don’t account for the vast gap in experience between the two. Foles has 3 playoff starts and 0 Super Bowls in comparison to Brady’s 36 and 5, respectively. Brady’s stats advantage, plus that huge experience gap, gives him a major edge on Sunday.

Advantage: New England

Performance in the Clutch


When the best teams from each conference meet with the Lombardi Trophy on the line, you can typically expect a close game. Seven of the past 10 Super Bowls have been decided by a touchdown or less, so strong performance in the clutch is huge.

Clutch situations can be defined in many ways, but for this data set we looked at any games in the 2017 season where the game score difference was within a touchdown or less in the 4th quarter or overtime.
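
A sketch of that filter in SQL (assuming a hypothetical table of scores by quarter; treating “a touchdown or less” as an 8-point margin, which allows for a two-point conversion):

-- Games within one score in the 4th quarter or overtime
SELECT DISTINCT game_id
FROM quarter_scores
WHERE quarter >= 4                        -- 4th quarter or OT
  AND ABS(home_score - away_score) <= 8;  -- one touchdown or less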

Not surprisingly, both teams fared well with the Patriots boasting a record of 7-3 and the Eagles having an even better 9-3. Both teams are battle-tested in the clutch and have proven they have the ability to win when the pressure is on. While the Eagles have two more wins under this definition, I don’t see that as enough of a disparity to consider it an advantage over the Super Bowl veteran Patriots.

Advantage: Even

Our Best Guess

So that’s what the data says.

As you can see, it’s pretty clear this matchup is a toss-up. The models of FiveThirtyEight have the Patriots as a 2.5-point favorite, and we tend to think it will be that close as well. While there is something to be said for the hungry first-timer, the seasoned Pats have a leg up in this matchup. When in doubt, we lean towards the team that has done it five times before.

Prediction: New England 27, Philadelphia 24

Amazon Redshift Announces Support for Late Binding Views
https://looker.com/blog/how-to-use-late-binding-views-with-amazon-redshift-and-looker

Amazon Redshift recently announced support for Late Binding Views.

A Late Binding View is a view that is not tied to the underlying database objects that it references. It is particularly beneficial for Amazon Redshift users that are storing current or more frequently used data in Redshift and historical or less frequently used data in Amazon S3.

Using Late Binding Views, you are able to create a single view that includes data in both Amazon Redshift and Amazon Redshift Spectrum External Tables, providing a single, comprehensive data set for your reporting needs without users having to worry about whether data is stored in Amazon Redshift or Amazon S3. Late Binding Views are the only type of view supported by Redshift Spectrum.

Prior to adding the functionality for Late Binding Views, you could only create a view that referenced existing database objects, and you could not drop the objects referenced within the view without first dropping the view or dropping the table using the CASCADE clause. In cases where the table was being dropped and recreated, any associated views would need to be recreated as well.

Late Binding Views don’t verify the objects referenced until the view is queried. This means that you can create views against tables that may not exist yet, and also that you can drop any of the database objects referenced by the view without first having to drop the view.

How they Work

To create a Late Binding View, include the WITH NO SCHEMA BINDING clause when creating your view. When creating a view with the WITH NO SCHEMA BINDING clause, all tables and views referenced in the SELECT statement must be qualified with a schema name.

To create a view that includes data from both Redshift and S3, use a Late Binding View. For example:

CREATE VIEW all_web_logs_vw AS
SELECT * FROM public.web_logs
UNION ALL
SELECT * FROM spectrum.historical_web_logs
WITH NO SCHEMA BINDING;

In this example, a Late Binding View is required because the view references an External Table.

Even when no External Tables are included, it may still be convenient to use the WITH NO SCHEMA BINDING clause in cases where any of the tables referenced in the view do not exist at the time the view is created, or if the tables referenced by the view may be dropped and recreated.
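
A minimal sketch of that drop-and-recreate pattern (table and view names are illustrative):

CREATE TABLE public.web_logs (id INT, url VARCHAR(256));

CREATE VIEW web_logs_vw AS
SELECT * FROM public.web_logs
WITH NO SCHEMA BINDING;

DROP TABLE public.web_logs;  -- succeeds without CASCADE; the view remains
CREATE TABLE public.web_logs (id INT, url VARCHAR(256), referrer VARCHAR(256));
SELECT * FROM web_logs_vw;   -- resolves against the recreated table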

Amazon Redshift has also provided a new System Information Function, pg_get_late_binding_view_cols, which provides metadata related to all of the columns in all Late Binding Views. For detailed information about the usage of the new function, check out this page from AWS.
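
Per the AWS documentation, the function is invoked with a column definition list; a typical call looks like this:

SELECT *
FROM pg_get_late_binding_view_cols()
     cols(view_schema name, view_name name, col_name name, col_type varchar, col_num int);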

Benefits for Looker Users

Besides being able to query both current and historical data in a single view as detailed above, we are in the process of developing some new Looker functionality that takes advantage of Late Binding Views, so stay tuned for more updates!

For More Information

To learn more about Late Binding Views, please refer to the CREATE VIEW section of the Amazon Redshift SQL Reference Guide.

The Data Has Spoken. Amazon’s HQ2 Should Be In....
https://looker.com/blog/the-data-has-spoken-amazon-hq-city-should-be

If you’ve been on the internet in the past few months, you’ve already heard that Amazon is opening a massive new facility. Hundreds of cities across the U.S. have applied, hoping to get the chance to be the retail giant's next headquarters.

Last week, Amazon revealed their top 20 contenders, and the list ranges from long-established metropolitan areas like New York and Chicago, to emerging tech hubs like Denver and Austin. Bold suburban contenders like Montgomery County, Maryland and Northern Virginia also threw their hats into the ring in a noble attempt to bring much-needed revenue and jobs to their communities.

To help Amazon work through this varied list, we thought we’d share some data on the matter, direct from a reliable source: The 2015 Census.

The 2015 Census data (from this Data Block) offers a look into demographic data revolving around population, education, income, and diversity at a very granular level.

How We Broke It Down

Of the metrics presented in the Census data, we selected four key factors to examine for each of our Amazon contenders (with educational achievement scored separately for bachelor's and master's degrees, for five scored columns in total). These are factors that any company looking to open an office of this size should consider.

  • Population density: Does a city possess growing room to handle a large influx of new workers?
  • Housing Vacancies: Will it be easy for new and relocated hires to find housing?
  • Diversity: Will the gender and racial distribution across each of these locations attract outside talent?
  • Education Achievement: Will Amazon be able to leverage the existing educated population of the location?

We took these factors and evaluated the top contenders in the nation from Amazon’s shortlist. For every factor, each city was assigned a score from 1-5 based on the quintile that the city fell into. The sum of these scores was used to determine the overall rank of each city.
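
A sketch of how quintile scoring like this can be done with SQL window functions (assuming a hypothetical table of per-metro census metrics; all names are illustrative):

-- NTILE(5) buckets each metro into quintiles; the ORDER BY direction sets
-- whether a high or low raw value earns the higher score
SELECT metro_area,
       density_score + housing_score + diversity_score AS partial_total
FROM (
    SELECT metro_area,
           NTILE(5) OVER (ORDER BY population_density DESC) AS density_score,   -- less dense = more room = higher score
           NTILE(5) OVER (ORDER BY housing_vacancy_rate)    AS housing_score,   -- more vacancies = higher score
           NTILE(5) OVER (ORDER BY pct_non_white)           AS diversity_score  -- more diverse = higher score
    FROM metro_census_metrics
) scored
ORDER BY partial_total DESC;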

And with 24 out of a possible 25 points, the Chicago Metropolitan Area emerged as a clear winner. Chicago was closely followed by Los Angeles and Atlanta, both with 22 points.

Chicago’s highly educated population and large amount of available housing makes it a perfect fit for Amazon’s expected 50,000-person workforce.

While these factors were not comprehensive, they provide us with a better lens to determine the overall suitability of a city to be the next Amazon HQ. Take a look at our rubric to see how our top pick Chicago ranked against other cities:

Please Note: We did this analysis with the US Census Data Block, so we were unable to include Toronto

Metropolitan Area     | Population Density | Housing Vacancies | Diversity (Race) | Education (Bachelors) | Education (Masters) | Total Score
Chicago, IL           | 5 | 5 | 4 | 5 | 5 | 24
Atlanta, GA           | 5 | 4 | 5 | 4 | 4 | 22
Los Angeles, CA       | 3 | 5 | 4 | 5 | 5 | 22
Dallas, TX            | 5 | 4 | 3 | 5 | 4 | 21
New York City, NY     | 1 | 5 | 5 | 5 | 5 | 21
Miami, FL             | 4 | 5 | 3 | 4 | 3 | 19
Philadelphia, PA      | 3 | 4 | 3 | 4 | 4 | 18
Boston, MA            | 2 | 4 | 2 | 4 | 5 | 17
Denver, CO            | 5 | 3 | 1 | 3 | 3 | 15
Northern VA           | 2 | 2 | 4 | 3 | 4 | 15
Pittsburgh, PA        | 4 | 3 | 1 | 3 | 3 | 14
Austin, TX            | 4 | 2 | 2 | 3 | 2 | 13
Columbus, OH          | 3 | 3 | 2 | 2 | 2 | 12
Indianapolis, IN      | 3 | 3 | 2 | 2 | 2 | 12
Montgomery County, MD | 2 | 1 | 4 | 1 | 3 | 11
Nashville, TN         | 4 | 2 | 2 | 2 | 1 | 11
Raleigh, NC           | 2 | 2 | 3 | 2 | 2 | 11
Newark, NJ            | 1 | 1 | 5 | 1 | 1 | 9
Washington DC         | 1 | 1 | 5 | 1 | 1 | 9

How did we arrive at these numbers? Let’s dive deeper into the census data.

Population Density

Amazon’s new office will be designed to hold 50,000 employees. While in reality, this population will be a combination of local hires and relocated hires and their families, an upper threshold for the local population increase could, theoretically, be as much as 100,000+ people.

Which of the cities above has enough growing room to handle that large of an influx of new people?

Denver tops the list with the lowest population density (average number of people per square mile) of the group, followed closely by Austin. On the other end of the spectrum are New York City and DC, cities which already possess an extremely high population density.

Available Housing Infrastructure

Equally as important as the existing population is the availability of housing for the relocated hires.

So what about housing vacancies in these cities?

Chicago, our top pick, and Miami have by far the most available housing and could easily handle a significant increase in households. Cities like DC or Montgomery County on the other hand may have to rapidly adjust and invest in real estate projects to accommodate new Amazon workers.

Educational Level

Amazon is very unlikely to staff this new outpost entirely with relocated hires, so another important consideration is the existing workforce of each city. We decided to look at the education level of each city’s current population to see what Amazon can potentially tap into.

How many Undergraduate and Graduate degree holders already live in these cities?

Areas like Northern Virginia and Montgomery County tend to have a higher percentage of the population (>30%) with a Bachelor’s degree or higher. But for the sake of Amazon’s move, we ranked metropolitan areas with a bigger population of degree holders (even with a slightly lower percentage of the population holding degrees) higher on the list. Chicago and LA are both looking pretty good with some of the highest numbers of college grads in the group!

Diversity (Gender and Race)

Amazon’s ideal workforce is made up of a diverse group of people. All of the cities that we’ve looked at have an almost equal gender distribution -- so we felt it best to remove gender from our ranking algorithm. The same doesn’t necessarily hold true for race.

Could a particular location's lack of diversity deter Amazon from attracting and relocating the best talent?

New York City and Washington DC top the diversity list, both with over 50% of their populations made up of non-white individuals. But the drop-off on this list is steep. Our top pick, Chicago, ranks in the second-highest quintile with only 32% of the population identifying as non-white, and Pittsburgh is in last place, with only 11% of its population identifying as non-white.

While this is not a definitive pro-con evaluation of the potential cities for Amazon’s HQ2 -- other considerations like tax benefits and housing prices come to mind -- our rankings provide a stepping stone to look at these cities more objectively and a way for Amazon to keep both its employees and the local community in mind while expanding.

All of this data is available right now for free for all Looker customers. Check out Looker Data Blocks and easily connect to Looker Hosted offerings in a matter of minutes! Or reach out to our support team to get started.

Power Up your Zendesk Analytics with Looker Blocks and Blendo
https://looker.com/blog/power-up-your-zendesk-analytics-with-looker-blocks-and-blendo

Successful businesses know never to underestimate the importance of top customer support. As an integral aspect of the customer experience, effective support can contribute significantly to:

  • Customer Retention. A great customer experience and efficiently addressing issues inspires loyalty.
  • Product Development. Especially relevant for startups, but also valid as a company scales up. A great source of product validation and information for the product roadmap.
  • Customer Acquisition. A pleasant onboarding process increases the chances of converting prospects into paying customers down the road.

Tools like Zendesk are helping companies of all sizes provide stellar customer experiences through chat applications. But in order to ensure that the best support is being provided, just having a tool to handle the support requests isn’t enough. Companies also need a way to take a step back and find ways to evaluate the quality of their support.

As with every other function in your company, from product development to marketing, the best way to do this is by measuring your activities and keeping track of a number of customer support metrics. For Zendesk users, this tracking happens automatically in the application, but what you can do with that data inside the application itself is limited.

Zendesk reporting is great for providing basic answers, but what can you do if you want to ask more questions?

Start by Getting Your Data out of Zendesk

Sure, you can simply download the data from Zendesk and load it into Excel, but this can quickly become a nightmare to maintain.

Tools like Blendo give you one-click data integration to sync your Zendesk data into your database and present it with a schema optimized for analysis. By bringing these disparate data sources into a central data warehouse, you can build your reporting and data analytics infrastructure in hours or minutes.

Great! Now what?

Start Asking Questions

Once you have the data in a database, it’s time to start digging.

The data from Zendesk can be organized into three broad categories: Customer Experience, Customer Satisfaction, and Workload.

Customer Experience is all about communicating effectively with your customers. Some of the most fundamental metrics are your team’s response time to customers’ inquiries, your customers’ total waiting time, and the amount of time your support team needs to resolve an issue.
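
As an illustration, reply and resolution times can be computed from a synced tickets table (a sketch only; the actual schema Blendo produces may differ):

SELECT DATE_TRUNC('week', created_at)                  AS ticket_week,
       AVG(DATEDIFF(hour, created_at, first_reply_at)) AS avg_first_reply_hours,
       AVG(DATEDIFF(hour, created_at, solved_at))      AS avg_resolution_hours
FROM zendesk_tickets
GROUP BY 1
ORDER BY 1;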

You’re now able to easily measure the performance and response of your Customer Support team.


One of the best ways of measuring customer satisfaction is through surveys that you send to your customers, asking for feedback on the quality of the customer support they have received so far. Using the Zendesk data, performance indicators for measuring overall and per-agent customer satisfaction can be easily constructed.

Use your Zendesk data to identify how your agents or support groups perform over time.


Last, but not least, come the metrics related to your support team’s workload. The most important metrics include the number of tickets assigned to each work group, the ticket exchanges between different groups, and some more basic statistics like the number of new issues solved, or one-touch tickets.

Identify your busiest days and hours and optimize the allocation of human resources to best serve all of your customers.


Want to get started in minutes? Check out the Looker Block for Zendesk by Blendo. This block provides all of the metrics mentioned above and can be deployed quickly and easily.

Refer to Looker’s Discourse site for more details on Looker Blocks.

Try Blendo

To learn more and start syncing your Zendesk data with your data warehouse, visit Blendo and create an account today.

Amazon Redshift announces support for LISTAGG DISTINCT aggregate function
https://looker.com/blog/how-to-use-listagg-distinct-amazon-redshift-looker

Amazon Redshift recently announced support for the LISTAGG DISTINCT function. If you’ve never used the LISTAGG function before, it’s a handy way to aggregate all of the values for a specified dimension into a single string, and as the name implies, LISTAGG DISTINCT ensures that the string contains only unique values rather than all values.

For example, let’s say that we want to do some targeted marketing for our local guitar store based on what brand(s) of guitar a customer owns. We’d like to produce a report that shows a customer along with each unique guitar brand that they own. Below is a sample of our customer data:


Customer ID | First Name | Last Name | City         | State          | Country
1           | Robbie     | Robertson | Toronto      | Ontario        | Canada
2           | Link       | Wray      | Dunn         | North Carolina | USA
3           | Rickey     | Medlocke  | Jacksonville | Florida        | USA


Customer ID | Brand      | Model
2           | Danelectro | Longhorn
2           | Gibson     | SG
2           | Gibson     | Les Paul
1           | Fender     | Stratocaster
1           | Fender     | Telecaster
1           | Martin     | D-28
3           | Gibson     | Explorer
3           | Gibson     | Firebird
3           | Gibson     | Les Paul
3           | Fender     | Stratocaster

To generate a list of each customer along with the guitar brands that they own, we could use the LISTAGG function to produce the following results:

SELECT
    customers.first_name  AS "customers.first_name",
    customers.last_name  AS "customers.last_name",
    listagg(brand, ', ') within group (order by brand) AS "customer_gear.brand_list"
FROM public.customers  AS customers
LEFT JOIN public.customer_gear  AS customer_gear ON customers.customer_id = customer_gear.customer_id
GROUP BY 1, 2


This report is helpful, but as you can see, brands may show up multiple times for the same customer. The introduction of the LISTAGG DISTINCT function allows us to clean up this list so that each brand is listed no more than once for each customer:

SELECT
    customers.first_name  AS "customers.first_name",
    customers.last_name  AS "customers.last_name",
    LISTAGG(DISTINCT customer_gear.brand, ', ') WITHIN GROUP (ORDER BY customer_gear.brand) AS "customer_gear.brand_list"
FROM public.customers  AS customers
LEFT JOIN public.customer_gear  AS customer_gear ON customers.customer_id = customer_gear.customer_id
GROUP BY 1, 2


This list, with duplicates removed, is exactly what we need for our marketing efforts.

Using LISTAGG DISTINCT with Looker

LookML has a measure type designed specifically to take advantage of the LISTAGG DISTINCT aggregate function. Using the LookML list measure, it’s simple to add your own aggregated lists to reports. For the report above, we added the following measure to the customer_gear view file:

  measure: brand_list {
    type: list
    list_field: brand
  }

Looker generates all of the necessary SQL for you, so now all of your users can easily take advantage of this new functionality.

To read more about using LISTAGG DISTINCT with Redshift, check out AWS’s documentation here.

<![CDATA[Announcing JOIN The Tour]]> https://looker.com/blog/announcing-join-the-tour https://looker.com/blog/announcing-join-the-tour It’s incredible to think that after only two years, JOIN – Looker’s annual user conference – has generated such a buzz in the data community. Our latest event in San Francisco drew over 800 attendees from around the world, and we expect JOIN 2018 to be even larger!

It’s clear that data professionals are finding JOIN to be a tremendous resource for all things data, so in an effort to connect with data enthusiasts everywhere, we’ve decided to take the show on the road.

Introducing JOIN The Tour!

We’re excited to announce the latest “event” to evolve from our JOIN conferences: JOIN The Tour.

Kicking off in London on 13th February, JOIN The Tour will roll on to host events in 17 cities across Europe, Israel and North America from February through June 2018.

As Looker grows, we hope to connect with our peers on a local level. JOIN The Tour will help us to begin building a sense of community through sharing and collaborating around data analytics and business intelligence.

What to Expect

JOIN The Tour will present thought-provoking content and interactive sessions, adapted from the best of our JOIN conferences.

Hear from the Local Looker Community

This is an exciting opportunity for attendees to meet with their peers and interact with industry thought leaders. Each stop will feature regional Looker customers, on hand to present how they are leveraging Looker to help their companies become more data-driven.

Meet Data Experts

Experts from Looker offices across the U.S. (NYC, San Francisco, Chicago, and our HQ in Santa Cruz) will be there to ensure all regions – and questions – are covered.

Attendees will have the opportunity to meet with a variety of Looker experts hailing from all departments - all in one place. A one-stop-shop for all of your data questions, ideas, and needs, JOIN The Tour is one event you don’t want to miss.

You can find a sneak peek of what to expect here.

Spark New Ideas

If there’s one thing we’ve learned from the inaugural JOIN conferences in San Francisco and New York, it’s that spending time in a room with some of the smartest people in the industry provides endless inspiration and new ideas to bring back to your organization. From building scalable ETL processes to making dashboards your users will love, a day at JOIN is sure to send you home with something new to try.

Space is limited for all events, so see when JOIN The Tour is coming your way, and reserve your seat today.

We can’t wait to see you!

<![CDATA[7 Reasons Looker Built a New Language for Data]]> https://looker.com/blog/7-reasons-looker-built-a-new-language-for-data https://looker.com/blog/7-reasons-looker-built-a-new-language-for-data To many people, analytics tools all look the same. Some dashboards, a few reports, a way to slice data. That’s because the real differentiators lie behind the scenes, where the great tools are separated from the merely good ones (and the not so good ones).

Behind Looker’s pretty face is something quite revolutionary: LookML. And even though the vast majority of users will never see a line of LookML, this new language for data is what makes Looker uniquely powerful, agile, and trustworthy--for everyone.

LookML has plenty of fans (me among them), but not everyone is immediately sold. When skeptics hear about LookML, they usually have one of two reactions:

  1. “Why invent a new data language when SQL already exists?” or
  2. “I have to write code (😱😩😭)? Can’t I use a graphical user interface (GUI) instead?”

And if you’re not familiar with LookML, these questions are totally reasonable. Plenty of people have tried to replace SQL or build graphical programming languages and failed. But knowledge and understanding of that history is exactly why we’re confident this is the right path.

Firstly, it’s important to know that LookML isn’t a replacement for SQL—it’s a better way to write SQL.

That distinction is critical, because while general-purpose programming languages have proven vibrant and dynamic, the world of data languages has been pretty stagnant. Assembly became C became Java and Python and Ruby—letting programmers focus on writing great programs instead of low-level tasks like managing memory. But data languages haven’t seen the same evolution--data analysts still mostly write SQL by hand, worrying about low-level concerns every time they write a query.

When Lloyd Tabb, Looker’s founder, started designing LookML, he began with the belief that data languages needed to evolve. But that didn’t lead him to a graphical interface. Lloyd had seen many graphical languages, and he’d learned that they are inherently imprecise, inefficient, and bad at expressing complexity simply. That’s why they’re mainly used as teaching tools today, rather than for building production systems.

But Lloyd’s vision for LookML wasn’t just about avoiding pitfalls, it was about making data users’ lives better. So to spell out our thinking more clearly, let me lay out seven reasons we believe LookML is a major step forward for data languages:

  1. LookML is all about reusability. Programmers have a mantra: Don’t Repeat Yourself. But most data analysis is full of repeated work. You extract raw data, prepare it, deliver an analysis, and...then never use any of that work again. This is hugely inefficient, since the next analysis often involves many of the same steps. With LookML, once you define a dimension or a measure, you build on it, rather than rewriting it again and again (see the sketch after this list).

  2. It’s easy to learn. LookML is constructed around SQL, specifically because SQL is ubiquitous. That lets us tap into a huge pool of people who are already familiar with it. A graphical language would force everyone to start learning from scratch. But the basics of LookML can be picked up by SQL-speaking data analysts in a couple of hours.

  3. LookML is version controlled. Doing data well requires tracking what was changed when, by whom, and why. But graphical languages simply don’t allow that kind of modern version control. Previous attempts to incorporate version control into data systems have been clunky, proprietary, and have actually impeded collaboration. LookML provides version control using Git, the programming industry standard.

  4. It’s simple to debug. If you’ve ever tried debugging a graphical language, you know the pain of trying to untangle a bird’s nest of connectors. LookML developers benefit from a full-fledged Integrated Development Environment featuring auto-completion, error highlighting, contextual help, and a validator that helps you fix errors. None of that would be possible with a graphical language.

  5. LookML is built for today’s complex data. Trying to “dumb down” your data tools doesn’t make complexity go away. Bad tools just make complexity harder to deal with. LookML is a powerful tool for power users that helps them capture that complexity--whether inequality joins, many-to-many relationships, multi-level aggregation, or anything else--and render it invisible to end users.

  6. It fosters collaboration. Software developers have built immensely powerful tools for collaborating on complex projects, and data products need these too! You can’t serve your whole company’s data needs by yourself. But SQL is inherently disorganized and impenetrable, making true collaboration impractical. LookML is architected to make collaboration natural and easy.

  7. LookML empowers data professionals to empower others. LookML is a tool for data analysts and developers, not end users. By helping analysts get the knowledge about what their data means out of their heads and into Looker, LookML makes that knowledge available to everyone. The “data model” that Looker helps you build enables non-technical users to do their jobs--building dashboards, drilling to row-level detail, and accessing complex metrics--without having to worry about what’s behind the curtain.
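To make reason 1 concrete, here is a minimal LookML sketch (the view and fields are hypothetical) in which measures are defined once and then reused to build a new one, rather than rewriting the underlying SQL:

  view: orders {
    sql_table_name: public.orders ;;   # hypothetical table

    measure: total_revenue {
      type: sum
      sql: ${TABLE}.amount ;;
    }

    measure: total_cost {
      type: sum
      sql: ${TABLE}.cost ;;
    }

    # Built on top of the measures above -- no SQL rewritten
    measure: total_profit {
      type: number
      sql: ${total_revenue} - ${total_cost} ;;
    }
  }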

In short, LookML is SQL evolved. It leverages SQL’s power in a way that’s familiar to analysts, while abstracting away the low-level concerns that analysts usually have to manage.

It’s a powerful language with a huge community of developers who support each other. The pre-built analyses they share--to help integrate Salesforce data, build a cohort analysis, or model weather data--will give you a huge head start on common analytic tasks.

Finally, it’s a living language that’s constantly growing--adding new features, dialects, and integrations. So as data, databases, and data analysts change, LookML will be there to meet them.

<![CDATA[Our Five Favorite Reads from 2017]]> https://looker.com/blog/five-favorite-reads-from-2017 https://looker.com/blog/five-favorite-reads-from-2017 As 2017 comes to a close, we’re looking back at some of our favorite reads from the past year. From personal stories to practical guides, these articles cover the topics most on our minds, as well as our customers’.

Here are a few of the Looker team’s favorite (and most shared) stories from 2017:

From Burning Millions to Turning Profitable in Seven Months — How HotelTonight Did It
This compelling story by Sam Shank, CEO and co-founder of HotelTonight, describes how the HotelTonight team completely revamped their business approach over a span of just a few months. Shank’s practical, candid advice makes this one of our most read and widely shared articles of the year.

Five Building Blocks of a Data-driven Culture
This TechCrunch article by WeWork’s Carl Anderson and Michael Li argues that data should be made available to and used by everyone in an organization, not just data scientists and analysts. Their “building blocks” lay the groundwork for anyone building or honing a data-driven culture.

How to avoid big data project failures: Your 5-step guide
Building the technical solution is only the beginning of the challenge of bringing new tools to an organization. This article from TechRepublic is chock-full of practical advice to help combat the real business challenges of implementing a new data solution, touching on everything from justifying business value early to finding balance in timing.

Five Keys to Leading in the Age of Analytics
Data is changing the way we run our organizations, and this article from Data Center Knowledge covers key technologies and strategies every leader should consider, and how they’re shaping business objectives and cultures.

'Big Data' Is No Longer Enough: It's Now All About 'Fast Data'
In this story from Entrepreneur, Tx Zhou shares three practical tips for taking the next step in the data evolution: actually making big data usable in the modern organization.

Did we miss anything?

Send us your favorite reads from 2017 on Twitter: @Lookerdata!

Happy New Year! The Looker Team

<![CDATA[Git More Out of Your Data Model: Announcing Git Branching in Looker]]> https://looker.com/blog/git-branching-in-looker https://looker.com/blog/git-branching-in-looker At Looker, we place a high priority on making LookML development easy, efficient, and effective for analysts.

This idea - along with strong support for coding best practices and collaboration - is grounded in Looker’s powerful integration with Git.

With the launch of Looker 5, this integration has grown even more powerful with the introduction of Git Branches in Looker.

Looker and Git

Developed by software engineer Linus Torvalds in 2005, Git was intended to help manage large code bases with multiple collaborators and has seen worldwide adoption since then.

At its core, Git provides version control, an absolute requirement for modern development because it enables tasks that are critical to writing efficient, performant code, such as:

  • Testing uncommitted code in a development sandbox without affecting production code.
  • Allowing others to view and refine uncommitted code before it’s pushed to production.
  • Managing multiple separate additions to the code base before pushing them to production.

LookML centralizes SQL logic in one place and allows analysts to collaborate on a single codebase, a workflow much closer to that of a software developer and one that is perfectly suited to a tool like Git.

Before Looker 5, Looker’s version control capabilities gave analysts the ability to test out new additions to explores and LookML dashboards before making those changes available to everyone else.

But it was a clunky process to view uncommitted code that other developers were working on, and there wasn’t really a way for a single developer to manage multiple additions to the model simultaneously.

So, we’ve made it possible to create shared branches in Looker.

What’s in a branch?

Think of a branch as an entirely separate copy of your codebase that’s still connected to the version of the codebase that’s functioning as the production code for your users. A branch allows you to develop and experiment freely without fear that your code will affect other users. It’s your own private sandbox.

That branch is still connected to the master code, however, so when developers are ready, they can “push their code to production”, which merges the changes on their branch into the master branch.

Prior to Looker 5.0, the only branches that could be created in Looker were branches for individual developers. Developers in Looker would work on their own private Git branch whenever they were in development mode. Looker would automatically create branches for LookML developers, and they were accessible to only that user (the only way to view another person’s branch would be to sudo as that user).

Enter Shared Branches in Looker

Shared branches in Looker change all of that. Now, developers in Looker can create shared branches that can be edited and modified by other LookML developers.

This is a big deal because this feature finally allows analysts in Looker to collaborate on the same enhancements to their data model. Now, if I’m working on something and want input from my team, I can get help from another LookML developer easily because they can just check out and modify the branch I created.

This of course, doesn’t mean that your private developer sandbox goes away. LookML developers can still develop on their own personal branch. Other developers will be able to view (but not modify) that branch. If another developer wants to modify code on another user’s private branch, they can always create another branch from that user’s personal branch.

We believe this will allow analysts to organize and collaborate in new and productive ways, and hopefully make life a little easier for analysts, as well.

Want to learn more about what makes Looker the perfect platform for version control? Learn what makes building a data model on Looker so powerful.

<![CDATA[Analysts, It’s Time to Focus on Analytics]]> https://looker.com/blog/time-to-focus-on-analytics https://looker.com/blog/time-to-focus-on-analytics I’ve got some bad news. If you’re an analyst, you’re not being well-served by your existing tools. And at Looker we understand your pain.

But I’ve also got some good news: today Looker announced the availability of new features that are going to make your life as an analyst much easier.

Now with Looker, you can:

  • Combine your business data with new data sources more easily than ever with Looker’s Data Blocks
  • Collaborate on your business logic with Looker’s simple, powerful integration with Git
  • Use Looker to build exactly the tool your company needs with Looker’s Action Hub

Sounds pretty good, doesn’t it?

But before we get to the good news, let’s take a look at three challenges I faced when I was an analyst -- I think you might be facing these challenges, too. I like to think of these as the proof that analysts need better tools.

1. Write and maintain SQL queries.

One of the most basic job requirements for any data analyst is a fluency in SQL. Why? Because if you’re like me you’re going to be writing SQL a lot. And you’ll most likely be writing variations on the same SQL over and over and over again.

For most analysts, writing SQL from scratch is just a fact of life. Sadly, the really impressive SQL you write (y’know, the queries you’ll want to save) will be so hard to parse in a few weeks that you’re pretty much better off rebuilding the query from scratch.

I know that workflow because I’ve been there myself.

2. Optimize how fast those queries run over and over.

Writing correct queries is a good start, but sometimes the first thing you write doesn’t return fast enough. So another core task of analysts is examining database usage to improve performance.

And if your data volume is growing rapidly (which, of course it is), you also need to constantly think about upsizing and redistributing data across nodes. I know your pain all too well.

3. Answer any and all questions from business users

Those two responsibilities are just preliminary steps to doing the thing that you (and I) were actually hired to do -- analyzing data and delivering those insights to the rest of your company.

If you’re like me, that’s the thing that made you want to become an analyst in the first place. But that’s too bad, because the previous two tasks are going to take up 75% of your time. So even though your business users could use your help interpreting and analyzing data, you rarely have time.

Isn’t it strange that the task that you were originally hired to do (and that you WANT to do) is being crowded out by two responsibilities that you seem to repeat over and over again?

Does it feel like something’s broken? It is.

And the thing that’s broken is your workflow. But it’s not your fault. And Looker is here to help.

What if I told you Looker can dramatically reduce the time you spend writing and optimizing queries?

Most business intelligence tools on the market have focused on making it easier to extract data from the data warehouse or on making it easier to manage SQL queries. Both approaches come from a world where data warehouses needed protecting from your business users.

Today’s data warehouses are ridiculously fast and they’re incredibly cheap. So you need a tool that takes advantage of them, right? That’s Looker.

Looker works to free analysts like you from the tedious work of writing ad-hoc SQL queries. And we do that by making it easy to codify your knowledge in a data platform that’s shared across your company.

The benefits of this approach are straightforward and easy to capture. You build on top of a centralized data model that contains all of your business logic, and that model serves your users by generating the SQL queries they need to answer their own questions--without you writing them.

Now you can focus on what you actually want to do--game-changing analysis that can drive your business forward.

What’s more, with Looker you can leverage work others have already done in the form of blocks of business logic. So, for example, data experts at Looker have pre-modeled public data and made it accessible to you so you don’t have to reinvent the wheel. We call these Data Blocks.

Check out Looker’s Data Blocks to see how we’ve made it easier than ever to deliver new data sources like data from the US Census or weather data to your business users without the need to develop the logic yourself.
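For instance, assuming a weather Data Block exposes a pre-modeled view called weather (the view and field names here are illustrative, not the block’s actual API), you could build on it with LookML’s extends rather than modeling the raw data yourself:

  # Hypothetical sketch: extending a pre-modeled view shipped with a Data Block
  view: weather_extended {
    extends: [weather]   # the "weather" view is assumed to come from the block

    dimension: is_rainy {
      type: yesno
      sql: ${precipitation_inches} > 0 ;;   # assumed field on the base view
    }
  }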

Speaking of leveraging others’ work… we can help there, too

If building on your own work is hard in your current workflow, collaborating with others is near impossible. This is particularly frustrating because software engineers work on far more complicated code all the time, and they collaborate constantly.

How do they do it? Version control.

Looker leads the industry again, finally bringing advanced version control to modern data analysis. Now, instead of working on queries in your own silo, you’re building on your data like a software engineering team.

What’s more, Looker’s new shared Git Branches make it easier than ever to collaborate with other LookML developers on the same code.

So what are you waiting for?

We’ve worked with a lot of analysts at Looker. And if the problems I described above sound familiar, you’re not alone. I get it.

But I know from personal experience that after adopting Looker, you’ll be able to envision a whole different work life. (I was a Looker customer for a lot longer than I’ve worked here.)

And fundamentally, that’s why I came to Looker. Because giving analysts back the time they need to construct innovative solutions that help their businesses grow is hugely rewarding. And as we fully embrace Looker’s ecosystem, the innovation we’re seeing only grows. Looker’s Action Hub is a huge step forward on that. Now, analysts can easily build tools for their colleagues that reach outside of Looker, and integrate directly with the other tools people are using every day.

Want to make it easy for your teams to send data to Slack? Check out our Slack action. Want to make it easy to email or text a specific group of users through Looker? You’re going to love our Sendgrid and Twilio actions.

If all this sounds interesting, let us show you how it works. You can request time with our team and sign up for a Looker demo to check out the analyst experience in Looker for yourself. Or maybe you want to learn more about Looker’s Action Hub and data actions. Sign up for our LookInside webinar, where Looker’s product team will walk through some popular workflows available through Looker actions.

We’ve worked hard to make it easier for analysts to collaborate, experiment, and build insights together on our platform. We can’t wait to see what you build next.

<![CDATA[Agile and Bulletproof: An Approach to Mature Embedded Analytics]]> https://looker.com/blog/an-approach-to-mature-embedded-analytics-with-frame-ai https://looker.com/blog/an-approach-to-mature-embedded-analytics-with-frame-ai Our mission at frame.ai is to make it easy to build better relationships with your customers (internal and external) using Slack. As a small team of startup vets, we often need to work quickly and independently, so as Head of Data, I’ve typically built and configured the services I need myself.

Our ability to make quick changes was recently put to the test. Responding to an increasingly common customer need, I was able to design, prototype, and safely deploy a new analytics dashboard to a large subset of Frame’s customers in eight hours. This dashboard wasn’t just a nice-to-have, either: the new visibility it provided let one of our partners make critical budget decisions that same week. Our agility made the difference for them.

Now, I’ll readily admit that although I have a strong data science background, I’m a middling engineer at best. So how’d I manage to pull off a non-trivial feature release with new data models, new visuals, and customer-specific functionality that didn’t risk any of our SLAs?

The answer is Looker, along with a custom deployment framework that leverages its code-based approach to analytics. Looker’s architecture made it possible to programmatically automate big parts of our analytics, and that’s made a huge difference for us as we grow.

In this post I’ll walk through our design, and how you can use the same approach to iterate rapidly and safely for your customers wherever they are. The system, which we call the Frame Analytics Build System (FABS), is a combination of Python, Jinja2, and Git automation.


(1) Anonymized example of customer analytics dashboards from Frame.

Frame for Slack is highly configurable, allowing every team to align Frame’s conversation enhancements with their existing tools and processes. Configurability and easy iteration are built into the engine that handles our operational logic, and I wanted our analytics products to have all of those same great qualities.

Having built customer-facing reporting products before, I knew they can seem straightforward but often require several iterations to get right. Traditionally, I would have assembled a team with UX, design, front-end, and back-end engineering skillsets to build a reporting webapp for our customers, but this approach is both slow and resource-intensive.

In search of an agile alternative, I turned to Looker’s embedded analytics capabilities. I knew if I could leverage Looker’s data modeling, permissioning, and visualization capabilities in a way that provided the kinds of production guarantees we needed for our customers, we would be able to move exceptionally fast in bringing new analytics products to market.

I needed a way to guarantee the following:

  • High availability and uptime for every customer
  • Security through per-customer data isolation and granular permissioning
  • Manageable customization per customer based on feature configuration
  • Deployment of updates to one or thousands of customers, easily and with low risk
  • Validation and error-checking for every deployment
  • Rapid design and development of new analytic products
  • Data consistency through a single view of data across all customers and internally

Looker doesn’t have all of these features out of the box, but because it exposes all data models and visual artifacts in code (LookML), adding the missing pieces was easy. And because the Looker API can render these artifacts, it’s straightforward to build automated tooling around them. Enter: FABS.


(2) Anonymized example of Frame’s embedded Operational Dashboard

FABS takes a customer configuration file and a set of core Looker view files and renders the full set of LookML files required to fully specify a reporting product for a customer (view, model, and dashboard files). The final dashboards are then embedded in our management console and made available to Frame’s enterprise customers (example shown above). Importantly, all core views are versioned when referenced in a deploy, so ALL files that define a single customer’s reporting are effectively immutable. You can see the resulting LookML structure diagrammed for two customers below:


(3) Example FABS Looker architecture

Core views define the “baseline” for our analytics features in this hub and spoke model, and each configuration file defines the transformations and extensions required to create a single tailored spoke. By separating definition and deployment, we decouple customer applications from each other as well as from any previous versions of themselves. Since rendering the final configuration to any spoke is programmatic, it becomes trivial to specify and (re)generate an arbitrary number of them.

There are a few pretty magical things happening above.

  1. Frame’s internal data exploration and dashboards all reference the most up-to-date view of the core data model, allowing modeling and product development at maximum speed.
  2. Internal and App views all utilize LookML’s extends feature to provide an extensible data interface to each application, allowing us to override any dimension, measure, or dashboard with customizations (see the sketch following the diagram below).
  3. Embedded users only have access to their own data through explicit model level restriction and database query isolation.
  4. Each deploy produces an immutable data model branch for each customer app on top of Looker’s native Git versioning, leaving each app unimpacted by each other or by internal work (diagrammed below).


(4) FABS viewed through the lens of version control
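As a hypothetical illustration of the extends mechanism in point 2 (every view and field name here is invented), a rendered customer “spoke” view restates only the parameters that differ from the core “hub” view:

  # Core ("hub") view shared by every customer app
  view: conversations_core {
    sql_table_name: analytics.conversations ;;   # hypothetical table

    dimension: customer_name {
      type: string
      sql: ${TABLE}.customer_name ;;
    }
  }

  # Rendered "spoke" view for one customer; extends merges in the core
  # definitions, so only the overridden parameters are restated
  view: conversations_acme {
    extends: [conversations_core]

    dimension: customer_name {
      label: "Account"   # customer-specific display name
    }
  }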

Mechanically, FABS is a mix of Python and Jinja2 templates. We specify high-level model, view, and dashboard configurations using YAML, defining overrides of any dimension, measure, or dashboard as needed. You can see a toy example below:


(5) A toy example of a single customer config YAML

In the above example, we customize how a customer name is presented in reports by overriding the display name and provide custom drill downs for customers in the orders view. Additionally, we define the required joins for the model and include a “Customer Retention” dashboard from our reports library (also YAML) to be deployed.
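Under the hood, a Jinja2 template along these lines (a simplified, hypothetical sketch, not our production template) could render such a config into a customer’s view file:

  # Hypothetical Jinja2-templated LookML, rendered once per customer config
  view: orders_{{ customer.slug }} {
    extends: [orders_core]

    dimension: customer_name {
      label: "{{ customer.display_name }}"                       # display-name override
      drill_fields: [{{ customer.drill_fields | join(", ") }}]   # custom drill downs
    }
  }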

Once we’ve used FABS to generate the appropriate LookML files, we push them to a development branch of a Looker Git repository. A simple refresh of the Looker developer GUI detects a remote update to that development branch, prompting us to pull the recently deployed LookML updates into Looker’s developer mode. Here we can run LookML and content validation, and spot-check any updated dashboards for correctness before a final customer-facing deploy.


(6) Rendering from a config YAML to LookML files

Taken all together, FABS and Looker allow Frame to provide our customers a high-quality analytics product in a way that is scalable for us and exceptionally responsive and tailored for them. While we are using this system to deploy for external customers, one could easily imagine using it to deploy for internal customers at large organizations.

Analytic reporting is just one of the many data problems we are solving here at Frame. If you are excited about building or using cutting-edge conversational AI, please reach out to us at contact@frame.ai or, even better, install our Slack app and DM us directly!