Data doesn’t cause bad decisions, people with data cause bad decisions

One of the running themes of my work over the past six months has been the role of data in decision making. Historically, I’ve been one of the biggest advocates of the idea that it is hard to have too much data. Get the data together, test different ways of looking at it, then pick the best to drive decision making.

In the last half year, I have come to the conclusion that even basic data often does more harm than good. But the world is moving toward more and more data, you may exclaim. The trouble is, most people don’t understand numbers.

If you want to test the theory that most people don’t understand numbers, run a simple scenario by someone:

  • Can the price of a stock go down by more than 100%?
  • Can the price of a stock go up by more than 100%?

If you are reading this blog, you probably immediately responded No to the first and Yes to the second. Stocks can’t go below $0, which means that, on a percentage basis, they can’t drop more than 100%. But a stock can theoretically increase to any imaginable number, which means it can go up by as many hundreds or thousands of percent as you’d like.
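The asymmetry is easy to verify with the percentage-change formula itself. A minimal sketch, using made-up prices:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from an old price to a new price."""
    return (new - old) / old * 100

# A stock falling from $50 to $0 loses exactly 100% -- it can't lose more,
# because the price can't go below zero.
print(pct_change(50, 0))    # -100.0

# A stock rising from $50 to $200 gains 300% -- gains are unbounded.
print(pct_change(50, 200))  # 300.0
```

The floor at $0 caps the downside at -100%, while the upside has no cap at all.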

My guess is that you will be surprised by the number of people who get one or both of these wrong. Or who struggle to answer. Or who don’t understand what you are asking. Even really smart people miss this.

If people struggle with basic percentages, what hope is there that they will understand the relationship between square feet per person and square feet per desk? These numbers often move together, but projects can make them move in opposite directions. I’ve seen actuaries stop to think about it. Why would we expect the majority to grasp it without first truly understanding real estate fundamentals?

Data in the wrong hands can be damaging. If the person using it thinks they understand it but they really don’t, they can easily draw exactly the wrong conclusion. If they partially understand it, they can misuse it for something it doesn’t even address. Worse, if they pull data themselves, they may miss nuances around how it should be put together and get the wrong number entirely.


Thinking through peak day operational planning #WorkplaceWednesday

One of the blessings of doing data analytics on real estate/workplace data is that you get to see some truly unique datasets and trends. One of the curses of doing data analytics on real estate/workplace data is that you have to figure out how to explain what those trends mean to people who don’t love data as much as you do.

One of the best examples of needing to explain real estate data is the concept of the “peak day.” When designing a workplace, you design for the theoretical “peak day,” the day of highest occupancy. Depending on the nature of the work in the office and the region of the world you are in, this day will vary in both occurrence and magnitude. Many offices I’ve studied have a peak day that is 20 to 30% greater than the average occupancy, and it occurs every 2 to 3 months. Meaning, if you design for the average, you will come up dramatically short on seats. If you design one seat for every person that could be there on a peak day, your average occupancy rate will appear low. It’s seemingly a no-win data analytics problem.
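The trade-off is easy to see with a few lines of arithmetic. A minimal sketch, using entirely made-up daily headcounts:

```python
# Hypothetical daily headcounts for one office over ten working days.
daily_occupancy = [62, 58, 65, 60, 59, 61, 80, 63, 57, 64]

average = sum(daily_occupancy) / len(daily_occupancy)  # 62.9 people
peak = max(daily_occupancy)                            # 80 people

print(f"average day: {average:.0f} people")
print(f"peak day:    {peak} people")
print(f"peak is {peak / average - 1:.0%} above average")  # ~27%

# Design for the average (63 seats) and you are 17 seats short on the peak
# day; design for the peak (80 seats) and your average seat utilization
# is only ~79%.
```

Either seat count is defensible from the same ten numbers, which is exactly why the data alone can't settle the design question.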

Here’s the thing about data: it doesn’t provide answers on its own. Data is a tool that can help you test hypotheses, predict operational behaviors, and measure solution risk. Data cannot tell you the “right” answer. Two people looking at the same dataset could easily draw opposite conclusions on how to move forward. It happens every day with every dataset. Some believe you design for the worst day, others believe you design for just shy of the peak day, still others say to build “flex” seats to accommodate the peak day. The data can justify any of those directions.

The real test is in the detail of the solution and the processes in place to help make the solution a success.

Primary data versus secondary data.

I deal heavily with workplace sensor technology. This is the tech where a sensor is placed under a desk or in a room that can tell you if space is occupied or not. Basically, it reports a 1 if it detects new occupancy and a 0 if that occupant leaves. Pretty straightforward.
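That 1/0 event stream has to be turned into occupied time before it tells you anything. A minimal sketch of that conversion, assuming a simple (timestamp, state) event format that is my own invention, not any particular vendor's:

```python
# Hypothetical event stream for one desk sensor across one day:
# state 1 = occupancy detected, state 0 = the occupant left.
events = [("09:00", 1), ("11:30", 0), ("13:00", 1), ("16:00", 0)]

def to_minutes(ts: str) -> int:
    """Convert an HH:MM timestamp to minutes since midnight."""
    h, m = ts.split(":")
    return int(h) * 60 + int(m)

# Occupied time is the span between each 1 and the following 0.
occupied = 0
start = None
for ts, state in events:
    if state == 1:
        start = to_minutes(ts)
    elif start is not None:
        occupied += to_minutes(ts) - start
        start = None

print(f"occupied for {occupied / 60:.1f} hours")  # 2.5h + 3.0h = 5.5 hours
```

Real sensor feeds add complications (missed events, debounce windows, overnight sessions), but the core of the primary data is just this pairing of arrivals and departures.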

From a primary data standpoint, we can use this data to understand the utilization of the office. Were we 70%, 80%, 90% occupied on average? What was our occupancy late morning? Mid-afternoon? What days of the week do we see our peaks occurring? It’s pretty cool to see some of these trends.

From a secondary data standpoint, it can report the average occupancy of the office across the day accounting for the time a given desk sat empty. If you measure an office between 7a and 7p, you may never get higher than 50% occupancy because the tails of your measurement period are extremely low occupancy. If you measure between 10a and 3p, the lunch period takes on outsized significance. If you measure between 9a and 4p, the trend changes to something else. Picking the right measurement period is tricky. Even more tricky is understanding what’s good or bad with a particular measure.
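The window effect is easy to demonstrate. A sketch with invented hour-by-hour occupancy rates for one desk bank, just to show how the same day yields three different "utilization" numbers:

```python
# Hypothetical fraction of desks occupied, by hour, from 7a to 7p.
hourly = {
    7: 0.05, 8: 0.30, 9: 0.70, 10: 0.85, 11: 0.90, 12: 0.55,
    13: 0.80, 14: 0.85, 15: 0.75, 16: 0.60, 17: 0.25, 18: 0.05,
}

def window_avg(start: int, end: int) -> float:
    """Average occupancy over hours start (inclusive) to end (exclusive)."""
    values = [v for h, v in hourly.items() if start <= h < end]
    return sum(values) / len(values)

print(f"7a-7p:  {window_avg(7, 19):.0%}")   # 55% -- low tails drag it down
print(f"10a-3p: {window_avg(10, 15):.0%}")  # 79% -- lunch dip weighs heavily
print(f"9a-4p:  {window_avg(9, 16):.0%}")   # 77% -- different again
```

Same desks, same day, three answers. None of them is wrong; the measurement period is a modeling choice that has to be made explicit.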

I’ve recently seen a number of requests to measure conference room utilization by counting the number of people in a room at any given time. Naturally, a 10 person conference room occupied by only 2 people is under-utilized. Unless those two people are the head of sales and a big client he’s working with. Then it’s perfectly utilized. But what about a 6 person room only occupied by 2 people? Is that under-utilized? Even if it is, is it 33% below utilization target or 67%? Identifying how to define good and bad performance is extremely difficult.
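The 33%-versus-67% ambiguity is just the same ratio read from two directions, with a third reading once a target enters the picture. A sketch, where the 50% target is a number I made up for illustration:

```python
def seat_utilization(occupants: int, capacity: int) -> float:
    """Fraction of a room's seats in use."""
    return occupants / capacity

# Two people in a 6-person room: 33% utilized, or 67% wasted?
util = seat_utilization(2, 6)
print(f"{util:.0%} of seats used")        # 33%
print(f"{1 - util:.0%} of seats empty")   # 67%

# Against a hypothetical target (rooms should run at 50%+ of capacity),
# the same meeting reads as below target -- but only by 17 points.
target = 0.50
print(f"{target - util:.0%} below target")  # 17%
```

The arithmetic is trivial; deciding which of the three numbers is the performance measure is the hard part.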

Primary data is binary – good or bad – difficult to argue with. Secondary data is open to interpretation. Focus on the primary data first. Evolve to include secondary data over time as you learn what it means to your business.

There’s a big difference between good data and interesting data.

Good data is often very boring. It contains the basics and comes in as expected (right format, schedule, completeness). There are lots of things we can do with good data in order to move the ball forward.

Interesting data is never boring. It contains interesting attributes and never-before-seen elements. Usually it comes in as a one-off dataset. There are many interesting things that can be done with interesting data but sometimes it is hard to tell if those interesting things are valuable.

Recently, I’ve been involved with a few technology companies looking at their new capabilities in development. There are some truly fascinating things being developed. My first question every time is, does this new feature drive user adoption of the system? Stated another way, does this new feature give users a reason to either contribute data more freely/voluntarily or come back regularly? If no, then you are developing a second tier feature. If yes, it’s a core feature that is making your system better.

Most interesting data comes from second tier features. It’s the data that may or may not be correlated to the main data. It may or may not be indicative of performance. But my goodness can it show some interesting things… those things just may not mean anything.

The information feedback loop can easily degrade your future innovations.

One of the things about data-driven innovation is that it relies on what came before it. The longer the legacy of a particular innovation path stretches, the more built-in history is inherited by future generations. As legacy is baked in, it becomes harder and harder to deviate from the paths previously laid out.

When your future paths become fixed, innovation turns into evolution. It’s the iPhone problem. At this point, all new versions of the iPhone are evolutions of what came before and not truly something different. Innovation continues to occur but there are more and more features and things that cannot be changed. It’s unlikely that the iPhone line will ever truly deviate from the path laid out over the past 10 years.

It is true that there are no truly original ideas in this world. Any new innovation will have some history but that history is different than direct legacy.

You can restart the process by taking a product with a legacy and stripping it back to day 1. Often this means splitting the product into two paths – one with legacy and one with an entirely new team, direction, and goals. This isn’t as easy as it sounds as many features have a built-in legacy that can pop up unexpectedly. But the attempt can often yield surprising results.

Do you know how your workplace is actually used? #WorkplaceWednesday

One thing that I’ve seen over and over in my career is that few people actually realize the ways their workplace is used across the business. It’s not uncommon for a real estate group to have a complete misconception of the day-to-day reality of a site they are about to run a project in. This isn’t to say they are operating without asking first but it’s just as common that managers at the site don’t realize it either.

Most of our perceptions about how an office is used come from anecdotal information. We experience a shortage of conference rooms on the occasions that we go looking for them, or we think things are too loud because we do a lot of heads down work. It also comes from hearing about things that are going on – but the things people usually share are bad events. Most office anecdotes are negative ones.

The day-to-day reality of most offices is that everything runs smoothly. There are usually enough desks for everyone. Most people can get a conference room when they need it. Most people make use of the work areas to be productive. The biggest risk in a workplace change is breaking the culture.

How does one actually learn how the workplace really works? The basic blocking and tackling that occurs in any other group: asking people. Surveys on how offices are used go a long way, as do systems that track usage data around desks, conference rooms, and equipment. Blocking and tackling is most of the job in most areas, and it’s just as true in real estate.

The biggest difference between real estate and other areas is that a workplace design isn’t going to change much from when it is implemented. That design is going to be in place for anywhere from 5 to 10 to 20 years depending on wear-and-tear. Planning too much around today can actually be a bad thing because the primary requirement of an office space is to be useful for years to come.

I can’t tell you how badly I wish I had written this article: Data shouldn’t drive all your decisions

Quartz just published a phenomenal article titled Data shouldn’t drive all of your decisions. Go read it first because I can’t find a single thing I disagree with in it. It hits all of my favorite topics on innovation and decision making.

Go ahead, I’ll still be here after you finish reading it.

Done? Good! Because there’s a summary to unpack:

  • When solving new problems, yesterday’s data isn’t going to give you the answers.
  • Data is best used in story form, not in charts and tables.
  • Just because most of the data says one thing, that doesn’t mean your conclusion won’t be something else entirely.
  • Sometimes experience isn’t everything and can lead you down the wrong path.