The information feedback loop can easily degrade your future innovations.

One of the things about data-driven innovation is that it relies on what came before it. The longer the legacy of a particular innovation path stretches, the more built-in history is inherited by future generations. As that legacy is baked in, it becomes harder and harder to deviate from the paths previously laid out.

When your future paths become fixed, innovation turns into evolution. It’s the iPhone problem. At this point, all new versions of the iPhone are evolutions of what came before, not truly something different. Innovation continues to occur, but more and more features and conventions become impossible to change. It’s unlikely that the iPhone line will ever truly deviate from the path laid out over the past 10 years.

It is true that there are no truly original ideas in this world. Any new innovation will have some history, but that history is different from direct legacy.

You can restart the process by taking a product with a legacy and stripping it back to day 1. Often this means splitting the product into two paths – one with legacy and one with an entirely new team, direction, and goals. This isn’t as easy as it sounds, as many features carry built-in legacy that can pop up unexpectedly. But the attempt can often yield surprising results.


Do you know how your workplace is actually used? #WorkplaceWednesday

One thing that I’ve seen over and over in my career is that few people actually realize the ways their workplace is used across the business. It’s not uncommon for a real estate group to have a complete misconception of the day-to-day reality of a site they are about to run a project in. This isn’t to say they operate without asking first; it’s just as common that managers at the site don’t realize it either.

Most of our perceptions about how an office is used come from anecdotal information. We experience a shortage of conference rooms on the occasions that we go looking for them, or we think things are too loud because we do a lot of heads-down work. Perceptions also come from hearing about what’s going on, but the things people share are usually bad events. Most anecdotes around the office are negative.

The day-to-day reality of most offices is that everything runs smoothly. There are usually enough desks for everyone. Most people can get a conference room when they need it. Most people make use of the work areas to be productive. The biggest risk in a workplace change is breaking the culture.

How does one actually learn how the workplace really works? The basic blocking and tackling that occurs in any other group: asking people. Surveys on how offices are used go a long way, as do systems that track usage data around desks, conference rooms, and equipment. Blocking and tackling is most of the job in most areas, and it’s just as true in real estate.
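As a concrete illustration of the tracking side, here is a minimal sketch of turning room-booking logs into a utilization number. The file name, columns, and bookable window are invented for the example; any booking-system export with similar fields would work.

```python
# Minimal sketch: estimating conference-room utilization from booking logs.
# Assumes a hypothetical CSV with columns: room, date, start_hour, end_hour.
import pandas as pd

BOOKABLE_HOURS = 10  # assumed bookable window per day, e.g. 8am to 6pm

bookings = pd.read_csv("room_bookings.csv")
bookings["hours"] = bookings["end_hour"] - bookings["start_hour"]

# Booked hours per room per day, expressed as a share of the bookable window
daily = bookings.groupby(["room", "date"])["hours"].sum()
utilization = (daily / BOOKABLE_HOURS).groupby(level="room").mean()

print(utilization.sort_values(ascending=False))
```

Numbers like these won’t tell you why a room sits empty, but they keep the conversation anchored to how the space is actually used rather than to the loudest anecdote.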

The biggest difference between real estate and other areas is that a workplace design isn’t going to change much once it is implemented. That design is going to be in place for anywhere from 5 to 20 years depending on wear-and-tear. Planning too much around today can actually be a bad thing, because the primary requirement of an office space is to be useful for years to come.

I can’t tell you how badly I wish I had written this article: Data shouldn’t drive all your decisions

Quartz just published a phenomenal article titled Data shouldn’t drive all of your decisions. Go read it first because I can’t find a single thing I disagree with in it. It hits all of my favorite topics on innovation and decision making.

Go ahead, I’ll still be here after you finish reading it.

Done? Good! Because there’s a summary to unpack:

  • When solving new problems, yesterday’s data isn’t going to give you the answers.
  • Data is best used in story form, not in charts and tables.
  • Just because most of the data says one thing doesn’t mean your conclusion won’t be something else entirely.
  • Sometimes experience isn’t everything and can lead you down the wrong path.

Everyone can be a data person – that includes you!

I come across a lot of people who proudly proclaim that they are not “data people.” They avoid spreadsheets, they hate columns of numbers, and they claim to get confused easily amidst it all. I’m here to help them all understand – data is your friend and everyone is a data person.

Let’s start with a simple clarification about what “data” is. Data is simply information. It doesn’t have to be a million-line spreadsheet; it can be the text of an email. Data is any recorded and referenceable piece of information. That’s it. If you go through your email counting the number of times you were asked a question, you are doing data analysis.
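That email example is worth making concrete. Here is a minimal sketch, assuming a hypothetical folder of plain-text email exports; the folder name and the crude question heuristic are mine, not a standard tool.

```python
# Minimal sketch: counting how often questions show up in your email.
# Assumes a hypothetical folder "my_emails" of plain-text exports (.txt).
from pathlib import Path

question_count = 0
for email_file in Path("my_emails").glob("*.txt"):
    body = email_file.read_text(errors="ignore")
    # Crude heuristic: treat every question mark as one question
    question_count += body.count("?")

print(f"Questions asked across all emails: {question_count}")
```

That’s all it is: a question, a pile of recorded information, and a count. No spreadsheet required.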

The common misunderstanding with data is that you need to know everything about Excel to be a data person. That misunderstanding conflates raw data with formatted data. Anyone can work with formatted data; raw data is a different animal.

Raw data is information as it arrives: not yet cleaned, checked, validated, or organized. Turning raw data into formatted data is not something just anyone should do. You have to understand the original intent of the data, understand relational data standards, and generally be comfortable inside data tools. This is a specialized activity.
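To make the distinction tangible, here is a minimal sketch of the raw-to-formatted step, assuming a hypothetical survey export with duplicate submissions, stray whitespace, and inconsistent dates. The file and column names are invented.

```python
# Minimal sketch: turning a raw survey export into formatted data.
# Assumes hypothetical columns "site" and "date" in desk_survey_raw.csv.
import pandas as pd

raw = pd.read_csv("desk_survey_raw.csv")

formatted = (
    raw.drop_duplicates()  # remove double submissions
       .assign(
           site=lambda df: df["site"].str.strip().str.title(),  # normalize labels
           date=lambda df: pd.to_datetime(df["date"], errors="coerce"),
       )
       .dropna(subset=["date"])  # drop rows whose dates could not be parsed
)

formatted.to_csv("desk_survey_clean.csv", index=False)
```

Every one of those steps embodies a judgment call about what the data was supposed to mean, which is exactly why this stage belongs with a specialist.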

After the data is formatted, it’s now anyone’s to work with. At this point, working with data largely comes down to asking questions and using the data to answer those questions.

The basic skill set of many jobs can be boiled down to “knowing what questions to ask and getting the right answers.” Those answers may come from experience, reading tea leaves, interviewing other experts, or (most commonly) analyzing the data. If you know what questions to ask, you are 75% of the way to being good at working with data.
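Once someone hands you formatted data, the analysis mostly looks like the sketch below: pose the question, let the table answer. It assumes the hypothetical cleaned survey file from above, plus an invented yes/no column.

```python
# Minimal sketch: asking a question of formatted data.
# Assumes a hypothetical "found_room" yes/no column in the cleaned file.
import pandas as pd

df = pd.read_csv("desk_survey_clean.csv")

# Question: which site has the most trouble finding a conference room?
trouble_rate = (
    df.groupby("site")["found_room"]
      .apply(lambda s: (s == "no").mean())  # share of "no" answers per site
      .sort_values(ascending=False)
)
print(trouble_rate.head())
```

Notice that the hard part was knowing which question to ask; the code is three steps of bookkeeping.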

Most algorithms cannot be set and then forgotten.

“Set it and forget it” is a popular saying on many late night infomercials. Take some new cooker, throw your food into it, push a few buttons and then a few hours later you have amazing gourmet meals with no effort. At least that’s the theory.

In the business world, many people have begun treating their algorithms the same way. They create elaborate rules for metrics, benchmarking, and scoring that assess a thousand variables to come up with the perfect rank. The best will even apply probability curves around the score that is generated. Today, those rules may even give results that make sense.
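To picture the pattern, here is a minimal sketch of that kind of scoring rule; the metric names and weights are invented for illustration.

```python
# Minimal sketch of a "set it and forget it" scoring rule: a weighted
# blend of metrics whose weights were calibrated once, on today's world.
def site_score(metrics: dict) -> float:
    weights = {
        "labor_availability": 0.4,
        "occupancy_cost": -0.3,   # higher cost should drag the score down
        "commute_access": 0.3,
    }
    return sum(weights[k] * metrics[k] for k in weights)

print(site_score({"labor_availability": 0.8,
                  "occupancy_cost": 0.5,
                  "commute_access": 0.7}))
# Nothing in this code notices when the conditions the weights were
# calibrated against stop existing.
```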

Time is fickle. As time passes, conditions change. The rules that governed a process no longer apply: people begin moving back to cities, technologies change the way work is done, home officing continues to pick up, local policies change the way financials are calculated. Something always changes.

But this change is often not handled well in algorithms. Often, the team that builds them puts a pin in them and moves on to the new shiny toy, letting the old one run with no supervision. What this really means is that there is no one around to catch it when it stops returning valid answers. To a layperson the output may still look good (everything ran, the data is all there, the results are consistent with what was previously calculated) yet the answers are no longer statistically valid.

Shelf-life is a mandatory concept within the perishable food space. It should also be a concept within the data science space. Data goes stale over time, just as algorithms eventually stop being applicable.
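A shelf-life check doesn’t have to be elaborate. Here is a minimal sketch that flags when incoming data has drifted away from the data the algorithm was built on; the threshold and numbers are invented.

```python
# Minimal sketch: flag when recent inputs drift from the build-time baseline.
import statistics

def drifted(baseline, recent, max_shift_in_sds=2.0):
    """True when the recent mean wanders too far from the baseline mean."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) > max_shift_in_sds * sd

baseline = [98, 102, 100, 97, 103, 101, 99]  # conditions when the model shipped
recent = [120, 118, 125, 122, 119]           # conditions today
if drifted(baseline, recent):
    print("Inputs have drifted; revalidate the algorithm before trusting it.")
```

Even a check this crude forces the question that matters: does the world this algorithm was built for still exist?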

Information and Data are the basis of everything real estate related, so be sure you invest resources in them.

It may seem unnecessary to point out, but we are living in the Information Revolution. It’s spurred on by the increase in digital communications but the simple fact is that our world is all about harnessing information for the maximum return. If Return on Information could be accurately calculated it would likely be the new #1 metric that every business measures itself on.

  • Fast food menus are driven by the purchasing behavior of customers. To maximize profits it is necessary to use information to optimize both menu options and prices – ideally by location. A $1 fry may be successful in Georgia but less so in California.
  • Brick and mortar retail is size dependent, sometimes big stores are ideal and sometimes smaller stores are. If you are not able to harness your customer intelligence to know which is right for you then you have a 50/50 shot (or less) of guessing right.
  • Locating a corporate HQ is dependent upon the labor in the market and the trends for competition for your target skills, as well as the growth/decline of that skill in the area. Your decision 20 years ago to locate somewhere may no longer be an optimal solution even if it still seems so on the surface.
  • Your retail partners, distributors, customer locations, product mix and inventory levels define the optimal supply chain. The mix of all the above likely changes (beyond some threshold) every 3 months, if not more often. How are you using information around each to model the conditions that necessitate a change in location strategy, even if it is simply changing where inventory is pre-positioned? Sometimes having empty space in a warehouse today is the best long-term cost avoidance option.

Information drives everything about real estate. Knowing what is in your lease contract, a given landlord’s financial drivers, the macro and micro characteristics of the market, the labor pool you are trying to tap, future business plans that could impact the decision 3 to 10 years down the road… all of these need to be brought together to optimize any given real estate decision. There’s a lot that can go into a given decision but that doesn’t mean all of it needs to go in. Overkill is a real problem in analysis.

All this to say: invest in knowing how to harness and use information in your real estate decisions. It doesn’t all have to be some fancy, expensive technology (although that may be a component) but it does need to have a rational and consistent approach that meets your needs.

Nate Silver was right, saying otherwise is just misleading.

This one is a little late given that the election was a month ago but I still think it is worthwhile given the opinions I still hear about his performance.

I’m walking out of this election with an increased respect for Nate Silver and the work he is doing at fivethirtyeight.com. Statistics and prediction modeling are hard. Even that is an understatement, because anytime you try to predict the future – even tomorrow – it’s more likely that you end up slightly wrong than completely right.

A big portion of my job involves trying to understand the impact of decisions today on the business tomorrow. If we build out an office for 40 people today, what is the likelihood we have to close or expand it in 3 years? What is the likelihood that we can support 50 people in the same place without redoing the furniture? Is this city still going to be the right location for this function based on both business and geographic trends?

Nate Silver took a beating in the 2 weeks leading up to the election. He was consistently and regularly called out for being too optimistic about Donald Trump’s chances of being elected president.

Why do I respect Nate Silver more today than before? Because he understands the single biggest rule of data analysis: Garbage in, Garbage out. If you have any questions at all about the quality of the data you are being given, it is your responsibility to account for that fact and note its potential impacts. Saying that you were wrong because the data was wrong is exactly the wrong answer, because the follow-up question is “did you have any reason to suspect the data was wrong?” and any answer other than “yes” makes you incompetent in this case. And if the answer is “yes,” you were supposed to account for it.

Data is fickle, and many people think that data itself provides an answer. But if that were really true, IBM’s Watson would have taken over the planet already. Real intelligence is in being able to understand what data means. Where is it best applied, where should it not be applied at all, where is it misleading, where is it incomplete, where is it biased, what conditions could lead to a change in trends… There is an art to being able to actually deal with data.

It does not surprise me in the slightest that most polling aggregators this year showed Clinton at a 98% chance of winning the election. The data seemed to reflect that, and relying on the cold hard numbers would point almost any model in that direction. But this simply proves that some people are better at this than others, and you should never trust any model until you sense-check its approach, its strengths, and its weaknesses. It’s like restaurants. The aggregators who gave Clinton a 90%+ chance are fast food: they throw everything in and give you a generic burger. Those that talked a lot about uncertainty are actual chefs: they know what to do with the raw ingredients they are handed, throw out the worst, and make the best sing before the plate goes out.
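The fast-food/chef split has a mechanical side that is easy to demonstrate. Below is a minimal sketch (all margins and error sizes invented) of why treating state polling errors as independent inflates a favorite’s win probability, while allowing one shared, correlated error brings it back to earth.

```python
# Minimal sketch: independent vs. correlated polling errors.
# All margins and error sizes are invented for illustration.
import random

N_STATES = 9
MARGIN = 2.0     # the favorite's polled lead in every state, in points
POLL_SD = 3.0    # state-level polling error
SHARED_SD = 2.5  # nationwide error that moves every state together

def win_prob(correlated: bool, trials: int = 50_000) -> float:
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, SHARED_SD) if correlated else 0.0
        states_won = sum(
            MARGIN + shared + random.gauss(0, POLL_SD) > 0
            for _ in range(N_STATES)
        )
        wins += states_won > N_STATES // 2  # the favorite needs a majority
    return wins / trials

print(f"independent errors: {win_prob(False):.0%}")  # near-certain favorite
print(f"correlated errors:  {win_prob(True):.0%}")   # real upset risk remains
```

The independent model is the generic burger: every ingredient thrown in with no thought about how the errors move together. Accounting for the shared error is the chef’s move, and it goes a long way toward explaining why a model can give Trump a real chance while the cold hard numbers alone sit at 98%.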