Three levels of bit/altcoin demand

I follow the world of bitcoin and altcoins fairly closely. It’s a world with enough opportunity to change things dramatically, though who knows whether that will actually happen. A lot can happen with projects as large as these.

There is a lot of misunderstanding going around about the nature of demand for the various projects that exist. As I sit here writing, bitcoin is valued at $6,547 per coin. Some degree of demand drives that value. However, prices of bit/altcoins fluctuate rapidly and regularly, implying a large disconnect in how people value the demand for these projects.

This disconnect resonates throughout how people talk about bit/altcoins. Let’s break down the three levels of demand so we can maybe have a degree of agreement on how to communicate:

  1. Developer/Early adopter demand. This group is interested predominantly in the potential to use bit/altcoins for novel use cases. They aren’t interested in buying a gallon of milk; they want to create something new. This level of demand largely creates interest and future potential. Very little of the value of coins as it stands today comes from here.
  2. Institutional/Long-term investors. This group goes around shopping for deals. They want to buy something today for $1 that will be worth $5 tomorrow. Undeniably, bit/altcoins contain this potential. As projects from level 1 become more mainstream, this potential gains realism. Most of the value of bit/altcoins is derived from this level because there really isn’t much else for the average owner of bit/altcoins to do with them.
  3. Everyday users. This group does not exist yet. Sure, there are some limited use cases that allow people to use bit/altcoins in repeatable and useful ways, but those cases still fall easily under the early adopter frame. None of the projects is ready for the average consumer to go buy a gallon of milk. Essentially none of the coins’ value comes from this bucket.

Most investments that people can make involve a mix of valuations across these groups. Over time, the gap between the various valuations closes to bring about consensus on the value of an investment. Bit/altcoins are nowhere near that point. Each investor is still applying their own valuation process, which creates significant valuation gaps.

There’s no right answer, but it’s important to realize that any two people having a conversation around “what is the right value of bitcoin” are probably having two different conversations even though they’re using the same words.


Trust and accountability are a huge hurdle AI needs to overcome to take over business spaces

There’s a great episode of 30 Rock where Jack (the head of the network) replaces all of the pages with computers. Naturally, everything is going great until Jack has to ship a package to his boss. The package doesn’t make it because he typed in the 6th floor instead of the 66th. The computer system proves he’s wrong but Jack pulls a page in to take the blame and throws out the computers. It’s a funny episode that contains a large amount of truth.

One of the advantages of humans being involved in a process is that they can check for other human errors that are difficult to detect (such as someone putting in the 6th floor instead of the 66th floor). AI is a black box to all but a small portion of the population. It follows the “And then a miracle occurs” model. Put something in and then the outcome magically happens. No idea how, but it does.

Black box processes require a high degree of trust. They may be fine for rote activities, but do you really want one designing your 10 million square foot real estate portfolio? How will you know if the variables are right? How will you be certain it didn’t miss something, or forget to factor in politics? If you have to triple-check all of the outcomes manually anyway, what good is the AI?

Overcoming this lack of trust is going to be very difficult for most AI. There is one class of AI for which it will work really well, though: AI that speeds up the decision process without actually making a recommendation. AI has a great ability to support existing processes and make them more robust. Replacement of processes will come a bit further down the road, after a lot more trust is built.

Short-term thinking can lead to long-term disasters

Recently, I read a post on LinkedIn that recommended adding free features for job seekers. Give them greater visibility, increased profiles, more InMail messages. Basically, make it easier for people to use the system to find a job, which is one of the key uses of the site. It drew 830 comments, with most seeming to agree (or simply complain about LinkedIn generally).

This is classic short-term thinking. Many quick ideas have the same flaw: they discount the incentives they create for bad behavior. If you give a free service to a certain class of people (job seekers in this case), you will suddenly find a surge of people classifying themselves that way in search of free features. If you are a headhunter or HR person, why not classify this way? You could leverage exactly the same features. Think about the new deluge of messages hiring managers would start to get; they’d stop using LinkedIn entirely.

One of my approaches when I see new technology is to challenge it even if my initial reaction is strongly positive. A great salesperson can make even an awful tool look amazing in a short demo. Challenging an idea is always worthwhile. Fragile ideas that can’t stand up to scrutiny aren’t worth pursuing. This fragility is what leads to the long-term disasters.

It’s one thing to throw out an idea that hasn’t been thought through and tested. That’s a principle of brainstorming and innovation. It’s another thing completely to present an untested idea as a project to begin working on. 

Primary data versus secondary data.

I deal heavily with workplace sensor technology. This is the tech where a sensor placed under a desk or in a room can tell you whether a space is occupied or not. Basically, it reports a 1 when it detects occupancy and a 0 when that occupant leaves. Pretty straightforward.

From a primary data standpoint, we can use this data to understand the utilization of the office. Were we 70%, 80%, 90% occupied on average? What was our occupancy late morning? Mid-afternoon? What days of the week do we see our peaks occurring? It’s pretty cool to see some of these trends.

From a secondary data standpoint, it can report the average occupancy of the office across the day accounting for the time a given desk sat empty. If you measure an office between 7a and 7p, you may never get higher than 50% occupancy because the tails of your measurement period are extremely low occupancy. If you measure between 10a and 3p, the lunch period takes on outsized significance. If you measure between 9a and 4p, the trend changes to something else. Picking the right measurement period is tricky. Even more tricky is understanding what’s good or bad with a particular measure.
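
To make that concrete, here’s a minimal sketch in Python (the hourly readings are entirely made up for illustration, not data from any real office) showing how the same set of desk-sensor samples produces very different “average occupancy” numbers depending on the measurement window you choose:

```python
# Hypothetical hourly desk-sensor readings: hour of day -> fraction of desks occupied.
# These values are illustrative only, not measurements from a real office.
hourly_occupancy = {
    7: 0.05, 8: 0.30, 9: 0.50, 10: 0.80, 11: 0.85, 12: 0.45,
    13: 0.55, 14: 0.82, 15: 0.70, 16: 0.60, 17: 0.35, 18: 0.10,
}

def average_occupancy(start_hour: int, end_hour: int) -> float:
    """Average the hourly occupancy fractions over [start_hour, end_hour)."""
    hours = [h for h in hourly_occupancy if start_hour <= h < end_hour]
    return sum(hourly_occupancy[h] for h in hours) / len(hours)

# The same office looks very different depending on the window you pick.
print(f"7a-7p:  {average_occupancy(7, 19):.0%}")   # ~51%: low-occupancy tails drag the average down
print(f"10a-3p: {average_occupancy(10, 15):.0%}")  # ~69%: the lunch dip carries outsized weight
print(f"9a-4p:  {average_occupancy(9, 16):.0%}")   # ~67%: yet another answer from the same data
```

The underlying readings never change; only the window does, and that alone swings the headline number by nearly 20 points.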

I’ve recently seen a number of requests to measure conference room utilization by counting the number of people in a room at any given time. Naturally, a 10 person conference room occupied by only 2 people is under-utilized. Unless those two people are the head of sales and a big client he’s working with. Then it’s perfectly utilized. But what about a 6 person room only occupied by 2 people? Is that under-utilized? Even if it is, is it 33% below utilization target or 67%? Identifying how to define good and bad performance is extremely difficult.
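
As a rough illustration of that ambiguity, the sketch below (made-up numbers, with an arbitrary assumed “right-sized” target of 4 people for a 6-person room) computes the same meeting three different ways and gets three different answers:

```python
def seat_utilization(people: int, capacity: int) -> float:
    """Share of the room's seats actually filled."""
    return people / capacity

def target_utilization(people: int, target: int) -> float:
    """Share of an (arbitrary) target headcount actually present."""
    return people / target

people, capacity = 2, 6
target = 4  # assumed "right-sized" headcount for this room; purely illustrative

print(f"Seats filled:       {seat_utilization(people, capacity):.0%}")  # 33%
print(f"Against the target: {target_utilization(people, target):.0%}")  # 50%
print(f"Room in use at all: {'yes' if people else 'no'}")               # the binary view says it's fine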

Primary data is binary and difficult to argue with: the space was occupied or it wasn’t. Secondary data is open to interpretation. Focus on the primary data first. Evolve to include secondary data over time as you learn what it means to your business.

There’s a big difference between good data and interesting data.

Good data is often very boring. It contains the basics and comes in as expected (right format, schedule, completeness). There are lots of things we can do with good data in order to move the ball forward.

Interesting data is never boring. It contains interesting attributes and never-before-seen elements. Usually it comes in as a one-off dataset. There are many interesting things that can be done with interesting data but sometimes it is hard to tell if those interesting things are valuable.

Recently, I’ve been involved with a few technology companies looking at their new capabilities in development. There are some truly fascinating things being developed. My first question every time is, does this new feature drive user adoption of the system? Stated another way, does this new feature give users a reason to either contribute data more freely/voluntarily or come back regularly? If not, then you are developing a second-tier feature. If so, it’s a core feature that is making your system better.

Most interesting data comes from second-tier features. It’s the data that may or may not be correlated to the main data. It may or may not be indicative of performance. But my goodness can it show some interesting things… those things just may not mean anything.

Troubleshooting isn’t only for technology things

One of the hardest parts of any project is getting the Quality Assurance aspects right. How do you really QA a consulting report or check a research report? Troubleshooting non-procedural activities is a fundamental issue.

The problem is, troubleshooting non-technology activities is more important than doing so in technology. Once a system is set up, it usually has controls in place to keep it between the rails. Non-technology work can go wrong in any number of ways. Sometimes those ways are subtle and non-obvious. Errors can creep in and set up shop for years before anyone notices an issue.

This softer side of QA is where good managers differentiate themselves. Sometimes knowing the right questions to ask and then asking them goes further than anything else. A report that no one cares enough about to challenge probably doesn’t have enough value to continue being run.

Step one is stopping to ask questions. It doesn’t matter whether the process is online or offline, automated or manual, data-based or not. Stopping to think is what troubleshooting and QA are all about.

3 layers of #CRE data. Who is your real customer? Who is your partner’s real customer?

The internet has made the service provider world very different. An amazing number of new business models have come to life that can provide tremendous value if you think through your needs correctly. This is particularly true in the commercial real estate industry.

Data has taken over most industries as the primary coin of the realm. Those who possess the most timely and accurate data can name their price. This leads to my biggest concern when working with new companies or tools: understanding who their real target customer is can be difficult.

In CRE, data is largely generated through three different and distinct parties:

  1. End users. The companies that actually use and operate the space. They know how many people there are and what they are doing in the space.
  2. Building owners. They operate the core building services and control the building financials.
  3. Real Estate service providers/brokers. They market the building and negotiate lease terms.

Only through all three groups can you get a full picture of what is happening. Depending on where you exist in the real estate ecosystem, you have more or less access to the information from these three layers. New technology groups are popping up trying to paint the picture in each more clearly. They often offer tools to the groups in categories 1 and 2 with the goal of selling the resulting data to categories 2 or 3.

End users hold some of the most valuable data, which makes them the target. Knowing this, it’s worth watching out for anyone offering you tools to “help solve your problem” when, in reality, they are building a tool predominantly to leverage your data for someone else entirely. If they can charge both sides of the equation, the technology company becomes the biggest winner.

It’s much like how the general user of Facebook is not its primary customer; the advertisers are.