We’ve Been Thinking About Measurement All Wrong

Ian David Moss
5 min read · Mar 1, 2019


Doug Hubbard’s How to Measure Anything offers social sector professionals a step-by-step guide to counting what counts.

Measurement is not a simple act of observation disconnected from any larger plan. Instead, it’s an optimization strategy for reducing uncertainty about decisions we need to make. That’s the central argument of Douglas Hubbard’s How to Measure Anything: Finding the Value of “Intangibles” in Business, which remains one of the most important books on decision-making I’ve read since first encountering it more than seven years ago.

How to Measure Anything’s reframing of measurement’s purpose is nothing short of revolutionary, especially for workers in what I call the “knowledge industries”: evaluation, research, data science, policy analysis, forecasting, and so on. Among other ramifications, it establishes that measurement has value only insofar as it can reduce uncertainty in the context of a decision that matters. This emphasis on specific decisions (starting with the decision and seeking out additional information only as needed to gain confidence in making it) suggests a hyper-applied approach to evaluation and research, one that would represent a radical departure from the way these functions operate at most organizations.

Hubbard also argues that if something matters, it must have observable consequences or leave some kind of observable trace. Therefore, everything that matters is measurable, even seemingly “intangible” phenomena that most would consider to be beyond the realm of quantification. If something does not seem amenable to measurement, it’s a sign that either it doesn’t actually matter or it’s not sufficiently well defined.

How to Measure Anything presents a panoply of methods for defining measurement problems more clearly and training stakeholders in solving them, including Fermi estimation, calibrated probability assessment, Monte Carlo simulation, various sampling techniques, Bayesian statistics, and methods to aggregate expert judgments. Many of these fit like jigsaw puzzle pieces into an overarching methodology Hubbard has developed for analyzing and making any decision, which he calls Applied Information Economics. The basic steps of AIE are as follows:

  1. Define a decision problem and relevant uncertainties
  2. Determine what you know now
  3. Compute the value of additional information
  4. Apply measurement instruments to high-value measurements
  5. Repeat steps 3 and 4 until the value of additional information drops to zero
  6. Make a decision and act on it
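
The loop in steps 3–5 hinges on putting a number on what additional information is worth. Here is a minimal Python sketch of the idea behind that calculation, applied to a hypothetical go/no-go investment; the payoff range, cost, and the normal distribution are my illustrative assumptions, not Hubbard’s numbers. The expected opportunity loss of the current best bet is the most that perfect information could ever be worth:

```python
import random

random.seed(42)

def evpi(lower, upper, cost, trials=100_000):
    """Most a decision-maker should pay for perfect information.

    The uncertain payoff is modeled as a normal distribution whose 90%
    confidence interval is (lower, upper); about 3.29 standard
    deviations span a 90% interval. Investing yields payoff - cost;
    passing yields zero. The expected opportunity loss (EOL) of the
    best bet equals the expected value of perfect information (EVPI).
    """
    mean = (lower + upper) / 2
    sigma = (upper - lower) / 3.29
    net = [random.gauss(mean, sigma) - cost for _ in range(trials)]
    if sum(net) / trials > 0:  # best bet is to invest
        # losses occur when the payoff falls short of the cost
        return -sum(min(x, 0.0) for x in net) / trials
    # best bet is to pass: we forgo the upside when payoff beats cost
    return sum(max(x, 0.0) for x in net) / trials

# Hypothetical project: payoff 90% CI of $100k-$500k, cost $250k
print(f"EVPI: ${evpi(100_000, 500_000, 250_000):,.0f}")
```

Once the value of further measurement drops below the cost of obtaining it, step 5 tells you to stop measuring and decide.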

A Book Full of Math for People Who Hate Numbers

Throughout the book, Hubbard relentlessly seeks to persuade readers that measurement is easier than they think. He points out that our intuition naturally models decisions all the time; as long as we can understand a situation, we are already modeling it in our heads. He demonstrates his “Rule of Five,” a mathless method for reliably bounding the median of any population: there is a 93.75 percent chance that the median lies between the smallest and largest of just five random samples. He deduces that most of the value in measurement typically comes from the initial investment of resources; contrary to conventional wisdom, the more data you have, the less useful additional data typically becomes. This means Hubbard’s method can be used even in extremely information-poor environments, and in fact thrives in those contexts.
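
Hubbard’s five-sample trick, which he calls the Rule of Five, works because each random sample independently has a 50 percent chance of landing above the population median, so the only way the median escapes the sample’s range is for all five draws to fall on the same side of it: 2 × 0.5⁵ = 6.25 percent. A quick simulation (the skewed “commute time” population is my invented example) bears out the figure:

```python
import random

random.seed(0)

# Invented example: a skewed population of 10,000 "commute times"
population = [random.lognormvariate(3, 0.5) for _ in range(10_000)]
true_median = sorted(population)[len(population) // 2]

trials = 10_000
hits = sum(
    min(s) <= true_median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)

# Theory: 1 - 2 * 0.5**5 = 93.75% of five-sample ranges straddle it
print(f"{hits / trials:.1%}")
```

Just five data points already put a tight probabilistic bound on a population’s midpoint, which is exactly the spirit of Hubbard’s argument.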

How to Measure Anything is packed to the gills with practical tools designed to help readers realize the potential of these insights, and the tables, examples, and templates provided throughout ensure that readers can get started building their own decision models right away. In addition, the book’s accompanying website offers a set of reusable Excel spreadsheets demonstrating techniques discussed in the text. At the same time, Hubbard’s constant encouragement for mathphobic readers lends the material a level of accessibility that most texts on the topic never approach.

How to Measure Anything is not without flaws. Its insistence that literally anything can be measured becomes dogmatic at times, glossing over some very real challenges germane to modeling complex systems and extremely rare events in particular. It also fails to consider how to decide when an explicit decision-modeling approach is needed at all, which is important given that we make the vast majority of our decisions intuitively. In my own practice, I have found that the AIE technique, while fundamentally sound, usually requires substantial adaptation and simplification to gain buy-in from stakeholders, even after they have had the opportunity to read the book themselves. And while Hubbard has a gift for explaining complex ideas in simple terminology, many chapters in the most recent edition include digressions that distract from the book’s core points and become repetitive after a while.

Even so, it’s amazing to me that How to Measure Anything isn’t more widely recognized and revered in the social sector. Chapter 3 alone should be required reading in any graduate program aimed at training future nonprofit and public service leaders. In one sense the book’s obscurity isn’t surprising, as its framing and marketing suggest an emphasis on business applications. But if anything, its lessons and philosophy are even more relevant to realms where stakeholders must balance disparate and hard-to-quantify goals without the benefit of advanced training in mathematics. That’s most of philanthropy right there!

Not Knowing Everything Is Still Good for Something

How to Measure Anything’s emphasis on uncertainty, and thinking of that uncertainty as a continuum, opens the door to a probabilistic way of thinking about the world that makes perfect sense as soon as you stop to think about it, yet is completely foreign to most organizational cultures.

Learning to approach management and life challenges with a probabilistic mindset is incredibly empowering. It makes measurement challenges feel far less intimidating and much more tractable, while reducing the social stress and embarrassment that come from having one’s predictions revealed to be wrong (because we’re all wrong sometimes). It is also a recipe for far more cost-effective and impactful use of resources whenever any analytical process, including research and evaluation, is called for.

How to Measure Anything may not be a perfect book, but the contributions it makes to the discourse are enormous. If foundations, donors, government agencies, and major nonprofits applied its principles on a more routine basis, even if they didn’t use the full AIE method or go to the effort of explicitly modeling their decisions all that often, I’m convinced the world would be a radically different — and hopefully better — place.