It turns out that group meetings are mostly a terrible way to make decisions

Ian David Moss
Dec 17, 2020


Photo by Danielle Cerullo on Unsplash

Over the course of nearly two decades in the workplace, I’ve seen the inside of dozens of organizations and teams as an employee, consultant, or friendly collaborator. With rare exceptions, it seems, important decisions get made one of two ways. If an organization is particularly hierarchical or founder-driven, the leader makes the decision and then communicates it to whoever needs to know about it. Otherwise, the decision gets made, usually by group consensus, in a meeting.

All too often, those meetings are decision-making disasters.

Or at any rate, that’s what the research says. It might not be apparent right away — one of the sneaky aspects of group deliberation is that it reliably increases confidence in the final decision, whether that decision was a good one or not! And yet most group deliberations don’t do much to combat the many cognitive biases we carry into those groups as individuals. In fact, meetings tend to make many of those biases even worse.

Studies show that the group decision-making process exacerbates a number of biases that are especially relevant in group settings:

  • planning fallacy (thinking things will get done sooner or cost less than turns out to be the case)
  • framing effects (seeing a situation differently depending on how it’s presented)
  • egocentric bias (assuming that other people are more like you than they are)
  • the representativeness heuristic (relying too much on stereotypes)
  • unrealistic optimism
  • overconfidence
  • sunk cost fallacy (doubling down on a losing strategy)

Groups do perform slightly better than individuals when it comes to the availability heuristic, anchoring, and hindsight bias, but the errors are still there.

All of this is chronicled in an essential (if not exactly riveting) book called Wiser: Moving Beyond Groupthink to Make Groups Smarter by Cass Sunstein and Reid Hastie. The duo set out to have Wiser do for group decision-making what Daniel Kahneman’s Thinking, Fast and Slow did for individual decision-making: lay out, in comprehensive fashion, what cognitive science says about the relevant biases and how they can be overcome. Though Wiser is a small fraction of the length of Kahneman’s grand opus, it makes a convincing case that decision-making in groups is a distinct enough phenomenon to merit its own analysis. For the purpose of improving decision-making and leadership in professional settings, moreover, it draws on a far more relevant body of literature than the better-known research popularized in earlier books.

How Groups Fall Short

According to Sunstein and Hastie, groups fail for four interrelated reasons:

  1. Group dynamics amplify the errors of individual members
  2. Group members follow the lead of those who spoke or acted first (“cascade effects”)
  3. Group discussion leads to polarization of viewpoints
  4. Groups privilege public, shared information over private information

As a social species, our brains are hardwired to take other people’s behavior into account when determining our own. We don’t even need to be physically around other people for that to be the case! Sunstein and Hastie cite studies in which artificially boosting a comment on a website, or revealing how many times a song had been downloaded, had an enormous influence on participants’ subsequent ratings of quality. And when we are near each other, of course, the pull to seek safety in the herd can be even stronger.

The authors describe three types of “cascades” that can cement groupthink in place during meetings: informational, reputational, and availability. Informational cascades happen when successive speakers in a meeting agree with the first speaker, suppressing their own doubts on the assumption that the previous speakers know what they are talking about; later speakers then conclude that opinion is far more unified than it actually is. Reputational cascades happen when speakers hold back their disagreement because they don’t want to be seen as argumentative, difficult, or not team players. Availability cascades take place when some issue or comparable situation has high salience for the group, blocking out alternative scenarios or interpretations.

As meetings unfold, the judgments of individual group members shift based on who has spoken and what opinions have been expressed. By and large, those judgments move in the direction of what the majority of the group thought in the first place: people who were originally on the opposite side of the issue start to have doubts or become persuaded, while people who were on the majority side to begin with become even more confident that they’re correct. This phenomenon is known as group polarization, and it’s a natural consequence of informational and reputational cascades combined with the fact that groups almost always have some initial leaning in one direction or another. More than 100 studies across twelve different countries have found evidence of this effect in group deliberations, and it applies not only to judgments of fact but also to values and risk preferences. Polarization is especially potent in groups of like-minded people, and the increase in confidence that polarization produces serves to accelerate it further. It’s not hard to understand why: going against majority opinion is a stressful experience. People will doubt their own judgments, fear being marginalized, and agree to just about anything rather than be the sole dissenter in a group.

Making things even worse, groups are remarkably inefficient at sharing information. In any group, some facts and perspectives will be known to most or all participants, while others may be known only to a few. Not surprisingly, the widely shared information is likely to have more influence on the judgments of the group, even when the information that only a few people know is no less accurate or relevant to the situation. Not only is shared knowledge more influential in group deliberations, the research shows, but people who hold private information participate less and are taken less seriously when they do. That’s especially the case for group members who are seen as lower-status in those settings, including less educated people and more junior employees.

To summarize, then, group members come into decision-making meetings with preconceptions about what the right answer is. The people who speak first and most often are those whose views are most likely to match the preconceptions of the majority of the members. The people who speak later face pressure to quash their doubts or present them in a more moderate light in order to avoid appearing too much of an outlier. After a while, meeting participants sitting on concrete information that might support an alternative viewpoint are unlikely to speak up at all.

It is possible for these dynamics to push a group toward better-quality judgments; it depends on whether the group’s initial leanings were accurate or not. In practice, though, the research suggests that this is more the exception than the rule. Broadly speaking, group deliberation performs worse than simply aggregating the judgments that group members make separately.

Are There Better Ways to Decide Together?

To be honest, these findings represent a damning indictment of nearly all the most common meeting facilitation practices I’ve seen in organizations. It is way too easy for a group to feel great about a poor decision it’s about to make. And because there’s often no immediate feedback to indicate that the decision was ill-advised, it may be a long time, if ever, before the group members realize their mistake.

So what can we do about it? In general, the research on combating biases is a ways behind the research identifying them in the first place. But Sunstein and Hastie run through a number of strategies, with varying degrees of evidence behind them, for mitigating or avoiding the problems endemic to deliberating groups. The strategies are rooted in the premise that it’s essential to preserve cognitive diversity (different thinking styles and perspectives) in group settings in order to counter the tendency toward conformity. The authors also recommend splitting the divergent (figuring out what the options are) and convergent (choosing the best among them) components of decision-making into two distinct phases, since they involve very different thinking processes and may even benefit from having different people involved.

Several techniques seem especially worthy of further exploration or wider adoption in real-world settings:

  • Having leaders suppress their own opinions at the start of deliberations and explicitly express a desire to hear new information
  • Setting cultural norms that encourage critical thinking and productive disagreement
  • Red-teaming, or negatively focused scenario planning, to counter excessive optimism and support more robust risk management
  • Adopting the Delphi method or variants of it, in which group members make estimates or express their opinions individually prior to group discussion, and then again individually at the end (a simplified sketch follows below)
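To make that last idea concrete, here is a minimal sketch, in Python, of what a Delphi-style estimation round could look like if you scripted it. The participants, numbers, and two-round structure are all hypothetical simplifications; the point is only that judgments are collected and aggregated individually, with anonymous feedback in between, rather than negotiated out loud in a meeting.

```python
import statistics

# Round 1: each participant estimates independently, in writing,
# before any group discussion (names and numbers are hypothetical).
round_one = {"Ana": 12, "Ben": 18, "Chloe": 9, "Dev": 30, "Eli": 14}

# Only an anonymous summary of the estimates is shared back,
# so nobody knows who said what or who "spoke first."
summary = {
    "median": statistics.median(round_one.values()),
    "low": min(round_one.values()),
    "high": max(round_one.values()),
}
print("Round 1 summary shared with the group:", summary)

# Round 2: participants privately revise their estimates after
# seeing the summary (the revisions here are hypothetical too).
round_two = {"Ana": 13, "Ben": 16, "Chloe": 12, "Dev": 22, "Eli": 14}

# The group's answer is a statistical aggregate of the individual
# revisions rather than whatever a dominant voice proposed.
final_estimate = statistics.median(round_two.values())
print("Final Delphi-style estimate:", final_estimate)
```

In a real application the rounds would repeat until the estimates stabilize, and discussion (if any) would come only after the independent judgments are on record.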

As suboptimal as the typical group meeting may be, a big problem leaders face is that making decisions any other way feels really uncomfortable for a lot of people. It’s not just about aversion to conflict, although that’s certainly a factor. It’s also that most interventions to improve the decision-making process have the side effect of slowing that process down. That’s not always such a terrible thing in the abstract, but in many organizations there is a culture of urgency to resolve uncertainties about the path forward as soon as possible once the decision rises to the top of the agenda. To improve results by improving the process, managers may first have to instill a discipline of strategic patience among their teams when it comes to identifying and acting on the most important decisions to be made.
