A collection of open-access research exploring the difference between a hive mind (a collective intelligence with general problem-solving ability) that solves the problem of collective optimization, and one that solves the problem of optimizing outcomes for a single individual or other entity.
Aims & Scope
To help build a strong and openly accessible evidence base for decision-making, the Nobeah Foundation is calling for submissions for a new Collection on ‘General Collective Intelligence Platforms and the Hive Mind’. The concept of a “hive mind” may sound like science fiction, but an innate hive mind, the “collective social brain”, is in fact predicted to exist. Separately from experimental efforts to validate the existence of this innate hive mind, private funding appears to be pouring into corporate research on potentially all of the components that might be required to implement a vastly more powerful artificial hive mind.

From a human-centric perspective, technology solves one of two problems: either it optimizes outcomes for some individual or other entity, or it optimizes outcomes for the group. According to one argument, if technology continues to advance and continues to increase its capacity to integrate with other technology, a hive mind may in fact be inevitable. The only uncertainty is which kind we will form: a hive mind that solves the problem of optimizing outcomes for a single individual, in which case we might all become slaves in the manner of Star Trek’s cinematic vision of a single Borg queen; or a hive mind that optimizes outcomes for all of us, a protector of people and of the planet we depend on for our well-being, like the hive mind hosted by the Tree of Souls envisioned in the movie Avatar.

A model for a General Collective Intelligence platform that implements this beneficial or “good” hive mind, and that is potentially capable of exponentially increasing our capacity to solve human problems such as poverty and climate change, has already been created, and academic articles on this model have been published in peer-reviewed journals catering to experts in this and related topics.
The model specifies a concrete set of attributes that a General Collective Intelligence must have in order to be “good” in the sense of optimizing collective outcomes. This Collection explores the difference between the two options, as well as why the “bad” hive mind is the default. Humans are generally not predisposed to take the risk of selecting new solutions, even when those solutions might be vastly better for the group, because humans are not most strongly motivated by the desire to succeed. Instead, they are most strongly motivated by the desire not to fail, and by the desire not to be singled out and ridiculed for failure. Decision-making therefore reliably chooses a new and complex solution that might be vastly better in only a few cases. One is urgency: when the group is “on fire” and the solution on offer is water. Another is when selecting that solution ensures a person will not fail and will not be singled out and ridiculed for failure.