But what happens when PRIME information is amplified by algorithms, and when some people exploit that amplification to promote themselves? Prestige becomes a poor signal of success because it can be faked on social media. Newsfeeds become oversaturated with negative and moral information, fostering conflict rather than cooperation. The interaction of human psychology and algorithmic amplification leads to dysfunction: social learning evolved to support cooperation and problem-solving, whereas social media algorithms are designed to maximize engagement. We call this mismatch functional misalignment.
One of the key outcomes of functional misalignment in algorithm-mediated social learning is that people begin to form inaccurate perceptions of their social world. For example, recent research suggests that when algorithms selectively amplify more extreme political views, people come to believe that their political in-group and out-group are more sharply divided than they really are. Such "false polarization" may be an important source of heightened political conflict. Functional misalignment can also accelerate the spread of misinformation. A recent study suggests that people who spread political misinformation leverage moral and emotional content (for example, posts that provoke moral outrage) to induce others to share it more widely. When algorithms amplify moral and emotional information, misinformation is swept up in that amplification. Brady cites several new studies demonstrating that social media algorithms clearly amplify PRIME information. However, it remains unclear whether this amplification leads to offline polarization.
Looking ahead, Brady says his team is "working on new algorithm designs that increase engagement while also penalizing PRIME information." The idea is that this approach would "maintain user activity that social media platforms seek, but also make people's social perceptions more accurate," he says.