Mastodon and the Fediverse had led a niche existence for years until Musk’s acquisition of Twitter and the ensuing turbulence thrust them into the spotlight. Since then, the Mastodon community has been growing not like a hockey stick, as investor jargon has it, but like a rocket. This is a big win for champions of open-source principles. However, the rapid growth might also become a curse, for several reasons.
In September 1993, the previously nerdy gatherings on Usenet were suddenly disrupted by a flood of new users: AOL had opened up access to Usenet. Where discussions had once revolved around technical topics such as operating systems, spam and trolls now dominated the agenda. The endless influx of newcomers who ignored the (to them unfamiliar) netiquette became known as the Eternal September and left a lasting mark on collective memory. Before, one was “among like-minded people,” with more or less consciously agreed-upon rules; now, new social groups joined for whom these contents were neither relevant nor intended, and vice versa, leading to what is called a “context collapse.” Since Usenet had no “guardian,” no central authority, the self-imposed rules could not be enforced, and Usenet became unusable (great wordplay, right?).
Fast forward to November 2022. Users flood the Mastodon servers. Here too, the rules are unclear to newcomers, even though they are supposed to be read upon registering on an instance. Soon the first accounts appear that nobody had missed from Twitter. Unlike Twitter, there is no central governing body, but Mastodon is not as defenseless as Usenet was. First, users can block other accounts, just as on Twitter, and even an entire domain. Beyond that, the administrators of Mastodon instance A can block another Mastodon instance B, so that all accounts on instance B, along with their content, remain hidden from the users of instance A. (As far as I can see, this hasn’t happened yet on digitalcourage.social, where everything is transparently documented.) In principle, then, a Mastodon instance can “defend” itself, unlike Usenet back then.
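For the technically curious: the user-level domain block mentioned above is exposed through Mastodon’s REST API as `POST /api/v1/domain_blocks`. A minimal sketch of how a client might prepare such a request follows; the instance URL and access token are placeholders, and the request is only built here, not actually sent.

```python
import urllib.parse
import urllib.request

INSTANCE_URL = "https://example.social"  # hypothetical instance, placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"       # OAuth token with write:blocks scope


def build_domain_block_request(domain: str) -> urllib.request.Request:
    """Build (but do not send) a POST /api/v1/domain_blocks request,
    which hides all accounts on `domain` from the authenticated user."""
    body = urllib.parse.urlencode({"domain": domain}).encode()
    return urllib.request.Request(
        f"{INSTANCE_URL}/api/v1/domain_blocks",
        data=body,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        method="POST",
    )


req = build_domain_block_request("spam.example")
print(req.full_url)      # https://example.social/api/v1/domain_blocks
print(req.get_method())  # POST
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) would then apply the block for the authenticated user only; instance-wide blocks are a separate admin-level feature.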
This brings us to the first problem: who actually decides what is right and wrong, which opinions are acceptable and which are not? On Twitter, it was people paid by Twitter, and the algorithms they implemented, trying to act according to defined rules approved by Twitter’s upper management and no doubt informed by case law. I experienced how difficult this can be in 2007, when I worked for the search engine Ask.com and was responsible for the international index (international = everything except the US). I received angry messages from Germany about swastikas appearing in the image search results. When I tried to block them, my US colleagues told me they would be breaking the law if they removed the swastikas, since these fell under free-speech protections. A technical solution that blocked swastikas only in Germany could not be implemented quickly, especially since swastikas are legally permissible in Germany in a historical context. What was wrong or right thus depended on which direction I was flying across the Atlantic. Please don’t misunderstand me: I don’t want to see far-right or human-rights-violating content in my Mastodon feed, and I think it’s right for such content to be blocked. I can block it myself, but entire instances may also be blocked by my instance’s administrator. On Twitter, everything was handled centrally (for better or worse); on Mastodon, it depends on the people running an instance and on how they interpret their own rules.
Now I come to the second great realization from my time in the US: in American newspapers, I read articles about Germany that I would never have read in Germany. Aspects were highlighted that our newspapers had barely covered (and vice versa, I suspect, for German media reporting on the US). Neither side spread falsehoods; the emphasis was simply different. In Germany, it seemed as though McDonald’s had been sued for millions in the US because someone had spilled hot coffee in their own lap, and some judge in that crazy American legal system had allowed it. In reality, the story was quite different; it just wasn’t presented that way in German media. In our attention economy, headline wording correlates with clicks and thus with money. How can I make sure that I read other opinions and portrayals? And how does this work in a decentralized system, where, in the worst case, it depends on a grumpy admin? No, I don’t want to read conspiracy theories about vaccinations being the devil’s work, nor do I want to read a professor whose contrarian paper didn’t survive peer review. A good example of how this can work is Spiegel’s publication of the counter-argument by the editor-in-chief of the Berliner Zeitung.
Rapid growth also creates a second problem. Twitter had grown more or less organically, through “atomic networks,” as Andrew Chen would call them. Mastodon is different: Twitter refugees try to find their familiar groups again, but not everyone is there, and others are. The expectation is that everything will be like Twitter, only better, but then beloved accounts are missing, and the hook of constantly receiving likes and comments doesn’t work the way it did on Twitter, especially if one’s followers didn’t migrate in large numbers. Organic growth takes time. And so the context collapse is already becoming evident, and it could work to Mastodon’s disadvantage. That is why I am more interested in how well users are connected and interacting than in how many new users join a Mastodon instance each day, because that is what brings users back to a platform again and again. Twitter helped things along by suggesting accounts and reshaping the feed, exactly what many hated. Without this “assistance,” though, growth beyond raw user numbers will become more difficult. No, I don’t want my Mastodon feed to be changed. But I doubt that Mastodon’s growth is currently “healthy,” or that an “unmanipulated” feed is what truly encourages “normal users” to interact. Actually, this would be a fascinating topic for a research paper 🙂