Social media is in a weird place at the moment. Over the last few years we’ve seen Twitter transition to a partisan platform where the algorithm actively has a view on which politics get promoted and which get buried; a massive move towards video-based social networks; and AI content not just spreading through the platforms, but fundamentally challenging the basic idea that social media is “social”. Established companies as well as new entrants are creating something that seems a lot like social media, but doesn’t actually contain any real people.
I’m using the term “post-social media” to describe digital services that are a continuation of the form, business model and regulatory structure of social networks, but that no longer connect real human beings. Examples of these can be seen in changes to existing Meta platforms like Facebook, and new entrants like character.ai and ChatGPT.
It might seem weird to see these as social media at all - but in practice they are a logical continuation of the development of social media over the last few decades. This is not to say it’s good, just that there’s a lot of continuity, which shapes how the companies themselves behave, and what the regulatory approaches could be.
Evolution of social media
In this framework, there are three loose eras of social media:
- Unintermediated peer-to-peer networks
- The algorithmic timeline
- Post-social media
There aren’t hard cut-offs between these eras, and plenty of older networks survive long into subsequent periods - but with significant challenges and competition from newer forms of social media.
Starting at the beginning: from the 90s you see the growth of unintermediated social networks - initially through collections of email lists, bulletin boards, and the like. Here both the good and the bad are people - who joins a group, who you’ve chosen to connect to, and what they choose to do and say. For instance, concerns about pro-suicide forums are anchored in this form of social media. The story of the 2000s is the absorption of separately hosted online communities into mass social media, with overlapping communities and interests gathered in a single platform. Early Facebook and Twitter belong to this era, but sit at its tail end, because bringing more things together in one place creates the positioning and motivation for the next step.
At this point, large platforms start to develop the algorithmic timeline. The sheer scale of content requires new mechanisms to curate the platform for users. Structurally, the platform becomes millions of “relationships” between individual users and an algorithm that responds to engagement and mediates the content each user sees. This focuses your attention on real content by real people based on what engages you (which can be bad). While there are concerns about “what if the platform takes a stance” (now very much validated through X), the main concern is the extent to which harmful things can happen without any stance being necessary. Engagement-optimising approaches know what made you click, or where you lingered longer, without needing to understand what the content actually is; they can self-reinforce and take people down (very engaging) rabbit holes. Concerns about algorithm-promoted self-harm content on Instagram fit in this era.
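To make that dynamic concrete, here is a minimal, hypothetical sketch (the names and numbers are invented) of a content-blind engagement ranker: it learns only from clicks and dwell time, never from what an item says, and exposure compounds on itself.

```python
import random

# Hypothetical sketch: a ranker that learns only from behavioural
# signals (clicks, dwell time) and never inspects content itself.
# "pull" stands in for how engaging an item happens to be.
items = {f"item_{i}": {"score": 1.0, "pull": random.uniform(0.1, 0.9)}
         for i in range(20)}

def rank(items, k=5):
    """Surface the k items with the highest learned engagement score."""
    return sorted(items, key=lambda name: items[name]["score"],
                  reverse=True)[:k]

for _ in range(1000):
    for name in rank(items):
        item = items[name]
        # Simulated user behaviour: more pull means more clicks and dwell.
        clicked = random.random() < item["pull"]
        dwell = random.uniform(0, 10) * item["pull"]
        # The feedback loop: engagement raises the score, which raises
        # exposure, which produces more engagement. Items never shown
        # never score.
        item["score"] += (1.0 if clicked else 0.0) + 0.1 * dwell

print(rank(items))  # whatever pulled hardest now dominates the feed
```

Nothing in that loop knows or cares what an item contains; the rabbit hole falls out of the exposure-engagement feedback rather than any editorial decision.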
What we’re now seeing is the evolution of the algorithmic timeline into post-social media where the contributions of real people are less and less present. Through platform-run chatbots, the "relationship" with the algorithm has become more intense and personified. It's the form of social media, but without people.
The transition between the two is gradual. The algorithmic timeline encourages actual people to behave in engagement-maximising ways. This runs from people working out what gets numbers and going and making it, through content farms that start to rely on automated approaches, to increasing use of AI to generate content. At the transition point, you have platforms running their “here is what people want to see” algorithms, and then “users” of the platforms running “make what people want to see” algorithms to provide that content and extract some money from the process (a loop sketched below). If you’re the platform, you can see some appeal in cutting out the middleman: replacing the nominally human content wranglers with actual machines, and not really noticing you lost something of value about five steps before. A system that started out selling human connection has been step-by-step replaced by machine processes offering a simulation of that connection.
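As a toy illustration (every name here is invented), the two sides of that loop can be written as a pair of interacting optimisers: the platform promotes whatever engaged, and a content farm imitates whatever got promoted.

```python
import random

# Toy model of the two-sided loop: a platform ranker plus a content
# farm that imitates whatever the ranker rewarded last round.

def platform_rank(posts, k=3):
    """Platform side: promote the posts with the most engagement."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)[:k]

def farm_generate(winners, n=10):
    """Farm side: churn out variations on whatever got promoted."""
    return [{"style": random.choice(winners)["style"],
             "engagement": random.randint(0, 100)}
            for _ in range(n)]

# Seed round: humans posting in a mix of styles.
posts = [{"style": style, "engagement": random.randint(0, 100)}
         for style in ("essay", "meme", "rage-bait", "video")
         for _ in range(5)]

for _ in range(10):
    winners = platform_rank(posts)
    # Each round, more of the feed is machine-made imitation of winners.
    posts = winners + farm_generate(winners)

print({p["style"] for p in posts})  # diversity collapses to what engaged
```

Neither side of the loop needs to understand the content, and by the final round the humans producing it are optional.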
Harms and regulation
Post-social media represents continuity with the social harms and regulatory problems of social media, and with the general approaches these organisations develop to manage risk. Meta’s guidance on “how much should our robot flirt with children” makes more sense when you see it as an evolution of the “what should our users be allowed to say to each other” guidance, which similarly gives both big principles and specific examples. OpenAI’s “we will use AI to monitor your conversations with AI and report people plotting crime to police” continues the thread from Facebook’s user- and machine-learning-assisted flagging of content through to its massive public moderation teams.
But the trade-offs in regulation are completely different because they’re not social networks anymore. Social media regulation navigates a trade-off between the value of community, ideas of free expression, and harm to users. These are trade-offs that don’t exist once you lose the “social” element of social media.
The new set of trade-offs is that the same technology has very divergent use-cases. While they can be packaged and sold in more specialist ways, “auto-complete for programmers” and “emotional support robot” grow out of the same fundamental model (and one of the defining features of LLMs is this flexibility).
At the same time, it’s not impossible to shape what these tools can and can’t do, and companies are responsive to social and regulatory pressure on what is acceptable.
Regulation of post-social media isn’t a completely new problem - but it requires engaging with LLMs as a technology that can be shaped to human needs, rather than an alien technology that just has to be accepted. The technical approaches are different, but once you get over being invested in this technology solving all your problems, there are options on the table for regulators. Regulation here will look different from regulation of social media, but it doesn’t need to be invented from scratch.