Someone should demonstrate this more mathematically, but it seems to me that if you start with a random assortment of identities, small fluctuations plus reactions should force polarization. That is, if a chance fluctuation makes environmentalists slightly more likely to support gun control, and this new bloc goes around insulting polluters and gun owners, then the gun owners affected will reactively start hating the environmentalists and insult them, the environmentalists will notice they're being attacked by gun owners and polarize even more against them, and so on until (environmentalists + gun haters) and (polluters + gun lovers) have become two relatively consistent groups. Then if one guy from the (environmentalist + gun hater) group happens to insult a Catholic, the same process starts again until it's (environmentalists + gun haters + atheists) and (polluters + gun lovers + Catholics), and so on until there are just two big groups.
— Scott Alexander of AstralCodexTen, in his article Why I Am Not A Conflict Theorist
This seems like a super interesting problem that one could model with relatively low effort - a day of focused work at most. That is, if one had the mathematical and computational acumen required to build and run a simulation like this.
Alternatively, you could prompt an LLM to generate the code for you (which I did try), but that didn't really work out all that well. The first draft of the code was decent, but it only produced a static final histogram: opinions among the agents basically went from a random distribution to a supremely polarized one, with all the intermediate computing happening in the backend. Which is like, cool, but does anyone really care? I wanted to see the step-wise evolution, so someone could look at it and go "wow! mimetic desire exists!" (I don't know if I'm using that in the correct context, but it sounds cool).
I also think that the basic rules of the LLM-generated model were flawed. Copy-pasted from my conversation with Claude:
This looks like you're basically forcing these agents to be polarized, which is not the goal of the simulation. The goal of the simulation is to only start with a fundamental ruleset of "small fluctuations plus reactions" and then see if this results in group polarization towards the end.
This is my guess of what an extremely basic algorithm for this would look like:
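(The actual step-by-step version is in this pdf, and the updated one is in the BiasNET repo readme; the snippet below is just an illustrative Python sketch of the general shape I had in mind - the [-1, 1] belief scale, the function structure, and the exact update formulas are stand-ins, not the real BiasNET code. It also deliberately keeps the "randomize beliefs every step" behaviour, which is the flaw discussed in the update further down.)

```python
import numpy as np

def simulate(num_agents=40, num_issues=5, steps=2000,
             affinity_change_rate=0.05,
             positive_influence_rate=0.03,
             negative_influence_rate=-0.03,
             noise=0.01):
    # Beliefs live in [-1, 1] on each issue; affinities start out neutral.
    beliefs = np.random.uniform(-1, 1, size=(num_agents, num_issues))
    affinity = np.zeros((num_agents, num_agents))

    for _ in range(steps):
        # The step I later realized was a mistake: re-randomizing (jiggling)
        # everyone's beliefs a little on every iteration.
        beliefs = np.clip(beliefs + np.random.uniform(-noise, noise, beliefs.shape), -1, 1)

        # A random pair of agents interacts.
        i, j = np.random.choice(num_agents, size=2, replace=False)

        # Agreement: 1 means identical belief vectors, -1 means maximally opposed.
        agreement = 1 - np.mean(np.abs(beliefs[i] - beliefs[j]))

        # Affinity update: agreeing makes the pair like each other more, clashing less.
        affinity[i, j] += affinity_change_rate * agreement
        affinity[j, i] += affinity_change_rate * agreement

        # Belief update: drift toward agents you like, away from agents you dislike
        # (negative_influence_rate is negative, so it pushes beliefs apart).
        rate = positive_influence_rate if affinity[i, j] >= 0 else negative_influence_rate
        beliefs[i] = np.clip(beliefs[i] + rate * (beliefs[j] - beliefs[i]), -1, 1)
        beliefs[j] = np.clip(beliefs[j] + rate * (beliefs[i] - beliefs[j]), -1, 1)

    return beliefs, affinity
```

(A run would then plot some polarization measure - e.g. the spread of the belief vectors - over the steps, which is the step-wise evolution I was after.)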
Super interesting.
Update: I managed to make a working simulation of this! I prompted Claude with the updated algorithm, and it did a surprisingly good job of coding the simulation in one go. I had tried ChatGPT and DeepSeek before that, but despite underperforming on certain benchmarks and tests, Claude remains the best at coding, at least in my experience.
The initial simulation didn't really match my expectations - polarization trended upward at first, but then dipped back down below its starting level, which was super confusing to me. When I asked Claude why this could be happening, one of its bullet points made me realize that I was updating the beliefs randomly at every step, which is stupid because beliefs have persistence. (Note: I haven't modified this in the original algorithm in this pdf because I want the flaws in my logic to be documented. The updated step-by-step plaintext algorithm can be found in the readme of BiasNET's git repo.)
This is the word-for-word modification prompt that I used: "You're not supposed to repeatedly change random beliefs - beliefs should have some persistence. I think something that would work better is an initial randomisation of beliefs and then we let the model run its course and see if that ends up in belief clusters, rather than continuously randomising beliefs because that is just not how it works in the real world. Belief systems have some _persistence_, for a lack of a better word."
And Claude made some changes, the gist of which can be summed up with "In the new version, beliefs are initialized randomly at the start and then only change through social influence." You can see this conversation with Claude here.
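In terms of the sketch above, the change amounts to roughly the following (an illustrative outline, not Claude's actual code):

```python
import numpy as np

num_agents, num_issues, steps = 40, 5, 2000

# Beliefs are drawn once, up front...
beliefs = np.random.uniform(-1, 1, size=(num_agents, num_issues))

for _ in range(steps):
    # ...and the per-step random jiggle is gone: from here on, beliefs only
    # change through the affinity-weighted social-influence updates.
    pass  # (interaction, affinity update, belief update go here)
```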
I've also deployed this online here.
Once you've played around with that a bit, if you want some help interpreting the output (apart from the line graph - that's pretty self-explanatory), here's a technical overview generated by Claude, with a few modifications by me (primarily added context, better formatting, and outgoing explanatory links).
The simulation models interactions between agents who hold beliefs on multiple issues and have evolving affinities for each other, inspired by the Scott Alexander quote (from Why I Am Not A Conflict Theorist) at the top of this post.
a) Affinity Updates:
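(The exact formula isn't reproduced in this overview; the fragment below is an illustrative sketch of its general shape, reusing the stand-in arrays and parameter names from the Python sketch earlier in the post.)

```python
# Illustrative shape of the affinity update for an interacting pair (i, j):
# agreement between their belief vectors nudges mutual affinity up,
# disagreement nudges it down, scaled by affinity_change_rate.
agreement = 1 - np.mean(np.abs(beliefs[i] - beliefs[j]))   # roughly in [-1, 1]
affinity[i, j] += affinity_change_rate * agreement
affinity[j, i] += affinity_change_rate * agreement
```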
b) Belief Updates:
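(Again an illustrative sketch rather than the exact implementation, using the same stand-in arrays and the influence-rate parameters listed below.)

```python
# Illustrative shape of the belief update: agent i drifts toward the beliefs
# of agents it likes and away from the beliefs of agents it dislikes
# (negative_influence_rate defaults to -0.03, so that branch pushes beliefs apart).
if affinity[i, j] >= 0:
    beliefs[i] += positive_influence_rate * (beliefs[j] - beliefs[i])
else:
    beliefs[i] += negative_influence_rate * (beliefs[j] - beliefs[i])
beliefs[i] = np.clip(beliefs[i], -1, 1)   # keep beliefs on the [-1, 1] scale
```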
c) Parameters:
- num_agents: Number of agents (default: 40)
- num_issues: Number of belief dimensions (default: 5)
- affinity_change_rate: Scaling factor for affinity updates (default: 0.05)
- positive_influence_rate: Strength of positive influence (default: 0.03)
- negative_influence_rate: Strength of negative influence (default: -0.03)

Basically, get yourself out of echo chambers, don't hate things just because people you hate love those things (or vice versa), and be self-aware and cognizant of the way your beliefs and perceptions are shaped.
much love,
hari