Whose AI will cast your vote?
As election fever rises in the United States, there’s a growing realisation that the 2024 election is unlike any before. It will take place against a backdrop of the increasing influence of social media and growing political polarisation. This polarisation has manifested in the US as an intemperate dialogue between Republicans and Democrats, where divides are attributed not to differences of opinion, but to the opponent’s lack of intellectual or moral fibre.
According to a YouGov survey, more than half of all American adults get their news from social media. Yet, as widely reported, the AI that decides what content to send to whom is not designed to deliver unbiased information but rather to engage users, keep them on the platform, and drive the social media companies' advertising revenue. To make matters worse, human nature responds more strongly to negative information than to positive. As Steve Rathje of the University of Cambridge, who has studied the phenomenon, puts it: “This business model has ended up rewarding politicians and media for producing divisive content in which they dunk on perceived enemies.” This creates a mismatch of incentives between society, which needs reliable sources of news, and social media companies, whose directors are under a fiduciary obligation to deliver shareholder returns.
This is the tip of an iceberg below which malevolent actors lurk. In 2018, Cambridge Analytica hit the headlines after it was revealed that it had misused data relating to 87 million people acquired from Facebook. Cambridge Analytica’s business model centred on influencing elections by using people’s digital footprints to construct their psychological profiles. These profiles were then used to target voters with political adverts that played to their psychological motivations to sway their vote. Over time, Cambridge Analytica built a database of which its CEO Alexander Nix claimed: “… in the United States we have somewhere close to four or five thousand data points on every individual ... So we model the personality of every adult across the United States, some 230 million people.” The company played a role in Ted Cruz’s campaign in 2015, Donald Trump’s campaign in 2016 and the Brexit referendum. Following the scandal, Cambridge Analytica went into administration, although some of its staff went on to found similar businesses.

Cambridge Analytica wasn’t an isolated rogue business but just one competitor in a vibrant market. In 2023, an international team of investigative journalists broke the story of “Team Jorge”, an Israeli outfit specialising in manipulating elections with an army of 30,000 fake digital identities used to post disinformation across social media. This disinformation would then get picked up by the mainstream media and amplified. Team Jorge was led by the Israeli ex-special-forces operative Tal Hanan, who is alleged to have boasted to undercover journalists that he’d been involved in 33 presidential campaigns globally, in 27 of which he’d secured ‘successful’ outcomes. According to the journalists, Hanan quoted Team Jorge’s fee of ‘between €6m and €15m for interference in elections’.
The dividing line between such private companies and state actors is blurred. Russian interference in Western elections hit the headlines during the 2016 US presidential election and has been detailed in the FBI’s ensuing “Crossfire Hurricane” investigation. More recently, in June 2024, as the European Union went to the polls, Peter Stano, the European Commission’s lead spokesperson for foreign affairs and security policy, accused Russia of using social media to spread disinformation during the EU election. The Commission estimated that Russia had spent about $1 billion on the campaign, which primarily targeted Germany and sought to boost the popularity of far-right populist parties. With the war in Ukraine raging and the risk of conflict between Russia and the West growing, Russia may view incapacitating the West by sowing division as a way of weakening its adversary, in case the new cold war turns hot.
All this raises the question of how these trends will develop and what the future has in store for open, liberal democracies. To understand this, we need to look at how the technology is developing and consider what new threats and countermeasures will emerge. Today, our digital landscape is being redefined by Artificial Intelligence. With rival models from OpenAI, Google and Meta being released on an almost monthly basis and Apple readying its debut, the pace shows no sign of slackening. In April 2024, Apple published a research paper describing an approach to AI called ReALM, which demonstrated the power of feeding AI models details of the user’s broader context. This helps the model better understand which answers are relevant to the user, significantly boosting its performance. ReALM drew on data about what was on the user’s phone screen and what was detectable from its mic and camera. However, more data will offer more context and further boost the model’s power. In this endeavour, Apple is well placed: with the user’s permission, it can access other data stored on the phone, in their iCloud account, and data held by other organisations through initiatives like ‘Open Banking’. The result will be a powerful new form of personal AI, not in the cloud but embedded in each person’s phone. Indeed, the likes of Apple, Google and Samsung have been preparing this ground for years, introducing ever more powerful neural engines into their phones’ chips to power AI.
Initially, the impact of personal AI will be modest: a smarter Siri and better suggestions. In time, new kinds of apps will emerge that incorporate ‘Robotic Process Automation’ and won’t just suggest options but also carry out their user’s wishes. For example, they won’t merely suggest better utility tariffs; they will be able to switch providers, and do so weekly if the user wishes. These new ‘personal AIs’ will act as decision-support tools, helping their users become happier, healthier and wealthier. There will be no element of your life in which they can’t make insightful recommendations: “Are you sure you should be dating that person?” The problem is not that these recommendations are intrusive, although they may be; the problem is that they will be right. No one will force you to follow these suggestions, of course, but what choice will you have when you see that those who don’t are poorer, less healthy and less happy? In truth, you’ve no choice at all. Yet how do you know that your personal AI’s recommendations are entirely in your best interests and not skewed by a few percent to benefit the commercial or political interests of big tech? Who guards the guards?
Personal AIs will become the lenses through which we see the world. They will help us find information that is relevant and engaging; in effect, they will become personal filters shaping what we see from every information channel, including social media and mainstream media. Today, we spend 40% of our waking hours online, and in the future much of this time will be spent talking and interacting with our personal AIs. They will develop a unique understanding of our interests, personalities, hopes and fears. This will give them an unparalleled ability to influence us, so it’s vital that they further their users' interests, not those of their ‘big tech’ creators, political parties, or hostile states. Unless ways are found to ensure that this is the case, our democratic processes will be hollowed out, and all that will remain is the ritual and theatre of casting our vote.
Yet we must be careful not to mistake the medicine for the disease. In the future, those who control the data and the AI will wield ever more power: they will control society's nervous system. If we centralise that data and AI in today’s internet platforms, the likes of Facebook, Google, Apple and Amazon, their unaccountable power will lead to growing tension with the West’s open, liberal, democratic tradition. Decentralising the data and AI is a vital step towards preserving the agency of individuals. The real challenge we face is how to ensure that personal AI is aligned with its user’s best interests, not those of Elon Musk, Team Jorge or Russia.
Today’s revolution in data and AI could herald a new enlightenment, but dystopian alternatives are also at the door. At this juncture, it’s vital that we grasp the relationship between the architecture of our technology and the character of the societies in which we want to live. As the American academic John Culkin said, “We shape our tools, and then our tools shape us.” Only then can we hope that, come the 2028 ballots, it will be our own algorithms that cast our votes.