Exorcizing the ghost in the machine

In this latest podcast in our ‘Beyond Data’ series, Tessa Jones (Calligo’s Chief Data Scientist) and Peter Matson (Data Science Practice Lead) talk with Oxford University’s Professor Philip Howard about the threats posed to democracy by technology, specifically in the shape of Lie Machines.

Fact or fiction? Microtargeting with lie machines

In this age of social media, chatbots and AI, it’s never been easier for individuals to share their opinions. Instant communication to, and engagement with, a global audience is now commonplace, and it seems there’s no need to let facts get in the way of a good angle. As Mark Twain, or maybe Winston Churchill, or most probably Jonathan Swift famously said, “a lie can travel halfway around the world whilst the truth is still putting on its shoes.” A great example in itself of the ease with which misunderstandings and misappropriations can become canon.

In this vein, Professor Howard has spent years studying the mechanisms by which opinion, behavior and values can be manipulated and misdirected by lie machines:

“Lie machines are large, complex mechanisms made up of people, organizations, and social media algorithms that generate theories to fit a few facts, while leaving you with a crazy conclusion easily undermined by accurate information. By manipulating data and algorithms in the service of a political agenda, the best lie machines generate false explanations that seem to fit the facts.”

Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives

We find lie machines in all types of countries and governing structures. They share common elements – political actors produce the lies, social media firms distribute them, and paid consultants market them. High-profile examples of the lie machine’s effectiveness include the UK’s Brexit campaign and Trump’s electioneering – in both cases patently untrue ‘facts’ and arguments were targeted at key voters by disinformation networks, troll farms and lie machines. Algorithms then direct individuals towards ever more insular sources and extreme content:

“A healthy, public-facing algorithm might occasionally introduce another credible source… we know the platforms play around with this stuff, especially during elections in the US”

Controlled by bad actors and forming a global ecosystem of lie development and propagation, these lie machines spread their tendrils across every social media platform, moving out from Facebook as new outlets develop.

Computational propaganda

Lie machines have evolved and become more sophisticated as technology has advanced. Instead of stealing the photos, social media handles and biographies of real people, AI now generates new pictures and personas, and thus evades technology platforms’ troll-spotting software.

Spreading propaganda far and wide, with a convincing voice, the lie machine:

  • Has a profound effect on society, with a scale that is difficult to quantify
  • Is perfectly engineered to target human vulnerabilities, reducing critical thinking
  • Deliberately misrepresents and appeals to emotions and prejudices, using our cognitive biases to bypass rational thought and create echo chambers
  • Is vague and unknowable – what training data was used for large language models? (Professor Howard postulates that every Gmail message sent over the last 25 years may have been scraped, along with content from junk news sites)

Doing better – where does the onus sit? User or developer?

When it comes to developing processes to combat the lie machine, no single piece of legislation or guiding principle works on its own. We must always consider the regional and cultural context of both data and users. Research can’t necessarily be amalgamated or directly compared across regions and countries – for example, we know that the placebo effect is consistently greater in US medical studies. To date, technology has not always accounted for cultural nuances in how people use words, with intent and meaning lost in translation – the majority of network takedown orders are for sites that are not in English.

Wherever there is human input, there are behavioral differences that make it much more difficult to apply common rules:

“People who manage cookies are above average in terms of their knowledge of technology, so these people are generally more purposeful in terms of how they set up their news feeds and where they go for information”

The huge amount of disinformation spread around Covid and the resulting vaccination campaign demonstrates how potent the lie machine is. It doesn’t need to convince people that its argument is right; all that is required is to introduce enough doubt, to highlight that there is a chance of harm. After all:

“If everybody really understood probability, nobody would ever buy a lottery ticket”
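To make the point about probability concrete, here is a minimal sketch in Python of the expected value of a lottery ticket. The odds, ticket price and jackpot below are hypothetical and purely illustrative – they are not figures from the podcast:

  from math import comb

  # Hypothetical 6-from-49 lottery – all figures are illustrative only.
  ticket_price = 2.00
  jackpot = 10_000_000.00

  # Probability of matching all six numbers drawn from 49.
  p_jackpot = 1 / comb(49, 6)           # roughly 1 in 13,983,816

  expected_return = p_jackpot * jackpot
  expected_loss = ticket_price - expected_return

  print(f"Chance of the jackpot: 1 in {comb(49, 6):,}")
  print(f"Expected return per ticket: ${expected_return:.2f}")
  print(f"Expected loss per ticket:   ${expected_loss:.2f}")

Under these made-up numbers the expected return is about $0.72 on a $2.00 ticket – an expected loss of roughly $1.28 every time you play, which is exactly the intuition the quote is gesturing at.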

Balance the field – breaking the lie machines

Professor Howard believes that whilst we are justified in our concern about the threats to democracy, the principles behind the lie machine can be harnessed for good – promoting topics that are in the public interest and generating democratic discourse:

“I am cynical, but not fatalistic”

He describes the steps we can take to break the lie machines:

  • Public policy oversight, founded in ongoing public data capture and analysis
  • Designing social media to highlight emerging consensus, rather than heated conflict – machine learning can amplify common ground (see the sketch after this list)
  • Setting election guidelines to create more opportunities for civic expression
  • Giving journalists, civic groups and researchers access to all the public opinion data that is currently in the hands of the technology firms
  • Ensuring that the big data collected by technology platforms is added to public archives
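On the second point in the list above – designing feeds to surface emerging consensus rather than conflict – the sketch below is purely illustrative and is not a method described in the podcast. It assumes a hypothetical per-group approval score for each post and ranks posts by agreement across groups instead of raw engagement:

  from dataclasses import dataclass, field
  from statistics import mean, pstdev

  @dataclass
  class Post:
      text: str
      engagement: float                 # e.g. normalised clicks/shares, 0..1
      approval_by_group: dict = field(default_factory=dict)  # hypothetical approval per audience group, 0..1

  def consensus_score(post: Post) -> float:
      """Score posts higher when approval is both high and even across groups."""
      scores = list(post.approval_by_group.values())
      agreement = mean(scores)
      divisiveness = pstdev(scores)     # spread between groups
      return agreement - divisiveness   # common ground beats heated conflict

  def rank_feed(posts):
      # Sort by consensus rather than by raw engagement.
      return sorted(posts, key=consensus_score, reverse=True)

  feed = [
      Post("local flood-defence plan", engagement=0.3, approval_by_group={"group_a": 0.8, "group_b": 0.7}),
      Post("outrage bait", engagement=0.9, approval_by_group={"group_a": 0.9, "group_b": 0.1}),
  ]
  print([post.text for post in rank_feed(feed)])  # the consensual post ranks first

The weighting here (agreement minus divisiveness) is just one possible choice; the point is only that a ranking objective can be written down to reward common ground rather than conflict.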

The answer is more social media, not less. But it needs to serve society much better.

IPIE – bringing down the lie machine

Professor Howard has recently launched a new program, creating an independent scientific body to foster global cooperation in safeguarding the online information environment. The International Panel for the Information Environment (IPIE) will assess the scope of the misinformation crisis, analyze its effects on our societies and the planet itself, and propose solutions. Featuring data scientists and engineers alongside neuroscientists and sociologists, IPIE hopes to be the beginning of a global effort to save our common information environment.

Watch the podcast for yourself below to hear more from Professor Philip Howard about the power of the lie machine, and crucially, to learn how we can use it for the collective good.

Professor Philip Howard is a social scientist with expertise in technology, public policy and international affairs. He is Director of Oxford University’s Programme on Democracy and Technology, a Statutory Professor at Balliol College, and he is affiliated with the Departments of Politics and Sociology. Currently, he is also a Visiting Fellow at the Carr Center for Human Rights at Harvard University’s Kennedy School.