Running the Global Assembly was incredible. Sixty-eight hours spent with all kinds of people from across the globe, sharing, learning, and co-creating in 42 languages. Seeing the transformative effect it had on the assembly members and all my colleagues was an astounding experience. Never before had a ‘snapshot’ of the human population been brought into one space like that. However, when I would tell people what I was working on, amazement at the prospect of the project would be quickly followed by the same question time and time again - “so how many people did you have taking part?”.
Citizens’ assemblies (sometimes called “citizens’ juries” or “mini-publics”), the jury-like decision-making structure where randomly selected individuals co-create policy solutions, are becoming ever more popular, and for good reason. They have proven to be an effective tool for engaging people more actively in policymaking, facilitating wider involvement of classically under-represented voices in politics, and, most importantly, making strides on sticky issues where politicians have struggled to act alone - just look at abortion and same-sex marriage in the Republic of Ireland.
As the ‘deliberative wave’ maintains its surge across the Western hemisphere and beyond, many people are looking to these lot-selected processes to help us overcome the ‘wicked’ issues we face today, be it climate change, questions around democratic expression, or the future of virtual worlds.
However, this methodology still faces a major criticism. Whilst most assemblies achieve demographic representation by utilising a selection system called ‘sortition’, they are far from statistically representative bodies once you move to the national scale and beyond. This leaves some doubting that these chambers could ever have real political legitimacy; after all, how could 150 unelected people ever represent the diversity of views held by millions, if not billions? Whilst there are arguments that “random sampling offers a means of representing the diversity of view-points in the population at large” (p. 32), I believe the small number of participants remains a barrier to mass acceptance. This is especially true if we wish to continue expanding to larger geographic scales, such as regional or global citizens’ assemblies.
This conundrum facing assemblies isn’t new; in fact, it’s a classic democratic trade-off: the number of people participating versus the depth of their participation. It seems you can have many people vote yes/no in a referendum (or decide who they defer their decision-making to in parliament), or you can have a few people actively co-design policies, but not both.
Sadly, most mass participation processes simply can’t provide the kind of demographic representation, structured learning, cross-pollination of ideas, and well-designed facilitation needed to support the high-quality deliberation an assembly can offer. At a time of widespread democratic backsliding and political dissatisfaction, assemblies are widely seen as the best example of a deliberative democratic process and, I think, one important tool we could use to overcome the issues in our existing political systems.
A new model
In a paper my collaborators and I are currently working on, we’ll outline in depth an exploratory methodology in which Large Language Models (LLMs), the type of artificial intelligence (AI) made famous by OpenAI’s ChatGPT, might help us overcome this barrier to citizens’ assembly development and enable participation at never-before-seen scales. Importantly, I’d like to affirm that this isn’t a fix-all solution; there are factors that must be considered and there will, of course, be barriers. However, it’s one avenue we believe is worth exploring if we hope to engage more people in these processes.
The basis of this method revolves around utilising LLMs to play the important role of aggregating and analysing deliberation outputs. This role is typically held by a team who might be involved directly in the process design or stand slightly apart from it. Now, tools exist (like one I’ve been helping develop) that can analyse swathes of qualitative data and produce accessible, interactive reports that highlight the primary narratives, agreements, disagreements, and cruxes in a mass of viewpoints. This means nuanced, long-form outcomes from many deliberations can be fed into the system, and one clear picture will come out the other side. This could be used to generate proposals that may best reflect the variety of views, preferences, and trade-offs outlined by members.
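To make the aggregation step concrete, here is a minimal sketch of the data flow: statements from several assemblies come in, similar statements are grouped into shared themes, and each theme records which assemblies supported it. Everything here is illustrative - the function names are hypothetical, and the crude token-overlap similarity is a stand-in for the semantic analysis an LLM-based tool would actually perform.

```python
def normalise(statement):
    """Lower-case and tokenise a statement for crude similarity matching."""
    return frozenset(statement.lower().split())

def similarity(a, b):
    """Jaccard overlap of two token sets (a stand-in for LLM semantic matching)."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def aggregate(assembly_outputs, threshold=0.5):
    """Group similar statements from many assemblies into shared themes.

    assembly_outputs: {assembly_id: [statement, ...]}
    Returns a list of (representative_statement, supporting_assembly_ids).
    """
    themes = []  # each theme: [token_set, representative, set_of_assembly_ids]
    for assembly_id, statements in assembly_outputs.items():
        for statement in statements:
            tokens = normalise(statement)
            for theme in themes:
                if similarity(tokens, theme[0]) >= threshold:
                    theme[2].add(assembly_id)
                    break
            else:
                themes.append([tokens, statement, {assembly_id}])
    return [(rep, sorted(ids)) for _, rep, ids in themes]
```

In a real deployment the grouping would be done by the LLM pipeline, and each theme would carry the accompanying justification discussed below; the point of the sketch is only the shape of the input and output.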
There are a few forms this deliberation could take, but I envisage one of two being most likely. Firstly, you could run multiple citizens’ assemblies deliberating on the same topic, utilising the same learning journey and working off the same framing question. Alternatively, it might be better to have semi-randomised breakout groups that could enable more diverse deliberation cohorts, working in a similar vein to an option Professor Hélène Landemore explores in a recent paper. Each has its own benefits, drawbacks, and considerations, but either process could feed its outputs into this system and have aggregated proposals produced.
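The semi-randomised breakout option can be sketched as a simple stratified draw: members are bucketed by a demographic attribute, shuffled within buckets, then dealt round-robin so every group draws from every bucket. This is a toy illustration under my own assumptions, not the design Landemore proposes; real sortition stratifies across many attributes at once.

```python
import random
from itertools import cycle

def diverse_breakouts(members, n_groups, key, seed=None):
    """Assign members to breakout groups, spreading one demographic
    attribute evenly across groups (a crude stand-in for full sortition).

    members: list of dicts, each containing the demographic field `key`.
    """
    rng = random.Random(seed)
    # Bucket members by the stratifying attribute, shuffle within buckets.
    buckets = {}
    for member in members:
        buckets.setdefault(member[key], []).append(member)
    for bucket in buckets.values():
        rng.shuffle(bucket)
    # Deal round-robin so each group draws from every bucket in turn.
    groups = [[] for _ in range(n_groups)]
    target = cycle(range(n_groups))
    for bucket in buckets.values():
        for member in bucket:
            groups[next(target)].append(member)
    return groups
```

Seeding the generator keeps the draw auditable, which matters if participants are to trust that the “semi-random” assignment wasn’t engineered.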
The interpretation of the report and subsequent proposal generation is incredibly important, and all results would require an accompanying justification and reasoning that the assembly members could interrogate.
In order for these outcomes to carry any legitimacy, they would need to be vetted and passed by the assembly members, just like we do today. This could also take place in multiple ways, but, like many citizens’ assemblies currently operate, the most straightforward option might be to hold a vote. This would assess the extent to which the proposed results, or a series of proposals, generally match the views of the members and/or their assembly/deliberation group. Most importantly, this vote would assess whether members are happy to validate these outcomes. A ‘yes’ vote meeting a majority, or other selected threshold, would mean the results carry, whilst a ‘no’ vote could trigger one of several follow-ups, including a second round of deliberation or a regeneration of the proposals.
Of course, there are many things that need exploring, and we hope to develop these in the coming paper. This technology is new, and without a true understanding of its accuracy and validity this process would be useless at best and dangerous at worst. Importantly, when considering introducing any new technology into our political systems or the ways we collectively organise (especially technology with the society-bending potential of AI), there are serious moral and philosophical questions that we must ask.
We are in no way endorsing the instant incorporation of untested technologies into these processes; we are simply exploring how emerging technology might help bring this form of deliberation to a wider audience, much like video-conferencing software did during the pandemic.
We understand this blog doesn’t answer the many questions this idea might raise and does not start to grapple with the many issues and contingencies it will generate. We hope to answer all of these in our longer paper.
Interested in discussing or collaborating? Get in touch.
Like this post? Maybe think about subscribing to stay up to date with my writing.