Exploring Opportunities for LLMs in Public Discourse
Reflections from a conference and workshop hosted by Plurality Institute and the Council on Technology & Social Cohesion.

Setting the Stage: LLMs and Public Squares
Vibrant and inclusive public discourse is essential for a functioning democracy. Open deliberation presents an opportunity for citizens to share and understand different perspectives, mediate differences, and collectively chart a shared path forward. Over the past several decades, digital technologies and social media have collided forcefully with our public squares, reshaping them for better and worse.
More recently, the rise of large language models (LLMs) presents new questions, risks, challenges, and opportunities for our increasingly digital public spaces. How might LLMs shape the future of public debate and understanding? Will they further amplify misinformation and division, or could they present opportunities for new forms of “augmented” deliberation, inclusive of a broader range of voices and scales of conversation? Could LLMs help us better translate across perspectives, understand public attitudes, and identify new areas of agreement?
To explore these questions, Plurality Institute – in partnership with The Council on Technology & Social Cohesion, and with generous support from Google.org – hosted a conference and workshop on February 28th and 29th to review and promote new research and development related to “LLMs in Public Discourse”. The event convened a mix of academic researchers, industry experts, civil society organizers, and developers to discuss existing work and identify opportunities for new LLM-driven projects that could promote constructive conversation and pluralism online.
Hosted at the Hillside School in Berkeley, CA, with support from Society Library, the event kicked off on Thursday evening with a “showcase” of existing research. The showcase opened with a presentation from Professor Dave Patterson (UC Berkeley), who presented a vision for shaping AI development, co-authored with a team of computer scientists. Patterson emphasized that we are still in the early days of practical AI, meaning that big opportunities remain to steer its development in the public interest. To this end, Patterson and his coauthors outlined a series of milestone goals for prosocial AI development – including one related to public discourse – and suggested a key role for inducement prizes and interdisciplinary research centers in achieving them.

Google Jigsaw’s Beth Goldberg took up related themes, highlighting that the polarization and dysfunction of our current digital public spaces is a design choice rather than an inevitability. She emphasized the potential of LLMs and thoughtful technological design to help us do better, both by enabling broader forms of participation and inclusion – as in vTaiwan’s use of Pol.is – and by helping us identify new areas of agreement in public conversations. Alex Loginov from Jigsaw further detailed the tools his team is developing for making sense of large-scale conversations.

Subsequent speakers zoomed in further on existing, real-world uses of LLMs in public discourse and deliberation from around the world. For example, Ken Suzuki (SmartNews) and Shutaro Aoyama (Columbia University) presented the Japan Digital Democracy 2030 Initiative, an effort to support open source digital democracy projects in Japan that was born out of Takahiro Anno’s 2024 Tokyo gubernatorial campaign.
Anno – an engineer, science fiction author, and entrepreneur – took inspiration from the recent Plurality book (by E. Glen Weyl, Audrey Tang, and a community of coauthors), using technological tools to promote new modes of interaction between political candidates and the public. Digital Democracy 2030 will further develop these efforts, with a focus on supporting tooling for “broad listening” (in contrast with broadcasting), large-scale deliberation, and political funding transparency. Suzuki and Aoyama explained that the initiative has already gained traction within the broader Japanese political landscape across a variety of parties and levels of government.
Other speakers discussed new general-purpose LLM tools and research. For example, Glenn Ellingson and Kristin Hansen of Civic Health Project presented Social Media Detoxifier (now known as “Normsy”), an LLM-powered tool designed to help identify and address toxic online content through LLM-supported counterspeech. Normsy is a human-in-the-loop solution that helps users identify toxic content on X that attacks groups, institutions, and norms; the tool then suggests potential responses that the user might post to address the toxic content and reinforce democratic norms. Hansen highlighted Normsy’s potential to detoxify online discourse at scale while respecting users’ freedom of speech & expression.

Jeff Fossett of Plurality Institute presented a related project called Bridging Bot – an LLM-powered tool for bridging and de-escalation in online conversations. Bridging Bot is a tool for Reddit moderators that can identify and intervene in escalated disagreements online, helping to translate across viewpoints and identify potential areas of agreement. Fossett emphasized the importance of building and testing LLM tools in collaboration with focal communities, both for ethical reasons and to give the best chance for contextually-appropriate interventions.
Lisa Schirch (Notre Dame & Council on Technology & Social Cohesion) and Kristina Radivojevic (Notre Dame) presented three projects related to AI and public discourse. First, Schirch presented work on an AI assistant designed to help facilitators navigate the space of deliberative platforms and design deliberative processes. Second, she described ongoing research that aims to understand and taxonomize the strategies that existing counterspeakers and conflict resolution experts use to address toxic speech; Schirch’s team aims to better understand which strategies are most effective in which contexts, and to train LLM tools that can help users enact particular strategies.
Finally, Schirch and Radivojevic described a role for LLMs in the research process itself, as a tool for generating synthetic data and for creating “digital twins” that could be used for testing new forms of intervention. To this end, Radivojevic presented a prototype of a public discourse “sandbox” for researchers interested in testing chat-based interventions, whether with agents, with human participants, or both.
Mapping Research & Developing New Ideas
The event reconvened the following morning for a day-long workshop led by Plurality Institute Executive Director Rose Bloomin, which focused on identifying and incubating new research and development opportunities.

To set the stage for this exploration, Julia Kamin of the Prosocial Design Network hosted a collective “mapping” session, which invited participants to organize existing LLM public discourse tools and research across different “dimensions”, with the goal of identifying gaps and new opportunities in the space. Participants reflected on the different “times” and “spaces” where LLM tools might intervene productively in online conversations, and considered the feasibility and potential impact of different types of interventions. The session highlighted both the wide range of existing projects and the many opportunities for additional work. Findings from the mapping exercise will be shared in a forthcoming whitepaper authored by Kamin and conference participants.
The remainder of the workshop focused on developing new ideas & collaborations among conference participants. To this end, participants first convened in small groups to discuss different areas of research and generate new project ideas, then broke into project-focused groups to further incubate the most promising of them.
The workshop concluded with project groups having the chance to “pitch” their new project concepts to the broader group. Conference attendees then voted to allocate $50,000 in seed grant funding to support the development of these projects beyond the conference. Seed funding was made available through generous donations from The John Templeton Foundation.
Watch the talks from the event on our YouTube channel.