Possible Projects

A National Deliberative Process

Compelling National Need:

Political discussions at all levels are often characterized by a lack of broad-based participation and engagement. Many citizens believe that their input is unimportant, or unwelcome, to perceived elites who are not interested in truly democratic governance. This perceived futility of engagement threatens the vitality of our electoral democracy while robbing policy-makers of the perspectives of citizens who may be able to make valuable contributions to policy decisions. Tools that allow ordinary citizens to engage in meaningful policy discussions with neighbors, peers, and policy-makers may generate both increased confidence in decision-making processes and innovative solutions to difficult problems.

The change.gov website administered by then President-elect Obama's transition team during the last months of 2008 was the most visible example to date of an emerging form of participatory democracy. Building on established web antecedents such as digg.com, change.gov allowed users to submit ideas that others could then vote on. Popular ideas would presumably rise to the top, for consideration by the incoming administration.

Although change.gov effectively closed with President Obama's inauguration, tools operated both by the Obama administration and by others outside of government continue to explore related ideas. The National Academy of Public Administration recently hosted an "Open Government Dialogue", collecting 900 submissions and 33,000 votes on ideas for making government processes more open (National Academy of Public Administration, 2009). The privately-run site whitehouse2.org combines user-submitted proposals that can be endorsed and tagged, tag clouds generated from those tags to illustrate the prominence of issues, personalized talking points on priorities (based on user submissions), discussions, and tools for managing and exploring user influence on discussions.

These systems demonstrate the opportunities and challenges facing broad-based discussion and deliberation efforts. Tools that support the ability to suggest, discuss, and endorse specific policy proposals provide a basic framework for broad-based discussion and debate of important issues. This debate might be informed by talking points, and discussions can be used to build collections of evidence in support of positions. Votes endorsing or opposing positions can be used to gauge the viewpoints of the community as a whole. Reputation systems that rate participants, talking points, and comments provide guidance to discussants struggling to decide who should be believed.

Informal reviews of proposals and discussions on whitehouse2.org also plainly illustrate the challenges presented by the ambitious goal of moving these models from "toy" discussions to meaningful frameworks for substantive deliberation (although change.gov is no longer functioning, many of the same issues appeared to arise there as well).

Scientific Foundations:

The foundations for an online national deliberative process would be based on experiments with progressively larger groups, ranging from local groups of a few dozen to nationwide deliberations involving thousands.

In the first years of the project, several small communities would use prototype tools to conduct ongoing deliberation of one or more topics that are relevant and meaningful to diverse stakeholders. For these early efforts, relatively homogeneous groups in a single geographic location would be selected based on their commitment to trying the experiment and the presence of a concrete need for deliberation. Participating groups should be chosen to represent a diverse range of participants and concerns from community and/or school groups, neighborhood associations, student groups, and other similar associations. Quantitative and qualitative analysis of interactions, log data, and outcomes would be used to identify difficulties, evaluate interfaces, and guide revisions to the design.

Subsequent efforts would expand in scope to include problems covering broader ranges of geographical coverage, types of participants, and scales of problems. Traditionally oppositional groups such as college students and residents of surrounding neighborhoods might be asked to work to resolve differences. These efforts would inform further redesign, with an eye towards scaling up to national problems.

Research efforts would seek to understand the interplay between the dynamics of the discussions and the design of the tools. Interviews and analysis of participation trends might help identify motivations for engaging in deliberation and reasons for withdrawing. Models of the processes involved in discussing potentially controversial topics and their resolutions might inform the design of tools that help participants work towards success. Communication breakdowns and failures might provide insight as to how future deliberations might avoid self-destruction. Interviews or surveys of stakeholders who decline to participate might provide additional insight.

Research Challenges:

1) Improving the quality of discussion: How can deliberative systems encourage meaningful and substantive debate? Many of the proposals and discussions on whitehouse2.org are consistent with a political culture that has been described as completely polarized. As participants use "party-line" arguments to talk past each other without documenting claims or directly refuting counter-arguments, seemingly "robust" discussion of controversial topics may in fact be mere recitation of well-known, pre-established positions.

Finer granularity of stated positions on controversial issues might be helpful. The initial choice between endorsing, opposing, or (implicitly) being neutral with respect to a given proposal is inherently polarizing. A more fine-grained set of options, particularly with respect to specific aspects of a discussion, could let people see the relative strength of conviction. Histories of viewpoints - both individual and aggregate - would let others see how views change. Annotations on revised opinions could help participants understand how their peers had been convinced to revise their views: "Jane said this post helped convince her that her original viewpoint was ill-informed."

The widespread use of ratings of participants and comments in online conversation provides a starting point for exploration of design alternatives. Richer ratings schemes - going beyond thumbs-up or thumbs-down - may be augmented by more nuanced approaches that consider the source of an assertion and the trustworthiness of the person making it.
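The graded stances and viewpoint histories described above can be sketched as a simple data model. This is a minimal, hypothetical illustration - the five-point scale, the annotation strings, and all names are assumptions, not a description of any existing system.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical five-point stance scale replacing a binary endorse/oppose
# vote, so readers can see relative strength of conviction.
SCALE = {-2: "strongly oppose", -1: "oppose", 0: "neutral",
         1: "endorse", 2: "strongly endorse"}

@dataclass
class StanceRecord:
    user: str
    history: list = field(default_factory=list)  # (stance, annotation) pairs

    def revise(self, stance: int, annotation: str = ""):
        """Record a (possibly revised) stance, keeping the full history
        so others can see how - and why - a view changed."""
        assert stance in SCALE
        self.history.append((stance, annotation))

    @property
    def current(self):
        return self.history[-1][0] if self.history else 0

def stance_distribution(records):
    """Aggregate current stances across participants into a distribution,
    rather than collapsing them into a single up/down tally."""
    return Counter(SCALE[r.current] for r in records)
```

The annotation attached to each revision is what would let the system surface explanations like the "Jane" example above: the history preserves both the earlier stance and the stated reason for changing it.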

2) Evaluating Information: High-quality, comprehensible information is necessary for promoting civic involvement (Knight Commission, 2009). Deliberative discussions will necessarily involve the use of external information to provide context, document claims, and inform discussion. This information will likely come from a broad range of sources, including traditional or online news sources and discussion groups (Robertson, 2005). Appropriately-constructed simulations can be particularly useful for illustrating possible consequences of alternative solutions to specific problems (Borning, Friedman, Davis, & Lin, 2005). Regardless of the form they take, these information sources will be viewed critically by deliberation participants.

Although the introduction of external sources for supporting arguments may improve the quality of discussion, these sources may simply add a level of indirection, as participants will need appropriate tools for understanding the quality and believability of diverse information providers: information from highly-regarded organizations or peer-reviewed academic publications may be more credible than reports from unknown entities. Ratings for sources, along with comments on their trustworthiness, reliability, perspectives, and history, can all be useful. Links to high-quality external data sources, such as the District of Columbia Data Catalog (District of Columbia, 2009), the forthcoming federal data.gov, and independent efforts like those run by Sunlight Labs (Sunlight Labs, 2009), might be used to encourage the use of publicly-available government data sets whenever appropriate.

Evaluation of information and information sources may go beyond simple binary rating schemes. Ratings of participants and comments might be expanded to include richer annotations in support of informed debate. For example, a "documentation?" annotation associated with a claim in a posted discussion item might challenge the author to provide links to supporting documentation. Content might also be rated implicitly. One effort used measures of the lifetime of edits made in Wikipedia to implicitly assess the reputation of authors (Adler and de Alfaro, 2007). Similar approaches could be used to evaluate arguments and supporting documentation in a collaborative system.
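The implicit, survival-based rating idea can be sketched as follows. This is a deliberately simplified sketch in the spirit of Adler and de Alfaro's content-driven approach, not their actual algorithm: the "survived fraction" signal, the update weight, and the [0, 1] reputation range are all assumptions made for illustration.

```python
# Hypothetical content-driven reputation: an author gains standing when
# their contributions survive later revisions, and loses standing when
# they are quickly reverted. "survived_fraction" is an assumed signal:
# the fraction of a contribution still present after subsequent edits.

def update_reputation(rep, survived_fraction, weight=0.1):
    """Nudge reputation up for durable text, down for reverted text,
    clamped to the [0, 1] range."""
    signal = 2.0 * survived_fraction - 1.0   # map [0, 1] -> [-1, +1]
    return max(0.0, min(1.0, rep + weight * signal))

def author_reputation(contributions, initial=0.5):
    """Fold an author's chronological survival record into one score."""
    rep = initial
    for survived in contributions:
        rep = update_reputation(rep, survived)
    return rep
```

The appeal of this style of scheme is that it needs no explicit voting: the community's ordinary editing behavior supplies the evaluation.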

Other, more confrontational possibilities might include giving users the ability to directly add annotations refuting claims from others. Such annotations might include references to alternative information sources with differing viewpoints or interpretations.

3) Gaming of Priorities and Votes: Both whitehouse2.org and change.gov allow any user to make proposals. Votes on these proposals may be used to indicate perceptions of priorities: ideas that generate few votes are probably not high on anyone's list of concerns. Conversely, high vote totals may not reliably indicate genuine interest in or support for a given position. As illustrated by NASA's recent contest for naming a portion of the space station (Klotz, 2009), concerted online efforts can generate seemingly substantial support for positions that may in fact be somewhat marginal.

Weighted endorsement schemes that give greater consideration to users with reputation ratings indicative of a long history of thoughtful consideration might be used to discount electronic "ballot-stuffing". Visualizations of voting trends, including identification of voting clusters, might help participants identify votes that may be more indicative of group membership than considered opinions.
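A weighted endorsement scheme of the kind described above might be sketched as follows. The particular weighting formula, the 30-day age constant, and the input fields are assumptions chosen to illustrate the idea that a burst of brand-new accounts should not be able to swamp long-standing, reputable participants.

```python
import math

def vote_weight(reputation, days_active):
    """Discount a vote by the voter's reputation (assumed 0..1) and by a
    saturating account-age factor (~0.63 at 30 days, approaching 1.0)."""
    age_factor = 1.0 - math.exp(-days_active / 30.0)
    return reputation * age_factor

def weighted_tally(votes):
    """Tally votes given as (choice, reputation, days_active) tuples,
    so the result reflects weighted rather than raw counts."""
    totals = {}
    for choice, rep, days in votes:
        totals[choice] = totals.get(choice, 0.0) + vote_weight(rep, days)
    return totals
```

Under this sketch, one hundred day-old accounts contribute essentially nothing, while a handful of established participants carry real weight - exactly the discounting of electronic "ballot-stuffing" described above. Publishing both raw and weighted tallies would keep the discounting transparent.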

4) Evaluation of participants: The process of deciding which proposals to endorse or oppose, and which comments or discussion points might inform that decision, involves an ongoing process of evaluating materials. Just as reputation systems on sites like eBay help wary buyers and sellers decide who to do business with, reputation systems in support of deliberative discussions help participants determine the credence that should be given to various individuals or data sources.

As internally consistent evaluations of participants, these tools may add substantial value, but additional measures will be needed to strike a balance between full disclosure of external interests that may indicate conflicts and the possibility of appropriate anonymous contributions.

Reputation systems might be designed to support open disclosure. Registration with a full name (as opposed to a pseudonym) and endorsements from other participants (based on "real-world" dealings) might improve reputation. Verifiable disclosure of financial and employment details that might pose conflicts of interest would provide additional increases in reputation. Posted opinions and comments might link to these disclosures. Support for visualization of relationships between participants (perhaps based on employment or affiliations) might be used to infer alliances behind various positions (see littlesis.org as an example effort aimed at demonstrating connections between business and government leaders). To allow people to change their views over time, mechanisms are needed to allow a participant to show this evolution in their thinking, rather than having their views perceived as represented by isolated statements.

Facilities for "outing" participants with undeclared conflicts of interest might be used to expose participants acting as shills for undeclared interests. Participants who wish to disclose relevant information about another would disclose their intentions to the alleged bad-faith actor prior to posting them publicly, allowing for rebuttal. Reputation systems would help users evaluate the relative merits of the conflict claim and rebuttal.

Anonymity will likely be a concern in evaluating participant contributions. Awarding reputation points to those who choose to reveal their full names while participating may also be seen as discriminating against those who choose to remain anonymous. Anonymous participants might use appropriately anonymized endorsements from highly-reputable individuals to overcome these limitations. These statements might act as "reputation escrow". Investigation of the implications, utility, and design issues associated with more nuanced identity management models will be necessary.

Many deliberations will present questions regarding constituency. Who should be allowed to participate in deliberations? Although some discussions may be local in nature, limiting participation to residents of a given neighborhood or municipality may not be desirable or technically feasible. Other deliberations focused on specific subpopulations present similar concerns: how can a national deliberative system help determine who can "vote"?

5) Multiple means of evaluating content/comment: Multiple streams of information about specific proposals, together with meta-information rating relevant individuals and data sources, present significant challenges in usability. Designing universally usable interfaces for exploring and interpreting these data sets will be a substantial challenge.

6) Supporting active, sustained, engagement in ongoing, evolving discussions: Policy-makers and stakeholders may be faced with the challenge of interpreting and synthesizing large volumes of ongoing discussion, potentially involving multiple streams of inter-related topics. Debate and discussion will need to support evolution of views and specific proposals. Appropriate tools for coordinating this evolution, investigating histories, and summarizing discussions will be needed. Effective participation will mean that individuals are able to contribute to discussions and understand their progress without becoming overwhelmed. Furthermore, mechanisms are needed to facilitate the incorporation of ideas from people who join the conversation at different points in time, to get late-comers up to speed and avoid having them restate issues that have already been hashed out in the discussion. Tools for navigating large volumes of conversation, summarizing trends, gauging opinion, and identifying relationships between participants and discussions will be needed.

Appropriate visualizations of these processes will be necessary. Visualizations of Wikipedia edits (Viégas, et al. 2004) illustrating discussions of controversial topics provide some inspiration, but broadly-based deliberative processes will present significant scaling challenges for these displays: with potentially dozens of related discussions on different facets of a complex topic, participants will face the challenge of both identifying specific conversations of interest and then understanding the context and content of those discussions.

7) Ensuring representativeness of results: Given the likelihood that only a small minority of citizens will engage in such deliberations, any outcomes should not be oversold. When available, demographic information might be used to demonstrate a range of diverse voices taking a given position. Similarly, votes or attempts at consensus must be carefully structured: what is the necessary quorum for a large, distributed, online-only, transitory group? Building on earlier work examining the possibility of using displays indicating the value of contribution (Rashid, et al., 2006), deliberative interfaces might be fine-tuned to encourage participation by members of under-represented groups.
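One concrete way to avoid overselling outcomes is to compare the demographic mix of participants against a reference population before reporting results. The sketch below is a hypothetical illustration: the group labels, the ratio-of-shares measure, and the 0.5 under-representation threshold are all assumptions.

```python
def representation_ratios(participants, population):
    """Both arguments map group -> count. Returns group -> the ratio of
    the group's share among participants to its share of the reference
    population (1.0 means proportional representation)."""
    total_p = sum(participants.values())
    total_ref = sum(population.values())
    return {g: (participants.get(g, 0) / total_p) / (population[g] / total_ref)
            for g in population}

def underrepresented(participants, population, threshold=0.5):
    """Flag groups whose participation share falls below the (assumed)
    threshold fraction of their population share."""
    ratios = representation_ratios(participants, population)
    return sorted(g for g, r in ratios.items() if r < threshold)
```

Flagged groups could then be targeted by the participation-encouraging displays mentioned above, and outcome reports could carry the ratios as an explicit caveat.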

8) Working toward consensus or agreeing to disagree. Discussion, moderation, and voting may be helpful, but other means of assessing opinion may be useful. Participants might be able to prioritize items, choose between binary alternatives, assign monetary values to choices, or use any of a variety of other mechanisms to express their views. Other approaches might separate discussion from decision-making, perhaps having discrete times for various forms of participation. Which of these approaches work, and under what circumstances?
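As one example of the alternative mechanisms mentioned above, a Borda count over each participant's ranked priorities rewards broadly acceptable options over polarizing ones. This is a sketch of one well-known voting rule, offered for illustration; the proposal names in the usage are invented.

```python
from collections import defaultdict

def borda(rankings):
    """Borda count: rankings is a list of lists, each ordered from most
    to least preferred. An item ranked i-th among n options earns
    n - 1 - i points; totals across all participants decide the order."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, item in enumerate(ranking):
            scores[item] += n - 1 - i
    return dict(scores)
```

Because every position in every ranking contributes points, an option that is nearly everyone's second choice can beat one that is half the group's first choice and the other half's last - a property that may suit deliberation better than winner-take-all voting.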

Consensus may not always be a desirable or achievable outcome, particularly when contentious matters may leave little common ground. When this happens, the relevant question might change from "how can we agree on a solution to a shared problem?" to "how can we best accept that this debate may have 'winners' and 'losers' without harming chances for future deliberative successes?" Design alternatives that might support this process include support for clear narratives that explain and illustrate decision-making processes, with an eye towards illustrating clearly who "won" and who "lost". Participants on the "losing" side of these debates might be given incentives for continuing to participate in other discussions - as opposed to choosing to disengage entirely. One possibility would be to increase prestige ratings for participants who respond constructively to losing votes.

Visualization can play an important role here as well. Computer-supported argument visualization tools might provide a range of displays aimed at helping participants understand the progression of an ongoing deliberation. Possible visualizations include overviews, chronological maps of dialogs, and argument visualizations that link alternative decisions to pro- and con-arguments (Renton & Macintosh, 2007). Scaling these tools to support large numbers of alternatives, arguments, and participants remains a challenge.

9) Supporting moderation and oversight: Particularly for controversial topics, tools for oversight and moderation will be needed. Appropriately trained and responsible individuals will need facilities for responding to interpersonal problems, moderating heated discussions, maintaining an appropriate level of focused, on-topic conversation, and identifying any behavior that may be illegal, threatening, or otherwise harmful. Although the emerging study of the evolution of rules in online spaces such as Wikipedia (Butler, et al. 2008) provides some useful guidance, more research will be needed to understand how appropriate administration of deliberative sites can provide accountability, flexibility, and transparency.

10) Building confidence: Deliberative processes will not work unless participants feel that all actors are working in good faith, and that the process can have meaningful impact. Deliberations that are ignored by policy-makers will not encourage further participation. Combining large-scale deliberations with clear and transparent decision-making processes that disclose the full range of voices that influenced decisions may help build trust in these processes.

11) Supporting appropriate flexibility of tools and processes: As deliberative processes evolve over time, the technological capabilities required to support them must evolve as well. Because inappropriate tools can place undesirable constraints on the process, deliberation tools will need a range of flexible capabilities for managing both consensus and disagreement, along with appropriate support for helping participants determine which capabilities are needed at any given point in a deliberative process (de Moor and Aakhus, 2006). Effective strategies will support appropriate end-user customization of both the underlying process and the relevant user interfaces.

The change.gov experience illustrates the problems associated with deliberative input into public policy-making. Although change.gov, the White House Office of Science & Technology Policy blog (OSTP, 2009), and related efforts solicit citizen input into government policy activities, the influence that this input has is not yet clear. Good-faith demonstrations of sustained engagement by policy-makers with these processes will be necessary to motivate continued citizen involvement: if participants aren't confident that policy-makers are listening, they may stop talking. Recent government efforts aimed at reconsidering online services (Federal Web Managers Council, 2008) and using social media for communications with citizens (Godwin, et al. 2008; Webcontent.gov, 2009) are a good start in this direction, but widespread deliberation might benefit from more specific and direct indications of policy-maker engagement. These indicators might range from active participation in discussions to direct attributions that link elements of policy proposals, legislation, and/or regulation back to specific deliberations.


Adler, B.T., and de Alfaro, L. 2007. A Content-Driven Reputation System for the Wikipedia. Proceedings of the 16th International Conference on World Wide Web.

Borning, A., Friedman, B., Davis, J., and Lin, P. 2005. Informing Public Deliberation: Value-Sensitive Design of Indicators for a Large-Scale Urban Simulation. Proceedings of the 2005 European Conference on Computer-Supported Cooperative Work, September 2005.

Butler, B., Joyce, E., and Pike, J. 2008. Don't look now, but we've created a bureaucracy: the nature and roles of policies and rules in Wikipedia. ACM SIGCHI Conference on Human Factors in Computing Systems.

de Moor, A., and Aakhus, M. 2006. Argumentation support: from technologies to tools. Communications of the ACM, March 2006, 49(3), pp. 93-98.

District of Columbia, 2009. Data Catalog. http://data.octo.dc.gov/, Accessed May 15, 2009.

Federal Web Managers Council, 2008. Putting Citizens First: Transforming Online Government. http://www.usa.gov/webcontent/documents/Federal_Web_Managers_WhitePaper.pdf, Accessed May 15, 2009.

Godwin, B., Campbell, S., Levy, J., and Bounds, J. 2008. Social Media and the Federal Government: Perceived and Real Barriers and Potential Solutions. http://www.usa.gov/webcontent/documents/SocialMediaFed%20Govt_BarriersPotentialSolutions.pdf

Klotz, I. 2009. NASA in Colbert conundrum over Space Station. Reuters, March 30, 2009. http://www.reuters.com/article/newsOne/idUSTRE52T5TN20090330, Accessed May 15, 2009.

Knight Commission, 2009. The Knight Commission on the Information Needs of Communities in a Democracy: Draft Report. http://www.knightcomm.org/files/kcconsolidatedfirstdraft160409.pdf, Accessed May 15, 2009.

Introne, J.E. 2009. Supporting group decisions by mediating deliberation to improve information pooling. Proceedings of the ACM 2009 International Conference on Supporting Group Work.

National Academy of Public Administration, 2009. Open Government Dialogue. http://opengov.ideascale.com/, Accessed June 1, 2009.

OSTP, 2009. The Office of Science and Technology Policy OSTP Blog. http://blog.ostp.gov, Accessed May 15, 2009.

Rashid, A. M., Ling, K., Tassone, R., Resnick, P., Kraut, R., and Riedl, J. 2006. Motivating participation by displaying the value of contribution. ACM SIGCHI Conference on Human Factors in Computing Systems.

Renton, A., and Macintosh, A. 2007. Computer-supported argument maps as a policy memory. The Information Society 23(2), 125-133.

Robertson, S. 2005. Voter-Centered Design: Toward a Voter Decision Support System. ACM Transactions on Computer-Human Interaction 12(2), June 2005, 263-292.

Sunlight Labs, 2009. The Sunlight Labs Wiki. http://wiki.sunlightlabs.com/index.php/Main_Page, Accessed May 15, 2009.

Webcontent.gov, 2009. Social Media and Web 2.0 in Government - Webcontent.gov - Guide to Managing U.S. Government Websites. http://www.usa.gov/webcontent/technology/other_tech.shtml

Viégas, F., Wattenberg, M., and Dave, K. 2004. Studying Cooperation and Conflict between Authors with History Flow Visualizations. ACM SIGCHI Conference on Human Factors in Computing Systems.
