AR; coding deliberation, again

In the review of relevant literature, I’ve again come across two articles that try to tackle the thorny issue of measuring the qualities of deliberation. Of course the thorniness (thank you, thank you) of the issue lies in the fact that competing definitions of the term abound.

The two articles, both published in the Journal of Public Deliberation (full references at the end of the post), approach the problem from quite different angles. One of them, by Jennifer Stromer-Galley, takes the more usual route, conceptualizing good deliberation on the basis of theories such as those of Habermas. From this theoretical background, she derives the elements that need to constitute deliberation, and creates a coding scheme in order to measure the presence or absence of these elements in the analyzed discussion.

Authors of the other article (Mansbridge, Hartz-Karp, Amengual and Gastil) take a different approach. Their study is inductive: they asked ten professional debate facilitators to identify moments of “good” and “bad” deliberation in a series of recorded debates; and then conceptualized deliberation based on these answers.

Not surprisingly, the end results are somewhat different.

For Stromer-Galley, deliberation comprises the following six elements:

  1. reasoned opinion expression,
  2. source referencing (mass media, prep material, other participants, or personal narratives as sources),
  3. exposure to diverse views on the topic,
  4. equal participation (because this is a procedural guarantee of exposure to the largest possible number of views),
  5. coherence of the topic and conversation structure (so that the discussion stays relevant), and
  6. engagement of the parties with one another.
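To make the idea of a coding scheme concrete, here is a toy sketch of how the six elements above could be organized as a content-analysis codebook and tallied over coded units. The category keys, descriptions, and sample data are my own shorthand for illustration, not the actual labels or codes from Stromer-Galley's article.

```python
# Hypothetical codebook: one entry per element of deliberation.
# Keys and descriptions are illustrative shorthand, not the article's labels.
CODEBOOK = {
    "reasoned_opinion": "opinion statement supported by a reason",
    "sourcing": "reference to media, prep material, participants, or personal narrative",
    "diversity": "exposure to a view different from one's own",
    "equality": "distribution of speaking turns across participants",
    "topic_coherence": "on-topic contribution within the conversation structure",
    "engagement": "direct uptake of another participant's contribution",
}

def tally(coded_units):
    """Count how often each category was coded across the units of analysis."""
    counts = {cat: 0 for cat in CODEBOOK}
    for unit in coded_units:  # each unit is the set of categories coded for it
        for cat in unit:
            counts[cat] += 1
    return counts

# Three hypothetical coded units (e.g. "thoughts" in a transcript).
sample = [{"reasoned_opinion", "engagement"}, {"sourcing"}, {"topic_coherence"}]
print(tally(sample)["reasoned_opinion"])  # -> 1
```

The point of such a scheme is that coders can apply it reliably; whether the resulting counts indicate "good" deliberation is a separate question, taken up below.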

I think the list is quite clear; one thing to note, however, is the absence of an explicit “respect” or “honesty” category.

Mansbridge et al.’s list of such elements is much more loosely defined. Professional facilitators identified good deliberation as something that

  1. brings about a “positive group atmosphere”, and
  2. makes the group progress in its task.

So, from the front lines of deliberation, satisfaction and productivity are the two key words; and it’s important to note how these two are intrinsically related in a self-supporting (or self-weakening) manner.

“Freedom”, often cited by theoreticians as a key feature of deliberation, was reflected by the facilitators in the notion of the free flow of ideas; and the abstract value of “equality” was translated, in practice, into the trio of extensive and inclusive participation, group self-control, and the fair representation of views.

If I were to synthesize the results of these two studies, I’d say that, while it is important to operationalize deliberation based on a strong theoretical background, facilitators’ experiences remind us that factors related to the personal, sometimes emotional, experiences of the participants should also be considered; not only because it’s nice to be nice to people, but also because, apparently, striving for a good atmosphere brings about better output.

(Of course, the rub here is that the process of deliberation can hardly be measured without some reference to its content.)

Anyhow. Mansbridge et al. conceptualize deliberation, but don’t get as far as operationalizing it. Stromer-Galley does; but once again what’s missing is the calibration of the scale, so to say. Example: it is established that the “reasoned expression of opinion” is an important element of deliberation. It is then reliably measured that, in a given conversation, 55% of the total thoughts (the “thought” being carefully defined and established as the unit of analysis) were expressions of opinion, and 84% of these were also supported with reasons. This, according to the author, is “a fairly high volume of reasoned opinion.” But is it? Fairly high compared to what? Is it “enough”? How should we evaluate this 55% figure? Suppose there’s a deliberative assembly where only 35% of the thoughts are expressions of reasoned opinion; should we then reject the decisions of this assembly…?
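The calibration problem can be made concrete with a few lines of arithmetic. The sketch below computes the two proportions the coding scheme would yield from some invented coded data (all figures hypothetical, not taken from the article); the scheme produces the numbers reliably, but contains no benchmark for judging them.

```python
# Each "thought" (the unit of analysis) is coded for whether it expresses an
# opinion and, if so, whether that opinion is backed by a reason.
# The data are invented for illustration.
coded_thoughts = [
    {"opinion": True, "reasoned": True},
    {"opinion": True, "reasoned": False},
    {"opinion": False, "reasoned": False},
    {"opinion": True, "reasoned": True},
]

opinions = [t for t in coded_thoughts if t["opinion"]]
opinion_share = len(opinions) / len(coded_thoughts)          # thoughts that are opinions
reasoned_share = sum(t["reasoned"] for t in opinions) / len(opinions)  # opinions with reasons

print(f"{opinion_share:.0%} of thoughts are opinions; "
      f"{reasoned_share:.0%} of those are reasoned.")
# Here the shares come out to 75% and 67% — but nothing in the scheme itself
# says whether those figures (or the article's 55% and 84%) are "enough".
```

The measurement step is mechanical; the evaluative step, deciding where on this 0–100% scale “good deliberation” begins, is exactly what an independent standard would have to supply.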

So the problem I’m awkwardly getting at is that of establishing independent standards – without these, even the best operationalization of deliberation will fail the test of empirical relevance.

Stromer-Galley, Jennifer (2007): “Measuring Deliberation’s Content: A Coding Scheme”, in: Journal of Public Deliberation 3(1), Article 12.

Mansbridge, Hartz-Karp, Amengual and Gastil (2006): “Norms of Deliberation: An Inductive Study,” in: Journal of Public Deliberation 2(1), Article 7.
