Misunderstanding Free Speech

In the wake of the white supremacist rally in Charlottesville, there has been a lot of discussion about free speech, and whether this right protects such groups.  This debate is coming out of a larger one about harmful speech, particularly on college campuses.  I struggle with this question (for reasons noted below), but find that the way this debate is popularly framed is misleading in two respects.

(1) It is not a debate between free speech advocates and opponents of free speech.  Rather, it is a debate about what free speech means, and how this right is best implemented politically.  Consider those who want to restrict the speech of hate groups.  The argument for this position is that such hate speech limits the ability of other people in society to speak.  Thus, limiting hate speech may actually *increase* the access to free speech in a society.  The basic idea is that we should think about free speech as the ability to do things with words (like make assertions that others listen to), not to merely vocalize syllables.  When others engage in hate speech, they help create an environment in which the oppressed are unable or unwilling to speak, because others do not listen, or out of fear for their safety.  It is this latter idea which motivates the ACLU's recent decision not to defend armed protestors (their guns, it is argued, have a chilling effect on speech). 

On the other side, defenders of greater legal protection for speech tend to argue that the legal right has to be absolute, otherwise it will be selectively applied in ways that favor the powerful.  While this may mean permitting harmful speech, it would be more harmful, it is argued, to give those in power (who can do more to reinforce systems of oppression) the legal authority to silence the voices of the powerless.

It is a difficult issue, and one I struggle with myself.  The important point, however, is that it is not a matter of supporting or opposing free speech.  It is about what kind of society best promotes free speech.

(2) Following this point, it is not a matter of "offensive" speech.  Taking offense is a fairly weak property.  People can take offense at all sorts of things, including the innocuous and the true.  There probably are white supremacists who marched at Charlottesville who would take offense at being called a "white supremacist."  I hardly think that would be adequate grounds for denying my right to say so.

If, however, we understand the debate in terms of speech that silences or removes the voices of others from public discourse, we have firmer grounds for distinguishing truly harmful speech from that which merely offends someone.  I'll be offended if you call me an idiot, but your doing so in no way reduces my ability to speak in our society (it is not systemic, and it is not part of any broader social structure that oppresses me; in fact, I am a beneficiary of social power structures).

As I said, I still struggle myself with these questions.  What I think is crucial, however, is that we engage the actual points of disagreement, rather than a caricature of the positions.

New Paper

My latest paper, entitled "Steering into the Skid: On the Norms of Critical Thinking," has just been published in Informal Logic.  You can find it here, and on the research page.

In the paper I look at the implications of arguments from Gigerenzer and Mercier & Sperber that cognitive heuristics and biases are ecologically rational, that is, that they are rational when used under the right conditions.  This might be taken as a challenge to teaching critical thinking skills, as such skills may not match the reasoning success of our informal heuristics.  I argue that a sophisticated conception of the goals of critical thinking, namely, critical thinking as the metacognitive skill to recognize and implement the right cognitive strategies in the right circumstances, helps us to avoid this objection.

Comments and thoughts welcome!

This Article Acknowledges the Amazing Properties of Language

Here's the cool thing about language: it is transparent in our everyday use, in that we use it so readily that we barely even recognize how amazing it is that we can express an infinite range of thoughts and be understood by others. We get a window into how interesting language really is when we encounter funny utterances, utterances which are funny in both the "interesting" and "amusing" senses of the word.

I encountered two such funny utterances on recent travels up to northern New Hampshire for a kayak trip. Looking more closely at each gives us insight into an aspect of the ways language works.

I heard the first on a podcast while driving home. On This American Life, when they are about to air a story that involves sex in some way (whether directly or obliquely), Ira Glass will provide a warning to the listener:

This story acknowledges the existence of sex.

This seems to me either a pragmatically paradoxical utterance, or an insufficient one. Let's suppose first that it is indeed the mere existence of sex that is problematic for some portion of the audience, and that some members of the audience might want to avoid knowing that sex exists, or more likely, they want to prevent someone else (e.g., children) from knowing sex exists.

The trouble, however, is that "acknowledges" is factive. A factive verb is one which presupposes the truth of its complement (the sentence that one acknowledges, in this case, "sex exists"). That is, to say that one "acknowledges that sex exists" presupposes that it is true that sex exists. To see this, compare the following sentences:

(1) This story acknowledges the existence of Santa Claus.

(2) This story acknowledges that the Mets won the 1986 World Series.

(3) This story acknowledges that the Mets lost the 1986 World Series.

Sentences (1) and (3) are unacceptable; they just don't sound right. They don't sound right because Santa Claus does not exist (sorry! content warning: this post acknowledges that Santa does not exist), and the Mets did win the 1986 World Series. You could say that someone "claimed that Santa exists," or that someone "discusses the existence of Santa Claus," but not that she "acknowledges" Santa's existence.
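As a toy illustration of factivity (my own sketch, not a serious semantic model), we can treat an "acknowledges that p" report as acceptable only when p holds in the shared background facts:

```python
# Toy model of a factive verb: "acknowledges that p" presupposes that p is true.
# (My own illustrative sketch; the facts below mirror the examples in the post.)
facts = {
    "sex exists": True,
    "Santa Claus exists": False,
    "the Mets won the 1986 World Series": True,
}

def acknowledges_is_acceptable(complement: str) -> bool:
    """A factive report is felicitous only if its complement is true."""
    return facts.get(complement, False)

print(acknowledges_is_acceptable("sex exists"))          # True: sentence (2) sounds fine
print(acknowledges_is_acceptable("Santa Claus exists"))  # False: sentence (1) sounds wrong
```

A false complement, like "the Mets lost the 1986 World Series," likewise comes out unacceptable, matching the oddness of sentence (3).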

This is interesting, then, because the warning itself conveys the fact that sex exists. If the aim of the warning is to help the audience avoid this knowledge, then the warning is self-undermining!

Presumably, then, the intent of the warning is to help the audience avoid descriptions of sex that they might find inappropriate. In this case, however, the warning is insufficient - it does not properly warn you against what you might want to avoid, nor does it provide enough information to know whether the story is worth avoiding or not.

I suspect that this point is not lost on the producers of This American Life, and the warning may even be a statement about the inanity of FCC requirements - but it is an interesting case precisely because of the reasons the warning is so funny.

The second case was a road sign that I encountered in a small northern NH town. It was a warning about the nearby presence of moose, with a sign under it:

Next 5500 feet.

Now, a mile is 5280 feet. A mile would be a fairly standard unit of measurement on a sign like this, and 5500 feet rounds down to a mile easily enough. One might naturally expect the sign maker simply to round to 1 mile, unless those additional 220 feet were significantly more likely to contain moose than the area around them. The sign struck me as funny precisely because it seems committed to this surprising claim about the significance of those 220 feet for moose-alertness.
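For concreteness, the numbers here are just arithmetic on the figures above (5280 feet to the mile, 5500 feet on the sign):

```python
# Simple arithmetic on the figures from the sign (nothing assumed beyond them).
mile_ft = 5280
sign_ft = 5500

extra_ft = sign_ft - mile_ft            # the puzzling extra stretch of road
pct_over_mile = extra_ft / mile_ft * 100

print(extra_ft)                 # 220
print(round(pct_over_mile, 1))  # 4.2 (the zone is only ~4% longer than a mile)
```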

But why does it seem committed to this? The philosopher H.P. Grice has an answer. He was interested in content that we communicate beyond the literal content of what we say. Sarcasm provides a ready example - if I remark of some hideous shirt, "oh wow, that's an awesome shirt" I have literally said that the shirt is awesome. However, it might be clear from tone and context that I am communicating, or implicating, the opposite of what I literally said.

How does my audience know what I am implicating in a given context? Well, Grice thinks we enter into conversation with the mutual understanding that we will cooperate, and that we follow a few basic rules:

  1. Maxim of Quantity: make your contribution as informative as is required.
  2. Maxim of Quality: do not say what you believe to be false, or that for which you lack evidence.
  3. Maxim of Relation: make your contribution relevant.
  4. Maxim of Manner: make your contribution clear.

Suppose I tell you that the shirt is awesome. This, given the clear fact that the shirt is hideous, is false and so in violation of the Maxim of Quality. You, however, are assuming that I am cooperating. Grice's insight is that you will then assume I have violated the maxim on purpose, and infer that I actually mean that the shirt is awful.
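The flouting inference can be put as a toy decision rule (my own illustration; Grice's account is far richer): assume cooperation, and if the literal content plainly violates the Maxim of Quality, take the speaker to mean the opposite.

```python
# Toy sketch of implicature via flouting the Maxim of Quality.
# (My own illustration, not Grice's formalism.)

def interpret(literal_claim: str, plainly_false: bool) -> str:
    """Assume the speaker is cooperating. If what they literally said is
    plainly false (a flouted Maxim of Quality), infer the opposite meaning."""
    if plainly_false:
        return f"speaker implicates the opposite of: {literal_claim}"
    return f"speaker means: {literal_claim}"

# The shirt is obviously hideous, so "that's an awesome shirt" flouts Quality.
print(interpret("that's an awesome shirt", plainly_false=True))
```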

Returning, then, to our road sign - it seems to violate the Maxim of Quantity. The use of a more fine-grained measurement scale (feet), when it seems like a broader one (miles) would do, seems to provide more information than is really required. Such information is not required for the simple reason that, when driving down the road, I cannot tell the difference between the passage of 5280 feet, and 5500 feet. So why, then, did the author of the sign choose feet? The answer, if I am following Grice, would seem to be that the author wants me to recognize that more specific information is actually required - that those extra 220 feet really do matter. This would bring the author's statement on the sign back in line with the Maxim of Quantity - exactly what I should want if I assume that s/he is being cooperative.

The actual explanation is probably not that the author wants me to pay extra close attention to those 220 feet. As with the first case, what I find interesting about the example is the way in which it illuminates a fascinating fact about language. In the first case, it was the way factive verbs work. In this second case, it is Grice's account of how we communicate content we do not literally say. These are features of language that we use like second nature, and we only become aware of them when they become funny enough to rise to our attention.

Trump and the Bullshit Death Spiral

"Bullshit is a greater enemy to the truth than lies are." So writes Harry Frankfurt, in his essay "On Bullshit". I've had cause to think about this line in light of the Presidential campaign of Donald Trump, and the way that bullshit and lying can become deeply entwined for the serial bullshitter. Perhaps it is worth first taking a step back and looking at Frankfurt's distinction between bullshit and lying.

Lying is intentional deception. The liar needs to know the truth, and deliberately speaks falsely in order to bring about a belief in that falsehood. The bullshitter, by contrast, does not know or care whether what s/he says is true or false. Both the liar and the bullshitter care about the effects of their words, but the liar possesses knowledge of the truth, while the bullshitter ignores it entirely.

At first glance, it might seem that this makes lying the worse activity. After all, the liar knowingly disregards the truth; it is an intentional act, and it is at least possible that the bullshitter might speak the truth. Not so, argues Frankfurt.

Both in lying and in telling the truth people are guided by their beliefs concerning the way things are. These guide them as they endeavor either to describe the world correctly or to describe it deceitfully. For this reason, telling lies does not tend to unfit a person for telling the truth in the same way that bullshitting tends to. Through excessive indulgence in the latter activity, which involves making assertions without paying attention to anything except what it suits one to say, a person's normal habit of attending to the ways things are may become attenuated or lost.

While instances of lying may be more harmful than instances of bullshit, lying does not undermine the value of truth in the same way that bullshit does. The liar has to know the truth in order to lie, and so is required to remain in the habit of caring about the way the world really is. The bullshitter, however, consistently undermines the value of the truth. Not only are particular truths not important to the bullshitter, but the truth itself is not important.

I was thinking about the passage quoted above when listening to some recent remarks by Donald Trump. The first was his claim that he had never met Putin, which is clearly at odds with his remarks that he had. The second was his claim that the NFL sent him a letter complaining about the debate schedule, which the NFL immediately denied. In the latter case, Trump is clearly lying, and in the former, at least one of those remarks is a lie.

What's interesting about these lies is the way in which they differ from what you might call 'normal political lies.' They are not of great political importance, they are made by Trump directly (rather than by intermediaries or third parties), and they are easily checked factual assertions. The use of such lies is one of the factors that has led Ezra Klein to call this a campaign between a normal politician and an abnormal one.

Frankfurt gives us the diagnosis. Trump has shown an ease with bullshit, particularly in the promise of "deals" that will solve America's problems. Frankfurt himself looked at Trump's use of lies and bullshit, and a simple search for "Trump bullshit" brings up several examples of writers applying Frankfurt's analysis to Trump. What I think has gone underappreciated, however, is the way in which bullshitting makes lying all that much easier.

Trump's casual lying in the Putin and NFL cases, which has baffled political observers (particularly since avoiding the "gaffes" seems so easy), is symptomatic of a general disregard for the value of truth. While there is an important conceptual distinction between lying and bullshitting, the latter makes the former easier. The liar recognizes the value of truth, but values it less than what s/he will achieve through the lie. A politician, for example, might think that winning an election is more important than being truthful, and so, is willing to deceive others into believing a falsehood. As bullshit diminishes the value we place on truth, it is more easily outweighed by other values. Let's call it the "bullshit death spiral" - the more you bullshit, the easier it is to lie and to bullshit, ad infinitum.

This is not to say that we should demand "normal political lies" in place of the more brazen Trump-lies. It is to say, however, that we should be mindful everywhere of how bullshit undermines the value we place on the truth. While, in normal cases of political misrepresentation and lying, we value victory over the truth, the ubiquity of "fact checkers" suggests that we do still value the truth. The age of bullshit, of which Trump is the apotheosis, threatens a fundamental, though perhaps fragile, value.

On Public Thinking (i.e., Tweeting?)

A year ago, Christopher Long gave a talk at St. Lawrence, arguing in favor of integrating contemporary social media (and in particular, Twitter) into the classroom. During the talk, the audience was encouraged to tweet, and I hit my yearly twitter quota in an hour and a half. By the end, I tweeted out that I was convinced to tweet more by an argument based on Aristotle (what better illustration of what it is to be an academic could one ask for?). My recall of the details of the argument is fuzzy, but the crucial consideration was this: social media is becoming one of the primary platforms through which we engage with the world. If we care about being responsible citizens in the political, ethical, and cultural conversations of our age, then we should care about using social media.

I was moved by this argument; at least, I was moved enough to agree with it without doing anything differently. In looking back at my twitter feed, I can't help but notice a total of 11 original tweets since then. This is by far my most active social media account. What happened, and, should I be moved by arguments in favor of greater digital engagement?

Before turning to prescription, it's worth starting with the diagnosis. Part of my reluctance to engage with social media is simply habitual - I rarely have the thought "I should tweet about this!" Habits, however, are changeable if one cares to change them. Were there other considerations, ones which might weigh against arguments in favor of an increased digital presence? I suspect that there are two operating in my own case.

The first is a wariness of the placating effect of online posting (slacktivism). That is, social media posting may make one less likely to act on the expressed sentiments. The second is a worry about thinking in public in a medium which lasts essentially forever. I change my mind often, and if I have learned anything from spending my adult life among academics, it is that just about every topic is more complex than it first appears.

Neither of these concerns is compelling; upon reflection, they sound more like excuses than justifying reasons. Two considerations tell against the first worry. The most obvious reply is that the possibility of posting taking the place of action does not thereby diminish the value of doing both. More substantively, however, digital engagement is itself a type of action. What's more, as an academic, it is likely the type of public contribution to discourse that I am best suited to make. If anything, I should be more inclined to use these new platforms, for it is in working through ideas that I am best prepared to contribute to the world.

The second concern is more interesting. We are often encouraged not to share underdeveloped ideas for a few reasons. One is that we may be wrong, and be made to look foolish. Another is that, especially for an academic, our ideas might be "scooped" and developed by someone else. Or, one might be worried that expressing in-process ideas shows too much process, and the muddy work that goes into developing ideas might reflect poorly on you as a scholar.

There is something to this second concern. Ideas on the internet are long-lived, and never truly lost. Further, the rapidity and ease with which we castigate people for ideas speaks to a real discomfort with people changing their minds or growing in their views.

It is this attitude, however, that speaks in favor of developing our ideas in public. Showing our students, and showing the public, what it looks like to develop an idea has a range of benefits. It shows that ideas do not fall fully formed into the laps of geniuses, a pernicious conception of intellectual labor that suggests good thinking is the province of those with a gift, one that you either have or don't. It also shows the public that good ideas require false steps. Without recognizing this, it is too easy to see public discussion as a matter of choosing sides - a model where advocates of different positions try only to get the undecided onto their "team," without engaging each other. Further, it demonstrates that we come to our ideas by taking the evidence for and against them seriously. We are not born into our political positions and identity; we come to them through an honest and, at times, arduous search for the truth.

In reflecting on this, it strikes me as resting on a fairly obvious point, one Long made forcefully - the arguments in favor of public thinking through social media are the same arguments in favor of engaging the world as a public intellectual. It is the platforms that have changed, not the underlying arguments.

A year ago, I made the judgment that I should better engage using the digital tools of modern social media and the like. Today, I find these arguments, if anything, even more persuasive than I did then. This post then, is a public commitment to public writing.

Chalmers and Cappelen on Intuition

Today I was reading a recent paper by David Chalmers entitled "Intuitions in Philosophy: A Minimal Defense," which you can find here.  It's a reply to a recent book by Herman Cappelen which argues that, contrary to a recent interpretation of philosophical methodology particularly prevalent in experimental philosophy, philosophers do not rely upon intuitions in their arguments.  Cappelen's basic strategy is largely a negative one: he tries to show by analysis of the language philosophers use, and the arguments philosophers employ, in several case studies, that intuitions are not at play.  Chalmers has a rather interesting reply to one of Cappelen's arguments that I want to briefly take up here.

Cappelen, when examining these case studies, classifies them as not appealing to intuition in cases where the author makes an argument for the claim purportedly supported by intuition.  That is, if the author has an argument that p, then that p is not supported by appeal to intuition.  I offered a similar argument in my dissertation by pointing to the arguments that Putnam offers in the Twin Earth thought experiment.  This seems obvious enough at first glance, since one would think that the intuition would only be at play if no argument was possible.  If an argument is available, why bother with an intuition?

On Chalmers' analysis of intuitions, an intuition has dialectical non-inferential justification.  That is, we need not make any claims about the actual epistemic justification of the intuition, but rather we look at what justifications are accepted for that claim in argument.  For intuitions, they are accepted without inferential justification.  This, however, only means that they do not require inferential justification.  It is entirely consistent that a claim might have both non-inferential justification and inferential justification.  If so, then we have to deny the general principle that if an author argues that p, then that p is not supported by an intuition.

For example, the Gettier case includes both an argument and an intuition.  We can see that they come apart in the further reporting of the Gettier case in secondary literature, where the argument is often dropped and only the intuition remains.  Indeed, the same could be said of the Putnam case (it is why I thought it an important contribution to even point out the Putnam argument at all!).  The later authors have taken the intuition as non-inferentially justified.  If they did not, then they would have maintained the inferential justification.  Perhaps they were wrong to do so (the question of epistemic justification), but from the point of view of methodological analysis, it is clear that they did accept the intuition.

I think Chalmers is making an important point here, and it reveals an aspect of Cappelen's analysis that didn't sit right with me either.  Even if the arguments he looks at can be best understood as not requiring intuition, this is a different claim from the one that intuition has not played a central evidential role in the field as it has actually progressed.  Indeed, I have argued that Kripke's arguments can be interpreted as not relying on intuition at all, but it does not follow that Kripke himself is not using his own intuitions in making the case.  

However, I think caution is needed with Chalmers' position as well.  One reason for endorsing Cappelen's principle is the Principle of Charity.  If we can interpret an argument as not relying on intuition, when those intuitions are epistemically problematic, then we ought to do so.  Perhaps others have made the dialectical move of taking the intuition on board, but this might not just be epistemically unjustified, but unjustified as a matter of interpretation as well.  Interpreting involves balancing the competing demands of textual fidelity and charity.  We can read Cappelen's principle as tacitly assuming that it is uncharitable to ascribe an appeal to intuition when not necessary.*

This is important because we should also be cautious with using the secondary literature as evidence here.  Intuitions are evocative and easier to understand than arguments.  The author might be using them for the purposes of illustrating the position she is considering, rather than as justifying it.  At the least, care is required when looking at the secondary literature to see if the author is actually taking it on as a dialectical assumption rather than using it for more pragmatic purposes.

* This might be more problematic for Cappelen than I am giving credit for here.  His intention is not to evaluate the methodology, but strictly to interpret it.  He might want to avoid taking on the assumption that intuitions are epistemically suspect.

Epistemic Voluntariness

A colleague of mine gave an excellent talk this past semester on the Intellectual, or Epistemic, Virtues.  In this talk, he drew attention to the shift towards autonomy in ethics in the Modern period, most notably exemplified by Kant.  We might think of the intellectual virtue of autonomy as involving the ability and willingness to determine one's own beliefs through an appreciation of the evidence.  After the talk, we discussed whether we could really cash out this virtue as an independent virtue, rather than a derivative one.  That is, is intellectual autonomy reducible to possession of other epistemic virtues? 

Over the course of the conversation, I raised the question whether there might be an important epistemic virtue of epistemic voluntariness.  The parallel here is to debates over free will, and the compatibilist and incompatibilist lines on that question.  For the compatibilist, the crucial idea is that one can freely choose something if what one has chosen is what one desires or wants to have chosen (though of course, complexities abound and compatibilists offer much more sophisticated definitions).  It might be that one's desires are determined, but this is not a challenge to human freedom.

Is there an analogue for epistemic virtue?  I'd like to suggest and play with the following hypotheses:

Epistemic Voluntariness: one's beliefs are consistent with reflection on one's available evidence.

Let's illustrate this with a couple of examples.  First, let's imagine someone who fails to possess this virtue.  Archie believes that the government has a secret  weather machine, and it is used to divert attention from political scandal.  Now, Archie has lots of evidence available to him, and very little of it would indicate that such a machine could or does exist.  He also has a friend, June, who believes that the machine is real.  June tells Archie so, and Archie believes her.  Archie has no evidence that June is a good authority, and any evidence that would give him reason to believe her is clearly swamped by his own evidence against the hypothesis.  Yet, Archie believes June.

In this case, I would suggest that Archie does not possess the virtue, since he is not forming his own beliefs consistently with his available evidence.  This is not to make a judgment about the quality of the evidence, lest this virtue really amount to simply being a good reasoner!  As such, suppose that June's available evidence really would support the hypothesis (or is at least broadly consistent with it) that the government has a weather machine and they use it in these nefarious ways.  June's evidence might be inadequate, and poorly understood, but June's belief would be voluntary in a way that Archie's is not.

What distinguishes this proposal from intellectual autonomy is that it does not require a causal relationship between reflection on one's evidence and one's beliefs.  I take it that part of intellectual autonomy is that one, as a matter of fact, reflects upon one's own ideas.  The key notion (again compare to autonomy with regard to free will) is that the belief comes entirely from the agent.  As with voluntariness with regard to free will, the idea here is that a belief could be voluntary even if not every stage leading up to the formulation of that belief was free/based on reflection.

Is this a more useful concept than intellectual autonomy?  Voluntariness is useful in the free will debates, because of worries about the intelligibility of agents as loci of contra-causal force.  The same worry might be applied to the intellectual virtues (if one holds that our intellectual reasoning is itself determined), and it might be useful for that purpose.  But is it useful independently of the free will debate?  Could one, for example, be a libertarian about free will but nevertheless prioritize intellectual voluntariness over intellectual autonomy?  

I'm not yet sure a positive answer can be given to this question.  Indeed, there may be problematic cases of intellectual voluntariness.  Suppose that Archie's evidence would support the weather machine hypothesis, but that he doesn't care enough to reflect upon the data.  He settles on a rule: "I'll believe whatever June tells me to believe."  Sometimes Archie's beliefs are voluntary (when consistent with his evidence) and sometimes they are not.  If June's pronouncements happen to align with his evidence the majority of the time, would this make Archie in any way intellectually virtuous?  This account suggests that it would.  This seems counter-intuitive.

There are three terms in this definition that need precisification as well.  First, what counts as reflection?  Would the briefest consideration count?  Would it be required that the belief is consistent with one's evidence in an all-things-considered way?

Second, what counts as available evidence?  Is it that the evidence is consciously available?  That it is "contained" in one's mind in some way (represented?  there is danger of substituting one mystery for another here)?  That the evidence could be represented or understood if the person went out and found it?  For example, I could easily pop by the office of a physics professor and get evidence with regard to some question in that field.  Is that evidence available to me in the requisite sense?

Third, what counts as evidence here?  Surely it should not be understood as a veridical term, if it is to capture the idea that I am after here.  That is, it need not be the case that the available evidence actually indicates a truth about the world.  If it were understood in this way, then voluntariness would really amount to reflecting on the right data, rather than reflecting on the data that one has.  This is, of course, an important virtue (in the above examples, June's evidence is presumably terrible!), but it is a distinct virtue.

On the basis of these reflections, I'm not quite convinced that this notion of intellectual voluntariness has legs, except, perhaps, as a modification of intellectual autonomy to handle objections to the doctrine of free will.  I still wonder, however, whether there is something to the idea, and that the problem is only with my formulation of the virtue.  Any thoughts - can intellectual voluntariness be saved?  Is it worth trying to save? 

Teaching Ethical Theory

I've been thinking quite a bit about how I teach Ethical Theory, and two related problems that keep cropping up.

  1. Students come out of the class with the attitude that they should choose between different major ethical theories.
  2. Students come out of the class flummoxed about ethics, thinking that all of their available options fail.

The relationship between these two is, I think, fairly clear.  Students think they have to make a choice, but have become quite good at knocking holes (intuitive ones, at least) in any ethical theory thrown at them.  As a consequence, they tend to reject all of the theories, and end up unsure of how to make use of what we've discussed in class.

I should not be surprised by this, and the way I teach the class is implicated in these outcomes.  I've followed a fairly traditional structure, content-wise at least, where we work through the major theories (with particular emphasis on Virtue Theory, Deontology and Utilitarianism) and contemporary criticisms.  We dive deep into discussions of counter-examples, and the differences between the theories that these counter-examples illuminate.  Counter-examples or intuition-pumps are also such a natural conversation starter, that they can quickly come to dominate discussion. 

What I'd like to do is rethink the way I approach the content of the course, and I want to start with my goals for the students.  One of the goals of my ethics courses is always to empower students to be better ethical reasoners in their own lives (despite problematic empirical evidence).  Wielding counter-examples is not particularly conducive to that goal.  The aim of using counter-examples is to think about the complex judgments about conflicting values that we all have to make.  If this is the goal, then why not go directly for it?

We might think about the differences between the major ethical theories in terms of the different values that they prioritize.  Kant, for example, places great importance on autonomy and fairness before the moral law, both commitments challenged by feminist ethics.  Aristotle's conception of voluntary action shares some similar ground with autonomy, but differs in crucial respects (and likewise between Kant and Mill on fairness).  While one could still organize readings in the same way as the traditional course, the discussion might be refocused onto these values.  Why might we hold these values too?  Why does Kant, say, hold these values but not combine them with others?

I have kicked around this idea for a while, but was concerned that it would not produce actual changes in my classroom - after all, discussion of the competing values might be a useful frame, but it could still easily lead to the focus on counter-examples that concerns me.  A colleague, however, suggested an assignment that she uses in her class, where students are asked to design their own ethical theory at the end of the term.  Such an assignment might be the linchpin that holds together a values-based approach.

In particular, it encourages a focus on consistency.  That is, students will first work through how Aristotle, Kant and Mill develop (or at least attempt to develop) frameworks that consistently integrate these values, and why other values cannot easily be introduced without also introducing inconsistency.  Students then have to take up this task for themselves, identifying the values that matter to them and working to consistently integrate them into a coherent theory.

I see two principal advantages to this approach.  First, it encourages students to make the material personal (values that matter to them) even at the same time that they are developing a theory which applies beyond their own person.  This speaks directly to the goal of helping students develop themselves as moral reasoners.  Second, it encourages students to think through the implications of their commitments in a way that the critical stance does less successfully.  It is this sort of working out of one's commitments that makes Singer's "Famine, Affluence and Morality" paper so compelling (both philosophically and pedagogically), and this approach might make it clear why this argument cannot be so easily dismissed by pointing to intuitive counter-examples.

This is an approach to Ethical Theory that is a long way off from coming to fruition, but nevertheless one I'm rather excited by.  Any thoughts, both from seasoned teachers of ethical theory and those who have taken such courses?