Thanks, Gedusa. I'm not an expert in metaethics either, so take what I say as a novice's answer.
Eliezer talks about the Pascalian wager for moral realism
in a nice essay, and it's actually a very common argument. My response is that moral realism isn't so much a possibility to which I would assign some probability. Rather, moral realism is a
confused concept. It would be like assigning a probability to 1+1=3.
Maybe there are systems in which you could assign probabilities less than 1 to logical truths without undercutting the foundation within which you're speaking. I agree that to the extent that probability expresses a feeling, it does
feel like logic could be wrong. But I don't know how to express this without talking nonsense.
Of course, I could also be wrong: the brain processes that lead me to believe that moral realism is confused might themselves be flawed. Indeed,
I'm not 100% sure that 53 is a prime number. As I said in an
old blog post:
I've done enough math homework problems to know that my probability of making an algebra mistake is not only nonzero but fairly high. And it's not incoherent to reason about errors of this type. For instance, if I do a utility calculation involving a complex algebraic formula, I may be uncertain as to whether I've made a sign error, in which case the answer would be negated. It's perfectly reasonable for me to assign, say, 90% probability to having done the computation correctly and 10% to having made the sign error and then multiply these by their corresponding utility-values-if-correct-computation. There's no mystery here: I'm just assigning probabilities over the conceptually unproblematic hypotheses "Alan got the right answer" vs. "Alan made a sign error."
In practice, of course, it's rarely useful to apply this sort of reasoning, because the number of wrong math answers is, needless to say, infinite. [...] When someone objects to a rationalist's conclusion about such and such on the grounds that "Your cognitive algorithm might be flawed," the rationalist can usually reply, "Well, maybe, sure. But what am I going to do about it? Which element of the huge space of alternatives am I going to pick instead?"
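(To make the sign-error arithmetic in that quote concrete, here's a minimal sketch; the utility value U = 100 is just an assumed placeholder, not a real computation.)

```python
# Illustrative sketch of the sign-error reasoning quoted above.
# The utility value U is an assumed placeholder.
U = 100.0                            # utility if the algebra was done correctly
p_correct, p_sign_error = 0.9, 0.1   # probabilities over "right answer" vs. "sign error"
expected_utility = p_correct * U + p_sign_error * (-U)
print(expected_utility)              # 80.0 -- the possibility of a sign error shrinks the estimate
```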
Coming back to the rationalist's question of which alternative to pick instead: perhaps one answer could be "Beliefs that fellow humans, running their own cognitive algorithms, have arrived at." After all, those people are primates trying to make sense of their environment just like you are, and it doesn't seem inconceivable that not only are you wrong but they're actually right. This would seem to suggest some degree of philosophical majoritarianism.
So maybe the Pascalian argument does hold some water -- but only if we're prepared to accept that it also does in the case of libertarian free will, dualism, and theism.

In practice, though, I don't care much about the Pascalian possibilities for moral realism. This gets at the heart of emotivism in general: I care about what I care about, and if I don't feel emotionally obligated to follow this (apparently absurd) conclusion, then I won't.
In any event, it's not as though nothing matters if moral realism is false; it's just that nothing objectively matters. Things still matter to me. I don't know how to linearly combine the cases where moral realism is true and where it's false to come up with an expected-value answer. You were getting at this same point when you said, "but it seems like if you could say that your confidence in such a theory was pretty high, and the reasons for action as strong as realist ones - then the effect of moral realism (and so uncertainty) is diluted."
All of that said, I don't think moral uncertainty is totally useless, for the following reason: I (happen to) care (to some degree) about changes in opinion that my future self would undergo if it learned more about how the world works and experienced a wider variety of emotions and life-events. For example, I think it's important to study mechanisms of suffering and whether they exist in insects, and after that study, my judgment about whether insects suffer in a morally relevant way will be a better one than it is now. So moral disagreement does matter in the sense that it offers candidates for what a more-informed version of myself might come to believe upon further research.
But this doesn't lead to full-blown ethical majoritarianism or coherent extrapolated volition. Ultimately, it's still my feelings that I care about, and the feelings of others are only relevant as evidence insofar as they come from minds with a high degree of similarity to my own. The more I think another mind is emotionally different from mine, the less weight I give to its conclusions. I give almost no weight to suffering-maximizer minds (which are not just a lofty thought experiment but must literally exist somewhere in our multiverse).
How much weight I want to give to what I might feel in other circumstances is itself subject to my emotions. It's a tunable parameter based on how strongly I feel that doing this matters.
---
ETA, 7 April 2013:
So why don't I give high weight to moral uncertainty, even though I've changed my ethical views many times? It's because I happen not to care that much about future changes to my values, except in cases where I think I don't know enough about the situation to form a judgment at all (e.g., with respect to whether insects suffer).
My moral intuitions change based on fluctuations in my neurochemistry, as well as in the longer term based on what kinds of thinkers have inspired me lately. There's not necessarily "progress" happening here: It's just like a leaf in an alleyway getting blown back and forth in various directions by the wind. Why not just go with what I want where I am rather than trying to imagine the average of all possible places I could be blown? What would that average even look like? Should I include the possibility of being brainwashed to think that needless torture was wonderful?
I'm pretty confident that there's not a unique stance to which my brain would converge over a broad set of modifications, influences, and experiences. Where I would end up depends a lot on my neurochemistry and life history, and I realize how path-dependent it is. This doesn't mean I'm inspired to imagine what other places I might have ended up upon different rolls of the dice. I would then have to decide how to pick the weightings for different possible histories, and that choice would itself be arbitrary. If it's all arbitrary, why not just do what I feel is right now? That can include giving weight to what my future self thinks about a given values-based question, but only insofar as I care about doing that.