Eliezer Yudkowsky
Xposted w/ edits from a comment on Effective Altruism, asking who or what I cared about:
I think that I care about things that would, in your native mental ontology, be imagined as having a sort of tangible red-experience or green-experience.
However, my theory of mind also says that the naive theory of mind is very wrong, and suggests that a pig does *not* have a more-simplified version of that tangible experience.
It takes additional effort of imagination to imagine that what you think of as the qualia of an emotion is actually the impact of the cognitive algorithm upon the complicated person listening to it, and not just the emotion itself. Like it takes additional thought to realize that a desirable mate is desirable-to-you, not inherently desirable.
To spell it out in more detail, though still using naive and wrong language for lack of anything better: my model says that a pig that grunts in satisfaction is not experiencing simplified qualia of pleasure, it's lacking most of the reflectivity overhead that makes there be someone to experience that pleasure. Intuitively, you don't expect a simple neural network making an error to feel pain as its weights are adjusted, because you don't imagine there's someone inside the network to feel the update as pain. My model says that cognitive reflectivity, a big frontal cortex and so on, is probably critical to create the inner listener that you implicitly imagine being there to 'watch' the pig's pleasure or pain, but which you implicitly imagine not being there to 'watch' the neural network having its weights adjusted.
What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that---as simple as a neural network having its weights adjusted---and that will feel like something; there will be something that it is like to be that thing, because there will be something self-modely enough to feel like there's a thing happening to the person-that-is-there.
If one's mind imagines pigs as having simpler qualia that still come with a field of awareness, what I suspect is that their mind is playing a shell game wherein they imagine the pig having simple emotions and that feels to them like a quale, but actually the imagined inner listener is being created by their own minds doing the listening. Since they have no complicated model of the inner-listener part, since it feels to them like a solid field of awareness that's just there for mysterious reasons, they don't postulate complex inner-listening machinery.
Contrast to a model in which qualia are just there, just hanging around, and you model other minds as being built out of qualia, in which case the simplest hypothesis explaining a pig is that it has simpler qualia but there's still qualia there. This is the model that I suspect would go away in the limit of better understanding of subjectivity.
So I suspect that vegetarians might be vegetarians because their models of subjective experience have solid things where my models have more moving parts, and indeed, where a wide variety of models with more moving parts would suggest a different answer. To the extent I think my models are truer, which I do or I wouldn't have them, I think, philosophically, that they are mistaken.
If there were no health reason to eat cows I would not eat them, and in the limit of unlimited funding I would try to cryopreserve chimpanzees once I'd gotten to the humans. In my actual situation, given that diet is a huge difficulty to me with already-conflicting constraints, I do eat them.
Rob Bensinger
Now we're not just picking on Chalmers. We have a general causal theory of evidence, and as a side-effect it predicts that brains can't know about epiphenomenalism.
If you assign a low prior to 'all diamonds are epiphenomenally haunted', no possible observation can raise that probability, because by hypothesis the haunting leaves no trace in any brain.
Rob Bensinger
I don't think this is an adequate response to Eliezer's objection. Aside from occasionalism, I haven't yet seen any adequate response to Eliezer's objection in the literature. The problem is that this view treats 'causal relevance' as a primitive, like we can just sprinkle 'causality' vaguely over a theory by metaphysically identifying phenomena really really closely, without worrying about exactly how the physical structure of a brain ends up corresponding to the specific features of the phenomena. The technical account of evidence Eliezer is giving doesn't leave room for that; 'causal relevance' is irrelevant unless you have some mechanism explaining how judgments in the brain get systematically correlated to the specific facts they assert.
If the zombie argument works, quiddities can't do anything to explain why we believe in quiddities, because our quiddities can be swapped out for nonphenomenal ones without changing our brains' dynamics. If the qualia inversion argument works, quiddities can't explain why we have accurate beliefs about the particular experiences we're having (e.g., as William James noted, that we're experiencing phenomenal calm as opposed to phenomenal agony), because the quiddities can be swapped out for other phenomenal quiddities with a radically different character. The very arguments that seek to refute physicalism also refute all non-interactionist alternatives to physicalism.
Systematic philosophy is relatively unpopular in modern academic analytic philosophy, so different fields often carry on their debates in isolation from each other. And systematic philosophy is -especially- unpopular these days among hard-nosed reductionists -- the sorts of academics most likely to share Eliezer's interests, background, and intuitions.
David Pearce
Our classical digital computers, on my view, are zombies: they lack bound phenomenal experience.
Anyhow, and ethically much more important from a practical point of view...
Suppose, say, I hold a minority position. I believe I have cogent arguments that tomorrow's "mind uploads" lack bound phenomenal experience. "Uploads" are just digital zombies that I can molest at will - with no more compunction than the hostiles of today's violent games. Let's suppose the great majority of the scientific community are unpersuaded by these arguments. What is the ethically appropriate way to behave? Forcefully reiterate my position, lament how researchers haven't understood or properly worked their way through the long but compelling chain of reasoning demonstrating that I'm correct - and then cause grievous bodily harm to [what I'm convinced are] just zombies? Or have the cognitive humility to acknowledge that I could well be wrong? Acting out the consequences of idiosyncratic views can be ethically catastrophic.
Factory farming pigs for the dinner table on the contested assumption they are just zombies is no different.
Rob Bensinger
That isn't what I meant by 'sprinkled'. I meant that you need an actual mechanism (at least a toy one, as a proof of concept) for how the brain ends up correlated with its referent; saying the referent is 'causally relevant' or 'integral' doesn't give us such a mechanism, even schematically.
Suppose you claim to know that the 'inner nature' of a piece of bread is the Body of Christ. Without wading into any complex theology, it's perfectly reasonable for me to doubt your claim •simply on the grounds that you haven't given me a mechanism for how you came to know about the bread's 'inner nature'•. (And I can doubt this just as easily whether or not I agree with you that bread •has• an 'inner nature'.)
Now, there are ways to solve this problem. The Matrix God might have told you directly about the bread's inner nature. (This is also the only solution I know of to Eliezer's objection to non-interactionist dualism.)
On the other hand, just saying 'the Body of Christ is MAXIMALLY causally relevant, because it just IS the bread, the bread's true nature'... well, that doesn't even begin to address the worry, because it doesn't explain how the true nature differentially produces a change in the claimant's brain. The problem is the same whether you're talking about the 'inner nature' of a sensory object, v. the 'inner nature' of your own brain; somehow the actual pattern of neural firings in your brain has to causally interact with this inner nature in a way that causes the one to structurally resemble the other (like portions of my brain's visual centers structurally resemble the objects I'm looking at).
(I agree with you on the practical question. One's view on the hard problem of consciousness should inform one's view of animal welfare, but most such views agree that animals have a non-negligible chance of being moral patients. Eliminative physicalism denies that animals are conscious, but in a fashion that plausibly makes it •harder• to assert human exceptionalism.)