So I read an interesting post by Yudkowsky that summarized my own thoughts on the food industry and self-awareness more accurately than other arguments I've seen.

I think that I care about things that would, in your native mental ontology, be imagined as having a sort of tangible red-experience or green-experience, and I prefer such beings not to have pain-experiences. How highly I value happiness is more complicated. However, my theory of mind also says that the naive theory of mind is very wrong, and suggests that a pig does *not* have a more-simplified form of tangible experiences. My model says that certain types of reflectivity are critical to being something it is like something to be. The model of a pig as having pain that is like yours, but simpler, is wrong. The pig does have cognitive algorithms similar to the ones that impinge upon your own self-awareness as emotions, but without the reflective self-awareness that creates someone to listen to it.

It takes additional effort of imagination to see that what you think of as the qualia of an emotion is actually the impact of the cognitive algorithm upon the complicated person listening to it, and not just the emotion itself. Just as it takes additional thought to realize that a desirable mate is desirable-to-you and not inherently-desirable; without this realization, people draw swamp monsters carrying off women in torn dresses.

To spell it out in more detail, though still using naive and wrong language for lack of anything better: my model says that a pig that grunts in satisfaction is not experiencing simplified qualia of pleasure; it's lacking most of the reflectivity overhead that makes there be someone to experience that pleasure. Intuitively, you don't expect a simple neural network making an error to feel pain as its weights are adjusted, because you don't imagine there's someone inside the network to feel the update as pain. My model says that cognitive reflectivity, a big frontal cortex and so on, is probably critical to create the inner listener that you implicitly imagine being there to watch the pig's pleasure or pain, but which you implicitly imagine not being there to watch the neural network having its weights adjusted.
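To make the analogy concrete, here is a minimal sketch, in Python, of what "a simple neural network having its weights adjusted" amounts to. The code is illustrative only, not anything from the post: a delta-rule update on a single linear unit. The point it makes is the one above: an error signal drives an arithmetic change to the weights, and nothing in the loop supplies an inner listener to experience that update as pain.

```python
# A minimal sketch (hypothetical, not from the original post): one linear
# unit trained by the delta rule. The error signal is the closest thing
# here to "pain at being wrong", yet the loop is just arithmetic on
# numbers; no part of it constitutes a listener that feels the update.

def train_step(weights, inputs, target, lr=0.1):
    # Forward pass: weighted sum of inputs.
    prediction = sum(w * x for w, x in zip(weights, inputs))
    # Error signal: how wrong the network was.
    error = target - prediction
    # Weight adjustment: the update happens; no one is home to feel it.
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):
    weights = train_step(weights, inputs=[1.0, 2.0], target=1.0)
print(weights)  # converges toward weights whose prediction equals 1.0
```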
What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that, as simple as a neural network having its weights adjusted, and that will feel like something; there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there's a thing happening to the person-that-is-this-person.

If one's mind imagines pigs as having simpler qualia that still come with a field of awareness, what I suspect is that their mind is playing a shell game wherein they imagine the pig having simple emotions, and that feels to them like a quale, but actually the imagined inner listener is being created by their own mind doing the listening. Since they have no complicated model of the inner-listener part, since it feels to them like a solid field of awareness that's just there for mysterious reasons, they don't postulate complex inner-listening mechanisms that the pig could potentially lack.

You're asking the question "Does it feel like anything to me when I imagine being a pig?" but the power of your imagination is too great; what we really need to ask is "Can (our model of) the pig supply its own inner listener, so that we don't need to imagine the pig being inhabited by a listener, and we'll see the listener right there explicitly in the model?" Contrast this to a model in which qualia are just there, just hanging around, and you model other minds as being built out of qualia, in which case the simplest hypothesis explaining a pig is that it has simpler qualia, but there are still qualia there. This is the model that I suspect would go away in the limit of better understanding of subjectivity.

So I suspect that vegetarians might be vegetarians because their models of subjective experience have solid things where my models have more moving parts, and indeed, where a wide variety of models with more moving parts would suggest a different answer. To the extent I think my models are truer, which I do or I wouldn't have them, I think philosophically sophisticated ethical vegetarians are making a moral error; I don't think there's actually a coherent entity that would correspond to their model of a pig. Of course I'm not finished with my reductionism, and it's possible, nay, probable, that there's no real thing that corresponds to my model of a human, but I have to go on guessing with my best current model.

And my best current model is that until a species is under selection pressure to develop sophisticated social models of conspecifics, it doesn't develop the empathic brain-modeling architecture that I visualize as being required to actually implement an inner listener. I wouldn't be surprised to be told that chimpanzees were conscious, but monkeys would be more surprising. If there were no health reason to eat cows, I would not eat them, and in the limit of unlimited funding I would try to cryopreserve chimpanzees once I'd gotten to the humans.

In my actual situation, given that diet is a huge difficulty for me with already-conflicting optimization constraints, given that I don't believe in the alleged dietary science claiming that I suffer zero disadvantage from eliminating meat, and given that society lets me get away with it, I am doing the utilitarian thing to maximize the welfare of much larger future galaxies, and spending all my worry on other things. If I could actually do things all my own way and indulge my aesthetic preferences to the fullest, I wouldn't eat *any* other life form, plant or animal, and I wouldn't enslave all those mitochondria.

Thoughts, anyone?
Posted on: Wed, 12 Nov 2014 03:32:31 +0000
