Comment by throwhooawayyfoe at 17/01/2020 at 00:15 UTC
8 upvotes, 0 direct replies
It may be the case that in the future we have a workable materialist model of consciousness that passes all the same thresholds of trust we apply to other fields - for example, a model that allows us to simulate consciousness as an emergent property of material systems with accurate predictive power, or produces an AI that avidly attests to its own subjective experience.
But no matter how accurate or complete that model is, there would *still* be many people who claim it’s not enough. Perhaps because modeling emergent properties tends to be mind-bogglingly complicated, and a model of something as complex as consciousness would lie far outside the intellectual grasp of most or all human minds. Or perhaps because they’ve already defined consciousness as containing some magical quality, such that no model could satisfy them despite every proof of its accuracy.
If that occurs, it will be less a failure of the model than of our imaginations and egos.