Great discussion!
Why not? We know at the most basic level that the brain works via a series of electrochemical switches between neuronal connections - why is this not analogous to a series of transistors?
Or are you referring to the mind rather than the brain? In which case I agree there is no unifying theory of mind; however, the underlying organ from which it arises (the brain) is structured as I have described (unless you have another theory, which I'd be most interested to hear).
The following is a good summary of why the "brain is a computer" is just a metaphor.
"Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
"Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 64 bits [sic: a byte is standardly 8 bits], and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word."
(link: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer)
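For what it's worth, the encoding claim in the quoted passage is easy to check directly (a minimal sketch; note that on actual machines a byte is 8 bits, not 64):

```python
# "dog" really is just three bytes; each byte is a pattern of 8 bits.
# The numbers below are ASCII code points - pure convention, with no
# meaning inside the machine itself.
word = "dog"
data = word.encode("ascii")

print(list(data))                         # [100, 111, 103]
print([format(b, "08b") for b in data])   # ['01100100', '01101111', '01100111']
```

The same point scales up: an image file is a much longer run of such bytes, preceded by header bytes that tell the software how to interpret what follows.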
Computation is a useful metaphor by which we can study the brain, but it does no explanatory work. It may even be a correct analogy. But we can't just assume this. We need some theory of why neural processing is akin to computational processing. I haven't come across any in the cognitive science literature myself yet. It is possible that the brain is just a computer, but for now, it's just a metaphor.
Is this not a circular argument, though? Previously you mentioned that the brain is by definition not a Turing machine, and that it can therefore feel and emote, while a computer (which is a Turing machine) can "by definition" do none of these things.
Yet a sea sponge is comprised of a very simple neural net (a series of simple biological switches, or transistors, I would argue), and yet we, with our intelligence, consciousness and so on, have evolved from primitive animals such as these by making repeated imperfect copies over aeons (hence my comment above that I didn't explain very well!)
You are right, let me rephrase my point better. I am claiming that a Turing machine, by definition, cannot be conscious. This is because of how it works. It literally parses strings of binary over and over again, given whatever input a human supplies, and outputs another string of meaningless binary that some other human interprets as having meaning. As far as the computer is concerned, it has never encountered meaning at all.
If the brain is a Turing machine, then either we have misunderstood how a Turing machine works or there is more to these strings of binary than we previously thought. Somehow they must encode semantic information too, and not just be syntactic engines. How is this possible? What is the theory behind it? If AI is truly possible, and a Turing machine can feel, think and actually mean what it says, then we need to explain how this is possible.
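The "syntactic engine" point can be made concrete with a toy sketch (an illustration only, not a claim about real brains or computers): a minimal Turing-style machine that rewrites symbols purely by table lookup, with no access to whatever those symbols stand for.

```python
# A minimal Turing-style machine that flips every bit on its tape.
# It applies rules of the form (state, symbol) -> (write, move, next_state).
# Nothing in the machine "knows" what the bits mean; any meaning is
# assigned by the human who reads the output.

def run(tape):
    rules = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
    }
    cells = list(tape)
    state, head = "flip", 0
    while head < len(cells):
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells)

print(run("1011"))  # -> "0100"
```

Whether "1011" encodes a number, a pixel, or nothing at all makes no difference to the machine; that is the sense in which it is a syntactic engine. The open question in the discussion above is whether anything more than this is happening in neurons.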
This is the problem with the sea sponge example. Sure, I can see how the dim workings of semantic information in a sea sponge could evolve into a human writing a symphony or programming a computer, but the really hard step is getting to that semantics in the first place. Do bacteria have mental states with semantic information? What's the difference? What are we adding to syntax to get there?
I'm not asking all these questions as a rhetorical device but because this truly is the state of our understanding of how our brain works, and the crude analogy between it and a computer. We don't have any of the basics worked out. These are important questions and neither you nor I have an answer to them.
Good question - but surely we can't limit ourselves only to things we have certainty about?
I would say that there is a good deal of evidence for emergence - as you say in consciousness, evolution and so on. With the correct underlying conditions, complex order can emerge.
And what is the alternative? That someone is designing these things? Also possible but likely impossible to prove.
How could we really prove that a machine could be sentient anyway? How would we even prove that anyone else (apart from your own self) is?
No, of course; many prominent philosophers, cognitive scientists, neuroscientists, computer scientists etc. are betting on the emergentist horse. There's nothing wrong with thinking it's the right story. It's one thing to say "well, we don't know how consciousness happened, but it must have emerged out of matter somehow" because all the other options (physicalism, dualism, idealism(!)) seem worse - consciousness, after all, is a real phenomenon that already exists and must be explained somehow, which licenses backing a poor theory just because the alternatives are worse. But AI isn't a real phenomenon that already exists and must be explained. Rather, you're using a poor theory that does no explanatory work in one realm to make predictions about how and why AI will definitely come about. This is an unwarranted step. AI isn't even a reality, so why should we explain it using a poor theory? Let's at least try to justify our belief that AI is possible using more than a hand-waving argument.
I get the impression you are not a full-blown emergentist; the idea that 'out of complexity order emerges' seems to suggest that there is something about the underlying conditions, as you put it, that makes it the case that consciousness emerges. So what is it about the underlying conditions, then, that causes consciousness? There's an interesting paper in this regard by Eric Schwitzgebel. If all it takes is for the underlying conditions, that is, the material conditions, to be in the right order for consciousness to emerge, then the United States is probably conscious (link: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-140130a.htm). That seems to me to be an incredible result (if you buy his argument). It's well worth a read anyway because of how well written it is and how clearly it grasps these issues. In any case, I think we need more constraints on how matter must be arranged and what it must do for consciousness to happen. But we have no theory for these constraints, and we have only the one empirical example of it, here on earth. There's just not enough data to make a case for "complexity => order" here. I don't mean to dismiss it; it's an interesting idea, but what's the evidence?
This would be a necessary but not sufficient step for explaining our consciousness.
I guess that if you and I were to take a trip in a time machine to see the spawning of the first amoeba, we could have described the eventual development of sophisticated forms of consciousness as "magical thinking", and yet here we are!
These are all great questions, but for me the premise of my argument remains: what is there to STOP the ongoing march of intelligence emerging over time (for which, hopefully you will agree, there is a lot of evidence)?
The underlying point seems to be that there is something inherently different about a biological circuit conducting electrical impulses compared to a silicon circuit. What that property is nobody can yet identify or explain - isn't that the real magical thinking here?
Copernicus, Darwin and Freud progressively showed how man was not central in the grand schemes of nature and I think that, one day, another name will be added to that list and his/her achievement will be to show that consciousness (as far as it can be proven) can be created digitally.
We are of course limited in being able to understand the question through the prism of our own minds but these are great problems to chew over.
I agree, it is a mystery. But what we have evidence for is biological intelligence. It's entirely possible, maybe even probable, that a more intelligent species will come along after we are long gone. But no computer is intelligent. No computer has ever been intelligent and until we can say exactly how computational intelligence will work, we have no good reason to believe that any computer will ever be intelligent either.