>b's weblog

News. Journal. Whatever.


Errors and Mistakes in Joscha's “How to build a mind”

Because this ideology is starting to enter the CCC, I'm trying to explain what the problem with naive mechanicism is. As an example, I will criticize Joscha's talk “How to build a mind” at 30C3.

The reason why I want to help enlighten people about what is being discussed here can be found in the development of the scientific as well as the public debate about topics like measuring feelings or thoughts, computing natural language and its meaning, simulating “minds”, and finally thinking – like Joscha – that intelligence is itself a computational problem.

Actually, some of these ideas come from people who are fighting fundamentalist religion because of its influence on the educational system. There is nothing wrong with that, but it leads them to start believing themselves. However, science does not need any belief except in scepticism. So remaining sceptical is required to stay within the domain of science – being sceptical even about what you believe to know, about what you believe to be provably true. Let's start with our example; here is Joscha's talk (preface: I will not comment on Joscha's bashing of philosophers, of the philosophy of mind, or of the humanities here – I will only criticize statements and conclusions of his which deliver factual content):

[Video: Joscha's talk “How to build a mind” on YouTube]

To start the discussion, we should examine the competence of informatics (which Joscha probably means by “computer science” here) to describe a mind. Well, doesn't neuroscience see the central nervous system (CNS) as a computer which implements a recurrent neural network (RNN)? Shouldn't computer science then be responsible for describing the software which runs on that computer? And shouldn't informatics be the science which describes how such software should work, and what it is all about? Well, yes and no.

Actually, there are some problems with that view:

  1. The problem of discrete vs. analog computers. The CNS doesn't look like a discrete computer. The models neuroscientists use usually don't describe discrete computers. So most of the things we know about discrete computers will probably not apply – we have to be careful in our argumentation.
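The gap between analog dynamics and discrete models can be illustrated with a toy sketch (my own hypothetical example, not a claim about the CNS): discretizing even the simple continuous law dx/dt = x gives answers that depend on the arbitrary step size we pick, so a discrete computer only ever approximates the analog system.

```python
# Toy illustration: discretizing the analog law dx/dt = x.
# The exact analog solution at t = 1 is e = 2.71828...
# A discrete (Euler) model only approximates it, and the answer
# depends on the step size we happen to choose.
import math

def euler(steps: int) -> float:
    """Integrate dx/dt = x from x(0) = 1 to t = 1 in `steps` steps."""
    h = 1.0 / steps
    x = 1.0
    for _ in range(steps):
        x += h * x  # discrete update: x_{n+1} = x_n + h * x_n
    return x

exact = math.e
coarse = euler(10)      # roughly 2.59: visibly off
fine = euler(10000)     # much closer, but still not the analog value
```

Refining the step size narrows the gap but never closes it – which is the point: the discrete model is a family of approximations, not the analog system itself.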

  2. The Simulation Problem. If the CNS is an analog computer, then there will be software. But it will not be possible to simulate that software effectively. This is true because of what we can call the Simulation Problem: it's like with weather simulations – the further we simulate, the more the simulation diverges from the simulated system. Simulations in general diverge; only in a few special cases do they converge, and for most problems it is not possible to find a converging simulation. Ask the weather people for the reasons if you don't believe it. Actually, emulation of a complex system is only possible for discrete systems. As it looks today, the CNS does not resemble one of those simple cases for which we can find a converging simulation, let alone an emulation – not at all.
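The divergence the weather people know so well can be sketched in a few lines (a standard textbook toy, the logistic map, chosen here for illustration): two simulations starting almost identically drift apart until one is useless as a prediction of the other.

```python
# Sketch of the Simulation Problem: in a chaotic system, two runs that
# start almost identically diverge until they predict nothing about
# each other. The logistic map with r = 4 is a standard toy example
# of such sensitive dependence on initial conditions.

def logistic(x: float, steps: int) -> list[float]:
    """Iterate the chaotic map x -> 4 * x * (1 - x)."""
    out = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = logistic(0.2, 50)
b = logistic(0.2 + 1e-10, 50)   # perturbed by one part in ten billion
gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
# After a few dozen steps the tiny initial error has grown to order 1.
```

The initial difference is below measurement precision, yet the late trajectories disagree completely – exactly the behaviour that makes long-range weather (and, by the argument above, mind) simulation diverge.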

  3. The Software Thesis is unproven. The Software Thesis is the assumption that thinking is implemented as software. There is no way to prove or disprove this thesis yet. I assume that the Software Thesis is correct, but I cannot offer any proof. Being a sceptic, I will state it explicitly whenever I base my reasoning on the Software Thesis. This is required for working scientifically.

  4. The category error of mixing up software with thinking. Even if thinking is implemented as software, it belongs to a different category than software. The reason is the semiotic triangle: hold out your hand, palm up. Now think about how torridity would feel in the palm of your hand. Do you have an idea of it? Sure you have. The word “torridity” is the designator of what we're talking about. The thing itself isn't here – there could be real torridity, but actually there is none. Obviously we can talk about things which aren't here. There is no real torridity right now; it exists only in your imagination. In your imagination you have an idea of torridity. This is how the semiotic triangle works: we use designators for things we have an idea of. Now think about software: computer programs don't have ideas – those exist only in the minds of the programmers. But it is the computer programs themselves (without the programmers) that are meant when we talk about software. By “software” we don't mean the programmers, but their programs. So software and thinking form two different categories. What is in one of them will not be in the other. But there may be a relationship, perhaps a functional one (see the Software Thesis).
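The point can be made concrete with a toy sketch (my own hypothetical example): a program can manipulate the designator – the string "torridity" – perfectly well, yet nowhere in the running program is there an idea of heat; only its programmer had that.

```python
# Toy sketch of the semiotic triangle: a program manipulating the
# DESIGNATOR "torridity" without any idea of the referent.
# Everything it touches is a token mapped to other tokens.
lexicon = {
    "torridity": "intense heat",   # a string about heat, not heat itself
}

def explain(word: str) -> str:
    """Return another designator; no referent and no idea is involved."""
    return lexicon.get(word, "unknown designator")

answer = explain("torridity")
# The program "talks about" torridity although no heat is present --
# but unlike you, it has no idea of torridity: it only shuffles tokens.
```

The idea of torridity lives in the programmer's mind, not in the program – which is exactly why software and thinking fall into different categories.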

  5. The Identity Error. The naive physical experiments about what in your brain is active when you do this or that are mostly useless. Consider this: the ALU in the CPU of your laptop is the “spelling center” of your word processor. Each time you start the spell checker, the ALU heats up. This can be measured. The hard disk is the “print center”: if your hard disk is damaged, printing will no longer work. The misunderstanding here comes from confusing an identity relation with a functional relation. Of course there is a functional relation between printing in your word processor and the hard disk. But that does not make your hard disk the “print center” of your laptop, does it? So what, then, is the “speech center” of your brain? (There was an enlightening article about that problem in Wired lately.)
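The difference between a functional relation and an identity relation can be sketched in code (a hypothetical toy mirroring the hard-disk example above): removing a component breaks a function that depends on it, yet that does not make the component identical with the function.

```python
# Toy model of the Identity Error: printing functionally DEPENDS on the
# disk (remove it and printing breaks), but the disk is not thereby the
# "print center" -- functional dependence is not identity.

class Laptop:
    def __init__(self):
        self.components = {"disk", "alu", "ram"}

    def print_document(self, doc: str) -> str:
        # Spooling needs the disk: a functional relation, nothing more.
        if "disk" not in self.components:
            raise RuntimeError("printing broken: no disk to spool on")
        return f"printed: {doc}"

laptop = Laptop()
ok = laptop.print_document("report")    # works while the disk is present

laptop.components.discard("disk")       # "damage" the disk ...
try:
    laptop.print_document("report")
    broken = False
except RuntimeError:
    broken = True                       # ... and printing dies with it
```

The experiment "damage X, observe that Y fails" only ever establishes the functional relation – concluding that X *is* the Y-center is the Identity Error.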

These are some of the problems. Now let's start with Joscha's talk:

At 2:25 Joscha talks about simulating systems which resemble a mind. Additionally, at 2:30 he talks about testing. As we have seen, minds cannot be simulated with much success. Tests will fail because of the Simulation Problem.

At 2:32 Joscha claims that computer science has the right tools for simulating a mind. This is wrong. It's wrong because of the Simulation Problem. And it's wrong because of the differences between discrete and analog computers. Computer science today still lacks an understanding of non-discrete computers. Additionally, what Joscha claims implies a category error: even if he could simulate a mind effectively, this would not yet show anything about thinking – only about the software implementation of thinking (if the Software Thesis is correct). Even then, reasoning from the software implementation to the domain of thinking would still be needed – and this reasoning is not part of informatics; it is not part of computer science as we do it today.

At 3:10 Joscha talks about developing and testing complex theories. Computer scientists have tools for developing and testing complex theories, he claims. That's wrong. For developing theories, computer science has no tools at all. For proving theories, there is a way: proving the formal correctness of a theory. It can be proven whether a theory is consistent or not. It can be proven whether it matches a specification. But that's it. There is no way in computer science to prove whether a theory is sensible. There is no way to prove whether a specification is sensible. The meaning of theories is not within the scope of proofs in computer science at all – otherwise the physicists would all be unemployed next month, as would all other scientists (and many more).
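What computer science can and cannot check here is easy to sketch (a toy example of my own, not from the talk): we can mechanically verify that an implementation matches a specification, but the checker is equally happy with a senseless specification.

```python
# Sketch: formal checking can tell us whether an implementation matches
# a specification -- it cannot tell us whether the specification is
# sensible. Both specs below are mechanically checkable; only one of
# them is useful.

def sort_impl(xs):
    return sorted(xs)

def sensible_spec(inp, out):
    """Spec: the output is the input in ascending order."""
    return out == sorted(inp)

def senseless_spec(inp, out):
    """Spec: the output has the same length as the input.
    Formally fine, practically worthless -- and nothing in the
    checking machinery can tell the difference."""
    return len(out) == len(inp)

data = [3, 1, 2]
result = sort_impl(data)
ok_sensible = sensible_spec(data, result)     # passes
ok_senseless = senseless_spec(data, result)   # also passes
```

Both checks succeed; deciding which specification was worth writing down is exactly the part that lies outside formal proof.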

At 5:10 Joscha suggests it would be possible to “look into mental representations”. This is not possible to date. It is a problem related to the Identity Error: we see some data, but we don't see its meaning.

At 6:50 he claims that “minds are information processing systems”. While this may be true for a computer (and therefore for the CNS), it is already wrong for computer programs. Computer programs are not information processing systems at all – because they aren't systems. The system follows a computer program; the computer program therefore defines a subset of possible trajectories of an information processing system. But we have to distinguish between a program and a software system here. A software system is, in a way, an implementation of an interpreter of its input. And a computer together with a computer program resembles such an interpreter, defined by the computer program. So a software system is an information processing system, because it resembles a computer with a specialized functionality (think of virtualization for a vivid example of this). So isn't a mind just a computer program? No, it's not. But is it a software system? This is a category error. A mind has ideas; software systems haven't (only programmers have). A mind is neither of the two.
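The program/system distinction can be sketched with a minimal hypothetical stack machine (my own toy, chosen for illustration): the program is inert data; the information processing system is the interpreter – standing in for “computer + program” – that follows it.

```python
# A PROGRAM is just data: a list of instructions. It processes nothing
# by itself. The information processing SYSTEM is the interpreter
# (standing in for "computer + program") that follows it.

program = [("push", 2), ("push", 3), ("add", None)]   # data, not a system

def run(prog, stack=None):
    """A minimal interpreter: only computer + program form a system."""
    stack = list(stack or [])
    for op, arg in prog:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

result = run(program)   # the system's trajectory, constrained by the program
```

The list `program` does nothing on its own; only `run` executing it processes information – which is why calling the program itself an “information processing system” conflates the two.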

So much for just the first 7 minutes of Joscha's talk. And these were the basics, the premises he bases his reasoning on. All of them are wrong. Ex falso quodlibet.

published Thu, 09 Jan 2014 19:28:33 +0100 #informatik #ki #philosophie

