Self-Driving Cars may not happen

I have doubts about self-driving cars. My entire argument rests on an analogy with software translators.

Software translators began as simple dictionaries. You type in 'cat', and it returns 'le chat'. Next, programmers started to feed in simple rules, so 'black cats' could return 'les chats noirs'. The rules became so advanced that eventually the computer could take a simple sentence such as 'I see a cat' and get the right answer back 90% of the time.
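
To make that concrete, here is a minimal sketch of the dictionary-plus-rules approach, in Python. The word list and the two grammar rules are toy illustrations of my own, not any real system's data.

```python
# A toy rule-based English-to-French translator: a dictionary plus
# two hand-written grammar rules. All data here is illustrative.

DICTIONARY = {
    "cat": ("chat", "m"),  # noun with grammatical gender
    "black": "noir",       # adjective
}

def translate_noun_phrase(adjective: str, noun: str, plural: bool) -> str:
    """Apply two rules: French puts most adjectives after the noun,
    and adjectives agree with the noun in number."""
    fr_noun, gender = DICTIONARY[noun]
    fr_adj = DICTIONARY[adjective]
    if plural:
        fr_noun += "s"
        fr_adj += "s"
        article = "les"
    else:
        article = "le" if gender == "m" else "la"
    return f"{article} {fr_noun} {fr_adj}"

print(translate_noun_phrase("black", "cat", plural=True))  # les chats noirs
```

Every new construction needs another hand-written rule like these, which is how the rule sets grew so elaborate.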

From an outsider's perspective, it seems like we're getting there. It seems like the last step, those few finicky sentences we've yet to teach a computer, is all that stands between us and a fully working Artificial Intelligence. But that last step will not happen in our lifetime, no matter how much money people throw at it.

Examples

Let's take that last sentence and drill into why it won't happen. 'Last step' could mean 'step' in the sense of a stage within a task, or 'step' as in stairs. To pick the right one, the translating computer must understand that we're talking about a process, and therefore that this means 'step' in the sense of a stage. But the computer can't understand that, because it can't understand anything.

These problems arise constantly.

Should we translate the English 'or' into the Polish 'lub', or 'czy'? One means 'or' in a statement, as in 'tea or coffee, either is fine'; the other means 'or' in a question, as in 'tea or coffee, which would you like?'.
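
The obvious mechanical fix is to guess from punctuation, and its failure shows how shallow such rules are. This heuristic is a toy of my own, not anything a real translator uses:

```python
# Guess the Polish 'or': 'czy' for questions, 'lub' for statements.
# Surface punctuation stands in for meaning -- badly.

def translate_or(sentence: str) -> str:
    return "czy" if sentence.strip().endswith("?") else "lub"

print(translate_or("Tea or coffee, which would you like?"))  # czy
print(translate_or("Tea or coffee, either is fine."))        # lub
# An embedded question breaks the rule: this 'or' poses a choice,
# so Polish wants 'czy', but the heuristic sees no question mark.
print(translate_or("I wonder whether you want tea or coffee."))  # lub (wrong)
```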

If a computer finds the Polish sentence 'kot wraca', how does it know whether this is 'a cat returns' (i.e. a new one) or 'the cat returns' (i.e. the cat we've already been introduced to)?
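
Polish has no articles, so the sentence itself never carries that bit of information; it has to come from outside. The flag below is hypothetical, just to show where the missing input would have to go:

```python
# 'kot wraca' doesn't say 'a cat' or 'the cat'. The choice depends on
# whether this cat was mentioned before -- a fact that lives outside the
# sentence. 'already_mentioned' is a hypothetical input for illustration.

def translate_kot_wraca(already_mentioned: bool) -> str:
    return "the cat returns" if already_mentioned else "a cat returns"

print(translate_kot_wraca(already_mentioned=False))  # a cat returns
print(translate_kot_wraca(already_mentioned=True))   # the cat returns
```

Supplying that flag means tracking every entity across the whole text, which is exactly the understanding the computer doesn't have.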

When a book is 'curious', it means the book is strange and interesting. When a person is 'curious', it means they're actively interested in something. When a cat is 'curious', it could mean either. What general rule could we possibly use to tell a computer which is which? A person can see that a book is not interested in anything, because it's not sentient. How do we explain to a computer which things are and are not sentient?
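
The best rule I can imagine is a hand-written list of sentient things, with the word sense chosen from that. The lists here are toy illustrations of my own, and the cat shows why no finite list settles it:

```python
# Pick a sense of 'curious' from a hand-written sentience list.
# The list is illustrative -- and necessarily incomplete.

SENTIENT = {"person", "child", "dog"}

def sense_of_curious(subject: str) -> str:
    if subject in SENTIENT:
        return "actively interested in something"
    return "strange and interesting"

print(sense_of_curious("person"))  # actively interested in something
print(sense_of_curious("book"))    # strange and interesting
print(sense_of_curious("cat"))     # strange and interesting -- yet a curious cat may be either
```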

The Necessity of ML

No computer can answer any of these questions, because the answer requires understanding the context, and computers understand nothing. In order to have a computer translate a standard paragraph, we would first have to create general artificial intelligence, then teach it how to speak, and finally teach it how to speak a second language.

Self-driving cars have been almost ready for a while now, and I hope they're ready soon. But I wonder if that last little bit isn't so little. Perhaps cars will always have trouble telling the difference between a child running across the road and a plastic bag blowing across it. Perhaps they won't play well with cyclists, because they can't overtake, or they overtake badly.

Part of driving is understanding that someone will react to you, on the assumption that they know you will react to them, and so on, back and forth. It might seem a trivial thing to pass someone with a knowing glance, but that kind of mutual understanding could require a massive step forward in how much cars understand.