This is older than even Unix. Check
https://hci.stanford.edu/winograd/shrdlu/
Yes, surely, but those techniques were implemented as deterministic, hand-written algorithms, as opposed to artificial neural networks. Patrick Winston comes to mind: "the new technologies do the same things we used to do, only worse" (not a literal quotation).
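To make "deterministic, hand-written algorithms" concrete, here is a toy sketch in Python of the SHRDLU style: explicit symbolic world state plus hand-coded rules, with no learned parameters anywhere. (SHRDLU itself was written in Micro-Planner and Lisp; the world contents and function names below are invented purely for illustration.)

```python
# Toy blocks-world sketch, NOT SHRDLU's actual code: every behavior
# traces to an explicit, inspectable rule rather than learned weights.

world = {"block1": {"color": "red", "on": "table"},
         "block2": {"color": "green", "on": "block1"}}

def clear(obj):
    """An object is clear if nothing in the world sits on top of it."""
    return all(props["on"] != obj for props in world.values())

def put_on(obj, dest):
    """Move obj onto dest, enforcing blocks-world constraints symbolically."""
    if not clear(obj):
        return f"I can't move {obj}; something is on top of it."
    if dest != "table" and not clear(dest):
        return f"I can't put {obj} on {dest}; {dest} is not clear."
    world[obj]["on"] = dest
    return "OK."

def command(text):
    # "Parsing" is just pattern matching against a fixed command template.
    words = text.lower().rstrip(".").split()
    if words[:1] == ["put"] and "on" in words:
        obj, dest = words[1], words[words.index("on") + 1]
        return put_on(obj, dest)
    return "I don't understand."

print(command("put block2 on table"))   # OK.
print(command("put block1 on block2"))  # OK.
```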
I think this effort is about modelling networks that "approach the environment more like a human would" (e.g. "try and see shapes instead of textures").
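For what "see shapes instead of textures" might mean in practice, here is a hedged sketch (my assumption, not anything from the linked article): training-time augmentations that scramble texture cues while preserving silhouettes, in the spirit of the shape-bias literature, so the network is nudged toward shape.

```python
# Hedged, illustrative pipeline only: suppress texture cues so a
# classifier has to lean on shape. Requires torchvision.
import torchvision.transforms as T

shape_bias_train = T.Compose([
    T.RandomGrayscale(p=0.5),          # drop color, a strong texture cue
    T.ColorJitter(0.8, 0.8, 0.8, 0.4), # randomize remaining color statistics
    T.GaussianBlur(kernel_size=9),     # blur fine texture, keep contours
    T.ToTensor(),
])
```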
It is not clear to me which problem is being solved. Is there a road towards progress on something like "fuzzy Minsky" (trying to convey the idea)?
AI! AI! SHRDLU FHTAGN
Isn't "understands" a little too ambitious for what it actually does?
Why would it be? What is understanding, if not simply having the right mental model?
A mental model implies a mind is involved.
There is no such thing as a mind, nor a mental model. That’s what B.F. Skinner would say. For him, there was just an organism and the contingencies of the environment that shaped its behavior.
That’s actually very close to what is happening in machine learning today. Those structuralists who talked about a “mind” or “mental concepts” would be at a loss to understand, much less build, anything like GPT-3 or Tesla Autopilot.