I think the recent advances in autocomplete/coding AIs are interesting, but they are definitely a double-edged sword.
Writing code that is correct and appropriate requires knowing the scope and context in which said code runs. The higher-level or larger the program, the less likely it is that a simple textual prompt to an AI captures all of the required details. The larger the scope, the more difficult it is to ascertain that the code is actually correct. But when you think about it, our current software architecture and engineering practices are based on how humans understand the structure and behavior of a program. Like an AI playing Chess or Go, perhaps it can ultimately produce output that works brilliantly but is in fact unintelligible: a disastrous mess of spaghetti code, an impossible black box for humans to unravel?
Do we need to create debugger AIs to fix bugs in the AI-produced code, since the human operator may not even understand what the code does? Perhaps we can combine this with Test-Driven Development, so humans just write the tests and an AI writes the implementation that produces the correct behavior to pass the tests. What could go wrong?!
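As a toy sketch of that workflow (the function and its tests are invented for illustration), the human supplies only the tests, and the AI supplies whatever implementation happens to pass them:

```python
import re

# The human writes only the tests...
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# ...and the AI writes an implementation that passes them.
def slugify(text: str) -> str:
    # Lowercase the text and join the alphanumeric runs with hyphens.
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))
```

The tests pass, but nothing was ever said about accented characters, so slugify("Tämä") silently comes out as "t-m". The suite defines correctness, and everything it doesn't mention is up for grabs.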
In practice, all of this hinges on what source data the AI is trained on. A general-purpose AI trained on all the publicly available source code will by definition produce mediocre code. Shouldn't we be aiming for the AI to only generate top-notch code? I suppose there isn't enough training material if we limit the source data to the subset that is judged to be great quality according to some metric. How would such a metric even be defined?
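To see why that's hard, here is a deliberately naive sketch of such a metric; the Repo fields and every weight in it are invented:

```python
from dataclasses import dataclass

@dataclass
class Repo:
    stars: int
    has_tests: bool
    lint_warnings_per_kloc: float

def quality_score(repo: Repo) -> float:
    # Every weight below is an arbitrary judgment call,
    # which is exactly why defining "great quality" is hard.
    score = min(repo.stars, 1000) / 1000          # popularity is not correctness
    score += 1.0 if repo.has_tests else 0.0       # tests can be trivial
    score -= repo.lint_warnings_per_kloc / 100.0  # style is not substance
    return score
```

Every term is a proxy and every weight is a judgment call, and whatever we choose to optimize for, the training set inherits those biases.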
I also worry about the continual rise of abstraction levels. The number of programmers who actually understand how a CPU works must be shrinking all the time, given the prevalence of high-level programming languages such as JavaScript and environments like Electron. It doesn't help that CPUs themselves are becoming more and more complicated. Writing efficient and simple ("Good") code is much easier when you understand the tech stack in sufficient detail from top to bottom. Throwing an AI into the mix will completely obscure what's going on.
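A small classic example of what that top-to-bottom understanding buys you (CPython can sometimes optimize the naive version in place, so treat this as an illustration of the principle rather than a benchmark):

```python
# Knowing what happens underneath explains why these two differ.
def join_slow(words: list[str]) -> str:
    out = ""
    for w in words:
        # In the naive case each += allocates a new string and copies
        # everything accumulated so far, making the loop quadratic.
        out += w
    return out

def join_fast(words: list[str]) -> str:
    # join() measures the total length first, allocates once,
    # and copies each character exactly once.
    return "".join(words)
```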
At present, though, we're talking about "smart autocomplete". It can be a helpful time-saver for boilerplate and rote code, as long as the human operator keeps track of the larger context and is able to verify the correctness of the program.
This is pretty much what GitHub Copilot is meant for.
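For instance, the human might write the signature and docstring and let the assistant fill in the rote body, which is short enough to verify at a glance. (This is a made-up completion, not actual Copilot output.)

```python
import json
import os

def load_config(path: str) -> dict:
    """Read a JSON config file, returning {} if it doesn't exist."""
    if not os.path.exists(path):
        return {}
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```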
It's early days, though. We can't yet trust Tesla's Autopilot AI to autonomously drive a car, nor can we trust an autocomplete AI to autonomously write computer programs for us. In time we may get there, but the price to pay is that we lose control of our technology.
📅 2021-07-10
CC-BY-SA 4.0