
Automating myself out of a job?

published 2023-10-21

Since I'm back in AI [1], a recurring question has been whether we are collectively automating ourselves — software engineers and other kinds of tech people — out of our jobs. So, are we? And if so, how long will it take until that becomes a problem, and is this ultimately bad? I don't have all the answers, but here is some insight.

First, we have always tried to automate repetitive things. This is why we write libraries and frameworks to move up the ladder of abstraction — which by the way comes with its own set of challenges. We are reasonably good at it, and that is part of what justifies our high salaries, but it is clear that very small shell scripts [2] will not be enough to replace us entirely.

In the last few years, large language models have shown a somewhat impressive ability to generate code, to the point that AI replacing human programmers in the next decade has become a possibility. Like many, I have been experimenting with those tools, and my impressions are mixed.

On the one hand, I am convinced by tools that are integrated into editors and work like a better autocomplete. I have been using GitHub Copilot for a few months and it is helpful. It does not always suggest the right thing, but it does so often enough that it saves me time.

On the other hand, I am skeptical of the approach of generating larger chunks of code with tools such as ChatGPT. It may work for one-shot throwaway scripts, but it is rarely helpful for writing "real" software. The reason is that it fails in two ways: the first is generating completely wrong code (too simplistic, solving the wrong problem...), and the second is making less obvious mistakes in the middle of code that otherwise looks good. While the former is not such a problem, the latter definitely is.
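
To make that second failure mode concrete, here is a small, hypothetical example (invented for this post, not taken from an actual session) of the kind of plausible-looking code such a tool might produce, with a subtle bug hidden in it:

```python
def moving_average(values: list[float], window: int) -> list[float]:
    """Return the moving average of `values` over a sliding window."""
    averages = []
    for i in range(len(values) - window):  # subtle bug: drops the last window,
        chunk = values[i:i + window]       # should be range(len(values) - window + 1)
        averages.append(sum(chunk) / window)
    return averages

print(moving_average([1.0, 2.0, 3.0, 4.0], 2))  # [1.5, 2.5] -- the 3.5 is missing
```

The function runs, the style is clean, and a quick review could easily wave it through; noticing that the last window is silently dropped takes about as much attention as writing the loop yourself, which is exactly why this failure mode is costly.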

An analogy you could make is that both work like junior programmers making mistakes, but Copilot pairs with you, whereas generating larger chunks of code is closer to the pull request / code review workflow. If you hold code to high quality standards, reviewing and fixing bad code is often more time-consuming than writing it from scratch. With junior developers you do it anyway because you invest in their growth, hoping they will eventually write good code that saves you time. With current AI that learning doesn't happen, or at least not fast enough.

Also, there is clearly a difference depending on the kind of code you write and how innovative it is. For instance, in my job at Finegrain, Copilot is very helpful with Web backend code (using Quart [3], SQL queries, and so on; see the sketch below), but much less so when I work on deep learning models using our Open Source framework Refiners [4]. This reinforces my view that programming bootcamps that teach superficial concepts to put people on the market as early as possible are a trap; if you are a programmer and worry about job security, I suggest you invest in fundamentals.
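
To give a concrete idea of the backend code I mean, here is a minimal sketch of a Quart handler; the endpoint, fields, and in-memory store are made up for illustration and are not taken from the Finegrain codebase. It is exactly the kind of repetitive glue code an autocomplete-style assistant tends to get right:

```python
# Minimal Quart app sketch; the endpoint and fields are hypothetical.
from quart import Quart, jsonify, request

app = Quart(__name__)

# In-memory store standing in for a real database.
ITEMS: dict[int, dict] = {}

@app.route("/items", methods=["POST"])
async def create_item():
    # Parse the JSON body and persist a new item.
    data = await request.get_json()
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = {"id": item_id, "name": data["name"]}
    return jsonify(ITEMS[item_id]), 201

@app.route("/items/<int:item_id>", methods=["GET"])
async def get_item(item_id: int):
    # Return the item, or a 404 if it does not exist.
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

if __name__ == "__main__":
    app.run()
```

Copilot will happily complete handler after handler in this style from a comment or a function name; ask it to complete the forward pass of a new model in Refiners and the suggestions get much less useful.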

If that wasn't reason enough, AI appears to be better at solving high-level problems. By that I mean that it is good at answering questions that an average non-technical person could ask, such as "create a nature-themed landing page for my new business which sells coconut oil shampoo", but less good at understanding the kind of problems developers can only talk about with other developers.

So it is clear to me that I will not be replaced by AI next year, but what about in 10 years? What about in 20? That is where knowing a bit about how AI works helps.

Let's start by saying I was not surprised that code generation turned out to be an area where deep learning shines. I have been saying for over a decade that this revolution will affect white-collar jobs before blue-collar ones [5]. Moreover, the current wave of AI is good at replicating things for which it has tons of examples, and we gave it that with Open Source code. This is also why AI is good at generating pictures, stories, or web pages: it is all out there on the Internet. Humans are still better at learning from fewer examples, even though there has been a lot of progress in that area recently with foundational models and few-shot learning.

Now, my belief is that although recent advances are a good predictor of upcoming short-term improvements, they are not for the long term (by which I mean a few years away). I am 100% sure that we will get better copilots and image models in 2024, but I also think it is extremely hard to predict what will happen in 2030. The reason for this is that large foundational models exhibit a property called emergence [6], which you can basically interpret as: "we know increasing the resources available to the model will make it better, but we do not know exactly at what". And let us be clear here: that does not mean we do not know how to improve a model along a specific dimension, or how to teach it to be better at something specific. This is precisely what I am working on! It means I believe the major advances in AI have not come and will not come from such deliberate approaches, but from throwing more hardware, more and better data, and marginally better core architectures at models, and seeing what they learn to do.

In other words, I do think AI may eventually become good enough at the things it currently does poorly to replace programmers, but I would say that is about as likely to happen to other "intellectual" professions such as managers, lawyers, doctors, architects, and so on.

One point I'd like to emphasize though: a few years ago, people who do not work in AI tended to think that "creativity" was a human prerogative and that AI would never challenge "artistic" professions. I hope by now you have figured out this was wrong. I'll go even further: one of the main problems in AI today is that it tends to be too creative. We keep trying to find ways to curb its tendency to behave like a child who tells you about their day at school but invents half of it. My personal view is that this is not something we learned about AI; it is something we confirmed about human nature: that evolution made us powerful pattern-matching machines, and that creativity is just an emergent property that came out of it. And I say confirmed because personally I have always thought this to be the case: there were hints of it even before AI, from observing how children learn to our unconscious biases.

That digression aside, let us now assume AI does effectively replace programmers. What then? Does that depress me? What am I to do with my life?

Well, the truth is I am not concerned. Programming is an essential part of me, but I had been writing code for over a decade before I started doing it professionally, and even nowadays I still write some code for fun with no financial reward at all. I'll keep doing that even if AI beats us at it. Did people stop playing chess, or any of the many games AI has beaten us at for years? Did we stop making tools and furniture by hand now that most of the industry is automated? Of course not. So I'll probably turn to a form of craftsmanship I already enjoy practicing, or even to more artistic forms of code [7].

Will I still make a living out of it? Probably not. But I am confident I will adapt. Even if the entire tech industry is automated, I know I can learn other, unrelated marketable skills in a matter of months if not weeks, because I have already done it. And I am in a better position than most to notice which jobs are under threat.

Extrapolating to society as a whole, will it not result in a major problem if too many jobs are automated? Well yes, it will, and I do believe at some point we will need to tackle that issue. Note that the issue is not that there won't be enough jobs for everyone: jobs are not something we fundamentally need, and if anything most of us work too much today. And it is not that there won't be enough value creation for everyone either: those machines will create value. The problem is one of allocation.

Here is how I explain my view on it: I actually like capitalism, and I think it is partly responsible for a lot of the progress, including social progress, that has happened in the world over the last few centuries. However, capitalism was designed on the assumption that the majority of the value-add comes from human work, and a minority from capital. Automation will reverse that, and in a system where capital is responsible for a large majority of the value-add, capitalism breaks down.

So how do we prepare for this? Well, to start, let me recommend a short novel I already linked earlier [8]. But I think the most important thing to understand is that we are not there yet, and the worst thing to do would be to try to implement parts of the endgame now. See, this is not a zero-sum game, but the world is not a bunch of altruistic parties either. So if you start taxing machines now, for instance, or worse, trying to slow technical progress, rest assured some of your neighbors won't, and the end result is that they will own the machines and not you. So the only reasonable thing to do, if you are a benevolent but realistic government, is to accelerate as fast as you can, capture a large enough chunk of that capital, and then, once things settle down, make sure it benefits your population. All of that while avoiding collapse under the pressure of the inevitable social unrest in the meantime.

Democratic governments of the world, I wish you good luck and hope you will succeed.

1: https://blog.separateconcerns.com/2023-03-31-joining-finegrain.html

2: https://www.youtube.com/watch?v=l2btv0yUPNQ

3: https://github.com/pallets/quart

4: https://github.com/finegrain-ai/refiners

5: https://marshallbrain.com/manna1

6: https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/

7: https://www.camilleroux.com/2023/07/26/art-generatif-de-la-decouverte-a-la-publication-dun-projet-dans-un-studio-prestigieux/

8: https://marshallbrain.com/manna1