AI Coding
I tried out GitHub Copilot this weekend. I've been thinking, and talking, about AI like your average technology person. So here we are: what's my reaction to LLMs getting better and better at coding?
LLMs are great at information retrieval. Copilot and GPT have read a lot of code from GitHub, StackOverflow, Reddit, etc. Now, given some context (the code in the file I'm editing, surrounding project info, etc.), they can retrieve, by probability, the best snippet of code that could come next. I posted this example on social media recently:
Me: imports my own file system library Pathos (first time in this code base). Types the name of a computed property that uses it.
Copilot: suggests an implementation that uses the library, slightly incorrectly.
Me: Accepts suggestion, fixes error.
Me: Types the name of a second computed property.
Copilot: suggests a correct implementation using different parts of my library.
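For the curious, here's a minimal sketch of the shape of that code. The type and property names are made up for illustration, and the Pathos calls (`joined(with:)`, `exists()`) are from my recollection of the library's API; the point is just the pattern of small computed properties Copilot was completing:

```swift
import Pathos

// A rough reconstruction of the session above. `ProjectPaths` and the
// property names are hypothetical; only the general shape (computed
// properties built on Pathos's `Path`) matches what actually happened.
struct ProjectPaths {
    let root: Path

    // First computed property: Copilot's initial suggestion for something
    // like this was close, but slightly wrong, and needed a manual fix.
    var configurationFile: Path {
        root.joined(with: ".myproject.toml")
    }

    // Second computed property: with the fixed code as context, Copilot
    // got a body like this right, using a different part of the library.
    var isConfigured: Bool {
        configurationFile.exists()
    }
}
```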
Fantastic! I'd like to think that the open-source code I put on GitHub made some small contribution to this specific interaction of programming in Swift. If that's true, then I've contributed to everyone who is going to use Copilot to write Swift. So has everyone who posted their questions, answers, examples, musings, tests, libraries, apps ... about Swift on the internet. Copilot is the child of our collective labor. Before these LLM programming tools, we didn't interact. Now, we have a hive mind.
Will LLMs take our jobs?
I can't wait for that to happen. But, so far, I just don't see it. Programming is only barely, superficially, about information retrieval. When it is, it's usually in the context of studying, training, interviewing, or some kind of product grind. Sure, that covers a lot of the reasons for writing code. BUT, here's the thing: I hate writing code for all of those reasons. If you've spent some time working in tech, you've probably heard a saying along the lines of "a programmer should try their best to replace themselves". This is partially about working with the people around you, but it also serves as a lens for evaluating programming tasks: if you are doing simple things over and over again, you are stuck in a place of stagnation. You are not growing, and probably not creating as much value as you deserve.
LLMs are going to replace some of the things we do as programmers: the parts that are boring, repetitive, stagnating. Will some people lose their jobs because of it? Maybe. At first, these will be, for the sake of simplicity, "low-level" jobs (I'm going to deliberately leave "low-level" undefined here). For those who get to stay, though, our lives are going to become more fun and creative. Why yes, I include myself in that group, because I'm egocentrically biased.
Let's say I'm wrong, and these AI models very quickly grow so capable that it's no longer commercially viable to hire humans for programming tasks. Well, it would be a kick in the butt for me to have to find a new career. But a kick in the butt is often the right thing to move us along in life, yes? When your comfort zone comes with a biweekly paycheck, it's extra hard to grow outside of it. And I'd love to live in the post-programmer world as a software user. Chances are, there'd be fewer bugs?
Anyways, these have been my super unscientific, totally biased, very personal musings about AI this week.