Generative AI can code! What are you going to do about it?
I’m sharing my thoughts on generative AI and how it potentially affects the lives of software engineers.
This one is gonna be a bit different. I hesitated to write this post. This blog is rather small, with a narrow scope of topics, and I definitely don’t dabble in non-technical writing. I also don’t like to follow click-baity fads - and one is definitely happening around AI now. Recently, approximately a third of Hacker News topics have been related to AI and ChatGPT in one way or another.
I feel like the advent of generative AI affects me directly (or eventually will), so I decided to share my thoughts from the perspective of a software engineer with years of experience. As always, all opinions are solely mine and I don’t represent any third party.
Please take this post with a pinch of salt. It’s just me, trying to vent some of my early observations. I’m no expert in machine learning or artificial intelligence, so it’s very likely that my perception of these topics is skewed, naive or simply wrong.
Coding with ChatGPT
I’m gonna base my observations on ChatGPT (GPT-3.5) as the lowest common denominator.
It codes really well when dealing with small-scope problems. It kind of falls to pieces when you actually try to work with it cooperatively. What do I mean by that? A typical pair programming exercise is all about exchanging ideas and suggestions with a peer whilst simultaneously working on a problem. This can lead to the introduction of new features, refactoring of already existing fragments of code or trying out different approaches when debugging, but most importantly it’s the best way to share knowledge per se. At the moment, I don’t see ChatGPT as a reliable partner to work with in that regard. As an example, I invented a small problem - implementing a process manager in golang:
Implement a process manager in golang. It should have a StartProcess API accepting the command line and its arguments and a StopProcess API - allowing for termination of already running processes.
Initially, it went really well. After a bit of prompting, it sketched out code along these lines (paraphrased from memory - the exact snippet differed, but the shape is faithful):
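```go
// Representative reconstruction of ChatGPT's first pass at the prompt above.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

// ProcessDescriptor exposes the output streams of a started process.
type ProcessDescriptor struct {
	Stdout chan string
	Stderr chan string
}

// ProcessManager keeps track of running processes.
type ProcessManager struct {
	processes map[string]*exec.Cmd
}

func NewProcessManager() *ProcessManager {
	return &ProcessManager{processes: make(map[string]*exec.Cmd)}
}

// StartProcess launches the command and pumps its output into channels.
func (pm *ProcessManager) StartProcess(name string, args ...string) (*ProcessDescriptor, error) {
	cmd := exec.Command(name, args...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	desc := &ProcessDescriptor{
		Stdout: make(chan string),
		Stderr: make(chan string),
	}
	// Forward each stream into a channel, line by line.
	go func() {
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			desc.Stdout <- scanner.Text()
		}
		close(desc.Stdout)
	}()
	go func() {
		scanner := bufio.NewScanner(stderr)
		for scanner.Scan() {
			desc.Stderr <- scanner.Text()
		}
		close(desc.Stderr)
	}()
	pm.processes[name] = cmd
	return desc, nil
}

// StopProcess terminates a previously started process.
func (pm *ProcessManager) StopProcess(name string) error {
	cmd, ok := pm.processes[name]
	if !ok {
		return fmt.Errorf("no such process: %s", name)
	}
	delete(pm.processes, name)
	return cmd.Process.Kill()
}

func main() {
	pm := NewProcessManager()
	desc, err := pm.StartProcess("ping", "-c", "3", "localhost")
	if err != nil {
		panic(err)
	}
	for line := range desc.Stdout {
		fmt.Println(line)
	}
	_ = pm.StopProcess("ping")
}
```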
This looks great (at first glance at least) but it has some problems. Here are a couple:
- The goroutines reading `stdout` and `stderr` will leak if the user doesn’t drain the channels returned in the `ProcessDescriptor`.
- The `scanner.Scan` call can block, which again may lead to goroutine leaks.
- It uses the command name as the key in the process map, which means you can run only one instance of a given command at a time.
- Operations on the `processes` map are not thread-safe (a sketch of a fix for these last two points follows below).
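For reference, the last two points have a fairly mechanical fix. A minimal sketch (my own, not ChatGPT’s): key the map by PID so several instances of the same command can coexist, and guard it with a mutex:

```go
package procman

import (
	"os/exec"
	"sync"
)

// ProcessManager revised: keyed by PID rather than by command name,
// so several instances of the same command can run at once, and
// guarded by a mutex so concurrent start/stop calls are safe.
type ProcessManager struct {
	mu        sync.Mutex
	processes map[int]*exec.Cmd
}

func NewProcessManager() *ProcessManager {
	return &ProcessManager{processes: make(map[int]*exec.Cmd)}
}

// register stores a started command under its PID and returns the PID,
// which callers then use as the handle for stopping the process.
func (pm *ProcessManager) register(cmd *exec.Cmd) int {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	pm.processes[cmd.Process.Pid] = cmd
	return cmd.Process.Pid
}

// deregister removes and returns the command for a PID, if present.
func (pm *ProcessManager) deregister(pid int) (*exec.Cmd, bool) {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	cmd, ok := pm.processes[pid]
	delete(pm.processes, pid)
	return cmd, ok
}
```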
The main concept is there though, and all of the above are just technicalities. So, I asked it to address these issues one by one - and this is where the problems started.
It can successfully fix all the problems - it understands the code and the issues I pointed out, I have no doubts about that - but it… forgets the context, for lack of a better word.
Often, when refactoring the code, it silently changes something. Some bits get deleted; elsewhere, it introduces new problems or even reintroduces problems I have already asked it to fix. This is annoying, but most likely stems from the fact that ChatGPT accepts only 4k tokens as the prompt length, and the conversation backlog contributes towards that limit as well.
In other words, ChatGPT can code, but its current incarnation is not the best pair programming buddy. This is really a trivial problem and fixing it is just a matter of time (most likely it’s already less apparent with GPT-4).
I’ve tried feeding it some example code from one of my projects and asked it to implement `ArgParser` to make this example work. It did that perfectly on the first try. This proves again that it can code really well. The cooperative aspect, where the context has to be maintained for a longer period of time, is a bit lacking (at least for now).
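To give a flavour of the kind of gap I asked it to fill, it looked roughly like this (a hypothetical sketch - the names and details here are invented, not the real project’s code):

```go
package main

import (
	"fmt"
	"strings"
)

// ArgParser is a hypothetical stand-in for the type I asked ChatGPT
// to implement: parse --key=value style arguments and expose lookups.
type ArgParser struct {
	values map[string]string
}

func NewArgParser(args []string) *ArgParser {
	p := &ArgParser{values: make(map[string]string)}
	for _, arg := range args {
		kv := strings.TrimPrefix(arg, "--")
		if kv == arg {
			continue // not a flag
		}
		if key, value, ok := strings.Cut(kv, "="); ok {
			p.values[key] = value
		}
	}
	return p
}

// Get returns the value for a flag and whether it was present.
func (p *ArgParser) Get(key string) (string, bool) {
	v, ok := p.values[key]
	return v, ok
}

func main() {
	p := NewArgParser([]string{"--mode=fast", "--output=result.txt"})
	if mode, ok := p.Get("mode"); ok {
		fmt.Println("mode:", mode)
	}
}
```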
Expertise is still required, but for how long?
The example above shows that, just as with Microsoft Copilot, expertise is still very important for understanding and verifying the generated code.
Many people will probably disagree and bring up countless examples of games and code generated with ChatGPT by bloggers and YouTubers claiming they don’t know, e.g., JavaScript at all - yet they managed to pull it off. I’d say that this is irrelevant. It’s just a way to fuel the hype, and it has little bearing on programming as an industry.
As a counter-argument, you can do exactly the same today with the example code given in the documentation for, e.g., PyGame. Sure, it will require more elbow grease if you don’t know Python at all - but it’s possible. Still, what you see is what you get: something that seems to work, but you don’t know why or how. You can’t guarantee it has no bugs, nor that it’s production ready. In fact, I’d argue that if you wanted to release such a game as a product, it would be harder to maintain it with AI than to actually learn the technology behind it and do it yourself.
Five stages of AI reception
I’m trying to follow the news about progress in AI. So far, I’ve read countless comments on YouTube, on Hacker News, under articles on The Register, etc. People’s various reactions remind me of the five stages of grief.
Denial
There’s this camp which tries really hard to prove that AI is just an incapable toy and that their professions are safe. These are people who come up with the most exotic arguments for why the human approach is unique and irreplaceable in their domain. Some of them campaign that AI is purely evil. This last article is probably the best example (BTW, I’m still not sure whether that article is for real or rather sarcastic). The TLDR is that the AI insisted that the person in question was dead and produced non-existent data and links to articles to support that claim. The article concludes that this can create irreversible damage with an impact on people in the real world. To some extent I agree with the conclusion; however, is it any different with Google nowadays? We have limited control over what Google returns about any given term, and the scenario described in that article is very plausible today as well. Still, it’s a perfect example of denial, or even anger.
Denial of AI is very much observable within the software industry as well. Many people argue that what they do can’t be automated because there’s this one unique human factor that is simply irreplaceable in their work. I’d argue that they are just lying to themselves to feel better, but time will tell.
The truth is that AI is good at programming because programming is all about patterns: patterns that repeat all over the place, and idioms that exist in specific programming languages. AI is good at recognising and discovering patterns, hence programming is a perfect domain for it. I mean, you’re not inventing new, revolutionary data structures on a daily basis. Most tasks can be brought down to a series of well-documented operations that we all repeat in any project, like:
- open a file
- obtain data from the internet
- obtain/store data in the DB
- make an API call
- expose API
- sort collection
- find in a collection
- … you get the gist
AI is another tool that will increase performance within the software production industry. I really like this article from Tomas Pueyo, which goes into detail about how demand shapes supply within any industry. It’s quite likely that something similar will apply to the software engineering domain.
Anger
Many people are angry about the advent of AI. The commonly observable narrative is “adapt or die”. This is even noticeable in the article from Tomas Pueyo which I already quoted. At this early stage, where AI is not yet widely deployed commercially, I’m afraid that the real anger is still to come.
The anger may stem from the fact that the term “adapt” becomes a bit meaningless in this context. AI will become exponentially more capable with each iteration. We, as programmers, can’t assume that it will remain just a cool thing that assists us and allows us to build things quicker. In fact, I dare to argue that we, humans, will become the bottleneck under such an arrangement.
It’s a bit like the advent of the automotive industry and the hopes that cars would just supplement horse transport. We all know how that ended, and I’m afraid that, within the bounds of this comparison, we humans are the horses.
Will AI have an impact on the outsourcing industry? Companies tend to seek a cheaper workforce off-shore and outsource parts of their operations, either to companies which provide such services, like Mobica, or to individual contractors. What if you could have an army of machines providing the same quality of service? An army that doesn’t sleep or need holidays, health services, etc.?
Bargaining
This is probably the biggest unknown right now. I’m happy to adopt the new approach, but what does that really mean? You can’t really bargain if you’ve got nothing to offer. Maybe programming will just become a hobby and the industry will change beyond recognition? Maybe that’s what we should accept. Will we all become entrepreneurs from now on and just delegate the act of building our solutions and ideas to AI agents?
There’s always the economic factor that dictates the adoption of anything. Maybe human programmers will still be cheaper at scale than AI?
For now, Microsoft and OpenAI are trying really hard to sell us tools like Copilot and access to the ChatGPT API. The new generation of tools is still to come.
Depression
This is probably where I am at the moment on the spectrum. I like programming. It’s something that relaxes me and that I personally enjoy. It seems, though, that all of a sudden, without much warning, the craft may become obsolete. It’s a bit depressing to think that all the time you’ve spent polishing your craft may have been essentially in vain and kind of… futile. Of course, this is not entirely true, because (hopefully) some of that expertise will still be applicable until AI becomes a self-sufficient programmer; still, it’s a bit discouraging at first.
Acceptance
Acceptance is already observable in people who are very enthusiastic about AI and see the potential to adopt it early and use it to grow their businesses. The opposite is displayed by people who seem to be directly affected by AI, like anyone doing creative intellectual work.
AI is the new Google
Similarly to how widespread access to search engines allowed us all to find what we want quickly and effectively, I believe that we are at a turning point now, and AI will shape the new era of how we use technology and exchange information.
I don’t believe (or at least I prefer not to believe) that it will render Google obsolete. That would be a dystopian nightmare. We can argue about Google’s monopoly, how bad it is, and how it’s biased one way or another; still, it allows you to form your own opinions using any source of information you want. I don’t think this would be the case with AI.
Imagine that you’re using ChatGPT as your search engine and you perceive the world only through what it tells you - it would be a perfect propaganda machine, resembling a totalitarian system. It would choose your belief system for you. It’s a bit like reversed reinforcement learning, but done on humans by a machine.
Questions
Okay, AI is here to stay. The Pandora’s box has been opened. AI is only gonna get better, faster, cheaper and more available. What will the new world look like? How will it impact software engineering?
Will we still need human-readable programming languages? I mean, we’ve gone full circle. Why would an AI produce human-readable source code when it can just generate machine code (or WASM, or minified JS, or whatever) directly? Would we even need interpreted programming languages, or only compiled ones? One remaining application for source code is record keeping, but is even that really needed? Will there be any need to tweak the code manually at all? What would AI-native code look like?
AI can interpret assembly code back into a high-level programming language - Jason Turner has tried that in his video. Imagine what future revisions of this technology will be capable of. Potentially, none of the human-written code will be as performant as code written by AI. Maybe, by the pure fact of expressing the intent, the AI will be able to generate source code with new, specialised data structures in place that we shouldn’t even attempt to modify manually? Using C++ as an example, do we even need further revisions of C++ - which mainly focus on improving the syntax and the implementation of the standard library - in a scenario where the syntax itself becomes irrelevant?
If AI-based software engineering (in whatever shape or form) becomes mainstream, how will we handle internal, proprietary source code? In fact, this concern applies to any type of sensitive information: personal, medical, financial, etc.
As company X, I wouldn’t want to be completely dependent on technology owned by a competitor (and, on top of that, to share all the details of my operations with that technology).
Conclusion
I mainly wanted to flesh out my thoughts so that I can revisit them later and compare them with the new reality. Hopefully, some of you will find them entertaining.