learning to code

Simon Willison, a computer scientist and LLM researcher, published a roundup of everything we learned about LLMs in 2023. It is a very good and entertaining post, and you should read the whole thing, but I would specifically like to highlight this part:

Over the course of the year, it’s become increasingly clear that writing code is one of the things LLMs are most capable of.

If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English.

It’s still astonishing to me how effective they are though.

One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code should be useless.

Except… you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works!

So hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!

I’m in the process of learning to write software with Python, and this rings absolutely true. GPT-4 is BONKERS good; it’s like having a tutor sitting at the desk with me. It makes the learning process so much more fluid, and even if it gives me wrong answers, a) I can check them when I try to run the code; b) it gives me a starting point for solving a problem; and c) I can use it to iterate.
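That check-and-iterate loop is the same thing Code Interpreter automates. Here is a minimal sketch of what it looks like in Python, assuming a hypothetical `ask_llm()` helper that stands in for whatever model or chat window you happen to be using:

```python
import subprocess
import sys


def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM of choice and return Python source."""
    raise NotImplementedError


def generate_until_it_runs(task: str, max_attempts: int = 5) -> str:
    """Ask for code, run it, and feed any traceback back until it runs cleanly."""
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        # Run the generated code in a separate Python process so a crash
        # can't take down this script.
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=30,
        )
        if result.returncode == 0:
            # It ran without an exception. That is not proof it's correct,
            # but it rules out hallucinated methods and syntax errors.
            return code
        # Hand the error message back and ask for another attempt.
        prompt = (
            f"{task}\n\nThe last version failed with this error:\n"
            f"{result.stderr}\n\nPlease fix it and return the full program."
        )
    raise RuntimeError("still failing after several attempts")
```

A clean exit doesn't prove the code is right, of course, but it does catch the hallucinated-method failures Willison describes, and the error message gives the model (and me) something concrete to iterate on.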

LLMs open coding up to a vastly larger pool of curious tinkerers who don’t like dealing with tedious syntax and impenetrable documentation. I suspect part of the reason Silicon Valley is losing its mind over this technology is that it can do what they do.

One response to “learning to code”

  1. @pjk Yeah, the main (well, only) place I regularly use LLMs (not exactly by choice, but I don't go out of my way to avoid it) is auto-complete while programming. And it's actually quite good. It'll be wrong fairly frequently, so you still have to know what you're doing, but most of the time it's really helpful.

    I still think it's massively overhyped though, mainly because investors really want it to be able to actually replace workers – enough that they believe it really can.
