I stopped writing code
I stopped writing code. I use the models to implement all the features and make most of the changes. I review them and, from time to time, make small edits. It works pretty well. I am faster and my code (my?) is better. But I feel strange: it is another kind of work, not the usual developer work. Sometimes I think I am cheating.
I needed a way to allow myself to keep working like this, because I like it and I am much more productive. I started researching and found this post from Lorentz Vedeler about automatic programming. The interesting part is the concept of LLMs as non-deterministic compilers. It is something I had already thought about, and it is pretty reasonable. It gives me a framework: prompts are the source, tests are the spec, and the output is a probabilistic compilation.
Compiler++
If I accept that metaphor, vibe-coding becomes regular development in a language whose syntax is natural language. You still have to learn a little syntax: if you are using Java, you need to know that the class construct is central. Most details of the language fade away, but some key concepts remain.
Since only a few concepts are required, and since LLMs in general let you develop in a language you are not familiar with, what if I go straight to the root language: Assembly? If the model can "compile" to it, maybe we can squeeze out more performance by targeting the lowest level directly.
An interesting side effect
To see if we can gain a performance boost from Assembly, I asked Claude to implement a series of simple functions in two languages: Assembly and Python. Then I ran them and compared the results. Here is the repo and here are the results:
| Example | Assembly (ns) | Python (ns) | Speedup |
|---|---|---|---|
| Sum two numbers | 247 | 64 | 0.3x |
| Factorial(12) | 168 | 276 | 1.6x |
| Fibonacci(30) | 174 | 588 | 3.4x |
| Array sum (1000 elements) | 511 | 13,171 | 25.8x |
| is_prime(999983) | 467 | 21,527 | 46.1x |
Except for the first function, summing two numbers, Assembly is faster than Python, presumably because that case does so little work that call overhead dominates. That is the answer I expected. It is not a rigorous benchmark, but it is enough to see the shape of the tradeoff.
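To make the comparison concrete, here is a minimal sketch of how the Python side of such a micro-benchmark can be timed. This is my own illustration, not the harness from the repo: the function names, the trial-division `is_prime`, and the repeat count are all assumptions.

```python
import time

def is_prime(n: int) -> bool:
    # Trial division up to sqrt(n): simple enough for a micro-benchmark.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def bench(fn, *args, repeat: int = 1000) -> int:
    # Return the best per-call wall time in nanoseconds over `repeat` runs.
    # Taking the minimum filters out scheduler noise and cache misses.
    best = None
    for _ in range(repeat):
        start = time.perf_counter_ns()
        fn(*args)
        elapsed = time.perf_counter_ns() - start
        if best is None or elapsed < best:
            best = elapsed
    return best

print(f"is_prime(999983): {bench(is_prime, 999983)} ns")
```

The Assembly side can be measured the same way by loading the compiled routine through `ctypes` and passing it to the same `bench` function, so both implementations go through an identical timing path.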
To native tools
Since we now have our shiny new compilers, should we abandon high-level languages and compile directly to Assembly? I don't have an answer, but one of the good things about LLMs is that you can use them to remove abstraction and overcomplication by implementing only the functionality you need. Abstractions have a hidden cost: they hide details so that everything fits in our small context. In the future, LLMs with huge context windows will let us hand over far more information without saturating the context.
We have built many tools that let us avoid focusing on the details, for example CloudFormation. It is a great tool because it spares you from thinking about all the small details of each cloud platform, gives you the option to change provider, and so on. But it has costs: you maintain the code, it has bugs, and it is another layer of distance between you and how your code actually runs. With LLMs there is no difference between asking for a Terraform, OpenTofu, CloudFormation or Azure Resource Manager script. If LLMs can generate low-level scripts on demand, teams may choose fewer abstraction layers and accept more direct control.
Will I write an entire complex application in Assembly in the future? Will we use only native, low-abstraction tools? Exciting questions.
Thank you for taking the time to read this post. If you want to experiment with more complicated functions, here is the GitHub repository.