I’ve thought about this for a really, really long time, watched a bunch of videos of prominent people talking about it, and I have a lot of unstructured thoughts. I’m just gonna vomit all of them out here now, and future me will somehow find a way to structure it.

The Foundational Model

Let’s start from the very start: how did this new AI/LLM boom come to be? I would trace it to the 2017 paper “Attention Is All You Need”. This paper introduced the Transformer and outlined the architecture of the LLMs that we know and love today. Many of the authors of this research paper then joined a little non-profit research house called OpenAI. So why am I saying all this? Well, Prof Ben Leong was the first person who brought this paper to my attention (see what I did there :wink:) during Y3 in NUS, and he mentioned something very interesting. In the paper, the authors include a feed-forward layer (essentially a couple of linear layers) as part of the architecture. There is little explanation of why it is there or what its function is, only that the model doesn’t work well without it. No one knows why it is important; no one knows what its use is. Ben Leong used this to drive home the point that AI is more art than science. No one knows what they are doing; everyone is just experimenting, trying every possibility and seeing what works. At the crux of it, LLMs are just a highly advanced auto-complete model.
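To make the mystery concrete, here is a toy sketch (my own illustration, not the paper’s code) of that position-wise feed-forward sublayer: just two matrix multiplies with a ReLU in between, applied to each token position independently. The tiny sizes here are made up; the paper used d_model = 512 and d_ff = 2048.

```python
import numpy as np

# FFN(x) = max(0, x @ W1 + b1) @ W2 + b2  -- expand, ReLU, project back.
d_model, d_ff = 4, 16  # toy sizes for illustration
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)

def ffn(x):
    # Applied to each token position independently; no mixing across tokens.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

tokens = rng.normal(size=(3, d_model))  # 3 token positions
out = ffn(tokens)
print(out.shape)  # same shape as the input: (3, 4)
```

That’s all it is mechanically; the open question the paper leaves is why removing this simple block hurts the model so much.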

A little while after my Y3, reasoning models started getting popular and widely available. I see this as a significant leap in the “foundational” model’s performance (arguably the foundational model is still the same, it just goes through more iterations, but I still call it a “foundational” improvement since it ships as a new model). These models can solve much more complex tasks, and with higher correctness.

So what’s my point here? I feel that there will be no more advancements in the foundational aspects of AI. Yes, there might be marginal improvements, like a higher score on one of the many benchmarks. But will this be noticeable? I highly doubt it. If you look back, the only real breakthrough here is the 2017 paper which kickstarted it all.

But there is a sizeable group who do not think so. There is a chase for something called AGI, or artificial general intelligence for the uninitiated. This opens a whole can of worms. Many big-budget companies like OpenAI are currently trying to achieve it. But the lines are blurry. What counts as AGI? There is no benchmark (or so I think). For all you know, if you were to randomly grab someone off the street from before ChatGPT was popular and give them a copy of the best LLM we have today, would they think that what we have now is AGI? Some might argue yes, and some no. So my point here is: if we don’t even know what AGI is, how are we going to try to achieve it?

I guess it’s pertinent for me to also mention DeepSeek in this section. So what’s so special about DeepSeek? Well, it achieves similar performance with fewer resources; that’s about it. Is there a breakthrough? I guess you can say so, but not in terms of performance, rather in terms of efficiency. Unlike popular models that activate most of their “neurons” on every request, DeepSeek uses a mixture-of-experts model, activating only the “neurons” (experts) that are useful for a specific set of problems, without diminishing the correctness or quality of its answers while using way fewer resources.
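The routing idea can be sketched in a few lines. This is a toy illustration of mixture-of-experts gating in general (the sizes and names are mine, not DeepSeek’s actual architecture): a small gate scores every expert per token, but only the top-k experts actually run, so most of the parameters stay idle on any given request.

```python
import numpy as np

rng = np.random.default_rng(1)
n_experts, top_k, d = 8, 2, 4       # 8 experts, only 2 run per token
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe(x):
    scores = x @ gate_w                        # one gate score per expert
    chosen = np.argsort(scores)[-top_k:]       # keep only the top-k experts
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()                               # softmax over the chosen few
    # Only 2 of the 8 expert matrices are ever multiplied for this token,
    # so ~75% of the expert parameters are never touched.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

token = rng.normal(size=d)
y = moe(token)
print(y.shape)  # (4,) -- same output shape, a fraction of the compute
```

The output has the same shape as a dense layer’s would; the saving is purely in how many parameters participate per token.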

It’s also pertinent for me to mention a research paper published by Apple recently, The Illusion of Thinking. The tldr of the paper is that LRMs (Large Reasoning Models) don’t actually do any reasoning. They find that for low-complexity tasks, plain LLMs are better. For medium-complexity tasks, LRMs are better. For high-complexity tasks, both fail. And the reason they think no reasoning is actually being done is the models’ failure at generalisable reasoning. Here is a quote: “For instance, they could perform up to 100 correct moves in the Tower of Hanoi but fail to provide more than 5 correct moves in the River Crossing puzzle” (note that the Tower of Hanoi and the River Crossing puzzle have very similar solutions). I guess this reaffirms my opinion that little to no advancement can be further made on the foundational aspect.

The Application Aspect

Ah, the hard part. This is the cause of all these messy and unstructured thoughts. There are many things to discuss here, but the goal is to find out how AI will affect jobs and our lives. Admittedly, it might be foolish and a waste of time to try and predict the future, but I guess it’s at least useful to have some discussions.

The Idea that being late to a trend is good

As you may come to know, many of my thoughts and ideas largely align with Theo’s. I really enjoy his content and seek a lot of advice from him, which can be dangerous (hmmm).

He has very publicly mentioned how it’s good to be late to a trend, so you don’t have to waste time dealing with the early adopters’ nonsense. Here is an example that he uses. Back when mobile development was in its early stages, many developers used Adobe Flash Lite; if we had been early adopters, we would have specialised in it, only for it to be replaced with something better (native frameworks). So his point is: always wait and see before jumping into a trend. It does not always pay to be early.

Putting this lesson into AI, I largely adopt a stance of “wait and see”. By this, I don’t mean waiting to see before using AI, but rather waiting to see whether I should jump into AI job roles. I have been a user of AI, from ChatGPT to Cursor. But as for having a job role in the AI space, I feel a bit insecure and lost.

The Last Framework

This is an idea from Theo that really stuck with me.

There will never be a new framework

Why? Most developers today heavily rely on LLMs to develop. So if LLMs are bad at creating value for developers, the framework’s adoption rate will naturally be low. And when are LLMs bad? When there is not enough data on the open internet about a certain new framework, or when the Language Server Protocol (LSP) does not provide enough feedback or information to the model. So naturally LLMs excel at commonly used languages like TypeScript, thanks to its excellent language server, and less so at untyped languages that are less commonly used.

Other than the fact that there would be no new frameworks, there is also an impact on existing frameworks. From now on, current frameworks would do well not to change their syntax or usage across updates. Ensuring that calls and their syntax remain the same is ever so important. Imagine if React changed how useState works in a new version. LLMs are so well trained on the old usage of useState that they would keep providing developers with the old usage, breaking things and leaving many (inept) developers confused.

The AI hype bubble in relation to other recent hype bubbles

Let’s discuss the 2 most recent hype bubbles that burst (or so I think they burst): the GraphQL and Web3 bubbles.

(Excalidraw sketch: value vs. hype, drawn for each of GraphQL, Web3, and AI.)

GraphQL I’d heard of GraphQL but never knew that this was a bubble until Theo mentioned it. On the surface GraphQL seems very useful, and I initially thought it would be a good “upgrade” for the IMS project. What is it? Here is a brief summary. Unlike REST APIs, where we have to create a separate endpoint for each resource (like comments, posts, etc.), GraphQL exposes the entire data graph and its relations through a single endpoint, and the frontend queries exactly the fields it needs. One endpoint to rule them all. Sounds like a very elegant solution. But Theo mentioned that its value is actually quite low. The use cases for GraphQL only really exist in big tech like Instagram, Facebook, etc. Most of the time, GraphQL is simply overkill.
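The contrast can be sketched in miniature. This is a deliberately naive toy (the data and function names are invented for illustration, with none of GraphQL’s real schema or query language): REST needs one handler per shape of data, while the GraphQL style has one entry point where the client names the fields it wants.

```python
# Toy data store: one blog post with a few fields.
posts = {1: {"title": "hello", "author": "amy", "comments": ["nice"]}}

# REST style: a separate endpoint for each shape of data you might need.
def get_post(post_id):
    return {"title": posts[post_id]["title"]}

def get_post_comments(post_id):
    return posts[post_id]["comments"]

# GraphQL style: one entry point; the client says which fields it wants.
def graphql(post_id, fields):
    return {f: posts[post_id][f] for f in fields}

result = graphql(1, ["title", "comments"])
print(result)  # {'title': 'hello', 'comments': ['nice']}
```

One round trip instead of two, and no over-fetching of fields like "author" that the client never asked for. Elegant, but for a small app, two REST endpoints were never the bottleneck, which is exactly the overkill argument.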

And so, even though it seems to have high value, the value it brings to society is low, and that seemingly high value brought about high hype, which was unwarranted. Several startups were born from this. Gatsby, the GraphQL for blogs, which seems overkill and probably is; that’s why it’s rarely used now. Graphcool (now known as Prisma), known for turning databases into a GraphQL endpoint, but if I’m not wrong, they moved away from it; they were also slow internally at the start, as they were using GraphQL themselves.

Web3 I got to know a little bit about this when studying for my crypto.com interview (lol..). My thoughts about crypto? I genuinely don’t see it going anywhere. The whole idea of crypto is for the public to move away from fiat currency and the heavy regulation of banks and financial institutions. But I feel that is too idealistic and will probably never happen, not in my lifetime at least. Yes, crypto would probably be useful and popular in countries with heavy regulation and poor confidence in their own currency, see Argentina and Venezuela. But that’s that. There are only a few countries where it can be useful: countries with such a screwed-up financial system that crypto helps, whose citizens also have the means and the brains to use it. (See parts of Africa: screwed-up financial systems, but many people probably cannot use crypto lol.) In countries like Singapore, I don’t see a pain point or a need to use crypto. Yes, there is regulation, but do I care? Does it really affect me? No…

Theo mentioned that he got into crypto, and in particular Bitcoin, because he sees it as a currency. But I disagree; it is more than that. It can be thought of as a good store of value, something like gold. And I feel that if most people see it as that, as a store of value rather than a currency, crypto will never take off.

As I have argued, the value it brings to society is low. The hype, however? Wow… sky-high. NFTs, memecoins, and the media coverage of it all. This was way higher than GraphQL, and many of the younger generation got in on the action. And burst it did.

AI The value? It’s up there, wayyy up there. In a sad little Google Meet during my time at Ascenda, the last meeting of the Campaigns team (yikes), Vu mentioned something very interesting. AI has always been around, but what makes it so special now is its interface. Its ability to interact using natural human language, to the point that even my mother, my father, or any joe on the street can extract value from it. And that’s scary. Never in the history of tech bubbles has the effect been so widespread that any literate human being can benefit.

The hype? I don’t think we have to discuss this much. Every company, big or small, is making systemic changes because of AI. Here are some notable ones.

  • Microsoft: Did a huge layoff of mostly senior engineers, including some very famous people who worked on CPython and TypeScript, and even an AI director (what? why? this confuses me).
  • Google is going all in on AI, and arguably has the best position to win. Google has vertical integration (meaning Google controls every major piece of the stack, from the hardware up to the software and services, rather than relying heavily on third parties). This gives them a huge advantage because everything can be optimised to work together, often faster, cheaper, and better.
  • Meta bet on VR, and the world was not quite ready. They know they are late and losing this AI race, and are pouring in money to try and get back in it.
  • Apple: losing, but refusing to admit it and properly join the race.
  • GovTech, DSTA, DSO: so many of their projects are AI-related, to the point that it seems that’s all there is. Is our government putting too much focus on AI?

Summary If we look at these 3 (or 2?) bubbles, we see a trend.

  • GraphQL - hyped amongst tech-savvy people
  • Web3 - hyped amongst the younger generation
  • AI - hyped by all, even the boomers

Also, another interesting observation. What does it mean to bring actual, useful value? Yes, we can say that Web3 brought value, as it offered an alternative way to do things. But how I see value is “whether it solves a pain point”. Simply offering an alternative is not enough; it has to solve a problem. ChatGPT solved a problem, saving us the time and effort of Google searches. But Web3? It solved a problem for only a small group of people. Yes, I know regulation is bad, but does it really affect me?

AI would not replace developers, not in the near future

There was an attempt. One of the many AI startups created Devin, with the sole purpose of replacing developers. But it was not up to standard.

What I see AI doing is transforming every developer into a superhuman 10x developer. It lets us breeze through boring menial tasks, like changing the shape of JSON API payloads or renaming variables. But when engineering solutions, AI can only be used as an advisor.
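For a sense of what “menial” means here, this is the kind of reshaping task I have in mind (the field names are made up for illustration): tedious to do by hand across dozens of endpoints, trivial for an AI assistant, and requiring no real engineering judgment.

```python
import json

# Old payload shape: nested user object with split name fields.
old = {"user": {"first_name": "Amy", "last_name": "Tan"}, "id": 7}

# New payload shape: flat object with a single combined name field.
new = {
    "id": old["id"],
    "name": old["user"]["first_name"] + " " + old["user"]["last_name"],
}
print(json.dumps(new))  # {"id": 7, "name": "Amy Tan"}
```

Multiply this by every endpoint in a codebase and you see why offloading it is such a win, and also why it says nothing about the harder judgment calls in actual solution design.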

The way we engineer solutions also has to change. We have to ask: how can our solution make it better or easier for AI to help us? Look at Convex, the first (that I know of, at least) database-as-a-service framework made to let AI help you better. From its use of TypeScript to the way it organises its projects, it puts AI first.

The death of junior developers

Singapore and the tech market

Bringing our focus back home to Singapore, the Singapore government, or NUS, or both (no one will ever know) has been steadily increasing the intake of Computer Science students year by year. They are increasing the supply of software engineers and related roles despite many people complaining about how hard it is to find software-related jobs, and despite computer science graduates’ employment rates steadily decreasing. I’m one of the many graduates complaining about the job market haha. But deep down, if I look at it objectively, I agree with the move. Hear me out.

Why do many tech startups fail? The cost of skilled labour in tech is very high. Compare it to a traditional small company like an architecture firm: an architecture graduate costs at most 5k SGD per month, while a Computer Science graduate expects at least 6k or more, and you see the problem. If you also look to our neighbours, like Vietnam, the cost of a software engineer there is incredibly low, and given the nature of software engineering, they can operate remotely and need not be based in your country of operation. This makes Singapore software engineers that much less attractive. Lowering the cost of software engineers in Singapore by flooding the supply is genuinely a good choice. And even then, the starting pay for computer science graduates in Singapore is still very high compared to other majors.

But how do we compete with countries like Vietnam? The cost of an engineer there is so low that we can never beat it, and with AI, the skill level of their engineers might be comparable to a fresh grad’s in Singapore. As Vietnam becomes more English-speaking and well-educated, it becomes more attractive to global (Western) tech companies. That’s a genuine fear I have. Right now, Singapore is holding on to two threads that make it “attractive” to tech companies. One is its university rankings, which make companies perceive graduates in Singapore to be of higher quality. The second is government policy: for tech companies to operate in Singapore, a percentage of their Singapore office has to be Singaporean hires. Luckily, companies still want to operate in Singapore due to its stability and neutrality. But once both threads are broken, I feel that we are doomed.

I would also like to make a related point about Chinese tech companies. Singapore is in a good position to attract these companies due to its largely Chinese population. These Chinese tech companies cannot be based in countries like Vietnam or Thailand due to their non-Chinese-speaking engineers. This is one competitive advantage Singapore has. However, most of these companies come to Singapore with the goal of becoming more international and expanding their operations beyond the Chinese market. With time, they will become more internationalised, and the competitive advantage Singapore has over them will slowly be lost. They see Singapore more as a stepping stone to venture out into the world.