


Photo by Roberto Nickson: https://www.pexels.com/photo/breathtaking-kalalau-valley-mountains-scenic-view-in-kauai-hawaii-2559941/

Let's work on the fearmongering bit, shall we?

Devesh Kumar


Sat Mar 14 2026
21 Min Read

"We're so cooked" is a statement I've heard more times than I can remember, from my colleagues at work, from my siblings who are still in college and surprisingly from my mother, like what?! Mom, what are you talking about?! Turns out my mother is also fully aware of the "impending doom" of the economy with AI. Thanks to Instagram, I guess? I regret teaching my parents how to use Instagram.

"Beta, are you safe?" my mother asked about my job, in a tone that, to put it simply, was the quintessential example of how worried a mother gets when she feels her child is in danger. If the same question had been asked 3 years ago, when ChatGPT launched its coding models, I would have said: "Safe? I won't just be safe, I will thrive." But today, with almost 3 years of reflection and a lot of thought put into what is about to happen, even in the grander scheme of things, I don't feel inclined to say the same.

Just like all of you, I've been reading up on and trying to wrap my head around all the sensationalizing that's going on around AI. 3 years ago, if you had asked me whether the thought of AI replacing knowledge-based human work was scary, I would have agreed, with a slight touch of "But could it really happen, though?" But in no way was I convinced that it would happen this quickly.

Well, guess what? Things are starting to show signs that it will happen, sooner or later.

In such a time, I think fearmongering isn't going to help anyone. So I thought of writing something to compile my thoughts on this topic, and I hope this resonates with a lot of you.

In no way do I have a crystal ball that predicts the future. These are my thoughts, but they've been compiled with weeks of research and years of thought and observation of reality. That being said, I am no economist or historian. Take this whole thing with a grain of salt. And if any of these predictions go wrong, you don't, or at least shouldn't, know where to find me.

This is going to be a slightly long post because of the obvious need to clarify a lot of things. We'll cover the following topics:

  • People spend a lot of time worrying about the extreme outcomes while ignoring the middle ground.
  • It's in the best interest of companies to fearmonger.
  • People and society are resistant to change. Social signalling and constructs ARE important, no matter what we choose to say, and so are redundancy and choice.
  • Humans are terrible at predicting the future.
  • The fundamental disconnect of the people in charge from ground reality.
  • Humans have varied tastes.
  • The demand-supply paradox and the paradox of increased productivity.
  • A peep into the worst-case scenario.
  • The people most valuable and good at something tend to fare well anyway.
  • The bigger picture: There is still no lack of opportunities and problems left to solve that AI will not do for us.

I am not part of the cohort that boarded the "AI will not take your job, someone using AI will" train. I believe LLMs can reason better than most people, and there are jobs that humans are better off not wasting their time on. We've had a long history of people complaining about how soul-crushing their jobs are and how overworked they still are.

I'll just start this whole thing with a quote I live by:

"History doesn't repeat but it rhymes." - Mark Twain

People spend a lot of time worrying about extreme outcomes while ignoring the middle ground.

Okay, let's get the elephant in the room out of the way: This post isn't going to be about "Oh you have nothing to worry about, you're all going to be fine." Because that would be false. Everything is NOT going to be okay; there will be hundreds of thousands of people who will lose their jobs and hundreds of thousands more who will see their pay reduced or their responsibilities increase.

So what is this section about? It's to convince you that we don't have to worry about the extreme outcome that EVERYTHING will change; it won't. Trust me when I say this: humanity has gone through several technological revolutions, and almost always the prediction was that things would be very different. Guess what? They were, undeniably, but not to the extremes that people predicted. We tend to forget that humans are change-resistant (More on this in a moment), and the path forward for any technology is usually the middle ground of what people predict.

So yes, while there will be a lot of change, it WON'T (And I say this with 100% certainty) be that everyone is out of a job and there is a massive shift in the economy. That just doesn't happen with any technological revolution, no matter how good the technology is. We have a lot of things in the world that should have been obsolete decades ago. Ask Germany or Switzerland what they're doing with paper letters when everything could literally be done over email.

On the bright side, AI DOES increase productivity, but it increases it at the layer that deals with raw implementation, at least as of now, and that's precisely what these AI companies are targeting. And if your entire job revolves around raw implementation (Coding, formatting Excel sheets, creating PowerPoint presentations to be cited in meetings, or data entry), I hate to break it to you, but your job was going to be cut sooner or later WITHOUT AI as well.

It's in the best interest of these companies to fearmonger.

Let's talk about what it is that these AI companies have been selling for the past 4 years. They're not selling YOU AI services, they're selling investors a dream. And that dream is of an ultra-efficient world where AI is capable of handling everything, and the benefits of productivity (or the economic gains of large companies reducing headcount) are captured by these AI providers. AI companies like OpenAI and Anthropic (+ Whatever Perplexity is doing) have been predicting that it will all happen in less than 2 years.

Granted, that vision was nowhere to be seen as reality in the last 3 years, but over the last year, it has started to take shape and show signs on the horizon. But the prediction was still way off.

The fearmongering that AI is coming for your job is driven by greed for investor capital. If you don't position yourself as the de facto disruptor in the AI space, the one that will bring on the world of automation, investors aren't going to give you a second look when every other AI company is promising them vast riches, posing as the bellwether of the new world.

I'm sure we're nowhere close to Artificial General Intelligence (AGI), the point at which our economy really would need to shift significantly to accommodate not just the productivity gains, but the massive job displacement and the lack of consumers in the economy. But just know that every statement, every promise and every bit of progress is driven by incentives. And whenever you hear something that sounds like fearmongering, remember that there are incentives behind it. It's either someone trying to sell an AI product, or an AI course (Which, let's be honest, holds no value right now; it's literally just training on how to use ChatGPT - CHATGPT?! If you can't use ChatGPT without training, you deserve to be replaced by AI).

We have had companies promising fully self-driving autonomous cars long before this AI hype cycle started, and we still aren't there. But Devesh, Waymo exists, right? Yes, and I am willing to bet that it uses real drivers remotely as a fallback for when their algorithms do not work. You can't rely on automation alone to do everything; that's a recipe for disaster. If anything, this makes the remote driver's life easier because they're not on the road, in danger, driving cars, reducing the probability of fatal accidents.

People are change-resistant. Social signalling and constructs ARE important.

Let's talk about the most important point that I could possibly make in any context. People (and society in general) are resistant to change, not by choice but because societal constructs and signalling are important to the world we live in.

It is very easy to be at the top of a 100-storey building, look down on everything, and think everything is a dot on the map. But in each of those dots, there are an infinite number of nuances, because those dots are comprised of people. The favourite pastime of Silicon Valley is to think it knows every problem society is dealing with, that whatever it's building solves a massive problem everyone needs solved, and that society would cease to function without it. But it misses the fundamental point: it's disconnected from the everyday populace it's trying to build for. Every single person outside the glass offices is a human living their life based on societal norms.

The Internet was widely adopted in the early 2000s, making information free and widely accessible. That by itself should have wiped out the utility of schools and colleges, but it didn't. Because sending your kids to school is an important construct, and it's about more than just the education they receive; going to college and getting a degree is an important construct too. Even though everything you could learn there is freely available (And, I'd argue, taught better) by a tutor on YouTube.

The point is that different products and services sell different things to consumers than what we see on the surface. Schools are not just knowledge-imparting machines but also a way for parents to have their kids educated and taken care of while they're away. Colleges, at least in their marketing brochures, are more about building soft skills, social circles, sports facilities and prestige.

The phenomenon of Silicon Valley thinking it can replace every line of work and change every line of consumption with AI is similar to how oblivious management consultants have been for a long time. A classic story is the management consultant who tells you to fire the office boy whose job has always been to manage the printer.

"You can just automate that task and let this guy go. The role is redundant," the consultant says, and you do so. One week later, your office is disappointed, because the guy you fired didn't just handle the printer; he handled office supplies, ran to the store to get the office coffee, managed the desks and a lot of other responsibilities that didn't exactly have a title.

What I'm saying is: There's more to things than meets the eye for even the smallest of things. And anyone thinking everything will go away is missing the point.

Another example I can give you is the fact that embassies should have gone away with access to instant and secure communication channels (Embassies used to be a way for countries to communicate and maintain diplomatic relations), yet we still have embassies, and the list and need for ambassadors keeps growing, especially in the world we live in today. This is simply because embassies and ambassadors are not just communication proxies; they are diplomatic representations of one country to another, and given we live in a society, it is important for a country to signal, and that's done via ambassadors and ever-larger embassies. However much it might cost.

Even though most office work can be done remotely over the Internet, the last time I checked, commercial real estate is still there, and most offices are still operating. Just because something can be done doesn't mean society will adopt it completely. There are middle grounds to every change, and society doesn't build from scratch; it is like a codebase where "If it isn't broken, why would you fix it?" applies, so society patches behaviours with changes rather than eradicating them overnight.

Humans are terrible at predicting the future

Everyone predicts a version of the future they believe is the most likely to materialize. We, however, tend to forget that we as a species have been terrible at predicting the future, forever.

Pick any predictions of how the world would have looked in 2025 from 50 years ago, and they're all off by monumental margins. The kind you can't even comprehend.

Hindsight is an amazing tool, but the problem with it is that you only get it after you've crossed the point in time you're reflecting on.

We tend to ask our minds to walk the paths we are aware of while ignoring every single other thread we aren't aware of. The reality is that the world is a machine running on parts we don't even know exist. No one knows the whole picture, and anyone trying to know which direction the machine will move can only do so after the machine has done so. Why? Because every part of the machine was assembled and added by different people who had nothing to do with each other.

So every single prediction of the future should be taken with a cubic metre of salt, and so should the ideas coming out of the mouths of CEOs and VCs on LinkedIn who have vested interests in whatever future they're selling. Jensen Huang saying he expects a $500K engineer to burn through $250K worth of tokens sounds fine until you realize he directly profits from it (Moreover, burning through even $5-10K worth of tokens is insane; $250K is near unachievable for anyone).
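To see why that number is so far-fetched, here's a quick back-of-envelope sketch. The blended price of $10 per million tokens is an assumption for illustration only; real per-model prices vary widely:

```python
# Back-of-envelope check on the "$250K of tokens per engineer per year" claim.
# The price below is an assumed blended figure, not an official rate.
BLENDED_PRICE_PER_MILLION_TOKENS = 10.0   # assumed $ per 1M tokens (input+output blended)
ANNUAL_TOKEN_SPEND = 250_000.0            # the claimed annual spend per engineer, in $
WORKDAYS_PER_YEAR = 250

# Convert dollars to tokens, then spread the total over a working year.
tokens_per_year = ANNUAL_TOKEN_SPEND / BLENDED_PRICE_PER_MILLION_TOKENS * 1_000_000
tokens_per_day = tokens_per_year / WORKDAYS_PER_YEAR

print(f"{tokens_per_year:,.0f} tokens per year")      # 25,000,000,000 tokens per year
print(f"{tokens_per_day:,.0f} tokens per workday")    # 100,000,000 tokens per workday
```

Even at that generous assumed price, $250K a year works out to roughly 100 million tokens every single workday, which is orders of magnitude beyond what one engineer plausibly consumes.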

This post is no exception; this is just a compilation of my opinions and some of the research I've done, but this in no way makes a claim on what will happen. I have vested interests in the world going ahead the way it has always been, and I am not free from the worry of an uncertain future. I am just as lost as everyone else is.

We don't know WHAT is coming, but that might be a good thing because we're terrible predictors of the future, and I, being a perpetual optimist, can only be hopeful.

Humans have varied tastes

We have clothes from Bangladesh produced for next to nothing; there are hand-woven clothes that go for a premium. Yet, there is a huge market for both.

We often forget that humans have different tastes and that there exist different market segments for the same things. This means that even if something can be produced cheaply with AI, there will still be a market for services that have humans involved, albeit maybe not that big, but not completely gone.

Ask yourself this: if you knew a movie was entirely produced with AI, would you prefer it over another that was produced by a human? There’s your answer.

When technological advancements materialize and a product becomes a commodity, demand for products and services that have human craft associated with them increases, and so do their prices, because, shocker, humans like work done by other humans.

Remember, Google Docs and Microsoft Word exist, Google Meet and Zoom exist, and so do Slack and Traditional Email. Sometimes all of these exist in unison, in the same company across teams, because different teams and different people prefer different things. If you've ever visited the home page of a SaaS company and noticed the same set of logos (customers) everywhere, it's not a mistake or a scam. Companies are so large that one part uses Google Meet, another uses Zoom and so on. The same company can have wildly different policies for different parts of the organisation, one could be allowed to speed through using LLMs and another, mostly compliance and legal departments, might not even have clearance to use the internet.

All I'm saying is, the world is big enough to ensure everything exists simultaneously because tastes, preferences and priorities DO matter.

It is Silicon Valley's arrogance to think they know all the problems that exist in the world and that they can solve all of them.

Today's problem: The fundamental disconnect of the people in charge from ground reality

The problem today isn't necessarily AI; AI is a great productivity booster. Rather, it's ignorant and oblivious people at the top believing that anyone can work their way to anything they need, and fearmongering about it on LinkedIn.

Contrary to popular belief by the people at the top, asking a model to "act like you're a CEO" or "think like you're a software engineer" only changes the tone of how an answer is presented by an LLM; it doesn't change the quality of the output, and I'm saying this based on experience.

LLMs are Dunning-Kruger machines, and it's my personal belief that CEOs should stop posting on LinkedIn until they know what they're talking about. Because most of the time, they don't. This entire hype cycle started, and keeps running, because the people at the top talk about AI in guarded-off circles and refuse to admit that they don't know what it is, and that there might not be vast riches to be unlocked for everyone (I asked one of these people where the AI wave is heading and, I kid you not, the reply I got was verbatim: "Everyone's doing it, and even Sam Altman doesn't have an answer for where it's going").

I've been part of these circles before, and let me tell you, everything in these circles is running on FOMO, from the Ivy League grads who cut the checks to the CEOs who oversee the vision for companies. They're almost always disconnected from ground reality.

This brings us to a very important point about the working world: Specialisation.

Our society has been built on specialisation, and the more developed an economy gets, the more common "This isn't exactly my job so I'm going to pay someone else to do it" becomes.

So we might not like to admit it, as most LinkedIn hype-posters suffering from the Dunning-Kruger effect don't, but there is still some knowledge needed to prompt your way through problem statements (This will most likely be invalid in a few years, but it holds for now). Admitting that, however, isn't good for the reach of the "X just killed Y" posts.

What we're seeing is automation of the lowest layer of the specialization stack, stuff like writing code for software engineers.

First of all, these are all severe copyright infringements, and I'm not exactly sure how anyone is okay with this (Also because all models have been trained on YOUR data, your designs and your code, for free). But secondly, the job was never just about writing code. Where you'll run into issues is the architecture: the code will eventually become messy and tangled if you're not reviewing it constantly.

Specialization in core skills is still important. Not as important as it was before, but still important, to detect when the LLM is bullshitting and to understand what it's generating.

The demand-supply paradox and the paradox of increased productivity

Throughout history, when something is intrinsically valuable, say raw materials like aluminium, iron and petroleum, economic incentives dictate that humans will find a way to consistently lower the cost of production and consumption.

There exists this paradox, where, when the cost of production and distribution for something goes down, the demand for it skyrockets.

Remember, there was no such thing as a petroleum industry a few hundred years ago, and today the world doesn't run without it. Humans know "consuming" better than anything. If you throw something at humans, they'll find a way to use it.

Even software and human resources follow the same pattern. When Excel came around, many jobs for spreadsheet builders went away, but the number of companies being opened increased, and so the demand for people with spreadsheet management knowledge stayed consistent.

We could see something similar with AI; we have entire economies running on content because producing content for social media has become super cheap and convenient. We could see millions of new companies spring up that absorb the job losses, even though not at the same pay, but people won't be out of work.

The barrier to building something for consumers and running a business has fundamentally been reduced. This is a second-order effect of AI where job displacements happen, but the larger economy grows with productivity gains, and people find something to do regardless.

We've already seen this self-employment boom in South Korea with its struggling economy (More on this whole thing in a while). And let's not forget, every single economic model has predicted that we'll do less and less work as we grow richer as a country. I don't see that having happened.

There is also another paradox, the paradox of increased productivity: work doesn't reduce; it increases to fill the gap created by the productivity gains. This is clearly evident from the fact that most companies today are churning out more work than ever, yet no one is leaving the office before 5 pm.

This means that the beneficiaries of these gains aren't the workers but the employers. Which is natural: every single time we've had an industrial revolution, we've just found more work to do, and I don't expect this time to be any different.

Leadership teams have already started realizing that 2-year backlogs can now be cleared in 2 months. That isn't exactly a bad thing, because most backlog items weren't high-priority in the first place and consisted mostly of gruntwork, so if AI is taking care of that, it's a good place to be. But don't expect work to reduce anytime soon; in fact, brace for more work coming your way.

AI-assisted productivity gains might not exactly be a bad thing

One thing to realize is that most developed countries are facing labour shortages and an ageing population, with the onus of productivity on fewer and fewer people today.

Pension funds and social security funds in most developed economies are under severe strain due to dwindling working populations carrying the weight of supporting more and more people who aren't active contributors to the economy.

In such a situation, AI-assisted productivity doesn't sound like that bad of a trade. The problem would be if greed gets in the way and AI replaces the very people who were keeping the productivity loop going.

Worst-case scenarios: Social Employment, UBI, and learnings from Remote Villages

I've always wondered, walking through extremely remote alpine villages, how people survive there and how they've been doing so for thousands of years. Turns out, the modern economy as we know it is built on growth, but these places are built on survival and sustenance. Everything you need is part of the local ecosystem: someone grows fruit, someone prepares meat, everyone trades, and so on. Keep this thought in your mind.

Let's imagine a scenario where every single job is lost to automation and robotics. Truck drivers? Gone. Engineers? Gone. Finance bros? Gone.

What happens in a society like this? First of all, the economy collapses, because there's a fundamental disconnect: companies keep trying to produce goods and services, but consumers don't have any money to purchase them. You need employment to run the economic flywheel.

If that were to happen, no one actually knows what the outcome would be. Japan has this concept of "social employment", something you would also see in places like Hong Kong. Older workers with skills that have long been replaced are kept in the workforce and paid a respectable wage for jobs that aren't needed. When exiting the Hong Kong metro, you'd see older folks holding signs of where to go. Is the job needed? Absolutely not. Are these people employed out of respect? Yes.

One can make the argument that this cannot happen in a society where the very notion of profiting from automation means laying everyone off, and they would be right.

Universal Basic Income and social employment are concepts that apply to societies with empathy towards their populations. But rest assured: in a society where no one is employed and we still have a government, that government will be forced to implement these measures sooner or later. This is the worst-case scenario, and I'm pretty sure we won't have to get there.

Coming back to the remote village example: that system works phenomenally there, but fails spectacularly when everyone lives in cities where resources are limited and crunched. Regardless, we have societies that are completely disconnected from the outside world and still survive. This is a bleak view and definitely not something I believe in, but thinking it through frees people from the mental agony of wondering what the worst-case scenario is, and trust me, it's not as bad as we think.

Germany lived through hyperinflation, the Holocaust, and two world wars and still became the largest economy in Europe (Often cited as the German Economic Miracle). Several cities across the world (Including the one I live in and love - Delhi) have been war-torn several times over and still continue to operate. Entire empires have fallen, and people still get on with their lives. Resilience and perseverance are defining traits that make us human.

Jump back: The demand and reward for people good at their job actually go up even with commoditized goods and services

LLMs are statistical in nature, so anything they create is a statistical derivation of everything they've been trained on. This means someone who is creative and structured can still outperform an LLM in quality of output (Not necessarily speed, of course). Humans and LLMs play in different markets: one prioritizes speed, the other quality of execution. I believe a balanced mix of the two is the best outcome.

Trust me, LLMs are not known for quality because the vast majority of information they're trained on is either biased or not of high quality. Most code is garbage, most patterns used in software engineering are anti-patterns, and most books belong in the trash can.

This means that anyone who's even slightly good at their job, and good at pointing out where the bullshit sits in the stack, is at an unfair advantage. Not necessarily in productivity, but in quality. While LLMs commoditize things that were previously considered bespoke, they increase the value of creativity multi-fold. Graphic design today is considered a commodity, generatable and downloadable instantly, yet the value of a graphic designer who's even slightly better at their work than most has actually increased.

As highlighted in the previous section, the world is big enough for both of these to exist. Even while dwindling in number, there are enough people with taste to make a sufficient client pool in any profession.

The bigger picture: There is still no lack of opportunities and problems left to solve that AI will not do for us.

I mean it when I say that humans have not even scratched the surface of what we can do.

This is going to sound blunt and rude, but I think we all need to hear it. Somewhere between trying to build a life and working, we forgot that humans have time and again found ways to keep ourselves occupied.

What if the frontiers of what we're supposed to work on haven't even opened yet? What if this is the exact break a large and productive society needed to come back to its senses and say, "Holy shit, we were so busy building B2B SaaS that we took our eyes off the big picture"?

Remember, the current structure of human work optimizes for survival, not growth. We still haven't sent a human to another planet, we still haven't solved for 100% adoption of renewable energy, we still haven't gotten rid of poverty and hunger, we still haven't found the cure to every life-threatening disease, we still, as a species, haven't found concrete evidence of extraterrestrial life, and we still haven't scratched the surface of exploring our own Earth fully, let alone the cosmos. These are just a few examples of the extent of the problems we have at our disposal to work on, and yet we sit in our chairs and worry about AI taking our jobs.

Granted, there need to be economic incentives for humans to do anything. But that again is a symptom of a system that rewards survival instincts, not growth instincts. Growth will happen when we are free to think and work on something new. And as for the current state of AI, it can only do for us what it can infer from its training set (Until AGI comes in and renders this paragraph obsolete). So it is our time to start thinking and creating the industries of the future.

And who knows, every adversity has only helped humans evolve. We had a world running on autopilot for a while. This might be our cue to continue evolving.


My entire takeaway from this whole situation is: keep your mind open, and your options too. We're in unprecedented times, and if this is any consolation, this time, whatever comes at your door will also come at everyone else's.