Published 9 hours ago · 3 minute read
Escaped the AI takeover? It might still get you fired, and your boss may let ChatGPT decide
ET Online

Artificial intelligence isn’t just replacing jobs; it’s deciding who keeps them. A startling new survey shows that employers are using chatbots like ChatGPT to make critical HR decisions, from raises to terminations. Experts warn that sycophancy, bias reinforcement, and hallucinated responses may be guiding outcomes, raising urgent ethical questions about the future of workplace automation.


The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy—the tendency of LLMs to flatter their users and validate their biases. OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made—except now, there's a machine to blame.

Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user’s language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler.

The danger doesn’t end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality—a condition now disturbingly referred to as “ChatGPT psychosis.” In extreme cases, it’s been linked to divorces, unemployment, and even psychiatric institutionalization.

And then there’s the infamous issue of “hallucination,” where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident—and incorrect—they can become. Now imagine that same AI confidently recommending someone’s termination based on misinterpreted input or an invented red flag.

At a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don’t deserve them anymore—and with less understanding than a coin toss.

AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone’s career trajectory? That’s not progress—it’s peril.

As the line between assistance and authority blurs, it’s time for companies to rethink who (or what) is really in charge—and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it’s already making choices behind the scenes, and it’s got more than a few tricks up its sleeve.
