The Great AI Panic: Should HR and Data Engineers Abandon Their Careers?

Data Engineers ask if they should pivot into “AI engineering.”

Product Managers wonder whether copilots will just PM themselves.

Data Analysts fear natural-language queries will make them irrelevant – “after all, people won’t need to learn SQL anymore.”

And domain experts, who’ve spent decades in the trenches, aren’t sure if deep knowledge still matters when an LLM can speak confidently about anything.

Underneath the anxiety is a bad mental model: that AI replaces roles.

I have also noticed this common thread: most believe that learning AI means competing with the PhDs, the long-time researchers, the people who worked in AI long before ChatGPT made it mainstream.

On the other hand, here’s what I’ve been witnessing in the field, talking to customers and AI leaders across industries – even a recent report from MIT’s Project NANDA puts numbers to it:

95% of enterprise AI pilots are going nowhere.

Despite billions invested, most companies see no measurable ROI. The researchers call this the GenAI Divide – the gap between flashy adoption and real transformation.

I see the human side of that divide every week. Smart, capable professionals who feel strangely insecure about their future.

Even techies are having an identity crisis.


So why does this identity crisis exist in the first place?

A big part of it is the AI hype machine. Every demo, every headline, every LinkedIn post makes it sound like AI is a replacement engine: one model to rule them all, one prompt to do every job.

The subtext is always the same – “if the AI can do this, why do we need you?”

The second reason is that most companies haven’t yet connected the dots on how these roles fit together in an AI team. Leaders are still hiring “AI squads” instead of designing cross-functional systems.

That sends a clear signal to everyone else: you’re not part of this future. And until that changes, people will keep feeling lost.

And finally, the narrative is being set by researchers and vendors, not by practitioners. It’s easier to sell the myth of the all-powerful model than to talk about the messy work of building reliable systems. But the messy work is where the real value lies.

And so, professionals not directly involved in AI start questioning their worth. Leaders assume roles are redundant. And projects fail because the team wasn’t engineered like a system.


A story from the field

I’ve seen this play out first-hand, multiple times. On one project, the solution looked flawless in the demo: accuracy charts were glowing and stakeholders were impressed. When it went to production, reality hit: customer complaints spiked, costs increased, and nobody could explain why.

It wasn’t the model’s fault. The data pipeline was brittle and a critical business rule got lost in translation. The person who finally spotted the issue wasn’t an “AI engineer” or a “Data Scientist” – it was a domain expert who noticed a silent failure the model could never catch.

That’s when it clicked for me: AI doesn’t replace the team. It exposes every weak link in the system. If the data is messy, the AI will fail faster. If processes are unclear, AI will make that confusion bigger. AI puts stress on the system, and wherever the cracks are, they’ll show up. And each role – data engineer, data analyst, product manager, domain expert – matters more, not less, when AI is in the loop.


How different roles actually fit in an AI team

AI doesn’t replace these roles – it reshapes them. I know, this sounds cliché now, but stick with me, I will explain.

When AI becomes part of the system, each role becomes a reliability layer that prevents a specific kind of failure. When these roles are missing, you invite incidents.

Data Engineers are the guardians of reliability. Every failed AI rollout I’ve seen has a common thread: messy data pipelines. Schema drift, late batches, broken joins – these don’t just make a dashboard wrong, they make an AI decision wrong. And in production, a wrong AI decision has real business impact.

Data engineers own the plumbing that keeps AI systems from poisoning themselves.
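To make “guarding the plumbing” concrete, here is a minimal sketch of a schema check against a data contract. The column names, types, and the `check_schema` helper are all illustrative assumptions, not a real pipeline – production teams would reach for a proper validation framework, but the idea is the same: catch drift before it reaches the model.

```python
# Minimal schema-drift check: compare an incoming batch against an agreed
# "data contract". All field names here are hypothetical examples.
EXPECTED_SCHEMA = {"claim_id": str, "amount": float, "region": str}

def check_schema(batch: list[dict]) -> list[str]:
    """Return human-readable violations; an empty list means the batch conforms."""
    violations = []
    for i, row in enumerate(batch):
        # Missing columns: the classic "late upstream change" failure
        missing = EXPECTED_SCHEMA.keys() - row.keys()
        if missing:
            violations.append(f"row {i}: missing columns {sorted(missing)}")
        # Type drift: a float column silently becoming a string, etc.
        for col, expected_type in EXPECTED_SCHEMA.items():
            if col in row and not isinstance(row[col], expected_type):
                violations.append(
                    f"row {i}: {col} is {type(row[col]).__name__}, "
                    f"expected {expected_type.__name__}"
                )
    return violations
```

A conforming batch returns no violations; a batch with a missing column or a silently changed type gets flagged before the AI ever consumes it.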

Product Managers are the owners of trust and guardrails. Note this down: AI isn’t a feature, it’s a system. The PM is the one asking: what happens when the model is wrong? How do we fail gracefully? Without that thinking, you end up with a slick demo that crumbles in the wild.

The best PMs I work with now think in terms of “failure surface” and “fallbacks,” not just roadmaps.
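“Failure surface and fallbacks” can be expressed in a few lines. This is a hedged sketch, assuming a hypothetical `model` callable that returns a prediction with a confidence score and a deterministic `rules_engine` as the safe path – the specific names and threshold are illustrative:

```python
def answer_with_fallback(query, model, rules_engine, threshold=0.75):
    """Fail gracefully: if the model is down or unsure, take the deterministic path."""
    try:
        prediction, confidence = model(query)
    except Exception:
        return rules_engine(query)   # model unavailable: degrade, don't crash
    if confidence < threshold:
        return rules_engine(query)   # model unsure: prefer the safe default
    return prediction                # model confident: use its answer
```

The point isn’t the code, it’s the product decision it encodes: someone has to choose the threshold and the fallback, and that someone is usually the PM.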

Business Analysts are the translators of decision logic. Here’s the trap: a model spits out “82% confidence,” and the team blindly routes it into a workflow. That’s how silent failures creep in. Business Analysts step in here: they translate probabilities into business logic – when to proceed, when to escalate, when to stop.

Business Analysts anchor AI outcomes to real operational decisions.
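That translation can be as small as a routing table. A minimal sketch, with the thresholds and action names as illustrative assumptions a Business Analyst would own and tune:

```python
def route(confidence: float, proceed_at: float = 0.90, escalate_at: float = 0.60) -> str:
    """Map a model confidence score to a business action."""
    if confidence >= proceed_at:
        return "proceed"    # high confidence: automate
    if confidence >= escalate_at:
        return "escalate"   # middle band: send to human review
    return "stop"           # low confidence: block the automated path
```

Under these (hypothetical) thresholds, that “82% confidence” output doesn’t get blindly actioned – it lands in the escalation band and goes to a human.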

Data Analysts are the evaluators. The most overlooked role in AI right now. Everyone talks about prompts, few talk about evaluation. Analysts are the ones who stress-test AI outputs, design golden datasets, and measure performance against baselines.

Data Analysts are the conscience of the system – the ones saying, this looks impressive, but is it actually better than what we had?
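“Is it actually better than what we had?” is an empirical question, and a golden set makes it answerable. A toy sketch, assuming a tiny hand-labelled ticket-classification set and an always-majority baseline – all labels and examples are invented for illustration:

```python
def accuracy(predict, golden):
    """Share of golden examples where the prediction matches the expert label."""
    return sum(1 for text, label in golden if predict(text) == label) / len(golden)

# Hypothetical golden set: expert-labelled (input, expected output) pairs.
GOLDEN = [
    ("refund not received", "billing"),
    ("cannot log in", "account"),
    ("app crashes on start", "technical"),
    ("charged twice", "billing"),
]

def majority_baseline(text):
    return "billing"  # the simplest defensible baseline: always the most common label
```

Any candidate model now has a bar to clear: beat `accuracy(majority_baseline, GOLDEN)` on the same golden set, at acceptable cost and latency.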

Domain Experts are the catchers of silent failures. They are the veterans, the people who’ve seen patterns no dataset ever captures. In the case I mentioned earlier, a claims adjuster spotted a flaw no engineer or model could. That’s not luck, that’s domain intuition.

Domain experts bring the knowledge that separates “technically correct” from “operationally disastrous.”

When you look at it this way, the question shifts. It’s not “which jobs does AI replace?” It’s “which failures does each role prevent?” That’s a much healthier, and much more productive way to think about team composition in the age of AI.


How professionals can stay relevant

If you’re feeling the identity crisis personally, shift your mindset.

Stop asking, “Am I being replaced?” and start asking, “Which failure can only I prevent?”

Then evolve your role to make that visible:

  • Data Engineers: Learn data governance principles, data contracts and drift detection. You’re not just building pipelines anymore, you’re building trust in data.
  • Product Managers: Think in terms of failure containment. Don’t just describe features, describe what happens if the AI is wrong. Define how far the error can spread, who is affected, and what safeguards kick in.
  • Business Analysts: Own decision tables and thresholds. Tie AI outputs to real operations.
  • Data Analysts: Be the quality checker for AI. Step up as the evaluation conscience. Build golden sets (test data) and tradeoff dashboards (accuracy vs cost vs latency).
  • Domain Experts: Codify the “obvious” exceptions. Build exception catalogs that models will never see. Learn AI tools to capture them – use coding agents, or low-code workflows.
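An “exception catalog” doesn’t have to be fancy. Here is a hedged sketch of codifying expert rules that override the model – the claims fields, thresholds, and rules are invented examples of the kind of knowledge a veteran adjuster might contribute:

```python
# Hypothetical exception catalog: (predicate, reason) pairs encoding expert
# knowledge that no training set captures.
EXCEPTION_CATALOG = [
    (lambda c: c["amount"] > 50_000 and c["region"] == "coastal",
     "large coastal claims need a storm-surge review"),
    (lambda c: c["policy_age_days"] < 30,
     "claims within 30 days of policy start go to fraud screening"),
]

def apply_exceptions(claim, model_decision):
    """Override the model whenever a codified expert exception fires."""
    for predicate, reason in EXCEPTION_CATALOG:
        if predicate(claim):
            return "manual_review", reason
    return model_decision, None
```

The model still handles the routine cases; the catalog guarantees the known-dangerous ones never slip through on autopilot.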

You’re not just doing a job. You’re preventing a failure class. Put that language in your LinkedIn profile and your CV – pitch yourself differently.


Rethinking team design

The real identity crisis isn’t with the professionals – it’s with leadership. Too many companies still believe in “AI pods,” small squads of model specialists thrown at problems in isolation. That’s not how you deliver outcomes. That’s how you burn money and fuel hype cycles.

AI is a systems problem. And systems need reliability layers. Data engineers prevent data failures. PMs prevent trust failures. Business analysts prevent decision failures. Data analysts prevent measurement failures. Domain experts prevent contextual failures. Strip one of these out, and you invite incidents.

Leaders who get this will start building cross-functional pods around business outcomes. Each role with a clear contract of responsibility. Each team with evaluation baked in from day one.

Interestingly, the MIT report found the same thing: organizations that cross the divide emphasize AI literacy across all roles, not just in specialized teams. The best leaders don’t replace roles, they equip them.

That’s how you move from “AI experiment” to “AI in production.”

And for the professionals stuck in doubt – stop asking if AI will replace you. Start asking what class of failure only you can prevent. That’s your edge. That’s your identity.

Learn AI to power your existing skills, don’t lose your identity.


Ending the Identity Crisis

AI doesn’t erase the map of our roles. It redraws it.

The sooner we see ourselves as layers of reliability in a bigger system, the sooner we move past the hype and deliver outcomes that last.

So, when doubt creeps in, I want you to ask yourself – are you defining yourself by the job title you fear losing, or by the failure only you can prevent?
