Raluca Ghete

Step Right Up: AI in HR Is the Greatest Show We Don’t Understand

👀 Everyone’s talking about AI in HR.

But try using it? That’s where the fun begins.

Here, I share what it’s really like trying to build useful HR workflows with AI tools—from chatbots that can’t answer the same question twice, to a documentation system that worked like magic during the trial… then froze for three days right after I paid for the annual plan. (No one knows why. Still.)

We’re not scaling AI in HR. We’re still poking it with a stick.

But that doesn’t mean we shouldn’t use it.

TL;DR:
AI in HR is the new shiny thing—but most people selling it (and buying it) don’t really know what they’re doing. Not because they’re unqualified, but because the tech itself is still learning to walk in heels. Models change weekly. Outputs are flaky. Everyone’s building slides, not systems.

🕒 Let’s start with a little history lesson...

In 1835, P.T. Barnum pulled off one of the greatest cons of his career. He exhibited a woman named Joice Heth, claiming she was the 161-year-old nurse of George Washington. Crowds paid to gawk. Reporters speculated. Experts weighed in.

She was actually an enslaved woman in her 70s.

And Barnum made a fortune.

Why bring this up in a post about AI in HR?

Because every HR tech vendor right now is pitching you Joice Heth in a chatbot costume. “She’ll automate everything! She’s smarter than your HRBP! She used to work for George Washington!”

🤡 And the crowd goes wild.

Meanwhile, the people actually trying to use AI—inside HR teams, building workflows, wrangling API limits, cursing hallucinated outputs—are quietly sitting in the corner thinking, “This is... fine?”

📚 A quick reality check:

According to recent research published in the International Journal of Academic Research in Business and Social Sciences (2025), integrating AI into HRM brings serious challenges: skill gaps, ethical minefields, employee distrust, and lack of clarity in ownership and implementation.

Some of the more... vivid findings? HR teams relying on AI-powered recruitment tools reported higher rates of false negatives—great candidates being filtered out because they didn’t use the “right” buzzwords. In one example, AI flagged a candidate as unqualified due to a nontraditional job title, even though their experience was a perfect match. Elsewhere, performance management systems driven by AI generated evaluation summaries that didn’t account for interpersonal dynamics or team context—leading to confusion and, in some cases, mistrust.

The study also pointed out how AI systems can unintentionally reinforce existing biases when trained on historic (and often biased) company data. And HR managers are rightfully nervous about the opacity of these tools—especially when they can’t explain why the system made a specific decision.

Even the most advanced models can't fully replicate what a good HR leader sees in a messy, real-life situation. That doesn’t mean we shouldn’t use it—it just means we need to use it responsibly.

I’m not against AI in HR. Far from it. I use it every day. But we’ve got to stay realistic about what it can (and can’t) do.

Let’s lead with curiosity, not naivety.

💡 What I'm seeing out there:

  • "AI-first HR platforms" that are basically Airtable with ChatGPT duct-taped on

  • Teams told to "use AI for performance reviews" with no training or guardrails

  • HRIS AI chatbots that give one person a perfectly helpful answer about the paid leave policy, then tell someone else asking the exact same question that they don't have that information

  • Same tool, great results one day, nonsense the next—no one knows why

  • An AI documentation system I trialed that gave glorious results—fast, clear answers, clean structure. I was so impressed, I bought the annual subscription. A week later, it froze for three days straight. Customer support? Radio silence. When it finally came back online, no one could explain what happened—or whether it would happen again.

📏 Why it’s (still) so messy to actually use AI in HR:

  1. Models change too fast
    That prompt you perfected last week? Broken today. Tools are updating faster than People teams can even test. It's like trying to run payroll while the tax code rewrites itself every Thursday.

  2. Outputs are inconsistent
    Sometimes brilliant, sometimes blank stares in the form of text. If you’re using AI to auto-generate sensitive stuff like feedback, comp rationale, or employee comms—you need consistency. AI doesn’t do consistency. Yet. (There’s a small sketch after this list of one way to at least catch the drift.)

  3. Nobody knows who should “own” it
    IT? People Ops? That one guy who likes Zapier? Everyone's poking around, but no one's driving the bus.

  4. It’s not plug-and-play
    These tools need context, structure, and testing. But we treat them like vending machines. Plug in "performance feedback" and out comes a glowing paragraph—meanwhile, the employee in question hasn’t submitted a project on time in six months.
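
If you want some early warning on points 1 and 2, here’s a minimal sketch of one way a People team might catch the drift, assuming the OpenAI Python SDK and an API key in your environment: pin an exact model snapshot, keep the temperature low, and re-run a handful of “golden” inputs whenever the prompt changes or the vendor ships an update. The model name, prompt, and keyword checks are illustrative stand-ins, not a recommendation.

```python
# Minimal prompt regression check for a People-team workflow: pin an exact model
# snapshot, keep temperature low, and re-run a few "golden" inputs whenever you
# change the prompt or the vendor ships an update. Model name, prompt, and
# keyword checks are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

PROMPT = "Rewrite this policy sentence in plain language an employee can act on:\n{sentence}"

# "Golden" cases: inputs you rely on, plus words the rewrite must still contain.
GOLDEN_CASES = [
    {"sentence": "Employees accrue 1.75 days of paid leave per completed month of service.",
     "must_mention": ["paid leave", "month"]},
    {"sentence": "Remote work requests require manager approval five working days in advance.",
     "must_mention": ["manager"]},
]


def passes(case: dict) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-2024-08-06",   # pin an exact snapshot, not a moving alias
        temperature=0,               # reduces (but does not eliminate) run-to-run variance
        messages=[{"role": "user", "content": PROMPT.format(sentence=case["sentence"])}],
    )
    answer = response.choices[0].message.content.lower()
    return all(word.lower() in answer for word in case["must_mention"])


if __name__ == "__main__":
    for case in GOLDEN_CASES:
        print(("OK  " if passes(case) else "FAIL"), case["sentence"])
```

It won’t make the outputs deterministic, but it turns “the tool feels different today” into something you can actually point at in a ticket.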

🧠 So what to do instead:

  1. Admit we're all learning
    Make space to experiment. Make it someone’s actual job to run an "AI sandbox": test, document, iterate, and keep notes on what works.

  2. Start small & useful
    Pick one real workflow you touch regularly. For example: use AI to generate first drafts of interview questions for recurring roles—then refine with your hiring manager. Try summarizing 360 feedback into a few clean themes for easier calibration (there’s a small sketch of that flow after this list). Or use it to rewrite policy docs into plain language people might actually read. These are practical, repeatable use cases that can save time without risking a PR disaster.

  3. Don’t expect magic
    AI doesn’t fix broken systems. It just makes your bad process faster. And prettier.

  4. Pair humans + AI
    Let the tool draft it. Let the human edit it. Treat it like a junior coworker, not an oracle.
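
To make the draft-then-edit idea concrete, here’s a minimal sketch of the 360-feedback-to-themes flow mentioned in point 2, again assuming the OpenAI Python SDK. The model name, prompt wording, and sample snippets are stand-ins for whatever your survey tool actually exports, and nothing gets shared without a human saying yes.

```python
# Draft-then-edit sketch: the model proposes themes from anonymized feedback and
# a human signs off before anything is shared. Model name, prompt wording, and
# sample snippets are stand-ins for whatever your survey tool exports.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

feedback_snippets = [
    "Great at unblocking the team, but meetings run long.",
    "Clear priorities this quarter; onboarding docs were outdated.",
    "Would like more regular 1:1s and faster feedback on proposals.",
]

prompt = (
    "Group the following anonymized feedback into 3-5 themes. "
    "Give each theme a short, neutral label and one supporting quote. "
    "Do not guess at names or attribute comments to individuals.\n\n"
    + "\n".join(f"- {s}" for s in feedback_snippets)
)

draft = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    temperature=0.2,
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print("=== DRAFT THEMES (for human review, not distribution) ===")
print(draft)

# Human-in-the-loop: nothing leaves this script without an explicit yes.
if input("Approve this draft for your calibration notes? [y/N] ").strip().lower() != "y":
    print("Draft rejected. Edit the themes by hand or adjust the prompt and re-run.")
```

The point isn’t the script; it’s the shape: the model drafts, a person decides.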

🤖 Final thought:

There is potential here. AI can reduce cognitive load, free up your team, improve consistency. But we're not in the "scale" phase yet. We're in the "poke it with a stick" phase.

Half of us are still writing prompts like we’re casting spells.

And honestly? That’s fine.

Just don’t sell me George Washington’s babysitter.

🔗 Want to experiment more with AI in HR?

Good. Because this post is the start of a short series on actually useful ways to apply AI in People work. No hype, no "future of work" hand-waving. Just hands-on ideas, tools, and experiments—grounded in how I'm using these things every day.

👀 Here's what’s coming in the series (for now—this is a working plan, and if there’s something you’d rather see covered first, I’m open to changing course):

Episode 1: Drafting people policies using AI tools—and pairing that with an AI-powered "pre-mortem" to simulate employee reactions before rollout

Episode 2: Using AI to analyze open-text feedback from engagement surveys and turn it into themes and insights

Episode 3: How to use ChatGPT’s new Projects feature with custom instructions to organize and scale your People workflows

Episode 4: Building your own custom GPTs to reflect your tone, values, and people practices (and finally stop copy-pasting prompts into random chats)

Episode 5: How to stay current with AI in HR—newsletter recommendations, trusted voices, and how not to drown in hype

Episode 6: A curated toolkit of AI tools for People teams—with specific use cases, not just logos

📬 Subscribe now so you don’t miss an episode. Each one’s designed to be short, practical, and immediately useful—just like a good HR process.
