Learning AI #13

Can AI prep me for an interview?

Is the job interview dead?

You want to apply for a role as a senior consultant. Such a position requires a bachelor’s degree in business, math, or engineering, and a minimum of 10 years’ experience, with some of it managing a team. Of course, you have one of the AIs polish your resume and tune it to the job description.

You feed your resume into the employer’s hiring portal and wait.

The hiring manager uses AI tools to screen a few hundred resumes to narrow them down to a group of a dozen candidates from which two senior consultants will be hired.

The AI screens the resumes for keywords like type of degree, years of work, and management experience. It goes further, checking fit based on your social media activity and a submitted writing or coding sample. The AI selects your resume for the interview group and presents the hiring manager with a summary of your qualifications.
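The keyword screen described above can be sketched as a toy filter. The categories, keywords, and pass rule here are invented for illustration and are not any vendor's actual algorithm:

```python
# Illustrative keyword screen: the categories and keywords below are
# assumptions made up for this example, not a real vendor's rules.
REQUIRED_KEYWORDS = {
    "degree": ["bachelor", "business", "math", "engineering"],
    "experience": ["10 years", "senior"],
    "management": ["managed", "team lead", "led a team"],
}

def screen_resume(text: str) -> dict:
    """Count keyword hits per category in a resume."""
    lowered = text.lower()
    return {
        category: sum(1 for kw in kws if kw in lowered)
        for category, kws in REQUIRED_KEYWORDS.items()
    }

def passes_screen(text: str) -> bool:
    """Advance a resume only if every category has at least one hit."""
    return all(count > 0 for count in screen_resume(text).values())

resume = (
    "Bachelor of Science in Engineering. 12 years of consulting, "
    "including 10 years at Acme, where I managed a team of eight."
)
print(passes_screen(resume))  # prints True: all three buckets hit
```

Note what this toy filter rewards: exact phrasing. A resume that says "a decade of experience" instead of "10 years" fails the screen, which is exactly the brittleness the candidate-side AI learns to exploit.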

Now, the fun starts.

If the employer can use AI, why can’t the candidate do the same?

Now that your AI-enhanced resume got you in the door, why stop there? The AIs suggest the following approach:

Step 1: Feed the job description to three AI systems and ask each one: "What questions is this employer likely to ask, and what are the ideal answers?"

Step 2: Take those AI-generated answers and feed them back to a different AI system playing the role of a hiring manager: "Based on these responses, would you advance this candidate?"

Step 3: Have AI evaluate AI's own output: "Are these answers authentic, or do they sound coached?"
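The three steps above amount to a simple orchestration loop. The sketch below uses `ask_model` as a hypothetical stand-in for whatever LLM API you prefer; the model names and prompt wording are assumptions, not a specific product:

```python
# ask_model is a placeholder for a real chat-completion call; the model
# names ("model-a" etc.) and prompts are assumptions for this sketch.
def ask_model(model: str, prompt: str) -> str:
    """Placeholder: swap in a real LLM API call here."""
    return f"[{model}] response to: {prompt[:40]}..."

def prep_for_interview(job_description: str) -> dict:
    # Step 1: ask several models for likely questions and ideal answers.
    drafts = {
        m: ask_model(m, f"Job description:\n{job_description}\n"
                        "What questions is this employer likely to ask, "
                        "and what are the ideal answers?")
        for m in ("model-a", "model-b", "model-c")
    }
    # Step 2: a different model plays hiring manager and judges each draft.
    verdicts = {
        m: ask_model("model-d", "Based on these responses, would you "
                                f"advance this candidate?\n{answers}")
        for m, answers in drafts.items()
    }
    # Step 3: AI evaluates AI's own output for coaching tells.
    audits = {
        m: ask_model("model-e", "Are these answers authentic, or do they "
                                f"sound coached?\n{answers}")
        for m, answers in drafts.items()
    }
    return {"drafts": drafts, "verdicts": verdicts, "audits": audits}

result = prep_for_interview("Senior consultant: 10+ years, team management.")
print(sorted(result))  # prints ['audits', 'drafts', 'verdicts']
```

The point of the sketch is structural: the candidate's prep is literally AI critiquing AI, with the human only reading the transcript at the end.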

The AI will likely help you practice the STAR method for interviewing, in which you describe a Situation, Task, Action, and Result. So your answer to the interview question, “Tell me about a time when you dealt with an ethical dilemma,” will not be spontaneous but something you rehearsed for hours in your AI prep.

Hiring managers are increasingly relying on, not just using, automated AI algorithms that screen candidates by pattern-matching them against previous hires. Boring and biased.

If you have recently been in the job-interview market, there is a good chance you have used AI in some manner to enhance your chances. One theme of this newsletter is that if you are not embedding AI into your thinking while others are, you are falling behind by standing still.

But here is the problem with the employer’s use of AI: the tools match candidates against the patterns of people who have succeeded in the role before. The AI makes no allowance for out-of-the-box thinking.

This is a form of bias, and once companies realize all the candidates look the same, something will change, such as spending face-to-face time with people before eliminating them from the hiring process.

But in the scenario we describe in this post, is the best candidate getting hired, or is the person who is best at winning the AI vs AI game getting hired?

The problem with the battle-of-the-bots is that an average candidate can use AI to prepare and sound just as good as the naturally strong candidate.

The signal that interviewers thought they were reading (natural fluency, quick thinking, self-awareness) is now a skill that can be rehearsed to near-perfection overnight.

I know many people at different stages of their careers looking for a new job. Every one of them is frustrated and demoralized by the machine-driven process that controls their fate.

I once worked at a firm that grew from a startup to over 3,000 employees, and every new hire had to know somebody at the company. Call it old-school, but it worked then and still works today.

AI tools have made hiring managers lazy and complacent. They no longer search for the “best athlete” or the person with a unique background. They sit back and let the AI rank a candidate based on how well the candidate optimized for the algorithm.

Give credit to the job-seekers who figured this out, but don’t mistake them for qualified candidates. There are thousands of people with perfect or near-perfect standardized test scores who cannot carry on a conversation. The algorithms surely pick these up as “qualified.”

Hiring managers are forgetting what it was like to spend 30 minutes in person sizing up a candidate. Such face time is now reserved for one or two finalists out of that pool of hundreds of candidates.

My advice to friends seeking new positions is simple: get out and talk to people. Create human touch points that you can leverage to circumvent the system and get a fair shot at that dream job.

 

Things I think about

The cornea is the only tissue in the human body that has no blood supply. It gets oxygen directly from the air. 

Response from ChatGPT

Is the job interview dead?

What’s Changing (and Why People Think It’s “Dead”)

1) AI is replacing early-stage screening

Companies are using tools from firms like HireVue and Pymetrics to evaluate candidates before a human ever gets involved.

  • One-way video interviews (record yourself answering prompts)

  • Algorithmic scoring (speech patterns, keywords, even facial cues—controversial)

  • Automated resume + skills filtering

👉 Result: fewer traditional “sit-down” interviews at the front end.

2) Work samples are replacing talking

Employers increasingly want proof over promises.

  • Coding challenges (engineering)

  • Case studies (consulting, strategy—think McKinsey & Company style)

  • Paid/unpaid trial projects

  • Portfolio reviews

👉 Translation: “Show me you can do it” > “Tell me about a time you did it.”

3) Interviews are becoming shorter—but higher stakes

Instead of 5–6 rounds, many firms:

  • Compress into 2–3 rounds

  • Or run “superdays” (common on Wall Street)

When you do get a human interview, it matters more.

4) Remote interviewing is now standard

Tools like Zoom normalized:

  • Panel interviews from multiple locations

  • Faster scheduling

  • Less “chemistry building” in person

👉 This reduces the traditional “gut feel” component (but doesn’t eliminate it).

What’s NOT Dead (and won’t be)

1) Final-round human judgment

At the end of the process, humans still decide:

  • “Would I trust this person with clients?”

  • “Do I want to work with them at 10pm on a deadline?”

No AI has replaced that yet.

2) Executive / senior hiring

At your level (turnaround, advisory, executive coaching), interviews are actually becoming more conversational, not less:

  • Fewer gimmicks

  • More relationship-driven

  • Often resemble business development meetings

3) Cultural fit & trust assessment

Especially in:

  • Consulting

  • Healthcare leadership

  • Finance / advisory

People still hire people they believe.

The Real Shift (This is the key insight)

The interview isn’t dead—it’s moving downstream.

Old model:

Apply → Interview → Maybe prove skills

New model:

Prove skills → Then earn the interview → Then validate trust

What this means for you (practically)

Given your background (advisory / turnaround / executive coach), this trend actually works in your favor:

  • You can differentiate with real case examples

  • You can bypass early filters through network + credibility

  • Your “interview” is often:

    • A pitch

    • A diagnostic conversation

    • A mini-engagement

👉 In other words: you’re already operating in the post-interview world.

Bottom Line

The job interview isn’t dead.

But the “tell me about yourself” era is dying.

It’s being replaced by:

  • Evidence

  • Simulation

  • Reputation

  • And then… a shorter, more decisive human conversation

If you want, I can map out what hiring will look like in 3–5 years—and where someone like you has an edge (there are some non-obvious opportunities there).

 

Response from Claude

Is the job interview dead?

Opening Hook

Set the scene: a job candidate spends two hours with Claude the night before an interview, rehearsing answers, anticipating questions, and polishing their "authentic" story. Meanwhile, the hiring manager spent twenty minutes with an AI tool that screened 340 resumes, ranked the top ten, and generated a standard question set based on the job description. Neither person knows the other did this. They sit down across from each other and have what both of them will describe as "a great conversation."

The question the post explores: was that an interview, or a performance review of two AI systems?

Section 1: What the Interview Was Supposed to Do

Brief, accessible framing of what job interviews were originally designed to accomplish — not a history lesson, but a reminder of the underlying goals:

  • Assess capability: can this person actually do the job?

  • Assess fit: will this person work well with the team and the culture?

  • Assess character: is this person honest, self-aware, reliable under pressure?

  • Give the candidate a real look at the organization

The implicit assumption behind all of this: the answers a candidate gives in an interview reflect something true and unmediated about who they actually are. That assumption is now gone.

Section 2: How AI Has Already Changed the Candidate's Side

This is the section George runs the experiment for. He asks AI to help him prepare for a hypothetical senior consulting role — and documents exactly what happens.

What AI does well for candidates:

  • Anticipates likely questions with high accuracy

  • Helps reframe actual experience into compelling narratives

  • Coaches on structure (the STAR method, etc.) until answers are polished

  • Identifies landmines (gaps in resume, awkward transitions) and helps defuse them in advance

  • Calibrates language to the specific company's values and vocabulary

The uncomfortable observation: after two hours of AI prep, a mediocre candidate with real experience sounds nearly identical to a strong candidate with real experience. The signal that interviewers thought they were reading — natural fluency, quick thinking, self-awareness — is now a skill that can be rehearsed to near-perfection overnight.

The question to pose: is this cheating, or is it just the modern equivalent of researching the company and practicing in the mirror? Where exactly is the line?

Section 3: How AI Has Already Changed the Employer's Side

The other half of the experiment. George looks at what AI-assisted hiring tools actually do — not hypothetically, but what companies like LinkedIn, HireVue, Workday, and dozens of ATS platforms are already doing:

  • Resume screening that filters on keywords, not content

  • AI-generated "culture fit" scores based on writing samples or video interviews

  • Automated question generation based on job descriptions

  • Some systems that conduct entire first-round interviews with no human present

The key observation: these tools optimize for pattern-matching against past successful hires. They are, by definition, backward-looking. They select for people who look like the people who worked there before. That's not a neutral filter — it's a systematic bias machine dressed up as objectivity.

The specific problem: a candidate who uses AI to prepare will, almost by accident, optimize for exactly the keywords and phrases these systems are looking for. So the AI on the candidate's side and the AI on the employer's side end up in a kind of dialogue with each other — with the human in the middle as a delivery mechanism.

Section 4: Run the Experiment

This is the heart of the post. George runs the actual test — ideally with a real job description from a real company (something in consulting or healthcare management fits his world).

Step 1: Feed the job description to three AI systems and ask each one: "What questions is this employer likely to ask, and what are the ideal answers?"

Step 2: Take those AI-generated answers and feed them back to a different AI system playing the role of an ATS or first-round screener: "Based on these responses, would you advance this candidate?"

Step 3: Have AI evaluate AI's own output: "Are these answers authentic, or do they sound coached?"

The reveal: what does AI say when asked to spot AI-coached answers? Can it tell the difference? This is a direct callback to the "can AI spot a liar" thread from earlier posts — and the answer is probably more interesting and more uncomfortable than expected.

Section 5: Three Scenarios, Three Verdicts

Break down the outcome by candidate type — this is where the post gets genuinely useful rather than just provocative:

Scenario A — The Strong Candidate Someone who is actually great for the job. AI prep helps them articulate things they already knew but struggled to express. The interview is more efficient and less nerve-wracking. Outcome: AI made the process better and the right person probably gets the job.

Scenario B — The Weak Candidate Someone who is a genuine mismatch for the role. AI prep helps them pass the screening and sound credible in the first round. They get further than they should. Once they're in the job, the gap between the interview performance and the actual performance becomes obvious quickly. Outcome: AI delayed a bad hiring decision by 90 days and wasted everyone's time.

Scenario C — The Strong Candidate Who Didn't Use AI Someone excellent who prepared the old-fashioned way and gave imperfect, less structured answers. They get screened out by an ATS because their resume lacked keywords, or they advance but lose to the AI-prepped candidate in final rounds. Outcome: AI actively removed the best person from the process.

The editorial point: the net effect of AI on both sides may be a wash in terms of outcomes, but it systematically disadvantages people who either can't access or don't know about these tools — which correlates strongly with age, socioeconomic background, and access to networks that share this kind of knowledge.

Section 6: What Actually Predicts Job Performance (and Whether Interviews Ever Did)

This section adds the context that makes the whole argument land. George notes that the dirty secret of hiring is that traditional unstructured interviews are already terrible predictors of job performance — research has shown this for decades. The structured behavioral interview is better but still weak. The best predictors are work samples, cognitive ability tests, and structured reference checks — none of which most companies actually do.

So the provocative reframe: AI didn't break the interview. The interview was already broken. AI just made the performance more elaborate while leaving the underlying problem exactly where it was.

Section 7: What Would Actually Work

Brief, practical — this is where George's consulting instincts show. What should companies do instead?

  • Paid work samples: short, compensated assignments that simulate actual job tasks. Hard to fake. Directly predictive. Companies resist them because they cost money and require effort.

  • Structured reference conversations: not "is this person good?" but specific behavioral questions asked of multiple references. AI can actually help here — by generating better reference questions.

  • Transparent AI use on both sides: some companies are now explicitly telling candidates "use whatever tools you want to prepare" and evaluating the output rather than the process. That's actually more honest.

  • Shift the interview's purpose: stop trying to assess capability through conversation and use conversation only for what it's actually good at — mutual evaluation of fit and values, where both parties are trying to figure out if they actually want to work together.

Closing: The Real Question

The post ends not with an answer but with the question that sits underneath all of it:

If a candidate uses AI to present the best version of themselves, and an employer uses AI to find the best match for their criteria, and they both end up satisfied — did anything go wrong?

Maybe. Because the job of an interview was never just to match supply to demand. It was also to surface the unexpected — the candidate who doesn't fit the profile but turns out to be extraordinary, the role that gets redefined because the right person shows up for it. AI optimizes for the expected. It has no mechanism for the extraordinary.

And in George's world — crisis management, organizational transformation, the kind of work where the unexpected is the whole job — that's not a small problem.

Suggested closing line: The interview isn't dead. But it may have quietly become something no one actually intended — an audition for the AIs running the show on both sides of the table.

**********