AI Cheating Prevention

AI Cheating Tools Like Cluely in Exams: How to Block Them

Cluely, Parakeet AI, and FinalRound AI are reshaping exam and interview fraud. Learn how these tools work — and how AI proctoring blocks them in real time.

March 26, 2026


TL;DR

  1. Cluely, Parakeet AI, and FinalRound AI are purpose-built AI tools that feed candidates real-time answers during live exams and interviews — invisibly.
  2. These tools render invisible screen overlays that standard screen sharing cannot capture, so they leave no trace in shared views on platforms like Zoom and Teams.
  3. Cheating adoption more than doubled in the second half of 2025 — from 15% to 35% of candidates showing signs of AI-assisted fraud.
  4. ChatGPT and similar LLMs power most of these tools via audio transcription pipelines that convert spoken questions into generated answers within seconds.
  5. Blocking them requires multi-signal detection — application monitoring, audio analysis, and behavioral signals — not just browser lockdown.
  6. Talview blocks these tools at the point of activation, not just after the fact.

The Rules of Cheating Have Changed

There is a moment every exam administrator now dreads. A candidate performs flawlessly. Their answers are fluent, structured, technically precise. Then they join a different interview at a different company and cannot explain a single thing they said.

That gap — between what someone says in an assessment and what they actually know — has a name now. It is called Cluely. Or Parakeet AI. Or FinalRound AI. Or ChatGPT running quietly in the background while a candidate reads polished answers off a transparent overlay on their screen.

Cheating has always existed. What is new in 2026 is the professionalism of the tools, the size of the market, and the fact that a $20 monthly subscription is all that stands between a candidate and a $150,000 job offer they are not qualified for.

This is not a fringe problem. According to data from interview intelligence platform Fabric, AI-assisted cheating went from 15% of candidates in June 2025 to 35% by December 2025. It more than doubled in six months.

How These Tools Actually Work

Understanding the mechanics matters — because blocking them requires knowing exactly what you are up against.

Cluely: The Invisible Overlay

Cluely was founded in April 2025 by two Columbia University students who were suspended for building its predecessor, Interview Coder. The company's original tagline was "cheat on everything." It raised $5.3 million in seed funding and a further $15 million Series A from Andreessen Horowitz, bringing total funding to over $20 million.

The technology works by rendering a transparent overlay on the candidate's screen using low-level graphics hooks — the same framework that powers game graphics on Windows and macOS. The overlay sits on top of the exam environment and is invisible to screen-sharing software. The interviewer or proctor sees a clean screen. The candidate sees a floating panel of AI-generated answers.

Cluely captures everything visible on screen and everything audible through the microphone. When a question appears or is spoken, the tool sends it to a large language model and returns a suggested answer within seconds — fast enough to read aloud naturally.

Parakeet AI: The Real-Time Coaching Pipeline

Parakeet AI takes a slightly different approach. It functions as a live interview assistant that transcribes the interviewer's speech in real time, processes the question through an LLM, and returns a structured response candidates can read and paraphrase. It is marketed as an "interview copilot" — but in practice, it is a pipeline for routing questions through ChatGPT and displaying answers.

The key risk Parakeet AI poses is audio intelligence. It does not just capture what is on screen — it listens to the session and processes spoken questions automatically. This makes it a threat in oral exams, live coding interviews, and any assessment where verbal interaction occurs.
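Conceptually, the pipeline described above reduces to a simple capture-transcribe-generate loop. The sketch below is illustrative only: the transcriber and LLM are stubbed out, and every function name is an assumption for the example, not Parakeet AI's actual internals.

```python
# Illustrative sketch of a real-time "interview copilot" pipeline.
# The transcriber and LLM below are stubs; a real tool would plug in
# a streaming speech-to-text service and a hosted model here.

def transcribe(audio_chunk: bytes) -> str:
    """Stub: a real pipeline streams audio to a speech-to-text API."""
    return audio_chunk.decode("utf-8", errors="ignore")

def generate_answer(question: str) -> str:
    """Stub: a real pipeline sends the transcribed question to an LLM."""
    return f"Suggested talking points for: {question!r}"

def copilot_step(audio_chunk: bytes) -> str:
    """One iteration of the loop: audio in, displayable answer out."""
    question = transcribe(audio_chunk)
    return generate_answer(question)

if __name__ == "__main__":
    print(copilot_step(b"Tell me about a time you led a project."))
```

The point of the sketch is how little machinery is involved: the entire "copilot" is a loop around two API calls, which is why these tools are cheap to build and sell at $20 a month.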

FinalRound AI: Behavioral and Technical Whispers

FinalRound AI offers real-time suggestions during both behavioral and technical interviews, structured around the candidate's uploaded resume. During a live Zoom or Teams session, it listens, analyzes the question type, and generates talking points, code hints, or STAR-framework answers. It operates in a stealth mode designed specifically to avoid detection by standard interview platforms.

ChatGPT on a Second Device: Still the Most Common Method

Despite the sophistication of dedicated tools, the most widespread method remains simpler: a phone propped just outside the webcam frame, running ChatGPT. The candidate reads a question, types or speaks it into the phone, and reads the answer back. Survey data suggests 44% of candidates who cheated used some form of external text assistance fed to them during the session.

Why Browser Lockdown Is Not Enough

Most institutions respond to AI cheating by tightening browser lockdown software. Block tab switching. Disable copy-paste. Restrict keyboard shortcuts.

Cluely was engineered specifically to defeat all of these measures. It evades keystroke monitoring by using hidden global hotkeys that never register inside the browser. Its overlay keeps the exam tab in apparent focus, so the system believes the candidate never left it. Standard browser lockdown cannot see Cluely at all, because Cluely does not operate through the browser; it operates at the operating-system level.

A secondary device running ChatGPT defeats browser lockdown entirely because it is not on the exam device at all.

The fundamental problem is that browser lockdown controls one surface: the browser. These tools live somewhere else entirely.

What Effective Detection Actually Requires

Blocking this new generation of tools requires detection across three layers simultaneously — not just browser lockdown.

Application-level monitoring is the most direct countermeasure for overlay tools. Rather than watching the browser, it watches every process active on the operating system. If Cluely launches or Parakeet AI initialises, the system detects it at the moment of activation. Talview blocks these tools the instant they start — halting the session rather than flagging it for review after the exam ends.
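In principle, this layer is a denylist check against the live process table. The sketch below is a minimal illustration under stated assumptions, not Talview's implementation: the process names are hypothetical, and the process list is passed in explicitly (a production agent would enumerate it from the OS, for example via psutil or /proc).

```python
# Minimal sketch of application-level monitoring: compare running
# process names against a denylist of known assistance tools.
# The denylist entries are hypothetical, and the process snapshot is
# injected as an argument so the sketch stays OS-independent.

KNOWN_ASSISTANCE_TOOLS = {"cluely", "parakeet", "finalround"}  # assumed names

def flagged_processes(running: list[str]) -> list[str]:
    """Return every running process whose base name matches the denylist."""
    return [p for p in running
            if p.lower().split(".")[0] in KNOWN_ASSISTANCE_TOOLS]

def should_halt_session(running: list[str]) -> bool:
    """Halt at the moment of activation if any flagged tool is live."""
    return bool(flagged_processes(running))

if __name__ == "__main__":
    snapshot = ["chrome.exe", "zoom.exe", "Cluely.exe"]
    print(should_halt_session(snapshot))
```

The design choice the article describes is in the second function: the check runs continuously against each new process, so a flagged launch halts the session immediately rather than producing a flag for post-exam review.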

Audio intelligence is essential for tools like Parakeet AI and for detecting human coaching via earpieces. Talview's audio analysis listens for background voices, whispered instructions, and the acoustic patterns of a candidate reading a response rather than formulating one. The difference is measurable and consistent.

Behavioral analysis adds the third layer. Candidates using AI assistance show measurable anomalies: response latency that does not match question difficulty, unnaturally structured answers to ambiguous prompts, and pacing that does not reflect real cognitive load. No single signal is definitive. The combination of all three layers creates a detection surface that is far harder to evade than any single-layer approach.
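One common way to combine layers like these is a weighted risk score, so that no single signal decides the outcome on its own. The sketch below is an assumption-laden illustration of that idea: the signal names, weights, and threshold are invented for the example and are not taken from any vendor's model.

```python
# Illustrative multi-signal risk scoring: no single signal is
# definitive; the weighted combination crosses the threshold.
# Weights and threshold are invented for this sketch.

SIGNAL_WEIGHTS = {
    "flagged_application": 0.5,   # overlay/assistant process detected
    "audio_anomaly": 0.3,         # background voice, read-aloud cadence
    "behavioral_anomaly": 0.2,    # latency/structure/pacing mismatch
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine per-layer signal strengths (each in [0, 1]) into one score."""
    return sum(SIGNAL_WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in signals.items())

def needs_review(signals: dict[str, float], threshold: float = 0.4) -> bool:
    """A session is escalated only on combined evidence across layers."""
    return risk_score(signals) >= threshold

if __name__ == "__main__":
    # A strong behavioral anomaly alone stays below the threshold...
    print(needs_review({"behavioral_anomaly": 1.0}))
    # ...but combined with an audio anomaly it crosses it.
    print(needs_review({"behavioral_anomaly": 1.0, "audio_anomaly": 0.8}))
```

The structure mirrors the argument in the text: a behavioral anomaly by itself is ambiguous, but corroboration from a second layer pushes the session over the line.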

The Stakes Go Beyond the Exam Room

Gartner projects that by 2028, one in four candidate profiles will contain fraudulent elements generated by AI. Companies like Google and McKinsey responded to the 2025 surge in AI interview fraud by reintroducing mandatory in-person rounds. The direct cost of a single fraudulent hire exceeds $50,000 before downstream effects — security risks, performance gaps, cultural damage — are factored in.

For professional certification, the consequences are more serious still. A physician, an engineer, or a financial analyst who passed their licensing exam with real-time AI assistance holds credentials they did not earn. The qualification that is supposed to signal competence signals nothing.

Exam integrity is not about surveillance. It is about ensuring that the result of an assessment actually means something — for the institution that issues it, the employer who relies on it, and the candidates who earned it honestly.
