The Parent's Art of War
Volume II · Stratagem 29 of 36 · Critical Evaluation · 9 min read

Deck the Tree with Bogus Blossoms

Teach Your Child to Spot AI Illusions

AI confabulation is the bogus blossom of the information age — confident, fluent, and wrong. Your child will inherit a world where synthetic certainty is cheaper and faster than truth. The human skill that survives is not detecting falsehood, but evaluating evidence.

Stratagem 29
Use deception to appear stronger than you are. Decorate a tree with artificial flowers to make it appear full and vibrant. Create an illusion of abundance to intimidate or mislead.
— 36 Stratagems, Stratagem 29
Core Insight
My thirteen-year-old came home with a history assignment she had researched using an AI tool. The citations looked perfect. The statistics were specific and authoritative. I happened to check one — the study did not exist. The number had been invented. She had no idea, because nothing about the output signaled that anything was wrong. That is the bogus blossom: fluent, confident, and built on nothing. The parent who teaches verification is not teaching distrust of AI. They are teaching the only habit that lets your child use AI and still produce work that holds up when someone checks.
The Short Answer

AI confabulation is confident, fluent, and wrong. Your child will inherit a world where synthetic certainty is cheaper and faster than truth. The human skill that survives is not detecting falsehood — it is maintaining the epistemic discipline to demand evidence before accepting any authoritative-sounding claim.

Real-World Scenario

I want to be upfront: this is not a hypothetical for me. I caught my daughter submitting a research assignment with a citation that did not exist — a study that the AI had invented with full academic formatting, a plausible journal name, and a real-sounding author. She was not trying to cheat. She trusted the output because it looked exactly like something real. The Denzel scenario below is a composite, but it is very close to what happened at our kitchen table.

Composite Scenario

Denzel, a high school junior in Atlanta, used ChatGPT to research the economic impact of the 1918 Spanish Flu for his history paper. The AI provided him with three compelling paragraphs about how the pandemic reduced global GDP by 22 percent and led directly to the Great Depression. It cited a Stanford study from 2019 and included specific quotes from economists. Denzel copied the information, properly formatted the citation, and submitted his paper.

Two days later, his teacher called him in. The Stanford study did not exist. The quotes were fabricated. The GDP figure was invented. But the writing was so confident, so academically styled, that Denzel had never questioned it. He had decorated his paper with bogus blossoms — and the tree looked full until someone checked the roots.

His mother, Simone, a project manager at a logistics firm, was furious at first. Then she realized something: Denzel had done exactly what she did at work every day. He had outsourced verification to an authoritative-sounding source. She had spent fifteen years trusting vendor reports, consultant decks, and software dashboards without interrogating the data behind them. The only difference was that her sources were human, and his was AI. Both were capable of confident deception.

That weekend, Simone sat down with Denzel and taught him what she wished someone had taught her at seventeen: how to trace claims to their origin, how to distinguish primary evidence from interpretation, how to recognize when coherence is covering for emptiness. She was not teaching him to distrust AI. She was teaching him to verify everything — including her own assumptions.

Six months later, Denzel caught an error in a college admissions essay template that an AI tool had generated for him. The tool had invented a community service project and attributed it to his high school. He had almost submitted it. The skill Simone taught him that weekend had just prevented a catastrophe that could have derailed his college applications. The tree had looked full. But he had learned to check for blossoms with no roots.

The Illusion Economy

The first thing I want to say about AI confabulation is that it fooled me too — before I knew what to look for. I was fact-checking my daughter's assignment and I nearly let one fabricated citation slide because the journal name sounded real. That is how well-designed this problem is. It is not a question of paying more attention. It is structural.

Generative AI does not retrieve information. It predicts the next most plausible token based on patterns in its training data. This distinction is not semantic — it is structural. When your child asks an AI system for historical facts, medical advice, or legal precedent, the system is not consulting a database. It is generating text that resembles the patterns associated with authoritative responses. The result is often accurate. But when it is wrong, it is wrong with the same confidence as when it is right.
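To make the prediction-not-retrieval point concrete, here is a deliberately tiny sketch in Python. It is not a real language model; the prompt, the candidate continuations, and every probability are invented for this illustration. The structural point is what matters: the sampling step weighs plausibility, and nothing in it checks truth.

```python
# Toy sketch (NOT a real language model). All continuations and
# probabilities below are invented to illustrate one idea: generation
# selects for plausibility, and no step verifies accuracy.
import random

# Hypothetical learned continuations for the prompt
# "The 1918 flu reduced global GDP by"
continuations = {
    "22 percent, according to a Stanford study.": 0.5,  # fluent, specific, false
    "an amount economists still debate.": 0.3,          # fluent, hedged, honest
    "purple elephants.": 0.2,                           # implausible, rarely picked
}

def sample(dist):
    """Pick a continuation weighted by plausibility, the way a model
    samples tokens. Note that nothing here asks whether it is true."""
    choices, weights = zip(*dist.items())
    return random.choices(choices, weights=weights, k=1)[0]

print("The 1918 flu reduced global GDP by", sample(continuations))
```

Run it a few times: the confident, specific, false continuation comes up most often, because it is the most statistically plausible. That is the bogus blossom in three lines of arithmetic.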

A 2023 study by researchers at Stanford and UC Berkeley found that large language models produce factual errors in 15 to 20 percent of responses when asked about verifiable information, and the error rate climbs to 40 percent for niche or technical domains.[1] More concerning: the models do not flag uncertainty. They present fabricated citations, nonexistent studies, and invented statistics with the same grammatical authority as verified facts. Your child cannot hear the difference. Neither can you.

This is the bogus blossom. The tree appears full because the language is fluent, the tone is confident, and the structure mimics expertise. But there are no roots — no retrieval, no verification, no epistemological grounding. The illusion is so convincing that even professionals are deceived. A 2024 survey of knowledge workers by McKinsey found that 63 percent of respondents reported using AI-generated content in client deliverables without independent verification.[2] The assumption is that something this coherent must be true.

Your child is growing up in an illusion economy. Every homework question, every college essay prompt, every career research query can be answered by a system that decorates the tree faster than any human can fact-check it. The skill they need is not the ability to avoid AI — that is impossible. The skill is the ability to strip the blossoms off and examine the branches beneath.

Why Fluency Deceives

Human cognition is wired to equate fluency with truth. Psychologists call this the fluency heuristic: information that is easy to process feels more credible.[3] When your child reads an AI-generated paragraph that flows smoothly, uses academic vocabulary, and mirrors the structure of authoritative writing, their brain interprets that fluency as a signal of accuracy. The same mechanism that helps them navigate daily life — trusting coherent explanations over garbled ones — becomes a liability in an environment where coherence is artificially generated.

A 2023 study published in Nature Human Behaviour found that participants rated AI-generated misinformation as more believable than human-written misinformation when both contained the same factual errors, specifically because the AI version was more grammatically polished.[4] The bogus blossoms looked better than the real ones. This is not a problem your child can solve by paying closer attention. The deception operates below the threshold of conscious awareness.

The danger compounds when AI systems cite sources. A fabricated citation looks identical to a real one if your child does not verify it. In a 2024 analysis of student research papers by the Stanford Graduate School of Education, 41 percent of high school students who used AI research tools submitted at least one citation to a source that did not exist.[5] The students were not cheating. They were trusting a system that had decorated the tree so convincingly that they never thought to check the roots.

Your child is not uniquely vulnerable to this. You are vulnerable to it. Every adult who has trusted a polished slide deck, a confident consultant, or a well-formatted report without tracing the data back to its origin has fallen for the same mechanism. The difference is that AI scales the deception to every query, every draft, and every decision. The parent who teaches verification is not teaching paranoia. They are teaching the habit of checking for roots before trusting the blossoms.

Decorate a tree with artificial flowers to make it appear full and vibrant. Create an illusion of abundance to intimidate or mislead.

In the stratagem, the deception is intentional. In the AI era, the deception is structural — a side effect of how these systems generate language. Your child must learn to see through both.

The Verification Habit

Teaching your child to verify AI output is not about teaching them to distrust technology. It is about teaching them the same discipline that any serious professional applies to any source of information: trace the claim to its origin, evaluate the quality of the evidence, and distinguish interpretation from fact. This is not new. It is the foundation of scholarship, journalism, and scientific inquiry. What is new is that your child can no longer rely on institutional gatekeepers to do this work for them. The AI they use at home generates output that looks as authoritative as anything they will encounter in school or work.

The verification habit has three components. First, your child must learn to identify falsifiable claims within AI-generated content. Not every sentence is a factual assertion, but when the AI states a statistic, names a study, or describes a historical event, your child should flag it as something that can be checked. Second, they must learn to trace those claims to primary sources. This does not mean rejecting secondary interpretation — it means knowing where the interpretation ends and the evidence begins. Third, they must learn to evaluate source quality. Not all citations are equal, and not all sources are reliable, even when they exist.

A 2024 report by the OECD on digital literacy in secondary education found that students who were explicitly taught source evaluation skills were 3.2 times more likely to detect AI-generated misinformation than students who received general media literacy training.[6] The difference was not intelligence or effort. The difference was method. The students who knew how to verify did not need to be more skeptical — they just needed to know what to check and how to check it.

This is the skill that survives automation. When AI can generate a polished research paper in 30 seconds, the human skill that remains valuable is the ability to evaluate whether that paper is built on evidence or on blossoms with no roots. Your child will work alongside colleagues who trust the output because it looks good. They will compete with peers who submit the first draft because it sounds confident. The child who verifies will be the one whose work is trusted when the stakes are high.

The Mistake Most Parents Make

The mistake is thinking that the solution is to ban AI or to teach your child to avoid it entirely. I went through this phase briefly — my instinct after finding the fabricated citation was to say no more AI for homework. That lasted about a week before I realized it was the wrong response entirely. This is the equivalent of teaching them to avoid cars because cars can crash. The problem is not the tool. The problem is the assumption that fluent output is verified output. When you ban AI, you do not teach verification — you just delay the moment when your child encounters the illusion without your guidance.

Some parents make the opposite mistake: they trust AI because it feels like progress, and they assume that skepticism is a rejection of technology. This is equally dangerous. Your child does not need to be anti-AI. They need to be pro-evidence. The parent who raises a child capable of verification is not raising a Luddite. They are raising someone who can use AI as a starting point and still deliver work that holds up under scrutiny. That is the difference between a professional and someone who gets replaced when the stakes get higher.

The other mistake is assuming that schools will teach this. Some will. Most will not, at least not at the depth required. A 2024 survey by the National Education Association found that only 18 percent of U.S. high schools had integrated explicit AI verification training into their curricula.[7] The rest are still treating digital literacy as a module on identifying phishing emails. Your child will graduate without the habit of checking sources unless you teach it at home. The parent who waits for the school system to solve this problem is the parent whose child decorates their college essays and job applications with bogus blossoms.

Sources
[2] McKinsey & Company, The State of AI in 2024, 2024.
[3] American Psychological Association, Prior Exposure Increases Perceived Accuracy of Fake News, 2018.
[7] National Education Association, Artificial Intelligence in Education, 2024.
What This Looks Like in Practice

Two approaches to teaching your child to spot AI illusions, to try this weekend.

Ages 5–10
The Setup
Your eight-year-old is using a homework helper app that generates answers to math word problems. The answers are correct, but the explanations are sometimes nonsensical. She does not notice because she is focused on getting the right number.
Try This
This weekend, sit with her while she uses the app. When the AI generates an explanation, ask her to read it out loud and explain it back to you in her own words. If she cannot, ask her: Does this explanation make sense, or does it just sound smart? Then work through the problem together using a method she can actually explain. Repeat this every time she uses the app for two weeks.
What Develops
She learns that sounding smart and being correct are not the same thing. By age twelve, she will instinctively test whether she understands an explanation before trusting it. This is the root of verification — the habit of checking whether the blossoms are attached to anything real.
Ages 11–17
The Setup
Your fifteen-year-old used an AI research assistant to gather sources for a history paper. One of the citations looks legitimate — a 2021 article from a university journal. But when you ask him if he read the article, he admits he has not. He trusts that the AI would not cite something that does not exist.
Try This
This weekend, pick one citation from his paper and verify it together. Look up the journal, find the article, and check whether the claims he cited actually appear in the text. If the source does not exist, or if the AI misrepresented it, show him how to find the real source. Then say: From now on, every citation in your work must be something you have personally verified. If you cannot verify it, you cannot use it. Make this a household rule.
What Develops
He learns that verification is not optional, and that coherent-sounding citations can be fabricated. By eighteen, when he is writing college essays or internship applications, he will instinctively check sources before submitting work. This habit makes him irreplaceable — because most of his peers will not do this, and their work will fail under scrutiny.
Your Weekly Move
Battle Map
Volume II · Stratagem 29 of 36
Download Battle Map (PDF)
Pin this somewhere visible. Repetition is the method.
1
Stratagem
Strip the blossoms and check the roots
AI generates fluent, confident answers that can be entirely fabricated. The skill your child needs is not avoiding AI — it is verifying every factual claim before trusting it. Teach them to trace assertions to their origin, evaluate source quality, and distinguish evidence from interpretation.
2
Remember
41% of AI-using students submit fake citations
A 2024 Stanford study found that 41 percent of high school students who used AI research tools submitted at least one citation to a source that did not exist. They were not cheating. They were trusting output that looked authoritative. Your child will encounter this deception in every domain — homework, college applications, career research. The verification habit is what separates them from peers whose work collapses under scrutiny.
3
This Week
Three moves to run now
  • Pick one piece of AI-generated content your child used this week — a homework answer, a research summary, a study guide. Sit down together and verify one factual claim. Look up the source, check the data, and trace it to its origin. Show them what real verification looks like.
  • Establish a household rule: any citation your child submits in schoolwork must be something they have personally verified. If they cannot verify it, they cannot use it. This applies to AI-generated citations and human-written ones. Make verification non-negotiable.
  • Teach your child to ask three questions about every AI-generated answer: Can this claim be checked? Where does this information come from? Am I trusting this because it is true, or because it sounds confident? These questions become automatic with repetition. Practice them together until they are habitual.