Issue #012 · Apr 10, 2026 · Jerry Chou · Source Evaluation

When Kids Use AI for Health Questions, Here’s What Parents Need to Know

A national poll found 51% of adults have used AI to make a real health decision without talking to a doctor. Public trust in AI for health care is dropping. Usage is going up anyway. Kids are watching all of this and normalizing it before anyone has explained to them why it can go sideways. Here is the one skill they need, and one game to build it this weekend.

The Research Behind This Issue
Trust in AI for health care is falling. People are using it for major medical decisions anyway. That gap is where the problem lives for our kids.
A new national poll of 1,007 U.S. adults commissioned by Ohio State University Wexner Medical Center found public openness to AI in health care dropped from 52% to 42% since 2024. Belief that AI makes health processes more efficient fell from 64% to 55%. Yet 51% of those same adults reported using AI to make an important health decision without consulting a medical professional. Experts warn that AI can be inaccurate and that patients should use it as a supplement to, not a replacement for, professional medical advice.
The Short Answer

Public trust in AI for health advice is dropping while actual usage for real medical decisions is rising. A 2026 Ohio State University survey found 51% of adults have made an important health decision using AI without consulting a doctor. Kids are watching and normalizing this behavior before anyone has taught them why it can go sideways. The skill they need is source evaluation: the ability to ask what a source knows, and what it cannot know, about their specific situation. Health is the clearest place to start building that habit because the stakes make the lesson concrete.

I have been thinking about this one since I saw the headline. A new national poll found that 51% of adults have used AI to make an important health decision without talking to a doctor.[1] Adults. Half of us are doing this. So what does that mean for our kids, who are growing up thinking of AI as their first stop for almost any question? That is the conversation I wanted to dig into this week.

What’s Actually Happening

Public trust in AI for health care is dropping, but people are using it more than ever for real medical decisions, and that gap between skepticism and actual behavior is where the problem lives.

A 2026 survey of 1,007 U.S. adults commissioned by Ohio State University Wexner Medical Center found that openness to AI in health care dropped from 52% in 2024 to 42% today.[1] At the same time, belief that AI makes health processes more efficient fell from 64% to 55%.[1] People are becoming more skeptical over time.

But here is the part that stopped me. That same poll found 51% of adults had already used AI to make an important health decision without consulting a medical professional.[1] Trust is going down, but use is going up. Adults are saying they do not fully trust it and then doing it anyway.

Kids are watching all of this. They see adults Googling symptoms and asking ChatGPT what a medication does. The behavior gets normalized before anyone has ever explained to them why it can go sideways. That is the gap parents need to close.

Why This Changes Things For Your Child

When kids grow up with AI as their default answer machine, they need to be taught, explicitly, that health questions are a different category where the stakes change the rules.

My youngest reads at a fifth-grade level and will look up almost anything she is curious about. She does not yet have the judgment to know that “AI told me” is not the same as “a doctor checked this for me specifically.” That distinction is not obvious to a seven-year-old, and honestly, the Ohio State data suggests it is not obvious to a lot of adults either.

Research has consistently shown that AI tools can produce medically inaccurate information, sometimes confidently.[2] A 2023 study in JAMA Internal Medicine found that AI chatbots answered common patient health questions with responses that were sometimes incomplete or misleading.[2] The model does not know your kid’s medical history, allergies, weight, or age.

The deeper issue is that health misinformation can have real consequences. A kid who convinces herself that a rash is “probably fine” because an AI said so, or who decides a medication side effect is not worth mentioning to a parent, is making a decision with actual stakes. That is a different situation than getting a wrong answer on a trivia question.

The Skill That Actually Matters Here

The skill your kid needs is source evaluation: the ability to ask “who is telling me this, and what do they actually know about my situation?”

This is not a new skill. We have been teaching kids to evaluate sources since the encyclopedia era. But AI changes the presentation in a way that makes it harder. When a search engine returns results, you can see the source. When AI answers a question, it sounds like one confident voice. There is no byline, no “this article was written by a nurse practitioner.” It just sounds like the answer.

“Teaching kids to ask ‘but does this AI know MY specific situation?’ is the lever. That question alone changes how they relate to the output.”

A January 2025 Common Sense Media survey of 1,045 teens ages 13–18 found that more than a third of them believe generative AI makes it harder to tell whether online information is accurate.[3] A significant number reported being actively misled by AI-generated content. The problem is not just that AI can be wrong. It is that teens already suspect it can be wrong and still struggle to catch it in the moment, especially when the answer sounds authoritative.

My 13-year-old is sharp, self-directed, and does her own research. But I have noticed she sometimes treats AI output as settled rather than as a starting point. We have talked about this specifically around health questions, and I had to be honest with her that even I have caught myself accepting an AI answer without thinking critically about what it could be getting wrong. That admission seemed to land better than any lecture would have.

Signs Your Child Is Already Building This Skill

A child who pauses before accepting AI output, or who asks a follow-up question to verify what they heard, is already developing this muscle.

Watch for moments when your kid expresses doubt about something AI told them, or when they come to you and say “the AI said X but I wasn’t sure.” That is a good sign. It means they are treating AI as a source to check rather than a verdict to accept. Reinforce that instinct hard when you see it. Something as simple as “good thinking, I like that you questioned it” builds the habit more than any lecture does. Kids this age learn what gets noticed.

The warning sign is the opposite: a kid who closes the loop entirely with AI and never surfaces the question to an adult. Especially with health topics, you want your kid to see you as part of the process, not a step they can skip. If your kid is embarrassed about a health question, which happens a lot in middle school, they may prefer asking AI precisely because it feels private. Build enough trust that they know they can bring the weird questions to you too, without judgment.

What You Can Do This Week

Start one real conversation this week, not a lecture, just a question that opens the door.

Ask first, don’t tell. Try asking your kid if they have ever looked up a health question using AI and what they found. Do not come in with an agenda. Just listen. You will learn a lot from what they say, and asking instead of telling signals that you trust their thinking. My middle daughter opens up completely when she feels like I am genuinely curious rather than looking to catch her doing something wrong.

Walk through a real example together. Pick a common health question, type it into an AI tool with your kid watching, and look at the answer side by side. Ask out loud: “What would this AI not know about you specifically?” Let the answer come from them. That hands-on moment sticks better than any explanation you could give.

Make the “loop me in” norm explicit and low-stakes. Tell your kid directly that health questions are one category where you always want to be in the loop, not because AI is bad but because no AI knows their body, their history, or the specifics that matter. Frame it as “that is just the rule for health stuff,” the same way you have rules about other things that involve real risk.

The goal is not to make kids afraid of AI. It is to give them a mental category that says some questions need a human who knows you. Health is at the top of that list, and kids are old enough to understand that if you explain it without drama.

I do not have this perfectly figured out in my own house. My daughters are at different ages and different stages of independence, and the oldest is already making more decisions on her own than I sometimes realize. What I keep coming back to is that the parents who help their kids the most right now are probably not the ones with the strictest rules about AI. They are the ones having the most honest conversations about where AI is useful and where it genuinely falls short. Health is an easy place to start because the stakes are visible and the reasoning is not complicated. AI is not going anywhere. What we are really teaching here is judgment, and judgment takes practice.

Sources
[1] Ohio State University Wexner Medical Center. “Public trust in AI in health care is slipping, survey finds,” Medical Xpress, April 2026.
[2] Ayers, J.W. et al. “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum,” JAMA Internal Medicine, 2023.
[3] Common Sense Media. “Teens, Trust, and Technology in the Age of AI,” January 2025.
This Weekend’s Family Activity

The Health Question Showdown

Ages 8–14 · 30–40 minutes · Game
What You Need
Index cards or slips of paper
Pen or marker
An AI tool (phone or computer)
Optional: timer and scoring sheet
Step 1: Write the Questions (5 min)
Together, write five common health questions on separate index cards. Examples: “I have a headache, what should I do?” or “My stomach hurts after eating, is that serious?” Shuffle the cards and place them face down in a pile.

Step 2: Ask the AI (5 min)
Draw the first card. Have your kid read the question aloud to an AI tool, exactly as written. Read the response together out loud without commenting yet.

Step 3: What Does It Not Know? (10 min)
Take turns naming one thing about your kid specifically that the AI had no way of knowing when it answered: their age, allergies, medications, something that happened recently, their medical history. Each detail earns a point. See how many you can name.

Step 4: Flip the Perspective (10 min)
Ask your kid to imagine they are a doctor who actually knows them. What question would that doctor ask before answering? Have them say it out loud. This shifts them from passive receiver to active thinker about what real diagnosis actually requires.

Step 5: Debrief Together (5 min)
After all five cards, talk about one moment that surprised you: either an answer that seemed more complete than expected, or a gap that felt bigger than expected. No winners or losers, just the most interesting thing you each noticed.
If this changed how you think about your kid’s next health question to AI, pass it on.
The Deeper Lesson

Why This Activity Works

The whole point of The Health Question Showdown is to make the gap between AI confidence and AI knowledge visible. A national poll found that half of adults have already used AI to make a real health decision without talking to a doctor, even as trust in AI for health care is declining. The problem is not that AI is useless for health questions. It can be a genuinely helpful starting point. The problem is that it answers every question with the same confident tone whether it has what it needs or not. Kids who learn to ask “what does this source not know about me?” are building the most transferable critical thinking skill there is, one that applies to AI, to websites, to advice from friends, and eventually to their own adult decisions.

Conversation Starter

Ask This at Dinner

If an AI gives you a confident answer about a health symptom but has never met you, seen you, or known anything about your body before this moment, what would you want to double-check before believing it?

Listen for whether they can name something specific about themselves that AI would not know. That instinct, once named out loud, is the beginning of real source evaluation.

Raised Nimble translates AI and learning research into practical guidance for parents. Free, every Friday. No fluff.
