When Kids Use AI for Health Questions, Here’s What Parents Need to Know
A national poll found that 51% of adults have used AI to make an important health decision without talking to a doctor. Public trust in AI for health care is dropping, yet usage keeps rising, and kids are watching and normalizing the behavior before anyone has explained why it can go sideways. The skill they need is source evaluation: the habit of asking what a source knows, and what it cannot know, about their specific situation. Health is the clearest place to start building that habit, and this issue includes one game to practice it this weekend.
I have been thinking about this one since I saw the headline. A new national poll found that 51% of adults have used AI to make an important health decision without talking to a doctor.[1] Adults. Half of us are doing this. So what does that mean for our kids, who are growing up thinking of AI as their first stop for almost any question? That is the conversation I wanted to dig into this week.
Public trust in AI for health care is dropping, but people are using it more than ever for real medical decisions, and that gap between skepticism and actual behavior is where the problem lives.
A 2026 survey of 1,007 U.S. adults commissioned by Ohio State University Wexner Medical Center found that openness to AI in health care dropped from 52% in 2024 to 42% today.[1] At the same time, belief that AI makes health processes more efficient fell from 64% to 55%.[1] People are becoming more skeptical over time.
But here is the part that stopped me. That same poll found 51% of adults had already used AI to make an important health decision without consulting a medical professional.[1] Trust is going down, but use is going up. Adults are saying they do not fully trust it and then doing it anyway.
Kids are watching all of this. They see adults Googling symptoms and asking ChatGPT what a medication does. The behavior gets normalized before anyone has ever explained to them why it can go sideways. That is the gap parents need to close.
When kids grow up with AI as their default answer machine, they need to be taught, explicitly, that health questions are a different category where the stakes change the rules.
My youngest reads at a fifth-grade level and will look up almost anything she is curious about. She does not yet have the judgment to know that “AI told me” is not the same as “a doctor checked this for me specifically.” That distinction is not obvious to a seven-year-old, and honestly, the Ohio State data suggests it is not obvious to a lot of adults either.
Research has consistently shown that AI tools can produce medically inaccurate information, sometimes confidently.[2] A 2023 study in JAMA Internal Medicine found that AI chatbots answered common patient health questions with responses that were sometimes incomplete or misleading.[2] The model does not know your kid’s medical history, allergies, weight, or age.
The deeper issue is that health misinformation can have real consequences. A kid who convinces herself that a rash is “probably fine” because an AI said so, or who decides a medication side effect is not worth mentioning to a parent, is making a decision with actual stakes. That is a different situation than getting a wrong answer on a trivia question.
The skill your kid needs is source evaluation: the ability to ask “who is telling me this, and what do they actually know about my situation?”
This is not a new skill. We have been teaching kids to evaluate sources since the encyclopedia era. But AI changes the presentation in a way that makes it harder. When a search engine returns results, you can see the source. When AI answers a question, it sounds like one confident voice. There is no byline, no “this article was written by a nurse practitioner.” It just sounds like the answer.
“Teaching kids to ask ‘but does this AI know MY specific situation?’ is the lever. That question alone changes how they relate to the output.”
A January 2025 Common Sense Media survey of 1,045 teens ages 13–18 found that more than a third of them believe generative AI makes it harder to tell whether online information is accurate.[3] A significant number reported being actively misled by AI-generated content. The problem is not just that AI can be wrong. It is that teens already suspect it can be wrong and still struggle to catch it in the moment, especially when the answer sounds authoritative.
My 13-year-old is sharp, self-directed, and does her own research. But I have noticed she sometimes treats AI output as settled rather than as a starting point. We have talked about this specifically around health questions, and I had to be honest with her that even I have caught myself accepting an AI answer without thinking critically about what it could be getting wrong. That admission seemed to land better than any lecture would have.
A child who pauses before accepting AI output, or who asks a follow-up question to verify what they heard, is already developing this muscle.
Watch for moments when your kid expresses doubt about something AI told them, or when they come to you and say "the AI said X but I wasn't sure." That is a good sign. It means they are treating AI as a source to check rather than a verdict to accept. Reinforce that instinct hard when you see it. Something as simple as "good thinking, I like that you questioned it" does more to build the habit than any warning could. Kids this age learn what gets noticed.
The warning sign is the opposite: a kid who closes the loop entirely with AI and never surfaces the question to an adult. Especially with health topics, you want your kid to see you as part of the process, not a step they can skip. If your kid is embarrassed about a health question, which happens a lot in middle school, they may prefer asking AI precisely because it feels private. Build enough trust that they know they can bring the weird questions to you too, without judgment.
Start one real conversation this week, not a lecture, just a question that opens the door.
Ask first, don’t tell. Try asking your kid if they have ever looked up a health question using AI and what they found. Do not come in with an agenda. Just listen. You will learn a lot from what they say, and asking instead of telling signals that you trust their thinking. My middle daughter opens up completely when she feels like I am genuinely curious rather than looking to catch her doing something wrong.
Walk through a real example together. Pick a common health question, type it into an AI tool with your kid watching, and look at the answer side by side. Ask out loud: “What would this AI not know about you specifically?” Let the answer come from them. That hands-on moment sticks better than any explanation you could give.
Make the “loop me in” norm explicit and low-stakes. Tell your kid directly that health questions are one category where you always want to be in the loop, not because AI is bad but because no AI knows their body, their history, or the specifics that matter. Frame it as “that is just the rule for health stuff,” the same way you have rules about other things that involve real risk.
The goal is not to make kids afraid of AI. It is to give them a mental category that says some questions need a human who knows you. Health is at the top of that list, and kids are old enough to understand that if you explain it without drama.
I do not have this perfectly figured out in my own house. My daughters are at different ages and different stages of independence, and the oldest is already making more decisions on her own than I sometimes realize. What I keep coming back to is that the parents who help their kids the most right now are probably not the ones with the strictest rules about AI. They are the ones having the most honest conversations about where AI is useful and where it genuinely falls short. Health is an easy place to start because the stakes are visible and the reasoning is not complicated. AI is not going anywhere. What we are really teaching here is judgment, and judgment takes practice.
The Health Question Showdown
Why This Activity Works
The whole point of The Health Question Showdown is to make the gap between AI confidence and AI knowledge visible. A national poll found that half of adults have already used AI to make a real health decision without talking to a doctor, even as trust in AI for health care is declining. The problem is not that AI is useless for health questions. It can be a genuinely helpful starting point. The problem is that it answers every question with the same confident tone whether it has what it needs or not. Kids who learn to ask “what does this source not know about me?” are building the most transferable critical thinking skill there is, one that applies to AI, to websites, to advice from friends, and eventually to their own adult decisions.
Ask This at Dinner
Try: "If you asked an AI tonight whether a symptom was worth worrying about, what is one thing it could not know about you?" Listen for whether they can name something specific about themselves that AI would not know. That instinct, once named out loud, is the beginning of real source evaluation.