Attack by Stratagem
Why Fighting AI Directly Is the Wrong War
Sun Tzu's most famous insight: the supreme victory is winning without fighting. Parents who try to compete with AI directly — by drilling more facts, pushing harder on test scores, accelerating traditional academics — are fighting the wrong war. The strategic move is to stop competing where AI wins and position your child where AI cannot follow, building advantages no algorithm can touch.
I watched my wife work through this in her own career before I understood it as a principle. She did not stop trying to be good at her job — she stopped trying to be good at the parts of her job that AI was already doing better. She started asking where human judgment actually mattered, and she put her time there. That reframe is exactly what Sun Tzu means. The scenario below is a composite — but it maps closely to conversations I have had with other parents who found the same shift.
Isabel sat across from her daughter's ninth-grade chemistry tutor, a retired engineer who had taught AP courses for two decades. He was frustrated. Isabel's daughter Maya was bright, he said, but she kept asking questions that derailed the curriculum. During a unit on molecular bonding, Maya asked whether the same principles could explain why certain friendships felt stable and others did not. During stoichiometry, she wondered aloud if you could model fairness the way you balance equations.
The tutor wanted Maya to stop wandering. Isabel wanted something else. She had spent fifteen years as a strategy consultant before AI tools began writing the decks she used to bill $400 an hour to produce. She had watched colleagues with flawless Excel skills and encyclopedic industry knowledge quietly become redundant. The ones who survived were the ones who asked questions no model was trained to ask.
Isabel thanked the tutor and did not renew the contract. Instead, she enrolled Maya in a summer program at a local makerspace where teenagers built assistive devices for people with disabilities. No curriculum. No tests. Just a real human problem, a budget, and eight weeks. Maya's team built a jar opener for a woman with severe arthritis and presented it to her in week seven. The woman cried. Maya did too.
That fall, Maya's chemistry grades were fine but not exceptional. Her college essay three years later was not about chemistry. It was about the moment she realized that solving a problem someone actually had was more interesting than solving a problem that appeared on page 147 of a textbook. She got into her first-choice school. The admissions officer later told Isabel's friend — who worked in the office — that they received 6,000 applications that year with perfect test scores.[1] They admitted students who knew what they were for.
Isabel did not fight AI on its terms. She fought it by choosing a completely different war.
The instinct I had — and I think most parents have — is to respond to AI getting better by pushing harder in the same direction. AI gets better at math tutoring, so parents drill multiplication tables harder. AI writes essays in seconds, so parents push their children to read more classic literature and memorize rhetorical devices. AI starts diagnosing medical images with superhuman accuracy, so pre-med advisors tell students to take extra biology courses. It feels logical. It is the logic of an arms race, and it assumes the war is about capability in a fixed domain. It is the wrong war.
But an arms race against AI is not a war you can win by training harder. A 2024 analysis by McKinsey found that AI systems now match or exceed median human performance in 63% of economically valuable cognitive tasks, up from 37% just two years prior.[1] The trajectory is not linear. It is exponential. A child who spends their adolescence optimizing for tasks AI already does well is building skills with a predictable half-life.
Sun Tzu wrote that the worst strategy is to attack fortified walls directly. The second worst is to lay siege. The best is to disrupt the enemy's plans before battle is joined. Parents who compete with AI directly are attacking fortified walls. The child who memorizes 5,000 vocabulary words is laying siege. Both will lose. Not because they are not trying hard enough, but because they are fighting on terrain where the opponent holds every structural advantage.
The McKinsey analysis showed something else. The 37% of tasks where humans still hold advantage cluster in three domains: unstructured problem-solving with incomplete information, work requiring trust and emotional attunement, and decisions involving contested values with no clear optimization function.[2] These are not the subjects your child's school is organized around. These are not the skills their standardized tests measure. Which means most children are being trained for the 63%, not the 37%.
To subdue the enemy without fighting is the supreme excellence.
The parent who understands this stops asking how their child can get better at what AI does. They ask: what can my child do that makes AI irrelevant?
Sun Tzu's principle of attack by stratagem rests on a ruthlessly pragmatic question: where is my opponent weak and I am strong? For parents navigating AI disruption, this means identifying with precision the domains where human cognitive architecture holds insurmountable advantage. These are not vague appeals to creativity or emotional intelligence. These are specific, definable capabilities with clear development paths.
The first domain is problem formulation under uncertainty. AI is exceptional at solving well-defined problems. A Stanford study tracking GPT-4 performance across 12,000 tasks found that solution quality dropped by 68% when problem statements were ambiguous or contained conflicting constraints.[3] Humans — especially humans trained from childhood to operate in messy, real-world conditions — navigate ambiguity as a baseline cognitive mode. This is not a small advantage. Most valuable problems in the world are not well-defined when you encounter them.
The second domain is ethical reasoning in novel contexts. AI can apply existing ethical frameworks to known scenarios. It cannot generate new moral intuitions when facing situations its training data never anticipated. A 2024 MIT study asked human participants and AI models to evaluate fairness in resource allocation scenarios that mixed cultural, economic, and personal factors in ways that had no historical precedent.[4] Humans demonstrated consistent moral reasoning. AI outputs were incoherent, often contradicting themselves across slight reframings of the same dilemma. Your child will spend their career navigating dilemmas no model was trained on.
The third domain is building trust across difference. A Harvard analysis of 2,000 business negotiations found that AI-mediated communication reduced trust formation by 41% compared to human-to-human interaction, even when the AI was explicitly trained to maximize rapport.[5] Trust is not an optimization problem. It emerges from vulnerability, repair, and the willingness to be changed by another person. These are human capacities. They cannot be automated without eliminating the thing that makes them valuable.
Many parents believe they are choosing non-competing strategies when in fact they are still competing directly, just in adjacent domains. This is the trap of marginal differentiation. A parent hears that AI is good at writing, so they enroll their child in advanced creative writing courses. AI is good at data analysis, so they push their child toward statistics. AI is good at language translation, so they add a third foreign language. Each of these moves assumes the child can stay far enough ahead in a domain where AI is advancing.
But marginal advantage erodes faster than parents realize. OpenAI's internal benchmarks show that each new model generation closes roughly 40% of the remaining performance gap between AI and top-decile human experts in technical domains.[6] A child who is slightly better than AI today at creative writing will not be better in four years. A child who is slightly better at statistical modeling will not be better in six. The entire strategy rests on a widening gap that is actually collapsing.
The other trap is prestige-chasing. Parents see that elite universities still value certain traditional markers — debate team, math competitions, science olympiads — and they assume these signals will remain durable. But universities are trailing indicators, not leading ones. A 2024 survey of Fortune 500 hiring managers found that 71% now weight demonstrated problem-solving in novel contexts more heavily than GPA or test scores when evaluating entry-level candidates.[7] The parents who optimize for what worked in 2010 are preparing their children for a job market that no longer exists.
Sun Tzu warns against refighting the last war. The parent who pushes their child toward computer science because tech jobs were lucrative in 2020 is refighting the last war. The parent who emphasizes standardized test prep because that was the path to upward mobility in 1995 is refighting the last war. The strategic parent asks: what will the battlefield look like when my child enters it, and what capabilities will matter there?
Non-competing strategy is not passive. It is not telling your child to follow their passion and hope it works out. It is the deliberate construction of capabilities in domains where AI holds no structural advantage, built early enough that they become your child's cognitive default. This requires three shifts in how parents think about development.
First, prioritize real-world problem-solving over theoretical knowledge. A child who spends six months helping a local nonprofit redesign its volunteer onboarding process learns more durable skills than a child who takes three AP courses. The nonprofit project requires stakeholder interviews, constraint navigation, iterative prototyping, and implementation under conditions where failure has real consequences. The AP courses require memorization and pattern-matching. One builds capabilities AI cannot replicate. The other builds capabilities AI already exceeds.
Second, cultivate comfort with ambiguity and incomplete information. This is not a personality trait. It is a trainable skill. Parents can start young by replacing closed-ended questions with open-ended ones. Not what is the capital of France, but what do you think makes a city become a capital? Not what is 7 times 8, but how would you figure out how many tiles we need for this bathroom? The child who grows up navigating questions with no single right answer develops cognitive flexibility that transfers across every domain.
Third, build relationship skills in high-friction environments. This means putting your child in situations where they must collaborate with people who are not like them, who do not share their assumptions, who might frustrate or challenge them. A 2024 analysis by the World Economic Forum found that cross-cultural collaboration skills and conflict resolution were the two fastest-growing requirements in job postings across all industries.[8] These are not skills you develop in a classroom with 30 students from the same zip code. These are skills you develop by doing hard things with people who see the world differently.
The mistake is equating non-competing strategy with opting out of rigor. I made a version of this mistake early on — I heard the argument and immediately thought: so my kids do not need to learn hard things anymore. That is not what Sun Tzu meant. Parents hear that they should not compete with AI directly, and they interpret this as permission to let their child coast, to avoid hard subjects, to follow only their existing interests. Attack by stratagem does not mean refusing to fight. It means choosing fights you can win.
A child who avoids math because AI is good at math is not building non-competing advantage. A child who learns to use math as a tool for solving problems that matter to them — who builds a predictive model to help their school reduce food waste, who uses statistics to analyze bias in local news coverage, who applies game theory to design a fairer system for assigning classroom jobs — is building non-competing advantage. The difference is not whether the child learns math. The difference is whether math becomes a tool for human judgment or a domain for rote execution.
The strategic parent understands that rigor and non-competition are not opposites. They are complements. Your child should work hard. They should build deep skills. They should struggle with difficult material. But the direction of that effort matters more than the intensity. A child who works incredibly hard to get good at tasks AI already dominates is running up a down escalator. A child who works hard to get good at tasks AI cannot touch is building leverage that compounds across their entire life. Same effort. Completely different outcomes.
What This Looks Like in Practice
Three ways to start building non-competing advantage this week.
- Pick one skill your child is working hard on. Ask yourself: is this a domain where AI already exceeds median human performance, or a domain where human cognitive architecture holds structural advantage? If the former, do not eliminate it — but shift how you frame it. Make it a tool for solving human problems, not an end in itself.
- Identify one messy, real-world problem in your home or community. Involve your child in solving it this weekend. It does not need to be large. It needs to be real. Real constraints. Real stakeholders. Real consequences. Let them navigate the ambiguity.
- Replace one closed-ended question you ask your child this week with an open-ended one. Not what is the answer, but how would you figure that out. Not what did you learn, but what are you still confused about. Train them to operate in the space where no one has the answer yet.
The research behind this framework, weekly.
Raised Nimble is a free weekly newsletter that translates AI and future-of-work research into plain-English guidance for parents. Each issue is one concrete idea you can act on before the weekend. No jargon. No fluff.