Artificial intelligence is entering pediatric operating rooms faster than the rules governing its use can adapt. From autonomous surgical robots stitching tissue without a human hand on the controls to AI-powered diagnostic tools shaping treatment plans for children, the technology is already changing how young patients are treated. But because children cannot consent for themselves and their developing bodies respond differently than adults, the ethical stakes of getting this wrong are uniquely high.
Robots That Operate on Their Own
The clearest sign that surgical AI has moved beyond theory came when researchers demonstrated autonomous laparoscopic suturing for intestinal anastomosis in both phantom models and living tissue. Published in Science Robotics, the study showed a robot performing soft-tissue reconnection with minimal human input, a task that demands real-time adaptation to unpredictable anatomy. While that work was not conducted on pediatric patients, it established a technical baseline: machines can now handle one of the most delicate categories of abdominal surgery largely on their own.
That capability becomes more complicated when applied to children. Pediatric anatomy is smaller, more variable across age groups, and less represented in the training datasets that teach these systems. Lee and colleagues describe how surgical robots can assume graded autonomy, from simple instrument guidance up to independent intraoperative decision-making. Each level introduces a different risk profile, and the gap between what a system can do in a controlled lab and what it should do inside a child’s body remains wide. Even seemingly minor miscalculations (millimeters of excess force or a delayed response to bleeding) can have outsized consequences in a smaller patient.
A Surge in Cleared Devices, but Few for Kids
By mid-2024, the FDA had cleared more than 950 AI- and machine-learning-based medical devices in the United States, with annual clearances jumping from roughly 40 per year to 221 in 2023. That acceleration reflects broad confidence in AI across radiology, cardiology, and other adult-focused specialties. Pediatric surgery, however, sits on the margins of that growth, in part because children are underrepresented in the datasets that underpin commercial systems.
Intuitive Surgical’s quarterly filing with the SEC disclosed that its da Vinci 5 system’s FDA clearance excludes pediatric indications. That exclusion is telling. The most prominent robotic surgery platform on the market was not cleared for use on children, which means surgeons who adapt it for younger patients do so in a regulatory gray zone. Levita Magnetics International Corp. received its own 510(k) clearance for a magnetic-assisted system, and other recent robotic-platform clearances have similarly focused on general minimally invasive procedures rather than child-specific applications.
The pattern reveals a structural problem. AI surgical tools are being built and tested on adult populations, then repurposed for children without dedicated clinical trials or tailored regulatory review. Most coverage of AI in surgery treats the pediatric gap as a footnote. It deserves to be the central concern, because the absence of child-specific data does not slow adoption; it just makes adoption less safe. Without explicit labeling and post-market surveillance focused on age, hospitals may not even know how often these systems are being used off-label in younger patients.
Consent When the Patient Cannot Speak
Informed consent in surgery already requires disclosure of risks, alternatives, and the surgeon’s experience with a given technique. When AI enters the equation, those disclosure obligations expand. A legal analysis in Nature argued that clinicians must now explain AI’s uncertain risks and level of autonomy, along with the quality of the evidence supporting its use. For adult patients, that conversation is direct. For children, it runs through parents or guardians who may not grasp the technical distinctions between a surgeon-controlled robot and one making independent tissue-handling decisions.
A separate review of pediatric AI implementation identified five core challenges in integrating these tools into children’s care, including safeguarding sensitive data, securing meaningful consent, and mitigating age-specific risks. These are not abstract concerns. A parent agreeing to “robotic-assisted surgery” may not understand that the AI component was trained primarily on adult anatomy, or that the system’s error rates in pediatric cases are essentially unknown. Without standardized disclosure requirements (spelling out what data were used, what the device will decide on its own, and what happens if it fails), consent risks becoming a formality rather than a genuine safeguard.
Language barriers and health literacy further complicate the picture. Families may conflate any robot in the operating room with full autonomy, or conversely assume that “assistive” means the machine cannot act independently. Ethically robust consent would require plain-language explanations, visual aids, and opportunities to opt out of AI involvement without losing access to needed surgery.
Frameworks Struggling to Keep Pace
Regulators and researchers are aware of the gap. The FDA issued draft guidance for developers of AI-enabled medical devices that sets expectations around transparency, bias documentation, and ongoing performance monitoring across the product lifecycle. That guidance connects to the agency’s Predetermined Change Control Plan framework, which allows manufacturers to update algorithms post-clearance under pre-approved conditions. But neither framework directly addresses the specific vulnerabilities of pediatric populations, such as the need for age-stratified training data or the heightened privacy protections that children’s health records demand.
On the clinical side, the ACCEPT AI framework establishes ethical prerequisites for pediatric AI, emphasizing equitable access, transparency, and child-centered benefit–risk assessments. A companion roadmap argues that by addressing these challenges early, pediatric teams can guide AI systems toward safe and effective implementation rather than reacting to harms after they occur. Yet these documents remain guidance (not binding rules), and uptake varies widely across institutions.
Meanwhile, AI is spreading quickly across pediatric subspecialties. One review notes that machine-learning tools are already being deployed in oncology, cardiology, and intensive care, and that AI is also being used in perioperative decision-making and monitoring. As these systems move from pilot projects into routine practice, the lack of harmonized standards for validation in children becomes harder to defend. Pediatric surgeons may face pressure to adopt AI tools marketed as cutting-edge, even when the underlying evidence in younger age groups is thin.
What a Child-Centered AI Standard Could Look Like
Bridging the gap between innovation and protection will require a shift in how AI for surgery is designed and evaluated. At a minimum, regulators could require pediatric sub-analyses in clinical validation studies whenever devices are likely to be used across age ranges. Post-market registries, stratified by age and procedure type, would help track real-world performance and surface safety signals earlier.
Hospitals, for their part, can insist on internal governance that treats AI as a high-risk intervention, not a neutral tool. That might mean multidisciplinary review committees for any new surgical AI, mandatory training for surgeons on device limitations, and consent templates that explicitly address data provenance and autonomy. Professional societies in pediatric surgery could also issue specialty-specific guidelines, translating broad ethical frameworks into concrete rules for when and how AI should be allowed in the operating room.
Children stand to benefit enormously from better imaging, more precise surgery, and personalized perioperative care. But they also have the least power to question the systems being used on them. As AI-driven robots and decision aids move from experimental to everyday tools, the central test will be whether pediatric medicine can demand evidence and safeguards that match the technology’s ambition. Getting that balance right will determine whether AI in the pediatric OR becomes a quiet revolution in safety, or the next preventable scandal in children’s health care.
*This article was researched with the help of AI, with human editors creating the final content.*