Morning Overview

How AI is reshaping astronomy, from data sorting to new discoveries

Artificial intelligence is now sorting through telescope data at a pace no human team could match, and the results go beyond efficiency gains. Machine learning classifiers have confirmed hundreds of new exoplanets from archived observations, while automated alert systems are preparing to handle millions of nightly detections from next-generation sky surveys. The shift is rewriting how astronomers decide what deserves a closer look and what gets left behind.

Neural Networks That Find Hidden Planets

The clearest proof that AI can do more than sort data came when a convolutional neural network trained on Kepler space telescope light curves identified two previously overlooked worlds. In a study in The Astronomical Journal, researchers reported the discovery of Kepler-90i, an eighth planet in the Kepler-90 system, and Kepler-80g, which completed a five-planet resonant chain around its host star. The neural network learned to rank plausible planet signals above false positives in test data, effectively replicating the judgment calls that human reviewers had been making by hand for years.

What made these discoveries significant was not just the planet count but the method. The network processed thousands of candidate signals that earlier review rounds had set aside as ambiguous. Kepler-90i, for instance, had been buried in noise that human analysts had flagged as inconclusive. A machine trained on confirmed detections picked it out. That result shifted the conversation from whether AI could assist astronomers to how quickly it could be deployed across other missions and archives.
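The core idea behind this kind of vetting can be illustrated with a toy example. The sketch below is not the actual network architecture from the study; it is a minimal, hypothetical stand-in that slides a dip-shaped kernel across a light curve and converts the strongest match into a 0-to-1 "planet-likeness" score. Real systems learn their kernels from thousands of labeled examples rather than using a hand-written one.

```python
import numpy as np

def conv1d(signal, kernel):
    # Valid-mode 1-D convolution: slide the kernel across the light curve.
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def transit_score(flux, kernel):
    # Correlate the mean-subtracted flux with a dip-shaped kernel, then
    # squash the strongest response into a 0-1 score via a sigmoid.
    response = conv1d(flux - flux.mean(), kernel)
    return 1.0 / (1.0 + np.exp(-response.max()))

# Synthetic light curve: flat flux with a small box-shaped transit dip.
rng = np.random.default_rng(0)
flux = 1.0 + 0.001 * rng.standard_normal(200)
flux[90:100] -= 0.01  # injected transit

# Matched dip-shaped kernel (hypothetical; trained networks learn these).
kernel = -np.ones(10)

score_with_transit = transit_score(flux, kernel)
score_flat = transit_score(1.0 + 0.001 * rng.standard_normal(200), kernel)
print(score_with_transit > score_flat)  # the dipped curve ranks higher
```

Ranking candidates by a score like this, rather than issuing hard yes/no labels, is what lets a model surface signals such as Kepler-90i that earlier passes had set aside as ambiguous.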

ExoMiner and the Case for Explainable AI

Speed alone does not satisfy the standards of planetary science. A classifier that stamps “confirmed” on a candidate without showing its reasoning creates a trust problem. NASA addressed this directly with ExoMiner, a deep learning system that validated 301 new exoplanets from the Kepler archive. According to NASA’s Jet Propulsion Laboratory, ExoMiner is designed so that it is “not a black box,” meaning researchers can trace which features of a light curve drove each classification decision.

That transparency matters because false positives in exoplanet science carry real costs. Follow-up observations with ground-based telescopes or space instruments are expensive and time-limited. A classifier that cannot explain why it flagged a signal forces astronomers to either trust it blindly or repeat the vetting work themselves. ExoMiner’s explainability framework sidesteps that problem by producing audit trails alongside its verdicts. The 301 validated planets added to Kepler’s total came with enough supporting detail for peer review and independent checks.

The same philosophy underpins newer work such as ExoMiner++ 2.0, which extends the approach to the Transiting Exoplanet Survey Satellite (TESS). Instead of working only with pre-extracted light curves, the updated system ingests full-frame images that contain far more candidate signals per observation cycle. That effort signals that the vetting pipeline built for Kepler is being adapted for higher-volume missions rather than abandoned in favor of something entirely new.

Triage at TESS Scale

TESS generates a volume of light curves from its full-frame images that dwarfs what Kepler produced. A dedicated study framed AI as an operational triage layer for TESS candidates, measuring how well machine learning models could separate genuine transit signals from instrumental artifacts and astrophysical mimics. The paper reported performance sufficient to act as a first-pass filter, reducing the pile of candidates that human reviewers need to examine without attempting to replace detailed follow-up analysis.

This triage framing is worth taking seriously because it redefines the role of AI in the discovery pipeline. The machine is not replacing the astronomer who confirms a planet. It is deciding which signals deserve that astronomer’s limited attention. When the candidate pool runs into the tens of thousands per observing sector, that gatekeeping function determines which planets get studied quickly and which sit in a queue for years. In practice, that means AI is now shaping the scientific sample long before any telescope points for confirmation.
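The gatekeeping logic described above is simple to state in code. The sketch below is an illustrative first-pass filter, not any published pipeline: the candidate IDs, scores, and the 0.5 operating threshold are all assumptions (real thresholds are tuned against labeled validation data to balance missed planets against wasted review time).

```python
# Hypothetical triage pass: rank candidate signals by a model score and
# forward only those above a vetting threshold to human reviewers.
candidates = [
    {"id": "TIC-0001", "score": 0.97},  # strong transit-like signal
    {"id": "TIC-0002", "score": 0.12},  # likely instrumental artifact
    {"id": "TIC-0003", "score": 0.81},
    {"id": "TIC-0004", "score": 0.05},
]

THRESHOLD = 0.5  # assumed operating point, tuned on labeled data in practice

def triage(cands, threshold):
    # First-pass filter: keep high-scoring candidates, queue the rest.
    keep = sorted((c for c in cands if c["score"] >= threshold),
                  key=lambda c: c["score"], reverse=True)
    deferred = [c for c in cands if c["score"] < threshold]
    return keep, deferred

for_review, queued = triage(candidates, THRESHOLD)
print([c["id"] for c in for_review])  # ['TIC-0001', 'TIC-0003']
```

Where that threshold sits is a scientific choice, not just an engineering one: set it too high and rare, unusual signals never reach a human at all.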

Alert Brokers and the Real-Time Sky

Exoplanet hunting involves staring at the same stars for weeks. Time-domain astronomy, by contrast, demands rapid response. Supernovae, gravitational lensing events, and fast radio bursts all fade or change within hours or days. The Zwicky Transient Facility (ZTF), a wide-field survey camera at Palomar Observatory, packages and distributes real-time alerts describing every new or changing source it detects each night. A systems paper describing ZTF’s alert distribution detailed the design motivations and performance benchmarks for this pipeline, and noted that the framework could generalize to the much larger alert streams expected from the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST).

Raw alerts are not useful on their own. They need context: is this new bright spot a known variable star, a satellite glint, or a genuine supernova? That is the job of alert brokers, software systems that ingest, annotate, and filter alerts so astronomers receive only the events matching their scientific interests. An early blueprint for how machine learning could drive these brokers at LSST scale laid out a staged classification approach, moving from rapid early typing through intermediate characterization to retrospective analysis as more data accumulated. The goal is to provide increasingly refined labels as new observations arrive, without overwhelming users with noise.

Fink, Lasair, and Streaming Intelligence

Two operational brokers already put these ideas into practice. Fink, described in a peer-reviewed paper in Monthly Notices of the Royal Astronomical Society, is built on streaming technology and integrates machine learning models directly into its ingest pipeline. It currently processes ZTF alerts and is designed to scale to LSST volumes. Within seconds of receiving an alert, Fink can cross-match it with archival catalogs, apply trained classifiers, and assign probabilities that the event belongs to categories such as supernovae, microlensing candidates, or solar system objects.

Lasair, another broker deployed on ZTF data, is documented in a paper in the journal RAS Techniques and Instruments. It emphasizes flexible user-defined filters, allowing scientists to subscribe to customized event streams based on both simple properties and machine learning scores. Like Fink, it treats AI not as a monolithic decision-maker but as a toolkit embedded throughout the pipeline, from initial vetting to prioritizing follow-up observations.

These systems illustrate how AI reshapes time-domain astronomy in practice. A supernova candidate classified with high confidence by a broker can trigger robotic telescopes to obtain spectra within hours, while lower-priority events are logged for later review. Conversely, if a model flags an object as likely uninteresting, it may never reach a human’s inbox. The promise is efficiency and completeness; the risk is that systematic biases in training data could make certain classes of events consistently less visible.

Who Decides What the Sky Looks Like?

Taken together, neural networks for exoplanet vetting and machine learning brokers for transients mark a subtle but profound shift. Astronomers used to decide, largely by hand, which signals were plausible enough to merit attention. Now, in both slow and fast domains, AI sits between the telescope and the human, filtering, ranking, and labeling on a scale that manual methods cannot match.

This does not mean astronomers are being replaced. The studies behind the Kepler discoveries, the TESS triage models, and the ZTF alert pipelines all emphasize human oversight, validation, and interpretation. Yet the first line of judgment has moved into software. Decisions about network architectures, training sets, and thresholds are becoming as important to discovery as telescope apertures and exposure times.

The next decade of surveys will test whether these systems can remain both efficient and fair to the unexpected. If AI-guided pipelines can surface rare, novel phenomena as effectively as they confirm familiar classes, they will not just accelerate astronomy; they will help define what counts as discoverable in the first place.

*This article was researched with the help of AI, with human editors creating the final content.