Morning Overview

$30B tech push in schools fails to boost students’ cognitive skills

U.S. schools spent $30 billion on educational technology in 2024, roughly ten times what they had been spending in prior years, yet long-running research and international assessments show no meaningful improvement in students’ core cognitive skills. The disconnect between spending and outcomes raises hard questions about whether device-heavy classrooms are actually helping children learn, or simply giving them more screen time during school hours.

A Tenfold Spending Surge With Flat Results

The scale of the investment is striking. According to a Bloomberg analysis, U.S. schools directed $30 billion toward educational technology in 2024, a roughly tenfold increase from earlier spending levels. Much of that money flowed into device purchases, broadband connectivity, and software licenses as districts tried to sustain the digital infrastructure built during pandemic-era remote learning. Across OECD countries, governments made similar bets: post-pandemic recovery funds financed large-scale device purchases and connectivity upgrades, as documented in the OECD’s digital education outlook, which describes national strategies built around universal access to devices and high-speed connections.

Yet the test scores that should reflect all this hardware and bandwidth have barely budged. PISA 2022 data shows that device-to-student ratios approached 1:1 across many OECD nations between 2018 and 2022, with schools also reporting higher levels of teacher digital training and access to learning platforms. But closing the device gap did not close the skills gap. The OECD's own analysis found that students who spent more than one hour a day on leisure activities using digital devices at school scored lower than peers with less screen time, according to a separate report on students and digital devices. More access, in other words, did not translate into better learning, and in some cases may have introduced new distractions that undercut the promise of technology-rich classrooms.

What 15 Years of Laptop Research Actually Shows

One of the strongest pieces of evidence comes from peer-reviewed quasi-experimental research on 1:1 laptop programs, the very model that billions of dollars have been spent replicating. A study published in Labour Economics used a difference-in-differences design with administrative data to evaluate large-scale 1:1 computer rollouts and found no average effect on math or language test scores and no effect on high-school admission rates. More troubling, the authors reported evidence of increased inequality: students from low-socioeconomic-status households fared worse than their higher-income peers, suggesting that the programs may have actively widened achievement gaps rather than narrowing them. When laptops arrive without strong support at home or in the classroom, they can become vehicles for off-task behavior instead of tools for learning.

A broader research synthesis over roughly 15 years of 1:1 laptop studies offers a slightly more nuanced picture, but it does not overturn the basic conclusion. The meta-analysis, published in the Review of Educational Research, did find average positive effects in some academic domains, particularly when technology was closely aligned with curriculum and when teachers had extensive preparation. But it also documented substantial variation across programs and highlighted that many studies lacked rigorous designs or long-term follow-up. In effect, the strongest gains appeared in tightly managed pilots with motivated staff and additional resources, conditions that are difficult to reproduce at national scale. The gap between what works in small, well-supported experiments and what happens in typical classrooms is where much of the recent $30 billion appears to have been lost.

Maine’s Long Experiment Offers a Cautionary Lesson

The Maine Learning Technology Initiative stands as one of the longest-running statewide 1:1 device programs in the United States. Since its launch, MLTI has deployed laptops or tablets to tens of thousands of students and teachers, backed by state investments in professional development, technical support, and regular device refresh cycles. The program is often described as a model of thoughtful implementation: devices are standardized, teachers receive training, and schools have a clear framework for integrating technology into instruction. If any initiative were going to demonstrate clear, system-wide learning gains from universal device access, it would be this one.

Yet even with this sustained commitment, the broader pattern holds. Maine’s publicly accessible education dashboard tracks indicators such as enrollment, graduation, and assessment results over time, but the state’s cognitive outcome data does not show the kind of unambiguous, long-term improvement that would justify the program solely on academic grounds. Scores have fluctuated with broader national trends rather than breaking away from them. MLTI’s experience illustrates a problem that extends well beyond one state. The infrastructure for digital learning can be built and maintained, but without a direct and proven link between device use and measurable cognitive development, the rationale for continued spending at this scale rests more on hope and habit than on evidence.

International Assessments Tell the Same Story

The pattern is not unique to the United States. The NCES summary of PISA 2012 computer-based reading results showed U.S. students’ proficiency distributions in digital reading and problem-solving at a time when school computer use was already on the rise. Those results placed American students in the middle of the international pack, with no sign that early adoption of classroom technology was translating into superior performance. The underlying PISA 2012 computer-based assessment database, made available as an OECD dataset, includes detailed microdata on digital reading, computer-based math, and problem-solving, allowing researchers to examine how access to computers and reported use relate to achievement.

Across multiple studies using that microdata, the consistent finding is that more frequent computer use in school is not associated with higher scores once student background characteristics are taken into account. In some cases, moderate use for clearly defined academic tasks appears neutral or modestly positive, while heavy or unsupervised use correlates with lower performance. The OECD’s more recent analyses of PISA 2022 extend this pattern: as device-to-student ratios moved toward 1:1 in many systems, average scores in reading, math, and science remained flat or declined. Countries that poured money into hardware did not reliably outperform those that invested more heavily in teacher quality, curriculum, or early-childhood education. The promise that simply wiring classrooms would boost national competitiveness has not been borne out in the comparative data.

Rethinking What Ed Tech Is For

Taken together, the evidence from large spending increases, long-running laptop initiatives, and international assessments points to a sobering conclusion: educational technology, as currently deployed, is not a reliable engine for improving core cognitive skills. That does not mean devices are useless. They can streamline administrative tasks, expand access to information, and support specialized interventions for particular groups of students. But the assumption that putting a laptop on every desk will, by itself, raise test scores or close achievement gaps is not supported by the best available data. Instead, technology often amplifies existing strengths and weaknesses in school systems, helping effective teachers do more while giving disengaged students new ways to tune out.

If policymakers want to avoid repeating the same expensive mistakes, the focus needs to shift from counting devices to clarifying purposes. Rather than treating 1:1 access as an end in itself, districts could start by identifying specific learning problems, such as weak reading comprehension in middle school or limited practice with algebraic reasoning, and then asking whether particular digital tools have strong evidence for addressing those gaps. Funding could be tied to programs that demonstrate measurable gains in randomized or quasi-experimental studies, with sunset clauses for initiatives that fail to deliver. In parallel, investments in teacher training, curriculum design, and classroom management would recognize that the most important technology in any classroom is still the human adult at the front of it. Until spending priorities reflect that basic reality, the next wave of devices is likely to produce the same disappointing results as the last one.

This article was researched with the help of AI, with human editors creating the final content.