
Everyday frustrations in science, from clogged grant systems to confusing public messages, are not just abstract complaints. In a 2016 survey of 270 scientists, researchers identified concrete problems that slow down discovery and distort results, and they also pointed to emerging fixes. Drawing on those findings, I look at 10 daily headaches inside labs that scientists are actively trying to solve so progress can move faster and more reliably.
1. Chronic Funding Shortfalls in Labs
Chronic funding shortfalls in labs were singled out by the 270 scientists as one of the biggest problems facing science, described as a structural issue rather than a temporary squeeze. Insufficient research funding came up repeatedly in the survey as a core problem in academia, affecting everything from hiring to the kinds of questions scientists dare to ask. When basic operating costs are uncertain, labs struggle to maintain equipment, retain staff, or launch long-term projects that might not pay off quickly.
To fix this everyday problem, funders and institutions are experimenting with reforms that streamline grants and stabilize support. Multi-year block grants, bridge funding between awards, and internal seed programs are all designed to reduce the constant scramble for money. For early-career researchers, more predictable funding can mean the difference between staying in science and leaving altogether, which directly shapes how quickly new treatments, technologies, and climate solutions reach the public.
2. Time Lost to Endless Grant Applications
Time lost to endless grant applications emerged in the same survey as a daily burden that quietly drains scientific productivity. Respondents reported spending excessive time on grant writing, often juggling multiple proposals at once just to keep their labs afloat. Many described weeks or months each year devoted to drafting applications, revising budgets, and responding to reviewer comments, time that could otherwise go toward experiments, mentoring students, or analyzing data.
Reforms are starting to target this inefficiency by simplifying application forms, capping the number of active proposals per investigator, and piloting shorter pre-proposals that screen ideas before full submissions. Some agencies are testing lotteries among meritorious proposals to reduce the marginal gains from polishing every sentence. If these changes take hold, the everyday work of a scientist could shift away from paperwork and back toward discovery, which is ultimately what taxpayers and patients expect from publicly funded research.
3. Reproducibility Issues Undermining Discoveries
Reproducibility issues undermining discoveries were highlighted by the 270 scientists as a major problem that goes beyond money. In the survey, researchers described a “reproducibility crisis,” in which experiments do not reliably repeat when other teams follow the same methods. Many studies, especially in fields like psychology and biomedicine, have failed to replicate, raising questions about how often published findings reflect real effects versus statistical flukes or subtle biases.
To address this, journals and funders are pushing for preregistered study designs, open data, and detailed methods sections that make it easier for others to repeat work. Large-scale replication projects are systematically retesting influential findings to see which ones hold up. These efforts are still evolving, but they aim to turn reproducibility from an afterthought into a daily expectation, so that each new paper adds a more reliable brick to the scientific foundation the public relies on.
4. Replication Failures Eroding Public Trust
Replication failures eroding public trust were another theme running through the 2016 survey, where scientists worried that high-profile non-replications could make the public skeptical of science as a whole. Respondents noted that many studies fail to replicate, and when those failures surface in news coverage, they can make it look as if scientists are constantly contradicting themselves. For people outside the lab, it is hard to tell the difference between healthy self-correction and evidence that the system is broken.
In response, some research communities are building registries of replication attempts and encouraging teams to publish negative or null results, not just flashy positive findings. Funding agencies are also starting to support dedicated replication grants, signaling that verifying past work is as valuable as discovering something new. If these norms spread, everyday readers could see fewer dramatic reversals and more steady refinement, which is crucial for maintaining trust in areas like vaccines, nutrition, and climate science.
5. Biased and Slow Peer Review Bottlenecks
Biased and slow peer review bottlenecks were cited in the survey as a core reason good work can languish for months or years before reaching readers. Researchers described the process as flawed and in need of reform, pointing to delays, inconsistent standards, and a tendency to favor certain voices and institutions. For early-career scientists, a single stalled manuscript can hold up promotions, grants, or job offers.
To fix this, journals are experimenting with open peer review, in which reviewer names and reports are published, and with portable reviews that follow a manuscript from one journal to the next. Some platforms let authors post preprints while formal review proceeds, so results are available quickly even if the final stamp of approval takes time. If these models mature, the daily experience of waiting in limbo for a decision could give way to a more transparent and predictable process that benefits both authors and readers.
6. Inefficiencies in Vetting Scientific Work
Inefficiencies in vetting scientific work extend beyond bias to the mechanics of how manuscripts are handled. The 2016 survey pointed to redundant rounds of review when a paper is rejected at one journal and resubmitted elsewhere, with each outlet starting the evaluation from scratch. Because reviewers typically work unpaid, on top of their own research, assessments can be rushed or uneven, and editors struggle to find qualified experts willing to take on the extra labor.
New platforms are trying to streamline this by sharing reviews across journals, using structured checklists to standardize what reviewers look for, and incorporating statistical and plagiarism checks as routine steps. Some funders are also recognizing peer review as a form of scholarly output, encouraging institutions to value it in hiring and promotion. If these changes stick, the daily cycle of vetting work could become faster and fairer, reducing wasted effort while still protecting quality.
7. Cutthroat Competition Fostering Bad Science
Cutthroat competition fostering bad science was another concern voiced by the 270 scientists, who linked a hypercompetitive culture to questionable research practices. The survey flagged a system where careers hinge on publication counts, impact factors, and grant totals, creating pressure to publish at all costs. Under those conditions, researchers may be tempted to slice results into multiple thin papers, overstate conclusions, or quietly ignore inconvenient data that complicates a clean story.
To counter this, institutions and funders are starting to reward collaboration, data sharing, and careful methodology rather than just headline-grabbing results. Some departments are revising promotion criteria to emphasize the quality and reproducibility of work, mentoring, and contributions to shared datasets. If competition can be balanced with cooperation, the everyday incentives inside labs may shift toward practices that produce more trustworthy science and fewer retractions.
8. PhD Overproduction Clogging Career Paths
PhD overproduction clogging career paths surfaced in the survey as a systemic issue that shapes daily life for early-career researchers. Respondents described far more new PhDs than there are jobs to absorb them, particularly on the academic tenure track. As graduates chase a relatively fixed number of openings, postdoctoral appointments stretch longer, job searches grow more grueling, and talented people face years of uncertainty.
In response, universities are expanding professional development programs that prepare PhD students for roles in industry, government, data science, and science communication. Some departments are also adjusting admissions to better match training capacity with realistic career outcomes. When graduates have clearer pathways and more options, the everyday anxiety of “making it” in academia can ease, and society benefits from highly trained scientists working across a wider range of sectors.
9. Poor Communication with the Public
Poor communication with the public was emphasized explicitly in the 2016 survey, where respondents called for scientists to do a better job of explaining their work. Many worried that complex findings are often presented without context, or that uncertainty is framed as ignorance rather than an honest reflection of what is known. This gap can fuel confusion about topics like climate change, gene editing, and nutrition, and it leaves room for misinformation to spread faster than corrections.
To close this gap, researchers are partnering with journalists, schools, and community groups to explain results in clear, accessible language. Training programs now teach scientists how to handle interviews, social media, and public talks without oversimplifying. As these skills become part of everyday scientific work, people outside the lab can make more informed decisions about health, technology, and policy, and they can better understand why science sometimes changes its mind.
10. Increasing Complexity of Modern Research
Increasing complexity of modern research was an underlying thread in the 2016 reporting, where scientists described how today’s questions often require massive datasets, specialized equipment, and interdisciplinary teams. As methods grow more intricate, it becomes harder for any one person to master every step, which can introduce hidden errors and make results tougher to interpret or replicate. Respondents saw this growing sophistication as both a strength of modern science and a daily challenge to manage.
To manage it, labs are embracing interdisciplinary approaches, bringing together statisticians, software engineers, clinicians, and basic scientists on the same projects. Shared protocols, standardized data formats, and collaborative platforms help teams coordinate and check each other’s work. When complexity is handled deliberately rather than ad hoc, the everyday process of doing research becomes more robust, and the ambitious discoveries people hope for, from new drugs to cleaner energy, become more achievable.