Federal regulators have spent the past two years dismantling some of the most persistent myths about how tech companies handle consumer data, and the findings should unsettle anyone who trusts default privacy settings. Enforcement actions against Amazon’s Alexa and Ring divisions revealed that “delete” did not always mean delete, that employees could access private recordings, and that children’s voice data was kept indefinitely to train algorithms. Those cases, combined with long-standing misconceptions about batteries, viruses, and device security, show how much of what consumers believe about their gadgets is simply wrong.
At the same time, consumer-education efforts from agencies and journalists are starting to chip away at these myths, replacing vague reassurance with concrete evidence of what actually happens behind the screen. Regulatory complaints, settlement orders, and technical analyses now offer a clearer picture of how data flows through devices, how algorithms are trained, and why some familiar pieces of tech advice are either outdated or flatly incorrect. Understanding those details is no longer just a niche concern for security professionals. It is a practical survival skill for anyone who owns a smartphone, smart speaker, or laptop.
Your Voice Assistant Does Not Forget
One of the most damaging myths in consumer tech is the assumption that voice assistants discard recordings after processing a request. The FTC and the Department of Justice filed a complaint in May 2023 charging Amazon with violating children’s privacy law by retaining kids’ Alexa voice recordings for years. According to the complaint, Amazon saved voice recordings and generated transcripts from interactions with child-directed Alexa features, then allegedly used that retained data to improve Alexa’s algorithm. The company’s default policy, as described by the DOJ, was indefinite retention of children’s voice data, a direct contradiction of the Children’s Online Privacy Protection Act.
Amazon allegedly undermined parents’ attempts to delete their children’s recordings, keeping derived data products even after a deletion request was honored on the surface. The Department of Justice announced a settlement in which Amazon agreed to injunctive relief and a $25 million civil penalty to resolve the allegations, and a court entered a stipulated order for permanent injunction in July 2023. For parents who believed tapping “delete” on a voice history screen erased everything, these enforcement records tell a different story: the algorithm trained on that data could persist long after the raw audio was gone, and “forgetting” may require structural changes to how models are built and updated, rather than a single button press.
Smart Home Cameras and the Myth of Airtight Security
A related myth holds that connected home cameras are secure by default, with footage staying strictly between the homeowner and the cloud. The FTC’s enforcement action against Ring, also announced in May 2023, shattered that assumption. According to the agency, Ring employees illegally surveilled customers by watching private video feeds, and the company failed to stop hackers from taking control of users’ cameras through credential stuffing and brute-force attacks. The FTC stated that approximately 55,000 U.S. customers were affected by these security failures, which also included delayed rollout of multi-factor authentication and inadequate monitoring of internal access.
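Credential stuffing succeeds when a service lets attackers replay breached username-password pairs without limit. One common countermeasure is per-account throttling of failed logins. The sketch below is purely illustrative, with all names and thresholds invented for the example; it is not Ring’s actual implementation:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5       # failed logins allowed per window (illustrative value)
WINDOW_SECONDS = 300   # sliding window length in seconds

# username -> timestamps of recent failed login attempts
failed_attempts = defaultdict(list)

def allow_login_attempt(username, now=None):
    """Return True if this account is not currently rate-limited."""
    now = time.time() if now is None else now
    # keep only failures inside the sliding window
    recent = [t for t in failed_attempts[username] if now - t < WINDOW_SECONDS]
    failed_attempts[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(username, now=None):
    """Log a failed login so repeated guesses eventually lock the account."""
    failed_attempts[username].append(time.time() if now is None else now)
```

Real deployments layer this with IP-level rate limits, breached-password checks, and multi-factor authentication, which is why the FTC flagged Ring’s delayed MFA rollout.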
The FTC’s business blog tying the Amazon and Ring cases together framed the enforcement actions as a direct challenge to several industry myths: that human employees never review recordings, that internal access controls are sufficient, and that deleting a product also deletes the data derived from it. In that post, the agency used the Alexa and Ring investigations to illustrate how data practices around recordings, transcripts, and algorithmic outputs are all part of the same legal and ethical obligation. The settlement required Ring to implement a new privacy and security program and to delete data products built from improperly accessed footage, underscoring that companies cannot treat algorithmic outputs as separate from the unlawfully collected inputs that created them.
Overnight Charging and the Battery Damage Fallacy
Away from privacy, one of the oldest and most stubborn device myths is that charging a phone overnight will destroy the battery. As HowStuffWorks reported, modern smartphones are intelligent enough to prevent overcharging, cutting off current once the battery reaches full capacity. The real threat to battery longevity is not overnight charging but chemical aging over time, a process that no charging habit can fully prevent, although extreme heat and constant heavy use can accelerate it. Phone makers now design charging systems to taper current and optimize for long-term health, which makes the old advice about unplugging at 100 percent far less relevant than it was for older battery chemistries.
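The taper behavior described above is often modeled as a constant-current/constant-voltage (CC/CV) curve: full current until the battery nears capacity, then a decline toward zero at 100 percent. The toy function below illustrates that shape only; the thresholds are arbitrary and real charging firmware is far more sophisticated:

```python
def charge_current(soc, max_current=3.0, taper_start=0.8):
    """Toy CC/CV-style charging curve.

    soc: state of charge as a fraction (0.0 to 1.0)
    Returns current in amps: flat below taper_start, linearly
    tapering to zero at full charge, and cut off entirely at 100%.
    """
    if soc >= 1.0:
        return 0.0  # charger cuts off: no overcharge while plugged in
    if soc < taper_start:
        return max_current  # constant-current phase
    # linear taper from max_current at taper_start down to 0 at soc == 1.0
    return max_current * (1.0 - soc) / (1.0 - taper_start)
```

The cutoff branch is the reason overnight charging does not "overcharge" a modern phone: once full, the device draws from the charger rather than pushing current into the cell.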
Apple’s own support documentation explains that battery chemical aging can trigger iOS performance management features designed to prevent unexpected shutdowns, which can be misinterpreted as deliberate slowdowns to push upgrades. The user-visible effects include longer app launch times, reduced frame rates, and in extreme cases a disabled camera flash when the system cannot draw enough peak power. A December 2017 analysis from Geekbench used score distributions to show that performance variation correlated with battery age and specific iOS versions, and that CPU scaling could appear as a mysterious slowdown to users unaware of the underlying mechanism. That finding helped reframe the “planned obsolescence” debate: the throttling was a battery management decision, not a conspiracy to force upgrades. Apple still faced criticism for not disclosing the practice sooner, and it ultimately added battery health tools so users could see and manage the trade-offs themselves.
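The kind of evidence Geekbench surfaced, distinct clusters of benchmark scores where a single peak would otherwise be expected, can be sketched with a crude histogram peak count. This toy function is illustrative only and is not Geekbench’s actual methodology:

```python
from collections import Counter

def count_score_clusters(scores, bin_width=100):
    """Bucket benchmark scores into bins and count local maxima.

    One peak suggests a single, uniform performance population;
    multiple peaks suggest distinct throttled and unthrottled groups.
    """
    bins = Counter(score // bin_width for score in scores)
    peaks = 0
    for k in sorted(bins):
        left = bins.get(k - 1, 0)   # count in the adjacent lower bin
        right = bins.get(k + 1, 0)  # count in the adjacent higher bin
        if bins[k] > left and bins[k] >= right:
            peaks += 1
    return peaks
```

On a device population where some units throttle and others do not, scores pile up at two different performance levels, producing two peaks instead of one.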
Macs Get Viruses and X-Rays Spare Your Files
Two other myths deserve quick burial. The belief that Mac computers cannot get viruses has persisted since Apple’s famous “I’m a Mac” advertising campaign, but it has never been true. As Business Insider documented, Apple computers are susceptible to malware, including adware bundles, browser hijackers, and more sophisticated threats that exploit software vulnerabilities. The smaller market share of macOS historically made it a less frequent target, but that calculus has shifted as Mac adoption has grown and attackers chase higher-value users. Relying on obscurity is not a security strategy, and skipping updates or basic protections because “Macs don’t get viruses” leaves a wide attack surface for phishing, malicious downloads, and compromised browser extensions.
Another persistent misconception is that airport X-ray scanners will erase or corrupt digital data on phones, laptops, or memory cards. X-ray machines used for baggage screening operate at energy levels designed to penetrate luggage and reveal shapes, not to flip bits on solid-state storage, and consumer electronics are engineered to withstand much higher levels of electromagnetic interference than they encounter in an airport line. While older magnetic media like exposed film or certain tape formats could be damaged by some scanning equipment, modern flash storage and hard drives are not meaningfully affected by routine security screening. If files vanish after a trip, the culprit is almost certainly a failing drive, accidental deletion, or malware infection, not the X-ray conveyor belt.
Turning Awareness Into Action
Knowing that these myths are false is only useful if it changes how people use and configure their devices. For smart speakers and cameras, that means reviewing default settings, limiting data retention where options exist, and periodically deleting stored recordings while recognizing that trained models may still embody past interactions. It also means enabling multi-factor authentication, using strong and unique passwords, and being skeptical of claims that “no one at the company can see your data” without independent verification. When a device offers privacy dashboards or access logs, checking them regularly can reveal unexpected data flows or logins that warrant further scrutiny or a support request.
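The "strong and unique passwords" advice above is easy to put into practice with standard tooling. A minimal sketch using Python’s `secrets` module, which draws from a cryptographically secure randomness source (the function name and default length are arbitrary choices for this example):

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure random source (not random.choice,
    which is predictable and unsuitable for secrets)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

In practice a password manager does this for you and also solves the "unique per site" half of the advice, which matters more than raw length for resisting credential stuffing.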
When something seems off (an account accessed from an unknown location, a device behaving strangely, or a company ignoring deletion or opt-out requests), regulators have created channels for consumers to respond. The FTC encourages people to report scams, deceptive practices, and suspicious tech behavior at its centralized fraud reporting portal, which routes complaints to law enforcement partners. If misuse of personal information escalates into unauthorized accounts, loans, or tax filings, victims can create recovery plans and file reports through the government’s dedicated identity theft site. And for persistent robocalls and unwanted telemarketing that often accompany data misuse, consumers can register numbers and submit complaints via the national Do Not Call registry.
Regulatory actions against Amazon, Ring, and other tech firms show that myths about privacy and security are not harmless misunderstandings. They shape design decisions, enforcement priorities, and the everyday risks people face in their homes. Dispelling those myths requires a mix of technical literacy, skepticism toward marketing claims, and willingness to use the legal tools available when companies overstep. Voice assistants will not forget on their own, cameras are not sealed boxes, batteries age no matter how carefully they are charged, and Macs are not immune to malware. Accepting those realities is the first step toward demanding devices and policies that match what people actually expect from the technology woven into their lives.
This article was researched with the help of AI, with human editors creating the final content.