- The "Forever Chemical" Cover-Up: How 3M Put Profit Over Public Health
It's a story straight out of a corporate thriller: A chemist at 3M discovers a company-made chemical in human blood, and her bosses, knowing the truth about the chemical's toxicity, bury the results.
Kris Hansen, a chemist at 3M, found PFOS, a fluorochemical used in Scotchgard and other products, in blood samples from across the country. This discovery sparked a decade-long battle for her, as 3M executives tried to discredit her research and silence her.
Key Highlights:
The Cover-Up: Despite knowing PFOS was toxic and had contaminated the blood supply, 3M executives dismissed Hansen's findings, questioned her methods, and even tried to sabotage her research. They knew the truth, but prioritized profit over public health.
A Legacy of Secrets: 3M had conducted animal studies decades earlier showing PFOS was toxic, but kept the results secret, even from some of its own employees.
A Global Experiment: PFAS, known as "forever chemicals," are now found in nearly everyone's blood, with potential health consequences including cancer, immune system problems, and developmental delays.
The Price of Loyalty: Jim Johnson, Hansen's former boss, knew PFOS was toxic and accumulating in humans, but chose to prioritize 3M's profits and his own career. He even hired Hansen to do the research he himself had done, only to later discredit her results.
The story underscores the dangers of corporate secrecy and the need for greater transparency in scientific research. It's a story about a woman who dared to speak truth to power and the devastating consequences of inaction in the face of a major public health crisis.
Source: ProPublica
It's the vacation rental horror story we all dread: hidden cameras lurking in your Airbnb. This week, we're diving into the dark side of the sharing economy, uncovering how to spot these sneaky devices and reclaim your privacy.
Don't expect Bond-style gadgets. Security expert Joe LaSorsa, who specializes in corporate counterespionage, says most cameras are basic, bought off Amazon or eBay. Think power strips, smoke detectors, and even alarm clocks. These seemingly ordinary items are plugged in and rely on WiFi, making them easy targets.
How to sniff out the spies:
Plug-in paranoia: Start with plugged-in objects. LaSorsa recommends checking suspicious devices like carbon monoxide detectors, Bluetooth speakers, and air fresheners.
WiFi whiff: Use a free app like AirPort Utility to scan the WiFi networks around you. Look for strange, unidentifiable networks that don't belong to the homeowner.
Flashlight frenzy: Shine your phone's flashlight over suspicious objects. Look for tiny, reflective lenses that might be hidden cameras.
QR code conundrum: If you find a QR code that doesn't look like a manufacturer's sticker, it could be used to connect the camera to the host's app.
RF and thermal detectors: A dedicated RF detector can pick up the radio-frequency signals hidden cameras emit, and a thermal detector can spot the telltale heat a powered device gives off.
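The "WiFi whiff" step above boils down to a simple filter: scan the network list, then flag any names that look camera-like. A minimal sketch in Python (the SSIDs and the keyword list are made-up examples, not from the article; a real scan would come from an app like AirPort Utility):

```python
# Hypothetical substrings that often show up in camera SSIDs (assumption, not exhaustive).
CAMERA_HINTS = ("cam", "ipc", "spy", "dvr")

def flag_camera_like(visible_networks):
    """Return network names containing a camera-like keyword, for closer inspection."""
    return [ssid for ssid in visible_networks
            if any(hint in ssid.lower() for hint in CAMERA_HINTS)]

# Example scan results (invented): the host's guest network plus two oddballs.
suspicious = flag_camera_like(["BeachHouseGuest", "IPCAM-1234", "Neighbor5G"])
print(suspicious)  # ['IPCAM-1234']
```

A keyword match is only a starting point; many cameras use bland names, so anything you don't recognize still deserves the flashlight test.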
Airbnb's new policy bans indoor cameras, but outdoor cameras are still allowed. If you find a hidden camera, unplug it or cover the lens. Report the violation to Airbnb, or local authorities if it's in a private area like a bedroom or bathroom.
Don't let the fear of hidden cameras ruin your trip. With a little vigilance and these tips, you can enjoy your vacation while keeping your privacy intact.
Source: Washington Post
You're not crazy; it's your brain. Companies aren't just being nice when they price things at $9.99. They're using a sneaky psychological trick that makes you spend more without realizing it.
The science: Our brains aren't great at processing numbers, especially when they get complicated. We like shortcuts, and that's why we gravitate towards smaller, rounder numbers. When you see a price ending in .99, your brain immediately sees "two" (in the case of $2.99) and uses it as an anchor, making the price seem smaller than it really is.
The impact: This "left-digit effect" can lead to higher sales and more spending because you're less likely to think critically about the price and more likely to make impulse buys.
Key takeaways:
Prices ending in .99 make you spend more: Studies show that consumers buy more and spend more when items are priced at $2.99 than when they're priced at $3.00.
It's about perception, not just the price: Your brain is tricked by how you see the numbers, not just the actual dollar value.
Be aware of the tricks: Next time you're shopping, take a moment to pause and think critically about the price. Don't let the .99 ending fool you!
Pro tip: The next time you see a price ending in .99, try rounding it up to the nearest dollar before you decide to buy. You might be surprised at how much you actually spend!
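The pro tip above is simple arithmetic: round every .99 price up to the next whole dollar before you total your cart. A minimal sketch, working in cents to avoid floating-point rounding (the prices are made up):

```python
import math

def actual_and_rounded(prices_cents):
    """Compare a cart's true total with the 'round up to the next dollar' anchor."""
    actual = sum(prices_cents)
    rounded = sum(math.ceil(p / 100) * 100 for p in prices_cents)
    return actual, rounded

# A cart of $2.99, $9.99, and $14.99 items.
actual, rounded = actual_and_rounded([299, 999, 1499])
print(actual, rounded)  # 2797 2800
```

The totals differ by only three cents, which is the point of the trick: the seller gives up almost nothing, but every left digit drops by one and the prices read as smaller.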
Source: Kent Hendricks
Elon Musk's AI startup xAI is flexing its muscles, raising a whopping $6 billion in a Series B funding round led by Sequoia Capital. This puts the company's valuation at $18 billion and marks one of the biggest investments in the nascent AI space. Remember, this is just a year after xAI's debut, signaling serious ambition to challenge Musk's former allies at OpenAI.
What does xAI want to do with all that cash? They're aiming to launch their first products, beef up their infrastructure, and speed up development of future AI technologies.
Remember Grok? That's xAI's first public product, integrated into X (formerly Twitter) and trained on a massive dataset. It's a rival to OpenAI's ChatGPT, and it seems like it's only the beginning.
This funding round highlights the AI arms race heating up, with Microsoft backing OpenAI and Amazon investing in Anthropic. xAI's massive injection of capital suggests a fierce battle for AI dominance is on the horizon.
Key takeaway? Keep an eye on xAI. With Musk at the helm and a hefty war chest, this company is poised to shake things up in the AI world.
Source: Bloomberg
Google's new AI-powered search feature, AI Overview, is causing a stir – and not the good kind. The tech giant is facing a backlash after the feature churned out some seriously bizarre and potentially harmful advice. From recommending glue in pizza sauce to suggesting rock ingestion for vitamins, AI Overview is failing to live up to its promise of "high-quality information."
What's the issue?
AI Overview relies on Google's powerful Gemini language model, trained on vast amounts of online data. The problem? This data includes all sorts of nonsense, from satirical articles to downright false information. The result is an AI feature spitting out advice that's either laughably wrong or downright dangerous.
The pressure is on:
This isn't the first time Google's AI has stumbled. Remember Bard, the chatbot that got basic facts about space wrong, or Gemini, the image generator criticized for producing historically inaccurate depictions of people? Google's rivals, like Microsoft and OpenAI, are making huge strides in the AI space, and Google is feeling the heat to keep up. But rushing into the AI game without proper safeguards might be doing more harm than good.
Google's response:
Google claims these errors are isolated, attributing them to uncommon queries and fabricated examples. The company says they're working to refine the system and ensure accuracy. But the public is skeptical, and this PR nightmare could hurt Google's reputation as a reliable source of information.
The takeaway:
AI is the future, but it's still in its early stages. Google's struggles with AI Overview highlight the potential dangers of relying too heavily on AI without proper safeguards. As AI technology advances, we need to be critical of its outputs and demand transparency from companies like Google.
Source: NYTimes