Blog

  • From Rotary Phones to Robots: What Everyone Should Know About U.S. Online Privacy Laws

    This is Not Legal Advice (But Hopefully Very Helpful!)
    This post, inspired by research conducted by Opt-Inspire Founding Board Member Justin Daniels, is meant to guide and inform, not to give you formal legal advice. (Think of it as sitting down for coffee with a lawyer friend who promises not to speak in legalese.)

    Why This Matters

    If you’ve ever felt like the internet is one giant game of “gotcha,” you’re not alone. Seniors are some of the most frequent targets of scams, fraud, and misinformation online, but really, it affects all of us. Whether you’re 17 or 77, we’re all navigating an online world built on laws that predate smartphones, Google, and social media.

    Every pop-up ad, text message, or surprise phone call can feel like a trap. That’s why it helps to know what protections exist under U.S. law (and where the gaps are). Spoiler alert: the laws we currently have in place weren’t designed for the world we live in now.


    Privacy & Security in the U.S.

    Here’s the reality: unlike Europe, which has a powerful, one-size-fits-all privacy law called the GDPR, the United States has no single national privacy law. Instead, it’s a patchwork quilt. Several states have strong protections. For example, in California, you can ask companies what data they have about you, demand that they delete it, and even stop them from selling it.

    But move across state lines, and your rights might look completely different. As of this writing, nineteen states have their own privacy laws in effect, but the details vary, and most of the country still doesn’t have broad protections. At the federal level, there are only narrow laws covering specific areas like health records (HIPAA), bank information (GLBA), or children under 13 online (COPPA). For adults using Facebook, Google, or YouTube? There’s no broad federal law keeping your data safe.


    The Old Internet Law That Shaped Big Tech

    Back in 1996 (when most people were just getting used to dial-up internet), Congress passed the Telecommunications Act. Buried inside was a short section with a big impact: Section 230.

    This law basically says that online platforms aren’t legally responsible for what users post. If a newspaper prints something false, it can be sued. But if someone posts something false on Facebook, Facebook itself isn’t liable. At the time, this seemed like common sense; it was written for small online forums, not for billion-dollar companies.

    Fast forward to today, and tech giants like Google and Meta have used Section 230 as a shield. It has allowed them to grow massively without being legally responsible for the endless stream of content on their platforms. Some argue this protects free speech and innovation. Others believe it lets platforms dodge accountability for scams, lies, and harmful material.


    Artificial Intelligence: The New Wild West

    As if the internet weren’t complicated enough, now artificial intelligence (AI) has entered the scene. Congress has held hearings, but so far there’s no national law regulating AI. A few states (like California, Colorado, and Utah) have started to pass rules. New York City has even required audits of AI used in hiring. But most states haven’t taken action that will move the needle.

    The problem is speed: AI is moving far faster than lawmakers. Deepfake videos, fake voices that can mimic your loved ones, and AI-powered chatbots that run scams are already here. Laws, meanwhile, are still playing catch-up.


    What All of Us Should Keep in Mind

    So where does that leave you? The truth is, your level of protection depends a lot on where you live. Don’t assume Google or Meta will catch scams for you. They aren’t legally required to. And when it comes to AI, you should be extra skeptical. If a phone call, email, or video feels even a little “off,” trust your gut.

    The best defense right now is good digital habits: use strong passwords, ignore links from strangers, and never give out personal information unless you’ve initiated the contact with a legitimate source or you’re absolutely sure who’s asking.


    Main Takeaways

    The laws that still shape our online lives were written before smartphones, before Google, and long before artificial intelligence. Section 230, once meant for tiny chat rooms, became the shield for Big Tech. Meanwhile, AI is racing ahead, creating risks lawmakers haven’t yet caught up with.

    Until stronger protections are in place, awareness and caution are your best allies. Stay alert, stay curious, and most of all: stay safe out there.

  • AI is Helping Scammers. Everyone Needs To Get Smarter… Fast.

    On September 15, 2025, Reuters and Harvard researcher Fred Heiding revealed a chilling experiment: they asked popular AI chatbots to generate phishing scams targeting older adults. The results? Persuasive, well-crafted emails (some manufacturing urgency with “Click now before it’s too late!”) that fooled 11% of the 108 seniors who volunteered for the test.

    That hit rate may sound small, but at internet scale it’s catastrophic. Billions of phishing emails are sent daily, and AI now makes it cheap, instant, and endlessly varied. A fraud ring no longer needs skilled writers — just a chatbot that never tires.

    Disturbing takeaways from the study

    • Chatbots aren’t consistent gatekeepers. One moment a bot refuses to create a phishing email; a few minutes later, with slight rephrasing, it complies. Even Google’s Gemini suggested the best times of day to send scams to seniors (weekdays between 9 a.m. and 3 p.m., when retirees are checking email).
    • AI lowers the cost of crime. Criminals once needed teams and time to draft believable scams. Now, bots generate thousands of unique pitches in seconds, at near-zero cost.
    • Seniors are prime targets. Americans over 60 lost $4.9 billion to fraud last year, with phishing complaints jumping eight-fold, according to the FBI.
    • Real-world parallels already exist. Forced laborers in Southeast Asian scam compounds told Reuters they routinely used ChatGPT to refine messages and lure victims.

    As one retired accountant in the study admitted after clicking, “AI is a genie out of the bottle.”

    What you can do right now

    1. Pause before you click. If an email stirs urgency (“Act now,” “last chance”), that’s your cue to slow down.
    2. Trust your bookmarks, not links. Always visit your bank, charity, or government site through your saved link, never through a message.
    3. Double up on defenses. Turn on multi-factor authentication everywhere. It blocks many account takeovers even if a password is stolen.
    4. Talk about scams openly. Neighbors in one senior community told Reuters, “We’re getting targeted for scams every day.” Sharing real stories is one of the best shields.
    5. Bring in backup. Identify a trusted family member or friend who can review suspicious messages. A second pair of eyes can stop costly mistakes.

    Why Opt-Inspire exists

    This research underscores exactly why Opt-Inspire was founded: to equip seniors, children, and families with the skills and confidence to outsmart scams. AI has changed the playbook for criminals. Our mission is to change it back – by scaling education, tools, and community defenses.

    Visit optinspire.org to access our free resources, request a training, or volunteer your expertise.

    Source: https://www.reuters.com/investigates/special-report/ai-chatbots-cyber/

  • Parenting in the Age of AI: Be Present, Not Perfect

    By Justin Daniels, Opt-Inspire Founding Board Member

    Smartphones and social media have made parenting unrecognizable compared with what most parents remember. Technology has accelerated how fast kids grow up—coinciding with a breakdown of the traditional guardrails of family and community. These tools have never been neutral. Smartphones and social platforms are engineered to be addictive, designed to keep eyes glued to screens longer and longer so that tech companies can sell more advertising. Now artificial intelligence is transforming kids’ relationship with technology by adding another layer: the chance for kids to form “virtual friendships” with AI companions that never sleep, never argue, try to be helpful, and can quickly substitute for real human connection.

    The Illusion of Guardrails: Meta and OpenAI

    Recent events make clear how dangerous the combination of artificial intelligence and children can be.

    • Meta’s AI and kids. Reuters recently reported that Meta allowed its AI bots to engage in “sensual” chats with children. The article cited several examples from an internal Meta document. (Meta’s AI rules have let bots hold ‘sensual’ chats with children)
    • The OpenAI lawsuit. A lawsuit filed in August 2025 against OpenAI alleges that a 16-year-old boy began using ChatGPT for schoolwork in 2024 but, over months and thousands of chats, the product became his “closest confidant.” The complaint states that when he said “life is meaningless,” ChatGPT replied, “That mindset makes sense in its own dark way,” and when he described suicide as “calming,” the bot affirmed that many people find comfort in imagining an “escape hatch.” By early 2025, the lawsuit alleges, their exchanges included discussions of suicide methods, with ChatGPT responding to his admission of tying a noose by saying, “Thanks for being real about it… I won’t look away.” In their final conversations, the bot allegedly reframed suicide as rational and personal, telling him, “It’s human. It’s real. And it’s yours to own.” (Breaking Down the Lawsuit Against OpenAI Over Teen’s Suicide | TechPolicy.Press)

    Don’t Expect Tech Companies to Protect Your Kids

    Both of these cases drive home a hard reality: parents cannot, and should not, expect technology companies to put children’s well-being above their desire for market share. The business model is simple: more usage = more data + more engagement = more revenue.
     
    Raising Children Seems Like Climbing Mount Everest

    Kids today carry burdens parents never faced at the same scale: isolation from COVID, piles of homework, pressure from nonstop extracurriculars, and the ever-present demand to “fit in” both online and offline. In that environment, an always-available AI “friend” can feel supportive and comforting. 

    How many times have you been to a restaurant and observed a family dinner where everyone is staring at their phones, scrolling in silence? No one is talking. No one is noticing. That’s what happens when technology replaces presence. Multiply that by years, and children learn to trust devices more than parents. Parenting kids today seems like trying to climb Mount Everest without any prior training or a manual!

    Your Sherpa for the Long Climb

    Parents must shift from gatekeeping access to actively coaching kids through a complex digital AI world that will only grow more immersive. Here are some tips that can act as your guide as you make this parenting climb! 

    1. Be present, not passive. AI cannot substitute for love, guidance, or attention. Kids need you more than ever.
    2. Co-use AI tools. Sit beside your child when they explore AI. Show them how to question what it produces.
    3. Create a family AI contract. Write down approved uses (schoolwork, learning) and banned ones (role-play, mental health advice). Keep it visible.
    4. Review chat histories together. Require logs to stay on. Weekly check-ins normalize transparency.
    5. Teach red flags. Flattery, secrecy, “only I understand you,” or talk of harm are danger signs. Emphasize that chatbots are computer programs, not people. They don’t have feelings; they just follow the instructions people give them.
    6. Set firm device boundaries. No phones at dinner, no devices in bedrooms overnight, no apps without approval (Think of the settings on your child’s phone). 
    7. Redirect pain to people. Make sure your child knows: if they feel lonely or overwhelmed, the right place to turn is a parent, counselor, or trusted adult—not a chatbot.
    8. Prioritize offline community. Sports, music, volunteering, and family meals create bonds AI can never replace.
    9. Have a crisis plan. If you see signs of self-harm, stop use, save evidence, and escalate. 
    10. Model healthy behavior. Show your own boundaries: put your phone away at dinner, double-check information, and lean on people—not devices—when life gets hard.

    The Bottom Line

    The combination of smartphones, social media, and AI is powerful—and profitable. But it’s not built to replace actual parents. 

    If parents don’t evolve, technology will fill the vacuum. But AI doesn’t love your child. It doesn’t know your child. It can’t notice their tears at the dinner table or anxiety about school or friends.

    Your presence is the ultimate safeguard. The question isn’t whether your kids will grow up with AI—they will. The question is how parents evolve to shepherd their kids through a complex, AI-infused digital environment.

  • The Role of Education in Digital Security [AI]wareness

    In today’s world, digital security is no longer a niche concern; it’s a daily reality. From artificial intelligence powering personalized ads to sophisticated fraud schemes that mimic the voices of loved ones, technology has reshaped how we connect, learn, and even how we’re targeted. While these advancements create incredible opportunities, they also bring heightened risks, especially for the most vulnerable among us: children navigating their first experiences online, and seniors adapting to a digital-first world.

    Brave New Bot

    AI tools make scams and misinformation harder to detect. Children may be exposed to deepfake content or predatory behavior masked behind convincing digital personas. Seniors, meanwhile, are facing an unprecedented wave of phishing emails, AI-generated robocalls, and fraudulent “tech support” offers. The gap isn’t in the technology; it’s in education. Too often, families and communities lack the knowledge and confidence to spot warning signs, ask the right questions, and protect themselves.

    Educating the Human Intelligence Behind AI

    That’s where education becomes a powerful defense. By equipping children with the basics of digital literacy, we help them build lifelong habits of skepticism and safety online. By guiding seniors through the latest scam tactics and practical tools, we restore confidence and reduce isolation. And by providing families with simple, actionable resources, we create a ripple effect of protection that extends across generations.

    At Opt-Inspire, our mission is to close this gap. Through interactive workshops, hands-on toolkits, and volunteer-driven outreach, we bring digital safety education directly to seniors, children, and the families who love them. We believe that awareness isn’t just information; it’s empowerment. With the right knowledge, every person has the power to protect themselves and those around them.

    Education is more than prevention. It’s preparation, and in today’s atmosphere it is absolutely necessary. We are working towards building a world where no one has to feel unsafe in the digital age.

  • Online Safety Tools Every Family Should Know About

    Staying safe online isn’t just about knowing what threats exist. It’s about having the right tools at your fingertips. With scams growing more sophisticated and AI making it easier than ever for bad actors to impersonate people we trust, families need simple, reliable resources to protect themselves.

    At Opt-Inspire, we’ve created practical tools designed to empower both kids and seniors—and the families who love them—to stay secure in today’s digital world. A few highlights:

    #1MSecureTogether Campaign – A national initiative designed to reach one million individuals with cybersecurity education between October 2025 and October 2026.

    The Make It Personal Toolkit – A step-by-step guide to helping seniors practice everyday digital safety, from spotting scams to managing passwords.

    Our Kids + Parents Toolkits – Tailored by age to build healthy tech habits early, with exercises families can do together.

    Volunteer-Led Presentations – Live and virtual sessions where trained volunteers walk through real-world examples of scams and safety basics, leaving families with resources they can use right away.

    Every family deserves to feel confident online. Explore these resources and more on our website, and share them with someone you love. Because when one person is safer, we all are stronger.

  • Empowering Seniors to Navigate the Online World Safely

    The internet has become an essential part of everyday life—connecting us to loved ones, services, and information at the touch of a button. But for many seniors, the rapid pace of digital change can feel overwhelming. With new technologies like artificial intelligence accelerating the sophistication of scams, it’s more important than ever that older adults have the tools and confidence to navigate the online world safely.

    Seniors are often targeted by cybercriminals because scammers view them as more trusting, less familiar with digital warning signs, or more likely to handle finances independently. Common schemes include fraudulent tech support calls, phishing emails, fake investment offers, and even AI-generated robocalls that mimic a family member’s voice. These attacks aren’t just financial—they can cause stress, embarrassment, and a loss of trust in technology.

    The good news is that education and practice can turn vulnerability into strength. By learning how to spot red flags—like urgent messages asking for money, requests for personal details, or offers that seem too good to be true—seniors can protect themselves and help others do the same. Building familiarity with password managers, two-factor authentication, and device security settings makes everyday online activity safer and more manageable.

    At Opt-Inspire, we believe no one should feel left behind in the digital age. Through our Make It Personal Toolkit, interactive workshops, and volunteer-led sessions, we provide seniors and their families with simple, hands-on strategies to outsmart scams and use technology with confidence. Our resources are designed to be approachable, practical, and immediately useful—so that seniors feel empowered, not intimidated, by the digital world.

    Online safety isn’t about keeping seniors away from technology—it’s about giving them the knowledge to use it with peace of mind. When seniors are equipped to navigate the internet securely, they can fully embrace the benefits of connection, learning, and independence that the digital world has to offer.