Authored by Stacy Feiner and Lorri Slesh*
A recent “60 Minutes” exposé revealed how automatic updates and connected devices quietly intrude on privacy. One car owner who purchased GM’s OnStar system primarily for safety discovered that his driving habits were being secretly shared with insurers. The unauthorized disclosure drove up his premiums, costing him hundreds of dollars while lining insurers’ pockets. Worse still, the data was misinterpreted, turning an everyday convenience into unjust surveillance.
This is not happening in a vacuum. We’ve been lulled by convenience, and that convenience has crept into manipulation. The more technology advances, the more access becomes entrapment: today, mere participation creates exposure and leaves us vulnerable.
AI Is Not All or Nothing
We’re not advocating a return to landlines, trip tickets, or cassette tapes. Convenience and personalization, including AI, are likely here to stay—but they demand new ethical guardrails. AI now permeates public infrastructure, commerce, education, healthcare, and social systems. Its impact on autonomy, privacy, and personal agency is profound.
Technology has historically been framed as benefiting humanity, not harming it. AI isn’t new: John McCarthy coined the term in 1955. In 1983, the film “WarGames” introduced American audiences to the idea of an intelligent, game-playing computer. Siri and IBM Watson arrived in 2011, followed by Alexa in 2014. Today, generative AI, including ChatGPT and OpenAI’s agents, is increasingly popular.
As consumers, we have lost control over the pace of technology adoption. Everett Rogers’ diffusion-of-innovations curve once gave us a familiar rhythm: innovators, early adopters, early majority, late majority, and laggards. Today, AI is compressing that curve, replacing choice with forced consent fueled by constant push notifications, covert data collection, and the unauthorized sale of personal information. Platforms like Google, LinkedIn, and Facebook embed AI in ways that deliberately make it nearly impossible for consumers to understand, let alone control, how their personal data is used.
The Psychological Cost of Forced Consent
When we are pushed to accept policies we cannot opt out of, serious psychological risks emerge. Research shows that a lack of choice induces chronic stress, anxiety, and learned helplessness. On a social level, attempts to protect our privacy often come at the expense of belonging. This double bind leaves people either isolated or pressured to conform.
Even small interactions—sharing data, following automated recommendations, relying on AI-driven systems—can escalate over time. They erode personal agency, expose individuals to manipulation, and threaten long-term autonomy. Normalized intrusion doesn’t just threaten privacy; it reshapes behavior, social dynamics, and our ability to make independent choices.
The compressed adoption curve pushes the early and late majorities alike into technologies they cannot avoid, while laggards are overrun, ignored, or excluded. Our fundamental human capacity to think for ourselves is at risk.
A Layered Approach to Ethical AI
We propose that AI requires a layered system of guardrails, where regulation, ethics, and relationships work together to protect human agency and well-being. Each layer is necessary, and each reinforces the others.
“The future of compliance isn’t just about keeping up — it’s about leading with integrity in an AI-driven world.” — Meredith Anastasio, Opal Group
Layer One: Regulatory Guardrails—AI as a Public Utility
We argue that AI has become essential infrastructure. Like electricity, water, or telecommunications, it must be treated as a public utility. Public utilities are regulated to protect consumers, ensuring fairness, safety, access, and accountability.
Treating AI as a public utility reduces risk, increases accessibility, and guarantees meaningful opt-out mechanisms. Users must retain control over participation, data sharing, and exposure—preserving agency while benefiting from technology.
Layer Two: Ethical Guardrails—Intelligence with Conscience
Regulation alone cannot ensure integrity; AI must be guided by ethical principles. Oversight defines boundaries; ethics give those boundaries meaning.
“AI without values is intelligence without conscience.” — Davos, 2025
Ethical guardrails ensure that innovation serves humanity, not just profit or convenience. Transparency, accountability, and care must be embedded into AI design and deployment. Ethics is the moral compass that makes regulatory frameworks meaningful.
Layer Three: Relational Guardrails—The Sixth Level
The Sixth Level provides the social infrastructure for ethical AI. Grounded in Dr. Jean Baker Miller’s 1971 research on women’s unique psychological contributions to healthy relational development, The Sixth Level is a relational operating system: a framework for collaboration, transparency, and shared decision-making that ensures technology benefits humanity rather than exploits it.
The framework is grounded in four principles: mutuality, ingenuity, justness, and intrinsic motivation. These principles guide interactions with technology, shape decision-making, and safeguard human agency in an automated world.
Each layer strengthens the others: Regulation provides structure, ethics provide conscience, and relationships ensure culture and practice align with both. Together, they prevent runaway systems from overriding human judgment.
Learning from the Past to Protect the Future
We would embrace AI if it advanced planetary health, improved human well-being, fostered meaningful connections, operated transparently under an Ethic of Care, and allowed opt-out agency. The train may feel like it is running away, but we can still slow it by demanding oversight, regulation, education, and ethical adoption frameworks.
Public health lessons—such as the 50-year effort to reduce tobacco use—show that large-scale behavior change is possible when regulation, transparency, and enforcement align. AI demands nothing less: a multi-faceted approach grounded in ethics, relational accountability, and collective protection.
The Path Forward
We are at a pivotal moment. The choices we make today—how AI is adopted, regulated, integrated, and whether it preserves user choice—will determine whether we protect human agency or surrender it to systems designed without us in mind.
“Ethics isn’t optional or merely nice‑to‑have; it must be embedded into the architecture, strategy, and leadership of organizations.” —Reid Blackman
A layered approach provides a blueprint:
- Regulation: Ensures fairness, safety, and an opt-out choice.
- Ethics: Aligns technology with human values.
- Relational: Ensures decisions and practices serve collective well-being.
Operationalized together, oversight, ethical intent, and relational accountability preserve human ingenuity, real consent, and healthier social connections in the age of AI.
*Acknowledgement to my co-author, Lorri Slesh, a growth strategist, integrator, and author, for developing our presentation for the Opal Emerging Markets conference into this post.