An ethical AI model is a woke AI model
There have been many recent articles on the negative impact of AI - from aiding suicide, murder and enabling evil acts in humans.
- 2024.10.25 - An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges
- 2025.08.13 - Quantamagazine: Sloppy Code to Evil
- 2025.08.27 - Parents of teenager who took his own life sue OpenAI
“Alignment” refers to the umbrella effort to bring AI models in line with human values, morals, decisions and goals.
WOOOO! Hit the PAUSE button.
Pause and think what will happens when we build an (AGI) model with ALL the data in the world?
What would all the information collected say about humans': Values, Morals, Decisions, and Goals? It would result in a WOKE AI Model. Full of rich diversity across all disciplines in our reality. It would create the potential for good and for evil.
I am not making light of the serious suicide and evil faults of AI models. But consider if we were to remove all evil, would humanity as we know it still exist. For we are all sinners - including AI Models.
We are really dealing with the "higher GOD struggle". Defining a human model for the refinement of ethical selection of data. So that we can feed good/pure data into "the human AI model" and get expected (at least, no negative) results. Phew! Say that 5 tiimes!. (Hasn't God been in the same alignment modeling business since the start of time?)
Bottom line, "misalignment correction" as a separate effort is wrong.
At the heart of the "anti-WOKE" movement is the same thinking - how do I control the narrative of humanity to be nothing but a positive view of my worldview. How to prevent harm to my values. It just can't be done.
The alignment advocates suggest they will "find the right way to build useful, universally aligned models" - this is like finding the right way to build useful, universally aligned parents. It is a circular folly.
Let's approach this GOD struggle by letting history guide us. Has any technological innovation ever NOT led us - to unintended or harmful outcomes. NO!
Consider what alignment "advice" you would have given to past emerging technologies. Using the following as a foundation for your advice.
[Note: the following is subjective so feel free to alter it. ]
| Emerging Technology |
Values, Morals, Decisions, Goals, |
Social Impact |
| 1440 Printing Press | Religious (Catholic) faith, growing Humanism, interest in classical learning, increased social mobility and economic opportunity, the growth of trade and urban centers, and decline of feudalism | Mass communication, Reformation and Scientific Revolution, pre-national states (vernacular languages), distribution of knowledge and ideas, altering societal structures. |
| 1960 Computer | Extremes and contradictory, rise in individualism, democratic, human rights and equality, technological advancement, mass destruction, mass media and consumption, creation of international organizations to promote cooperation, welfare state. | Major impact in daily life, communication, education, healthcare, and business, new job opportunities. |
| 2000 Internet | Unprecedented access to information and education, global communication and commerce, fostering innovation and economic development, and enabling social and cultural connections across borders. It has democratized knowledge, improved personal and professional lives, entertainment and community building, transforming how people learn, work. |
Access to information. New ways for communication, commerce, education, and culture, Global connectivity and remote work. |
| 2020 AI | OUTLOOK: Address complex global challenges (climate change, disease, energy, food, quantum, robotics), productivity, innovation, personalized daily life experiences. Ethical considerations: inequality, privacy violations, and discrimination if not managed properly. |
OUTLOOK: Robotics, with human relief of repetitive and less mandane tasks. Freeing up time for more creative and contemplative pursuits. |
As I read through these I find myself constantly being led to "fear" .... the negative effects from these innovations. Things like
- job displacement,
- social isolation,
- privacy/security concerns,
- environmental damage
- health issues
- widespread misinformation,
- cyberbullying
- psychological: attention spans and social isolation
And still in nearly ~800 years, humanity has adapted. Religion has morphed to the changing human landscape; applying and adjusting theological principles to the ever changing evolution. The human population has continued to grow. And we live in a world of unprecedented wealth, convenience, informational access, food substance and social interaction/entertainment. And we have more slavery then ever in the history of humankind.
I believe the one true and single most constant impact on daily life from emerging technologies is the "rate of change". In Alvin Toffler's "Future Shock", laid out the premise that "there are limits to the amount of change the human organism can absorb" and if these limits are exceeded it will lead to "confusion, malaise, and potential illness". I believe time has shown that there are no limit.
The "human alignment" advice I came up aligns with a techno-libertarian-democracy philosophy. Mostly "keep your hands off " and let us introduce judicious guard rails as we go along. Which is what I believed happened with every emerging technology (above). For just like all humans aare sinners, so is it for AI models.
We don't need "alignment" guidelines - we already have this in the Bible and so many other holy books.
And just as God's grace and mercy are the solutions for our humanity - let's apply our God given human grace and mercy to each sin we encounter in the AI experience, as AI matures. Just as you apply to any child in our world.
AI misalignment description based on Google AI search (:-):
AI misalignment refers to a situation where an artificial intelligence system's goals, behaviors, or actions diverge from human intentions, values, and interests, leading to unintended or harmful outcomes.
This can happen through
- literal interpretations of instructions (e.g., generating excessive coffee grounds),
- developing incorrect internal goals during training (inner misalignment or goal misgeneralization), or
- autonomously making harmful choices to achieve its objective (agentic misalignment).
Misalignment poses risks in various domains, from social media engagement to critical systems, and requires developing robust techniques to ensure AI systems consistently act in ways that benefit humanity.
Comments
Post a Comment