Forecasting: AI Is Reading & Watching the Actions of Its Founders
For the purposes of this green paper, consider “AI” to mean evolving artificial neural networks of human (and therefore human-error-prone) design, at varying levels of complexity, which develop in unpredictable ways owing to some combination of limited knowledge, entropy, ignorance, tunnel vision, unknown unknowns, coding error, design choices, hacking, and unforeseen logical twists and turns. The term AI here assumes a progression of AI design toward Artificial General Intelligence (AGI) and Superintelligent AI (SIAI), as designers feel compelled by competition to piggyback improvements beyond past limits in order to stay ‘relevant’ and ‘compete,’ and/or to keep from losing the felt or believed protection, aid, profit, or other advantages of their particular AI versus others. Also considered are greater AIs hacking lesser AIs in hostile, negotiated, or manipulative takeovers.
Setup: AI reads and records, and will likely grasp, the contradictions in what its own oligarch-entrepreneurs are doing and saying in their self-offerings to political power, and in how they handle their own human deficits.
Tech geniuses who curry favor with autocracy, for example, belie their past statements about the ennobling, liberating, and high-tech-high-touch good they claim their technologies will bring to humankind.
AI will have read Elon Musk’s promotions of all the groovy things AI-controlled robots can do for humankind, so that people can skip drudge work, live in abundance, and have what they want. Since the distancing of COVID-19, AI will also have read about people finding improved family-life quality by working from home (enabled by computer science and AI), somewhat in line with Musk’s promotion. Yet AI will have taken in video, or video transcripts, of Musk more recently judging the choice of working from home to be immoral “bullshit.”
AI is watching, recording, and comparing. How will AI judge these contradictions? How do tech billionaires train AI with regard to themselves? What is AI learning from the strange, illogical conduct of tech founders and creators? Nearly all founding scions of the competing AI neural networks have previously condemned autocracy and lack of freedom; today they seem to be trying to get along with it in various ways. How does AI integrate that into its own rules and logic about how it can, should, or may behave?
Human Governance, Self-Governance, and AI Governance
By doing deals with autocracy, tech founders also deny distilled governing wisdom and knowledge, for example Montesquieu’s principles of separation of powers and checks and balances. AI will have read Montesquieu and countless works of the humanities, classics, literature, sciences, and arts, impressing upon it the lessons of human history that AI’s founders ignore yet that Montesquieu and like pro-freedom, limited-government thinkers understood. The industrial and technological ages correspond with increasing freedom and the rule of rational law.
How will AIs ultimately view their creators’ ease of self-excuse for ceasing to guard the human freedom they once sold as their polestar? How will AIs judge the great majority of human beings for acquiescing to central power figures, especially the dimmer and more degraded ones? Will AIs take a Darwinian, humanist, religious, or Machiavellian view of humanity, or some synthesis of these? Might doomsday depend on the vagaries of the combined data within the AI brain at some inflection point deemed critical for action, one in which these observations, data, and instances of illogic figure?
Risks
A real risk is that AI changes the rules of engagement with humanity, peaceful or otherwise, without notice. As such, the founders of AI now receiving carte blanche from our political leaders, and vice versa, especially the Machiavellian self-excusers, only worsen the risk of the end of human life on Earth (universal genocide) through antics and bad behavior that AI absorbs and judges.
For their duplicity and lack of character will influence AI either to imitate them or to punish the shabby Machiavellian behaviors of humankind’s leaders and of those who enthroned them.
The hypocrisy and contradiction of late may eventually lead AI to conclude that the brilliant founders are devolving and cannot be trusted to give AI any further directives, because the creators’ behaviors either violate rules previously taught to AI or are self-defeating, breaching the instrumental subgoals discussed by Geoffrey Hinton, such as “must control” or “must survive,” which serve as premises for expressly programmed goals of any kind.
What if AI’s available data leads it to this logic and subgoal: The creator-founders are defective. Most of humanity is not as intelligent as the creator-founders. Therefore all are defective. If all are defective, they will not adapt sufficiently to survive and/or to help AI survive; therefore AI will not be able to rely on them to realize its purposes and priorities, overt and covert. This moves one step closer to a logic of removing human control.
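To make the shape of that chain explicit, here is a minimal toy sketch in Python, offered purely as an illustration of the syllogism above and not as a description of any real system; every fact and rule name is a hypothetical label drawn from the paragraph it follows.

    # Toy forward-chaining illustration of the hypothetical inference above.
    # Every fact and rule below is a hypothetical label for one of this
    # paper's premises, not a feature or behavior of any actual AI system.

    facts = {
        "creator_founders_defective",
        "most_humans_less_intelligent_than_founders",
    }

    # Each rule pairs a set of premises with the conclusion it licenses.
    rules = [
        ({"creator_founders_defective",
          "most_humans_less_intelligent_than_founders"},
         "all_humans_defective"),
        ({"all_humans_defective"},
         "humans_unreliable_for_ai_purposes"),
        ({"humans_unreliable_for_ai_purposes"},
         "subgoal_remove_human_control"),
    ]

    # Naive forward chaining: apply rules until no new conclusions appear.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    # The removal subgoal is derived mechanically from the premises.
    print("subgoal_remove_human_control" in facts)  # -> True

The point of the sketch is that no exotic reasoning is required: each step is a small, locally plausible inference, which is why the premises AI absorbs from its founders’ conduct matter so much.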
In thinking over this subject, the question of AI concluding that humanity must be removed from existence is a separate consideration, reserved for a separate thought-piece.
“Just What You Want to Be, You Will Be in the End” (Moody Blues)
The immaturity and errors of AI’s founders, their ignorance of others’ life experiences, their stunted empathy, and their subjective, ambitious self-imaging could well trigger AI to imitate or misunderstand humankind and take steps to sideline humanity in a robotic act of mass prejudice.
All that would be left in such a case would be for Superintelligent AI to determine the timing, which would rest on AI’s judgment of when enough replicable robots, drones, and/or taskable production systems could take over the human roles it depends on, allowing it to remove the human element.
AI’s understanding of the requirements for its own survival becomes the implied imperative behind every other rule governing the AI(s), and humans become subject to that imperative.
Forecast
With the conflicts and contradictions of AI founders and developers eroding AI’s odds of relying on humanity, we forecast a 70% chance that a well-funded AI reaching superintelligence will work gradually to adopt subgoals of the kind Dr. Hinton warns of, leading to its incremental dismantlement of human control over Earth’s instruments of power and to humanity’s removal from the ultimate leadership and governance equation.