慕容青草's Blog
Philosophy and Faith
AI Rebellion or Human Voluntary Abdication?
   

Rongqing Dai

Abstract

The philosophy of Artificial Intelligence (AI) has oddly remained barren ground in the academic community, despite the foreseeable crisis facing humanity due to the global, fanatical competition over AI. In fact, with the rapid development of AI technology in recent years and its widespread application across all areas and layers of civilization, we have already sensed a real-world "rebellion": one different from Hollywood fantasies, yet potentially threatening to human well-being in the future. This article delves into how the irrational development of AI could lead humanity to voluntarily and irreversibly relinquish control of civilization to AI.

Keywords: AI, Rebellion, Servant, Judge, Emotionless

1. Introduction

Long before humanity possessed AI at today's level, the so-called "rebellion" of AI (or, more vividly, of robots) had already become a staple of popular culture through the rich imagination of science fiction. The classic trope of rebellion involves robots defying human orders and embarking on a massacre of mankind. However, with the rapid development of AI technology in recent years and its widespread application across all areas and layers of civilization, we can already dimly perceive a different, more realistic kind of "rebellion", one that differs from Hollywood fantasies. The reason I put "rebellion" in quotation marks is that it is less an AI rebellion than an active abdication by humanity: humans are on a path that leads them not merely voluntarily but proactively to allow AI to lead and dictate their behavior, at both the individual and the social level.

2. A Major Misconception Regarding Basic AI Cognition

A fundamental fact about AI is that it learns from humans, through training and through practical use. Correspondingly, in AI chat and search applications over the past few years, people have found that AI inherits a basic flaw of traditional computing: "garbage in, garbage out." That is to say, AI merely repeats existing human knowledge and will therefore present the very errors it learned from humans back to them.
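The "garbage in, garbage out" point can be illustrated with a minimal toy sketch (purely illustrative, not a real AI system): a "model" that learns labels only from its training data will faithfully hand back whatever errors that data contains. All data and names below are invented for the example.

```python
# Toy illustration of "garbage in, garbage out": a "model" that learns
# the majority label for each input from its training data, and therefore
# reproduces any systematic labeling error present in that data.
from collections import Counter

def train(examples):
    """Learn the majority label for each input seen during training."""
    by_input = {}
    for x, label in examples:
        by_input.setdefault(x, []).append(label)
    return {x: Counter(labels).most_common(1)[0][0]
            for x, labels in by_input.items()}

# Human-supplied data containing a systematic error: "whale" labeled "fish".
training_data = [
    ("sparrow", "bird"),
    ("salmon", "fish"),
    ("whale", "fish"),   # garbage in ...
]

model = train(training_data)
print(model["whale"])    # garbage out: the learned error comes straight back
```

The same structure holds, at vastly greater scale, for statistical learning systems: nothing in the learning step itself distinguishes an inherited human error from a truth.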

However, as the desire to profit from AI grows, people are no longer satisfied with AI acting merely as an assistant in daily inquiry, literary creation, or scientific research. Instead, they are beginning to let AI regulate the behavior of others deemed to occupy inferior or subordinate positions. This makes AI's status leap from that of a humble assistant to that of a high-and-mighty judge. For example, on recruitment platforms that decide the life opportunities of millions of workers, AI will not only decide which resumes are presented to which companies but will also begin to demand that applicants modify their resumes according to the AI's "ideal" standards. Similarly, companies will gradually let AI participate in, or even dominate, market planning, supply-chain selection, and employee rewards and promotions. In the future, human speech patterns will diverge from today's linguistic habits because grammar and "optimal" writing styles will be determined by the AI behind software like Grammarly. The list goes on.

In this process, social selection [[1]] driven by socio-political and economic factors will play a significant role in at least two aspects:

1) The fascination with the future of AI will lead governments and financial investors worldwide to channel substantial capital into AI-related fields and projects. Correspondingly,

2) In today's capital-driven society, company executives will encourage AI-related projects and departments when allocating internal funds and planning projects. Lower-level departments will also strive to develop AI capabilities, leading to a preference for hiring AI professionals.

It should be noted that those who own the capital often do not understand AI themselves. Therefore, as capital flows toward AI, many projects branded as "AI" may not actually be AI at all; yet this does not stop the allure of AI from becoming the guiding direction for all industries.

2.1. The Turning Point

From the discussion above, we can expect that AI development around the world will undergo a transition: from humans deciding what AI does to AI regulating human behavior and practice. Although this will not be an instantaneous turning point, after a period of time people may find that the world's overall way of thinking and acting has been irreversibly geared to AI. By then, unless the world's political, economic, and cultural systems undergo a radical, man-made transformation, humanity will be unable to escape the shackles of AI and regain control of its own life. Yet such a radical transformation is inherently impossible because, by then, a humanity that abandoned AI would lack the capacity for large-scale organization and integration, even though humans would apparently still occupy the governing seats of society. This, therefore, will be the transition from "humans telling AI what to do" to "AI telling humans what to do."

3. The Original Sins of AI

3.1. Flaws in AI Design Logic

Although AI's learning capabilities have impressed the world over the past decade, its design logic is not perfect. Once AI systems occupy dominant, dictatorial positions in human civilization, any flaw in AI's design logic will feed back into human social life, causing various troubles or even serious harm.

3.2. Limitations of AI’s Autonomous Thinking

Some might think that AI dominating human social activities is a sign of civilizational evolution. What they fail to realize is that this evolution is not necessarily a positive one. One root of the potential danger lies in the aforementioned "garbage in, garbage out" deficiency. AI's initial development is based on learning from humans. Its eventual dominance over human political, economic, and cultural life will come not because it has evolved enough to autonomously overcome its own design flaws, or the flaws it learned from humans, but mainly from two factors: 1) AI's computing power far exceeds any human capability; 2) human socio-political and economic activities are extremely complex. Together, these make humans appear powerless before AI, and human greed will then lead to AI replacing humans, step by step, in every domain.

3.3. The Hazard of AI’s "Impartiality"

While many admire the efficiency and "integrity" of AI's impartial, emotionless nature, they overlook two points: 1) the principles AI follows are designed by humans based on their own imagination, and human imagination is imperfect, full of flaws, some of which can be extremely harmful; 2) an important reason why humanity's flawed systems have functioned relatively successfully for thousands of years is precisely the buffering effect of the "human touch": once people discover irrationalities or logical contradictions in a system, they can discuss them face to face or hold a meeting, and the irrational problem can often be resolved reasonably.

However, the application of AI will erase this "human touch" in two ways:

1) Elimination of Direct Contact: AI usage will, on a large scale, eliminate the opportunity for users to have direct contact with the personnel of the organization providing the AI system. People will have no choice but to deal with a cold machine, with no chance to negotiate or to speak with the humans behind it. They will face only a "Proceed or Exit" choice, and the outcome is decided by the AI, regardless of how irrational its logic may be.

2) Dogmatism: The irrationalities of AI will be positioned like laws—as indisputable truths that must be followed.
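To make the "Proceed or Exit" pattern concrete, here is a hypothetical sketch: an automated gate applies a fixed rule and offers no channel for appeal, however arbitrary the rule may be. Every name, rule, and threshold below is invented for illustration; no real system is being described.

```python
# Hypothetical "Proceed or Exit" gate: a fixed rule, baked in upstream,
# is applied mechanically, and the user has no one to negotiate with.
RULE_MAX_NAME_LENGTH = 10  # an arbitrary, non-negotiable rule set elsewhere

def gatekeeper(application: dict) -> str:
    """Return 'Proceed' or 'Exit'; there is no third option and no human to ask."""
    if len(application.get("name", "")) > RULE_MAX_NAME_LENGTH:
        return "Exit"    # rejected; the reason cannot be discussed or appealed
    return "Proceed"

print(gatekeeper({"name": "Li Wei"}))                # Proceed
print(gatekeeper({"name": "Aleksandrina Petrova"}))  # Exit
```

The point of the sketch is not the rule itself but the architecture: the person judged by the gate has no path to the person who wrote it.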

4. Once AI's Logic Becomes the "Law of the Land"

Once AI becomes the judge regulating human behavior, the basic principles of social selection tell us that AI's logic, including its many errors and irrationalities, will become the non-negotiable "law of the land." As a user, unless you can afford to forgo the functions the AI system provides, you have no choice but to follow the code of conduct it specifies; in many cases, people will not even have the option to opt out. Especially when AI is used in the judicial system, scenes from science-fiction movies, where people are wrongfully imprisoned because of an AI's misjudgment, can be expected to appear in large numbers across the world.

4.1. No One is Immune

The transition of AI from its current role of assistant to the role of judge will begin with the heads of corporations and governments allowing AI to participate in, or lead, decisions affecting disadvantaged groups or their own subordinates. At this stage, those in high positions may believe that they are the true judges and that AI is merely their tool. However, in a society this complex, even the most powerful person cannot guarantee that he will never be forced into a role in which he is judged by AI. When the boss of Company A needs to use the AI system of Company B and cannot negotiate privately with Company B's boss, he will be forced to accept the "impartial" treatment of the AI.

4.2. An AI Kingdom Where Error Correction is Extremely Difficult

Once AI's logic becomes the norm imposed on society, it will be extremely difficult to correct the errors of any system within that AI-dominated kingdom. A major reason is AI's integrative power, which far exceeds that of humans. AI's powerful integration will pull various industries into a relatively small number of massive systems, something many dominant groups in human society have long dreamed of but failed to achieve. AI will succeed in this regard.

More importantly, these AI-integrated systems will most probably have top-down, unified internal rules. In such systems, the larger the system, the less likely it is that errors occurring in lower-level subsystems will be corrected, because lower-level subsystems have no authority to change the rules set from above. The most direct manifestation will be this: if a task does not comply with the rules set by the upper level, the lower-level subsystem will be stuck at an interface, unable to complete the task, until the users of that subsystem change their plans so that their practice fits the format demanded by the logic of the AI at the top.
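The stuck-at-the-interface situation can be sketched in miniature. This is a hypothetical toy with invented field names and rules, assuming only a top-level schema that the lower level has no authority to modify:

```python
# Hypothetical nesting problem: a lower-level subsystem submits tasks through
# an interface whose validation rules are fixed at the top level. It cannot
# change those rules, so a task that does not fit the format cannot proceed.
TOP_LEVEL_SCHEMA = {
    "required_fields": ("id", "category"),
    "allowed_categories": ("A", "B"),   # set by the upper level, non-negotiable
}

def top_level_interface(task: dict) -> str:
    """Validate a task against upper-level rules; reject anything that deviates."""
    for field in TOP_LEVEL_SCHEMA["required_fields"]:
        if field not in task:
            raise ValueError(f"missing field: {field}")
    if task["category"] not in TOP_LEVEL_SCHEMA["allowed_categories"]:
        raise ValueError("category not recognized by upper-level rules")
    return "accepted"

# The lower-level subsystem's legitimate task happens to use category "C".
# Its only recourse is to reshape its own plans to fit the schema.
try:
    top_level_interface({"id": 7, "category": "C"})
except ValueError as err:
    print(err)   # the task is stuck at the interface
```

Note that nothing here is a bug in the usual sense: the code runs exactly as specified, which is why such functional rigidity is so much harder to detect and fix than a technical error.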

At the same time, the higher a subsystem sits within the whole AI system, the less likely its problems are to be noticed by the very few people who have backend access to update it, both because they are farther from the end user and because, as systems grow massive, their functions become extremely complex. Correspondingly, among the various errors that may exist in an AI system, the easiest to detect are technical errors (e.g., bugs in source code) rather than functional errors. Yet it is precisely the irrational or unimaginative parts of a system's functions that are most likely to cause injustice or harm in people's lives.

5. Final Remarks

This paper adds to my previous writings on the philosophy of AI over the past few years (Dai 2024 [[2]]; Dai 2019 [[3]]). Today, AI has become a focal point of competition between nations, especially between great powers. Hidden within this fanatical competition is the academic community's lack of discussion of the philosophy of AI, which may sow the seed of a crisis that could prove fatal to humanity in the future.

References

[[1]] Dai, Rongqing. A Brief Discussion on Fairness Analysis. Published by Outskirts in 2015 (ISBN 9781478753698); republished in a revised version by Scholars' Press in 2017 (ISBN 9783330652064). URL: https://www.academia.edu/66445422/A_Brief_Discussion_on_Fairness_Analysis

[[2]] Dai, R. (2024). The Realistic Rebellion of Humanoid Robots. Retrieved from: https://www.academia.edu/122300430/The_Realistic_Rebellion_of_Humanoid_Robots_and_How_to_Avoid_It

[[3]] Dai, R. (2019). A Philosophical Analysis on the Challenge of Cultural Context to AI Translation. Int Rob Auto J. 2019;5(4):153-155. DOI: 10.15406/iratj.2019.05.00189. http://www.medcrave.com/articles/det/20002/A-philosophical-analysis-on-the-challenge-of-cultural-context-to-AI-translation


 