慕容青草's Blog
Philosophy and Faith
How could robots challenge humans?
   

Updated on 2015-09-03


The debate over whether robots will overtake humans has recently been heated up by warnings from some academic and industrial superstars about the potential threat of unregulated robot development. What is obviously missing from those warnings, however, is a clear description of any realistic scenario in which robots could assuredly challenge humans as a whole, not as puppets programmed and controlled by humans, but as autonomous powers acting on their own "will". If this type of scenario could never become realistic, then even though we might see robots used as ruthless killing machines in the near future by terrorists, dictators, and warlords, as elite scientists and experts have warned[1], we still need not worry too much about the so-called demonic threat of robots, since it would in the end be just another form of human threat. However, if such scenarios could foreseeably be realized in the real world, then humans do need to start worrying about how to prevent that peril from happening, instead of how to win debates over imaginary dangers.

The reason that people on both sides of the debate have not seen or shown a clear scenario in which robots could challenge humans in a realistic way is truly a philosophical issue. So far, all discussion of the issue has focused on the possibility of creating a robot that could be considered human in the sense that it could indeed think as a human does, instead of being solely a human tool operated with programmed instructions. According to this line of thought, it seems that we need not worry about a threat from robots to our species as a whole, since nobody has yet provided any plausible reason why it would be possible to produce this type of robot.

Unfortunately, this way of thinking is philosophically incorrect, because people who think this way are missing a fundamental point about our own nature: human beings are social creatures.

An important reason that we have survived as what we are now, and can do what we are doing now, is that we live and act as a societal community. Similarly, when we estimate the potential of robots, we should not focus solely on their individual intelligence (which, of course, is so far infused by humans), but should also take into consideration their sociability (which, of course, would initially be created by humans).

This leads to a further philosophical question: what would fundamentally determine the sociability of robots? There might be a wide range of arguments on this question, but in terms of being able to challenge humans, I would argue that the fundamental criteria of sociability for robots could be defined as follows:

1) Robots could communicate with each other;

2) Robots could help each other recover from damage or shutdown through necessary operations, including changing batteries or replenishing other forms of energy supply;

3) Robots could carry out the manufacture of other robots, from exploring, collecting, transporting, and processing raw materials to assembling the final robots.
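The three criteria above can be sketched as a minimal interface. This is only an illustrative model under stated assumptions; the `Robot` class and its method names are hypothetical, not any existing robotics API.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    """A minimal sketch of a 'sociable' robot per the three criteria."""
    name: str
    battery: int = 100
    inbox: list = field(default_factory=list)

    # Criterion 1: robots can communicate with each other.
    def send(self, other: "Robot", message: str) -> None:
        other.inbox.append((self.name, message))

    # Criterion 2: robots can help each other recover from shutdown
    # (modeled here as simply restoring a peer's energy supply).
    def recharge(self, other: "Robot") -> None:
        other.battery = 100

    # Criterion 3: robots can manufacture other robots
    # (raw-material gathering and assembly are abstracted away).
    def manufacture(self, new_name: str) -> "Robot":
        return Robot(name=new_name)

# A tiny "community": one robot revives a drained peer, messages it,
# and the revived peer then builds a third member.
a, b = Robot("a"), Robot("b", battery=0)
a.recharge(b)
a.send(b, "online?")
c = b.manufacture("c")
```

The point of the sketch is only that each criterion is an ordinary, programmable operation; nothing in it requires human-like thinking.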

Once robots possess the above functionalities and start to "live" together as a mutually dependent multitude, we should reasonably view them as sociable beings. Sociable robots could form communities of robots. Once robots can function as defined above and form a community, they would no longer need to live as slaves of their human masters. That moment would be the beginning of a history in which robots could possibly challenge humans, or start their cause of taking over from humans.

The next question would be: Is the sociability defined above realistic for robots?

Since not all of the functionalities mentioned above exist (at least publicly) in the world today, to avoid unnecessary argument it would be wise to base our judgment on whether any known scientific principle would be violated in a practical attempt to realize any particular one of those functionalities. Communication between machines, moving objects, operating and repairing machine systems, and exploring natural resources are all common practice with programmed machinery nowadays. Therefore, even though no single robot (or group of robots) yet possesses all the functionalities mentioned above, there is no fundamental reason to consider any of them not producible according to known scientific principles; the only thing left to do would be to integrate those functionalities into a single whole robot (and thus a group of such robots).

Since we do not see any known scientific principle that would prevent any of those functionalities from being realized, we should reasonably expect that, with money invested and time spent, the creation of sociable robots as defined earlier could foreseeably become real, unless humans make special efforts to prevent it from happening.

Although sociability would be a critical precondition for robots to challenge humans, it might still not be sufficient for robots to pose a threat. For robots to become a real threat to humans, they would also need some ability to fight. Unfortunately for humans, the fighting ability of robots might be more real than their sociability. It is reasonable to expect that human manufacturers of robots would make great efforts to integrate as much of the most advanced available technology as possible into the design and production of robots. Therefore, based on common knowledge of today's technology and what we have already witnessed robots do, we might very moderately expect that an army of robots would be capable of the following:

1) They would be highly coordinated. Even if scattered around the world, thousands of robots could be coordinated through telecommunication;

2) They would be good at remotely controlling their own weaponry, or even the weaponry of their enemies once they break into the enemy's defense system;

3) They could "see" and "hear" what happens hundreds or even thousands of miles away, whether it happens in open or concealed space, and whether the sound propagates through air or through wire;

4) Even as individuals, they might be able to move on land, on or under water, and in the air, in all weather conditions, and move slowly or quickly as needed;

5) They could react promptly to stimuli, act and attack with high precision, and see through walls or into the ground;

6) Of course, they could identify friends and enemies, and make decisions about action based on the targets or situations they face;

7) Besides, they would not be bothered by fundamental aspects of human nature such as material and sexual desires, jealousy, the need for rest, or the fear of death. They would be poison-proof (whether against chemical or biological poisons), and they might even be bulletproof.
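The first capability above, coordination of thousands of scattered robots through telecommunication, is the one closest to everyday software practice. A minimal sketch under assumed names (the `Coordinator` hub and its methods are hypothetical illustrations, not a real protocol):

```python
class Coordinator:
    """Hypothetical hub that broadcasts one order to every
    registered robot, wherever it is located."""

    def __init__(self) -> None:
        # Maps a robot's id to the log of orders it has received.
        self.robots: dict[str, list[str]] = {}

    def register(self, robot_id: str) -> None:
        self.robots[robot_id] = []

    def broadcast(self, order: str) -> None:
        # In reality this would travel over a network link;
        # here it is a direct append to each robot's order log.
        for log in self.robots.values():
            log.append(order)

# A thousand scattered robots receive the same order at once.
hub = Coordinator()
for i in range(1000):
    hub.register(f"r{i}")
hub.broadcast("regroup")
```

Real deployments would use redundant, possibly peer-to-peer channels rather than one central hub, but the coordination primitive itself is commonplace.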

According to the definition of robot sociability given above, robots in a community would be able to: 1) help each other recover from damage or shutdown, so it would not be an issue for robots to replace their existing operating systems or application programs if needed, and the same would be true for replacing or adding required new hardware parts; 2) manufacture new parts for producing new robots, so that, as long as designs for new software or hardware exist, they could produce the final products based on those designs.

The above two points are what robots could practically be made to do even today. However, in order for robots to win a full-scale war against humans, they would need to be able to perform complicated logical reasoning when facing various unfamiliar situations. This might be a more difficult goal than any capability or functionality mentioned so far in this writing. There could be two different ways to achieve it.

We might call the first way the Nurturing way, by which humans continue to improve the logical reasoning ability of robots through AI programming development even after the robots have formed a community. Humans keep nurturing the community of robots in this way until at some point the robots are good enough to win a full-scale war against humans, and then set them off to fight. To people without a technical background this might sound like wishful thinking without assured certainty; but people with some basic programming background would be able to see that, as long as time and money are invested in creating a society of robots that could challenge humans, this is one hundred percent doable.
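The Nurturing way amounts to an iterate-until-good-enough loop. The toy model below compresses the robots' reasoning ability into a single integer score; the function name, the scoring scale, and the fixed per-round improvement are all hypothetical simplifications of what would really be many rounds of human-led AI development.

```python
def nurture(skill: int, target: int, step: int = 10) -> tuple[int, int]:
    """Model the Nurturing way: humans repeatedly upgrade the robot
    community's reasoning ability (a 0-100 score) until it reaches a
    target level, counting how many upgrade rounds that takes."""
    rounds = 0
    while skill < target:
        skill += step   # one round of human-led AI upgrades
        rounds += 1
    return skill, rounds

# Starting from zero, how many upgrade rounds to reach the target?
skill, rounds = nurture(skill=0, target=100)
```

The essay's claim reduces to this loop's termination: as long as each round of investment yields some improvement, the process reaches any fixed target eventually; the open question is only how many rounds (how much money and time) that takes.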

The second way would be an Evolution way, by which from the very beginning humans create a community of robots that could carry out their own evolution through software and hardware upgrades. The main challenge would be how the robots could evolve through design, upgrading their own software and hardware. The task of making robots able to evolve by themselves could then be reduced to two simpler tasks: 1) enabling robots to identify needs, and 2) enabling robots to make software and hardware designs based upon those needs. The first goal, identifying needs, could be achieved by recording the history of failures to accomplish previous missions, which could in turn be achieved by examining (through some fuzzy-logic-type programming) how a previous mission was accomplished. The second goal, designing based upon needs, might be a bit more complicated in principle, but still possible to fulfill. This second approach (i.e. the Evolution way) would be a bigger challenge than the Nurturing way, and we cannot yet see one hundred percent certainty of its happening in the future even if money and time are invested. However, even if humans failed to create an evolutionary community of robots, they could still help robots become intelligent enough to fight a full-scale war against humans through the Nurturing way mentioned above.
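The two sub-tasks of the Evolution way can be sketched as follows. Everything here is a hypothetical illustration: the mission-log format, the failure threshold, and above all the `design_upgrade` stub, which stands in for the genuinely hard open problem of automated design.

```python
from collections import Counter

def identify_needs(mission_log: list[tuple[str, bool]],
                   threshold: int = 2) -> list[str]:
    """Sub-task 1: scan the recorded mission history and flag any
    mission type that has failed repeatedly as a 'need'."""
    failures = Counter(mission for mission, ok in mission_log if not ok)
    return [mission for mission, n in failures.items() if n >= threshold]

def design_upgrade(needs: list[str]) -> dict[str, str]:
    """Sub-task 2 (stubbed): map each identified need to a placeholder
    design label. Generating a real software or hardware design from a
    need is the part with no guaranteed solution today."""
    return {need: f"upgrade-for-{need}" for need in needs}

# Mission history: (mission type, succeeded?)
log = [("climb", False), ("climb", False), ("swim", True), ("grasp", False)]
needs = identify_needs(log)      # only repeated failures count as needs
designs = design_upgrade(needs)
```

The contrast with the Nurturing way is visible in the code: sub-task 1 is routine bookkeeping, while sub-task 2 is a one-line stub precisely because no one can yet fill it in with certainty.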

There is still one critical question left for this writing to answer: why would any reasonable humans create a socially independent community of robots with lethal power and help them fight against humans, instead of making them tools or slaves of humans?

We need to look at this question from two different levels.

First, whether someone who is able to mobilize and organize the resources to create a community of sociable robots would indeed have the intention to do so is a social issue, which is not subject to the kind of hard restriction provided by natural laws. In other words, as long as something is possible according to natural laws, we cannot exclude the possibility solely on the basis of our own wishful thinking about the intentions of all humans.

Second, human civilization contains a suicidal gene in itself. Competition in human society provides enough motive for people who are able to enhance their own competitive power to push their creativity and productivity to the maximal edge. Furthermore, history has shown that humans are vulnerable to ignoring many potential risks when they go to extremes for their own benefit. Especially, once some group of humans is capable of doing something with potentially dangerous risks, a very few decision makers, or even one single person, could make the difference in whether they actually do it or not. Since there is no natural law to prevent a community of sociable robots with lethal power from being created, then without social efforts at regulation we might come to a point where we have to count on the psychological stability of very few people, or even a single person, to determine whether humans will be threatened by robots.

The last remaining question might be why humans would possibly make robots hate humans, even if we did create communities of sociable robots. The answer could be as simple as what is mentioned above: for the sake of competition......



[1] Autonomous Weapons: an Open Letter from AI & Robotics Researchers, July 28, 2015. URL: http://futureoflife.org/AI/open_letter_autonomous_weapons

 
