bunny2's Blog
If I cannot uphold justice, I can at least not fear to explore the truth
The Practical Use of Philosophy: Do Computers Have Higher Intelligence (Understanding) Than Humans?    2012-08-13 15:21:03

My old online friend Laoga often argues with me. One of his famous views is that philosophy cannot be used to guide concrete scientific work, such as "designing a Ford car."

I counter that this is because Mao Zedong and the Communist Party ruined philosophy's reputation in China. They trumpeted the Marxist slogan that "philosophy must transform the world" and then waged class struggle in its name, making people detest it. But we should not throw the baby out with the bathwater.

I also want to use the following example to show that the guiding role of contemporary philosophy is not merely optional; it is indispensable. The example concerns artificial intelligence. According to John Searle, a philosophy professor at the University of California, Berkeley, most researchers in artificial intelligence over the past thirty years have held that the achievements of AI show that machines can have understanding, and perhaps even more of it than humans. Professor Searle disagrees, so he devised the "Chinese Room" example to prove his point.

Searle does not know a word of Chinese. He sits in a room that contains nothing but boxes of cards bearing Chinese characters, together with a rule book (written in English) for selecting those cards. From outside, cards with questions written in Chinese are passed in. Searle's task is to follow the rules written in English and pass out certain character cards, as if he understood the Chinese question and were answering it. The rules are of this kind: if a character looks like a square box, such as "口", pick a card with a character containing "口", such as "吃", and so on. The actual rules are of course more specific, and Searle has mastered them so well that he can produce an answer quickly. Now, the point is that Searle almost always answers correctly, even for very complex Chinese questions. So does Searle understand Chinese?

Before you look at Searle's own answer, think about it yourself: what conclusion do you reach? I expect you will answer "yes," like most workers in artificial intelligence. If my guess is wrong, please tell me your view and explain why.

====================


====================
Reference:

====================

JOHN SEARLE'S CHINESE ROOM ARGUMENT

John Searle begins his (1990) ``Consciousness, Explanatory Inversion and Cognitive Science'' with

``Ten years ago in this journal I published an article (Searle, 1980a and 1980b) criticising what I call Strong AI, the view that for a system to have mental states it is sufficient for the system to implement the right sort of program with right inputs and outputs. Strong AI is rather easy to refute and the basic argument can be summarized in one sentence: a system, me for example, could implement a program for understanding Chinese, for example, without understanding any Chinese at all. This idea, when developed, became known as the Chinese Room Argument.''

The Chinese Room Argument can be refuted in one sentence:

Searle confuses the mental qualities of one computational process, himself for example, with those of another process that the first process might be interpreting, a process that understands Chinese, for example.

Here's the argument in more detail.

A man is in a room with a book of rules. Chinese sentences are passed under the door to him. The man looks up in his book of rules how to process the sentences. Eventually the rules tell him to copy some Chinese characters onto paper and pass the resulting Chinese sentences as a reply to the message he has received. The dialog continues.

To follow these rules the man need not understand Chinese.

Searle concludes from this that a computer program carrying out the rules doesn't understand Chinese either, and therefore no computer program can understand anything. He goes on to argue about biology being necessary for understanding.

Here's the refutation in still more detail.

Assume the process is a good participant in an intelligent Chinese conversation, i.e. behaves as though it understands Chinese. What is required for that we'll discuss shortly. The so-called Berkeley answer is that the system, consisting of the man and the book of rules, understands Chinese.

Our answer is an elaboration of the Berkeley answer. A computer interprets computer programs, i.e. carries them out instruction by instruction. Indeed a program can interpret other programs, e.g. a Lisp or Java interpreter interprets, i.e. carries out, Lisp or Java programs. We speak of the interpreter as carrying out the Lisp program, although this could be elaborated to saying that the computer carries out the Lisp interpreter which is carrying out the Lisp program step by step.

Indeed a time-shared operating system can carry out many different programs at once, some may be in machine language, others may be in Lisp, C, Fortran or Java. Suppose one of these programs is a Lisp program carrying out an intelligent Chinese conversation with someone at a terminal. Suppose another program is carrying out an intelligent French conversation or a different Chinese conversation with someone at a different terminal. Assume that these conversations are normally considered to require an understanding of Chinese or French. What understands Chinese?

We don't want to say that the computer understands Chinese and French but rather that the respective programs understand Chinese and French respectively. Indeed if we have two Chinese conversation programs, one may understand Chinese well and the other hardly at all.
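A minimal sketch of the distinction being drawn here, assuming nothing about how a real conversation program works: one interpreter loop (standing in for the time-shared computer) carries two toy conversation processes at once, so any "understanding" would belong to the individual processes rather than to the machine running them. The two bots below are placeholders invented purely for illustration.

    # Two toy "conversation processes" time-shared by one interpreter loop.
    # Neither bot is a real conversation program; they only illustrate the
    # difference between the machine running processes and the processes themselves.
    def chinese_bot():
        reply = None
        while True:
            message = yield reply
            reply = "[Chinese process] received: " + message

    def french_bot():
        reply = None
        while True:
            message = yield reply
            reply = "[French process] bien recu : " + message

    processes = {"terminal-1": chinese_bot(), "terminal-2": french_bot()}
    for proc in processes.values():
        next(proc)                      # start each coroutine

    # The single "computer" (this loop) interleaves both conversations.
    for terminal, message in [("terminal-1", "ni hao"), ("terminal-2", "bonjour")]:
        print(processes[terminal].send(message))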

Returning to the man in the room. He can be carrying out a conversation in English or playing chess while he is interpreting the book of rules for a Chinese conversation. Indeed he may have memorized the book of rules and be carrying them out in his head. As with the computer programs, it's the process that understands Chinese well or badly.

Let's consider some practicalities that may help us understand the question better. There are two extreme levels on which the man may be carrying out the Chinese conversation. One level is that of Joseph Weizenbaum's 1965 program ELIZA. It makes sentences by re-arranging and transforming the words in the input sentence. Thus one version, called DOCTOR, and included in the Xemacs editor, replies to "My mother hates me?" with "Why do you say mother hates you". According to Weizenbaum (personal communication), ELIZA requires so little computation that it can be carried out by hand. Thus an ELIZA level Chinese room is entirely feasible.
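To make the ELIZA level concrete, here is a minimal sketch of the keyword-and-transformation style of rule Weizenbaum's program used; the particular patterns and response templates below are invented for illustration and are not Weizenbaum's actual rules.

    import re

    # Illustrative ELIZA-style rules: a keyword pattern plus a response template
    # that re-arranges words taken from the input sentence.
    RULES = [
        (re.compile(r"my (mother|father) (.*)", re.I), "Why do you say your {0} {1}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    ]
    FALLBACK = "Please tell me more."

    def eliza_reply(sentence):
        """Reply by transforming the words of the input; no understanding involved."""
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(eliza_reply("My mother hates me"))  # -> Why do you say your mother hates me?

Each reply is produced by so little computation that, as Weizenbaum noted, rules of this kind can be carried out by hand.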

Does an ELIZA level Chinese room understand Chinese? It depends on what you mean by "understand", but I would prefer to say that a Chinese ELIZA does not understand Chinese. We'll see why.

Now consider a Chinese room that passes the Turing test, i.e. the Chinese interlocutor cannot be sure whether he is conversing with an intelligent fellow Chinese speaker. This is not feasible with a man and a book of rules. In fact it is beyond the present state of the art in artificial intelligence. While the book of rules probably needn't be bigger than an ordinary encyclopedia, I doubt that a human could carry out the rules at better than $10^{-9}$ of the speed required for conversation.

What is required for a Chinese room that passes the Turing test?

  1. A knowledge base of facts about the world, e.g. about 3-dimensional objects and the fact that they fall when unsupported and end up on the floor or ground.
  2. A knowledge base of facts about Chinese life and the Chinese language.
  3. A representation of the conversational purpose of the program.
  4. A program that translates the sentences into some internal form and responds appropriately, given the motivations we have given the program.
  5. A program that translates the output sentences into Chinese, prints the result, and pushes it back under the door.

These requirements can, at least in principle, be implemented in a variety of ways, e.g. by a sequentially operating neural net or by a logic based reasoner. I think the latter approach can do more now and will approach the goal of a human level conversation sooner.

So what is it to understand Chinese?

Understanding Chinese involves being able to translate Chinese sentences into some internal representation and to reason with the internal representation and some knowledge base. Thus understanding "Tom is an airplane pilot." requires being able to correctly answer, "Does Tom know how rotating the control column left affects the ailerons?"
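As a toy illustration of what "translate into an internal representation and reason with a knowledge base" might look like for the pilot example, here is a sketch; the predicates, the single stored fact, and the one inference rule are invented for the example rather than taken from any actual system.

    # A toy knowledge base: one ground fact plus one hand-written inference rule.
    facts = {("occupation", "Tom", "airplane_pilot")}

    def derive(facts):
        """Apply the (hypothetical) rule: airplane pilots know how the control
        column affects the ailerons."""
        derived = set(facts)
        for pred, person, value in facts:
            if pred == "occupation" and value == "airplane_pilot":
                derived.add(("knows", person, "control_column_affects_ailerons"))
        return derived

    def answer(question):
        # The question is assumed to be already translated into internal form.
        return "yes" if question in derive(facts) else "unknown"

    # "Does Tom know how rotating the control column left affects the ailerons?"
    print(answer(("knows", "Tom", "control_column_affects_ailerons")))  # -> yes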

More about understanding is discussed in my Making Robots Conscious of their Mental States.

More Searle arguments

``Once we get out of that confusion, once we escape the clutches of two thousand years of dualism, we can see that consciousness is a biological phenomenon like any other and ultimately our understanding of it is most likely to come through biological investigation'' John Searle - New York Review of Books, letter pp 58-59, 1990 June 14.

My view is that consciousness is an abstract phenomenon, currently best realized in biology, but causal systems of the right structure can also realize it. See Making Robots Conscious of their Mental States.

The discussion of the Chinese Room has remained at an excessively high level on both sides. I propose to discuss what would actually be involved in a set of rules for conducting a conversation in Chinese, independently of whether these rules are to be carried out by a human or a machine.

First we must exclude various forms of cheating that aren't excluded by Searle's formulation of the problem.

1. We need to exclude a system like Weizenbaum's Eliza that merely looks for certain words in the input and makes certain syntactic transformations on each sentence to generate an output sentence. I wouldn't count such a program as understanding Chinese, and a fortiori Searle wouldn't either. The program must respond as though it knew the facts that would be familiar to an educated Chinese.

2. If the rules are to be executed by a human, they must not involve translating what was said into English, e.g. by giving the dictionary entries for the characters. If this were done, the English speaker could use his own understanding of the facts of the world to generate English responses that he then translates into Chinese. The database of facts must not be in English. We also suppose that the human is not allowed to do cryptanalysis to translate the inputs or the database into English.

This eliminates the forms of cheating that I can think of, but I don't guarantee that there aren't others.

How shall we construct our program? Artificial intelligence is a difficult scientific problem, and conceptual advances are required before programs with human level intelligence can be devised. Here are some considerations.

1. In discussing concrete questions of intelligence, it is useful to distinguish between a system's algorithms and its store of facts. While it is possible in principle to consider the facts as built into the algorithm, making the distinction is practically essential for studying both human and machine intelligence. We communicate mainly in facts even when we are trying to tell each other algorithms.

2. The central problem of AI is, in my opinion, achieving goals in the commonsense informatic situation. See my What is Artificial Intelligence? for more on this.

Searle offers four axioms.

1. Brains cause minds.

"Cause" makes me a little nervous. If he only means that the human mind is an abstraction of part of the operation of the brain, I'll agree.

2. Syntax is not sufficient for semantics.

This purported axiom is slippery. Does he just mean that defining a language, whether a natural language, first order logical language, or a programming language, requires defining what the expressions of the language mean? If that's what he means, I agree.

3. Computer programs are entirely defined by their formal, or syntactic structures.

This is ok provided we remember that the programming language has a semantics, and the data structures used by the program must have semantics if the program is to be intelligent.

4. Minds have mental contents; specifically they have semantic contents.

That's ok with the above provisos.

Conclusion 1. No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.

The conclusion doesn't follow from the axioms, not even informally.

I should remark that Searle's Chinese room argument hasn't convinced very many of his fellow philosophers.


In his Scientific American article on the Chinese room Searle makes an interesting mistake, though not a new mistake. He writes that a transcript of the Chinese conversation could equally well represent the score of a chess game or stock market predictions. This will only be true if the Chinese conversation is very short; perhaps it would have to be less than 20 characters - or maybe it's 100 characters.

We have to haggle about what equally well means. We can get a 1-1 correspondence between Chinese dialogs and chess scores by enumerating Chinese dialogs and enumerating chess scores and putting the nth dialog in correspondence with the nth score. This isn't good enough. Both Chinese dialogs and chess scores have meaningful substructures, and the previously described correspondence does not make the substructures correspond. One structure is that of initial segments. The initial segment of a Chinese dialog is meaningful to a Chinese speaker, and an initial segment of a chess score is meaningful to a chess player, and these meanings are related to the meanings of the whole dialog and the whole score respectively.

All this relates to the notion of unicity distance in cryptography. A simple substitution cryptogram that has less than 21 letters is likely to have several interpretations. With more than 21 letters the interpretation is extremely likely to be unique. That's why people can solve cryptograms.
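For reference, the standard estimate behind such figures is Shannon's unicity distance, the ratio of key entropy to per-letter redundancy. For simple substitution over English, with the commonly quoted redundancy of roughly 3.2 bits per letter, it comes out on the same order as the bound mentioned above:

    U \approx \frac{H(K)}{D}, \qquad H(K) = \log_2 26! \approx 88.4 \text{ bits}, \qquad D \approx \log_2 26 - 1.5 \approx 3.2 \text{ bits/letter},

so $U \approx 28$ letters.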

I think there is a mathematical theorem stating that meaningful strings in a structured language have unique interpretations if their lengths exceed some rather small bound. I don't know how to formulate such a theorem.

I don't know whether this mistake of Searle's is related to his Chinese room mistake. It seems to me that Quine's assertions about "the indeterminacy of radical translation" are based on too small examples. However, I may be misunderstanding what Quine was claiming.

===============

Practical applications of Philosophy in Artificial Intelligence 

Karim Oussayef 

Among the sciences, Artificial Intelligence holds a special attraction for philosophers.  A.I. involves using computers to solve problems that seem to require human reasoning.  This includes computer programs that can beat human opponents at games, automatically find and prove theorems, and understand natural language.  Some people in the AI field contend that programs that solve these types of problems have the possibility of not only thinking like humans, but also understanding concepts and becoming conscious.  This viewpoint is called strong AI (a term coined by John Searle in Minds, Brains and Programs).  Many philosophers are concerned with this bold statement and there is no shortage of arguments against the metaphysical possibility of strong AI.  If these philosophical arguments against strong AI are true then there are limits to machine intelligence that cannot be surpassed by better algorithms, faster computers or more clever ideas.
Hilary Putnam in his paper Much Ado About Not Very Much asks: “AI may someday teach us something about how we think, but why are we so exercised about it now?  Perhaps it is the prospect that exercises us, but why do we think now is the time to decide what might in principle be possible?”  The reason we are so exercised about A.I. is that knowing whether true intelligence is a possibility will change the goals of researchers in the field.  If strong AI is not possible then the best we can hope for is a program that acts humanly but doesn't think humanly.  Even this goal is very difficult, and many programs seek to achieve it.  Cycorp (information from Cycorp's website) is a company whose software attempts to mimic human intelligence by creating a huge database of common sense facts.  Their website gives some examples: “Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right side up.”
To illustrate how a fact-based program such as Cycorp's would try to solve a simple problem, let us turn to the Turing test (introduced by Alan Turing's article Computing Machinery and Intelligence in 1950).  Turing reasoned that a computer could prove that it was artificially intelligent by fooling a person into thinking it was another human being.  His test was modeled on this reasoning: a human would type questions to either another human or a computer (he or she wouldn't know which) for a certain amount of time.  If that person couldn't tell at the end of the time which of the two he or she was talking to, the computer would pass the test (and therefore, Turing reasoned, be artificially intelligent).  Let me stress that I am not arguing that the Turing test is a good one for determining if a computer can think; I am simply using it to demonstrate how a program might go about solving a problem.  The fact-based program mentioned above might try to answer the simple question “What is a car?” by supplying the information that was in its code: “A car is a small vehicle with 4 wheels”.  A harder question might have to do with a description of a car object followed by “What am I describing?”  This could be answered by going down a tree of facts as follows: the description is of a vehicle, so search all the objects under the vehicle topic.  It has four wheels; discard the possibility of the motorcycle.  It is light; discard the possibility of the truck.  Conclusion: it must be a car.
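The elimination procedure just described can be sketched in a few lines; the toy fact table and attribute names below are made up for illustration and are of course vastly smaller than a real knowledge base such as Cyc's.

    # Toy "tree of facts" for the vehicle example; the attribute values are invented.
    VEHICLES = {
        "car":        {"wheels": 4, "weight": "light"},
        "motorcycle": {"wheels": 2, "weight": "light"},
        "truck":      {"wheels": 4, "weight": "heavy"},
    }

    def identify(description):
        """Discard every candidate that contradicts the description."""
        candidates = set(VEHICLES)
        for attribute, value in description.items():
            candidates = {v for v in candidates if VEHICLES[v].get(attribute) == value}
        return candidates

    # "It is a vehicle, it has four wheels, it is light."  ->  {'car'}
    print(identify({"wheels": 4, "weight": "light"}))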
A program like this could pass the Turing test if it was given enough data.  However it has many disadvantages.  First, it requires someone to input a vast amount of information manually.  Although the program is capable of making some extensions of the given information, it still needs millions of hard facts.  Cycorp's database has been painstakingly entered using over 600 person-hours of effort since 1984; the list of facts now stands at 3 million (Anthes).  Second, the machine doesn't seem to work like a human: it looks up rules and then gives an answer instead of figuring out what the question means.
Searle’s Chinese room analogy shows why this program isn’t an example of 
strong AI.  Imagine an English speaking person inside of a small room.  This person has 
access to a large rulebook, which is written in English.  Other people outside the room 
can pass notes written in Chinese to him through a small hole in the wall.  Although the 
person inside the small room cannot speak Chinese, he uses the complex rulebook to give 
back an appropriate response to the Chinese writing in Chinese.  Also imagine that this 
rulebook is so well written that the answers the person inside the room gives back are 
indistinguishable from the answers that a native Chinese speaker might give back.  This 
“man in a room” system would be able to carry on a written conversation with a native 
Chinese speaker on the other side of the wall.  In fact the Chinese person might assume 
he was speaking to another person who understands Chinese.  We can plainly see 
however, that the person does not.   
This analogy is disastrous for fact-based AI.  In the same way that the computer 
passes the Turing test by fooling humans into thinking it is another human, the English 
speaker can fool native Chinese speakers into thinking that he understands Chinese.  To 
further explain, the person inside the room is analogous to the computer CPU; they both 
know how to interpret instructions.  The rulebook is analogous to the program; they supply the instructions to obtain the intended result.  The computer programmed with this 
fact-based knowledge does not understand English any more than the English speaker 
understands Chinese.  Both of them are following rules instead of understanding what is 
being asked and responding based on their interpretation.
The defeat of the fact-based program poses problems for strong A.I. supporters.  It 
shows that any program that relies on a pre-made set of rules (no matter how complex)
cannot understand in the same way that a human mind does.  In fact Searle argues: “… in 
the literal sense the programmed computer understands what the car and the adding 
machine understand, namely, exactly nothing” (Searle 511).  However Searle's argument
doesn’t rule out all programs.  A program that learns from scratch, without the use of a 
rulebook or a prefabricated fact database, can understand in the same way that a human 
can.  I will now go about describing such a program. 
To construct the fact-based program we attempted to record facts about the world.  
The learning program takes an orthogonal approach.  It attempts to program the computer 
to learn these facts for itself.  To see how to go about this let us examine how a small 
child learns.  A child comes into the world knowing very little.  She does not know how 
to talk, walk or understand English.  She goes about learning these abilities with three 
tools.  First she has basic goals or needs.  Some of a child’s needs are food, water and 
shelter.  Second she can observe the world.  A child can tell that when she is eating, she is 
getting less hungry.  Finally she can remember what has happened to her.  Let me 
demonstrate how these three tools allow her to learn something.  Imagine that this child is 
hungry.  She observes that when she cries her mother brings her food.  She remembers what has happened to her and finally her need for food causes her to cry again the next 
time she’s hungry.  Her tools have allowed her to learn that crying results in getting food. 
These three tools are the core of the learning program.  However, the goals of a 
computer will differ from the goals of a human.  A computer has no need for food or 
water so they are not appropriate goals.  Instead these goals can be anything that A.I. 
programmers think are important.  Isaac Asimov proposed three such goals (or laws) in his fictional stories (first published in Runaround in 1940):

1. A robot may not injure a human being or, through inaction, allow a
human being to come to harm.  
2. A robot must obey the orders given it by human beings, except where 
such orders would conflict with the First Law.  
3. A robot must protect its own existence, as long as such protection does 
not conflict with the First and Second Laws.  
In short a robot’s goals are human well-being, human will and its own well-being.  These 
goals can be implemented in the form of variables linked to actions that the computer 
might perform.  Whenever the computer does something that accomplishes one of its 
goals it might raise the value of the variables connected with its current state or action.  
Similarly it would lower the values of these action-variables when it did something 
against its goals.  These variables also represent the computer’s memory.  This is where 
the computer remembers what to do the next time it is in a similar situation.  Finally the 
computer needs a console, sensors or some other form of input so it can observe what is 
happening around it. Let me demonstrate how it works with a simple example. 
 Imagine a robot equipped with a camera, a flashlight and wheels.  The robot is put 
in an environment and given the extra goal of reaching a certain spot.  If the robot had never been in this situation before it might have no idea of how to reach the goal in much
the same way that the child does not know how to get food.  So it might begin by doing any number of things.  Perhaps it would turn on its flashlight.  This would not help it reach its goal, so it would try something different.  Maybe it starts driving towards the goal.  The robot would observe that it is accomplishing a goal, so the "going forward" action might get "+1 points" in the "trying to reach an object" context.  Perhaps there is a wall in front of it halfway to the flag.  It runs into the wall and damages itself.  This is bad for the "well-being of self" goal, so the "driving forward" action might get "-1 points" in the "wall in front of me" context.  These point values will help it remember what to do the next time it is trying to get from one point to another.  When it sees a wall in front of it in the future, the robot will see that "driving forward" has fewer points than, say, "driving sideways" and might pick that option.  The fact that it wants to reach its goals will teach the robot through trial and error.  Eventually it will learn how to drive around objects (instead of into them).
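The point-value bookkeeping in this example can be sketched as a small table of (context, action) scores that is nudged up or down by rewards; the contexts, actions and +1/-1 rewards are the ones used above, while everything else (the tie-breaking, the score representation) is an illustrative assumption.

    import random
    from collections import defaultdict

    # Scores for (context, action) pairs; this table is also the robot's "memory".
    scores = defaultdict(float)

    def learn(context, action, reward):
        """Raise or lower the value linked to the action just taken."""
        scores[(context, action)] += reward

    def choose_action(context, actions):
        """Prefer the highest-scoring remembered action; break ties at random."""
        best = max(scores[(context, a)] for a in actions)
        return random.choice([a for a in actions if scores[(context, a)] == best])

    # Episode from the text: driving forward toward the goal is rewarded ...
    learn("trying to reach an object", "going forward", +1)
    # ... but driving forward into a wall damages the robot.
    learn("wall in front of me", "driving forward", -1)
    learn("wall in front of me", "driving sideways", +1)

    print(choose_action("wall in front of me", ["driving forward", "driving sideways"]))
    # -> driving sideways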
 I argue that a robot constructed in this fashion would actually understand how to 
accomplish goals.  To support this belief, let’s see if it does any better with the Chinese 
room example.  Remember that for the fact-based program the person inside the room is 
analogous to the computer CPU and the rulebook is analogous to the program.  However, 
for the learning program there is no rulebook.  The person inside the room is analogous to 
both the CPU and the program.  Instead of people asking questions and having him 
answer back, imagine that the input through the slot in his room is the information he 
receives from the outside world.  At first he has no idea what this input means.  He sends 
random symbols back but after a while he notices a correlation between what he sends out and what he gets back.  He starts to write his own rulebook in his head from this 
information that allows him to translate Chinese input into English.  When he writes back 
he translates the answers that he thought of in English back to Chinese.   
The way the “learning-program person” can communicate in Chinese is 
drastically different than the way the “fact-based person” does.  The “learning-program 
person” learns what the Chinese means by association.  From his knowledge he knows 
the sense of the words.  Some people may point out that he does not actually think in 
Chinese so he must not understand the language.  However, there are many people who 
converse in a non-native tongue.  We cannot claim that these people’s understanding of 
the world is different than our own. 
Searle might respond to this learning-program by saying that the person inside the
Chinese room would simulate the entire learning process and that the learning is not 
internal but external.  This means that the person inside of the room is following 
directions that correspond to learning but he himself is not learning.  But if such a 
program falls victim to the Chinese room, wouldn’t a human brain fall victim as well?  
Let us imagine a modified Chinese room for the human brain.  Instead of the man inside 
of the Chinese room simulating a computer program, he simulates the neurons in 
someone’s brain.  When he receives input, he would keep track of what neurons get 
excited and calculate whether or not they fire.  He would know from his rulebook (a 
compendium of the laws of physics, chemistry and biology that would allow him to 
completely simulate the inner workings of the brain) that when certain neurons fired that 
he should output an answer.  The person simulating the brain doesn’t understand Chinese 
any better than the one simulating a computer program.  Why would one be different than the other?  Searl’s opinion is that “actual human mental phenomena might be dependant 
on actual physical-chemical properties of actual human brains” (Searl 519).  Penrose’s 
“The emperor’s new mind” provides insight as to why this may be the case. 
Penrose mentions many physical processes that are not computable.  He first examines the Mandelbrot set.  The Mandelbrot set is generated by repeatedly applying a simple formula to complex numbers, and the result is plotted on an Argand plane (the complex plane).  Here is where Penrose brings up an important comment: “We might think of using some algorithm for generating the successive digits of an infinite decimal expansion, but it turns out that only a tiny fraction of the possible decimal expansions are obtainable in this way: the computable numbers” (Penrose 648).  In other words, the Mandelbrot set cannot be computed exactly by a computer.  Penrose also mentions quantum mechanical principles.  Tiny sub-atomic particles do not follow the same laws of physics that larger objects do.  The superposition principle states that a particle can be in many different states at the same time.  These states are defined by complex-number coefficients and are thus another example of a physical law that cannot be simulated in a computer.
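For concreteness, the formula Penrose refers to is the iteration $z_{n+1} = z_n^2 + c$ over the complex (Argand) plane: a point $c$ belongs to the Mandelbrot set exactly when the iterates stay bounded. The sketch below approximates membership with a finite iteration cutoff, which is precisely where the non-computability worry enters: a finite cutoff can show that a point escapes, but it can never certify that a borderline point stays bounded forever.

    def in_mandelbrot(c, max_iter=1000):
        """Approximate membership test: iterate z -> z*z + c and watch for escape."""
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:        # once |z| exceeds 2 the orbit is known to escape
                return False
        return True               # "probably inside", given the finite cutoff

    print(in_mandelbrot(-1 + 0j))  # True: the orbit 0, -1, 0, -1, ... stays bounded
    print(in_mandelbrot(1 + 0j))   # False: the orbit 0, 1, 2, 5, 26, ... escapes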
These two examples may show why the Chinese room cannot simulate the human 
brain.  When the person inside of the room was following the directions for simulating a 
computer the steps he took were explained by a well-defined algorithm.  This is because 
computers are Turing machines, a concept that was formalized elegantly by Alan Turing.  
Any Turing machine can be thought of as a device that reads and writes from an
infinitely long tape.  On the tape is a sequence of partitions that are either blank or 
marked.  The device operates by moving either left or right on the tape.  It can change the 
current section to either “marked” or “blank” and read its current state.  It does this by following a finite set of instructions.  This simple abstraction is enough to run any 
computer program no matter how complex.  It is easy to think of the human inside of the 
Chinese room controlling a Turing machine. 
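A minimal sketch of the read/write/move abstraction just described; the two-state instruction table at the bottom is made up purely for illustration.

    from collections import defaultdict

    def run_turing_machine(rules, start_state, halt_state, steps=100):
        """rules maps (state, symbol) to (symbol_to_write, move, next_state),
        where move is -1 (left) or +1 (right)."""
        tape = defaultdict(lambda: "blank")   # unbounded tape, blank by default
        head, state = 0, start_state
        for _ in range(steps):
            if state == halt_state:
                break
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += move
        return {pos: sym for pos, sym in tape.items() if sym == "marked"}

    # Illustrative machine: mark two cells, then halt.
    rules = {
        ("A", "blank"): ("marked", +1, "B"),
        ("B", "blank"): ("marked", +1, "HALT"),
    }
    print(run_turing_machine(rules, "A", "HALT"))  # -> {0: 'marked', 1: 'marked'}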
The brain may, however, rely on non-algorithmic processes that the person inside the Chinese room will not be able to follow.  If, for example, neuron X would fire only because of a certain arrangement of subatomic particles, there would be no hard-and-fast directions for what the Chinese-room person should do.  Perhaps the next instruction has only a random chance of occurring; if so, the person will be confused and unable to complete the instruction.  It is important to find out whether the brain makes use of these processes, because if it does, it would explain why the Chinese room works for computers but not for the human brain.
In the chapter “Where lies the physics of the mind,” Penrose argues that the brain does indeed make use of non-computable phenomena.  He contends that expressions that deal with consciousness, such as “understanding” and “judgment”, and those that do not, such as “mindlessly” and “automatically”, suggest a distinction between two parts of the brain: algorithmic and non-algorithmic (Penrose 653).  Penrose brings up Godel's incompleteness theorem as an example of how the brain makes use of its non-algorithmic part.  Godel encoded first order predicate calculus into ordinary arithmetic using prime numbers.  By breaking down F.O.P.C. in this way, he could write out arithmetic formulas that would equate to either true or false.  He used this trick to demonstrate that there are some statements that can be neither proved nor disproved.  One such sentence would be: “A computer which knows the answer to all questions will never prove that this sentence is true” (adapted from Denton).  Human beings know that this sentence is true without actually going through the process of proving it.  If, however, a computer attempts to assess the validity of the statement through a formal proof, it will be confused because the statement remains true only until the proof is complete.
Penrose argues that these types of sentences, which humans can reason about, 
would be impossible for a computer to understand.  What Penrose doesn’t notice is that 
even if some statements could not be proved or disproved using FOPC logic, there are 
other ways for computers to approach these problems.  There is no reason that computers 
couldn’t use higher logic to solve puzzles just like a human does.  Penrose’s goal of 
proving strong A.I. impossible fails because he doesn't make the link between the non-algorithmic/non-computable physical phenomena and the human brain.  If in the future
neuroscientists discovered that the brain relies on such processes then his argument 
would hold more weight.  Still, it would be possible for a program to simulate the 
workings of the brain without simulating the actual physical processes.   
In fact, computers and human brains excel at different tasks, a fact which makes 
literal simulations wasteful.  A computer can remember things for an infinite amount of 
time (assuming the file isn’t deleted).  It can also compute complicated mathematical 
expressions in milliseconds.  Even a human with the best eidetic memory or an 
extraordinary mathematical talent couldn’t rival a computer in these tasks.  On the other 
hand, computers have a very hard time recognizing objects such as human faces.  In dark 
or light, different clothes or dyed hair, we can still recognize our best friend.  Similarly 
the human ability to understand language is amazing.  We can utter sentences that we 
have never said or heard before and understand a variety of accents and slang.  These “human algorithms”, which require almost no effort for us, are very difficult for a
computer.  To throw away a computer's advantages in mathematics, memory and many other tasks seems a waste.  Yet attempting to create a model of human neurons seems to do exactly that.  Instead, it would be better to attempt to simulate the way a human brain solves problems rather than the actual physical processes behind human thinking.
In this paper I have shown how various arguments against strong A.I. interact.  These arguments do not show that it is impossible, but they do restrict what kind of programs can be thought of as “truly intelligent”.  Searle's Chinese room argument shows that fact-based programs are incapable of understanding things in the same way as humans do.  It also excludes programs that have all their information hard coded in.  Learning is essential to programs that wish to support strong A.I. because the information has to come from the program, not from the programmer.  Penrose has suggested that the brain is unable to be simulated by a computer.  If this is true then computers must be a simulation of how the brain thinks, not of how the brain works.  Finally, Godel's incompleteness theorem shows that programs must use higher reasoning to achieve their goals.  Philosophy is often criticized for being unconcerned with real-world implications, but in this case it has shown the best direction for A.I. researchers to explore.

References
Books
Clancey, William J. 1997. Situated Cognition. Cambridge, UK: Cambridge University Press.
Dreyfus, Hubert. 1992. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
Kim, Jaegwon. 1998. Philosophy of Mind. Boulder, Colorado: Westview Press Inc.
Penrose, Roger. 1989. The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford: Oxford University Press.
Russell, Stuart and Norvig, Peter. 1995. Artificial Intelligence: A Modern Approach.
Smith, Brian Cantwell. 1996. On the Origin of Objects. Cambridge, MA: MIT Press/Bradford Books.

Papers
Dennett, Daniel C. 1988. When Philosophers Encounter Artificial Intelligence. The Artificial Intelligence Debate: False Starts, Real Foundations: 283-296.
Fodor, J.A. 1980. Searle on What Only Brains Can Do. The Nature of Mind: 520.
Fodor, J.A. 1998. After-thoughts: Yin and Yang in the Chinese Room. The Nature of Mind: 524.
LaForte, Geoffrey, Patrick J. Hayes, and Kenneth M. Ford. 1998. Why Godel's Theorem Cannot Refute Computationalism. Artificial Intelligence: 211-264.
McCarthy, John. 1988. Mathematical Logic in Artificial Intelligence. The Artificial Intelligence Debate: False Starts, Real Foundations: 297-311.
Putnam, Hilary. 1988. Much Ado About Not Very Much. The Artificial Intelligence Debate: False Starts, Real Foundations: 269-282.
Sokolowski, Robert. 1988. Natural and Artificial Intelligence. The Artificial Intelligence Debate: False Starts, Real Foundations: 45-64.
Searle, John R. 1980. Minds, Brains and Programs. The Nature of Mind: 509-519.
Searle, John R. 1980. Author's response. The Nature of Mind: 521-523.
Searle, John R. 1998. Yin and Yang Strike Out. The Nature of Mind: 525.
Turing, A.M. 1950. Computing Machinery and Intelligence. Mind, 59: 433-460.

Journals
Anthes, Gary H. Computerizing Common Sense. Computerworld, 4/8/02.

Electronic
Cycorp: Company Overview. http://www.cyc.com/overview.html
Denton, William. 2000. Godel's Incompleteness Theorem. http://www.miskatonic.org/godel.html


Comments
Author: Rabbit  Posted: 2012-08-14 17:29:09
Laoji,
I said that if artificial intelligence were ever achieved, it would amount to creating an artificial human, because the essence of a human being is thinking, and artificial intelligence could only come from thinking.

Judge for yourself.

The exchange with Laoga raised other questions; sorry for going off topic.
Author: 老几 (Laoji)  Posted: 2012-08-14 17:08:45
How did the rabbit, tangling with Laoga, drift all the way up into the sky? Are we keeping to the topic or not?
It seems philosophers lack an error-correction mechanism. Let Laoji sort out your "proof in twelve steps":

1. Artificial intelligence = man-made wisdom
The premise is flawed and not entirely right. Wisdom is the substance; intelligence is the function. For example, you use wisdom to find a girlfriend and intelligence once you are in bed; get the two reversed and you are either a hooligan or get kicked out of bed :)

2. The only source of wisdom is thinking - doubtful
3. Not all thinking is wisdom; only the crystallization of thinking is wisdom - doubtful
4. To know whether wisdom is attainable, one must first know whether thinking is attainable - doubtful
5. To know whether thinking is attainable, one must first know what the essence of thinking is - doubtful

6. The essence of thinking is that thinking has an infinite/absolute nature; that is, thinking has no boundaries, thinking is infinite (according to the philosophy of Instances [范例哲学], the infinite/absolute nature of thinking has also been proven)
"Thinking has an infinite/absolute nature" is at most a "characteristic" of thinking, and the proof of this characteristic is open to question.
"The essence of thinking is that thinking has an infinite/absolute nature" - lacks evidence.

7. Humans must first "artificially manufacture" thinking (natural reproduction aside), and only then can they consider how to produce "wisdom" out of "man-made thinking".
8. If humans create the function of "thinking", no matter what material it is realized in - artificial flesh, fibre, silicon chips, electronic components, and so on - then this "artificial human" with the human function of thinking would be "basically" the same as the rest of us, and we would regard them as "our kind".
Here the concept of "artificial intelligence" has been quietly swapped for "artificial human". Are philosophers really so insensitive to a switch of concepts?

9. All human laws, morality, politics and other rules of civilization would presumably apply equally to these "artificial humans".

10. Humans could not treat these "artificial humans" the way they treat computers, as machines.
That is the domain of ethics - a discussion beyond the scope of the question.

11. Conclusion: "artificial intelligence" can never be realized, and is therefore a pseudo-problem.
An out-of-scope discussion leads to a wrong result.

12. So the closest thing humans can do is "man-made function" (to be proven), not "artificial intelligence".
The conclusion is highly unreliable.
QED.
Respectfully submitted for discussion.
Author: Rabbit  Posted: 2012-08-14 15:38:20
Laoji,
I don't care who goes home; I only care whether I am right.

In this twelve-point proof the key is the nature of thinking. If anyone can prove that thinking is finite, then I am wrong. Otherwise, you had better listen to the rabbit.
Author: Rabbit  Posted: 2012-08-14 15:34:31
Laoga,
You watched the video. You said it is wrong. Can you explain why?

Thanks
Author: 老几 (Laoji)  Posted: 2012-08-14 15:32:23
What is the purpose of the rabbit's twelve-point "proof"? To send everyone working on AI home? Aren't you afraid someone will smash your windows in the middle of the night?

I encourage the rabbit to keep thinking with philosophical methods, to supply AI with the "inspiration" Guanyun mentioned and the "methodology" Xi'an mentioned.
One piece of advice, though: the moment you find yourself slipping into Chomsky-style agnosticism, you should turn around and check where things went wrong. Speaking of Chomsky, Laoji has always wondered how he became the "number one public intellectual". I don't get it.
Author: 嘎拉哈 (Laoga)  Posted: 2012-08-14 15:08:44
You mean this sentence?

"Before you look at Searle's own answer, think about it yourself: what conclusion do you reach? I expect you will answer 'yes,' like most workers in artificial intelligence. If my guess is wrong, please tell me your view and explain why."

I already answered No:

==== John's experiment presupposes a contest of intelligence between a human and a machine under the same background knowledge. Therefore John must not know Chinese, otherwise the experiment is meaningless.

I just watched the video you linked; the explanation in that video is wrong.
Author: bunny2  Posted: 2012-08-14 14:41:26
Laoga,
Take a look at what is appended below the original post and tell me your comments, please.
Author: Rabbit  Posted: 2012-08-14 14:37:38
Here is my old favorite slap on relativity.

================
http://www.youtube.com/v/SWmlimH7laY?version=3&hl=en_US&rel=0
Author: 嘎拉哈 (Laoga)  Posted: 2012-08-14 13:13:38
An explanation of the Sagnac effect:

Because the light source and the receiver rotate with the same angular velocity, the relative linear velocity of each source with respect to the receiver is zero for both the forward and the backward beam. Every term in relativity involving v/c drops out. So to explain the Sagnac effect, apart from the requirement that, as seen by the receiver, the speed of each beam relative to its source must be c rather than c+v or c-v, relativity is not needed at all; everything reduces to classical physics. From the classical point of view, the effect of the rotation is equivalent to two runners of equal speed covering different distances. The result is necessarily a fixed interference phase corresponding to a given rotation speed.
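For reference, the standard first-order expression for the Sagnac phase shift is consistent with that last point: for a loop enclosing area $A$ and rotating at angular velocity $\Omega$, light of wavelength $\lambda$ acquires a phase difference

    \Delta\phi = \frac{8\pi A\,\Omega}{\lambda c},

which depends only on the rotation rate, not on any linear velocity between source and receiver.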
Author: stinger  Posted: 2012-08-14 10:13:25
Laoga,

I'm sure you didn't learn this on the road to the Western Paradise. (In the Cave of Silken Webs?)

Anyway, please explain "the Sagnac effect is exactly the same thing as the spectral redshift of distant galaxies". I know something about both, but I don't see the resemblance.
Author: stinger  Posted: 2012-08-14 10:05:07
WOW, WOW, I see that Xi'an is giving me "half" his support, right? (Comrade Laoxi has always stood with the people and against the rabbit.)

I don't know the history of AI, but I sense that you are right. AI without a philosophical foundation is like playing the stock market without ever producing a tycoon - lacking philosophical roots, it is shallow and will not go far.

Laoga,
Who would have thought that such a big rightist has Marxist-Leninist philosophy in his head!

This "idealism" label of yours - did the Communist Party drill it into you? Have you ever heard of a Western scientist or philosopher who describes his own philosophical understanding as an "only based on my thinking"-ism?

Wake up.
Author: 嘎拉哈 (Laoga)  Posted: 2012-08-14 09:54:36
With reasonably clear physical concepts, one can see that the Sagnac effect does not contradict relativity.

1. The Sagnac effect is exactly the same thing as the spectral redshift of distant galaxies;

2. Suppose a spaceship is flying towards the Earth at 0.5c, and you are taking a phone call from the ship. You will find that (A) the radio waves arriving from the ship travel at c, not 1.5c; and (B) the speaker's rate (pitch) is raised by a factor of 1.5.
Author: 紫荆棘鸟 (Zijing)  Posted: 2012-08-14 09:50:59
Oh, Laoji wrote a playful take on Hume? I'll go read it in a moment.
Author: 西岸 (Xi'an)  Posted: 2012-08-14 09:46:27
The significance of philosophy lies in generating perspectives from which to understand things, and its output is methodology, so there is nothing mysterious about it.
If you insist on something as concrete as car design, then even from a purely engineering standpoint there are questions such as "under what circumstances is a design a failure", and questions of that kind can be considered from a philosophical angle, thereby defining the boundary conditions of the design, i.e. setting the starting point of a design - which is the idea of methodology.
In scientific research, how to understand the nature of the object being studied is the most basic problem to be solved, and that is a matter of grasping the nature of things - a topic within philosophy. Otherwise something like string theory, which basically expresses a philosophical view of the world, would never have appeared.
As for AI, it has lacked support from philosophy ever since its research began, and that is one reason it has walked into a dead end.
Author: 嘎拉哈 (Laoga)  Posted: 2012-08-14 09:35:28
Now the rabbit's philosophical position is finally clear - thoroughgoing idealism. Not only that, the rabbit has picked up almost everything that contemporary scientific understanding has discarded. For example:

1. The infinity and absoluteness of thinking - affirming the existence of an absolute spirit, which amounts to affirming a creationist account of intelligence.

2. John [Searle]'s view that intelligence cannot be realized by software alone refers to certain non-logical properties of thinking beyond logic; it is based on findings in biology, not on any "infinity and absoluteness of thinking". One is empirical, the other idealist - an essential difference.

3. As for the human tendency towards morality (I already have some ideas, but I won't state them now), I believe an answer will eventually be found in biology and the theory of evolution, without assuming a mysterious "absolute or infinite" spirit.
Author: stinger  Posted: 2012-08-14 09:32:31
Humanity's problem is taking what has already been done for granted.

People ate for thousands of years; only in the last 100 years did they work out how digestion works.

People used numbers for thousands of years; Frege was the first to discover what the foundation of number is (the beginning of set theory).

People used language for thousands of years; Frege was the first to discover the referential function of language, and founded mathematical logic.

People's heads have been thinking for thousands of years; the rabbit was the first to discover what the essence of thinking means.

People have been doing artificial intelligence for decades, assuming the functions of thinking could be had for the asking; the rabbit was the first to point out the distinction between man-made function and artificial intelligence.

And so on.

Below is Laoga's:

"Suppose we have found the position of the center of the universe, and we know the velocity of our solar system relative to that center, say 0.5c. To prove that this velocity itself affects spacetime (an effect of the kind relativity describes), the best way would be to perform an experiment like the Michelson-Morley experiment aimed at the center of the universe. If the result were positive, the rabbit would win a Nobel Prize on the spot."

Have you heard of "the Sagnac effect challenges the principle of relativity"?
Author: Rabbit  Posted: 2012-08-14 07:50:47
The proof in twelve steps:

1. Artificial intelligence = man-made wisdom

2. The only source of wisdom is thinking

3. Not all thinking is wisdom; only the crystallization of thinking is wisdom

4. To know whether wisdom is attainable, one must first know whether thinking is attainable

5. To know whether thinking is attainable, one must first know what the essence of thinking is

6. The essence of thinking is that thinking has an infinite/absolute nature; that is, thinking has no boundaries, thinking is infinite (according to the philosophy of Instances [范例哲学], the infinite/absolute nature of thinking has also been proven)

7. Humans must first "artificially manufacture" thinking (natural reproduction aside), and only then can they consider how to produce "wisdom" out of "man-made thinking"

8. If humans create the function of "thinking", no matter what material it is realized in - artificial flesh, fibre, silicon chips, electronic components, and so on - then this "artificial human" with the human function of thinking would be "basically" the same as the rest of us, and we would regard them as "our kind"

9. All human laws, morality, politics and other rules of civilization would presumably apply equally to these "artificial humans"

10. Humans could not treat these "artificial humans" the way they treat computers, as machines

11. Conclusion: "artificial intelligence" can never be realized, and is therefore a pseudo-problem

12. So the closest thing humans can do is "man-made function" (to be proven), not "artificial intelligence"

End of proof.
Author: stinger  Posted: 2012-08-14 07:23:35
OK, so Second Brother and the lady knight have been conspiring against me behind my back! We'll settle accounts when we're back from the Western Paradise; for now "harmony" overrides everything.

Let me first state Professor John Searle's view.

He holds that ones and zeros cannot count as artificial intelligence. Then what would count? He is not sure:

"Understanding does not come from ones and zeros or symbols per se. Instead: it requires certain kinds of wetware or hardware or meat."

============================
The rabbit's view:

The first half of Professor Searle's statement, "Understanding does not come from ones and zeros or symbols per se", is right. The second half is wrong.

Because "artificial intelligence" is itself a pseudo-problem.

The very term "artificial intelligence" is wrong. There is only "man-made function"; there is no "artificial intelligence".

The "intelligence" here I take to mean "wisdom", the essence of human thinking.

Why?
Author: 老几 (Laoji)  Posted: 2012-08-14 07:00:16
"Why does the rabbit always argue with people until he's red in the face? Here, you should listen to Laoji's opinion; it ought to be fairly reliable."
The thorn-bird knight really does have insight, haha!
Laoji is blushing too, because he stole Zijing's playful wuxia style for a tongue-in-cheek piece on Hume - my apologies!

"Do you think Deep Blue can learn on its own? Can it correct its mistakes? I'd think so; surely IBM wouldn't have failed to think of that?"
I once used AI for flame monitoring, which counted as leading-edge technology at the time, so I know a little about it. Up until a couple of years ago, whatever name you give it, AI was still mainly stuck at the stage of pattern recognition - the kind of method the CIA used to verify whether it really was bin Laden. In essence it is matching against photographs; only the algorithms differ. "Do you think Deep Blue can learn on its own?" I doubt it can in the true sense, i.e. through "conscious learning". It is not a question of failing to think of it; the methods themselves need a breakthrough. Laoji is an amateur expert at Go and can imagine how such a program would be written: essentially you enter all kinds of joseki and their variations in advance, plus some logic and rule-based judgment; the so-called self-learning and error correction would still mainly have to be done by the programmers. I haven't heard of any big breakthrough in the last two years. I'm putting it too simply - the actual computation is still very complex. My feeling is that, methodologically, things like neural networks still await a breakthrough.
The rabbit says this is connected with philosophy - let's hear it.
Author: stinger  Posted: 2012-08-14 05:46:35
Big Brother being willing to sit down (while secretly sharpening his knife) doesn't mean Second Brother is. Let's wait a little longer.
Author: 嘎拉哈 (Laoga)  Posted: 2012-08-14 05:38:11
I've laid out everything I know about artificial intelligence. Now I'm just waiting for the rabbit to dot the dragon's eyes with the finishing stroke. I'd like to know how philosophy has performed great deeds on this problem. What I'm afraid of, though, is that once the rabbit adds this stroke, the thing will look even more like a snake.

(I agree that artificial intelligence involves many interesting problems of logic and philosophy; there are quite a few such articles in Scientific American.)
Author: stinger  Posted: 2012-08-14 03:22:51
Laoga,
Your "buy-and-sell-on-the-spot" criterion doesn't seem all that classic. If a machine can learn on its own, that already covers it. Your other point about "consciousness" seems to suggest that if we cut out half a human brain and installed it, that would do the trick?
Author: stinger  Posted: 2012-08-14 03:10:20
Welcome, Guanyun and Zijiniao!

Laoji,
Do you think Deep Blue can learn on its own? Can it correct its mistakes? I'd think so; surely IBM wouldn't have failed to think of that?
Author: 紫荆棘鸟 (Zijing)  Posted: 2012-08-13 21:41:00
Why does the rabbit always argue with people until he's red in the face? Here, you should listen to Laoji's opinion; it ought to be fairly reliable.
Author: 嘎拉哈 (Laoga)  Posted: 2012-08-13 21:25:22
In my view, artificial intelligence means a machine being able to simulate the most important property of the human brain, namely consciousness.

The learning Laoji mentions is one necessary feature. But besides "knowledge learning", artificial intelligence must also have the ability to create new algorithms on the fly. That is what I mean by a self-updating image. Concretely, a billion object-oriented subroutines are no match for the ability to program and compile on the fly - artificial intelligence must be able to buy and sell on the spot.

Finally, a word on combining "soft" and "hard". The "hard" here does not refer to hardware at all, but to certain properties of the cerebral cortex. For example, trauma or transplant surgery can radically change a person's personality, and consciousness even seems to be something that can be inoculated or transplanted. This suggests that, besides the nervous system, there is some other "substance" that acts directly on consciousness. This property of consciousness is entirely independent of the brain's "software system".
Author: 老几 (Laoji)  Posted: 2012-08-13 20:26:30
Laoji's knowledge may have gaps; his logic does not.
"You originally said playing chess isn't AI, and now you say it is?"
Laoji never said playing chess isn't AI, because I know the people who do AI call it AI. But Laoji will not accept that it is AI merely because the people who do AI call it AI. Since every intelligent animal can learn on its own, Laoji's AI must be able to learn on its own. If you want a machine to compete with a human, it must first be able to learn by itself; otherwise you are competing with the person who wrote the code - human against human, not machine against human.
Without the ability to learn, however advanced a machine is, it is still a machine, and at bottom it cannot even match a rabbit (because a rabbit can learn). If the rabbit lives long enough, it will eventually surpass a machine rabbit whose functions start out above its own!
Author: 兰冠云 (Lan Guanyun)  Posted: 2012-08-13 20:23:46
You three gentlemen are having a fine time with this discussion; I missed several of the earlier instalments.

Let me just add one sentence: philosophy cannot provide guidance for concrete scientific research, but it can provide inspiration. Metaphysical speculation is, to a considerable extent, the mind roaming beyond the known world. Such speculation may or may not accord with reality - rather like a hypothesis in scientific research, whose inspiration may come from philosophy. Of course, I am scientifically illiterate and cannot think of a concrete example right now.
Author: stinger  Posted: 2012-08-13 19:58:04
Laoji,
Your logic really does have a problem now. You originally said playing chess isn't AI, and now you say it is?
Author: 老几 (Laoji)  Posted: 2012-08-13 19:40:24
"Aren't you treating an individual capability of an animal as artificial intelligence? Like a dog's nose being keener than a human's?"
Isn't playing chess also a contest of an "individual capability"?
Author: Rabbit  Posted: 2012-08-13 18:40:00
Laoji,
"Self-learning", whatever it is, whether in a machine or in an animal, is not artificial intelligence. That is my point. If some animal had higher intelligence than humans, wouldn't humans have been wiped out by animals? Aren't you treating an individual capability of an animal as artificial intelligence? Like a dog's nose being keener than a human's?