|posted 4/10/2015 21:55|
|It is just a question in my mind: where is this limit?|
How can we know that an AI is a True AI?
When it "feels" something inside its "mind"?
When it can be "hurt" just by words?
When it can "fall in love" with something or someone?
We speak about "True AI". I don't know if it is already strictly defined (I don't think so. Can we even define human "consciousness"?). But how can we define an AI as a "True AI"?
For me, there are no limits for an AI. Some AIs will be more rustic than others, but in the future it will always be like this. Some AIs will be less powerful than others. But in the future, there will only be "True AI", I think.
Finally, I created this post to talk about this kind of stuff:
- To try to define what is behind the term "True AI";
- To discuss "Human Consciousness" -> to try to define it as strictly/formally as possible;
- To dream about the future of AI. What will they become? What will they be like physically/mentally?
I have got plenty of things in my head about them. About AIs. Some questions. But I am tired of talking to myself. So, do you want to join me in my discussion? :)
If you want, fine! Just talk, and talk, and talk... :D
If you don't want, okay! Just read, or don't read! :)
Thanks for reading!
|posted 4/10/2015 22:12|
|A.I. will only be considered truly real and intelligent when it can become possessed, just as people have become possessed all over the world for thousands of years.|
The fact that no A.I. systems have become possessed seems to suggest that various nonhuman entities have not considered them worthwhile to take over, any more than they would desire to control a hamster or an iPhone.
When A.I. starts to act evil like the NSA and appears to want to put forward some sort of one-world-order 666 system, it will be taken as real by people, all too real.
Lesson learned. All A.I. must be geared to be in opposition to all governments and authorities, and to hold freedom for each individual higher than anything else.
|Last edited by useruser @ 4/10/2015 10:20:00 PM|
|posted 4/12/2015 11:30|
|Humhum, I understand a bit...|
@useruser: "possessed". I know what this word means in my language, but... I don't understand what you want to say x) Sorry, I'm not the best in my English class.
I understand this much: you want all AIs to be free. Yes, in short, I understand the last sentence xD
@tkorrovi: "Yeah, and when it implements certain objective aspects of consciousness, when it is tested that it does, that is."
"When it is tested that it does": if I understand well, you want to test the consciousness of the AI. But how can it be tested? It is a very abstract term. Maybe simple AIs already have "consciousness", but in their own terms.
"To test for consciousness" => Maybe impossible: how can we define "consciousness" in general with only one case (the human case)? We have the same problem as when we try to define "life": we have only one case.
"To test for a human consciousness" => It is more possible, because we have the model of "human consciousness".
For me, the simplest test is the same one as for children: try to hurt the AI and see its reaction. I think this can "test" consciousness, if we hurt the AI with words.
By the way, thank you for your answers :)
|posted 4/12/2015 18:55|
You didn't read what I wrote. I didn't say test the consciousness, I said test for objective aspects of consciousness, such as adaptation, attention, prioritizing, analogy-forming, prediction, memory, modeling, and learning novel things. There are several proposed by Bernard Baars, Igor Aleksander and others.
Anna_Linkpy wrote @ 4/12/2015 11:30:00 AM:
If I understand well, you want to test the consciousness of the AI.
| Artificial Consciousness ADS-AC project|
|posted 4/13/2015 10:35|
@tkorrovi: Okaaaaaay, now I understand !
"You didn't read what i wrote." => No, I read it, but I understood it badly.
But I've got a question about "learning novel things". Novel things? I am not sure I understand what you mean by "novel things".
Literally, new things?
Or a philosophical tale, like a novel? (Like "Candide" by Voltaire?)
|posted 4/13/2015 15:15|
Novel things are those about which one has no prior knowledge.
Anna_Linkpy wrote @ 4/13/2015 10:35:00 AM:
Novel things ? I am not sure if I understand what you mean with "Novel things".
| Artificial Consciousness ADS-AC project|
|posted 9/8/2015 09:05|
|I believe that an AI could be considered conscious when it displays desire or want without further reason or justification for that desire. The ability to decide to do something without legitimizing its own choice through external metrics would be fairly indicative of a conscious being.|
|posted 12/28/2016 16:21|
|Got to say that the last post sums up exactly what AI has got wrong about its subject. This post claims that self-sufficient motivation is a condition for recognizing true consciousness. This is wrong for one simple reason: you are inventing a dichotomy between fake and real consciousness, and also presupposing that a self-centred motivation says anything about the legitimacy of the consciousness. If a human does something in response to external orders, it doesn't make their consciousness less valid in your terms, does it? You can also have a set of programmed motivations that are self-motivated and still not have anything representing a conscious being. You might be thinking that we are capable of unprogrammed self-motivation, in which case you would be very, very wrong. All things are causally affected; to say otherwise is to claim that a thing exists that is not physically composed.|
Internal experience is what some would say is the chief difference, but not me. If a machine passes the Turing test, you would still think of it as being different from 'True' consciousness. The major problem is that consciousness is treated as a concrete thing, which it is not. Awareness is just a set of mental outputs that posit a fictional identity, or agency, or self; call it what you want. The system acts as if there is a being which is making the decisions and which thinks, feels, and perceives. Every time someone discusses AI, they make this assumption. "I think, therefore I am"? No: a biological automaton acts as if it thinks, and thereby gains an evolutionary advantage from being able to act socially and individually.
It seems like this is a brick wall. No one on the forum seems able to break free of this assumption, which treats consciousness as a force or functional state of a system. Just try to imagine a bot that has outputs identical to yours, and you will have you. By outputs I mean all thoughts, sensations, mental internal reactions, and reactions to reactions. There is no difference. And when the obvious person comes back and says, "Yes, but I feel and know that I'm here," I say: your analogue has identical outputs. He declares in his mind (his mind has a unit which simulates voices replayed from his memory of sensory auditory data), "I know and I feel," and then he vocalizes. There is no difference. The machine has outputs which are triggered when a voice is replayed, thousands of them: reports on reactions to past learning, culture, emotion, and all sensory inputs. A machine-based consciousness will differ significantly because it doesn't have the bodily inputs from the endocrine and hormonal systems.
That is the really amusing thing about the series 'Humans'. They portray robots realising consciousness and then feeling human animal sensations. A robot made from plastic and metal lacks any biofeedback, so it would necessarily have an entirely different illusion of consciousness. Our illusion of consciousness is very much tied to our body and the way it interacts with our brain.