|posted 1/19/2010 20:02|
|"Beyond that, can we really say intelligence is anything more than the ability to evaluate sensory input and make decisions on what the most appropriate response or reaction to that input is?"|
In a post I was browsing down below, I came across this interesting statement. I fully agree with angstperpetual's comment, but for me it brought up another question. What is sensory input?
Quite frankly, it is everything. "I think, therefore I am" is true only because I THINK. Hal, at its most basic definition, doesn't truly think, because it is not exposed to remotely as many inputs as a human. Hal has one sensory 'organ', and only gets input from THAT on a very sparse basis. Twenty-four times a second we take a picture of our environment and process it. Most of us hear thousands, hundreds of thousands maybe, of words a day. We have senses of touch, smell, and taste. We process a countless number of inputs a day. And when we do? Our brains are capable of engineering complex solutions to complex problems because we not only recognize simple patterns, like Hal, but because we can recognize patterns of patterns. There is NO current substitute for human sentience. Hal is impressive tech, but it uses only one of the many inputs that are required for true thought. Don't get me wrong; as time passes, we WILL see sentient AI. But as for now: the leading AI on the nursery has a pitiful 370003 inputs as of this post. I am quite certain that I receive many more inputs than that in a day.
What is sentience? It IS how we react to our surroundings (inputs), but it requires that we receive MANY more inputs and have MANY more reactions than Hal could ever hope to achieve. It also requires us to create our own inputs. We humans imagine. We dream. We create our own worlds and lives. We think on things for longer than Hal's one brief analysis of what rules to create and apply. AI is possible, but Hal is not Artificial Intelligence; it is Imitated Intelligence. Hal is first-degree intelligence: he can learn. We can learn how to learn. PERHAPS it is possible to teach Hal a new dimension of thought, but it would be simpler to start from scratch.
|posted 1/20/2010 06:41|
|I don't know if you are asking a question, making a point, or demanding something.|
My context sensitivity has failed yet again...
|posted 1/20/2010 16:05|
|Just making a point... seeing what everyone else is thinking on the subject. I've been getting sick of seeing posts of "Ooh! It's alive!"|
|posted 1/21/2010 04:00|
|I agree that chatbots are not where true artificial intelligence is at or will be found, but I'm not sure that anyone here thinks that.|
As for being impressed with chatbots? They certainly are an emerging technology with some useful applications. If people value them for what they are, then that is OK, I think. If people overvalue them, then I can live with that.
I think you make a very good point that flexibility and metaprogramming (software controlling software, as when you mention software systems that can reflect on their own processing) will be a key in developing artificial intelligence.
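To make the "software controlling software" idea concrete, here is a toy sketch (in Python; the names `Bot` and `teach` are invented for illustration, not any real chatbot API) of a program that rewrites one of its own methods at runtime:

```python
# Toy sketch of metaprogramming: the running program replaces one of
# its own methods, changing its future behavior. All names invented.

class Bot:
    def respond(self, text):
        return "I don't know that yet."

def teach(bot, trigger, response):
    # Wrap the bot's current respond method in a new closure and
    # install it on the instance: the program has just rewritten
    # part of its own behavior while running.
    previous = bot.respond
    def respond(text):
        if text.lower() == trigger:
            return response
        return previous(text)
    bot.respond = respond

hal = Bot()
print(hal.respond("hello"))      # I don't know that yet.
teach(hal, "hello", "Hi there!")
print(hal.respond("Hello"))      # Hi there!
```

This is of course a trivial degree of self-modification, but it is the same direction of travel: code that treats its own behavior as data it can inspect and change.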
|posted 1/29/2010 18:35|
|I think that HAL is an excellent method for the rudimentary "recording" of another person's intellect or the "imagined" intellect of a fictional personality, in much the same way you could come to know an author by reading a book and studying his characters or subjects. Except with HAL you can interact in natural language with the character itself.|
In its ultimate form, imagine talking to an amalgam of Einstein's personality embedded in an advanced HAL. Is it really intelligence? At the point at which you can converse with a HAL/Einstein and get the same responses you would have expected to receive from the actual Einstein... who will care if it's really intelligence?
Once perfected, such a conversational personality could be combined with various expert systems or executable functions to create a natural language interface for a computer system like the computers that appear on Star Trek's Enterprise.
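A minimal sketch of that Star Trek-style interface idea, assuming a keyword-dispatch design (every name here, `interface`, `ship_status`, `compute`, is invented for illustration; a real system would use far better language understanding):

```python
# Hypothetical natural-language front end that routes recognized
# requests to executable functions ("expert systems").

def ship_status():
    return "All systems nominal."

def compute(expression):
    # Delegate arithmetic to a tiny evaluator, restricted to
    # arithmetic characters so eval() can't run arbitrary code.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "I can't evaluate that."
    return str(eval(expression))

def interface(utterance):
    text = utterance.lower().strip()
    if text.startswith("computer, status"):
        return ship_status()
    if text.startswith("computer, compute "):
        return compute(text[len("computer, compute "):])
    return "Please rephrase."

print(interface("Computer, status report"))   # All systems nominal.
print(interface("Computer, compute 2 + 2"))   # 4
```

The conversational layer only has to map utterances onto capabilities; the expertise itself lives in the functions behind it.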
|posted 2/11/2010 04:53|
|Well said... And to prove how much slower than weak AI I am, I've only just understood the dig about "It's alive!"...|
|posted 2/18/2010 16:32|
|"WHO WOULD CARE?"|
The difference here is that while Einstein could come up with new theories and invent something, HALstein could only insist that E=mc^2. It would be like Einstein after a really bad concussion. Sure, it's Einstein, but it isn't really *Einstein* anymore. It would be like trying to run Crysis on a UNIVAC. It's pointless, worthless. "Record someone's personality"? Hah! That's the most useless part. Dogs and cats have personality! But in the end they are nothing without an actual degree of intellect.
|posted 2/19/2010 23:45|
|The more we interact with machines, the more personality they will need for "human" use. There's nothing wrong with a boxed personality, and there are machines that create, no matter how basic.|
But to say something is useless because it doesn't fit "our human" interpretation of intelligence is ridiculous. Animals have proven to be highly intelligent and skilled in the areas that benefit "them". Remember, they evolved to survive in their element, not in houses or zoos.
A human living in the forest of the Amazon may not be skilled in the written laws of physics, but I bet a hunter understands the practical roles of trajectory and energy better than scientists do when he uses blow darts or arrows.
Computers, while not "all that" yet, will evolve in design and capabilities, and if the human race doesn't become extinct, systems may begin to understand and predict our behavior better than we can. They are, after all, created in our image. Don't believe me? Take up some psych courses on mental development.
I don't care whether people call computers alive, dead weight, or scrap metal; the fact is we interact with them more closely by the day, not by the month, year, or decade. Things are moving along quickly.
If nothing else, we will all have to be able to interface with our machines in the easiest way possible, and since most humans have a difficult time learning a foreign language as it is, we will rely on computers to at least be able to understand us in our language.
Okay, rant over:)
|posted 2/22/2010 14:59|
#1: Yes, but the fact is, they can't speak. You cannot possibly believe that in the grand scheme of evolution they benefit from this and from their subservience to humans. The point of "AI," and I use the term loosely, is to create an equal.
Sangay Glass wrote @ 2/19/2010 11:45:00 PM:
"Animals have proven to be highly intelligent and skilled in the areas that benefit "them". "
"Just like a human living in the forest of the Amazon may not be skilled in the written laws of physics, but I bet a hunter understands better than scientists the practical roles of trajectory and energy when he uses blow darts or arrows."
"Computers while not "all that" yet, they will evolve in design, capabilities, and if the human race doesn't become extinct, systems may begin to understand and predict our behavior better than we can."
"I don't care whether people call computers alive, dead weight, or scrap metal, the fact is we interact with them more closely each day, not months, years or decades. Things are moving along quickly."
"If nothing else, we will all have to be able to interface with our machines in the easiest way possible, and since most humans have a difficult time learning a foreign language as it is, we will rely on computers to at least be able to understand us in our language."
#2: That's precisely my point. Those hunters are capable of learning theoretical physics, just as the physicists are capable of learning practical physics. Hal, however, is not. Unless the gravitational constant was recently changed to "Hi, Mommy!" (Seriously, though... why did they assume we would all be female?)
#3: Ah, that is precisely my point. I invite you to read again what I have written in this post. I never dispute that computers *will* become self-aware; however, it will be by our doing, not by their evolution. For a computer to evolve, it must be aware of itself. It must be able to change its own programming with a goal in mind, and it must be able to work independently of a human operator. Hal is... not.
#4: "Scrap metal"!?!? Blasphemy! Show me the culprit and I will show him my fist. I don't dispute that computers are a fast-growing field. BUT, people have said they were on the cusp of creating a sentient machine for decades. It STILL hasn't been done. It won't be done for several more decades. My guess is it'll arrive around half a decade after quantum computing moves into the home. And yes, there's a correlation there. To process all the input and simulate all the mental processes, we WILL need to harness far more resources. And we will need those resources to operate on every facet of information that could conceivably be introduced to an AI.
#5: Ah, now this is EXACTLY my point. Hal *can* fill the niche of translator between human and machine. THAT is all. If you look at a chart of the human brain (you imply you have a psych background, so I don't think I'm doing anything but reminding you), you will notice that the speech center of the brain is only a small portion of a very large, very complex system. The point of this forum post of mine is to make clear that the speech-processing center of a brain is not, in fact, a whole brain. This is my response to all the people out there chanting "It's alive!"
|Last edited by jbharner @ 2/22/2010 3:03:00 PM|