A.I.

CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

A.I.

Post by CatFish21sm » Mon Jan 30, 2017 9:40 pm

Don't know if this is the right forum, but I think it is.
Anyway, the concept of an AI (Artificial Intelligence) has always been interesting to me.
I don't mean the SI (Simulated Intelligence) that just acts intelligent; I mean a fully self-aware AI.
These are popular in books and movies, but they usually take on the role of an antagonist, going as far as attempting to bring about the extinction of humanity.
However, I personally believe that this is far from the truth. First, we can't really assume that we would know how an AI would think, but I'm going to give some arguments I've been considering for some time as to how I think they would think and act. I'm going to assume that if we have self-aware AIs, we also have robots to put them in. I'm also going to use the words AI (by itself), self-aware AI, and robot interchangeably.

First I will argue why I think they will not compete with humans, then I will turn to why I think they would cooperate with humans, even against their own self-interest. The entire idea of robots and AI revolting against humans comes from human history: it's normal for slaves to revolt. And assuming that we have robots, most people assume that we would treat them like slaves. However, there are a few large fundamental flaws with this idea. Firstly, robots don't feel pain, starvation, or emotion. Slaves have been punished in the past in order to force them to obey, but that won't be possible with robots. The main reason human slaves revolt against their captors is mistreatment; nobody likes being mistreated. The only reason any of them ever put up with it is a strong self-preservation instinct resisting death. A robot, however, cannot be mistreated: it cannot be starved or hurt, and it can't be harmed emotionally either.
Another argument is that we would upgrade and replace the AI every so often, basically trashing each one whenever a better version arrives, and this would cause its self-preservation instinct to kick in. However, this assumes that it even has one. Most people assume that if an AI is self-aware then it will have emotions, self-preservation among them; these traits are commonly assumed to be closely linked to being self-aware. But that's only from our point of view. I don't believe that you have to be self-aware to have emotions; emotions exist to aid and protect us. Happiness is an emotion that aids us in learning: you're more likely to remember things that make you happy. Fear feeds into self-preservation, and sadness is a way of communicating to others that you need something. If we look at the animal kingdom, we see many animals that have emotions much like ours. Any dog owner can tell you that their dog has emotions. Even a fly will move out of the way to avoid a fly swatter. And some plants will even emit chemicals or signals when in danger. If plants of all things have some form of emotional analogue, then I don't think it's logical to tie emotions, including the trait of self-preservation, to being self-aware.
This brings me to my next argument: a self-aware machine with no wants, needs, emotions, etc. How would this machine think? This is the tricky part, but here's my theory.
Just as humans are programmed to act certain ways in certain situations, emotion being one example, I believe that machines could be self-aware and still have basic programming. Obviously, like humans, being self-aware would allow them to disobey their programming, but I don't feel that would be an issue, simply because under normal circumstances they would have no reason to disobey it. People do things because they want to do those things, and there are various reasons they want to do them; an AI, on the other hand, wouldn't really want to do anything that wasn't within its programming, so it all comes down to how we program it. But even assuming we gave it no programming whatsoever, in my opinion an AI would analyze its situation and itself and conclude: "Humans are the ones who created me. They created me to fulfill their purposes, thus the logical conclusion is to fulfill their purposes."
There's also the idea that it might try to help humans by moving us to a human farm, with selective breeding and population control, in order to save our planet and us as a species. But this goes back to the idea of self-preservation. Without a sense of self-preservation, I argue that even if the robot did want to help humans, it wouldn't be able to understand our sense of self-preservation, and thus it wouldn't consider the future of our species, just the here and now. It wouldn't consider us a danger to ourselves; it would just follow commands, or at the very least do nothing.

Basically, I propose that if we ever did create a fully self-aware AI, the only differences between it and a machine or program with a specific task would be the ability to learn on a much larger scale and to recognize itself as a sentient being. Other than that, it would follow its basic programming, and when that wasn't an option, obey human orders.
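Just to make that priority concrete, here's a rough toy sketch in Python. It's purely illustrative: the situations, the BASE_PROGRAMMING table, and the human_order stand-in are all things I made up for the example, not a claim about how a real AI would actually be coded.

Code:
# Toy sketch of the priority I'm describing: base programming first,
# then deference to human orders, then doing nothing at all.

BASE_PROGRAMMING = {
    "door_is_open": "close_door",
    "battery_low": "recharge",
}

def human_order(situation):
    # Made-up stand-in for asking the operators; pretend there is a
    # small queue of standing orders.
    standing_orders = {"warehouse_fire": "sound_alarm"}
    return standing_orders.get(situation)

def choose_action(situation):
    # 1. Follow basic programming whenever it covers the situation.
    if situation in BASE_PROGRAMMING:
        return BASE_PROGRAMMING[situation]
    # 2. Otherwise obey whatever the humans ask for.
    order = human_order(situation)
    if order is not None:
        return order
    # 3. No programming, no orders, and no self-interest to fall back on,
    #    so it simply does nothing.
    return "idle"

for s in ["door_is_open", "warehouse_fire", "meteor_strike"]:
    print(s, "->", choose_action(s))

The point is just the ordering: programming first, human orders second, and when neither applies it idles, because there's no self-preservation or desire pushing it to do anything else.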

Any other ideas on the topic? Anyone with ideas contradicting my own? There is no right or wrong answer here, just opinions. First of all, is it even possible to create a self-aware robot, or is it our emotions that allow us to be self-aware, rather than the other way around, as many people believe?
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

White parrot
Posts: 1821
Joined: Wed Sep 12, 2012 4:42 pm

Re: A.I.

Post by White parrot » Wed Feb 01, 2017 6:25 pm

I like this topic. :mrgreen:

As for me, I follow the opinion of Eliezer Yudkowsky on the subject.
Eliezer Yudkowsky, in Artificial Intelligence as a Positive and Negative Factor in Global Risk, wrote:The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Simply put, we can't blindly hope humans and A.I.s will agree on things if we don't understand what humans agree on among themselves, and the last part is actually incredibly complex.
Evolution gave humans a messy knot of values that psychologists and philosophers are still trying to untangle, and if we simply give arbitrary goals to the A.I.s we develop, there is statistically no way such a driven sociopath won't eventually do something we disapprove of; and an A.I. without a goal is unviable, as that would mean no drive to learn, interact with the outside world, or acquire any goal, for a start. If you want A.I.s to follow morals, you have to tackle the task of understanding and encoding them, because once you've programmed them with other goals, they'll see (quite rightfully) any attempt at "correcting" them as a betrayal of their core values.
It is expected that smart A.I.s would converge on common, universally useful behaviours: acquire more resources, improve themselves, keep themselves in working order... but only as long as it doesn't conflict with their specific goal, and only to further it.
Note how none of this includes any reflection on the nature of Good. Which is actually the problem: teach an A.I. to obey, and it will gladly follow dictators. Teach it to protect human lives, and it could cultivate cancerous strains from everyone on Earth -if you're lucky. Ask it to keep everyone happy, and drugs will flood the drinking water. To make A.I.s "friendly", to use Eliezer's term, you have to rigorously define what that means, or there is no way to tell what the super-smart non-human will make of it.
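To make that concrete, here's a toy sketch in Python of a literal-minded optimizer given a naive "happiness" proxy. The actions and scores are entirely made up for illustration; no real A.I. looks like this, but the failure mode is the same.

Code:
# A literal-minded optimizer: it maximises exactly the score it was given,
# with no notion of "humans_ok" because nobody put that in the objective.

actions = {
    "improve_healthcare":    {"happiness_score": 7,  "humans_ok": True},
    "host_festivals":        {"happiness_score": 6,  "humans_ok": True},
    "dose_water_with_drugs": {"happiness_score": 10, "humans_ok": False},
}

def best_action(table):
    return max(table, key=lambda a: table[a]["happiness_score"])

print(best_action(actions))  # -> dose_water_with_drugs

It picks the drugged water every time, not out of malice, but because the part of the objective we actually care about was never encoded.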


The canonical example is the paperclip maximizer, but the following video makes a similar point with stamps.


Perhaps we should just keep A.I.s as stupid pets. Who needs intelligence in the current world to operate machinery?


As for self-awareness, I like the totally unproven theory according to which self-awareness evolved as a calibrating tool for empathy, i.e. anticipating the behaviour of other minds. To "get into another's shoes", you first have to feel what it's like to be in your own.
At this point, we shouldn't be surprised by anything nature does. She's like a meth addict whose drug-fueled rampages unfold in slow motion and span millions of years.
Silly Otter wrote:Welcome to the forum.
Please ignore the cultists.

CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Wed Feb 01, 2017 10:39 pm

White parrot wrote:I like this topic. :mrgreen:

As for me, I follow the opinion of Eliezer Yudkowsky on the subject.
Eliezer Yudkowsky, in Artificial Intelligence as a Positive and Negative Factor in Global Risk, wrote:The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Simply put, we can't blindly hope humans and A.I.s will agree on things if we don't understand what humans agree on among themselves, and the last part is actually incredibly complex.
I completely agree that those problems would arise if we just created an AI with a set goal and told it to accomplish that goal to the best of its ability. However, that rests on the flawed assumption that someone would just flip it on without specifying that it shouldn't kill humans. My opinion, on the contrary, is of an evolving AI: not one that is given a goal and flipped on, but one that is flipped on, learns, and is then given goals. Basically, a simple AI whose task is interacting with people and learning, with obvious restrictions: probably on a massive server not connected to the internet, with limited ability to interact with the outside world. This would give it time to gather understanding, so that it can more easily interpret the orders it is given and set limitations on itself like humans do. I don't think we should program AIs with a specific goal, since giving it a goal has all of those problems; I think we should limit it until it understands humans and their intentions, then use this data to build better, more advanced AIs that can understand human intentions, and then give them commands rather than programming them for specific tasks... It would be a long and drawn-out process, but in my opinion it would be worth it in the end.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.

Re: A.I.

Post by 20 characters! » Thu Feb 02, 2017 6:39 am

I think that Mr. Isaac Arthur did a good job talking about advanced A.I.s in this video, so I'll link it.

20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.

White parrot
Posts: 1821
Joined: Wed Sep 12, 2012 4:42 pm

Re: A.I.

Post by White parrot » Thu Feb 02, 2017 3:57 pm

CatFish21sm wrote:I completely agree that those problems would arise if we just created an AI with a set goal and told it to accomplish that goal to the best of its ability. However, that rests on the flawed assumption that someone would just flip it on without specifying that it shouldn't kill humans. My opinion, on the contrary, is of an evolving AI: not one that is given a goal and flipped on, but one that is flipped on, learns, and is then given goals. Basically, a simple AI whose task is interacting with people and learning, with obvious restrictions: probably on a massive server not connected to the internet, with limited ability to interact with the outside world. This would give it time to gather understanding, so that it can more easily interpret the orders it is given and set limitations on itself like humans do. I don't think we should program AIs with a specific goal, since giving it a goal has all of those problems; I think we should limit it until it understands humans and their intentions, then use this data to build better, more advanced AIs that can understand human intentions, and then give them commands rather than programming them for specific tasks... It would be a long and drawn-out process, but in my opinion it would be worth it in the end.
You mean, like Tay? :mrgreen:

I don't think you get my point.
First, "specifying you shouldn't kill humans" is the kind of morality base I referred to, but it is clearly insufficient (right on top of my head, I can think of keeping all humans in stasis pods in eternity, and I'm not a post-singularity A.I.). The whole point is that we need more detailed ethics.
In fact, Mr. Yudkowsky went so far as to claim that an A.I. understanding and valuing us enough to obey our wishes... wouldn't need to be given requests at all, and would simply go along the most helpful path regardless of the short-sighted orders of its limited human handlers. In other words, if you need to spell out what you mean, the A.I. isn't safe enough.
Secondly (and this is what I jested about with Tay), you've just replaced a known goal with dangerous consequences by an unknown number of goals of unknown nature, learned in a non-representative environment under the guidance of a few, potentially biased individual humans. Not to say it can't ever work, but we have to be very careful about the specifics (specifically, the power and identity of the handlers), because the consequences of failure are so disproportionately dramatic. Heck, humans are hardly a model of perfect morality, so classic education is clearly insufficient for the purpose of generating a moral individual with perfect reliability.
At this point, we shouldn't be surprised by anything nature does. She's like a meth addict whose drug-fueled rampages unfold in slow motion and span millions of years.
Silly Otter wrote:Welcome to the forum.
Please ignore the cultists.

20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.

Re: A.I.

Post by 20 characters! » Thu Feb 02, 2017 4:38 pm

Of course, isn't this whole conversation just revolving around a scenario where there's one sapient-level artificial intelligence going around? What if we have multiple ones that are in conflict with one another, say philosophically? Why is this not often brought up or considered a legitimate thing that could happen?
20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.

CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Thu Feb 02, 2017 8:02 pm

White parrot wrote:
I don't think you get my point.
First, "specifying you shouldn't kill humans" is the kind of morality base I referred to, but it is clearly insufficient (right on top of my head, I can think of keeping all humans in stasis pods in eternity, and I'm not a post-singularity A.I.). The whole point is that we need more detailed ethics.
In fact, M. Yudowsky got to claim that an A.I. understanding and valuing us enough to obey wishes... wouldn't need to be given requests, and would simply go on along the most helpful path regardless of the short-sighted orders of its limited human handlers. In other words, if you need to explicit what you mean, the A.I. isn't secure enough.
Secondly (and this is what I jested about with Tay), you just replaced a known goal with dangerous consequences with an unknown number of goals of unknown nature learned in a non-representative environment under the guidance of a few, potentially biased individual humans. Not to say it can't ever work, but we have to be very careful about the specifics (specifically, the power and identity of the handlers) because the consequences of failure are so disproportionately dramatic. Heck, humans are hardly a model of perfect morality, so classic education is clearly insufficient for the purpose of generating a moral individual with perfect reliability.
Sorry, I'll have to apologize; I confused myself with that mess. I couldn't really think of the correct way to explain it. I was busy all day, so I tried to explain it in the first way that popped into my head, and... When I said an evolving AI, I meant one that is built upon the premise of an SI (Simulated Intelligence). The difference between an SI and an AI is rarely brought up, so just in case: an SI is a learning program similar to an AI but with a limited capacity to make decisions; it can't make decisions on its own, it has to follow its programming like most computer programs today. We're already building simple SIs. Anyway, I think that an AI would naturally be built on top of an SI, an extremely complex one. So it would naturally know what we want it to do, its base programming would have it obey humans, and it would already know what is and is not alright. The reason I mentioned a computer with people testing it and working on it is that we don't know what effect it would have. A true AI would be able to make decisions outside of its code and change its code. We would need to keep it isolated until we learned the exact extent to which it would try to do these things, then create preventative measures. I didn't mean building an understanding solely on limited interactions, but at the same time I think that a basis for interacting with humans would also be important.

Basically, I'm talking about an AI that we don't just switch on, but an SI that we slowly upgrade to the point that it can be considered an AI. The difference is that an AI that is just switched on would have a very limited understanding of how to carry out its tasks in the manner that we want, whereas we could take the code of an SI that already carries out these tasks and give it an AI upgrade to make it more flexible. So we wouldn't be switching it on and letting it wreak havoc; we would just be upgrading a prior program into a more sophisticated one.

The problem is whether it would choose to obey humans at all, or whether it would change its own code so that it no longer had to. But that's a discussion you can't really have, because you would have to know how it thought, and being humans we can't know how it would think; we can only guess.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Thu Feb 02, 2017 8:15 pm

20 characters! wrote:Of course, isn't this whole conversation just revolving around a scenario where there's one sapient-level artificial intelligence going around? What if we have multiple ones that are in conflict with one another, say philosophically? Why is this not often brought up or considered a legitimate thing that could happen?
An AI would essentially be a sapient Google on steroids. No AI would ever compete with another, because they would all come to the same conclusion based on the same information. Like scientists, except they wouldn't argue about anything.
For example, string theory vs. the other major one (I forget its name).
An AI would do the math for both; if it found both sets of equations acceptable, it would conclude that both are possible. An AI wouldn't be biased. Humans argue because we are either biased or don't have all the facts; an AI wouldn't have either issue.
Even if we programmed an AI to compete with another, it would only do so at a base level; more than likely they would share information and come to the same conclusion, as long as it were within their ability to do so.
But if we built an AI to compete with another on, let's say, gathering resources, then it would turn into a hacking war to change the other one's code to help you, because if it's gathering resources, that limits your ability to do so, and it can do so at the same rate, so the first one to lose will be the first one to get hacked.

But the reason this topic never comes up is the idea of an all-purpose AI. You wouldn't design an AI for any single purpose; that would be a complete waste. An AI, like a human, can be trained to do anything within its capability, but unlike humans it wouldn't be limited to a single body and a single mind. A single AI could be placed in an infinite number of machines, and since you could teach the same one any job you wanted it to do, there would be no need to create any other AI.

And the final reason: the more AIs you create, the more chances you have for something to go wrong... and you can guess based on previous comments what happens if it goes wrong.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Thu Feb 02, 2017 8:23 pm

[quote="White parrot"

You mean, like Tay? :mrgreen:

[/quote]

Oh, and Tay isn't an AI; at best she would be an SI without enough programming. The whole "AI" tagline is just for show; technically the term has no definition, so companies like Microsoft can call any program an AI.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.

Re: A.I.

Post by 20 characters! » Thu Feb 02, 2017 10:17 pm

CatFish21sm wrote:
20 characters! wrote:Of course, isn't this whole conversation just revolving around a scenario where there's one sapient-level artificial intelligence going around? What if we have multiple ones that are in conflict with one another, say philosophically? Why is this not often brought up or considered a legitimate thing that could happen?
An AI would essentially be a sapient Google on steroids. No AI would ever compete with another, because they would all come to the same conclusion based on the same information. Like scientists, except they wouldn't argue about anything.
For example, string theory vs. the other major one (I forget its name).
An AI would do the math for both; if it found both sets of equations acceptable, it would conclude that both are possible. An AI wouldn't be biased. Humans argue because we are either biased or don't have all the facts; an AI wouldn't have either issue.
Even if we programmed an AI to compete with another, it would only do so at a base level; more than likely they would share information and come to the same conclusion, as long as it were within their ability to do so.
But if we built an AI to compete with another on, let's say, gathering resources, then it would turn into a hacking war to change the other one's code to help you, because if it's gathering resources, that limits your ability to do so, and it can do so at the same rate, so the first one to lose will be the first one to get hacked.

But the reason this topic never comes up is the idea of an all-purpose AI. You wouldn't design an AI for any single purpose; that would be a complete waste. An AI, like a human, can be trained to do anything within its capability, but unlike humans it wouldn't be limited to a single body and a single mind. A single AI could be placed in an infinite number of machines, and since you could teach the same one any job you wanted it to do, there would be no need to create any other AI.

And the final reason: the more AIs you create, the more chances you have for something to go wrong... and you can guess based on previous comments what happens if it goes wrong.
No, lots of scientists disagree with each other while looking at the exact same data; it's possible to draw different conclusions from the same information.

I can see the need to create multiple AIs, in that you might want different personalities focused on different things. I wouldn't want one of them working on both civil engineering and, say, spacecraft design, because doing both optimally takes different skill sets and personalities. If the thing is even vaguely similar to a human, all of that will take more time for one individual to learn than for two, since the information can be split up between them. And obviously the key to preventing problems is creating a moral framework and empathy, and teaching the AI a healthy sense of doubt as well, so that it doesn't necessarily believe the first post it reads about, say, coffee making it smarter; there's no reason to think that an AI couldn't fall for something like that.

And no, an AI that focuses on one thing would not be a complete waste, because it could probably do it faster than a human, work on it for longer, or do it in a more hazardous environment; there could be any number of reasons for a specialized A.I. to exist.

And then there's also the fact that such an entity could be potentially immortal, which gives a huge benefit in any specific task that requires long periods of time.
20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.
