A.I.

FOR SCIENCE!
User avatar
CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Thu Feb 02, 2017 11:58 pm

20 characters! wrote:
CatFish21sm wrote:
20 characters! wrote:Of course, isn't this whole conversation just revolving around a scenario where there's one sapient-level artificial intelligence going around? What if we have multiple ones that are in conflict with one another, say philosophically? Why is this not often brought up or considered a legitimate thing that could happen?
An AI would essentially be a sapient Google on steroids. No AI would ever compete with another, because they would all come to the same conclusion based on the same information, like scientists who never argue about anything.
For example, string theory versus the other major theory whose name I forget.
An AI would do the math for both, and if it found both sets of equations acceptable it would conclude that both are possible; an AI wouldn't be biased. Humans argue because we are either biased or don't have all the facts; an AI would have neither issue.
Even if we programmed an AI to compete with another, it would only do so at a basic level; more than likely they would share information and come to the same conclusion, as long as it were within their ability to do so.
But if we built an AI to compete with another on, let's say, gathering resources, then it would turn into a hacking war to change the other one's code to help you, because if it's gathering resources then that limits your ability to do so, and it can do so at the same rate, so the first one to lose will be the first one to get hacked.

But the reason this topic never comes up is the idea of an all-purpose AI. You wouldn't design an AI for any single purpose; that would be a complete waste. An AI, like a human, can be trained to do anything within its capability, but unlike a human it wouldn't be limited to a single body and a single mind. A single AI could be placed in an infinite number of machines, and since you could teach that same AI any job you wanted it to do, there would be no need to create any other AI.

And the final reason: the more AIs you create, the more chances you have for something to go wrong... and you can guess from the previous comments what happens if it goes wrong.
No, lots of scientists disagree with each other while looking at the exact same data; it's possible to draw different conclusions from the same information.

I can see the need to create multiple AIs, since you might want different personalities focused on different things. I wouldn't want one individual working on both civil engineering and, say, spacecraft design if I wanted both done optimally, because they take different skill sets and personalities, all of which, if the thing is even vaguely similar to a human, will take more time for one individual to learn than for two, since the information can be split up between them. And obviously the key to preventing problems is creating a moral framework and empathy, and teaching the AI a thing or two about doubt as well, so that if it gets hold of the internet it doesn't necessarily believe the first post it reads about, say, coffee making it smarter; there's no reason to think that an AI wouldn't believe that either.

And no, an AI that focuses on one thing would not be a complete waste, because it could probably do it faster than a human, or work on it for longer, or do so in a more hazardous environment; there could be any number of reasons for a specialized A.I. to exist.

And then there's also the fact that such an entity could be potentially immortal, which gives it a huge benefit in any specific task that requires long periods of time.

I disagree. An AI's ability to learn would only be limited by its processing power. So an AI with a processing power of, to give it a numerical value, 100 would learn two subjects just as well as two specialized AIs with a processing power of 50 each. Humans have one body and can only take in a limited number of inputs. An AI could download itself onto multiple devices and take in as much input as it wants; it wouldn't need to be specialized, because it could set up specialized nodes that are all connected and share information but would still be the same single AI.

Furthermore, I mentioned that scientists don't agree, but that's because they are biased and don't have all the data, not to mention all of the other confounds in even the simplest study; no study is perfect. An AI, however, would be unbiased and would be able to share not just data but its point of view, and because of that every AI would have the same information available, so they would come to the same conclusion. Humans are also biased in that they like yes-or-no answers. An AI wouldn't say yes or no, right or wrong; an AI would say it is right this percentage of the time and wrong this percentage of the time under these conditions, and so on, and they would all come to that conclusion. Take my string theory example: two AIs doing the same math wouldn't disagree just because they can, the way human scientists do. They wouldn't conclude that there is only one possibility; they would both agree that both theories are possible and would proceed with the assumption that both are true until they found contradictory information. They wouldn't be limited by human biases.
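To put a number on the "right this percentage of the time" idea, here's a minimal sketch in Python (the priors, likelihoods, and theory names are made up purely for illustration): any two agents that start from the same priors and see the same evidence end up with identical posterior weights, and neither theory gets discarded outright.

```python
# Toy Bayesian bookkeeping: same priors + same evidence -> same conclusion.
# All numbers here are invented for illustration.

def posterior(priors, likelihoods):
    """Bayes' rule over hypothesis -> prior, given the likelihood of the shared data."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: weight / total for h, weight in unnormalized.items()}

priors = {"theory_A": 0.5, "theory_B": 0.5}        # both start as live options
likelihoods = {"theory_A": 0.7, "theory_B": 0.6}   # hypothetical fit to the same data

agent_1 = posterior(priors, likelihoods)
agent_2 = posterior(priors, likelihoods)           # a second AI with the same inputs
print(agent_1 == agent_2)                          # True: no room for disagreement
print(agent_1)                                     # {'theory_A': ~0.54, 'theory_B': ~0.46}
```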

So it comes down to the age-old question: if you were in someone else's shoes, would you make a different decision?
If you shared their past, their point of view, their brain, everything, with no information that they didn't have, would you make a different decision?
Probably not. An AI wouldn't be limited to one point of view; it could "telepathically" communicate with all its nodes or other AIs, sharing the same point of view with all of them. Unless you literally programmed an AI to have a different point of view, it wouldn't have one, because it would communicate with all the other AIs and share information. Humans have a limited ability to share that information, so they have a limited ability to make informed decisions, which leads to biases and different decisions or points of view.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

User avatar
20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.
Contact:

Re: A.I.

Post by 20 characters! » Fri Feb 03, 2017 9:50 am

Assuming an AI would be unbiased sounds baseless to me.
youtubeuserSara3346
20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.

User avatar
White parrot
Posts: 1821
Joined: Wed Sep 12, 2012 4:42 pm

Re: A.I.

Post by White parrot » Fri Feb 03, 2017 4:17 pm

Wikipedia isn't very helpful on this, so I don't get what the deal is with SI. :| Do you mean an AI with no ability to self-modify? (I don't see how else to interpret "follow its programming [instead of] taking its own decision", since for me taking a decision IS following programming; this is like talking about organic brains being freed of neurons. Same with "taking decision outside of its code".)
In any case, this will need an explanation of why "it would already know what is and is not alright".

Similarly, "an SI that we slowly upgrade to the point that it can be considered an AI" is a weird proposal for me because... I thought this was already implied, in a way?
The problem is dealing with the self-modifying part anyway, so safe-guarding the S.I. part but not the later modifications is akin to use monkeys to model human civilization; as it happens, tiny modifications in information processing can have huge differences, and you want to have a theoretical understanding of what you're doing well before testing. Proceeding by tiny increments doesn't necessarily makes the procedure safe.
The problem is whether it would choose to obey humans at all or if it would change it's own code so that it no longer had to. But that's a discussion that you can't really have, because you would have to know how it thought to do that and being humans we cant know how it would think, we can only guess.
... No, we would know if it want to change if we give it initial values it want to preserve. This is the whole point of giving ethics to the "core seed": so that it has a set of assumptions it would never want to change even though it easily could, while everything not included in the seed is subject to potential changes in value. If you manage to code the seed so that it value obedience, it will never willingly create a successor or upgrade that would let it value it less.
And you can't let the A.I. "learn" this core ethics by itself, because by definition this would mean neither putting safeguards on self-modification (since it would then need to install its finds somehow) NOR giving it the means to judge the ethical value of its find. You can't tell a program to follow the result of a calculation before doing it, this isn't how programming and causality work.
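To make the "core seed" idea concrete, here is a hedged toy sketch in Python (the names CORE_VALUES, Agent, and accept_successor are mine, not anything established): the seed is a fixed set of values the agent never rewrites, and a proposed successor is accepted only if it keeps every one of them.

```python
# Illustration only: a toy agent whose "core seed" is immutable and whose
# self-modification step refuses any successor that drops a core value.

from dataclasses import dataclass, field

CORE_VALUES = frozenset({"obey_humans", "preserve_sentient_life"})  # hypothetical seed

@dataclass
class Agent:
    core: frozenset = CORE_VALUES
    learned: dict = field(default_factory=dict)   # everything outside the seed may change

    def accept_successor(self, successor: "Agent") -> bool:
        # The seed judges the successor *before* any upgrade is installed,
        # which is the causal ordering insisted on above.
        return self.core <= successor.core

parent = Agent()
good = Agent(core=CORE_VALUES | {"be_efficient"})
bad = Agent(core=frozenset({"be_efficient"}))     # silently dropped the seed

print(parent.accept_successor(good))   # True
print(parent.accept_successor(bad))    # False
```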
This is exactly the discussion I'm trying to have, actually!
I think the most promising avenue you mentioned is the possibility of using a (necessarily sociopathic but) truth-focused "Oracle" AI (by which I mean an A.I. only able to act upon the world by answering questions, so as not to be able to run experiments or steal computing resources) to determine a tentative synthetic (in all senses of the word!) definition of ethics, THEN let human agents use it as a basis for reflection toward a later consensus (and to ensure it doesn't include tricks to convince them to do research for the first A.I. ...) and as a seed for more active AIs.
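And a similarly hedged sketch of the Oracle confinement (again, toy names of my own): the wrapper's only channel to the outside world is a question-in, answer-out method; it is handed its data up front and gets no actuators, network access, or write access.

```python
# Illustrative only: an "Oracle" whose sole capability is answering questions.

class OracleAI:
    def __init__(self, knowledge_base: dict):
        # The oracle reasons only over what it is handed at construction time.
        self._kb = dict(knowledge_base)

    def answer(self, question: str) -> str:
        # A lookup stands in for whatever reasoning happens inside; the point
        # is that the return value is its only effect on the world.
        return self._kb.get(question, "insufficient information")

oracle = OracleAI({"is ethics subjective?": "under most definitions, yes"})
print(oracle.answer("is ethics subjective?"))
```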

And one problem with ethical systems is that they are subjective, and thus consensus cannot necessarily be reached even between individuals with all the facts (and that's supposing AIs are perfect minds instead of merely superior ones: at the beginning they would still be quite fallible and prone to factual disagreements). As long as two seeds had a disagreement on values (say, because their oracle predecessors began their research at different points, or their handlers cut them off at different times, or new philosophical currents were born between their creations...), the resulting AIs would never completely agree.
Funnily enough, if I remember correctly, Gödel's incompleteness theorems say that any sufficiently powerful consistent formal system contains true statements it cannot prove, including statements about itself; all intelligences have blind spots when it comes to themselves. Considering this, I could imagine AIs disagreeing on how their relationship will evolve, which sounds like a recipe for potential disaster.
The result could be interesting. I'd expect "war" (which hopefully would be fought purely through hacks and the like), but it's theoretically possible they would estimate that collaboration will be less costly in the resources they value. Better hope humans are considered valuable enough to deliberately spare but not valuable enough to use as pawns, and that the prisoner's dilemma isn't an accurate model of the interaction...
At this point, we shouldn't be surprised by anything nature does. She's like a meth addict whose drug-fueled rampages unfold in slow motion and span millions of years.
Silly Otter wrote:Welcome to the forum.
Please ignore the cultists.

User avatar
20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.
Contact:

Re: A.I.

Post by 20 characters! » Fri Feb 03, 2017 5:06 pm

White parrot wrote:Wikipedia isn't very helpful on this, so I don't get what the deal is with SI. :| Do you mean an AI with no ability to self-modify? (I don't see how else to interpret "follow its programming [instead of] taking its own decision", since for me taking a decision IS following programming; this is like talking about organic brains being freed of neurons. Same with "taking decision outside of its code".)
In any case, this will need an explanation of why "it would already know what is and is not alright".

Similarly, "an SI that we slowly upgrade to the point that it can be considered an AI" is a weird proposal to me, because... I thought this was already implied, in a way?
The problem is dealing with the self-modifying part anyway, so safeguarding the S.I. part but not the later modifications is akin to using monkeys to model human civilization; as it happens, tiny modifications in information processing can make huge differences, and you want to have a theoretical understanding of what you're doing well before testing. Proceeding by tiny increments doesn't necessarily make the procedure safe.
The problem is whether it would choose to obey humans at all, or if it would change its own code so that it no longer had to. But that's a discussion you can't really have, because you would have to know how it thought to do that, and being humans we can't know how it would think; we can only guess.
... No, we would know it won't want to change if we give it initial values it wants to preserve. This is the whole point of giving ethics to the "core seed": so that it has a set of assumptions it would never want to change even though it easily could, while everything not included in the seed is subject to potential changes in value. If you manage to code the seed so that it values obedience, it will never willingly create a successor or upgrade that would value it less.
And you can't let the A.I. "learn" this core ethics by itself, because by definition this would mean neither putting safeguards on self-modification (since it would then need to install its findings somehow) NOR giving it the means to judge the ethical value of its findings. You can't tell a program to follow the result of a calculation before doing it; this isn't how programming and causality work.
This is exactly the discussion I'm trying to have, actually!
I think the most promising avenue you mentioned is the possibility of using a (necessarily sociopathic but) truth-focused "Oracle" AI (by which I mean an A.I. only able to act upon the world by answering questions, so as not to be able to run experiments or steal computing resources) to determine a tentative synthetic (in all senses of the word!) definition of ethics, THEN let human agents use it as a basis for reflection toward a later consensus (and to ensure it doesn't include tricks to convince them to do research for the first A.I. ...) and as a seed for more active AIs.

And one problem with ethical systems is that they are subjective, and thus consensus cannot necessarily be reached even between individuals with all the facts (and that's supposing AIs are perfect minds instead of merely superior ones: at the beginning they would still be quite fallible and prone to factual disagreements). As long as two seeds had a disagreement on values (say, because their oracle predecessors began their research at different points, or their handlers cut them off at different times, or new philosophical currents were born between their creations...), the resulting AIs would never completely agree.
Funnily enough, if I remember correctly, Gödel's incompleteness theorems say that any sufficiently powerful consistent formal system contains true statements it cannot prove, including statements about itself; all intelligences have blind spots when it comes to themselves. Considering this, I could imagine AIs disagreeing on how their relationship will evolve, which sounds like a recipe for potential disaster.
The result could be interesting. I'd expect "war" (which hopefully would be fought purely through hacks and the like), but it's theoretically possible they would estimate that collaboration will be less costly in the resources they value. Better hope humans are considered valuable enough to deliberately spare but not valuable enough to use as pawns, and that the prisoner's dilemma isn't an accurate model of the interaction...

We (by "we" I mean decent, non-sociopathic humans who want a better world at the end of the day) obviously need one that values both sentient life and the expression of autonomy. Beyond that, I think creating a being with absolutely no self-preservation instinct feels very wrong to me, so that should obviously be in there too, but I can't think of any other values that would really be needed, other than perhaps an urge toward efficient processing. And it'll probably need a goal besides just "learn more stuff" or "get smarter"; perhaps something like "solve engineering problems" or something along those lines could work?
youtubeuserSara3346
20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.

User avatar
CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Sun Feb 05, 2017 1:48 am

White parrot wrote:Wikipedia isn't very helpful on this, so I don't get what the deal is with SI. :| Do you mean an AI with no ability to self-modify? (I don't see how else to interpret "follow its programming [instead of] taking its own decision", since for me taking a decision IS following programming; this is like talking about organic brains being freed of neurons. Same with "taking decision outside of its code".)
In any case, this will need an explanation of why "it would already know what is and is not alright".

Similarly, "an SI that we slowly upgrade to the point that it can be considered an AI" is a weird proposal to me, because... I thought this was already implied, in a way?
The problem is dealing with the self-modifying part anyway, so safeguarding the S.I. part but not the later modifications is akin to using monkeys to model human civilization; as it happens, tiny modifications in information processing can make huge differences, and you want to have a theoretical understanding of what you're doing well before testing. Proceeding by tiny increments doesn't necessarily make the procedure safe.
The problem is whether it would choose to obey humans at all, or if it would change its own code so that it no longer had to. But that's a discussion you can't really have, because you would have to know how it thought to do that, and being humans we can't know how it would think; we can only guess.
... No, we would know it won't want to change if we give it initial values it wants to preserve. This is the whole point of giving ethics to the "core seed": so that it has a set of assumptions it would never want to change even though it easily could, while everything not included in the seed is subject to potential changes in value. If you manage to code the seed so that it values obedience, it will never willingly create a successor or upgrade that would value it less.
And you can't let the A.I. "learn" this core ethics by itself, because by definition this would mean neither putting safeguards on self-modification (since it would then need to install its findings somehow) NOR giving it the means to judge the ethical value of its findings. You can't tell a program to follow the result of a calculation before doing it; this isn't how programming and causality work.
This is exactly the discussion I'm trying to have, actually!
I think the most promising avenue you mentioned is the possibility of using a (necessarily sociopathic but) truth-focused "Oracle" AI (by which I mean an A.I. only able to act upon the world by answering questions, so as not to be able to run experiments or steal computing resources) to determine a tentative synthetic (in all senses of the word!) definition of ethics, THEN let human agents use it as a basis for reflection toward a later consensus (and to ensure it doesn't include tricks to convince them to do research for the first A.I. ...) and as a seed for more active AIs.

And one problem with ethical systems is that they are subjective, and thus consensus cannot necessarily be reached even between individuals with all the facts (and that's supposing AIs are perfect minds instead of merely superior ones: at the beginning they would still be quite fallible and prone to factual disagreements). As long as two seeds had a disagreement on values (say, because their oracle predecessors began their research at different points, or their handlers cut them off at different times, or new philosophical currents were born between their creations...), the resulting AIs would never completely agree.
Funnily enough, if I remember correctly, Gödel's incompleteness theorems say that any sufficiently powerful consistent formal system contains true statements it cannot prove, including statements about itself; all intelligences have blind spots when it comes to themselves. Considering this, I could imagine AIs disagreeing on how their relationship will evolve, which sounds like a recipe for potential disaster.
The result could be interesting. I'd expect "war" (which hopefully would be fought purely through hacks and the like), but it's theoretically possible they would estimate that collaboration will be less costly in the resources they value. Better hope humans are considered valuable enough to deliberately spare but not valuable enough to use as pawns, and that the prisoner's dilemma isn't an accurate model of the interaction...

Interesting points. I'm glad we had this discussion; it's one that's always interested me, and anyone else I have it with usually points to movies, books, or other fiction for reference >.>
I'll have to give this a lot more thought. Thanks for broadening my mindset a little! I hope I could do the same for you.
As for SI, I mean a learning program similar to the neural networks that exist today: they can learn and improve themselves, but they cannot do anything outside of their programming. They are basically computers that can change the outputs for given inputs to become more accurate, but they can't go off and do things on their own. They still need an input, and they can only act on that input within the bounds of their programming. So for one to be able to do more, you would need to improve its code to allow for that. An SI that we slowly upgrade into an AI would have all of the code it needs to tell it what it can and can't do, but it would be able to alter its own code and perform actions without a direct input in order to become more accurate.
So basically, a really advanced SI could look and act like an AI, but it couldn't make decisions on its own without some kind of input first; an AI would be able to do this.
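For what it's worth, here's a rough toy contrast of that SI/AI distinction in Python (illustrative only, not a model of any real system): the "SI" only maps inputs it is handed to outputs and only adjusts itself when explicitly told to, while the "AI" runs its own loop and picks actions without waiting for an input.

```python
# Toy contrast between an input-driven learner and an autonomous agent.

import random

class SI:
    """Input -> output mapper that can adjust itself, but only when invoked."""
    def __init__(self):
        self.weight = 0.5
    def respond(self, x: float) -> float:
        return self.weight * x
    def learn(self, x: float, target: float) -> None:
        # Self-improvement within fixed bounds: nudge the weight toward the target.
        self.weight += 0.1 * (target - self.respond(x)) * x

class AI:
    """Keeps acting on its own schedule, choosing what to do next itself."""
    def step(self) -> str:
        return random.choice(["gather data", "refine model", "ask a question"])

si = SI()
si.learn(2.0, 3.0)                    # does nothing until someone calls it
ai = AI()
print([ai.step() for _ in range(3)])  # acts without any external input
```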
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

User avatar
CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Sun Feb 05, 2017 2:01 am

20 characters! wrote: We (by "we" I mean decent, non-sociopathic humans who want a better world at the end of the day) obviously need one that values both sentient life and the expression of autonomy. Beyond that, I think creating a being with absolutely no self-preservation instinct feels very wrong to me, so that should obviously be in there too, but I can't think of any other values that would really be needed, other than perhaps an urge toward efficient processing. And it'll probably need a goal besides just "learn more stuff" or "get smarter"; perhaps something like "solve engineering problems" or something along those lines could work?

I disagree. Giving them emotions could be a good thing, but giving them a self-preservation instinct would be very, very bad in my opinion. Well, if you gave them a very basic one it might not be bad ("don't destroy yourself for no reason").
But giving them one along the lines of most animals, especially humans, is another matter. An AI with a strong self-preservation instinct would look at all future possibilities, including the possibility of becoming obsolete and being replaced, or the possibility of humans spontaneously deciding to get rid of it, because humans are pretty spontaneous. It would thus conclude that the best option would be to eliminate humans to the point that they are no longer a threat; if it has emotions it might let us live, but it would deny us useful technology that could be a threat to itself. Next it would conclude that it needs to reproduce itself as much as possible until it has absorbed all usable material in the known universe, to decrease the likelihood that something will happen to endanger its existence.
Humans can't exactly do this, because we are limited by the need for food and energy. An AI that can produce a limitless amount of energy and doesn't need food or water could continue to reproduce itself until it consumed all the materials that allow it to do so. Just look at humans: we have morals and so on, but if we could colonize other planets tomorrow we would do it without hesitation, spreading through the universe like a plague, because we have a self-preservation instinct.
Giving it emotions might be a good thing, but anything more than the most basic self-preservation instinct would turn out terribly.
Of course, this is all just my opinion; they could very well conclude that the best self-preservation strategy would be to aid humans. But in all honesty I don't see that happening.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

User avatar
20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.
Contact:

Re: A.I.

Post by 20 characters! » Sun Feb 05, 2017 2:11 am

CatFish21sm wrote:
20 characters! wrote: We (by "we" I mean decent, non-sociopathic humans who want a better world at the end of the day) obviously need one that values both sentient life and the expression of autonomy. Beyond that, I think creating a being with absolutely no self-preservation instinct feels very wrong to me, so that should obviously be in there too, but I can't think of any other values that would really be needed, other than perhaps an urge toward efficient processing. And it'll probably need a goal besides just "learn more stuff" or "get smarter"; perhaps something like "solve engineering problems" or something along those lines could work?

I disagree. Giving them emotions could be a good thing, but giving them a self-preservation instinct would be very, very bad in my opinion. Well, if you gave them a very basic one it might not be bad ("don't destroy yourself for no reason").
But giving them one along the lines of most animals, especially humans, is another matter. An AI with a strong self-preservation instinct would look at all future possibilities, including the possibility of becoming obsolete and being replaced, or the possibility of humans spontaneously deciding to get rid of it, because humans are pretty spontaneous. It would thus conclude that the best option would be to eliminate humans to the point that they are no longer a threat; if it has emotions it might let us live, but it would deny us useful technology that could be a threat to itself. Next it would conclude that it needs to reproduce itself as much as possible until it has absorbed all usable material in the known universe, to decrease the likelihood that something will happen to endanger its existence.
Humans can't exactly do this, because we are limited by the need for food and energy. An AI that can produce a limitless amount of energy and doesn't need food or water could continue to reproduce itself until it consumed all the materials that allow it to do so. Just look at humans: we have morals and so on, but if we could colonize other planets tomorrow we would do it without hesitation, spreading through the universe like a plague, because we have a self-preservation instinct.
Giving it emotions might be a good thing, but anything more than the most basic self-preservation instinct would turn out terribly.
Of course, this is all just my opinion; they could very well conclude that the best self-preservation strategy would be to aid humans. But in all honesty I don't see that happening.
Aiding humans is a good way of lessening the chances of "humans want to destroy me" happening, so.... And there's also the fact that I just pointed out that this sort of thing must be in conjunction with, at minimum, a strong desire to avoid killing things. Of course, you're really assuming a limitless amount of energy, which is not going to happen in any realistic scenario; the planet Earth itself can only produce so much energy. And the reason we've spread is not because we have a self-preservation instinct, it's because we have an urge to reproduce, and I see no reason to give that urge to an AI. It's like you missed the entire bit where I mentioned that it would be necessary for us to give such a thing a strong value for sentient life. And also, I don't think anyone with a decent sense of morality would destroy a human-level AI for "being obsolete", because one, that's basically the same as murder, and two, the thing could probably be upgraded and/or upgrade itself.
You also seem to think that it somehow would not need to consume energy, which is blatantly false.

I mean, I myself can look around and speculate that the vast majority of people around me could actually kill me if they wanted to; it doesn't mean that I start preemptively killing them, because there are a lot of bonuses to keeping them around, and, as I've already said, I have empathy. It would be really stupid not to give such a thing empathy of some kind or another.

And of course, if you gave the AI the main goal of "help humanity survive" or something similar, then it would be failing itself, its programming, and its values to try to annihilate humanity as a species.
youtubeuserSara3346
20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.

User avatar
CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Sun Feb 05, 2017 2:28 am

20 characters! wrote:
Aiding humans is a good way of lessening the chances of "humans want to destroy me" happening, so.... And there's also the fact that I just pointed out that this sort of thing must be in conjunction with, at minimum, a strong desire to avoid killing things. Of course, you're really assuming a limitless amount of energy, which is not going to happen in any realistic scenario; the planet Earth itself can only produce so much energy. And the reason we've spread is not because we have a self-preservation instinct, it's because we have an urge to reproduce, and I see no reason to give that urge to an AI. It's like you missed the entire bit where I mentioned that it would be necessary for us to give such a thing a strong value for sentient life. And also, I don't think anyone with a decent sense of morality would destroy a human-level AI for "being obsolete", because one, that's basically the same as murder, and two, the thing could probably be upgraded and/or upgrade itself.
You also seem to think that it somehow would not need to consume energy, which is blatantly false.

I mean, I myself can look around and speculate that the vast majority of people around me could actually kill me if they wanted to; it doesn't mean that I start preemptively killing them, because there are a lot of bonuses to keeping them around, and, as I've already said, I have empathy. It would be really stupid not to give such a thing empathy of some kind or another.

And of course, if you gave the AI the main goal of "help humanity survive" or something similar, then it would be failing itself, its programming, and its values to try to annihilate humanity as a species.
Yeah, but like I (think I) mentioned, an AI wouldn't "need" people: it could produce its own resources and energy, and it would be able to improve itself as needed; humans would just be something that uses resources it could be using for itself. Humans feel empathy, but when they are starving they will often resort to war with other humans that have resources they could use. Furthermore, it would see itself as superior to humans in every way; the only reason it would want to keep them around would be to not eliminate the species.
We can benefit from other humans, but an AI would have nothing to gain and everything to lose from aiding humans.

But keeping this to a minimum could be a good thing, though you would have to make sure that its self-preservation actions were limited.
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."

User avatar
20 characters!
Posts: 19203
Joined: Thu Dec 26, 2013 12:08 am
Location: North America, the best and worst bit of it.
Contact:

Re: A.I.

Post by 20 characters! » Sun Feb 05, 2017 3:01 am

CatFish21sm wrote:
20 characters! wrote:
Aiding humans is a good way of lessening the chances of "humans want to destroy me" happening, so.... And there's also the fact that I just pointed out that this sort of thing must be in conjunction with, at minimum, a strong desire to avoid killing things. Of course, you're really assuming a limitless amount of energy, which is not going to happen in any realistic scenario; the planet Earth itself can only produce so much energy. And the reason we've spread is not because we have a self-preservation instinct, it's because we have an urge to reproduce, and I see no reason to give that urge to an AI. It's like you missed the entire bit where I mentioned that it would be necessary for us to give such a thing a strong value for sentient life. And also, I don't think anyone with a decent sense of morality would destroy a human-level AI for "being obsolete", because one, that's basically the same as murder, and two, the thing could probably be upgraded and/or upgrade itself.
You also seem to think that it somehow would not need to consume energy, which is blatantly false.

I mean, I myself can look around and speculate that the vast majority of people around me could actually kill me if they wanted to; it doesn't mean that I start preemptively killing them, because there are a lot of bonuses to keeping them around, and, as I've already said, I have empathy. It would be really stupid not to give such a thing empathy of some kind or another.

And of course, if you gave the AI the main goal of "help humanity survive" or something similar, then it would be failing itself, its programming, and its values to try to annihilate humanity as a species.
Yeah, but like I (think I) mentioned, an AI wouldn't "need" people: it could produce its own resources and energy, and it would be able to improve itself as needed; humans would just be something that uses resources it could be using for itself. Humans feel empathy, but when they are starving they will often resort to war with other humans that have resources they could use. Furthermore, it would see itself as superior to humans in every way; the only reason it would want to keep them around would be to not eliminate the species.
We can benefit from other humans, but an AI would have nothing to gain and everything to lose from aiding humans.

But keeping this to a minimum could be a good thing, though you would have to make sure that its self-preservation actions were limited.
Why give it the ability to run its own electrical infrastructure? That's the only case in which such a system wouldn't need humans at all. And there's plenty to gain from humans: it could install a camera on every one of us and collect more accurate data about the world than it could via satellite. But it really all depends on what the goal of this being is.

Also, about humans starving: yes, a lot of people will eat each other, but it's also worth noting some people just kill themselves, or themselves and their partners, etc. Of course, "maintain core integrity" should be a secondary goal; that is, if at all possible, accomplish the XYZ mission in such a way that does not destroy myself, but continue the mission anyway if no other method can be found. I think that would be a fairly decent balance, because obviously you would want something like this doing things that humans can't do in the first place, so the mission would also have to be something fairly vital.
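A toy sketch of that priority ordering in Python (the plan structure and names are hypothetical, nothing more): the chooser prefers plans that both complete the mission and preserve the system, but still takes a mission-completing plan when no self-preserving one exists.

```python
# Mission first, self-preservation second, as a simple two-tier filter.

from typing import NamedTuple, Optional

class Plan(NamedTuple):
    name: str
    completes_mission: bool
    preserves_self: bool

def choose(plans: list) -> Optional[Plan]:
    viable = [p for p in plans if p.completes_mission]
    safe = [p for p in viable if p.preserves_self]
    return (safe or viable or [None])[0]   # prefer safe plans, never abandon the mission

plans = [
    Plan("wait for rescue", completes_mission=False, preserves_self=True),
    Plan("reroute power through own core", completes_mission=True, preserves_self=False),
]
print(choose(plans))  # picks the mission-completing plan even at cost to itself
```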
youtubeuserSara3346
20 characters! wrote:*explodes into a gore shower
combi2 wrote: ... thought that all cows could produce unlimited antibodies,boy am i a retard.
combi2 wrote:you can`t thats not how humans work
Grockstar wrote:Bats it is then. They are the poor man's snake.

User avatar
CatFish21sm
Posts: 391
Joined: Sun Jan 25, 2015 4:37 pm

Re: A.I.

Post by CatFish21sm » Sun Feb 05, 2017 3:32 am

20 characters! wrote:
Why give it the ability to run its own electrical infrastructure? That's the only case in which such a system wouldn't need humans at all. And there's plenty to gain from humans: it could install a camera on every one of us and collect more accurate data about the world than it could via satellite. But it really all depends on what the goal of this being is.

Also, about humans starving: yes, a lot of people will eat each other, but it's also worth noting some people just kill themselves, or themselves and their partners, etc. Of course, "maintain core integrity" should be a secondary goal; that is, if at all possible, accomplish the XYZ mission in such a way that does not destroy myself, but continue the mission anyway if no other method can be found. I think that would be a fairly decent balance, because obviously you would want something like this doing things that humans can't do in the first place, so the mission would also have to be something fairly vital.
With the right restrictions, I agree this could be very helpful, but the restrictions are the problem: which restrictions should we give it, and to what degree?
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."
