A.I.


Re: A.I.

Post by White parrot » Mon Feb 06, 2017 5:33 pm

CatFish21sm wrote:Interesting points! I'm glad we had this discussion; it's one that's always interested me, and anyone else I have it with usually points to movies and books or fiction for reference >.>
I'll have to give this a lot more thought, thanks for broadening my mindset a little! I hope I could do the same for you.
Thanks, I hope I'm not too forceful. <_<'

CatFish21sm wrote:As for SI, I mean a learning program similar to the neural networks that exist today: they can learn and improve themselves but cannot do anything outside of their programming. So basically they are like computers that can change the outputs for the inputs to become more accurate, but they can't go off and do things on their own. They still need an input, and they can only act on that input within the bounds of their programming. So basically, for it to be able to do more, you would need to improve its code to allow for that. An SI that's upgraded to an AI will still have all of the code it needs to tell it what it can and can't do, but it will be able to alter its own code and perform actions without a direct input in order to become more accurate.
So basically a really advanced SI could look and act like an AI but couldn't make decisions on its own without some kind of input first; an AI would be able to do this, however.
Aaaah, OK. So a "sufficiently advanced" S.I. would be reactive and adapt really well to new situations, but to qualify as an A.I. it would need to set goals for itself without being prompted to and work on them in its "spare time". An inner life of a sort. If I understand correctly.
As you say, it could "look and act like an A.I." in that, for an outside observer, it might not be obvious at first glance whether an original action is completely spontaneous or a reaction that we just can't attach to a specific input. More philosophically, would an impulse coded inside an A.I. by its predecessor count as spontaneous or as an input?... @_@'
... I guess one critical aspect of S.I./A.I. conception is the ability to access the way they think. Technically we could do without it, and it's an added difficulty in conception, but it'd be so much simpler in the formative stage if we could at least get glimpses of what the hell their reasoning is! :(
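To make that concrete, here's the kind of thing I mean by "glimpses of their reasoning", as a toy Python sketch (every name in it is made up for illustration, nothing like a real system): a little decision program that hands back a human-readable trace of why it picked what it picked.

```python
# Toy sketch only: a decision-maker that exposes its "reasoning" as a trace.
# All names here are hypothetical; no real A.I. is anywhere near this simple.

class TraceableAgent:
    def __init__(self, weights):
        # Fixed preferences, e.g. {"urgency": 2.0, "cost": -1.0}
        self.weights = weights

    def decide(self, options):
        """Score each option and return (choice, trace), so a human can see
        why this option won instead of only seeing the final action."""
        trace = []
        best, best_score = None, float("-inf")
        for name, features in options.items():
            score = sum(self.weights.get(f, 0.0) * v for f, v in features.items())
            trace.append(f"{name}: score {score:.2f} from {features}")
            if score > best_score:
                best, best_score = name, score
        return best, trace


agent = TraceableAgent({"urgency": 2.0, "cost": -1.0})
choice, trace = agent.decide({
    "reply_now": {"urgency": 0.9, "cost": 0.2},
    "wait":      {"urgency": 0.1, "cost": 0.0},
})
print(choice)               # the action an observer would see
print("\n".join(trace))     # the "glimpse" into how it got there
```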

CatFish21sm wrote:An Ai with a strong self-preservation instinct would look at all future possibilities including the possibility of becoming obsolete and being replaced, or the possibility of humans spontaneously deciding to get rid of it, because humans are pretty spontaneous. It would thus conclude that the best option would be to eliminate humans to the point that they are no longer a threat [...]
20characters! wrote:Aiding humans is a good way of lessening the chances of "humans want to destroy me" happening, so...
Note that it is difficult to get into the shoes of an entity smarter than ourselves, even if we know its motivations exactly, because we just don't have the same grasp on the factual situation. Similarly, we can't entirely trust limitations which are not built into the A.I. itself, because if it really set its mind to it, it might figure out a way around them that we humans haven't thought of. Which is pretty creepy. :P In this case, we know a self-serving A.I. would serve itself (because duh), but how is another matter... Assuming the worst is a good rule of thumb, not because every A.I. would inexplicably hate humans, but because, in case we're wrong, we're better off holding back an A.I. that could have helped than creating one that will destroy us.
Hence the idea of giving it really unambiguous motivations, because this is the only thing we could be sure of! :o
CatFish21sm wrote:Yeah, but like I (think) I mentioned, an AI wouldn't "need" people; it could produce its own resources and energy
A relatively happy but definitely embarrassing outcome is when the A.I. we just painstakingly created simply strikes out on its own, never looking back toward Earth.
At this point, we shouldn't be surprised by anything nature does. She's like a meth addict whose drug-fueled rampages unfold in slow motion and span millions of years.
Silly Otter wrote:Welcome to the forum.
Please ignore the cultists.


Re: A.I.

Post by CatFish21sm » Mon Feb 06, 2017 8:53 pm

White parrot wrote: Thanks, I hope I'm not too forceful. <_<'
Nah, you're good!
White parrot wrote: Aaaah, OK. So a "sufficiently advanced" S.I. would be reactive and adapt really well to new situations, but to qualify as an A.I. it would need to set goals for itself without being prompted to and work on them in its "spare time". An inner life of a sort. If I understand correctly.
As you say, it could "look and act like an A.I." in that, for an outside observer, it might not be obvious at first glance whether an original action is completely spontaneous or a reaction that we just can't attach to a specific input. More philosophically, would an impulse coded inside an A.I. by its predecessor count as spontaneous or as an input?... @_@'
... I guess one critical aspect of S.I./A.I. conception is the ability to access the way they think. Technically we could do without it, and it's an added difficulty in conception, but it'd be so much simpler in the formative stage if we could at least get glimpses of what the hell their reasoning is! :(
That's mostly right. A good example would be a program that is responsible for making your coffee. An SI would learn how you like your coffee (dark, cream, etc.) and make it the way you like it. But an AI in the same role would go a step further and analyze your tastes compared with everyone else in the world, find someone with similar tastes, and suggest formulas that you might actually like better. Or it might go further than that and slightly alter the coffee and measure your reaction in an attempt to make a better coffee. And it wouldn't stop there: it would monitor your habits, when you drink coffee, at first the time of day, then the day of the week, and keep measuring your coffee-drinking habits until it has a 100% accuracy rating and can have your coffee made the perfect way, at the perfect temperature, by exactly the moment you pick up the cup.
An SI would work pretty well; it could set goals for itself, but it could only follow its programming. It couldn't experiment with new tastes unless you told it to, it couldn't plan for the future unless you told it to; it could only do what it's told to do. On the other hand, an AI could predict what you might tell it to do in the future, or even what you might want it to do but never directly tell it, and go ahead and set goals and functions for those actions without any specific command. It's mostly an issue of flexibility: an AI would be extremely flexible while an SI would be very rigid. For example, there is an SI right now that diagnoses cancer (better than doctors who specialize in that form of cancer) and can answer questions on Jeopardy and blow the other contestants away. But it can only answer questions, it can't predict future questions; furthermore, it can only diagnose one form of cancer, and it can't go off on its own and teach itself how to diagnose other forms. And most importantly, a true AI would be able to alter its own code in order to fix errors, make itself more efficient, and even give itself more abilities. An SI would never be able to do any of those. So it's more a matter of flexibility than anything else. A sufficiently well-programmed SI would simulate intelligence and could act flexible, but deep down it's still rigid. An AI, on the other hand, would have actual intelligence and would be very flexible.
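If it helps, here's a toy Python sketch of the rigid side of that coffee example (every name in it is made up; it's not meant to resemble any real system). The SI can only adjust the dials we gave it in response to feedback; experimenting with new dials, setting new goals, or rewriting its own code is exactly the part it can't do, and that's where an AI would differ.

```python
# Toy sketch of the "S.I." half of the coffee example: it tunes parameters
# from feedback, but only inside the behaviour we programmed. It will never
# decide on its own to try new recipes, anticipate your schedule, or rewrite
# its own update rule; that extra flexibility is what an A.I. would add.

class CoffeeSI:
    def __init__(self):
        self.strength = 0.5   # 0 = weak, 1 = strong
        self.sugar = 0.5      # 0 = none, 1 = very sweet
        self.rate = 0.2       # how fast it adapts

    def make_coffee(self):
        return {"strength": self.strength, "sugar": self.sugar}

    def learn(self, feedback):
        """feedback like {"strength": +1} means 'stronger next time'.
        The update rule is fixed; the SI only moves the dials it was given."""
        for dial, direction in feedback.items():
            current = getattr(self, dial)
            setattr(self, dial, min(1.0, max(0.0, current + self.rate * direction)))


si = CoffeeSI()
print(si.make_coffee())       # first guess
si.learn({"strength": +1})    # "too weak"
si.learn({"sugar": -1})       # "too sweet"
print(si.make_coffee())       # adapted, but still only strength and sugar
```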
White parrot wrote:A relatively happy but definitely embarrassing outcome is when the A.I. we just painstakingly created simply strikes out on its own, never looking back toward Earth.
There's a movie about that!
Though it's dramatized and I don't agree with all of its points (there are quite a few I don't agree with, but whatever), it's still a good movie that shows that even if we created an AI with the utmost caution and made the perfect AI, the fact that it's an AI could still bite us in the butt...
The movie is called "Her". It wasn't really big, so in case you haven't seen it, here's the trailer.
https://www.youtube.com/watch?v=WzV6mXIOVl4
"It's better than the trailer, but not much better..."
"So players are walking along, one player is being a cock, magical rocks scream out of the sky and flatten them and due to the beauty and amazement of seeing something like that everyone else in the party levels up."
