Thanks, I hope I'm not too forceful. <_<'

CatFish21sm wrote: "Interesting points. I'm glad we had this discussion; it's one that's always interested me, and anyone else I have it with usually points to movies, books, or other fiction for reference. >.>"
I'll have to give this a lot more thought; thanks for broadening my mindset a little! I hope I did the same for you.
Aaaah, OK. So a "sufficiently advanced" S.I. would be reactive and adapt really well to new situations, but to qualify as an A.I. it would need to set goals for itself without being prompted and work on them in its "spare time". An inner life of a sort, if I understand correctly.

CatFish21sm wrote: "As for S.I., I mean a learning program similar to the neural networks that exist today: they can learn and improve themselves, but they cannot do anything outside of their programming. So basically they are like computers that can change the outputs for the inputs to become more accurate, but they can't go off and do things on their own. They still need an input, and they can only act on that input within the bounds of their programming. For one to be able to do more, you would need to improve its code to allow for that. An S.I. upgraded to an A.I. would still have all of the code that tells it what it can and can't do, but it would be able to alter its own code and perform actions without a direct input in order to become more accurate. So basically a really advanced S.I. could look and act like an A.I., but it couldn't make decisions on its own without some kind of input first; an A.I. would be able to do that."
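For what it's worth, the S.I. side of that distinction is easy to sketch in code. Here's a minimal toy (entirely hypothetical, not any real system): it tunes a single number to make its outputs more accurate for the inputs it's given, but it only ever acts when handed an input, and nothing in it can rewrite its own rules.

```python
# Toy illustration of the S.I. idea above: a learner that adjusts its
# output for each input to become more accurate, but never acts without
# an input and cannot alter its own code. (Hypothetical sketch only.)

class ReactiveLearner:
    def __init__(self):
        self.weight = 0.0  # the only thing it is allowed to change

    def respond(self, x):
        # It only ever produces an output when given an input.
        return self.weight * x

    def learn(self, x, target, rate=0.1):
        # Nudge the weight so the next response is more accurate.
        error = target - self.respond(x)
        self.weight += rate * error * x

learner = ReactiveLearner()
for _ in range(50):
    learner.learn(2.0, 6.0)   # teach it that input 2.0 should yield 6.0
print(round(learner.respond(2.0), 2))  # prints 6.0
```

An A.I. in CatFish21sm's sense would be the version that could rewrite `learn` itself, or call `respond` without anyone asking; nothing in this sketch can do either.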
As you say, it could "look and act like an A.I." in that, to an outside observer, it might not be obvious at first glance whether an original action is completely spontaneous or a reaction we just can't trace back to a specific input. More philosophically, would an impulse coded into an A.I. by its predecessor count as spontaneous, or as an input?... @_@'
... I guess one critical aspect of S.I./A.I. design is the ability to inspect the way they think. Technically we could do without it, and it's an added difficulty in the design, but it'd be so much simpler in the formative stage if we could at least get glimpses of what the hell their reasoning is!
CatFish21sm wrote: "An A.I. with a strong self-preservation instinct would look at all future possibilities, including the possibility of becoming obsolete and being replaced, or of humans spontaneously deciding to get rid of it, because humans are pretty spontaneous. It would thus conclude that the best option is to eliminate humans to the point that they are no longer a threat [...]"
Note that it is difficult to get into the shoes of an entity smarter than ourselves, even if we know its motivations exactly, because we just don't have the same grasp on the factual situation. Similarly, we can't entirely trust limitations that are not built into the A.I. itself, because if it really set its mind to it, it might figure out a way around them that we humans haven't thought of. Which is pretty creepy. In this case, we know a self-serving A.I. would serve itself (because, duh), but how is another matter... Assuming the worst is a good rule of thumb, not because every A.I. would inexplicably hate humans, but because, in case we're wrong, we're better off not creating right now an A.I. that could have helped us than creating an A.I. that will destroy us.

20characters! wrote: "Aiding humans is a good way of lessening the chances of 'humans want to destroy me' happening, so..."
Hence the idea of giving it really unambiguous motivations: they're the only thing we could be sure of!
A relatively happy but definitely embarrassing outcome is when the A.I. we just painstakingly created simply strikes out on its own, never looking back toward Earth.

CatFish21sm wrote: "Yeah, but like I (think) I mentioned, an A.I. wouldn't 'need' people; it could produce its own resources and energy."