2/14/2006 10:36:36 AM
2/14/2006 11:50:48 AM
No logical, robotic being would have use for humanity after a point. You'd better damn well hope the things understand sentimental value.
2/14/2006 11:55:04 AM
^ I disagree. It wouldn't SEE a use for humanity after a point. A large part of humanity's ability to survive hinges on acting irrationally. A combination of (nearly) flawless computational power and the ability to think without being 100% caged by logic would be ideal.
2/14/2006 12:16:40 PM
Thing is, even if a robot had no use for humanity, it doesn't necessarily mean that the robots would attempt to exterminate us James Cameron-style. They'd probably just ignore us and go about their merry way into space. It's not like they'd have any reason to stick around on Earth.
2/14/2006 12:59:51 PM
^ That's right. It'd be more "logical" not to risk extermination and the cost of a drawn-out war.
2/14/2006 1:00:46 PM
Yeah. I don't know why people usually assume that artificially intelligent robots would be as territorial as humans, or limit themselves to the confines of the surface of the Earth when there's so much more room in space.
2/14/2006 1:03:21 PM
I don't know why anybody thinks artificially intelligent robots would be "logical".
2/14/2006 1:05:57 PM
I don't see why anyone would think they'd be purely logical either. What's your point?
2/14/2006 1:14:51 PM
Irrational humans wouldn't allow it. Since war is man's very nature, logic would dictate the most efficient means of exterminating humanity or reducing it to the point where it can't possibly pose a threat. No war is necessary; just unleash biological agents that don't affect machines. I mean, really, there's absolutely no point to humanity if true artificial intelligence is created.
2/14/2006 1:36:25 PM
There's a gap in your logic chain that I'm missing. How do you get from "they wouldn't need us" to "therefore they would destroy us"?
2/14/2006 1:37:19 PM
^ Too many science fiction books.
2/14/2006 1:46:35 PM
I'd think we'd design them to be more proficient with logic than ourselves, but that they'd have a limited means of "consciously(?)" deviating from that pattern of processing information when the situation demanded. IOW, it wouldn't be purely logical (i.e. Spock) all the time, but it'd have a much higher capacity for logical thought than we do.
2/14/2006 2:18:35 PM
I think we'll end up copying human intelligence first. This means that AIs won't be much more logical than we are.
2/14/2006 4:58:01 PM
2/14/2006 7:09:30 PM
No, not truly. Emulations and mimics, sure, but the real deal? No.
2/14/2006 11:27:01 PM
Could you elaborate on why you feel a technical issue of complexity will never be solved?
2/15/2006 1:19:33 AM
Or, for fun, explain how our own actions are different from emulation and imitation?
2/15/2006 3:28:59 AM
in a word, choice.
2/15/2006 4:50:35 AM
2/15/2006 8:48:36 AM
2/15/2006 9:37:07 AM
2/15/2006 12:01:50 PM
2/15/2006 12:06:04 PM
2/15/2006 5:54:10 PM
Wow, choice looks an awful lot like copping out.
2/15/2006 6:01:41 PM
Damnit dirtygreek, I was just coming to post that.
2/15/2006 6:02:59 PM
haha i love those dino comics... except the ones i don't understand (But i understand that one!!!!!1)
2/16/2006 1:26:26 AM