No True AI
Shower thought, but I’m getting a certain “no true Scotsman” vibe from my twitter stream about AI recently. “Even if it does pass the Turing test, it’s not sentient”, which is interesting! When people are presented with something that passes the previously impossible-to-approach barrier they set, do they accept that it passes the barrier, or decide the barrier was wrong? Were we never seriously treating it as a good test until something arrived to challenge it? Or is it a sign that the whole framing is bad? Why does this test matter? And so on. As a society we just don’t believe in intelligent computers, so pre-writing a test decades ago doesn’t help at all: we only use the test until something “passes” it (to be clear, this expert system is not intelligent, but that’s not the interesting thing here), and as soon as something does pass it, we’ll move the test.
Feels somewhat like the google engineer’s mistake was jumping from “I can have a conversation with this” to “therefore we should never turn it off and it should be allowed to vote”, and when/if intelligent machines do arrive, they’re not going to get that. I have a feeling that (assuming we can build intelligence at all) we’ll end up with Star Wars droids: intelligent, with personalities, you can make friends with them, they’ll have wants and dreams, but at the same time absolutely everyone in society (including the droids) accepts/assumes that they’re slaves / property / subhuman, with no rights.