Each one of your stories reminds me of slightly different things, even if they're on related themes. This one reminded me of Zero HP Lovecraft's "God-Shaped Hole".
It seemed inevitable this was heading towards a human-bot mismatch, but it didn't play out the way classical conventions would dictate - props for that. Plus it's left ambiguous What It All Means: was that really a bot? Or are some genuine humans actually so programmatically rote in some contexts that they themselves can't pass the TT? (Insert modern NPC meme.) Which of course has Implications, since I think we all know the former very much exists IRL right now, no Singularity needed. Any autist can also relate: sometimes even from the inside, "being human" can feel an awful lot like merely predicting the next expected token. It's more complicated with many-layered interactions like body language and social graces, but mere text...it's so easy to escape the Uncanny Valley with text alone. And dialogue, to a certain extent (current voice deepfakes are remarkably accurate, even if the video parts remain jarringly fake).
Never mind the in-universe complication that now every single "human" could actually have been a bot. There's nothing in the parameters requiring them not to lie - in fact that's the whole point of the game! And lying convincingly is one of the more important tells that someone is a human...it's actually very hard to retain 100% intellectual consistency, and calling out such "blunders" in real human interactions is a quick way to ostracize oneself as a social loser who Just Doesn't Get It. So it'd make sense to keep around a real human for training purposes, possibly even set them up in a Truman Show-type situation.
Also, it's interesting that Alan apparently has both really excellent eyesight and the ability to quickly and reliably read text backwards. Unless I misunderstood something about teleprompters and camera angles. (Or maybe he's just guessing at the prompts...which is possibly worse, since they'd be uncannily good guesses.)
Thanks for your detailed thoughts once again. I'm obviously a fan of a number of past AI stories that wrestle with that sort of human/AI ambiguity, and hoped to do something a little different that wasn't a re-tread, on top of exploring some ideas for an adversarial Turing Test, some of which may already be out of date.
I might be willing to give Alan credit for reading text backwards reasonably quickly, but I'd say the live-action version is free to change the angles so that wouldn't be necessary.
This was an interesting story, Mark. The premise pulled me in and made me root for the human. I saw scary hints of what a bot-run future could look like.