There's an interesting post on Vicky Beeching's blog on how increasingly sophisticated Artificial Intelligences (AI) may impact human roles and interactivity in science. She poses some great questions at the end as well, which I hope to explore in a bit more detail here.
- Do you think you could be fooled by a highly programmed ChatBot at the end of a phone?
- What spiritual questions does all of this raise about our value of people vs machines?
She rightly voices concerns that the use of AI as an alternative communication tool may in some way degrade the interactive experience that human-to-human communication generally enjoys today, because the exchange becomes more one-sided and one-directional as it shifts from human-human to human-machine. The specific example she cites is that of an AI 'ChatBot' at the other end of a customer services call.
We must be concerned for each other and for the consequences of replacing people unnecessarily with technology. Everybody is different: while I like to use automated shopping tills, I know that many of my friends do not and still prefer manned ones. I find telephone software agents incredibly annoying and just want to speak to a real person immediately, yet others I know find the agent a more relaxing, less stressful experience. One size doesn't fit all people or situations, and we need to be both sensitive and courageous with that.
I was intrigued by the question about being "fooled" by a ChatBot, as the sophistication of these programs is increasingly reaching the point where it becomes a very real possibility. I imagine everyone would like to say no to this, but human-machine interactions are already very two-way: humans talk to their cars, sing along with their stereos, and machines and software fight alongside humans both in real war situations and as team mates in many modern video games. These are just a few examples of how, in certain situations and at certain times, we treat non-human constructs as other persons. There is already a relational aspect to human-machine interaction, and this is without anybody being "fooled" in the way VB's post means.
I do, however, disagree on that point. I would argue that to talk about humans being "fooled" is a very human-centric view. The human user is fooled into believing they are talking to another real human, but surely only because the ChatBot gives good cause to believe it. If we reached the point where a human engages a machine fully as an equal, we could just as well say that the human is in reality talking to a life-form that is not human but is nonetheless a person, and it is only human bias and discomfort that would deny this. Questions of being "fooled" become questions of 'getting to know you' instead!