Yesterday I had a conversation with Niels, who suggested it's because we don't understand this kind of technology.
And when we don't understand something, we either:
🤷‍♀️ Consider it inevitable,
🙅‍♀️ Refuse to have anything to do with it, or
🙇‍♀️ Ascribe a higher 'intelligence' to it.
Instead of accepting that there is something we don’t understand (yet), we create myths about magic machines and intelligent black boxes.
But what should we do instead?
My truly intelligent colleague said that we should acknowledge that there is always an element of fear in what we don’t understand.
And instead of covering up that we are afraid, we should create contexts where we take our fear seriously.
All technology is potentially dangerous, so we should be vigilant when using technology we don’t understand.
So, maybe the important question is not how to understand and use artificial intelligence.
Maybe the important question is: how do we design technology that makes us more vigilant?