
At the heart of this conundrum are the tangled vines of transparency and trust. When we interact with algorithms, we know we are dealing with machines. Yet somehow their intelligence and their ability to mimic our own patterns of thought and communication confuse us into viewing them as human. Researchers have observed that when computer users are asked to describe how machines interact with them, they use anthropomorphic terms such as “integrity,” “honesty,” and “cruelty.” We also speak of algorithms’ “behavior” or of their “going rogue.” Our language, at least, would suggest that we expect the same degree of trustworthiness, benevolence, and fairness from the computer algorithms we deal with as we do from our human peers.