It’s easy to say that the technology companies can do better. They can, and should. But ultimately, that’s not the problem. The problem is the media ecosystem they have created. The only surprise is that anyone would still be surprised that social media produces this tragic abyss, for this is what social media is supposed to do—what it was designed to do: to spread the images and messages that accelerate interest, without check, and absent concern for their consequences. It’s worth remembering that “viral” spread once referred to contagious disease, not to images and ideas. As long as technology platforms drive the spread of global information, they can’t help but carry it like a plague.
Fox has long been a bane of liberals, but in the past two years many people who watch the network closely, including some Fox alumni, say that it has evolved into something that hasn’t existed before in the United States. Nicole Hemmer, an assistant professor of presidential studies at the University of Virginia’s Miller Center and the author of “Messengers of the Right,” a history of the conservative media’s impact on American politics, says of Fox, “It’s the closest we’ve come to having state TV.”
Americans need the same resolve in fighting for competition that their corporations have shown in fighting against it.
The challenge, as always, is political. But with US corporations having amassed so much power, there is reason to doubt that the American political system is up to the task of reform. Add to that the globalization of corporate power and the orgy of deregulation and crony capitalism under Trump, and it is clear that Europe will have to take the lead.
At the heart of this conundrum are the tangled vines of transparency and trust. When we interact with algorithms, we know we are dealing with machines. Yet somehow their intelligence and their ability to mimic our own patterns of thought and communication confuse us into viewing them as human. Researchers have observed that when computer users are asked to describe how machines interact with them, they use anthropomorphic terms such as “integrity,” “honesty,” and “cruelty.” I also refer to algorithms’ “behavior” or their “going rogue.” Our language, at least, would suggest that we expect the same degree of trustworthiness, benevolence, and fairness from the computer algorithms we deal with as we do from our human peers.
That people don’t know there are human beings doing this work is, of course, by design. Facebook would rather talk about its advancements in artificial intelligence, and dangle the prospect that its reliance on human moderators will decline over time.
But given the limits of the technology, and the infinite varieties of human speech, such a day appears to be very far away. In the meantime, the call center model of content moderation is taking an ugly toll on many of its workers. As first responders on platforms with billions of users, they are performing a critical function of modern civil society, while being paid less than half as much as many others who work on the front lines. They do the work as long as they can — and when they leave, an NDA ensures that they retreat even further into the shadows.
To Facebook, it will seem as if they never worked there at all. Technically, they never did.
The new Jesko by Koenigsegg, named after the company founder’s father, is claimed to be the world’s first road-legal 300 mph car. That top speed translates to more than 480 kilometers per hour. To put it in more relatable terms, this car can travel fast enough to cover the length of a football field — doesn’t matter which version of football you prefer — in less than a second. When you think of it in those terms, you’ll probably also realize just how theoretical performance like that is: there aren’t many straight lines in the world long enough to let a person hit such ludicrous speeds.
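The arithmetic behind those claims checks out, as a quick back-of-the-envelope sketch shows. The field lengths below are my own assumptions (an American field including end zones, and a typical association-football pitch), not figures from Koenigsegg:

```python
# Sanity-check the conversion and the "football field in under a second" claim.
MPH_TO_KMH = 1.609344  # exact statute-mile conversion factor

top_speed_mph = 300
top_speed_kmh = top_speed_mph * MPH_TO_KMH       # ≈ 482.8 km/h, i.e. "more than 480"
speed_m_per_s = top_speed_kmh * 1000 / 3600      # ≈ 134.1 meters per second

# Assumed field lengths in meters; either version of football works.
fields_m = {
    "American football (incl. end zones)": 109.7,
    "Association football (typical pitch)": 105.0,
}

for name, length_m in fields_m.items():
    seconds = length_m / speed_m_per_s
    print(f"{name}: {seconds:.2f} s")
```

Either way you measure the field, the crossing time comes out around eight-tenths of a second.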
Because CAPTCHA is such an elegant tool for training AI, any given test could only ever be temporary, something its inventors acknowledged at the outset. With all those researchers, scammers, and ordinary humans solving billions of puzzles just at the threshold of what AI can do, at some point the machines were going to pass us by. In 2014, Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8 percent of the time, while the humans got a mere 33 percent.