The Secret Lives of Facebook Moderators in America

That people don’t know there are human beings doing this work is, of course, by design. Facebook would rather talk about its advancements in artificial intelligence, and dangle the prospect that its reliance on human moderators will decline over time.

But given the limits of the technology, and the infinite varieties of human speech, such a day appears to be very far away. In the meantime, the call center model of content moderation is taking an ugly toll on many of its workers. As first responders on platforms with billions of users, they are performing a critical function of modern civil society, while being paid less than half as much as many others who work on the front lines. They do the work as long as they can — and when they leave, an NDA ensures that they retreat even further into the shadows.

To Facebook, it will seem as if they never worked there at all. Technically, they never did.

Why CAPTCHAs Have Gotten So Difficult

Because CAPTCHA is such an elegant tool for training AI, any given test could only ever be temporary, something its inventors acknowledged at the outset. With all those researchers, scammers, and ordinary humans solving billions of puzzles just at the threshold of what AI can do, at some point the machines were going to pass us by. In 2014, Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8 percent of the time, while the humans got a mere 33 percent.

→ The Verge

The Dark Secret at the Heart of AI

Credit: Adam Ferriss

As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?

→ MIT Technology Review

Be Your Selves

The internet and social media don’t create new personalities; they allow people to express sides of themselves that social norms discourage in the “real world”.

• • •

We may come to see face-to-face conversation as the social medium that most distorts our personalities. It requires us to speak even when we don’t know what to say and forces us to be pleasant or acquiescent when we would rather not.

• • •

Social media have turned a species used to intimacy into performers. But these performances are not necessarily false. Personality is who we are in front of other people. The internet, which exposes our elastic personalities to larger and more diverse groups of people, reveals the upper and lower bounds of our capacity for empathy and cruelty, anxiety and confidence.

→ 1843 Magazine

Is Artificial Intelligence Permanently Inscrutable?
The result is that modern machine learning offers a choice among oracles: Would we like to know what will happen with high accuracy, or why something will happen, at the expense of accuracy? The “why” helps us strategize, adapt, and know when our model is about to break. The “what” helps us act appropriately in the immediate future.

It can be a difficult choice to make. But some researchers hope to eliminate the need to choose—to allow us to have our many-layered cake, and understand it, too. Surprisingly, some of the most promising avenues of research treat neural networks as experimental objects—after the fashion of the biological science that inspired them to begin with—rather than analytical, purely mathematical objects.

→ Nautilus