A startup called Spectrum Labs provides artificial intelligence technology to platform providers to detect and shut down toxic exchanges in real time.
But experts say that AI monitoring also raises privacy issues.
More Hate Speech
Spectrum Labs promises a high-tech solution to the age-old problem of hate speech.
“There are about 500 million tweets a day on Twitter alone,” he added.
“Instead, we use smart tools like AI to automate the process.”
There is also a human cost for the people who have to monitor and moderate this content.
Spectrum isn’t the only company that seeks to detect online hate speech automatically.
For example, Centre Malaysia recently launched an online tracker designed to find hate speech among Malaysian netizens.
The software they developed, called the Tracker Benci, uses machine learning to detect hate speech online, particularly on Twitter.
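Neither Spectrum Labs nor Centre Malaysia has published the details of its models. As a rough illustration of the general approach, here is a minimal sketch of a bag-of-words naive Bayes classifier trained on a handful of made-up labeled phrases; the training data and function names are hypothetical, and real systems train far larger models on millions of examples.

```python
# Minimal sketch: naive Bayes text classification for toxicity.
# Toy data and names are hypothetical, not any vendor's actual system.
from collections import Counter
import math

# Hypothetical labeled examples; production systems use millions.
TRAIN = [
    ("you are a wonderful person", "ok"),
    ("thanks for the helpful answer", "ok"),
    ("have a great day everyone", "ok"),
    ("you are a worthless idiot", "toxic"),
    ("nobody wants you here idiot", "toxic"),
    ("shut up you worthless troll", "toxic"),
]

def train(examples):
    """Count word frequencies per class and build the vocabulary."""
    counts = {"ok": Counter(), "toxic": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Pick the class with the highest Laplace-smoothed log-likelihood."""
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)
```

On these toy phrases, `classify("you worthless idiot", ...)` lands in the `toxic` class because those words appear far more often in the toxic examples. Real deployments replace the bag-of-words model with large neural language models, but the core idea of learning from labeled examples is the same.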
The challenge is how to create spaces in which people can really engage with each other constructively.
AI speech monitoring shouldn’t raise privacy issues if companies use publicly available information during monitoring, Fox said.
Most importantly, technology can reduce the amount of toxic content human moderators are exposed to, he said.
We may be on the cusp of a revolution in AI monitoring of human speech and text online.
But some experts say that humans will always need to work alongside computers to monitor hate speech.
“AI alone won’t work,” Raicu said.