The internet is supposed to be a place that fosters community and conversation—but all too often, it’s a place where people are subjected to hateful speech and online harassment. More than 40 percent of Americans have personally experienced some form of online harassment, and in 2017, 27 percent reported deciding not to post something online after witnessing others being harassed. Meanwhile, the sheer volume of toxic commentary online has been enough to force prominent news organizations to shut down the comments sections on their articles entirely.
The New York Times, however, refuses to let the trolls win. They’ve invested significant resources in community moderation to ensure that their readers have a productive place to discuss all sides of an issue and connect freely over the topics that matter most, without being subjected to abuse.
But those high standards are hard to scale. Careful moderation takes time, and in 2016 The New York Times could enable comments on only about 10 percent of its articles. Eager to find a solution that would let them support comments across a greater swath of articles, they turned to machine learning for help.
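The article doesn’t spell out the pipeline at this point, but to make “machine learning to help” concrete, here is a minimal, purely hypothetical sketch of how a toxicity model might triage comments so that human moderators only review the ambiguous middle. The scorer, function names, and thresholds are all illustrative assumptions, not the Times’s actual system:

```python
# Hypothetical sketch: triaging reader comments with a toxicity score.
# `score_toxicity` stands in for a real ML model; the thresholds are
# illustrative, not The New York Times's actual values.

def score_toxicity(comment: str) -> float:
    """Placeholder for an ML model that returns a toxicity score in [0, 1]."""
    hostile_words = {"idiot", "stupid", "hate"}
    words = (w.strip(".,!?") for w in comment.lower().split())
    return min(1.0, sum(w in hostile_words for w in words) / 3)

def triage(comment: str, approve_below: float = 0.2, reject_above: float = 0.8) -> str:
    """Route a comment: auto-approve the clearly civil, auto-reject the
    clearly toxic, and queue everything in between for a human moderator."""
    score = score_toxicity(comment)
    if score < approve_below:
        return "approve"
    if score > reject_above:
        return "reject"
    return "human_review"

if __name__ == "__main__":
    samples = [
        "Great reporting, thank you.",
        "You idiot, I hate stupid takes like this!",
    ]
    for c in samples:
        print(f"{triage(c):>12}: {c}")
```

The design point such a system relies on is that automation handles the unambiguous extremes at scale, while scarce human attention is reserved for the borderline cases where judgment matters most.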