Conversational toxicity tool team mission
Here is the conversational toxicity tool team's mission in their own words:
Understanding the tone of the threads on social media for consumers and stakeholders in business.
This addresses a new pain point in the modern tech world: many of us now talk more in chat than we do in person, but there are few tools out there that surface top-level information about these conversations.
They imagined building two versions of the tool: one aimed at enterprise companies, so they could identify toxic conversations going on in their internal channels, and another for individuals to identify toxic threads on social media. I don't know if I was the only person in the room who cringed hearing that someone wanted to provide yet another way for corporations to monitor employee conversations. As if one didn't already feel that corporations monitor employees more and more closely! Besides, if the goal of such monitoring was to prevent harassment, its utility would be dubious, because most harassment does not take place through text channels anyway. Everybody knows that anything you might send through email, Skype, Slack, or some instant messaging tool is being saved on the company's servers. (Yes, there was that high-profile case at Uber where harassment took the form of Slack messages, but that was an exception to the rule, and it gave the victim much stronger grounds for her claims than most harassment victims have.)
If anyone else had similar reservations, nobody said anything. Instead, somebody asked a question that was all the funnier because it was legitimate: "Do you think you'd have an accurate model of conversational toxicity if you trained your model only on Reddit messages?"
The conversational toxicity tool was one of the projects presented at the Women in Tech Machine Learning Product hackathon that took place July 28-29, 2018, at Capital Factory in Austin, TX.