Conversational toxicity product UX
This is how the team behind the conversational toxicity monitoring tool envisioned the user experience of their product:
- It would be offered as a browser extension
- It would support a specific set of websites: Reddit, Facebook, and Twitter
- When a user views a post or group thread, a colored box appears next to the title showing a Volatility Measure: Red, Yellow, or Green
For enterprise users:
-- It would integrate with a certain set of products: Slack, Skype, Lync (typical channels for intra-company communication)
-- Designated users viewing a channel would see its Volatility Measure
-- The Volatility Measure could be turned on or off for specific users
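The write-up doesn't say how the Volatility Measure would actually be computed, but a minimal sketch might average per-message toxicity scores from some classifier and bucket the result into the three colors. The function name, the averaging scheme, and the thresholds below are all my assumptions, not the team's design:

```python
def volatility_measure(toxicity_scores, yellow=0.3, red=0.6):
    """Map a thread's per-message toxicity scores (0.0-1.0) to a
    traffic-light label. Thresholds are illustrative guesses."""
    if not toxicity_scores:
        return "Green"  # an empty thread has nothing volatile in it
    avg = sum(toxicity_scores) / len(toxicity_scores)
    if avg >= red:
        return "Red"
    if avg >= yellow:
        return "Yellow"
    return "Green"

print(volatility_measure([0.1, 0.2, 0.05]))       # a calm thread
print(volatility_measure([0.7, 0.9, 0.8, 0.65]))  # a heated thread
```

A real product would presumably weight recent messages more heavily and account for thread length, but the traffic-light reduction is the core of the UX described above.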
I could see how this could be useful for individual users when deciding whether or not to read the comments on some post, especially on highly contentious websites (yes, the classic advice says "Never read the comments!", but sometimes comments can be useful and informative). As for the enterprise version... I don't know if I was the only person in the room who cringed on hearing that this team wanted to give corporations yet another way to monitor employee conversations. As if one didn't already feel that corporations monitor employees ever more closely! Besides, if the goal of such monitoring was to prevent harassment, its utility would be dubious, because most harassment does not take place through text channels anyway. Everybody knows that anything you send through email, Skype, Slack, or some instant messaging tool is saved on the company's servers. (Yes, there was that high-profile case at Uber where harassment took the form of Slack messages, but that was an exception to the rule, and it gave the victim much stronger grounds for her claims than most harassment victims have.)
If anyone else had similar reservations, they didn't voice them. Instead, somebody asked a question that was all the funnier for being legitimate: "Did you think you had an accurate model of conversational toxicity if you trained your model only on Reddit messages?"
The conversational toxicity tool was one of the projects presented at the Women in Tech Machine Learning Product hackathon, held July 28-29, 2018 at Capital Factory in Austin, TX.