The Conversation Volatility Tool team worked on software that would enable individuals and businesses to better understand the volatility of conversations. I could not really tell whether they developed it into an actual product -- that is, a running piece of code -- or whether it remained at a conceptual level. They didn't demonstrate any running code -- but then again, almost no one at the hackathon did. My guess is that turning a machine learning concept into a product over a weekend is even harder than building a prototype web or mobile app.
The Conversation Volatility Tool aimed to "understand the tone of the threads on social media for consumers and stakeholders in business". Many of us now talk more in chat than we do in person, they explained, but there are no good ways to monitor those conversations for toxicity.
They imagined building two versions of the tool: one aimed at enterprise companies, so they could identify toxic conversations going on in their internal channels, and another for individuals to identify toxic threads on social media. I don't know if I was the only person in the room who cringed at hearing that someone wanted to give corporations yet another way to monitor employee conversations. As if one didn't already feel that corporations monitor employees more and more closely! Besides, if the goal of such monitoring was to prevent harassment, its utility would be dubious: everybody knows that anything you send through email, Skype, Slack, or some instant messaging tool is saved on the company's servers, so most harassment does not take place through text channels anyway. (Yes, there was that high-profile case at Uber where harassment took the form of Slack messages, but that was an exception to the rule, and it gave the victim much stronger grounds for her claims than most harassment victims have.)
If anyone had similar reservations, nobody said anything. Instead, somebody asked a question that was even funnier because it was legitimate: "Did you think you'd get an accurate model of conversational toxicity if you trained it only on Reddit messages?"
For individuals they would offer this tool as a browser extension, where a user could see a colored box next to a conversation (e.g. a comment thread) in green, yellow, or red to indicate the level of volatility. For corporations they would offer integration with Skype, Lync, or Slack.
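Just to make the traffic-light idea concrete, here is a minimal sketch of how scores might map to colors -- assuming the tool produces a volatility score between 0 and 1, which the team never specified, and with thresholds that are entirely my invention:

```python
def volatility_color(score: float) -> str:
    """Map a volatility score in [0, 1] to a traffic-light color.

    The 0.3 and 0.6 thresholds are hypothetical; the team never said
    how they would bucket scores into green/yellow/red.
    """
    if score < 0.3:
        return "green"
    if score < 0.6:
        return "yellow"
    return "red"
```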
Their model of volatility is based on the percentage of angry users per post. Angry users are determined by the number of "angry words" they use (per... what? Per unit of conversation, whatever that might be? Per post?). There is no telling whether the model accounts for angry words used in a sarcastic or humorous manner, or even as the opposite of their literal meaning -- usually clear to readers from the context, but not obvious to machine learning algorithms. Think of "I really hated this dish" next to a picture of an empty, dirty plate. Come to think of it, this project is basically sentiment analysis, which was all the rage 5-10 years ago -- which is why I say that hackathon judges give higher points to projects that remind them of something they already know. Did sentiment analysis ever solve this problem?
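Reading between the lines, the model they described might look something like this minimal sketch. The word list, the threshold for calling a user "angry", and the counting unit are all my guesses at details the presentation left unspecified:

```python
from collections import defaultdict

# A toy lexicon; the team's actual list of "angry words" was not shown.
ANGRY_WORDS = {"hate", "awful", "stupid", "terrible", "idiot"}

def thread_volatility(posts: list[dict], angry_threshold: int = 2) -> float:
    """Percentage of users in a thread who count as 'angry'.

    A user is 'angry' if their posts together contain at least
    `angry_threshold` angry words -- my guess at the unstated unit.
    Each post is a dict like {"user": "alice", "text": "..."}.
    """
    counts: dict[str, int] = defaultdict(int)
    for post in posts:
        for word in post["text"].lower().split():
            counts[post["user"]] += word.strip(".,!?") in ANGRY_WORDS
    if not counts:
        return 0.0
    angry = sum(1 for c in counts.values() if c >= angry_threshold)
    return 100.0 * angry / len(counts)

# Example: one angry user out of two -> 50.0
posts = [{"user": "a", "text": "I hate this, it's awful!"},
         {"user": "b", "text": "Seems fine to me."}]
print(thread_volatility(posts))
```

Note that a word-counting sketch like this would happily flag the sarcastic empty-plate example above as angry, which is exactly the limitation in question.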
To get the data for training their model, they scraped it from various websites, most notably Reddit, which led the judges to ask the above-mentioned funny question. And while the team had enough time for that and for building their model, they did not write the application frontend. That was going to come in the next steps, since one of their future goals was stated as "Begin exploring integration and website applications for understanding the requirement for writing the front end." This is yet another data point for my hunch that most teams at hackathons can hardly even get a demo'able prototype running on their own machines, let alone deploy it in the cloud.
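They did not say how the scraping was done. For a sense of what a weekend data pull might look like, here is a minimal sketch against Reddit's public JSON listing endpoint -- the endpoint is real, but the subreddit choice, the function name, and the lack of pagination are my simplifications:

```python
import requests

def fetch_recent_comments(subreddit: str = "all", limit: int = 100) -> list[str]:
    """Fetch recent comment bodies from Reddit's public JSON listing.

    Reddit requires a descriptive User-Agent and rate-limits
    unauthenticated requests; a real scraper would paginate with the
    `after` cursor and respect those limits.
    """
    url = f"https://www.reddit.com/r/{subreddit}/comments.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        headers={"User-Agent": "volatility-sketch/0.1"},
        timeout=10,
    )
    resp.raise_for_status()
    children = resp.json()["data"]["children"]
    return [c["data"]["body"] for c in children]
```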
This was one of the projects presented at the Women in Tech Machine Learning Product hackathon that took place July 28 - 29, 2018 in Austin, TX at Capital Factory.