Twitch has long taken an aggressive stance against online harassment, and a new update to its guidelines goes even further. The streaming platform now counts verified harassment that occurs outside of Twitch as a punishable offense for its users.
The company posted an update to its Community Guidelines on Friday, Feb. 9, and part of the change concerns how it deals with harassment and hateful behavior both on and off its platform.
"First, conduct we deem to be hateful will result in an immediate indefinite suspension. Hate simply has no place in the Twitch community," the company wrote in its blog post.
The update goes on to state that Twitch's moderation team now treats verified hate speech or harassment on other platforms as a violation of Twitch policies. Any such post that can be connected to a Twitch user, whether through Twitter or another social media service, is now grounds for a penalty.
"If you use other services to direct hate or harassment towards someone on Twitch, we will consider it a violation of Twitch's policies," the blog post explained.
A good portion of Twitch streamers use YouTube as a way to archive their streams, and this new policy could also mean that a denigrating comment left on YouTube could lead to a Twitch ban for that commenter, as Ars Technica pointed out.
Twitch has not announced that it will actively police Twitter, YouTube or other platforms for hateful speech from its users. What the new policy does enable is a way for people filing reports against users to cite social media posts to bolster their case.
"When filing a report, users can provide documentation that illustrates harassment from any source," a Twitch representative explained, adding that this evidence can come from "things like public-facing social media sites," as long as the accounts can be verified to belong to the Twitch user in question.