In an effort to curb unsavory or illegal posts, Instagram and YouTube have introduced new restrictions on content.
Instagram now warns users about posts that may be associated with animal abuse or harm. Hundreds of hashtags on the social network now trigger a content advisory screen when used or searched. For example, typing “#koalaselfie” yields a pop-up message reading: “Animal abuse and the sale of endangered animals or their parts is not allowed on Instagram. You are searching for a hashtag that may be associated with posts that encourage harmful behavior to the animals or the environment.”
Images or videos that depict animal abuse and the sale of endangered animals or their parts are explicitly prohibited.
Instagram announced the new policy in a blog post. “The protection and safety of the natural world are important to us and our global community,” the company wrote. “We encourage everyone to be thoughtful about interactions with wild animals and the environment to help avoid exploitation and to report any photos and videos you may see that may violate our community guidelines.”
At YouTube, the company introduced a four-step “action plan” to cut down on objectionable and illegal content. Several brands pulled their ads from the site earlier this year after the ads appeared next to videos with content ranging from extremist propaganda to scantily clad children accompanied by comments from pedophiles.
YouTube plans to hire more human reviewers (with a goal of more than 10,000 people tasked with removing content that violates its policies) and will expand its use of machine learning to evaluate content. The company said this technology has already helped remove more than 150,000 violent and extremist videos and is now being trained to flag videos that raise child safety concerns.
YouTube will also publish a regular report offering more transparency about how videos and comments that violate its policies are removed. In addition, the company will implement stricter advertising criteria that include “ramping up” the use of ad reviewers to ensure ads run only where they should, it explained in a blog post setting forth the plan.
Why it matters: Advertisers should be aware of these changes, which were designed in part to shield brands from appearing alongside inappropriate content. “We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values,” YouTube CEO Susan Wojcicki wrote in a blog post explaining the company’s plan.