With a quarter of the world’s population using social media to spread millions of pieces of information, social media websites are under tremendous pressure to find better ways to police the tidal wave of content published daily. Last year it was reported that a social media website had failed to block offensive images, even though many social media platforms employ staff to review online content.
In the era of social media and big data, most UGC platforms are devoting considerable manpower to content moderation in order to protect both the user community and the enterprise brand. However, human review has obvious limitations. Because online content emerges far faster than people can respond, it is hard to consistently guarantee the accuracy and efficiency of human review. In addition, many content moderators suffer physical and psychological distress from the work. Recently, two employees sued a technology company for negligent infliction of emotional distress, claiming their jobs required them to view disturbing and inappropriate images.
Many companies are looking for better approaches and hope that artificial intelligence will take on a bigger role. Tuputech, a China-based AI company focused on deep learning research, is bringing about a revolution in the field of content moderation. By training convolutional neural networks (CNNs) to judge content much as a human reviewer would, Tuputech enables machines to take over a large part of the review workload, processing an average of 900 million images and videos a day with accuracy of up to 95%. One of the first companies to apply machine learning to content moderation, Tuputech has become a market leader in China, helping social media companies, including a live-streaming app, cut content-review manpower by 90%. Machine learning is clearly bringing good tidings to enterprises under the pressure of UGC management.
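To make the idea concrete, the sketch below shows the general shape of CNN-style automated moderation: an image is passed through a convolutional filter, the activations are pooled into a score, and high-scoring images are flagged for human review. This is a toy illustration only; the filter, scoring, and threshold are invented for the example, and a production system such as Tuputech's would use deep networks trained on large labeled datasets.

```python
# Toy sketch of a convolve-score-flag moderation pipeline.
# The kernel weights and threshold here are made-up placeholders;
# a real system learns them from labeled training data.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over a 2D list of pixel intensities."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def moderation_score(image, kernel):
    """Mean activation after convolution and ReLU; a stand-in for the
    probability a trained CNN would output for 'objectionable'."""
    fmap = convolve2d(image, kernel)
    vals = [max(0.0, v) for row in fmap for v in row]
    return sum(vals) / len(vals)

def review(image, kernel, threshold=0.5):
    """Route high-scoring images to human review, pass the rest."""
    return "flag" if moderation_score(image, kernel) > threshold else "pass"
```

In practice this split is the point of the technology: the machine clears the bulk of obviously safe content automatically, and only the small flagged fraction reaches human moderators, which is how a 90% reduction in review manpower becomes plausible.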