Google to Protect Children from Inappropriate Content with AI

Google has launched artificial intelligence software designed to protect children from inappropriate content in today's internet environment, where they are especially vulnerable.

Many companies have begun research into preventing the spread of child sexual abuse material (CSAM), and Google will now put to use artificial intelligence software developed to keep vulnerable children from being exposed to inappropriate content.

Internet giant Google has introduced artificial intelligence software developed to protect children from inappropriate content. Existing programs such as PhotoDNA, used on platforms like Facebook and Twitter, work by matching uploads against previously identified abusive material. Google's new program identifies potentially inappropriate content on its own, making it easier for moderators to sort out content that should be removed and to create a safer environment.
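To illustrate the difference in approach, the sketch below contrasts the two ideas in simplified form: hash matching can only flag material that has already been catalogued, while a classifier can score never-before-seen content so human moderators review the riskiest items first. This is not Google's actual implementation; the hash set and the classifier function are hypothetical placeholders.

```python
import hashlib
from typing import Callable, List, Tuple

# --- Hash matching (PhotoDNA-style, simplified): catches only known material ---
# Placeholder hashes standing in for a database of previously identified images.
KNOWN_HASHES = {"a3f1deadbeef", "9bc2cafebabe"}

def is_known_material(image_bytes: bytes) -> bool:
    """Flag an image only if it matches previously catalogued material."""
    # Real systems use perceptual hashes that tolerate resizing and re-encoding;
    # SHA-256 is used here only to keep the example self-contained.
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# --- AI triage (the new approach, conceptually): scores unseen content ---
def triage_for_moderators(
    images: List[bytes],
    classifier: Callable[[bytes], float],  # hypothetical model returning a 0..1 risk score
) -> List[Tuple[float, bytes]]:
    """Rank new, never-seen images so moderators review the riskiest first."""
    scored = [(classifier(img), img) for img in images]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```

In this sketch, hash matching misses new material entirely, while the triage ranking lets a small moderation team spend its time on the highest-risk items first, which is the gain described below.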

Fred Langford, Deputy CEO of the Internet Watch Foundation (IWF), believes that tools like this make it easier to move toward automated systems that reduce the need for human review. "Teams with limited resources like ours work only with human power. With this application, we will keep the content cleaner. I think it's only about a year or two before you create something completely automated in some situations," Langford said.

Source: https://www.theverge.com/2018/9/3/17814188/google-ai-child-sex-abuse-material-moderation-tool-internet-watch-foundatio