
Sunday 25 July 2021

YouTube’s Recommendation Algorithm Directs Viewers to False and Sexualized Videos, Study Finds

YouTube has made a number of changes over the past year to limit the problematic videos it recommends to viewers. New research suggests those repairs still have a way to go.

The Mozilla Foundation, a nonprofit software organization, found that YouTube’s powerful recommendation engine continues to direct viewers to videos containing false claims or sexualized content. The platform’s algorithms had suggested 71% of the videos that participants reported finding objectionable.

The study highlights the ongoing challenge that Alphabet Inc.’s YouTube faces in policing user-generated content on what has become the world’s leading video service. Like Facebook Inc. and Twitter Inc., YouTube has grown rapidly by encouraging people to share information, but it now faces regulatory and social pressure to crack down on divisive, misleading and dangerous content without censoring diverse viewpoints.

For YouTube, the findings point to gaps in its effort to steer users toward videos that interest them based on their viewing patterns, as opposed to videos that spread by word of mouth for other reasons.

In the study, one of the largest of its kind, 37,000 volunteers used a browser extension to track their YouTube usage over the 10 months that ended in May. When a participant flagged a video as problematic, the extension recorded whether the video had been recommended to the viewer or found on the viewer’s own. Videos that participants flagged as objectionable included a sexualized parody of “Toy Story” and an election video that falsely suggested Microsoft Corp. co-founder Bill Gates hired students affiliated with Black Lives Matter to count ballots in battleground states.
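To make the methodology concrete, here is a minimal sketch of how a browser extension might record whether a flagged video was reached through a recommendation. It is an illustrative assumption only: the report shape, the classification heuristic and the submitReport endpoint are invented for this example and are not Mozilla’s actual extension code.

```typescript
// Illustrative sketch only; field names and the endpoint are assumptions.

interface ReportedVideo {
  videoId: string;
  reportedAt: string;          // ISO timestamp of the participant's report
  reachedVia: "recommendation" | "search" | "direct"; // how the viewer got there
  category?: "misinformation" | "hate" | "sexual" | "other";
}

// A real extension would observe navigation with the WebExtensions APIs
// (e.g. webNavigation/tabs listeners) and inspect the referring page; this
// toy heuristic only looks at the referrer URL.
function classifyProvenance(referrerUrl?: string): ReportedVideo["reachedVia"] {
  if (!referrerUrl) return "direct";
  if (referrerUrl.includes("/results?search_query=")) return "search";
  // Arrival from the homepage, the watch-page sidebar, or autoplay is
  // treated here as a recommendation; real heuristics would be more involved.
  return "recommendation";
}

async function submitReport(report: ReportedVideo): Promise<void> {
  // Hypothetical collection endpoint for the study's independent dataset.
  await fetch("https://example.org/regrets/report", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```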

YouTube has since removed 200 of the videos that participants flagged, according to Mozilla. Together, those videos had been viewed more than 160 million times before they were taken down.

A YouTube spokesperson said the company has reduced recommendations of content it defines as harmful to less than 1% of the videos viewed on the platform, and that it has launched 30 changes over the past year to address the issue. According to the company’s safety team, its automated systems now detect 94% of videos that violate YouTube policy and remove most of them before they are viewed 10 times.

“The goal of our recommender system is to connect viewers with the content they love,” said a spokesman.


YouTube has for years faced concerns about the dangers that videos on the site pose to viewers, including children. Last year it took down channels promoting QAnon and other conspiracy theories that it said could contribute to real-world harm.

The company credits its recommendation algorithm with driving more than two-thirds of the roughly one billion hours viewed on the platform each day, viewing that generated $19.7 billion in revenue last year. Those videos (about 720,000 hours of footage uploaded daily) are not reviewed by human moderators before publication, part of a long-standing effort to prioritize user uploads and control costs.

The result is a Whac-A-Mole system in which YouTube uses machine learning to try to catch policy-violating videos before they are shown to viewers. Since 2019, the company says, its policing algorithms have reduced so-called borderline content by 70%.

Brandi Gerkink, the Mozilla senior manager who led the study, said it underscored an inherent inconsistency between YouTube’s algorithms: one set recommends problematic videos while another tries to remove them. Participants rarely reported finding problematic videos when they searched for content on their own, she said; instead, the algorithm recommended videos they did not want to see.

“If users are saying, ‘These recommendations aren’t working for us,’ then who is the recommendation algorithm serving?” Gerkink said.

According to the company’s research, users are generally satisfied with the recommendations.

Mozilla owns and operates the Firefox web browser, which competes with Alphabet’s Chrome browser for users and display advertising. Browser-related licensing agreements account for more than half of the Mozilla Foundation’s funding.

Among the study’s other findings, participants in countries where English is not the primary language, notably Brazil, Germany and France, were 60% more likely to encounter recommended videos they found objectionable.

Participants classified more than one-fifth of the flagged videos as misinformation, often involving politics or Covid-19; about 10% were flagged as malicious content and 7% as sexual content.

YouTube declines to release public information about how its recommendation algorithm works, considering it proprietary. That makes the algorithm difficult to study. Mozilla sought to overcome that hurdle by creating a browser extension and asking participants to help build a large, independent dataset.
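As a rough illustration of what such a dataset makes possible, the headline 71% figure is simply the share of reported videos whose provenance was a recommendation. A back-of-the-envelope aggregation, assuming the illustrative report shape sketched earlier rather than Mozilla’s actual analysis code, might look like this:

```typescript
// Rough aggregation sketch: computes what share of reported videos were
// reached through a recommendation, mirroring the study's headline figure.

function shareFromRecommendations(reports: { reachedVia: string }[]): number {
  if (reports.length === 0) return 0;
  const recommended = reports.filter(r => r.reachedVia === "recommendation").length;
  return recommended / reports.length;
}

// Example: 71 of 100 reports came via recommendations -> 0.71
const sample = [
  ...Array(71).fill({ reachedVia: "recommendation" }),
  ...Array(29).fill({ reachedVia: "search" }),
];
console.log(shareFromRecommendations(sample)); // 0.71
```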
