The social media giant, as mandated by the IT rules, said in its first monthly compliance report that more than 30 million pieces of content were actioned across 10 violation categories in the country between May 15 and June 15. Instagram took action against nearly two million pieces across nine categories during the same period.
Under the new IT rules, large digital platforms (those with more than 5 million users) are required to publish monthly compliance reports detailing the complaints received and the action taken on them. The reports must also include the number of specific communication links or pieces of information that the intermediary has removed or disabled access to following proactive monitoring conducted using automated tools.
Facebook actioned more than 30 million pieces of content across multiple categories between May 15 and June 15, while Instagram took action against nearly 2 million pieces.
Over the years, Facebook has consistently invested in technology, people and processes to keep its users safe and secure online and to enable them to express themselves freely on its platform, a Facebook spokesperson said.
“We use a combination of artificial intelligence, reports from our community and review by our teams to identify and review content against our policies. We will continue to add more information and build on these efforts toward transparency in future reports,” the spokesperson said in a statement to PTI.
Facebook said its next report would be released on July 15, with details of user complaints and action taken.
“We expect to release subsequent versions of the report within 30-45 days after the reporting period to allow adequate time for data collection and validation. We will bring more transparency to our work and include more information about our efforts in future reports,” it added.
Earlier this week, Facebook said it would release an interim report on July 2 providing information on the number of pieces of content removed between May 15 and June 15. The final report will be published on July 15, containing details of user complaints and action taken.
The July 15 report will contain data about WhatsApp, which is part of Facebook’s family of apps.
Other major platforms that have made their reports public include Google and homegrown platform Koo.
In its report, Facebook said the more than 30 million content actions taken across 10 categories between May 15 and June 15 included spam (25 million), violent and graphic content (2.5 million), adult nudity and sexual activity (1.8 million), and hate speech (311,000).
Content actioned under other categories included bullying and harassment (116,000), suicide and self-injury (569,000), dangerous organizations and individuals: terrorist propaganda (106,000) and dangerous organizations and individuals: organized hate (75,000).
“Actioned” content refers to the number of pieces of content (such as posts, photos, videos or comments) against which action has been taken for violating standards. Taking action could include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some audiences with a warning.
The proactive rate, which indicates the percentage of all actioned content or accounts that Facebook found and flagged using technology before users reported them, was in most cases between 96.4 and 99.9 percent.
The proactive rate for removal of content related to bullying and harassment was 36.7 percent, as this content is contextual and highly personal by nature. In many cases, people need to report this behaviour to Facebook before it can identify or remove such content.
For Instagram, action was taken against 2 million pieces of content across nine categories between May 15 and June 15. This included content related to suicide and self-injury (699,000), violent and graphic content (668,000), adult nudity and sexual activity (490,000), and bullying and harassment (108,000).
Content actioned under other categories included hate speech (53,000), dangerous organizations and individuals: terrorist propaganda (5,800), and dangerous organizations and individuals: organized hate (6,200).
Google reported receiving 227,596,222 complaints concerning Google and YouTube content in April this year alleging violations of local laws or personal rights, resulting in the removal of 59,350 pieces of content.
Koo, in its report, said it had proactively removed 54,235 pieces of content, while 5,502 posts were reported by its users during June.
According to the IT rules, significant social media intermediaries are required to appoint a chief compliance officer, a nodal officer and a grievance officer, all of whom must reside in India.
Failure to comply with the IT rules would result in these platforms losing their intermediary status, which exempts them from liability for third-party data hosted by them. In other words, they could face criminal proceedings in the event of a complaint.
Facebook recently named Spoorthi Priya as its grievance officer in India.
India is a major market for digital platforms worldwide. According to recent government data, India has 53 crore WhatsApp users, 41 crore Facebook users and 21 crore Instagram users, while 1.75 crore account holders are on the microblogging platform Twitter.