How AI can help give teens protection and privacy on social media


Meta announced on January 9 that it will protect teen users by blocking them from viewing content on Instagram and Facebook that the company deems to be harmful, including content related to suicide and eating disorders. The move comes as federal and state governments have increased pressure on social media companies to provide safety measures for teens.

At the same time, teens turn to their peers on social media for support that they can’t get elsewhere. Efforts to protect teens could inadvertently make it harder for them to get help.

Congress has held numerous hearings in recent years about social media and the risks to young people. The CEOs of Meta, X (formerly Twitter), TikTok, Snap, and Discord testified before the Senate Judiciary Committee on January 31 about their efforts to protect minors from sexual exploitation.

The tech companies “finally are being forced to acknowledge their failures when it comes to protecting kids,” according to a statement in advance of the hearing from the committee’s chair, Democratic Senator Dick Durbin of Illinois, and its ranking member, Republican Senator Lindsey Graham of South Carolina.

In our analysis of young people’s private social media conversations, we found that direct interactions can be crucial for young people seeking support on issues ranging from daily life to mental health concerns. Our findings suggest that young people used these channels to discuss their public interactions in more depth. Because these private settings are built on mutual trust, teens felt safe asking for help there.

Research suggests that privacy of online discourse plays an important role in the online safety of young people, and at the same time a considerable amount of harmful interactions on these platforms comes in the form of private messages. Unsafe messages flagged by users in our dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech, and sale or promotion of illegal activities.

However, it has become more difficult for platforms to use automated technology to detect and prevent online risks for teens because the platforms have been pressured to protect user privacy. For example, Meta has implemented end-to-end encryption for all messages on its platforms to ensure message content is secure and accessible only by participants in conversations.

Also, the steps Meta has taken to block content about suicide and eating disorders keep that content out of public posts and search results, even if a teen’s friend has posted it. This means that the teen who shared that content would be left without their friends’ and peers’ support. In addition, Meta’s content strategy doesn’t address the unsafe interactions that take place in teens’ private conversations online.

Striking a balance

The challenge, then, is to protect younger users without invading their privacy. To that end, we conducted a study to find out how little data is needed to detect unsafe messages. We wanted to understand how various features, or metadata, of risky conversations, such as the length of the conversation, the average response time, and the relationships of the participants, can help machine learning programs detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances.
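As an illustration of what a metadata-only detector might look like, the sketch below trains a standard off-the-shelf classifier on conversation-level features resembling those described above. The feature names, the synthetic data, and the model choice are assumptions made for this example; they are not the study’s actual data or code.

```python
# Illustrative sketch (not the study's actual pipeline): a classifier trained on
# conversation-level metadata only, with no access to message content.
# The feature names and the synthetic data below are assumptions for the example.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# One row per conversation; "unsafe" stands in for a user-provided risk label.
data = pd.DataFrame({
    "message_count": rng.integers(2, 200, n),         # length of the conversation
    "avg_response_seconds": rng.exponential(300, n),  # average response time
    "sender_share": rng.uniform(0.5, 1.0, n),         # how one-sided the exchange is
    "participants_connected": rng.integers(0, 2, n),  # do the accounts follow each other?
})

# Synthetic labels loosely mimicking the pattern described in the text:
# short, one-sided conversations between unconnected users are riskier.
data["unsafe"] = (
    (data["message_count"] < 20)
    & (data["sender_share"] > 0.85)
    & (data["participants_connected"] == 0)
).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="unsafe"), data["unsafe"],
    test_size=0.2, stratify=data["unsafe"], random_state=0,
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Because a model like this never sees message text, an approach along these lines could in principle run even when content is end-to-end encrypted.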

We found that our machine learning program was able to identify unsafe conversations 87% of the time using only metadata for the conversations. However, analyzing the text, images, and videos of the conversations is the most effective approach to identifying the type and severity of the risk.

These results highlight the significance of metadata for distinguishing unsafe conversations and could serve as a guideline for platforms designing artificial intelligence systems for risk identification. Platforms could use high-level features such as metadata to block harmful content without scanning that content and thereby violating users’ privacy. For example, a persistent harasser whom a young person wants to avoid would produce metadata (repeated, short, one-sided communications between unconnected users) that an AI system could use to block the harasser.
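To make that concrete, here is a minimal, hypothetical sketch of such a metadata rule. The Conversation structure, field names, and thresholds are assumptions chosen for illustration, not any platform’s actual logic.

```python
# Illustrative sketch of a metadata-only heuristic for the harassment pattern
# described above: repeated, short, one-sided messages between unconnected users.
# The Conversation structure and thresholds are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Conversation:
    message_count: int            # total messages exchanged
    sender_share: float           # fraction of messages sent by the initiating account
    avg_message_chars: float      # average message length
    participants_connected: bool  # do the accounts follow each other?
    prior_contacts: int           # earlier conversations initiated by the same account

def looks_like_persistent_harassment(c: Conversation) -> bool:
    """Flag conversations whose metadata matches the repeated, short,
    one-sided pattern, without ever reading message content."""
    return (
        not c.participants_connected
        and c.prior_contacts >= 3     # repeated attempts
        and c.sender_share >= 0.9     # one-sided
        and c.avg_message_chars < 80  # short messages
    )

# Example: an unconnected account that has started several short,
# one-sided conversations would be flagged for blocking or review.
print(looks_like_persistent_harassment(
    Conversation(12, 0.95, 40.0, False, 4)
))  # True
```

A production system would learn such thresholds from data rather than hard-coding them, but the point stands: the rule never needs to read a single message.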

Ideally, young people and their caregivers would be given the option, by design, to turn on encryption, risk detection, or both, so that they can decide the trade-offs between privacy and safety for themselves.
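A small, hypothetical sketch of that opt-in design follows; the class and field names are assumptions, not any platform’s actual settings API.

```python
# Illustrative sketch of the opt-in design described above; the class and field
# names are assumptions, not any platform's actual settings API.
from dataclasses import dataclass

@dataclass
class TeenSafetySettings:
    end_to_end_encryption: bool = True     # message content stays private
    metadata_risk_detection: bool = False  # content-free risk checks, off until opted in

# Because detection here relies only on metadata, a teen and caregiver can
# enable it without giving up encryption, or choose either one alone.
settings = TeenSafetySettings(end_to_end_encryption=True, metadata_risk_detection=True)
print(settings)
```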


Afsaneh Razi is an assistant professor of information science at Drexel University.




