
Examining gender stereotypes embedded in natural language



Credit: CC0 Public Domain

Gender stereotypes harm people of both genders, and society more broadly, by steering people toward, and sometimes limiting them to, the behaviors, roles, and activities linked with their gender. Widely shared stereotypes cast men as more central to professional life and women as more central to domestic life. Other stereotypes link men with math and science and women with arts and liberal arts.

Perhaps surprisingly, research has shown that countries with higher economic development, individualism, and gender equality tend to also have more pronounced gender differences in several domains, a phenomenon known as the gender equality paradox.

To help explain this pattern, Clotilde Napp used a natural language processing model to look for stereotypes in large text corpora from more than 70 countries. The paper is published in the journal PNAS Nexus.

Napp’s model used sets of words representing the target categories men and women, together with sets of words representing the attribute pairs career-family, math-liberal arts, and science-arts. It then applied the Word Embedding Association Test (WEAT), which measures how strongly two sets of target words associate with two sets of attribute words in terms of their relative semantic similarity.
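
To make the test concrete, here is a minimal sketch of the WEAT effect-size computation in the standard formulation of Caliskan et al. (2017); the random toy vectors and variable names below are illustrative stand-ins for real word embeddings and for the paper's actual word lists:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # Differential association of word vector w with attribute sets A and B:
    # mean similarity to A-words minus mean similarity to B-words.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size: the standardized difference between the two target
    # sets' mean associations with the attribute sets.
    sX = [association(x, A, B) for x in X]
    sY = [association(y, A, B) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY, ddof=1)

# Toy demo: random vectors stand in for embeddings of real word lists,
# e.g. X = male words, Y = female words, A = career words, B = family words.
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))
```

A positive effect size would indicate that the first target set (e.g., male words) sits semantically closer to the first attribute set (e.g., career words) than the second target set does; a value near zero indicates no measured association.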

Napp finds that gender biases about careers, math, and science are all stronger in the text corpora of more economically developed and individualistic countries.

The author urges caution in interpreting the results, which are based on big-data analysis in an international context and may reflect several underlying mechanisms. The cause of the pattern remains to be established with certainty, but Napp points to theoretical work suggesting that in societies where beliefs in the inherent inequality of men and women have declined, beliefs that men and women are equal but inherently different may have emerged to replace the older hierarchical ideas.

Another explanation, not mutually exclusive with the first, is that the biased associations reflect real gender differences in behavior, which are themselves more pronounced in wealthy countries.

The author also notes that the presence of gender stereotypes in the online text corpora used to train artificial intelligence models could reinforce those stereotypes in the models themselves.

More information:
Clotilde Napp, Gender stereotypes embedded in natural language are stronger in more economically developed and individualistic countries, PNAS Nexus (2023). DOI: 10.1093/pnasnexus/pgad355

Citation:
Examining gender stereotypes embedded in natural language (2023, November 22)
retrieved 22 November 2023
from https://phys.org/news/2023-11-gender-stereotypes-embedded-natural-language.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
