Tuesday, May 14, 2024

Women's Rights in the Era of Artificial Intelligence (AI)

Artificial intelligence has swiftly taken over the world, with serious implications for women's rights.

In an era of increasing digitization, as artificial intelligence (AI) pervades every aspect of our lives, there are rising fears that the lack of effective regulation of such technologies may create more harm than good. The threats posed by AI disproportionately harm women compared to men.

Women are becoming increasingly vulnerable in today's technology-driven world, from facing algorithmic gender bias to dealing with the consequences of data privacy and security breaches.

This difficulty is exacerbated for women in the APAC region, particularly in Southeast Asia, where age-old social stereotypes still carry considerable weight.

AI Bias

AI systems produce results by combining data, statistical analysis, and human-designed models. Their behavior depends on the data they are fed, which largely determines the outputs they produce.

Because AI learns from human-generated data, it is neither immune nor exempt from human prejudice, and its decisions are therefore prone to the same biases.

In recent years, there have been multiple reports of AI bias having a direct and negative impact on women's rights.

Such AI biases are a significantly greater problem than human bias, because AI can amplify them at scale, with implications that reach far beyond those of individual human actions.

Amazon’s AI Recruitment Tool

In 2015, Amazon’s machine-learning specialists discovered that the AI system used as their recruiting engine ‘did not like women’.

The data fed to the tool reflected male dominance in the industry, prompting the software to teach itself that male candidates were preferable. As a result, it penalized résumés that included the word "women's".
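The mechanism is easy to reproduce in miniature. The sketch below is purely hypothetical (the résumé data and scoring rule are invented for illustration and have nothing to do with Amazon's actual system): a naive token score learned from historically skewed hiring data ends up penalizing the word "women's" simply because it never appears in the "hired" class.

```python
from collections import Counter

# Hypothetical, simplified training data: résumés labelled by past hiring
# outcomes. Male-associated résumés dominate the "hired" class, mirroring
# the historical skew described in the article.
hired = [
    "software engineer chess club captain",
    "software engineer rugby team",
    "backend developer chess club",
    "systems programmer rugby team",
]
rejected = [
    "software engineer women's chess club captain",
    "backend developer women's coding society",
]

def token_scores(pos_docs, neg_docs):
    """Naive per-token score: how much more often a token appears in
    positive (hired) résumés than in negative ones, with add-one smoothing."""
    pos = Counter(t for d in pos_docs for t in d.split())
    neg = Counter(t for d in neg_docs for t in d.split())
    vocab = set(pos) | set(neg)
    return {t: (pos[t] + 1) / (neg[t] + 1) for t in vocab}

scores = token_scores(hired, rejected)

# "women's" never appears among the hired résumés, so any résumé containing
# it is scored down: the model has learned the skew in the data, not merit.
print(scores["women's"])   # < 1.0: penalized
print(scores["engineer"])  # > 1.0: favored
```

The point of the toy example is that no one programmed the rule "penalize women": the bias emerges automatically from the imbalance in the historical data, which is why correcting one symptom offers no guarantee against others.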

Although Amazon corrected this particular bias, there was no guarantee that the software would not reveal other forms of bias in the recruitment process, and the tool was eventually discontinued.

Apple Credit Card

Similarly, Apple's credit card, which uses AI algorithms to judge creditworthiness, was accused of discriminating against women by offering them lower credit limits than men.

The problem gained attention in 2019 after a tech entrepreneur claimed on X (formerly Twitter) that the Apple Card had displayed misogyny.

Despite having the same income and credit score as his wife, he claimed that his credit line was 20 times hers. The software did not take gender as an input, which made the charge puzzling.

The discrimination occurred without any explicit knowledge of gender, highlighting AI's ability to exacerbate gender-based bias through variables that merely correlate with it.


Gym Software

A gym in the United Kingdom used software that automatically classified Dr. Louise Shelby, a woman, as male when she entered "doctor" as her title. Having been identified as male, she was given access to the men's changing room.

She was told that if she wanted the error corrected, she would need to remove her professional title from the gym's online registration system. These examples demonstrate AI's potential to aggravate systemic bias against women, reversing decades of progress in women's rights.

Furthermore, such actions violate anti-discrimination legislation, including the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). It defines “discrimination” in the following way: “any distinction, exclusion or restriction made on the basis of sex which has the effect or purpose of impairing or nullifying the recognition, enjoyment or exercise by women, irrespective of their marital status, on a basis of equality of men and women, of human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field”.

This notion implies that when AI systems demonstrate such bias against women, they effectively perpetuate discrimination and maintain socioeconomic disparities.

Without a strong legal framework overseeing the intersection of innovation, ethics, and law, and without procedures to address these AI shortcomings, protecting fundamental rights, particularly those of women, becomes a daunting task.

Privacy & Data Protection

Without the data provided to it, AI is an empty vessel, which highlights the growing concern about the impact of AI technologies on data privacy.

One's facial features or voice can be exploited by bad actors to create AI-generated fake photographs or videos, mostly of women (often sexual in nature), to damage their social reputation, exact retribution, demonstrate 'masculinity', 'sextort', bully, or spread disinformation.

Such ‘deepfakes’ are non-consensual constructs that mostly target women. According to Sensity AI’s analysis, 99 percent of the non-consensual sexual deepfakes involved women.

Individuals, especially women, who have dared to campaign against such image-based sexual abuse have themselves become victims of it.

There are worries that the generation of deepfakes does not constitute the processing of people's 'personal data' and hence does not violate their data or privacy rights.

Some claim that data protection standards do not apply to the governance of deepfakes because the content created by deepfake AI technologies does not belong to any real people.

According to Chidera Okolie, while this argument may be valid in scenarios involving deepfakes of non-existent or fictional humans, it becomes less credible when applied to deepfakes of real, existing individuals.

Okolie maintains that when a deepfake is created using an individual's data, and the end result (the deepfake itself) can be traced back to that person, it falls under the definition of 'personal data' in Article 4(1) of the EU General Data Protection Regulation (Regulation (EU) 2016/679): "any information relating to an identified or identifiable natural person."

As a result, a deepfake that can be linked to an individual may be considered that person's 'personal data'; a non-consensual deepfake therefore violates the individual's data and privacy rights.

Deepfake

The creation and distribution of non-consensual sexual deepfakes of women can cause serious psychological harm; some have drawn a direct comparison between seeing pornographic deepfakes of oneself and being drugged and assaulted.

This illustrates AI's propensity to perpetuate gender-based violence against women, making it a modern means of depriving women of their sexual autonomy.

Apart from the mental trauma suffered by the victim, the consequences of sexual deepfakes targeting women are especially severe in developing South Asian nations like Pakistan, where the killing of women in the name of 'honor' is still common.

Although the country’s legal system prohibits honor crimes, pervasive cultural stereotypes frequently justify such senseless acts, even leading to the acquittal of people convicted of murder under the pretense of honor.

At the national level, countries have adopted different legal frameworks to address this issue, and most outright criminalize such content as sexual offences. The UK's Revenge Pornography Guidelines and Sexual Offences Act 2003, Nigeria's Violence Against Persons (Prohibition) Act 2015, Pakistan's Prevention of Electronic Crimes Act (PECA) 2016 and Pakistan Penal Code (PPC) 1860, and Canada's criminal legislation can all be applied to image-based sexual abuse against women, including non-consensual explicit deepfakes targeting them.

Technology and Women at Work

The quantitative and qualitative consequences of technological progress also disproportionately harm women. Women now hold a sizable proportion of low-skilled and labor-intensive jobs, which are on the verge of displacement as automation and technology improve.

McKinsey & Company estimates that between 2016 and 2030, approximately 15% of the global workforce might be displaced.

The International Labour Organization's report 'The Game Changers: Women and the Future of Work in Asia and the Pacific' highlights how technology will disrupt the garment sector in APAC countries, a sector dominated by women workers in the region.

Women from South Asian nations with conservative social traditions, such as India, Bangladesh, and Pakistan, have benefited greatly from the garment industry in terms of economic and social status.

As a result, the technological displacement of this industry would deal a serious blow to women’s economic and social rights throughout the region.

Women's Job Categories

With the phasing out of low-skilled and labor-intensive industries, women may find new opportunities in formerly male-dominated job categories.

However, technology does not automatically bridge gender inequality—”it largely depends on the design of the technology and the capabilities of women from under-represented groups to access and use technologies and solutions that respond to their needs.”

While women are capable of excelling at jobs historically held by men, the chief obstacle to their transition from low-skilled professions into roles previously monopolized by men is deeply rooted cultural preconceptions and attitudes.

Nonetheless, online work procurement via app-based platforms has provided women with access to occupations in formerly male-dominated areas.

The flexibility of such work is one way to provide women with additional job opportunities; yet research by Barzilay and Ben-David demonstrates that male and female online workers receive significantly different pay rates.

Furthermore, the pace of digitization has left labor laws substantially behind the changing labor landscape, potentially jeopardizing women's economic and social rights. On the other hand, women may find more employment prospects in the technology industry (automation, robotics, and artificial intelligence), where they might work as technicians or in clerical and administrative support roles.

In the Philippines, women already account for 59% of the business process outsourcing workforce. To preserve and expand this trend across APAC, women in the region must receive the required training and assistance. Furthermore, certain organizations, such as All Claims Adjustment Bureau, engage female workers from the APAC region for remote jobs.

Although cost-effectiveness may be the major motivator for such attempts, these opportunities remain critical pillars in defending women’s employment rights in Asia Pacific in the face of a changing technological landscape.

Conclusion

As humanity enters the fourth industrial revolution, there are real concerns about the negative effects of technology on women's rights. These stem from reports of AI discrimination, infringements of women's data privacy rights, and the impending threat of job displacement due to technology and automation.

The unparalleled pace of technological innovation highlights the necessity of regulating AI and other emerging technologies. Such regulation should take a human-rights (or indeed women's-rights) approach, testing and verifying AI software against existing women's rights norms and standards. It also calls for an upgrade of the existing legal framework for protecting women's rights, to keep it relevant in today's world.

Regarding the potential threat of job displacement, states should implement measures to help women prepare for and adapt to the changing tech landscape by investing in skills development and training.
