
Why do ethics in artificial intelligence matter?

December 23, 2024

In today’s data-driven world, information leaders hold the metaphorical keys to the kingdom. As guardians of a vast wealth of knowledge that includes highly sensitive information, their work demands conscientious, ethical execution.

That’s especially true in the field of artificial intelligence (AI), which utilizes massive data sets to simulate human reasoning and learning. AI’s capacity to process immense amounts of data enables it to revolutionize analytical processes employed in health care, business, education and other sectors. However, as with all enormously powerful tools, AI poses significant risks. AI is not neutral. That is, the outcomes of opaque AI systems and “black boxes” can be biased, discriminatory or unjustified. 

As the stakes mount, the need for ethical AI practices across the technology sector becomes paramount. When you pursue a Master of Science in Information Management, offered online by the University of Washington Information School, you will learn an ethical approach to information management and gain the knowledge and skills required to excel in the profession.


Ensuring fairness and equity

From health care to recruitment, the use of AI has become ubiquitous, raising ethical questions about fairness and social impact. With AI systems developing rapidly, experts advise tech and information professionals to guard against biases and implement ethical best practices.

“In no other field is the ethical compass more relevant than in artificial intelligence,” observes Assistant Director-General for Social and Human Sciences of UNESCO Gabriela Ramos. “AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real-world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.”

A National Institute of Standards and Technology (NIST) report acknowledges that AI bias stems from individuals and groups (human bias), from institutions (systemic bias), and from technical systems (statistical/computational bias). NIST recommends that these three categories of bias be addressed throughout the AI system lifecycle and holistically within organizational structures and processes. 

The three kinds of biases can produce substantial detrimental impacts. Consider systems that drive decisions impacting whether a person is approved for a bank loan, accepted as a rental applicant, qualified for a job opening, or correctly diagnosed for an illness or condition. If the systems embed AI bias, the outcomes of their use will perpetuate systemic discrimination against particular groups. (Note: The University of Washington Information School does not use AI in screening applications.)

Effective information leaders – executives, managers, and team leads of all kinds – must be aware of these issues and implement practices and guiding principles to mitigate bias in AI. The University of Washington MSIM program stresses information ethics throughout the curriculum. All students complete the core Policy and Ethics in Information Management course so they develop “tools for analysis of the kinds of social and ethical issues that will arise in their future lives as information professionals.”

Protecting user and data privacy 

AI and big data are inextricably linked. Well-known large language model (LLM) tools such as ChatGPT process vast amounts of data, raising concerns about user consent, especially when it involves data related to individuals’ financial information and health records.

The U.S. Food and Drug Administration, Health Canada and the United Kingdom’s Medicines and Healthcare products Regulatory Agency jointly identified guiding principles to help promote safe, effective and high-quality medical devices that use AI and machine learning. The document proposes data collection protocols that manage sources of bias and suggests best practices for data management. Because these data sets drive individual and public health practices worldwide, the data must be relevant, well-characterized and secure.

In May 2023, the World Health Organization (WHO) called for safe and ethical AI for the health sector to protect and promote human well-being, human safety and autonomy, and to preserve public health. While the WHO is enthusiastic about using tech to support health-care professionals and patients, it raised concerns that LLM users don’t exercise the necessary caution. The WHO reiterated the importance of transparency, inclusion, public engagement, expert supervision and rigorous evaluation. 

Promoting transparency and accountability 

As AI tools and software programs quickly develop, transparency and accountability are essential to ensure trust and buy-in. These principles lie at the heart of the UW’s Responsibility in AI Systems & Experiences (RAISE) center. RAISE co-founder Chirag Shah notes: “When blind trust in these systems is combined with bias and a lack of transparency, you realize what a dangerous mix it can be. The more confidence we have in these systems, the more we don’t understand their limitations.” That’s why, Shah argues, “Businesses must establish strong ethical guidelines for AI, including ongoing monitoring and using diverse data to prevent biases. It is also important to be transparent about how AI works and makes decisions. By committing to these practices, companies can mitigate bias, promoting fairness and inclusivity in AI applications.”

The importance of promoting public awareness 

As AI exerts a more significant impact throughout the public sphere, public education and awareness become critical. Those affected by AI’s ascent — i.e., all of us — need to understand the technology’s capabilities, limitations and ethical implications. Only then can we engage in discussions, advocate for responsible AI development, and make informed decisions. Increased public attention should, in turn, exert pressure on AI practitioners, developers and policymakers to maintain transparency and advance ethically.

An engaged public would likely foreground concerns over bias in AI. It would also promote broad consultation in AI development, as well as policies and regulations that promote ethical AI practices, including corporate accountability. Public pressure would make such practices a priority; otherwise, they might be diminished or ignored.

Develop your AI ethics skills and tools

AI has rapidly become an essential tool in information management. Professionals such as business analysts, information architects, risk managers and technology managers are now required to have a solid understanding of AI’s principles, applications and potential limitations.

The University of Washington M.S. in Information Management prepares information leaders to address the many ethical challenges at the complex intersection of IM and AI. The program offers seven specializations — artificial intelligence, business intelligence, data science, information architecture, information and cyber security, program/product management & consulting, and user experience — that enable students to customize the degree to their career goals. 

Are you interested in learning more? Contact an enrollment advisor to discover what the UW MSIM program can offer you.
