THE ETHICS OF AI - Part I: Background of the Field

Updated: Jan 21, 2022

The Ethics of Artificial Intelligence (AI) is a branch of ethics of technology that focuses on artificially intelligent systems. Its purpose is to evaluate and guide the development and the use of these systems. Computer scientists have been exploring the ethics of AI since the late 1970s.

The field is often separated into two categories: the ethics of human behavior as people design, develop, use, and treat AI systems, and the ethics of the behavior of the machines themselves, otherwise known as machine ethics.

In this 11-part series, we will be diving into the 10 principal debates in the field of the ethics of AI, as laid out by the Stanford Encyclopedia of Philosophy: Privacy & Surveillance, Manipulation of Behavior, Opacity of AI Systems, Bias in Decision Systems, Human-Robot Interaction, Automation & Employment, Autonomous Systems, Machine Ethics, Artificial Moral Agents, and Singularity.

The ethics of AI is quite a young field within applied ethics, though the UK is taking an active role in its evolution. Former prime minister Theresa May pledged to create a Centre for Data Ethics and Innovation that would work to ensure society is prepared for data-driven technologies.

"From helping us deal with the novel ethical issues raised by rapidly-developing technologies such as artificial intelligence, agreeing best practices around data use to identifying potential new regulations, the Centre will set out the measures needed to build trust and enable innovation in data-driven technologies," May said. "Trust underpins a strong economy, and trust in data underpins a strong digital economy."

Microsoft has also worked with the European Union on regulatory frameworks for AI technologies. The EU published a draft regulation on 21 April 2021, which proposes to protect EU citizens from the use of AI for mass surveillance by law enforcement, a practice a UK court ruled unlawful in 2020.

The draft also proposes that certain uses of AI be labeled as 'high-risk' due to concerns about discrimination, such as recruitment, credit score evaluation and border control management. It would also ban governments from using AI to create 'social scoring' systems.

As you can see, the field of the ethics of AI is imperative for shaping the laws, regulations and responses society will need as advancing AI technologies become further integrated into daily life.

Interested in learning about ALTURIS' ethical standards?

Contact us or use our contact form.
