As many predicted, AI continues to take over numerous aspects of today’s industries. Discussion of AI ethics is still relatively quiet because the technology is in its infancy, but it is important to study the subject now, since AI will most likely be pervasive quite soon.

AI ethics could become a broad and contentious topic in the future, because different people hold different sets of ethics. Let’s first define ethics: it is the set of values and moral principles that people should adhere to, the consideration behind our decisions. In short, it is doing the right thing. AI, or artificial intelligence, on the other hand, is the ability of computers to intelligently decide and perform tasks based on the data they are “fed”. A simple example is facial recognition: the system analyses a person’s facial features and uses that data to recognize the person in a picture. Self-driving cars and smart assistants are advances we once did not think conceivable, but they are now reality. The possibilities AI holds for the future are virtually boundless.

Since AI operates based on how it is “taught”, humans can feed it moral preferences that differ across the globe. We established earlier that ethics is subjective, so there is a real possibility that countries will use AI specifically to advance their own agendas. That is why it is important, even this early, for organisations to set their own ethical boundaries at the dawn of this emerging technology. How, then, can companies and organisations safeguard their values and ethics while using AI to further their growth?

AI Accountability

Organisations have quickly learned how to use AI to drive profitability and revenue. They will not turn their backs on AI over moral concerns and scepticism about its use; the only way around these problems is to use AI responsibly, operating under a set of rules aligned with the organisation’s values. Setting these “values” should involve the company’s decision makers rather than being delegated to data science leaders alone. This not only helps decision makers explain the reasoning behind their AI’s behaviour; it also ensures the ethical standards being followed are scrutinized meticulously against the company’s ethics.

Translating Your Company’s Values

The first step in implementing responsible AI is to translate the company’s values into a language the AI understands. This is essential, and difficult, because such rules rarely convert neatly into a binary yes-or-no, right-or-wrong language. Decision makers should provide their data science leaders with a set of questions offering critical guidance in three main areas: clearly identifying which processes to automate; articulating the metrics and definitions used to evaluate fairness and bias in AI; and providing the hierarchy of company values to be followed, along with defining the role of diversity in selecting talent.

CEOs and decision makers must clearly articulate the values behind the selection process when choosing AI applications for their organizations. This can be done with visual thinking tools such as mind maps. They can also align business and analytics leaders by asking each to elaborate on how they understand value in their jobs and how they use it to make better decisions.

Since value statements can be inadequate for defining fairness and bias, decision makers need to take the lead on defining and setting the metrics that best represent the company’s values in an AI context. This should be a collaboration between business leaders and data science leaders alike.
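As a concrete illustration, here is a minimal sketch in Python of one such metric: the demographic parity gap, the difference in positive-outcome rates between groups. The data and column names are hypothetical, and demographic parity is only one of several fairness definitions a team might adopt.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rates
    across groups. 0.0 means every group receives positive outcomes at
    the same rate; larger values signal potential unfairness."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval decisions produced by a model.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

print(demographic_parity_gap(decisions, "group", "approved"))  # ~0.333
```

Which metric to adopt is itself a values decision: different fairness definitions can conflict with one another, which is exactly why business leaders, not only data scientists, should make the call.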

Apart from explaining the company values the AI should align with, another critical area is defining the hierarchy of those values. This is critical because AI development often involves trade-offs. For example, as an algorithm’s accuracy increases, its predictions typically become harder to explain. Similarly, highly accurate predictions tend to raise more privacy concerns, because producing them requires acquiring more personal information.

Areas Decision-Makers Should Keep a Close Eye On

While most data scientists have great intentions and are highly talented, there is always a danger that unintentional mistakes drag their companies into negative publicity. Below are areas CEOs need to watch with a high level of scrutiny, since they are highly vulnerable to public opinion.

How Data Is Acquired

AI learns from the data it is fed; the more data it consumes, the more accurate its predictions become. That data should come from individuals who gave their consent with full knowledge of how it would be used. It is tempting for data scientists and analytics teams to reuse this data for innovations they discover along the way, but this poses a problem: some consumers might consider such reuse highly inappropriate.

Leaders and decision makers should therefore question their data science teams about where and how the data was acquired and for what purpose, and challenge them on how customers and society might react to their methods.
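As a sketch of what such a check might look like in practice (the table, column names, and purposes here are hypothetical), a team could filter records down to only those whose original consent covers the new use:

```python
import pandas as pd

# Hypothetical records of the purposes each user consented to.
records = pd.DataFrame({
    "user_id": [1, 2, 3],
    "consented_purposes": [{"billing"}, {"billing", "analytics"}, {"analytics"}],
})

NEW_PURPOSE = "analytics"  # the innovation the team wants to pursue

# Reuse only the data whose original consent covers the new purpose.
allowed = records[records["consented_purposes"].apply(lambda p: NEW_PURPOSE in p)]
print(allowed["user_id"].tolist())  # [2, 3]
```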

Data Representations

It is important that the AI is trained on samples that adequately represent every population it will analyze. Under-representing a group is dangerous, as it can give rise to accusations of discrimination.

Leaders must ask their data science teams granular questions about the sampling techniques used to train the model, and ensure the sample reflects the real-world scenario. Challenge the team on whether it has considered minority groups, or whether something is missing, as in the sketch below.
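One simple way to make that conversation concrete is a report comparing each group’s share of the training sample with its share of the population the model is meant to serve. This is a minimal sketch with hypothetical groups and shares, not a complete audit:

```python
import pandas as pd

def representation_report(sample: pd.DataFrame, group_col: str,
                          population_shares: dict) -> pd.DataFrame:
    """Compare each group's share of the training sample with its
    share of the real-world population it should represent."""
    sample_shares = sample[group_col].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        samp_share = float(sample_shares.get(group, 0.0))
        rows.append({
            "group": group,
            "population_share": pop_share,
            "sample_share": samp_share,
            "under_represented": samp_share < pop_share,
        })
    return pd.DataFrame(rows)

# Hypothetical training sample and census-style population shares.
train = pd.DataFrame({"group": ["A"] * 80 + ["B"] * 15 + ["C"] * 5})
print(representation_report(train, "group", {"A": 0.6, "B": 0.3, "C": 0.1}))
```

Here groups B and C would be flagged as under-represented, giving leaders a factual basis for challenging the team.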

Objectivity of AI Outputs

Past biases can still affect an AI’s outputs and predictions even when each group is sampled correctly. This is because machine learning algorithms base their recommendations not only on the raw data but also on the historical decisions embedded in it.

Decision makers and leaders should therefore build fairness into the AI design process and continuously challenge their data science teams to consider it at every step. That means scrutinizing how the team chooses its data and which fields should be included or excluded to avoid bias. One example, sketched below, is removing gender from a data set of applicants for a position to help avoid gender bias. It also means setting standards for the continuous development, testing, and monitoring of AI systems.
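A minimal sketch of that gender example, assuming a hypothetical applicants table (the column names are illustrative):

```python
import pandas as pd

# Hypothetical applicant records; columns are illustrative only.
applicants = pd.DataFrame({
    "years_experience": [2, 7, 4],
    "test_score":       [78, 91, 85],
    "gender":           ["F", "M", "F"],
})

SENSITIVE_FEATURES = ["gender"]

# Exclude sensitive attributes before the model ever sees them.
features = applicants.drop(columns=SENSITIVE_FEATURES)
print(features.columns.tolist())  # ['years_experience', 'test_score']
```

Dropping a column is only a first step, though: correlated “proxy” features can still leak the sensitive attribute indirectly, which is one reason the continuous testing and monitoring mentioned above matter.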

Ability to Explain

It is tempting to believe that as long as an AI model produces the right output, a lack of explainability does not matter. In reality, data scientists must be able to explain the model’s decisions and predictions to their stakeholders. This lets decision makers assess whether the AI is still performing in line with the company’s ethical standards.

Leaders should therefore probe their data science teams on the types of models they use, challenging them to show that they have chosen the simplest model that does the job, and asking them to apply explainability techniques to more complex models.
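One widely used, model-agnostic explainability technique is permutation importance: shuffle one input feature at a time and measure how much the model’s score drops. Here is a minimal sketch using scikit-learn on synthetic stand-in data; in practice the team would run this on the actual model and data under review:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; substitute the real model and data under review.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model score --
# a model-agnostic view of which inputs drive the predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

Features with large score drops are the ones driving the predictions, which gives stakeholders a plain-language starting point for asking whether those drivers are acceptable.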


Although the use of AI may have ethical implications in the future, it is far more dangerous not to embrace this technology and reap the benefits it brings. As in all things, moderation is key; in AI’s case, that means regulation. Superpower countries may well seek their own sets of AI ethics that are not aligned with most of the world’s. Nevertheless, I believe that if companies represent the majority of people, it is still possible to establish and implement AI ethics that suit the wider society.