Ethics is a hot topic for the tech industry, especially in the rapidly developing fields of artificial intelligence (AI), machine learning and automated decision-making systems.

Why is this important for your business? Here are some of the risks you run if you let your ethical guard down:

  • Customers may abandon you for more-ethical service providers.
  • Talent may leave you for more-ethical employers.
  • You may do great public harm by unintentionally exacerbating systemic inequalities.
  • You may lose your competitive technical edge to companies or countries whose ethics endanger your employees, customers or the public at large.

Consumers and tech industry workers are raising their voices to influence what kinds of automated decision-making systems get designed, what decisions they should be allowed to make, and what datasets should be used for building the models that power those systems.

Google is pledging $25 million towards “AI for Social Good”, yet some critics worry that efforts spearheaded by a tech giant may accelerate the dominance of one mode of thinking to the detriment of other groups. For example, a recent global survey of autonomous driving ethics — whose results will be used to design many of the algorithms that will make life-or-death decisions in driverless cars — was criticised for not including sufficient samples from the developing world.

To counter negative perceptions, the tech industry is increasingly introducing new data privacy and security policies. The appointment of chief ethics officers as a self-checking mechanism is one emerging strategy. These officers may end up like the eponymous character in Lois Lowry’s novel The Giver, alone bearing the responsibility for remembering and experiencing all of history’s painful lessons while others live in ignorance.

Swimming in the ethics sea

Conversations around AI ethics reflect the wider social polarisation about who benefits from systems of power and who is underrepresented, excluded or left behind. Technology platforms, including automated decision-making systems, can exacerbate these rifts but technology does not create them in isolation. How can we build ethics for AI when we aren’t even aligned on ethics in technology, or in most forms of human behaviour?

Despite the lack of consensus, various approaches are being developed to create more ethical AI and technology. These range from data-literacy campaigns to flight-plan-like checklists, from codes of practice to the emergent industry of algorithmic auditing.
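
To make “algorithmic auditing” concrete, below is a minimal, hypothetical sketch of one audit step: measuring whether a model’s positive predictions are distributed evenly across demographic groups (demographic parity). The metric choice, data and group labels are illustrative assumptions, not a prescribed method.

    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Largest gap in positive-prediction rates between groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical loan-approval predictions for two applicant groups.
    preds  = [1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)           # {'A': 0.75, 'B': 0.25}
    print(f"gap = {gap}")  # 0.5 -- a gap this large would be flagged for review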

Quis custodiet ipsos custodes? Who will watch the watchers?

Is it desirable, or even feasible, for the tech industry to lead the way on ethics? In other industries, regulations and regulatory bodies have been established in response to public exposure of hazards to the social and environmental good (e.g., the U.S. Environmental Protection Agency and the Food and Drug Administration). So will chief ethics officers or self-assembled ethics review boards within tech companies be enough to police ethical issues?

Palmer Luckey, the founder of Oculus and the co-founder of Anduril, argued at Web Summit 2018 that rather than waiting for legislators to set the agenda, technologists have a moral imperative to lead the way on building ethical technologies.

The United Kingdom is trying to lead the ethical discussion, with organisations such as the Open Data Institute, CognitionX and Doteveryone advocating for greater ethical development in AI. With the establishment of the Centre for Data Ethics and Innovation, the United Kingdom is clearly seeking to capitalise on this growing capability. But the centre is an advisory body only: Will it have sufficient tools to make an impact?

What’s next?

What steps should industry leaders take to practise ethical AI?

  • Conduct an ethics audit. What are the internal perceptions of ethical responsibility within your company? Do managers and team members feel empowered to make and take ownership of ethical decisions? Are parts of your organisation already using ethical frameworks to guide their decision-making?
  • Decide on your company ethics. Draft an ethical framework based on a consensus about the organisation’s guiding principles. How do these relate to your company values? What resources can you use to create a framework that will protect employees and customers? How do you weigh the ethics of obedience, the duty of care and the ethics of reason?
  • Design ethical feedback loops into your projects. This means getting team members to engage in a discussion-led approach, or introducing a direct checklist of responsibilities (a simple sketch of such a checklist follows this list). For example, during design and experimentation, a design-thinking model can encourage teams to ask “What if …?” about the tools they are building, which may help them anticipate red flags.
  • Have a plan. If you have an emergency plan for fire or natural disasters, you should have one to protect your company from AI scandals, too. Think about what tools you can use when ethical risks and breaches occur.
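
One way to make the checklist idea operational is to encode it so that a launch is blocked until every item has been signed off. The following is a hedged sketch only; the checklist items, the Release class and the sign-off flow are hypothetical examples, not an established framework.

    from dataclasses import dataclass, field

    # Hypothetical pre-launch ethics checklist; tailor items to your framework.
    CHECKLIST = [
        "Data provenance and consent reviewed",
        "Training data checked for representation gaps",
        "Model audited for disparate impact across groups",
        "Escalation path for ethical concerns documented",
        "Incident and rollback plan in place for ethical breaches",
    ]

    @dataclass
    class Release:
        name: str
        signed_off: set = field(default_factory=set)

        def sign_off(self, item: str, reviewer: str) -> None:
            if item not in CHECKLIST:
                raise ValueError(f"Unknown checklist item: {item}")
            self.signed_off.add(item)
            print(f"{reviewer} signed off: {item}")

        def ready_to_ship(self) -> bool:
            missing = [i for i in CHECKLIST if i not in self.signed_off]
            for item in missing:
                print(f"BLOCKED on: {item}")
            return not missing

    release = Release("recommender-v2")
    release.sign_off(CHECKLIST[0], "data lead")
    print(release.ready_to_ship())  # False -- four items still outstanding

The final item echoes the “have a plan” point above: a release should not ship until the team knows what it will do when an ethical breach occurs.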

To learn more about emerging trends in algorithmic ethics, read the full Leading Edge Forum (LEF) research commentary here.