Deloitte recently published an article arguing that organizations should not wait for AI regulation.
With AI applications becoming ubiquitous in and out of the workplace, can the technology be controlled to avoid unintended or adverse outcomes? Organizations are launching a range of initiatives to address ethical concerns.
With great power, the saying goes, comes great responsibility. As artificial intelligence (AI) technology becomes more powerful, many groups are taking an interest in ensuring its responsible use. The questions that surround AI ethics can be difficult, and the operational aspects of addressing AI ethics are complex. Fortunately, these questions are already driving debate and action in the public and commercial sectors. Organizations using AI-based applications should take note.
- In 2018, media mentions of AI and ethics doubled compared with the previous year, with over 90 percent indicating positive or neutral sentiment.
- About a third of executives in a recent Deloitte survey named ethical risks as one of the top three potential concerns related to AI.
- Since 2017, more than two dozen national governments have released AI strategies, road maps, or plans that focus on developing ethics standards, policies, regulations, or frameworks.
- Governments are setting up AI ethics councils or task forces and collaborating with other national governments, corporations, and other organizations on the ethics of AI.
- Major technology companies such as Google, IBM, and Facebook have developed tools, designed guidelines, and appointed dedicated AI governance teams to address ethical issues such as bias and lack of transparency.
- Enterprises across industries—including financial services, life sciences and health care, retail, and media—are joining consortia alongside technology vendors, universities, governments, and industry peers to promote ethical AI standards and solutions.