COVID-19 and the rapid increase in remote working have accelerated the development and use of artificial intelligence (AI) across organizations and in consumer interactions.
This has highlighted both the benefits and the potential risks of AI, notably the issue of trust in technology. While trust has long been a defining factor in an organization’s success or failure, the risk of AI now goes beyond reputation and customer satisfaction: it is playing a critical role in shaping the well-being and future of individuals and communities around us, even as few fully understand how it works.
A new study by KPMG, The Shape of AI Governance to Come, finds that the majority of jurisdictions globally have yet to grasp the full implications of how AI will shape their economies and societies. Furthermore, AI innovation is moving so quickly that even the most technologically sophisticated governments are struggling to keep up.
87% of IT decision makers believe that technologies powered by AI should be subject to regulation.
Of this group, 32% believe that regulation should come from a combination of government and industry, while 25% believe it should be the responsibility of an independent industry consortium.
94% of IT decision makers feel that firms need to focus more on corporate responsibility and ethics.
Current challenges to regulation: 1. The precision and accuracy of the technology. 2. Discrimination and bias in decision-making. 3. The appropriate use of consumer data, and the data privacy implications of the data that informs AI.