The Fundamental Necessity of AI Oversight - Yvonne E. Hyland
Like many technologists and technology business leaders, I felt my perspective on AI and AI ethics was reasonably good. The need to train models, govern their quality, and retrain them on new data on an ongoing basis to ensure fair outcomes seemed obvious. That is only the tip of the iceberg! As I recently worked toward my AI Ethics certification from the London School of Economics and Political Science, I learned a great deal about the myriad real challenges and risks that AI entails for corporations and humanity as a whole. Thankfully, I also learned a great deal about the potential mitigations and remedies for these risks.
Standards and principles for regulating AI are needed at all levels of government, at all levels of corporations, and globally. Haphazard implementation of AI without sufficient governance can have disastrous consequences, as AI systems are not neutral: they become good or bad actors, driven by the data and the context in which they are deployed. This is “garbage in, garbage out” on steroids, amplified at scale and with dramatic speed. With great datasets come great outcomes, such as rapid drug development; with poor datasets come poor outcomes, such as denial of healthcare and discriminatory rejection of job applications. One analysis showed that fewer than 50% of enterprises using AI today have infrastructure that adequately regulates it; only 40% even have an agreed definition of artificial intelligence and machine learning, and just 20% have a centrally managed department for AI governance.
Today’s societal efforts include the Blueprint for an AI Bill of Rights in the US, the European Union Artificial Intelligence Act, and the mission to advance the UN 2030 Sustainable Development Goals via AI. Whilst the intent is conceptually well-grounded, definition and deployment are many years out or have yet to be determined. Given the massively accelerated pace of AI, with ChatGPT as the 100-million-user poster child, corporations need to act now to protect their reputations and avert the liability and risk of AI-induced corporate crises.
AI ethics is predicated on the philosophical models of legitimacy, stability and justice. In a pluralistic society with differing views, public administrations are often best placed to provide legitimacy and stability for AI system rules and guidelines that are broadly aligned with societal goals. Public administrations should coalesce around a set of principles to be encoded into every AI system and subject to audit by industry regulators. In addition to these values, corporations will develop their own governing principles and value alignment for AI that adhere to their specific corporate culture and values, potentially at a more granular level for specific use cases.
Corporate senior leadership and the Board of Directors will very likely become responsible for setting the overall AI governance strategy and guidelines and for monitoring outcomes. In the initial absence of regulation and standards, a pragmatic approach to risks (such as those proposed in “Ten Legal and Business Risks of Chatbots and Generative AI” from Tech Policy Press) is a pathway forward, with a very clear need to dynamically monitor and recalibrate over the next few years! Our future with AI is exciting, but companies and governments have a responsibility to develop and implement a foundation of AI standards to avoid regrettable outcomes.
Connect with Yvonne on LinkedIn.
Yvonne E. Hyland is a people-centric, solutions-driven executive with 30 years of experience in international enterprise technology leadership. A pragmatic innovator and former intrapreneur, Yvonne improves and optimizes businesses with the power of technology.