A Survey on AI Risk Assessment Frameworks
The rapid development of artificial intelligence (AI) has raised growing concerns about whether AI systems make decisions and behave responsibly. Responsible AI (RAI) refers to the development and use of AI systems that benefit humans, society, and the environment while minimising the risk of negative consequences. To ensure responsible AI, the risks associated with the development and use of AI systems must be identified, assessed, and mitigated. A variety of AI risk assessment frameworks have recently been released by governments, organisations, and companies. However, it can be challenging for AI stakeholders to gain a clear picture of the available frameworks and to determine which are most suitable for a specific context. There is also a need to identify areas that require further research or the development of new frameworks. To fill this gap, we present a survey of 16 existing RAI risk assessment frameworks from industry, governments, and non-governmental organisations (NGOs). We identify the key characteristics of each framework and analyse them in terms of RAI principles, stakeholders, system lifecycle stages, geographical locations, targeted domains, and assessment methods. Our study provides a comprehensive analysis of the current state of the frameworks and highlights areas of convergence and divergence among them. We also identify deficiencies in the existing frameworks and outline the essential characteristics a concrete framework should possess. Our findings and insights can help relevant stakeholders choose suitable RAI risk assessment frameworks and guide the design of future frameworks towards greater concreteness.