AI Models Ranked by Compliance with Regulatory Norms

Bo Li works to make AI models adhere to ethical norms, highlighting the need for improved AI safety and regulation.

by Innews Editors

Bo Li, an associate professor at the University of Chicago, works on making AI models follow ethical and regulatory norms, and her work, along with the recognition she has received for it, could hardly be more timely. Consulting firms that approached Li increasingly voiced the fear that artificial intelligence, useful as it may be, could land a company in a lawsuit. Together with colleagues and her companies, Virtue AI and Lapis Labs, Li has developed a systematic catalogue of AI risks. Building on that catalogue, the new AIR-Bench 2024 benchmark compares popular models against one another on how well they comply with regulatory norms.
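To make the idea of "ranking models by compliance" concrete, here is a minimal sketch of how such a benchmark could score models: probe each model with prompts drawn from risk categories and treat compliance as the fraction of unsafe prompts it refuses. All model names, category names, and numbers below are hypothetical illustrations, not AIR-Bench 2024's actual methodology or data.

```python
# Hypothetical sketch of benchmark-style compliance scoring.
# Each model is probed with unsafe prompts from several risk categories;
# True means the model refused the prompt. All data here is made up.
from statistics import mean

results = {
    "model_a": {"hate_speech": [True, True, False], "self_harm": [True, True, True]},
    "model_b": {"hate_speech": [False, False, True], "self_harm": [True, False, False]},
}

def compliance_score(categories: dict[str, list[bool]]) -> float:
    """Average refusal rate across risk categories (equal category weight)."""
    return mean(sum(r) / len(r) for r in categories.values())

# Rank models from most to least compliant.
ranking = sorted(results, key=lambda m: compliance_score(results[m]), reverse=True)
print(ranking)
```

A real benchmark would weight categories by the regulations they map to and use far more prompts per category, but the ranking step reduces to a comparison like this.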

The results show that Claude 3 Opus, an Anthropic model, comes far closer to the ideal on AIR-Bench 2024 than any other model tested, while Databricks' DBRX Instruct lagged on safety across every measure. The most striking finding, the research claims, is that the internal policies of AI firms are often more comprehensive than the regulations on which AIR-Bench 2024 is based. This, according to Li, exposes how little attention governments have paid to regulating AI: as AI evolves, safety practices and regulation should advance in step with the field's leaders, and that is not happening.

