Wednesday, January 8, 2025

Government panel suggests AI incident database, invites comments till January 27

The report recommends that the governance group should bring together all authorities and institutions that deal with AI governance at the national level.

A panel set up by the government in 2023 to examine AI governance has recommended an AI incident database to better understand the actual risks of artificial intelligence, stressing that an inter-ministerial, whole-of-government approach is required to govern AI, an effort that could be spearheaded by the IT ministry and the office of the principal scientific advisor (PSA).



These recommendations were made in the IT ministry’s ‘Report on AI Governance Guidelines Development’, released for public consultation on Tuesday, and comments are welcome until January 27. HT had reported on the contents of this report in October 2024.


MeitY formed the subcommittee on November 9, 2023. It is headed by Dr Balaraman Ravindran, who heads the department of data science and AI and the Centre for Responsible AI at IIT Madras.

The report wants MeitY to house a technical advisory body and act as the coordination focal point for this governance group, which should build an AI incident database to understand the “actual incidence of AI-related risks in India”, echoing a similar recommendation by the NITI Aayog in 2021.

These AI incidents could include “cyber incidents” and “cyber security incidents” and extend to “adverse or dangerous” outcomes from the use of AI that can “disadvantage or harm individuals, businesses, and societies”. They could include “malfunctions, unauthorised outcomes, discriminatory outcomes, unforeseeable outcomes and unexpected emergent behaviour, system failures, privacy violations, physical safety problems, etc.”

It said that initially, only public sector organisations using AI systems should mandatorily report to this database while private entities were “encouraged to voluntarily report AI incidents”. “The focus should be on defining reporting protocols to ensure confidentiality and to focus on harm mitigation, not fault finding,” the report said.

The technical secretariat, as per the report, should try to get “voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high capability/widely deployed systems” from the industry. These could include regular transparency reports by AI developers and deployers, internal and external red-teaming of models or systems, peer review by third-party qualified experts, and others. The commitments could vary across sectors.


The sub-committee also recommended that the technical secretariat could examine the suitability of technology measures to address AI-related risks, such as use of watermarking, labelling and fact-checking to deal with deepfakes.

It said that the governance group should have a mix of government and non-government members to bring in “external expertise from industry and academia” and could be headed by the PSA. The technical secretariat located in MeitY, on the other hand, could be staffed by MeitY officials and “lateral hires, young professionals, and consultants”.

The subcommittee also noted that providers and deployers of AI systems cannot claim “safe harbour” by default because, in many scenarios, they “select or modify the content”, something that intermediaries such as Google, Facebook, Cloudflare and Airtel must not do in order to claim protection from liability for third-party content. This observation, however, is not a formal recommendation of the subcommittee.
