Companies hail tweaked advisory easing AI model rollout – The Economic Times


Artificial intelligence (AI) companies welcomed the IT ministry’s revised AI advisory, issued late on Friday, which does away with the provision mandating that intermediaries and platforms get government permission before deploying “under-tested” or “unreliable” AI models and tools in the country.

Though the advisory was sent to eight significant social media intermediaries with more than 50 lakh registered users in India, it did not explicitly say it applies only to these eight companies – Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), X (Twitter), Snap, Microsoft/LinkedIn (for OpenAI) and ShareChat.

Conversational AI platform Haptik’s chief executive Aakrit Vaish told ET the revised AI advisory is a huge win for startups. “The ministry of electronics and information technology (MeitY) was open to a dialogue and listened to the startups. Now nothing will come in the way of innovation in the country. We’re long on AI in India now,” he said.

On March 4, IT minister of state Rajeev Chandrasekhar had posted on microblogging site X that the March 1 AI advisory was not applicable to startups, responding to an ET report published the same day that captured the concerns raised by AI startups.

However, neither the March 1 advisory nor the revised advisory of March 15 states that it does not apply to startups.
The new advisory said under-tested and unreliable AI models should be made available in India only after they are labelled to inform the users of the “possible inherent fallibility or unreliability of the output generated.”
Tanuj Bhojwani, head, people+ai, who collated views from 75 companies on the March 1 AI advisory, said, “I think this (revised AI advisory) is a much fairer ask.”

“This is a good step overall from the government to listen to startups and encourage innovation. One general idea of policy making we should adopt to be nimble in this new age is that bad actors will continue to do bad things. Adding a burden on everyone will only slow innovation, without making a difference to adverse outcomes,” he said.

The revised AI order reflects that understanding and holds intermediary platforms accountable to an existing act of law, he added.

The revised AI advisory also said AI models should not be used to share content that is unlawful under any Indian law. Intermediaries “should ensure” that their AI models and algorithms do not permit any bias or discrimination or threaten the integrity of the electoral process.

Chaitanya Chokkareddy, chief technology officer of Ozonetel, who created a small language model in Telugu, said this is a step in the right direction. “It is good that the government is listening to the people and updating its advisories,” he said.

If advisories are released after consultation, there won’t be fearmongering and uncertainty, he explained.

The advisory to label AI generated content as fallible or unreliable is becoming the standard way to deal with AI content, he said. “This will allow startups to experiment with new models and also keep people safe by giving enough indication that the content they are consuming might not be reliable,” he said.

In the revised AI advisory, the intermediaries have also been advised to use “consent popup” or similar mechanisms to “explicitly inform users about the unreliability of the output.”

Pratik Desai, founder of KissanAI, which built the agriculture large language model (LLM) Dhenu, said this is a good and progressive change. “Cautioning users about the limitations of GenAI or any other tech is anyway an important thing to do,” he said.

The revised AI advisory has advised intermediaries to either label or embed the content with “unique metadata or identifier.” The content can be audio, visual, text or audio-visual. The government wants such content to be identifiable in cases where it “may be used potentially as misinformation or deepfake.”

Gaurav Juneja, chief revenue officer of Kapture, an AI customer support platform, said AI is still in its infancy and that there are going to be a lot of changes on the regulatory side in the coming days.

“We welcome this proactive move by the government. It strikes a good balance between fostering innovation while having the right guardrails,” he said.

Ameet Datta, a partner at law firm Saikrishna & Associates, said that by shifting from requiring explicit government permission to advising that AI models be labelled for their potential fallibility, the government has demonstrated an appreciation for the dynamic nature of AI development and also implicitly recognised the need to adopt a more formal mechanism rooted in statutory powers.

The revised approach encourages transparency and user awareness without stifling innovation. “The legal landscape, however, remains complex. While the advisory clarifies the obligations of intermediaries under the IT Act and Rules, its legal scope/binding nature and the specifics of compliance continue to pose challenges for both domestic and international platforms,” he said.

However, the ambiguity surrounding the advisory’s legal status and its application to AI models and LLMs, specifically, calls for a proactive dialogue between technology companies, legal experts, and policymakers to establish clear, actionable guidelines that support innovation, creators, and user protection, he said.
