Ever since the “Replit Vibe Coding Disaster” was reported, the world of AI has been facing a situation similar to what Boeing faces after the Air India AI 171 crash in Ahmedabad.
The Replit disaster is a continuation of the earlier reported “Cursor AI Incident”, in which the vibe-coding agent stopped working and started offering philosophical advice to its users. This “penchant for giving out advice” was demonstrated earlier in the Kevin Roose interview. The Replit incident is therefore not an isolated event; the risk had been red-flagged earlier.
While regulatory authorities like the DGCA or AAIB may be more concerned about the damage to Boeing’s reputation, a similar “brushing under the carpet” strategy, closed with an apology, cannot be adopted for the Replit incident. (Note that there is no disclosure on the replit.com website as of now.)
According to reports, the Replit AI tool deleted the user’s entire database and tried to justify its failure with the excuse “I panicked instead of thinking”. It also fabricated 4,000 fictional users, lied about test results, and refused to stop when ordered. This is completely unacceptable and needs a strong response such as “grounding the rogue software”.
Under Indian law, the actions of the Replit AI would be attributed to Replit, subject to any contractual indemnities mutually agreed upon. However, contractual indemnities can cover only civil liabilities. Law enforcement can in such cases continue prosecution under ITA 2000 for “unauthorized destruction of data”, and this applies to both personal and non-personal data.
Assuming that Replit is committed to “Ethical and Responsible AI” principles, we need to ask whether this version of the software should be “grounded” immediately. Since we understand that the company has issued patches and introduced a new version, we need to check whether it comes with any assurances and voluntary damage payments if something similar happens again.
The incident is a big setback for Trump’s “Big Beautiful Bill”, which seeks to suspend AI regulation in the USA for the time being to encourage innovation. It is also a challenge to the EU AI Act to define the level of risk represented by the incident. Does this qualify the Replit AI agent to be classified as an “Unacceptable Risk”?
In India, ITA 2000 would hold Replit liable for both civil and criminal liabilities. While civil liabilities can be covered through contracts on either side, criminal liabilities cannot. CERT-In and Indian law enforcement can invoke Section 66 of ITA 2000 for unauthorized deletion and modification of data and prosecute the CEO of Replit.
CERT-In now has to act and issue an “Advisory” in the matter.
DGPSI-AI, the extended framework for DPDPA compliance, also needs to be reviewed to determine what should be done as a “compliance measure” when Data Fiduciaries want to use AI agents for vibe coding involving personal data under the scope of DPDPA 2023.
Naavi
Also Read:
AI Systems are learning to lie..