AI regulation through Model clauses in Australia

In contrast to the US approach suggested by Donald Trump, which favours freedom from regulation, Australia has released a set of AI model clauses that is useful for any AI user. These are more practical and can be readily adopted.

The Australian guidance recognizes the following three scenarios and suggests contractual regulation between the buyer and the seller:

  1. When an organization procures services from a seller that uses AI in providing those services (bespoke AI systems);
  2. When an organization develops AI (such as automated decision-making tools) within its own organization with the assistance of a consultant; or
  3. When an organization procures software with embedded AI capabilities.

This approach of using “Contractual Controls” on the use of AI is more in sync with the Indian approach and is in tune with the requirements of ITA 2000 and DPDPA 2023.

Key Sections and Highlights
1. AI Use in Service Provision
  • Sellers must notify and obtain buyer approval before using AI systems in delivering services.
  • Sellers are responsible for accuracy, quality assurance, and record keeping related to AI use.
  • Use of banned AI systems (e.g., DeepSeek products) is prohibited, with immediate notification and removal required if discovered.
2. Development and Provision of AI Systems
  • Sellers must develop and deliver AI systems per detailed Statements of Requirement specifying intended use, environment, integration, training, testing, acceptance, and reporting.
  • Transparency of underlying AI models is required, including country of origin, ownership, and data location.
  • Sellers must notify buyers immediately of AI incidents, hazards, or malfunctions and comply with buyer directions.
  • A “circuit breaker” mechanism must be included to allow immediate human intervention or shutdown of the AI system (see the sketch after this list).
  • Fairness clauses require AI systems to avoid discrimination, harm, or reputational risk, with optional provisions addressing inclusivity and ethical operation.
3. Compliance and Privacy
  • Sellers must comply with applicable laws, policies, and privacy obligations, including handling eligible data breaches and supply chain security.
4. Oversight, Explainability, and Transparency
  • Human oversight is mandated, with requirements for competence and expertise.
  • Transparency and explainability standards must be met, including regular reporting.
5. Training, Testing, and Monitoring
  • Clauses cover training data requirements, ongoing testing, monitoring, and optional acceptance and pilot testing phases.
  • User manuals and training for AI system users are optional but recommended.
6. Updates, Security, and Record-Keeping
  • Provisions for iterative updates, source code access (optional), digital security, and detailed record-keeping including audit and logging capabilities.
7. Intellectual Property and Data Use
  • Rights and warranties related to contract materials, third-party software, and buyer data are defined.
  • Seller use of buyer data is restricted to contract terms, with prohibitions on unauthorized data mining and requirements for data security.
8. Handover and Destruction
  • Procedures for handover, destruction, or return of AI datasets and buyer data at contract end.
9. Risk Management (Optional)
  • Sellers may be required to comply with buyer AI policies and risk management systems aligned to ISO/IEC 42001:2023 standards.
  • Sellers must establish, implement, and maintain AI risk management systems with due diligence and record retention.
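
To make the “circuit breaker” clause in item 2 above more concrete, here is a minimal sketch, under my own assumptions, of how a buyer-controlled halt switch and audit log could be wrapped around an AI component (it also touches the record-keeping idea in item 6). The class and method names (GovernedAIService, trip_breaker and so on) are hypothetical and are not taken from the model clauses themselves.

```python
# Illustrative sketch only: a buyer-controlled "circuit breaker" wrapper
# around an AI component, with audit logging. All names are hypothetical
# and not taken from the Australian model clauses themselves.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")


class CircuitBreakerTripped(Exception):
    """Raised when the AI system has been halted by human intervention."""


class GovernedAIService:
    def __init__(self, model):
        self.model = model          # the underlying AI model (assumed callable)
        self.halted = False         # circuit breaker state

    def trip_breaker(self, operator: str, reason: str) -> None:
        """Immediate human intervention: stop all further AI output."""
        self.halted = True
        log.warning("Circuit breaker tripped by %s at %s: %s",
                    operator, datetime.now(timezone.utc).isoformat(), reason)

    def reset_breaker(self, operator: str) -> None:
        """Resume operation only after an explicit human decision."""
        self.halted = False
        log.info("Circuit breaker reset by %s", operator)

    def predict(self, request):
        if self.halted:
            raise CircuitBreakerTripped("AI system is halted pending human review")
        result = self.model(request)    # delegate to the AI model
        # Record-keeping: log every request/response pair for audit purposes
        log.info("AI decision at %s: request=%r result=%r",
                 datetime.now(timezone.utc).isoformat(), request, result)
        return result


# Usage: wrap a dummy model, trip the breaker on an incident, and observe
# that further calls are refused until a human resets it.
if __name__ == "__main__":
    service = GovernedAIService(model=lambda req: {"score": 0.42, "input": req})
    print(service.predict({"applicant_id": 101}))
    service.trip_breaker(operator="duty_officer", reason="suspected biased output")
    try:
        service.predict({"applicant_id": 102})
    except CircuitBreakerTripped as e:
        print("Blocked:", e)
```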

The above principles have already been adopted under DGPSI, the gold standard framework for DPDPA compliance, in the following form.

a) The responsibility for the consequences of an AI system rests with the Data Fiduciary.

b) In the risk assessment of an AI algorithm, the disclosures and assurances of the supplier have to be taken into account, including asking for test-related assurances similar to those required under FDA-CFR compliance.

c) Developers need to provide indemnities to the users if the source code is proprietary

d) If the risk is unknown and indeterminate, the user is considered a “Data Fiduciary” even if it is otherwise a data processor that does not determine the main purpose of processing DPDPA-protected data.

Naavi

Refer here:

  1. Details from Digital Transformation Agency
  2. Model clauses suggested in EU


“Big Beautiful Bill”

Mr Donald Trump, who has already seen a few ups and downs in his policy drive, has now entered the AI regulation domain.

What is termed the draft “One Big Beautiful Bill” (OBBB) bans all state-level AI regulations for the next 10 years.

There is an expectation that this would bring a “Freedom first” approach to US AI laws (Refer here). Given the fickleness of Mr Donald Trump, I do not think this would turn out to be a “freedom to innovate”. It could only be a different approach to AI regulation in the USA.

OBBB does not mean that AI will not be regulated in the USA. It only means that the US Federal Government will be the only regulatory authority for AI. This is an attempt to centralize AI regulation in the USA.

OBBB also does not mean that Indian businesses can use any AI algorithm without responsibility. Any user of AI in India will have vicarious obligations under ITA 2000 and DPDPA 2023 and will have to absorb the risks. If the risk is “unknown” because the US sheds the ethical AI development model, the users will be considered “Significant Data Fiduciaries” and “High Risk Intermediaries” and could carry more liability under law than they may recognize.

Let us remember that Indian companies are regulated by Indian laws, not US or EU laws, when it comes to operating on Indian data. Hence the responsibility for ITA 2000 and DPDPA compliance remains paramount, and if unregulated AI is dumped on India, Indian user organizations have to be more careful than before to ensure compliance with Indian laws. Taking refuge in the fact that the US does not require “Disclosure”, “Transparency” or “Accountability” does not help Indian “Intermediaries” or “Data Fiduciaries” comply with Indian laws.

Since OBBB would directly affect the federal nature of the US, it could face a challenge in the courts. But beyond this internal clash between Republican and Democrat states, it is unlikely to have the global impact that some predict.

The US already had the confusion of 50 states each having different regulations in the privacy area, and if there were 50 AI regulations as well, it would have been a problem for the world. This unification of regulation is therefore welcome, since the world does not see the State of California as different from the State of New York.

Hopefully there will be more such unification of laws related to the “Internet Economy”.

Naavi


Bill Alert System goes wrong

There are many services in the fintech arena where the service provider tries to assist the account holder in paying pending bills. For this purpose, the service provider takes permission to view the account holder’s SMS messages and periodically reads them.

Under DPDPA, obtaining consent for this permission is mandatory, and it is covered under the DPDPA consent regulations. The consent is purpose-specific and has to be treated as closed once the purpose is served.
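
To illustrate what “purpose-specific” consent means in practice, here is a minimal sketch, under my own assumptions, of how an app could check that a live, purpose-matched consent exists before reading SMS data, and refuse access once the purpose is served or the consent is withdrawn. The names (ConsentRecord, read_sms_for_bills and so on) are hypothetical and are not drawn from the DPDPA rules or from any actual fintech implementation.

```python
# Illustrative sketch only: enforcing purpose-specific, time-bound consent
# before reading a user's SMS data. Names and structure are hypothetical
# and are not taken from DPDPA rules or any actual fintech implementation.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                                   # e.g. "bill_reminders"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None
    purpose_served_at: Optional[datetime] = None   # consent closed once purpose is served

    def is_valid_for(self, purpose: str) -> bool:
        """Consent is usable only for the stated purpose and only while open."""
        return (self.purpose == purpose
                and self.withdrawn_at is None
                and self.purpose_served_at is None)


def read_sms_for_bills(consent: ConsentRecord, sms_inbox: list[str]) -> list[str]:
    """Read SMS only if a live, purpose-matched consent exists."""
    if not consent.is_valid_for("bill_reminders"):
        raise PermissionError("No valid consent for this purpose; SMS must not be read")
    return [msg for msg in sms_inbox if "bill" in msg.lower()]


# Usage: once the purpose is served (or consent is withdrawn), further reads fail.
if __name__ == "__main__":
    consent = ConsentRecord(user_id="u1", purpose="bill_reminders",
                            granted_at=datetime.now(timezone.utc))
    print(read_sms_for_bills(consent, ["Your electricity bill of Rs 1200 is due"]))
    consent.purpose_served_at = datetime.now(timezone.utc)
    try:
        read_sms_for_bills(consent, ["Another bill reminder"])
    except PermissionError as e:
        print("Blocked:", e)
```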

I have recently come across such “Bill Alerts” from CRED on the CRED application linked to my mobile number. These bills were not related to me, and had I mistakenly clicked “Pay Now”, the payment could have been effected.

I therefore consider the message an “attempt to induce me to make a payment to a third party”, which is an offence under ITA 2000 and BNS.

Last time, CRED had indicated that the message could have been picked up from my SMS store, and I had also presumed that the mistake might have been on the HESCOM side in wrongly linking my mobile number with another account.

I am now given to understand that the mobile number associated with the account in HESCOM is not mine. However, I have received the CRED alert again today, and I am not able to view the corresponding SMS in my SMS inbox.

Under the circumstances, I feel that CRED has picked up the bill from a source other than my SMS inbox.

If so, the mistake lies with CRED and not HESCOM, and I owe HESCOM an apology. I am yet to receive confirmation, but I offer my advance apologies to HESCOM if the mistake does lie with CRED.

We can now surmise that CRED holds my account as well as the account of the individual whose bills are coming to my CRED account. Perhaps CRED has misconfigured the accounts, or its technical system is sending the bills of one client to another. Alternatively, it is possible that HESCOM has corrected its mistake but there is a cache maintained by CRED through which bills related to another account are being diverted to my account.

I have raised a query with CRED now and am expecting a reply.

Once the DPDPA 2023 penalties kick in, these are mistakes for which penalties of up to Rs 250 crore may be applicable. Until then, the remedy lies under ITA 2000, which is even more serious. I hope corporate entities understand their responsibilities when they take “data access permissions”, particularly if they are not capable of managing the data collected.

While I have used the example of CRED here because it comes from my personal experience, this could be happening with others as well, including banks.

Looking forward to getting more information on this case.


Niti Aayog not clarifying about Mandatory Darpan Registration

Naavi has been repeatedly requesting Niti Aayog to clarify that registration on the Darpan Portal is not mandatory for all Section 8 companies. Unfortunately, Niti Aayog does not respond to the query and prefers to remain silent.

In the meantime, some REs like PayU and Razorpay consider registration on the Darpan Portal mandatory for a Section 8 company and are not completing the KYC process.

It is highly irresponsible of Niti Aayog and RBI not to make a proper announcement that Darpan registration is not mandatory for KYC. At the same time, it is disappointing to note that companies like PayU and Razorpay are not willing to complete KYC without the Darpan Portal registration.

Further, registering a Section 8 company like FDPPI on the Darpan Portal is not possible, as the portal returns an error page every time.

I hope some senior person like Mr Amitabh Kant looks into this issue and sets right this anomaly.

Naavi


optimum.net spam

I am informed that spam mails are being sent from the optimum.net server to many people using the email ID Vijayashankar Nagarajarao (archer83@optimum.net).

Kindly ignore them and if possible file a complaint with abuse@optimum.net.

I don’t use any service from optimum.net, and the email archer83@optimum.net does not belong to me. This scam seems to originate from a compromised optimum.net server that is extracting the email addresses of customers’ contacts and using them for spamming.

Naavi


UIDAI website having problems

It is observed that the UIDAI website is experiencing some serious technical issues. It is serving Aadhaar card downloads of persons other than the one for whom the request was submitted and the OTP authenticated.

Though the downloaded file is protected by a password, this is a serious flaw which needs to be corrected.
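
For illustration, the flaw described above is essentially a missing authorization check: the document released should be bound to the identity that was OTP-authenticated. Below is a minimal sketch under my own assumptions; the function and field names are hypothetical and do not describe UIDAI’s actual systems.

```python
# Illustrative sketch only: the kind of server-side check whose absence the
# above flaw suggests. Function and field names are hypothetical and do not
# describe UIDAI's actual systems.

def release_download(authenticated_aadhaar: str, requested_document: dict) -> bytes:
    """Serve an e-Aadhaar file only if it belongs to the OTP-authenticated holder."""
    if requested_document["aadhaar_number"] != authenticated_aadhaar:
        # Authorization failure: never fall back to serving someone else's card
        raise PermissionError("Document does not belong to the authenticated user")
    return requested_document["pdf_bytes"]


# Usage: a mismatched document must be refused, not downloaded.
if __name__ == "__main__":
    doc = {"aadhaar_number": "XXXX-XXXX-1234", "pdf_bytes": b"%PDF-..."}
    try:
        release_download("XXXX-XXXX-9999", doc)
    except PermissionError as e:
        print("Blocked:", e)
```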

UIDAI has recognized the bug and has posted a message on the website. I hope it will be set right soon.

This could be considered a “Potential Data Breach” and needs to be addressed as such under ITA 2000/DPDPA.

Naavi
