In contrast to the US approach favoured by Donald Trump, which leans towards freedom from regulation, Australia has released a set of AI model clauses that is useful for any AI user. These clauses are more practical and can be readily adopted.
The Australian guideline recognizes the following three scenarios and suggests contractual regulation between the buyer and the seller:
- When an organization procures services from a seller that uses AI in providing those services (bespoke AI systems);
- When an organization develops AI (such as automated decision-making tools) within its own organization with assistance from a consultant; or
- When an organization procures software with embedded AI capabilities.
This approach of using “Contractual Controls” on the use of AI is more in sync with the Indian approach and is in tune with the requirements of ITA 2000 and DPDPA 2023.
Key Sections and Highlights
1. AI Use in Service Provision
- Sellers must notify and obtain buyer approval before using AI systems in delivering services.
- Sellers are responsible for accuracy, quality assurance, and record keeping related to AI use.
- Use of banned AI systems (e.g., DeepSeek products) is prohibited, with immediate notification and removal required if discovered.
2. Development and Provision of AI Systems
- Sellers must develop and deliver AI systems per detailed Statements of Requirement specifying intended use, environment, integration, training, testing, acceptance, and reporting.
- Transparency of underlying AI models is required, including country of origin, ownership, and data location.
- Sellers must notify buyers immediately of AI incidents, hazards, or malfunctions and comply with buyer directions.
- A “circuit breaker” mechanism must be included to allow immediate human intervention or shutdown of the AI system.
- Fairness clauses require AI systems to avoid discrimination, harm, or reputational risk, with optional provisions addressing inclusivity and ethical operation.
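The “circuit breaker” requirement above can be illustrated in code. The sketch below is a hypothetical illustration of the idea, not anything prescribed by the Australian clauses: a wrapper that lets a human operator immediately halt an AI system, after which no further output is produced. All names (`AICircuitBreaker`, `trip`, `predict`) are assumptions chosen for this example.

```python
import threading

class AICircuitBreaker:
    """Hypothetical wrapper giving a human operator an immediate
    shutdown ("circuit breaker") over an AI system's outputs."""

    def __init__(self, model_fn):
        self._model_fn = model_fn          # the underlying AI inference function
        self._tripped = threading.Event()  # set once a human intervenes
        self._reason = ""

    def trip(self, reason: str) -> None:
        # Human intervention: once tripped, the system stays halted.
        self._reason = reason
        self._tripped.set()

    def predict(self, *args, **kwargs):
        if self._tripped.is_set():
            raise RuntimeError(f"AI system halted by operator: {self._reason}")
        return self._model_fn(*args, **kwargs)
```

In a real contract-compliant deployment, the `trip` action would typically also be logged and notified to the buyer, but the essential property is that the halt is immediate and cannot be overridden by the system itself.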
3. Compliance and Privacy
- Sellers must comply with applicable laws, policies, and privacy obligations, including handling eligible data breaches and supply chain security.
4. Oversight, Explainability, and Transparency
- Human oversight is mandated, with requirements for competence and expertise.
- Transparency and explainability standards must be met, including regular reporting.
5. Training, Testing, and Monitoring
- Clauses cover training data requirements, ongoing testing, monitoring, and optional acceptance and pilot testing phases.
- User manuals and training for AI system users are optional but recommended.
6. Updates, Security, and Record-Keeping
- Provisions for iterative updates, source code access (optional), digital security, and detailed record-keeping including audit and logging capabilities.
7. Intellectual Property and Data Use
- Rights and warranties related to contract materials, third-party software, and buyer data are defined.
- Seller use of buyer data is restricted to contract terms, with prohibitions on unauthorized data mining and requirements for data security.
8. Handover and Destruction
- Procedures for handover, destruction, or return of AI datasets and buyer data at contract end.
9. Risk Management (Optional)
- Sellers may be required to comply with buyer AI policies and risk management systems aligned to ISO/IEC 42001:2023 standards.
- Sellers must establish, implement, and maintain AI risk management systems with due diligence and record retention.
The above principles have already been adopted under DGPSI, the gold standard DPDPA compliance framework, in the following form:
a) The responsibility for the consequences of an AI system lies with the Data Fiduciary.
b) In the risk assessment of an AI algorithm, the disclosures and assurances of the supplier have to be taken into account, including asking for test-related assurances, as in FDA-CFR compliance.
c) Developers need to provide indemnities to the users if the source code is proprietary.
d) If the risk is unknown and indeterminate, the user is considered a “Data Fiduciary” even if it is otherwise a data processor that does not determine the main purpose of processing DPDPA-protected data.
Naavi
Refer here: