
Agentic Commerce and the Law: Are we ready for AI-driven transactions?


Namrata Dubey, Sharda Balaji


In this digital age, it can be said with some certainty that most of the urban population with access to the internet and a basic familiarity with everyday gadgets uses artificial intelligence in some form or another, whether to manage smart home devices, draft a listicle, or simply generate images in the likeness of Studio Ghibli characters.

Fintech giants, having realised the usefulness of AI, such as its predictive capabilities, automation potential, and ability to personalise user experiences at scale, have started developing frictionless financial products. One particularly interesting development is the emergence of “agentic commerce”, that is, a system where AI agents act on behalf of users to initiate, authorise, and complete financial transactions. This means that a user’s AI partner will no longer be limited to offering suggestions or sending reminders. Rather, these intelligent agents will be able to autonomously place grocery orders, pay bills, or subscribe to services without human involvement, based on learned user preferences and behavioural patterns.

Recently, Mastercard introduced “Mastercard Agent Pay”, which, by its own description, will integrate seamless payment experiences into the tailored recommendations and insights already provided on conversational platforms. This means that AI agents will not only gather user preferences on conversational platforms, but will go a step further by autonomously making purchases based on what they interpret the user to want. Visa has joined the bandwagon with “Visa Intelligent Commerce”, an initiative that, per Visa’s description, will empower AI agents to deliver personalised and secure shopping experiences for consumers at scale, equipping them to seamlessly manage key phases of the shopping process, from browsing and selection to purchase and post-purchase management. With the introduction of “Mastercard Agent Pay” and “Visa Intelligent Commerce” by major industry stakeholders, the prospect of delegating daily financial decisions to an AI-powered digital agent is rapidly becoming a reality rather than a mere possibility.

Despite its revolutionary nature, agentic commerce raises multiple legal concerns, some of which are detailed below:

One of the key issues is that, as per the Indian Contract Act, 1872 (“ICA”), a legally binding contract requires that consent be given by a person who is competent to contract, i.e., one who (i) has attained majority under Indian law, (ii) is of sound mind, and (iii) is not disqualified from contracting under Indian law. This is to ensure that the human principal is aware of and agrees to the terms of the contract. However, in the case of agentic commerce, where AI agents independently initiate and complete transactions on behalf of users, it is only logical to ask whether a legally binding contract is even formed.

Additionally, Section 184 of the ICA defines who may become an “agent”: “As between the principal and third persons, any person may become an agent, but no person who is not of the age of majority and of sound mind can become an agent, so as to be responsible to his principal according to the provisions in that behalf herein contained.” This clearly indicates that an agent, like the principal, must be a person of sound mind who has attained majority. Therefore, AI agents, lacking legal personhood, age, and mental capacity, do not qualify as agents under Indian law, and any contract entered into through an AI-powered digital agent would be void ab initio.

This has real legal and commercial implications. For example, if a transaction is initiated by an AI-powered agent independently, based on learned user preferences and without any active input or consent from the user, the user or the goods/service provider may simply claim that no binding agreement was ever formed, citing the AI agent’s lack of legal personhood or contractual capacity. This may lead to users repudiating payment obligations, or to providers denying goods or services even after payment has been made. Such a scenario raises serious questions about the enforceability of the contract in the event of a dispute.

A possible solution to this conundrum would be to adopt an approach similar to that of the United States under the Uniform Electronic Transactions Act, 1999 (“UETA”), which explicitly recognises that contracts may be formed and executed via electronic agents, and that an individual may be bound by the actions of their electronic agent if it operates within the scope of the system designed by the individual. More specifically, Section 14 of the UETA states that a contract may be formed by the interaction of electronic agents, or between an electronic agent and an individual, without review or intervention by a human.

While the above may answer one legal question, several others remain. For instance, Indian contract law is predicated on the principle of “consensus ad idem”, that is, a meeting of minds. In agentic commerce, transactions may be executed without any active or contemporaneous input from the human principal. This effectively means that the human principal never has the opportunity to express his or her consent, making it impossible for the parties to be “ad idem”. This may become a complication when the human principal disputes a transaction initiated by an AI agent.

Another problem concerns what happens when AI agents make a mistake in placing orders due to bugs, algorithmic errors, or unexpected outputs, or when they fall prey to misleading advertisements, fake orders, or fraudulent schemes. In such cases, determining liability becomes a challenging issue. Should the responsibility fall on the consumer, in accordance with the principle of “buyer beware”, even when the consumer played no active role in the transaction, or should it rest with the developer or service provider of the AI system, who may have designed or programmed the agent to act autonomously?

In the absence of clear-cut legal or regulatory answers to the questions posed above, it is of utmost importance that innovators designing products in the agentic commerce space operate responsibly and with reasonable foresight. This includes building systems that anticipate legal ambiguities, prioritise user autonomy, and incorporate safeguards such as the following (an illustrative sketch of how these guardrails might fit together appears after the list):

- Implement real-time notifications and confirmations so that users can validate a transaction before it is finalised.

- Create and maintain audit trails so that users can retrospectively review the steps taken by AI agents and the reasoning behind them.

- Create post-transaction procedures, including grace periods during which users can flag unintended or mistaken transactions and request reversals at no cost to them.

- Create a list of activities that AI agents shall not be permitted to undertake, such as taking loans, buying houses or vehicles, or purchasing vacation plans.
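For illustration only, the sketch below (in Python) shows one way such guardrails might sit between an AI agent and the payment rail. It is a minimal sketch under stated assumptions, not a description of any existing product: every name in it (ProposedTransaction, PROHIBITED_CATEGORIES, request_user_confirmation, authorise, audit_trail, GRACE_PERIOD) is hypothetical, and it assumes a system in which the agent must route each proposed purchase through a policy layer before payment is executed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical policy layer sitting between the AI agent and the payment rail.
# Activities the agent is never permitted to undertake (cf. the last safeguard above).
PROHIBITED_CATEGORIES = {"loan", "real_estate", "vehicle", "vacation_package"}

@dataclass
class ProposedTransaction:
    merchant: str
    category: str
    amount_inr: float
    description: str
    created_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class AuditEntry:
    transaction: ProposedTransaction
    decision: str
    reason: str
    decided_at: datetime = field(default_factory=datetime.utcnow)

audit_trail: list[AuditEntry] = []   # reviewable log of every agent decision
GRACE_PERIOD = timedelta(hours=24)   # window to flag and reverse a transaction

def request_user_confirmation(txn: ProposedTransaction) -> bool:
    """Stand-in for a real-time push notification / in-app confirmation."""
    answer = input(f"Approve payment of INR {txn.amount_inr:.2f} to {txn.merchant}? [y/N] ")
    return answer.strip().lower() == "y"

def authorise(txn: ProposedTransaction) -> bool:
    """Apply the safeguards before the agent is allowed to pay."""
    if txn.category in PROHIBITED_CATEGORIES:
        audit_trail.append(AuditEntry(txn, "blocked", "category on deny-list"))
        return False
    if not request_user_confirmation(txn):  # human validates before finalisation
        audit_trail.append(AuditEntry(txn, "declined", "user did not confirm"))
        return False
    audit_trail.append(AuditEntry(txn, "approved", "user confirmed in real time"))
    return True

def within_grace_period(entry: AuditEntry) -> bool:
    """Post-transaction check: can the user still flag this for a free reversal?"""
    return datetime.utcnow() - entry.decided_at <= GRACE_PERIOD
```

In practice, the confirmation step would likely be an asynchronous notification rather than a blocking prompt, and the audit trail would be persisted to durable storage; the sketch only indicates where each of the listed safeguards would sit in the flow.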

Namrata Dubey is a Senior Associate at NovoJuris Legal.

Sharda Balaji (Founder) provided inputs.

Disclaimer: The opinions expressed in this article are those of the author(s). They do not necessarily reflect the views of Bar & Bench.
