Examining Legal Preparedness for AI-Driven Agentic Commerce Transactions

In our current digital era, a significant portion of the urban populace, equipped with internet access and basic technological know-how, interacts with artificial intelligence (AI) in various aspects of daily life. This ranges from managing smart home devices to generating creative content like listicles or artistic images. Recognizing the immense potential of AI, particularly its predictive capabilities, automation power, and ability to deliver personalized user experiences at scale, the financial technology (fintech) sector has proactively begun to integrate AI into the development of frictionless financial products.
A notable innovation in this domain is the emergence of "agentic commerce." This system empowers AI agents to act on behalf of users to independently initiate, authorize, and complete financial transactions. Consequently, a user's AI assistant will transcend its role of merely offering suggestions or sending reminders. Instead, these intelligent agents will gain the autonomy to place grocery orders, settle bills, or subscribe to services without direct human intervention, guided by learned user preferences and behavioral patterns.
Leading payment industry giants have already embraced this trend. Mastercard recently introduced "Mastercard Agent Pay," designed to seamlessly integrate payment experiences with tailored recommendations provided on conversational platforms, enabling AI agents to not only understand user preferences but also autonomously make purchases. Similarly, Visa has launched "Visa Intelligent Commerce," an initiative aimed at empowering AI agents to deliver personalized and secure shopping experiences, managing key phases from browsing to post-purchase management. With such significant industry players backing these developments, the prospect of delegating everyday financial decisions to AI-powered digital agents is rapidly transforming from a mere possibility into an impending eventuality.
Despite its revolutionary potential, agentic commerce gives rise to several complex legal concerns, especially within the framework of Indian law. These issues primarily revolve around contract validity, agent status, and liability when AI agents conduct transactions autonomously.
A primary legal hurdle stems from the Indian Contract Act, 1872 (ICA), which stipulates that a legally binding contract requires the consent of a person competent to contract. Under Section 11 of the ICA, competency requires having attained the age of majority, being of sound mind, and not being disqualified from contracting by any applicable law. This requirement ensures the human principal is aware of and agrees to the contract's terms. However, in agentic commerce, where AI agents independently initiate and finalize transactions, the fundamental question arises: is a legally binding contract even formed?
Furthermore, Section 184 of the ICA provides that no person who is not of the age of majority and of sound mind can become an agent so as to be responsible to their principal. This indicates that an agent, much like the principal, must possess legal personhood along with the requisite age and mental capacity. AI agents, lacking these attributes, do not qualify as agents under Indian law. Consequently, any contract purportedly entered into by an AI-powered digital agent could be considered void ab initio (invalid from the outset).
The legal and commercial implications of this are substantial. For instance, if an AI agent initiates a transaction based on learned preferences without active user input, either the user or the service/goods provider could argue that no binding agreement was ever formed due to the AI's lack of legal personhood or contractual capacity. This could lead to scenarios such as rejected payment obligations or the denial of services or goods even after payment has been made by the user, thereby raising serious questions about the enforceability of such AI-mediated contracts in the event of a dispute.
Another significant challenge relates to the principle of "consensus ad idem" – a meeting of minds – which is a cornerstone of Indian contract law. In agentic commerce, transactions might be executed without any active or contemporaneous input from the human principal. This lack of direct involvement means the human principal may never have the opportunity to express explicit consent at the point of transaction, making it difficult to establish a genuine consensus ad idem between the parties. This could become a critical complication, particularly when a human principal disputes a transaction initiated by an AI agent.
The issue of liability also presents a complex problem. What happens if AI agents make errors in placing orders due to software bugs, algorithmic flaws, or unexpected outputs? Or what if these AI agents fall victim to misleading advertisements, fake orders, or fraudulent schemes? Determining who bears responsibility in such situations is challenging. Should the onus fall on the consumer, under the principle of caveat emptor ("buyer beware"), even if they had no active role in the specific transaction? Or should liability rest with the developer or service provider of the AI system, who designed or programmed the agent to act autonomously?
One potential path to address the contractual validity issue could be to consider approaches similar to that of the United States' Uniform Electronic Transactions Act, 1999 (UETA). The UETA explicitly recognizes that contracts can be formed and executed via electronic agents and that an individual can be bound by their electronic agent's actions if it operates within the scope of the system designed by that individual. Specifically, Section 14 of UETA states that a contract may be formed by the interaction of electronic agents, or between an electronic agent and an individual, without requiring human review or intervention. Adopting a similar framework could offer a response to some of the legal questions surrounding AI agents in commerce.
In the current absence of clear-cut legal or regulatory frameworks addressing these novel questions, it is paramount that innovators designing products in the agentic commerce space operate with a strong sense of responsibility and reasonable foresight. This involves a proactive approach to building systems that anticipate potential legal ambiguities.
Developers should prioritize user autonomy and incorporate robust safeguards. These measures could include: (a) implementing real-time notifications and confirmations to ensure users can validate transactions before finalization; (b) creating and maintaining comprehensive audit trails, allowing users to retrospectively review the actions taken by AI agents and understand the reasoning behind them; (c) establishing clear post-transaction procedures, such as grace periods during which users can flag unintended or mistaken transactions and request reversals without incurring costs; and (d) creating a defined list of activities that AI agents are not permitted to undertake, for example, taking out loans, purchasing real estate or vehicles, or booking expensive vacation plans.
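The safeguards listed above can be illustrated in code. The following Python sketch is illustrative only: names such as `AgentGuardrails` and `TransactionRequest`, the prohibited-category list, and the threshold values are hypothetical assumptions for this article, not part of any real payment platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Hypothetical category list mirroring safeguard (d): activities the
# agent is never permitted to undertake on its own.
PROHIBITED_CATEGORIES = {"loan", "real_estate", "vehicle", "luxury_travel"}

@dataclass
class TransactionRequest:
    """A purchase the AI agent proposes on the user's behalf (illustrative)."""
    category: str
    amount: float
    description: str
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class AuditEntry:
    """One record in the audit trail required by safeguard (b)."""
    request: TransactionRequest
    decision: str
    reason: str

class AgentGuardrails:
    """Evaluates agent-initiated transactions against user-set limits."""

    def __init__(self, confirmation_threshold: float, grace_period: timedelta):
        self.confirmation_threshold = confirmation_threshold
        self.grace_period = grace_period
        self.audit_trail: List[AuditEntry] = []

    def evaluate(self, request: TransactionRequest) -> str:
        # (d) hard-blocked categories: never executed autonomously
        if request.category in PROHIBITED_CATEGORIES:
            return self._log(request, "blocked", "category on prohibited list")
        # (a) large transactions require explicit human confirmation
        if request.amount > self.confirmation_threshold:
            return self._log(request, "needs_confirmation",
                             "amount above confirmation threshold")
        return self._log(request, "auto_approved", "within delegated authority")

    def _log(self, request: TransactionRequest, decision: str, reason: str) -> str:
        # (b) every decision is recorded for retrospective user review
        self.audit_trail.append(AuditEntry(request, decision, reason))
        return decision

    def within_grace_period(self, entry: AuditEntry, now: datetime) -> bool:
        # (c) users may flag and reverse a transaction inside the grace period
        return now - entry.request.timestamp <= self.grace_period

# Example: a routine grocery order is auto-approved, while a loan is blocked.
guard = AgentGuardrails(confirmation_threshold=5000.0,
                        grace_period=timedelta(hours=24))
print(guard.evaluate(TransactionRequest("grocery", 1200.0, "weekly order")))
print(guard.evaluate(TransactionRequest("loan", 50000.0, "personal loan")))
```

The design choice worth noting is that the guardrail layer sits between the agent and the payment rail, so the human principal's confirmation is captured before finalization for anything outside the delegated scope, which speaks directly to the consensus ad idem concern discussed earlier.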