
The Rise of Responsible AI Jobs - Indeed Hiring Lab

Published 7 hours ago · 7 minute read

Mentions of Responsible AI continue to rise alongside other AI-related job postings, both in the US and globally.

Responsible AI mentions in job descriptions have risen as a share of all AI-related postings, from close to zero in 2019 to 0.9% globally in 2025 (on average among the 22 countries in our sample), suggesting a rising focus on the ethical integration of AI into society.  

According to a Hiring Lab analysis of AI- and Responsible AI-related keywords in job postings, countries including the Netherlands, Switzerland, and Luxembourg feature the highest share of Responsible AI mentions, while Japan, Mexico, and Brazil lag below the global average. Country-specific local regulation efforts alone do not seem to account for these differences. And occupations where Responsible AI is more commonly noted are typically human-centered occupations, including legal, banking & finance, and education & instruction.

As AI technology continues to advance, concerns about its risks are also rising (including inaccuracy, cybersecurity, and intellectual property infringement). Despite these concerns, a gap persists between companies that recognize AI risks and are taking meaningful action, and those that are not (at least not outwardly).

This analysis follows a similar methodology to Hiring Lab’s AI tracker, which measures the volume of mentions of a basket of select AI-related keywords in job descriptions. While the standard AI tracker looks for keywords such as “artificial intelligence” and “natural language processing,” Responsible AI-specific keywords for purposes of this analysis include terms such as “Responsible AI” and “ethical AI.” The keywords are language-specific, but given the global nature of many of these roles, we also included the English keywords in our analyses of non-English speaking countries. 
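The keyword-matching step described above can be sketched as a simple case-insensitive search. The keyword baskets below are illustrative examples drawn from the terms named in this post, not Hiring Lab's full lists:

```python
import re

# Illustrative keyword baskets; the actual Hiring Lab lists are larger and
# language-specific (English terms are also applied in non-English markets).
AI_KEYWORDS = ["artificial intelligence", "natural language processing"]
RESPONSIBLE_AI_KEYWORDS = ["responsible ai", "ethical ai", "ai ethics",
                           "ai governance", "ai safety"]

def mentions_any(text: str, keywords: list[str]) -> bool:
    """Return True if any keyword appears in the text (case-insensitive,
    matched on word boundaries to avoid substring hits)."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(kw) + r"\b", lowered)
               for kw in keywords)

def classify_posting(description: str) -> tuple[bool, bool]:
    """Classify a job description as (is_ai_related, mentions_responsible_ai)."""
    is_ai = mentions_any(description, AI_KEYWORDS + RESPONSIBLE_AI_KEYWORDS)
    is_rai = mentions_any(description, RESPONSIBLE_AI_KEYWORDS)
    return is_ai, is_rai
```

The Responsible AI share is then the count of postings flagged by the second check divided by the count flagged by the first.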


Responsible AI postings have been growing steadily, from practically non-existent in 2019 to almost 1% of all AI postings by 2025 (AI postings, more broadly, have also grown in this period, though with some ups and downs). Although job postings explicitly referencing Responsible AI emerged after the more general AI-related postings, their growth has accelerated notably, particularly from 2024 onward. Looking across some selected large markets, the Netherlands stands out with the highest Responsible AI mention share (1.7%), followed by the UK (1.2%) and Canada (1.16%).

Interestingly, the rise in mentions of Responsible AI has been relatively uniform across countries. One might assume that heightened regulatory focus in the European Union, including the EU Artificial Intelligence Act and the earlier General Data Protection Regulation (GDPR), would lead European countries to emphasize Responsible AI more strongly than others, including the US. However, mentions of Responsible AI have also grown rapidly in the US, standing at 1% in March, slightly above the global average. Half of all global AI-related postings used for this analysis originated in the US.

The figure shows the evolution of Responsible AI share within AI-related job postings across eight selected markets (Netherlands, UK, Canada, U.S., Australia, France, Italy, and Germany). The y-axis represents the percentage of AI job postings that mention Responsible AI, based on a 12-month moving average. The x-axis spans from January 2019 to March 2025.
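The 12-month moving average behind the figure can be reproduced from monthly counts roughly as follows. This is a sketch with made-up numbers, assuming monthly counts of AI postings and Responsible AI mentions are available:

```python
import pandas as pd

# Hypothetical monthly counts of AI postings and Responsible AI mentions.
monthly = pd.DataFrame({
    "ai_postings":  [200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320],
    "rai_postings": [  1,   1,   2,   2,   2,   3,   3,   3,   4,   4,   4,   5,   6],
}, index=pd.date_range("2024-01-01", periods=13, freq="MS"))

# Sum both series over a trailing 12-month window, then take the ratio,
# so months with more postings carry proportionally more weight.
windowed = monthly.rolling(window=12).sum()
monthly["rai_share_pct"] = 100 * windowed["rai_postings"] / windowed["ai_postings"]
```

Summing counts before dividing (rather than averaging monthly ratios) keeps the share from being distorted by low-volume months.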

AI job postings in Luxembourg, the Netherlands, Switzerland, and Belgium reference Responsible AI more frequently than those in other nations, perhaps partly because many international organizations and regulatory bodies are based in these countries. In contrast, Singapore, India, Spain, and Poland show a relatively low level of Responsible AI mentions, despite having a large share of AI postings overall. In other words, even countries with strong demand for AI-related jobs do not necessarily exhibit a comparable focus on Responsible AI, a potential sign of varying national attitudes towards AI itself, and/or that Responsible AI practices are still emerging.

A scatter plot titled “The emphasis on Responsible AI varies across countries” showing the share of AI-related job postings among all job postings (X-axis) and the share of Responsible AI mentions within AI postings (Y-axis) by country as of March 2025. All values are based on 12-month moving averages.

Looking specifically at US-based AI jobs, the occupations with the highest shares of Responsible AI mentions between April 2024 and March 2025 included legal (3.5%), banking & finance (2.3%), research & development (2.3%), and education & instruction (1.7%). Broadly speaking, the occupational patterns identified for the US are consistent with global trends.

The scatter plot shows the relationship between the Responsible AI/AI share (%) on the y-axis and the AI/Total share (%) on the x-axis, by occupation in the U.S. The data covers the period from April 2024 to March 2025. Each bubble represents one occupational category. Bubble size is scaled according to the share of that occupation in total U.S. job postings.

While these are generally not the most technically intensive AI jobs, responsible and ethical AI use may be especially important in these occupations to minimize the potential for harm and/or to comply with existing laws. For example, while AI can help an attorney summarize or produce dense, highly technical documents, that work must adhere to certain legal and ethical guidelines. Banking & finance functions increasingly rely on AI for tasks including fraud detection and credit assessment, which require fairness, transparency, and accountability.

The data also show that Responsible AI mentions are more limited in the occupational segments with the highest demand for workers — including retail and food preparation & service — reflecting the limited AI adoption in these roles overall and its concentration in smaller, more-specialized fields.

Using data from Stanford’s Global AI Vibrancy Tool, we assessed whether stronger national regulatory environments are associated with higher rates of Responsible AI mentions in AI postings. Interestingly, regulation alone was not found to be correlated with Responsible AI mentions. Despite limited regulation, countries including the Netherlands, Switzerland, and Sweden exhibit relatively high Responsible AI shares. Conversely, while AI regulations in the UK are more stringent, its share of Responsible AI postings is somewhat middling.

The figure shows Responsible AI/AI share (%) (y-axis) against a composite indicator of AI legislation strength (x-axis), by country. The y-axis reflects a 12-month moving average of Responsible AI share as of March 2025.

This lack of correlation suggests other factors may be at play. One possibility is cross-country differences in political institutions, with some nations being more apt to consider and/or pass more legislation than others. Company-level dynamics also likely play a role. A public embrace of Responsible AI practices may be part of a broader brand or reputational strategy rather than any overt response to regulation. And multinational companies, in particular, often publish the same or similar job postings across several nations and may take a “one size fits all” approach to satisfying different national requirements, regardless of specific local regulations.

AI risks can be viewed as negative externalities, where companies impose costs on society without bearing them fully. Our research can complement other studies that attempt to answer such AI dilemmas. We don’t currently know what the optimal level of Responsible AI is, but it’s clear that awareness of its importance is rising. If the optimal level is above the current 1%, then companies are still under-investing in risk mitigation. 

To date, countries with relatively strict AI regulation have similar rates of Responsible AI mentions as less-regulated markets. This suggests that other factors, including reputational concerns or international business strategies, might be driving Responsible AI adoption as much as (or more than) regulatory requirements. Companies appear to be trying to internalize AI risks and address them based on market incentives or corporate and social responsibility, rather than regulatory mandates.  

We identify Responsible AI in job postings based on the presence of related keywords. These include frequently used terms such as “responsible AI”, “ethical AI”, “AI ethics”, “AI governance”, and “AI safety”, among others. The keyword list was developed using references from major public sources (e.g., UNESCO, OECD) and terminology frequently found in AI-related job descriptions. We capture keywords and phrases in English, French, German, and other languages used in our sample. 

Our analysis only covers countries where Indeed has operations, and thus does not include China, an acknowledged global AI powerhouse.

We analyzed job titles in addition to job descriptions, which revealed similar patterns in the use of Responsible AI-related language.

To ensure robustness, we also applied fixed occupational weights to account for shifts in occupational composition over time; results were broadly consistent with the unweighted findings reported in the main text.

We tested many alternative regulation indicators from Stanford University’s Global AI Vibrancy Tool. Regardless of the specific measure used, the relationship between regulation and the Responsible AI share remained broadly stable. For instance, neither legislative proceedings nor the legislation passed were statistically significant when regressed individually.
