BMC Medical Education, volume 25, Article number: 438 (2025)
The growing prevalence of mental health conditions, worsened by the COVID-19 pandemic, highlights the urgent need for enhanced psychiatric education. The distinctive nature of psychiatry – which centres heavily on communication skills, interpersonal skills, and interviewing techniques – indicates a need for further research into the use of generative artificial intelligence (GenAI) in psychiatric education.
Given that GenAI has shown promising outcomes in medical education, this study aims to discuss its possible roles in psychiatric education.
We conducted a scoping review to identify the role of GenAI in psychiatric education based on the educational framework of the Canadian Medical Education Directives for Specialists (CanMEDS).
Of the 12,594 papers identified, five studies met the inclusion criteria, revealing key roles for GenAI in case-based learning, simulation, content synthesis, and assessments. Despite these promising applications, limitations such as content inaccuracy, bias, and concerns regarding security and privacy were highlighted.
This study contributes to understanding how GenAI can enhance psychiatric education and suggests future research directions to refine its use in training medical students and primary care physicians. GenAI has significant potential to address the growing demand for mental health professionals, provided its limitations are carefully managed.
Generative artificial intelligence (GenAI) emulates human creativity and intelligence in the form of text, images, videos, code, and other modalities. According to Samala et al. (2024), it offers advantages such as cost-effectiveness, multilingual support, and efficiency [1]. Educators should therefore learn to use GenAI to improve education by streamlining the generation of educational resources and by creating lesson plans, case-based scenarios, and assessments that deepen learners’ cognitive processes.
The need for improved psychiatric education has become increasingly evident as mental health issues continue to rise globally. This rise is attributed to various factors, with the most significant being the COVID-19 pandemic, which triggered a 25% increase in the prevalence of anxiety and depression [2]. In response, some countries, such as Singapore, have extended mental health education to primary care physicians, underscoring the need for greater emphasis on psychiatric education [3]. However, current psychiatric education faces several challenges, including inadequate exposure to diverse patient experiences and limited resources for comprehensive training [4]. The introduction of GenAI may bridge these gaps and better prepare medical students, primary care physicians, and practitioners from other disciplines who are eager to pursue formal psychiatric education for future encounters with patients experiencing mental health-related issues.
GenAI applications in medicine can be categorised into two groups: clinical use and educational use. The clinical application of GenAI has been integrated into disease detection, diagnosis, and screening across various fields, such as radiology, cardiology, and gastrointestinal medicine [5]. GenAI has shown promising results in medical education in several areas, including self-directed learning and simulation [6].
In psychiatry, studies on the utility of GenAI primarily focus on clinical applications rather than educational purposes, such as its potential to provide diagnostic assistance, treatment considerations, and enhanced access to mental health support [7]. However, the question of whether GenAI can effectively support psychiatric education, given the unique nature of the field, has not been thoroughly addressed. The skills required of a psychiatrist place a greater emphasis on soft interpersonal skills than on procedural skills, marking a significant difference from other specialities such as surgery, radiology, and endocrinology. Psychiatrists must not only be familiar with diagnostic criteria and prescribe appropriate medications, but they must also master interviewing techniques and psychotherapy while grasping phenomenology and patients’ subjective experiences to formulate effective treatment plans [8]. Many elements of psychiatric practice rely on soft skills, including conducting a Mental State Examination, suicide risk assessment, motivational interviewing, and Cognitive Behavioural Therapy. Soft skills are often more challenging to teach and evaluate than technical skills, underscoring the distinctive nature of psychiatric education [9]. This indicates that the application of GenAI in psychiatric education may differ significantly from its use in other specialities; prior studies on GenAI in medical education broadly may not be directly applicable to psychiatry.
Moreover, there is a lack of standardised guidelines regarding the use of GenAI in psychiatric education and the management of sensitive patient information and data privacy. Furthermore, GenAI may find it challenging to replicate the nuanced clinical judgement inherent in psychiatry, which heightens concerns about its accuracy. Evidence regarding the effectiveness of GenAI in enhancing psychiatric education is also limited.
Through this scoping review, we aim to identify the educational applications of GenAI in psychiatry, the benefits and risks associated with its use in psychiatric education, and the specific areas in which further research is needed.
We conducted a review, limited to English publications from four databases, to identify GenAI’s role in psychiatric education according to the educational framework proposed by the World Psychiatric Association-Asian Journal of Psychiatry Commission [10].
The scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [11]. Our findings are presented in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist [12]. A literature search in the PubMed, PsycINFO, and Embase databases was performed on 12 September 2024, followed by a search in Web of Science on 16 February 2025. A fourth database was added due to the limited number of eligible papers available for review from the first three databases. We employed the following search strategy: (“Artificial intelligence” OR “Computer reasoning” OR “Machine intelligence” OR “Machine learning” OR “Deep learning” OR “Foundation model” OR “ChatGPT” OR “Generative AI”) AND (“Mental health” OR “Psych*” OR “Psychiatric”) AND (“Education” OR “Educational” OR “Training” OR “Learning” OR “Teaching”). We limited our search to English publications containing the keywords from the search strategy. The publication years for the identified papers range from 1933 to 2024.
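As an illustration only, the sketch below shows how the PubMed arm of this Boolean strategy could be run programmatically via NCBI’s E-utilities; the searches reported here were not necessarily conducted this way, and the record limit shown is an arbitrary assumption. PsycINFO, Embase, and Web of Science require their own vendor interfaces.

```python
import requests

# The Boolean search string from the Methods, reproduced verbatim.
SEARCH_QUERY = (
    '("Artificial intelligence" OR "Computer reasoning" OR "Machine intelligence" '
    'OR "Machine learning" OR "Deep learning" OR "Foundation model" OR "ChatGPT" '
    'OR "Generative AI") AND ("Mental health" OR "Psych*" OR "Psychiatric") '
    'AND ("Education" OR "Educational" OR "Training" OR "Learning" OR "Teaching")'
)

# Public NCBI E-utilities endpoint for searching PubMed.
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_search(query: str, max_records: int = 100) -> list[str]:
    """Return the PubMed IDs matching the Boolean query (up to max_records)."""
    params = {"db": "pubmed", "term": query, "retmax": max_records, "retmode": "json"}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]


if __name__ == "__main__":
    ids = pubmed_search(SEARCH_QUERY, max_records=20)
    print(f"Retrieved {len(ids)} PubMed IDs:", ids)
```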
Original research discussing the use of GenAI or ChatGPT in medical education was selected for review. Only original articles in English were included.
We excluded papers discussing the clinical use of GenAI, public/patient mental health education, technology in general, virtual reality, and augmented reality, as well as papers addressing a specific field of medical education not related to psychiatry (e.g., oncology, surgery), nursing, psychology, and the perception of GenAI. We also excluded conference papers, preprints, editorials, and other non-original research.
Search results from the databases were uploaded to EndNote. Duplicates were removed, followed by title and abstract screening using the inclusion and exclusion criteria. LQY and MC conducted the initial screening independently. Studies deemed eligible were downloaded, and full-text screening was carried out by LQY and MC. Any disagreements were resolved by consulting the senior reviewers (CWO and CSH).
Details of the reviewed papers, such as authors, year of publication, type of GenAI, methodology, outcome measure, and key findings, were charted in a table by LQY and MC (refer to Table 1). Through thematic analysis, the role of GenAI in psychiatric education was grouped into four themes, and the evidence was synthesised to achieve the aim of this study. The senior authors checked the tabulation of data, the themes, and the syntheses.
We identified 12,594 papers, of which 118 were duplicates. After title and abstract screening, 12,439 papers were excluded. The remaining 37 papers were reviewed in full text, and 32 were excluded because they did not meet the inclusion criteria (refer to Fig. 1). The final five papers, which discussed the use of GenAI in medical education, were selected for review.
Flow chart showing identification, screening and inclusion of papers reviewed according to PRISMA [11]
The types of GenAI used include ChatGPT (3.5 and 4), Claude 3, and Llama 3. All papers addressed the use of ChatGPT. Most examined the differences between content generated by GenAI and content written by humans or experts. The five papers explored four roles that GenAI can fulfil in medical education: case-based learning, simulation, content synthesis, and assessments [7, 13,14,15,16].
Two papers discussed leveraging GenAI to create case vignettes for case-based learning [7, 13]. In a study by Coşkun et al. (2024), a randomised controlled trial was conducted to compare the quality of ChatGPT-synthesised vignettes with those written by humans. There was no significant difference in quality between the two types of vignettes. The scores suggested that vignettes generated by ChatGPT may promote higher utilisation of clinical reasoning skills among students compared to those created by humans.
Furthermore, the study by Smith et al. (2023) highlighted the efficiency and variety of ChatGPT-generated case vignettes. Educators can modify these vignettes to teach various learning outcomes, such as the diagnostic process, treatment, determining whether psychopharmacological therapy is necessary, or the ethics surrounding the case. For instance, adjusting the prompt to exclude suggestions for treatment plans allows students to discuss and describe the factors to consider when prescribing a treatment [7]. The parameters of the case can also be varied, including its difficulty and complexity, offering flexibility in the creation of course materials. Psychiatric disorders often present a complicated array of symptoms, and a diverse range of case vignettes could better prepare medical students to diagnose and treat patients.
Other advantages include addressing ethical concerns associated with utilising real case vignettes and the capacity to produce case vignettes in various languages [7]. Real case vignettes must undergo rigorous scrutiny and documentation to guarantee informed consent and maintain patient confidentiality. This is particularly challenging in psychiatry, as patients may lack the mental capacity to consent, and ensuring anonymity can be problematic [17]. GenAI-generated vignettes do not face the same issues.
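To make the prompt adjustments described above concrete, the following minimal Python sketch shows how a parameterised vignette request (diagnosis, difficulty, and whether to include a treatment plan) might be sent to a chat-based model through the OpenAI API. The model name, system role, and prompt wording are our own illustrative assumptions rather than the prompts used in the reviewed studies, and any output would still require expert review.

```python
from openai import OpenAI  # assumes the openai Python SDK (v1+) and an API key are configured

client = OpenAI()


def generate_vignette(diagnosis: str, difficulty: str, include_treatment: bool) -> str:
    """Draft a psychiatric case vignette with adjustable teaching parameters."""
    instructions = (
        "Write a psychiatric case vignette for undergraduate medical students.\n"
        f"Suspected diagnosis: {diagnosis}.\n"
        f"Difficulty: {difficulty}.\n"
        "Include the history, mental state examination findings, and relevant risk factors.\n"
    )
    if not include_treatment:
        # Omitting the management plan lets students debate treatment options themselves.
        instructions += "Do NOT suggest any treatment or management plan.\n"

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a psychiatry educator drafting teaching cases."},
            {"role": "user", "content": instructions},
        ],
    )
    return response.choices[0].message.content


# Example: a moderately difficult depression vignette with no treatment plan included.
print(generate_vignette("major depressive disorder", "moderate", include_treatment=False))
```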
ChatGPT is recognised for its ability to conduct simulations by adopting character roles and providing real-time responses based on input [1]. One study by Smith et al. (2023) briefly noted that ChatGPT could simulate a patient, facilitating interactions with students to practise their clinical skills or their ability to identify risk factors [7]. A previous review indicated that simulation in psychiatry effectively enhances students’ competencies in performing psychiatric risk assessments [18]. However, few studies address the methods and effectiveness of GenAI-based patient simulation in psychiatric education, evidence that would facilitate its implementation.
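As a sketch of how such a simulation might be wired up, the snippet below lets a student interview a GenAI-simulated patient in a simple multi-turn console loop. The persona brief, behavioural constraints, and model choice are hypothetical examples and would need expert vetting and appropriate safeguards before any educational use.

```python
from openai import OpenAI  # assumes the openai Python SDK (v1+) and an API key are configured

client = OpenAI()

# Hypothetical persona brief; in practice an educator would author and vet this.
PATIENT_PERSONA = (
    "Role-play a 24-year-old patient presenting with low mood, insomnia, and passive "
    "suicidal ideation. Answer only as the patient, in the first person, and disclose "
    "risk factors only when asked directly. Never give medical advice or break character."
)


def run_interview() -> None:
    """Console loop in which a student interviews the simulated patient."""
    history = [{"role": "system", "content": PATIENT_PERSONA}]
    print("Type your questions (or 'quit' to end the interview).")
    while True:
        question = input("Student: ").strip()
        if question.lower() == "quit":
            break
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=history,
        )
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(f"Patient: {answer}")


if __name__ == "__main__":
    run_interview()
```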
ChatGPT can streamline the content synthesis process, enhancing efficiency while upholding academic standards [1]. It has been shown to provide accurate medical information and simplified summaries of complex research [7]. Specifically, one paper discussed using GenAI to create illness scripts for educational purposes [16]. An illness script is a structured format representing patient-oriented clinical knowledge about a disease. It is generally dynamic, depending on the physician’s requirements, but it can also be standardised for medical education. Illness scripts can teach medical students clinical reasoning skills, thereby improving diagnostic accuracy. In the study by Yanagita et al. (2024), 84% of the 184 illness scripts demonstrated relatively high accuracy.
Three papers explored the application of GenAI to develop assessment tools for medical students [13,14,15].
Coşkun et al. (2024) discussed the quality of ChatGPT-generated multiple-choice questions (MCQs). Out of 15 questions generated, six items met the criteria and were concluded to be effective. The quality of MCQs can be further refined by using more complex prompts, including factors such as learner type, competency level, and difficulty level [13].
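As an illustration of such prompt refinement, the template below parameterises learner type, competency, and difficulty and encodes common item-writing constraints (including avoiding “none of the above”; see the limitations discussed later). The wording is our own example rather than a prompt validated in the reviewed studies, and a similar template could be adapted for the script concordance tests discussed below.

```python
# Hypothetical prompt template; parameter values are illustrative only.
MCQ_PROMPT_TEMPLATE = """
Generate {n_items} single-best-answer multiple-choice questions on {topic}.
Target learner: {learner_type}.
Competency assessed: {competency}.
Difficulty: {difficulty}.
Constraints:
- One unambiguous correct answer and four plausible distractors per item.
- Do not use "all of the above" or "none of the above" as options.
- Provide an answer key with a one-sentence rationale for each item.
"""

prompt = MCQ_PROMPT_TEMPLATE.format(
    n_items=5,
    topic="diagnosis and risk assessment of major depressive disorder",
    learner_type="final-year medical student",
    competency="clinical reasoning",
    difficulty="moderate",
)
print(prompt)  # the formatted prompt can then be submitted to any LLM interface
```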
In addition to generating MCQs, two papers discussed the creation of the Script Concordance Test (SCT) using Large Language Models (LLMs) [14, 15]. The SCT is designed to refine clinical reasoning and decision-making in uncertain clinical situations. Developing an SCT is both challenging and complex; therefore, employing GenAI tools such as LLMs can help expedite the development process [15]. Hudon et al. (2024) examined the application of ChatGPT-generated SCTs in psychiatry for undergraduate medical education and demonstrated no significant difference between ChatGPT-generated SCTs and those created by experts in terms of the scenario, clinical questions, and expert opinions. By specifying the target group, the focus of the question, the clinical problem, and the guidelines to follow, an SCT for psychiatric education can be developed with ease. This can be conveniently achieved using the “Script Concordance Test Generator,” a custom GPT designed for SCT generation [15].
Overall, there are concerns regarding inaccuracies, bias, and a lack of control over generated content [7, 13]. Moreover, using GenAI for simulation poses the risk of sharing sensitive or personal data, thereby raising security and privacy issues [7]. ChatGPT may also display grammatical errors in certain languages, exhibit biases against minorities, produce hallucinations, lack replicability, possess limited awareness of recent events, and may eventually be placed behind a paywall, leading to inequality [7].
Moreover, GenAI-generated illness scripts for psychiatric disorders received the highest number of “C” ratings, comprising 45.5% of psychiatric scripts [16]. These scripts presented generic information, such as “diagnosis based primarily on clinical interview and symptom criteria”, instead of outlining the specific steps involved [16]. This issue may arise from the limited character count imposed on each script; allowing a greater character count could enable more detail to be covered, which is particularly important given the wide variety of psychiatric symptoms.
Another limitation is that the SCTs generated by GenAI were too simple [14]. Well-designed, more complex prompts can improve the quality of SCTs, and subject matter experts can make minor adjustments. Appropriate guidelines are still needed when leveraging GenAI, as the generated content may not meet the standard required for educational use. For example, “none of the above” is discouraged by test development guidelines for MCQs, yet it was included as an option in one of the generated questions [13].
Despite the limited number of papers, GenAI has demonstrated its potential role in psychiatric education. While the role of GenAI is extensively discussed in other specialities and for clinical applications, there is minimal analysis regarding its use in psychiatric education. The intricate nature of psychiatry may be one factor contributing to the lack of exploration into the role of GenAI in this field [19]. In this section, we will utilise an established psychiatric education framework to analyse the applicability of GenAI’s role in psychiatric education.
This review presents favourable evidence that GenAI tools such as ChatGPT support psychiatric education through case-based learning, simulation, content synthesis, and assessment. To explore the potential of GenAI in psychiatric education, we compare our findings with the new training framework established by the World Psychiatric Association-Asian Journal of Psychiatry Commission in response to the evolving landscape of psychiatry. This framework is based on the Canadian Medical Education Directives for Specialists (CanMEDS) developed by the Royal College of Physicians and Surgeons of Canada [10]. CanMEDS is applicable to various disciplines, including psychiatric education [20]. It comprises seven competencies: communicator, collaborator, leader, health advocate, scholar, professional, and medical expert (see Fig. 2). The World Psychiatric Association-Asian Journal of Psychiatry Commission paper discussed the specific requirements and recommendations necessary to achieve these seven competencies given psychiatry’s increasingly complex and uncertain nature (see Table 2). Therefore, by analysing how GenAI can contribute to this framework, we can provide improved solutions for delivering quality psychiatric education that meets modern needs. In the following sections, we explore how the use of GenAI in case-based learning, simulation, content synthesis, and assessment contributes to shaping the seven competencies of CanMEDS.
CanMEDS Diagram designed by the Royal College of Physicians and Surgeons of Canada [10]
Generative AI opens up opportunities for creating various case vignettes. The effectiveness of case-based learning in psychiatric training has been highlighted in previous research [21]. An efficient method for generating case vignettes can contribute to the development of roles such as medical expert, communicator, collaborator, leader, scholar, and professional.
Moreover, case vignettes can be adapted to teach the use of diagnostic tools and criteria while practising within legal frameworks and safety protocols. Most importantly, the ability to vary case vignettes can train medical students to handle situations involving uncertainty or dilemmas. This is crucial in today’s world, where mental health issues are complicated by contemporary factors such as social media influences and non-evidence-based self-diagnostic tools found online [22]. In this context, GenAI can potentially support the role of a medical expert. Understanding how to respond to cases involving dilemmas also enhances the roles of communicator, leader, and professional.
Furthermore, GenAI can be prompted to create cases that necessitate interdisciplinary collaboration (e.g., a combination of mental and physical illnesses or a scenario where the involvement of a financial adviser or social worker is essential). This fosters medical students’ development of collaborative skills. During discussions of the cases, medical students can explore how to integrate evidence-based knowledge alongside patients’ values and preferences. This competency is expected of a scholar.
In addition to case-based learning, applying what students learn in practice is essential in psychiatry. Compared to other specialities, communication in psychiatry must be especially precise and sensitive to patients. By having GenAI simulate patient dialogues, students can practise the communication frameworks they have learnt and apply these techniques flexibly, such as motivational interviewing for addiction. This aligns with the roles of medical expert and communicator. Students can also learn to maintain professional boundaries by carefully selecting their words during conversations with the simulator, aiding their development into professionals.
Through simulation, students can better convey information regarding treatments, apply diagnostic assessments, and gather comprehensive patient histories. However, Dave (2012) highlighted several concerning limitations associated with implementing simulated patients in psychiatric education. Given that mental illnesses are often complicated to understand, it is challenging to train simulated patients to accurately portray the complexities of psychiatric conditions [23]. Additionally, actors may introduce and act upon their prejudices towards mental illness. Cost is also a significant concern. Interestingly, some studies discuss the use of GenAI-based 2D or 3D avatars to enhance patient encounters in other specialities [24]. GenAI-based simulators could assist in overcoming these challenges, provided there are no inherent biases or paywalls. Further research into their application in psychiatric education is warranted.
GenAI can also synthesise illness scripts that help students grasp essential information about various diseases, encouraging them to embody the roles of medical expert and scholar. However, given the complexities of psychiatric illnesses, further studies are necessary to enhance the quality and examine the effectiveness of GenAI-created illness scripts in psychiatric education.
Furthermore, GenAI can promote lifelong learning, provided that graduated healthcare workers retain free access to it and no paywalls are introduced in the future, allowing them to obtain information as and when it becomes available. However, as highlighted in the Results section, GenAI can be inaccurate, and users must remain cautious when relying on it.
In the results section, we discussed using GenAI to generate MCQs and SCTs. These can either serve as summative assessments or as self-quizzes for students to prepare for an assessment.
Most medical education exams, at least in Singapore, are conducted as MCQs. Thus, students can practise applying knowledge by generating and completing MCQs for self-preparation. However, this assumes that GenAI tools capable of generating quality MCQs are not placed behind paywalls; otherwise, access could widen inequality between income groups.
The use of SCTs in psychiatry has been studied, and the feasibility of evaluating clinical reasoning has been shown [25]. SCTs can be adapted to assess whether students fulfil the roles of CanMEDS. They have the potential to assess psychiatry clinical competencies, such as understanding diagnostic frameworks and clinical assessment tools, dealing with uncertainty, and practising evidence-based medicine. These competencies are required of a medical expert, leader, and professional.
In both MCQs and SCTs, GenAI can easily generate a diverse range of questions. These questions can incorporate the socio-economic or racial backgrounds of the patient, allowing for the assessment of the student’s objectivity and training them to remain non-judgemental. This could enhance the student’s role as a health advocate, helping to reduce stigma for patients, particularly those from minority groups.
The different applications may also work in combination: for example, GenAI-created case vignettes can be used as prompts to generate video simulations, and GenAI can assess students’ answers to GenAI-created questions. In addition to the four applications discussed in this study, other applications can be explored, such as using GenAI to translate content into various languages, breaking down language barriers and promoting access to psychiatric education resources in more countries, thereby contributing to global mental healthcare [1].
However, implementing GenAI in psychiatric education may present some challenges. Educators might hesitate to adopt GenAI since the current approach remains traditional and predominantly face-to-face. They may worry about the potential loss of warmth, empathy, and personal interactions from using GenAI. Many educators and clinicians are not yet trained to use GenAI tools for psychiatric education. There may also be scepticism about whether GenAI can enhance traditional case-based discussions, psychotherapy training, or diagnostic reasoning exercises.
GenAI can produce hallucinations – the generation of factually inaccurate information [26]. AI hallucinations occur when AI creates seemingly realistic but entirely fabricated content that may be illogical or incorrect [27]. Reasons for AI hallucinations include insufficient diversity in training data and biases rooted in certain background traits. GenAI-generated content, such as illness scripts, may lack accuracy [1]. Students who study these illness scripts without expert revision risk grasping incorrect medical concepts, which could lead to poor medical decisions in the future. Similarly, Coşkun et al. (2024) highlighted that inaccurate information was identified in GenAI-generated clinical vignettes and MCQs, posing the risk of disseminating incorrect information to students [13].
As GenAI heavily relies on training data to generate outputs, assessment questions and vignettes produced by ChatGPT may follow a predictable pattern. This might result in a limited variety of exam questions, failing to encapsulate the sophistication of psychiatric education [28].
Privacy is a significant concern with GenAI. In the context of this study, however, GenAI can mitigate patient-privacy issues by removing the need for real case vignettes in case-based learning and SCT generation. Nevertheless, there is a risk of question banks being leaked to medical students: the outputs of GenAI (e.g., generated examination questions) may be stored in the AI system, raising the possibility of the questions being leaked to students using the same system [29].
While GenAI presents several risks, including ethical concerns and inaccuracies, these issues can be effectively managed through specific recommendations. Further research should focus on establishing clearer guidelines for GenAI usage in psychiatric education founded on ethical principles. Additionally, the potential for bias in GenAI could be alleviated by training it with more comprehensive datasets. The data utilised must adhere to data protection laws. Furthermore, experts should conduct a manual review to evaluate the accuracy and relevance of GenAI-generated content. Technologies such as Federated Learning and Blockchain can be explored as potential solutions to the issue of question leaks in psychiatric education assessments.
Here, we analyse the quality of the included studies and identify the strengths and limitations of each:
The study by Smith et al. (2023) examined the various applications of ChatGPT in depth; however, it lacked a definitive methodology for assessing its effectiveness in social psychiatry.
The study by Coşkun et al. (2024) is a randomised controlled experiment that employs strong methodology and psychometric evidence to justify ChatGPT’s potential in generating clinical vignettes and MCQs for assessment. However, this study does not directly address psychiatric education.
Kıyak et al. (2024) examined various types of GenAI beyond ChatGPT and proposed specific prompts for generating SCT items, thereby justifying the potential to streamline the creation of complex educational materials. However, this study did not assess the effectiveness of GenAI-generated SCTs in improving psychiatric educational outcomes.
Hudon et al. (2024) employed a methodology designed to avoid biased results, with a considerable number of clinician-educators and resident doctors evaluating the effectiveness of ChatGPT in psychiatric education. Future studies may consider adopting their framework to assess the effectiveness of GenAI in psychiatric education in other ways.
Yanagita et al. (2024) analysed a considerable number of ChatGPT-generated illness scripts. However, only three physicians reviewed the quality of these illness scripts, amplifying the issue of subjectivity.
Considering the limitations of existing studies, future research could employ more quantitative measures to assess GenAI’s effect on student outcomes, explore different types of language models, and involve larger sample sizes.
The risk of publication bias in selecting articles was minimised during the screening process through independent assessments and third-party opinions. However, the limited literature search yielded only two studies that directly addressed psychiatric education, while the remaining three studies focused more generally on medical education, leading to generalisations from medical education to psychiatry. The small number of relevant studies restricts the generalisability of our findings and discussion. The reviewed papers did not permit quantitative analysis and were not comparable. The absence of a quantitative, comparative analysis for drawing conclusions is a limitation of our study. Nonetheless, this underscores the need for further research in this area.
To our knowledge, no prior review has discussed the use of GenAI in psychiatric education. Prior studies mainly focused on the use of GenAI in clinical psychiatry or in medical education in general but did not discuss its suitability for psychiatric education.
There are several reasons why psychiatric education has received less attention regarding the incorporation of GenAI. Firstly, the skills a psychiatrist must acquire are highly humanistic, emphasising the doctor-patient relationship [8]. Employing GenAI, essentially a non-human entity, to teach psychiatry is an unusual approach at first glance, thus making it a topic that is rarely discussed. In contrast, for other specialities such as radiology, GenAI can directly assist with technical skills important in the field—such as generating images of pathological findings (for example, x-ray imaging and skin lesions) as training materials [30]. This is not applicable in psychiatry, as the diagnosis and management of mental health disorders can be subjective and cannot be easily determined by observing images.
Secondly, since soft skills are the core competencies required of a psychiatrist, it is essential to evaluate students’ performance based on these skills. GenAI may not accurately assess this, as it fundamentally lacks a deep understanding of empathy and emotional states [1]. However, in other fields, GenAI can be effectively utilised to assess performance and provide appropriate feedback. For instance, the OpenAI GPT-4 Turbo API could review revisions of radiology reports made by trainees and generate relevant educational feedback [31].
Our scoping review showed that Generative AI has potential in psychiatric education. GenAI can complement traditional pedagogies, gearing psychiatric education toward achieving the goals of CanMEDS. This suggests that GenAI can cater to the unique nature of psychiatry.
Nevertheless, this area remains largely unexplored. Limitations such as content accuracy, privacy, and ethical concerns must be addressed, and further research and safeguards should be established before implementing GenAI. Future studies should address these limitations, propose mitigating strategies, and evaluate GenAI’s effectiveness on educational outcomes and how such outcomes translate into students’ performance in clinical practice. There is a need to engage more researchers in studying the use of GenAI in psychiatric education, to discover more effective methods suited to the nature of psychiatry, and to encourage educators to be more receptive to participating in the research and implementation of GenAI in psychiatric education. Comprehensive cost-benefit analyses of implementation should also be conducted, weighing benefits (e.g., student outcomes and educator efficiency) against costs (e.g., the expense of integrating GenAI and of addressing potential ethical concerns). With further studies, potential breakthroughs in psychiatric education may be realised.
No datasets were generated or analysed during the current study.
GenAI: Generative Artificial Intelligence
CanMEDS: The Canadian Medical Education Directives for Specialists
SCT: Script Concordance Test
MCQ: Multiple Choice Question
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Lee, Q.Y., Chen, M., Ong, C.W. et al. The role of generative artificial intelligence in psychiatric education– a scoping review. BMC Med Educ 25, 438 (2025). https://doi.org/10.1186/s12909-025-07026-9