
Consensus-based development and practice testing of a generic quality indicator set for parenteral medication administration at home: a RAND appropriateness method study

    Due to nursing shortages, an ageing population and increasing care demand, there is growing interest in parenteral medication administration at home (PMAaH), that is, the administration of parenteral medication in the patient’s home. The operational design of such PMAaH care pathways is complex, resulting in many variations in how they are adopted and showing a need for a quality framework. Although quality indicators (QIs) have been proposed to monitor the quality of specific care pathways, a generic quality framework for all types of PMAaH is lacking. Therefore, this study proposes a generic QI set for PMAaH, comprising structure and process QIs, to benchmark and redesign PMAaH care pathways and ensure high quality.

    A generic QI set was developed for PMAaH using a systematic RAND appropriateness method, adapted in the third phase. This method consisted of a scoping review to identify indicators, an expert panel rating phase comprising an online questionnaire and a subsequent panel meeting to assess the appropriateness of the indicators, and retrospective practice testing to evaluate the feasibility, clarity and measurability of the indicators. After the practice testing, which consisted of an online questionnaire in which experts could indicate the implementation state of all indicators in their hospital, a third expert panel adjusted the set to increase the likelihood of implementation in practice.

    The experts, all healthcare professionals involved in PMAaH processes, were recruited using the snowball sampling technique from three large Dutch teaching hospitals. Subsequently, practice testing by self-assessment was conducted in seven large Dutch teaching hospitals.

    Seventeen healthcare professionals with diverse backgrounds participated in the online questionnaire and seven in the panel meeting.

    The scoping review resulted in 36 QIs for PMAaH. After two expert panel rating rounds (online questionnaire and panel meeting), two indicators were removed: a QI related to travel distance policy, which was considered irrelevant and redundant, and a QI stating that a clinician should take the lead in a PMAaH team, which was deemed too restrictive. After the practice testing, two further QIs were removed: a QI related to clinical response documentation, which was unclear to the practice testing respondents and already covered by other QIs, and a QI related to survival documentation, which the third expert panel deemed infeasible and undesirable to measure differently for these patients than for other patients.

    The final set consists of 32 indicators (15 structure indicators and 17 process indicators). The final set predominantly includes QIs aimed at patient safety, but also QIs focusing on the working conditions of healthcare workers. Currently, 17.6% of the QIs are fully implemented across the seven hospitals. The practice testing revealed that operational QIs are more frequently implemented in practice than systemic QIs and that a structured quality assurance programme is needed in the hospitals.

    This study proposes a generic quality set for PMAaH that hospitals can use to redesign and benchmark PMAaH care pathways to assure high quality. The practice testing confirmed that there is a need for this structured quality set.



    This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


    In recent years, healthcare providers have increasingly shown interest in value-based care initiatives that move care from in-hospital settings to settings closer to or in patients’ homes, such as hospital at home1 and outpatient parenteral antimicrobial therapy (OPAT).2 This shift is further fuelled by, among other factors, the COVID-19 pandemic, nursing shortages and an ageing population with increasing healthcare demands, all of which create a need to use hospital capacity more efficiently.

    One of the opportunities to reduce in-hospital care is parenteral medication administration at home (PMAaH). PMAaH is the home-based administration of intravenous, intramuscular and subcutaneous medication. Besides freeing hospital capacity, it has also been shown to increase patient satisfaction3 4 and to be cost-effective compared with inpatient medication administration4 5 for a range of patient types. In the Netherlands, many hospitals have adopted PMAaH pathways, the most common being for patients with infections requiring antibiotic infusion treatment and oncology patients requiring immunotherapy. During this treatment at home, the hospital’s medical specialist remains responsible for the patient, but the administration at home can be executed by hospital-employed nurses or by third parties, such as home healthcare organisations.

    The organisation of PMAaH is inherently complex and multidisciplinary. Multiple healthcare professionals, such as physicians, nurses, pharmacists and pharmacy technicians, are involved in various stages of the care pathway, including enrolment of patients in the programme, medication preparation, administration, follow-up and discontinuation. Since PMAaH pathways were historically disease-specific or drug-specific (eg, OPAT), many variations in PMAaH pathway applications have emerged, even within a single hospital or region. For example, a multimorbid patient may receive parenteral medication administered at home by a hospital nurse, who brings the medication from the hospital pharmacy, as well as by a home-healthcare nurse after the medication has been delivered to the patient’s home by commercial parties. These variations in PMAaH pathways may lead to unwanted variations in the quality of the provided care.

    Given the complexity of PMAaH pathways and the variations in their application, the measurement and benchmarking of a generic set of quality indicators (QIs) can provide insights into the quality and safety of care processes, as seen in mental health6 and palliative care.7 These QIs evaluate healthcare quality in terms of the characteristics and context in which care is provided (ie, structure QIs) and the processes and actions taken in the care pathway (ie, process QIs).8 Both structure and process QIs provide insight into differences in medication administration practices, allow for monitoring and evaluation of adherence to standardised protocols, guidelines and best practices, and enhance collaboration between healthcare providers.9 These QIs are therefore intended to measure to what extent care processes are implemented in line with current professional standards, to guarantee the quality of care processes9–11 and to guide improvements in PMAaH pathways that in turn improve patient health outcomes.

    Several QI sets have been proposed and evaluated for subsets of PMAaH. For example, Berrevoets et al2 and March-López et al12 developed and tested a set of QIs for assessing the appropriateness of OPAT care at home. Furthermore, Kirwin et al13 and King et al14 developed process indicators for the activities of pharmacists. However, these sets were all developed for a specific care pathway related to a particular patient group or medication type (eg, antibiotics) or to a specific profession, whereas PMAaH in practice typically encompasses a variety of such applications. Moreover, no generic set of QIs for PMAaH is currently available. In this study, we therefore aim to develop and pilot test a generic set of structure and process QIs to be used by hospitals in the (re)design and quality evaluation of PMAaH pathways. This set is intended to complement existing quality frameworks for parenteral medication in the hospital setting: it offers a framework for the outpatient setting that does not address safety concerns related to specific medications but is generic for all parenteral medication administered at home.

    We used the systematic RAND appropriateness method (RAM) to develop a set of generic QIs for PMAaH pathways,15 16 which we slightly adapted to the context of process and structure QIs. The RAM combines the commonly used Delphi method, which focuses on independent judgements of experts using questionnaires, with focus groups, in which an expert panel discusses their judgements.17 18 We chose this method to ensure that the QIs would be appropriate for the PMAaH setting, since it promotes consensus finding, which is especially important in a multidisciplinary setting such as PMAaH pathways.

    Figure 1 visualises the methodology in three phases, adapted from Fitch et al.15 During the first phase, we collected relevant articles on quality assurance of PMAaH pathways through a scoping review. In the second phase, we reached consensus among an expert panel, starting with an online questionnaire without interaction, followed by panel meeting discussions. If no consensus was reached in the questionnaire round, the QIs were discussed in a panel meeting with various healthcare professionals involved in PMAaH. During the last phase, multidisciplinary teams retrospectively assessed to what extent the acquired QIs were already implemented in practice, by scoring their hospital’s policies on the indicators in a practice testing. The methodology of this study follows the RAM as proposed by Fitch et al,15 except for the last phase. Fitch et al15 propose a retrospective or prospective evaluation of the outcomes using clinical records or clinical decision aids. Since process and structure QIs were developed in this study, we aimed to retrospectively assess the extent of implementation of these QIs in the last phase, rather than their appropriateness using clinical records.

    The study ran between February 2023 and March 2024. The second phase (expert panel ratings) was carried out in three large Dutch teaching hospitals and the last phase (practice testing) in seven Dutch teaching hospitals. The three initial hospitals were selected because they all have multiple PMAaH pathways applied in practice and therefore benefit from a generic QI set to redesign their pathways and evaluate their quality. Additionally, these hospitals are all members of the mProve network, creating the opportunity to test this set in seven similar hospitals during the third phase. Ethics approval was provided on 24 April 2023 by the University of Twente-BMS Domain Humanities and Social Sciences ethics board, registration number 230125. Reporting of this study followed the ACcurate COnsensus Reporting Document (ACCORD) guideline.19 Since this study focuses on structure and process indicators, and not outcome indicators, there was no patient or public involvement. The study was not registered prospectively.

    During the first step, we conducted a scoping review using the search strategy provided in online supplemental appendix A to identify QIs to be used by hospitals in the (re)design and quality evaluation of PMAaH pathways. Two reviewers (RH and JGM) reviewed the literature independently, selecting eligible papers based on title and abstract. We included Dutch and English peer-reviewed studies that concerned PMA in the outpatient setting and that reported on the organisation of healthcare (ie, the structure and processes involved in the delivery of healthcare). We excluded studies that specifically considered paediatric care and/or parenteral nutrition, since these are often only offered by tertiary centres and require complex, additional QIs. We also excluded studies on the cost-effectiveness of drugs or on drug effectiveness comparisons, since these do not primarily focus on quality assurance, and studies that were not available in full text.

    From the included studies, one reviewer (RH) extracted QIs, after which the second reviewer (JGM) checked these QIs. A QI is defined as a component or aspect of the structure of a healthcare system, or of the process or outcomes of care, which has a bearing on the quality of care.9 QIs were included if they focused on parenteral medication administration close to home (eg, in an infusion centre or haemodialysis centre) or at the patient’s home and concerned the organisation of healthcare (ie, the structure and processes involved in the delivery of healthcare). QIs were excluded if they considered inpatient treatment only, did not relate to the outpatient setting or focused solely on paediatric care. Structure QIs were defined as attributes of the settings in which care occurs (overall policies regarding the organisation), and process QIs as attributes of what is actually done in giving and receiving care (operational activities in the patient setting).8 Note that this generic QI set is intended to be applicable to all PMAaH. Therefore, it should be combined with protocols specific to medication types, which are unrelated to the administration setting. As a result, the inclusion criteria did not focus on specific medication or treatment types.

    After obtaining ethics approval, the expert panel rating of the QIs proceeded with a round without interaction (an online questionnaire) and a panel meeting (2a and 2b in figure 1).

    Expert panel without interaction, online questionnaire

    We invited a heterogeneous expert panel from three large Dutch teaching hospitals using the snowball sampling technique,20 which resulted in 35 people receiving the questionnaire. These experts were included based on the following criteria: (1) having at least 1 year of experience with PMAaH and (2) holding one of the following positions: nurse, medical specialist (in particular, oncologists and infectious disease physicians), clinical microbiologist, pharmacist, administrative officer, project leader or manager. Since most of the PMAaH pathways already applied in the three hospitals related to oncology and infectious disease patients, there was an emphasis on oncologists and infectious disease physicians, but other medical specialists involved in other PMAaH pathways at their hospital were invited as well. First, experts involved in the PMAaH projects were invited. If they declined or did not respond, their colleagues were asked. Experts were excluded if they declined to provide informed consent. The experts were asked to participate in the questionnaire via e-mail, received a reminder from a pharmacist affiliated with their own hospital and provided informed consent prior to participation.

    The online questionnaire included a question for each QI, formulated as ‘How appropriate do you score the following (sub-)indicator?’, scored on a 9-point Likert scale (1=‘inappropriate’ to 9=‘appropriate’). This scale is commonly used in the RAM15 and was chosen to measure (dis)agreement within the expert panel and to assess the appropriateness (inappropriate, uncertain or appropriate) of the QI. Some QIs consisted of subindicators, such as a QI consisting of multiple eligibility criteria. In this case, the experts were asked to score both the QI as a whole and all sub-QIs individually. The questionnaire included both English and Dutch translations of the QIs. The primary language of the second and third rounds was Dutch. Both the English and Dutch final QI sets were checked by native speakers. The Dutch translation is given in online supplemental appendix B. Based on the questionnaire results, we labelled each QI as appropriate, uncertain or inappropriate, based on the median score and the presence of disagreement.

    We defined disagreement using the interpercentile range adjusted for symmetry (IPRAS) classification, which is commonly used in web-based RAND/UCLA appropriateness method panels.21 The IPRAS is calculated as IPRAS = IPRr + (AI · CFA), where IPRr is the interpercentile range when perfect symmetry exists, AI is the asymmetry index and CFA is the correction factor for asymmetry. If the IPR (the interpercentile range between the 30th and 70th percentiles) of a QI is larger than its IPRAS, the QI is rated with disagreement. Consistent with Fitch et al,15 we use IPRr=2.35 and CFA=1.5. Online supplemental appendix C gives an example of the calculation of the IPRAS classification.
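    As an illustration, the sketch below applies this disagreement rule to a set of 1–9 panel ratings. It is not the authors’ code: the asymmetry index is assumed here to be AI = |5 − IPRCP|, with IPRCP the central point of the interpercentile range, as in the RAND/UCLA manual; the worked example in online supplemental appendix C remains the authoritative reference.

```python
# Minimal sketch (not the authors' code) of the disagreement rule described above,
# assuming AI = |5 - IPRCP|, where IPRCP is the central point of the IPR.
import numpy as np

def rated_with_disagreement(scores, ipr_r=2.35, cfa=1.5):
    """Return True if a QI's 1-9 panel ratings show disagreement (IPR > IPRAS)."""
    p30, p70 = np.percentile(scores, [30, 70])
    ipr = p70 - p30               # interpercentile range (30th-70th percentile)
    iprcp = (p30 + p70) / 2       # central point of the IPR
    ai = abs(5 - iprcp)           # asymmetry index (assumed definition)
    ipras = ipr_r + ai * cfa      # IPRAS = IPRr + (AI x CFA)
    return ipr > ipras

# Example: a split panel (many low and many high ratings) triggers disagreement.
print(rated_with_disagreement([1, 2, 2, 3, 7, 8, 8, 9]))  # True
print(rated_with_disagreement([7, 7, 8, 8, 8, 9, 9, 9]))  # False
```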

    Expert panel meeting

    In Step 2b, an online, multihospital panel meeting was held to discuss the QIs labelled uncertain after the questionnaire round; the results were presented at the group level to guarantee anonymity. For this panel meeting, 13 experts were invited, including nurses who execute PMAaH, coordinators of outpatient care, medical specialists (eg, infectious diseases specialists and oncologists) and pharmacists.

    In this panel meeting, the QIs labelled as uncertain were actively discussed until consensus about appropriateness was reached. To achieve consensus, all participants were actively asked for their opinion on each QI. For each uncertain QI, the panel then decided to relabel the QI as appropriate or inappropriate, or to reformulate it. In addition, the group was given the opportunity to add new QIs and to discuss any QIs that had already reached consensus after the questionnaire. A note taker (AGL) was present in addition to the discussion leader (JLV); the note taker complemented the discussion leader and kept track of any relevant non-verbal communication, which is advised for larger focus groups.22 The note taker and discussion leader were not involved in the decisions.

    Directed qualitative content analysis23 was used to analyse the focus group discussion, registering both the final decision on each QI and the reason for this decision. To the authors’ knowledge, no coding scheme for the analysis of a focus group discussion within the RAM existed yet. Therefore, we developed a coding scheme (see online supplemental appendix D). This coding scheme was designed deductively by the authors, based on the various responses and motivations that each category could have. After this, the coding scheme was updated inductively through the analysis of the focus group. We also included the label uncertain in this scheme, since it is possible that consensus is not yet reached in the panel meeting discussion.

    The result of this panel meeting was a finalised QI set, which was sent to all participants of the panel meeting to check for any ambiguities.

    The finalised QI set was sent to seven large Dutch teaching hospitals, where the hospital pharmacist involved in PMAaH was asked to score the organisation of PMAaH in their hospital on the QIs.

    The aim of this practice testing was to understand whether it is possible to implement the QIs acquired from the expert panel in practice. Therefore, we retrospectively established whether the acquired QIs were already implemented in practice. Since this phase was modified from the proposed RAM, we defined three aspects of the likelihood of implementation of a QI: feasibility, clarity and measurability.24

    This scoring was performed in a multidisciplinary setting, including the relevant experts in each individual hospital, based on self-assessment. The hospital could score each QI as implemented, partially implemented or not implemented. An implemented QI is defined as one for which the established practice and policies of the individual hospital align with the QI. If a QI was scored partially implemented, the hospital was asked to explain what was still missing, and hospitals could add remarks if a QI was unclear or difficult to score. The goal of the practice testing was to evaluate whether each QI was sufficiently clear and measurable for a multidisciplinary team to score their policies and processes on it, and sufficiently feasible for a team to implement it in their hospital. If a QI was scored by all hospitals without comments, the QI was considered feasible, clear and measurable. If hospitals were unable to score a QI and/or the remarks signalled that it was not feasible to implement the QI in practice, the QI was labelled infeasible, unclear or unmeasurable. This labelling was done by a third expert panel, consisting of three hospital pharmacists from the three initially participating hospitals. Together, this third expert panel also adapted the QIs accordingly (ie, such that the QI became clearer, more measurable or more feasible). This phase led to a second finalised QI set that, from the perspective of hospital pharmacists, can be implemented in practice.
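    The sketch below illustrates this classification rule with hypothetical data; the field names and example responses are illustrative assumptions, not the authors’ instrument, and the rule is simplified (any remark or unscored QI is flagged for review).

```python
# Minimal sketch (not the authors' instrument) of the practice-testing
# classification rule described above; field names and data are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HospitalResponse:
    # 'implemented', 'partially implemented', 'not implemented', or None if unable to score
    score: Optional[str]
    remark: Optional[str] = None  # free-text remark, eg, when a QI is unclear

def label_qi(responses):
    """Label one QI based on the self-assessments of all participating hospitals."""
    scored_by_all = all(r.score is not None for r in responses)
    no_remarks = all(r.remark is None for r in responses)
    if scored_by_all and no_remarks:
        return "feasible, clear and measurable"
    # Simplification: any unscored QI or remark is flagged for review and
    # possible adaptation by the third expert panel.
    return "infeasible, unclear or unmeasurable - review by third expert panel"

# Hypothetical usage: seven hospitals score one QI.
responses = [HospitalResponse("implemented") for _ in range(5)] + [
    HospitalResponse("partially implemented", remark="only arranged per care pathway"),
    HospitalResponse(None, remark="unclear how to score this"),
]
print(label_qi(responses))
```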

    This section presents the QIs identified by the scoping review, the modification of the QIs by the expert panel rating phase and the final QI set of PMAaH after the retrospective practice testing.

    Figure 2 shows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow chart of the scoping review. After screening of 242 titles and abstracts, and full-text screening of 27 articles for eligibility, all 27 articles were included for the extraction of the QIs.

    Based on these 27 articles, 36 QIs, of which 16 were structure QIs and 20 were process QIs, were extracted. Online supplemental appendix E shows all QIs and their point of origin.

    Figure 3 visualises the numbers of structure and process QIs throughout all phases, and table 1 presents the professions of the invited and participating experts in the online questionnaire (no interaction) and panel meeting. We refer to online supplemental appendix B for the detailed changes in the QIs per phase.

    Table 1

    Expert engagement in the online questionnaire and panel meeting

    Expert panel without interaction, online questionnaire

    Of the 35 invited experts, 17 completed the online questionnaire entirely after one reminder (48.6% response rate), scoring the appropriateness of the 36 PMAaH QIs identified in the literature. The expert panel consisted of oncologists (n=3), infectious diseases physicians (n=3), a medical microbiologist (n=1), hospital pharmacists (n=4), nurses (n=3, one of them familiar with out-of-hospital parenteral medication administration), a manager (n=1), an administrative employee (n=1) and a project manager (n=1), all involved in the PMAaH process across the three hospitals.

    Based on the results of the questionnaire, all proposed QIs were assessed as appropriate, except for two QIs and one sub-QI. The median, IPR, IPRAS and final label of all QIs are shown in online supplemental appendix F. Process QI LP1, which states that a clinician must take leadership of a PMAaH team, had a median score of 5. The process sub-QI LP8e, which states that sterile techniques must be assessed for patient inclusion, had a median score of 6. Therefore, both were labelled uncertain and discussed in the expert panel meeting. Structure QI LS16, which states that there must be a policy regarding the travel distance from patients to the hospital, had a median score of 4, and its IPR was larger than its IPRAS, showing disagreement within the expert panel. Therefore, this QI was also labelled uncertain and was discussed in the expert panel meeting.

    Expert panel meeting

    Seven experts participated in the panel meeting: a medical specialist (infectious diseases physician), a hospital pharmacist, a nurse who administers medication in the home setting and four coordinators of outpatient care (a transfer nurse, a manager and two project managers), with at least one expert representing each of the three hospitals. The other invited experts did not participate due to unavailability. These experts were a subset of the first-round expert panel, except for a project manager and a coordinator of outpatient care, who were invited due to (partial) unavailability of the others from the expert panel.

    The three uncertain (sub-)QIs were discussed. Process QI LP1 (a clinician must take leadership of a PMAaH team) was considered inappropriate, since it was considered redundant (structure QI LS2 already states that a clinician must join the committee) and overly restrictive (a doctor should focus on providing care; being present is sufficient). Structure QI LS16 (policy regarding travel distance from patient to hospital) was also considered inappropriate, since it was considered irrelevant (from a logistical point of view) and redundant (from an adverse event point of view: process QI LP4 covers that). Lastly, process sub-QI LP8e (sterile techniques) was reformulated, since it was considered incorrect (the home situation is never a sterile environment) and inaccurate (it should be about a hygienic working environment, not about the techniques). At the end of the expert panel meeting, all experts agreed with the resulting QI set for PMAaH. Online supplemental appendix G presents the QI set resulting from the first two phases of the RAM.

    All seven mProve hospitals participated in the practice testing. Online supplemental appendix H shows the participants of the practice testing for each hospital. Note that, if a team did not know about the implementation of a QI themselves, relevant experts within the hospital were contacted; these contacts were not documented per expert. Figure 4 shows, for each QI, the number of hospitals that implemented the indicator, that partially implemented it, that did not (yet) implement it and that were unable to score it. Based on this practice testing, process QIs LP2 and LP19 were removed: QI LP2 appeared unclear, and for QI LP19 it was considered infeasible (and undesirable) to document this status in a way that deviated from the norm. Additionally, structure QIs LS1–3 and LS10 and process QIs LP6, LP8, LP9, LP10, LP12, LP14 and LP18 were textually modified to increase their feasibility, clarity and measurability. An example regarding feasibility is structure QI LS2, where it is not feasible to include an experienced physician for each medication administered at home, so it was updated to a physician experienced with parenteral medication at home. An example regarding clarity is process QI LP6, where two sub-QIs (a and h) were overlapping and one sub-QI did not make clear that the additional care is due to comorbidities. An example regarding measurability is structure QI LS1, where additional context was added on how the existence of a structured programme can be measured.

    This retrospective practice testing showed that only 17.6% of the QIs were fully implemented in six out of the seven participating hospitals (20.0% of the structure QIs and 15.8% of the process QIs), and only one structure QI could be identified as implemented in all hospitals. QIs not implemented by any of the hospitals were related to self-administration policies. QIs that were implemented by only a few hospitals concerned the availability of a structured programme, documentation, patient information or selection criteria for patients and locations. QIs that were implemented by at least five of the seven hospitals were related to patient inclusion, first dose and drug delivery device policies, urgent communication between stakeholders, the possibility to decline PMAaH, protocols for intravascular access systems and documentation.
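    For readers reconciling these percentages: assuming the 34 QIs that entered practice testing (15 structure and 19 process QIs, ie, the 36 extracted QIs minus the two removed in the expert panel rating phase), the reported figures correspond to whole numbers of QIs as follows (a reconstruction, not reported counts):

$$\frac{3}{15}=20.0\%,\qquad\frac{3}{19}\approx 15.8\%,\qquad\frac{3+3}{34}\approx 17.6\%.$$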

    Table 2 presents the final QI set, with updated numbering (without L).

    Table 2

    Final QI set for PMAaH with structure (S) and process (P) QIs

    We developed a generic set with 32 QIs for PMAaH using a slightly modified RAM. The practice testing showed that it is possible to monitor the quality of both procedural and structural aspects of PMAaH pathways and that there is a need for such a set, since many QIs agreed on by directly involved experts are not yet implemented in large teaching hospitals in the Netherlands. Our research shows that many QIs are only implemented on a care pathway level, leading to variations. This set could be used both for benchmarking and quality improvement. To the authors’ knowledge, this study is the first to present a coding scheme for the data analysis of a panel meeting within the RAM setting. Although the outcomes of a panel meeting within the RAM are often clear, this coding scheme adds the motivation behind these decisions, which deepens the discussion of the results.

    Our study has strengths and weaknesses throughout all phases of the RAM. In the scoping review, a highly relevant article by Berrevoets et al2 was only discovered via snowballing, which suggests that other studies might have been missed due to the search strings used. However, since only one of the QIs (final QI P12, regarding availability of rescue medication) was extracted from a single study, and all other QIs are based on multiple studies, we expect that no relevant QIs were missed.

    In the expert panel rating phase, both the Dutch and English translations were shown, but since Dutch was the primary language of the questionnaire and panel meeting, this could have had an impact on the experts’ assessments. We tried to minimise this impact through a grammar and spelling check by a native speaker for both sets. During the first expert panel rating round, several attendees warned that they perceived the questionnaire as too lengthy and time consuming. This could have led to response fatigue, with respondents being less motivated to assess the last QIs as effectively and critically as the first QIs.25 Our results do suggest that this might have played a role, since after process QI LP8 (regarding the patient and family information details), no QI was labelled uncertain or inappropriate, while during practice testing many of these indicators required adaptation. To overcome this in the future, we suggest presenting only the QIs as a whole, with a remark option for the included sub-QIs, to reduce the length of the questionnaire. We do not believe this response fatigue led to biased results in the end, since the practice testing ensured that all experts considered all QIs once more (besides the panel meeting), and experts were perceived to be more critical during practice testing than in the first round of stakeholder consensus. This showcases the importance of practice testing. The response fatigue also did not lead to inconsistencies between entire QIs and their sub-QIs, since no QIs were labelled appropriate while all their sub-QIs were labelled inappropriate, or vice versa.

    During the second expert panel rating round, time was a restricting resource, since it required multiple healthcare professionals from multiple hospitals to be available at the same time. Because of the multicentre approach, we chose a virtual panel meeting, since attendance is reported to be comparable to live meetings (see Halliday et al26). In the panel meeting, there was sufficient time to discuss all QIs labelled uncertain, ensuring that all experts were able to express their opinion and come to consensus, but limited time to individually discuss all remaining QIs. In addition, the developed coding scheme added value to the extraction of the results in this second round, and we propose the validation of this scheme as a subsequent research step. We believe that the fact that the scheme was not yet validated did not impact our results, since only three (sub-)QIs were discussed, which all fitted in the coding scheme. It is worth mentioning that other panel meetings within the RAM could come up with arguments that do not completely fit in this coding scheme.

    Most of the adaptations to the QIs resulted from the practice testing phase, which was not expected upfront. This indicates that it is hard to assess the feasibility of a QI until it is tested in practice. An example is structure QI S2, which stated that a PMAaH committee must include a physician experienced with each medication administered in the home situation, while it is not feasible to include a separate physician for each administered medication. Wollersheim et al27 showed that 10–20% of indicators were not measurable, which was only revealed during practice testing; this is in line with our study. While not all RAM studies include a practice testing phase at the end, our study shows that the set is updated when such a phase is included, which is in line with other literature that does include a practice testing phase.7 28 This result is important, not only because the finalised QI set is more feasible, clear and measurable and therefore more likely to be adopted in practice, but also because it shows that there is a need for further research on the impact of the systematic methodologies deployed before the practice testing, in relation to the impact of practice testing itself. These findings are in line with the literature, as Malmqvist et al29 showed that a pilot study in practice has the potential to increase the quality of the research. On the other hand, these practice testing results question the validity of the results of the first two phases. We believe that these phases still hold validity, since the QIs that were entirely removed after the practice testing were removed because their aim was already covered in standard medical practice or by other QIs. We propose further research on why it is hard for expert panels to assess the feasibility, clarity and measurability of a QI in the second phase of the RAM.

    The second and third phases of our research setup faced disengagement of invited experts, resulting in changing expert panel compositions. In the first round of the expert panel, all professions participated, whereas in the second round, due to unavailability and workload of the healthcare professionals, not all professions participated in the expert panel meeting, resulting in an under-representation of medical specialists in the second panel meeting, which could bias the outcomes. In the third phase, hospital pharmacists were responsible for executing the practice testing, and because of the added workload, we did not require specific experts to join the multidisciplinary teams. The practice testing could therefore have introduced a pharmaceutical bias in the outcomes.

    Because of the scope of our research design, not all aspects of PMAaH care pathways were considered. It is important to note that for specific medication types, specific QIs should be considered. For example, the safe handling of hazardous drugs should be considered for chemotherapy infusion treatments,30 while the organisation of laboratory results and stability testing should be considered for antibiotic infusion treatments.2 31 These aspects are already covered in treatment guidelines and only sometimes have to be slightly adjusted for an outpatient setting. Therefore, we believe that a generic set is feasible, combined with the existing guidelines. Note that in parenteral drug administration, incompatibilities and dilution are important concerns. In the context of this study in the Netherlands, nationwide standards are available on these topics, ensuring standardisation across organisations. We recommend that hospitals at least consider alignment on these issues within their own care network.

    Since the QI set is generic, it only includes structure and process indicators and no outcome indicators: outcome indicators encompass the effects on the patient population, can be specific to the disease or drug type and are therefore less relevant when developing generic PMAaH QIs. In this study, patients were not involved in the expert panel rating phase. This is often the case in the development of QI sets,32 and this specific set is primarily developed for quality improvement on a procedural and structural level. Therefore, the decision was made not to include patients, but only experts who are responsible for providing the optimal preconditions to deliver good quality of care. If outcome indicators were to be included in the set as well, we believe that patient involvement would be beneficial. In addition, when a treatment is relocated to the patient’s home, a hospital should also consider the quality of aspects unrelated to parenteral medication, such as the definition of responsibilities when a patient leaves the hospital.

    A close analysis of the QIs labelled inappropriate in the phases of the modified RAM provides insights into the development of a generic QI set. In the expert panel rating phase, two out of three QIs were labelled inappropriate because they both required something that might not be feasible in all situations (ie, LP1: a clinician takes leadership of the PMAaH team, and LS16: a policy regarding the travel distance from patients to the hospital). Since the expert panel concluded in the panel meeting that this would not affect the quality of care, these QIs were removed from the set. For the other QI, the discussion in the panel meeting showed that there were two different interpretations of the QI. A panel meeting gives the opportunity to discuss both interpretations, after which the panel decided that the original interpretation was not necessary or was already covered by another QI. It is possible that other QIs also allowed multiple interpretations, without the possibility of discussion in a panel meeting. For future RAM research, we therefore propose to include a motivation for each QI based on the provided references, to prevent this.

    The practice testing showed that the implementation of the QIs varies highly between hospitals: only one QI was fully implemented by all hospitals, and two QIs were not implemented at all. These latter QIs relate to self-administration, so it could be that this is not applicable for the surveyed hospitals. For future research, it is therefore recommended to add a ‘not applicable’ option in the practice testing, which was not included in this study. While recent studies show that self-administered OPAT is a feasible treatment option,33 34 the lack of self-administration options in hospitals could be related to negative financial incentives.35 During practice testing, experts often mentioned that some QIs were already arranged for by general protocols and/or treatment plans, although not specifically dedicated to administration in the home situation. Because of this, some QIs were removed after the practice testing (LP2 and LP19). Experts also remarked that many QIs were organised at the disease or medication type level, but not at a general level, which resulted in a partially met score on many QIs. Consequently, it is important to understand that a low percentage of full implementation does not directly imply unsafe care for patients.

    The QIs that are implemented in the majority of the hospitals are at the operational level (eg, competence of the caregiver who includes patients, policy on first dose administration and selection and removal of the drug delivery device), while strategic-level QIs (both process and structure; eg, a structured programme providing a framework for safe and effective care and a system of ongoing quality assurance) are implemented in only a few hospitals. This practice testing shows that this study, which focuses on providing a generic framework for safe and effective PMAaH, addresses this lack of, and need for, structured programmes.

    For the deployment of this QI set in practice, we have the following suggestions. Preferably, this set becomes the nationwide quality standard, with a committee assessing hospitals on these QIs. Since this is not yet the case in the Netherlands, deployment will consist of a self-assessment, in line with the practice testing. We propose that a self-assessment team consists of the professions discussed in QI 2: a physician, a pharmacist and a nurse, all experienced with PMAaH. The workload of this self-assessment, and of the subsequent organisational improvement, will vary greatly per hospital, depending on which systems are already in place to monitor PMAaH quality.

    This QI set is not only useful for hospitals that want to evaluate and/or improve the quality of their PMAaH care pathways, but also for organisations that collaborate with hospitals, for example, those responsible for the administration at patients’ homes. It is common for hospitals to collaborate with multiple home healthcare organisations and for home healthcare organisations to be contracted by multiple hospitals. Home healthcare organisations report challenges with monitoring their performance.36 The generic QI set presented in this study could also ensure standardised quality expectations for home healthcare organisations.

    For the seven included hospitals, participating in this research had practical implications for their organisation. All hospitals have expressed in the mProve working group that they are in the process of setting up, or have already set up, a formally established committee dedicated to PMAaH policy, as stated in QI S2. Once such a committee is established, it is in charge of implementing the other QIs in practice. For future applications of this QI set, we envision it becoming a nationwide set for hospitals, where the assessment of the implementation of these QIs is not only informed through self-assessment.

    For future research, there is a need to evaluate the mitigation measures that have been undertaken to meet the QIs, to design new interventions to further adhere to the QIs and to add a broader, international perspective. Our practice testing shows that interventions should predominantly focus on the development of a structured, systematic approach to the organisation of PMAaH, proper documentation, policies on patient information and selection criteria for patients and locations. For the evaluation of these interventions, the presented QI set is a useful tool for single-centre and multicentre studies. Since the research was executed within the context of large Dutch teaching hospitals, an international perspective could be added by an international panel meeting and/or additional practice testing in an international context. Furthermore, other hospital types, such as small regional hospitals or large academic hospital networks, could be involved to validate our QI set. Additionally, practice testing with home healthcare organisations and other organisations involved in the PMAaH process is important to provide a complete perspective on quality assurance of PMAaH processes. In the development and deployment of interventions based on this QI set, these organisations should also be actively involved.

    Lastly, since we adjusted the RAM for the development of a structure and process QI set, we recommend that future research considers these adaptations. First, we have proposed a panel meeting coding scheme (online supplemental appendix D), which provided us with a more structured approach for extracting the results from the second part of the second phase. Note that we have not validated this coding scheme, which we propose as a future research direction. Second, we adapted the third, retrospective phase to fit structural and procedural outcomes. This adaptation could add value to other RAM studies, but since our results show relatively many changes after this third phase, we recommend that other studies include a second questionnaire to validate the suggestions from the third expert panel. This could increase the validity of the results. Furthermore, we believe that this practice testing elevates the RAM in these types of studies, which in turn elevates the impact in practical settings.

    Data are available upon reasonable request.

    Not applicable.

    This study involves human participants. Ethics approval was provided on 24 April 2023 by the University of Twente-BMS Domain Humanities and Social Sciences ethics board, registration number 230125. Participants gave informed consent to participate in the study before taking part.

    The authors would like to thank the mProve working group medication@home for initiating, supporting and contributing to this research. We would also like to express our gratitude to Dr Alexia Schinagl for checking the final set on comprehensibility and proper use of the English language. Finally, we would like to thank all experts who participated in any of the consensus rounds and/or the practice testing for their valuable contribution.
