Key Points
Question: How are HaH programs using RPM, what quality metric practices exist, and what are the primary motivations and barriers to measurement?
Findings: All participating programs reported using remote vital sign monitoring, while a minority reported leveraging single-lead EKG and fall detection monitoring. We discovered wide variation in RPM quality metric monitoring, with institutions most often utilizing quality metric(s) related to operational and technical effectiveness. The most common motivation for RPM quality metric development was to understand the patient experience, whereas data fidelity concerns and the limited ‘out-of-box’ metrics of current devices were the most cited barriers to quality metric development.
Meaning: For HaH leaders and clinicians, our work provides a foundation from which to consider further development of RPM quality metrics. Ultimately, the development of RPM standards and patient-centric quality metrics is urgently needed to guide HaH and RPM leaders in implementing and advancing the use of this technology as a key pillar of high-quality, accessible HaH care.
Introduction
Hospital at home (HaH) care has grown considerably in recent years, catalyzed by the Centers for Medicare and Medicaid Services’ issuance of the Acute Hospital Care at Home (AHCaH) waiver and mounting evidence that appropriately risk-stratified acute care in the home can improve patient outcomes and experience while decreasing costs.1–5 A recent report highlighted that over 300 hospitals across 37 states were approved to deliver HaH care under the waiver, allowing over 5,000 Medicare patients to benefit.6 With growing interest from public and private payers, as well as increased acceptance by physicians and patients, demand for HaH is expected to increase. To help meet this demand while improving the efficiency and safety of acute care at home, health systems have started to adopt and implement new technologies ranging from remote patient monitoring (RPM) to in-home diagnostics to software that helps distributed teams communicate and coordinate care.7 In particular, RPM holds significant potential for addressing key challenges to scaling HaH care, including safely expanding to additional patient populations and improving staffing efficiencies.8
To date, RPM initiatives have largely focused on improving chronic disease management. Evaluations of remote patient monitoring for chronic disease management have shown cost-effectiveness, reduced acute care use for specific populations, and improved patient self-management and disease knowledge.9–11 RPM adoption in HaH care has been slower, limited in part by underdeveloped technology; however, the pace of adoption is increasing as RPM companies work to address key historical challenges for RPM within HaH, including limited Electronic Health Record (EHR) integration, form factor design limitations, and signal-to-noise challenges.12 As the technical effectiveness of RPM solutions advances, more focused evaluation of their clinical effectiveness is important to guide further development.13 To our knowledge, no studies have described the current use of RPM within HaH across institutions or sought to evaluate how the impact of these initiatives is currently measured. Furthermore, a recent research agenda for HaH crafted at the World Hospital at Home Congress highlighted that most research on HaH was conducted during a period when telehealth and RPM were nonexistent or in their infancy, and that research should prioritize developing standards for the use of technology in HaH as well as addressing key barriers.14
We sought to describe the use of RPM within HaH units, as well as the use of quality metrics to evaluate these RPM implementations. We characterized quality metric use according to the National Quality Forum’s (NQF) telehealth framework, which evaluates impact across four key domains: access to care, cost, experience, and effectiveness.15 Prior work investigating quality metric use in other technology-enabled care delivery models has revealed significant variation in the content of monitored quality metrics.16 Understanding which domains lack quality metrics or are “under-measured” could help stakeholders prioritize the development of new metrics and evaluate the impact of the growing use of RPM within HaH. Additionally, we captured HaH programs’ motivations for and barriers to quality metric development to inform future efforts in this space.
Methods
Study Design and Setting
This was a qualitative study of nine RPM programs for HaH. HaH is the provision of acute, hospital level care in a patient’s home. In the United States, HaH services are available through academic health systems (typically affiliated with medical schools), community health systems, as well as private for-profit companies in partnership with health systems. Patients are most often admitted to HaH from the emergency department (ED) or from the inpatient ward. Under the AHCaH waiver, patients admitted to HaH require at least 2 sets of vital signs per day. Often these sets of vitals are taken by a visiting nurse or community paramedic. However, increasingly HaH organizations are incorporating the use of RPM equipment, which can stay in the home during the acute home hospitalization to help collect these required sets as well as additional measurements as indicated. We interviewed HaH leaders from diverse health systems regarding their RPM quality monitoring initiatives from March 2023 to June 2023 [Table 1].
The study team included three practicing emergency physicians (DCW, KSZ, and JC) with virtual care experience, one of whom (JC) is an expert in remote patient monitoring for HaH. All study team members had training in qualitative research.
Participant Selection
Interview participants were identified using a convenience sample. Participants were required to be leaders at active HaH programs that had implemented remote patient monitoring. There were no other exclusion criteria. Potential participants were identified through relationships with members of the study team as well as a review of the HaH literature. The study team emailed invitations to potential participants and ultimately interviewed 16 individuals representing 9 institutions. Group interviews were conducted when there was more than one participant from an institution. Additional interviews were not sought after thematic saturation was achieved.
Interview Guide Development and Interviews
The interview guide was developed with input from all team members. Initial questions focused on elucidating the capabilities of each institution’s RPM program for HaH. Subsequent questions focused on capturing the use of RPM quality metrics across the National Quality Forum framework’s four domains (access to care, financial impact/cost, experience, and effectiveness), as well as motivations for and barriers to quality measurement.
Interviews started with introductions from the study team and participants, followed by obtaining verbal study consent. Interviews typically lasted 60 minutes and were conducted on Microsoft Teams with a minimum of two study team members. All study team members participated in at least one interview. The Microsoft Teams transcription function was used to create interview transcripts, and study team members also took field notes during interviews. Transcripts were not returned to participants and repeat interviews were not conducted. Participants did not provide feedback on findings or the final manuscript.
Analysis
We identified RPM quality measures reported by institutional representatives by coding interview transcripts and field notes. RPM quality measures were then grouped according to the corresponding National Quality Forum Telehealth Framework Domain and Subdomain. Using a grounded theory approach, we conducted a thematic analysis of motivations for and barriers to quality measurement for HaH RPM programs.17 Two study members (DCW, JC) reviewed interview transcripts and independently developed a codebook with one-level themes. Coding discrepancies were then adjudicated by the third study team member (KZ).
Ethics Approval
Mass General Brigham’s Institutional Review Board reviewed the study protocol and considered it exempt [Protocol Number: 2022P003095].
Results
Interview invitations were sent to potential participants at 14 institutions, and ultimately 16 participants representing 9 institutions accepted the invitation to participate (response rate 64%) [Table 1]. Participants held a variety of positions, ranging from executive-level operations and medical director roles to innovation roles. The majority of institutions interviewed represented community health systems (5/9) and were located in urban geographies (6/9) [Table 2]. Most institutions interviewed had HaH RPM technology implemented for 2 or more years (6/9). At most institutions (8/9), patients were automatically set up with a standardized RPM suite of tools at enrollment, while one institution customized the RPM set-up based on patient condition. All institutions had remote vital sign monitoring capabilities, with the majority monitoring vital signs every 4 to 8 hours. Only 2 institutions monitored vitals continuously. A minority of programs deployed RPM that could capture a continuous single-lead EKG (3/9) or detect falls (3/9). The majority of programs had a monthly HaH patient census greater than 50 patients.
Regarding implemented RPM quality measures for HaH, most institutions utilized metric(s) within the NQF domain of effectiveness [Table 3]. Metrics within this domain often monitored the operational and technical effectiveness of RPM. Metrics related to the NQF domain of experience were also commonly implemented. Four institutions focused primarily on evaluating the experience of the patient or caregiver within this domain. Only two HaH units reported measuring a metric related to access to care—monitoring the percentage of patients declining HaH due to discomfort with technology. No institution reported implementing an RPM quality measure related to the NQF domains of system or clinical effectiveness.
The most common motivation for RPM quality metric development was to understand the patient experience [Table 4]. Participant 7, a medical director of an academic HaH program in the Midwest, commented, ‘We are interested in learning more about which patients are not having an easy time with our [RPM] equipment’. Another participant, representing a Northeast academic HaH unit, commented, ‘one of the main value adds of remote patient monitoring is the feel that patients get - they feel watched and cared for’. The second most commonly cited motivation among programs related to understanding how RPM implementation affected clinical staffing metrics. Participants cited aspirations to improve nursing team efficiency by safely decreasing the need for touch points, as well as utility in tracking response times to RPM alerts triggered by abnormal vital signs.
Regarding barriers to RPM quality metric development, interviewees most often mentioned concerns surrounding data fidelity (4/9). Participant 1, representing an academic, urban health system, commented on current challenges with the ‘noise to signal’ ratios of current RPM devices and the potential effect that could have on developing meaningful RPM quality metrics. Another interviewee mentioned difficulty obtaining data from an RPM device vendor as a barrier to developing quality metrics. The second most common barrier cited by interviewees related to underdeveloped data capture and reporting. A few participants mentioned not having the resources to perform the software development required for seamless RPM quality metric tracking, and a desire for RPM companies to include these capabilities as part of their offering.
Discussion
This qualitative study of HaH RPM programs at nine institutions found diverse early experiences implementing and evaluating such physiologic monitoring. Some programs chose intermittent monitoring (every 4-8 hours) while others leveraged continuous monitoring. Most currently take a one-size-fits-all approach, while at least one institution is already tailoring RPM technologies to perceived clinical risk and need. Monitoring capabilities also varied across single-lead EKG and fall-detection devices (likely in part due to the patient populations enrolled). For RPM quality measurement, the majority of institutions focused on the NQF domain of effectiveness, while a significant minority captured quality metrics related to patient, family, or caregiver experience. Participants most often mentioned a desire to understand the patient experience as a motivation for RPM quality metric development. Data fidelity concerns, as well as the limited ‘out-of-box’ reporting of metrics offered by current RPM device vendors, were commonly cited barriers to quality metric development.
To our knowledge, this is the first study to characterize HaH RPM implementation and describe associated quality metrics. We found wide variation among HaH institutions in the content of implemented quality metrics evaluating RPM initiatives. Our prior work evaluating quality metric implementation in the virtual urgent care realm revealed similar variation in measurement content.16 Initial prioritization of quality metrics in the NQF domains of experience and effectiveness may stem from the most common motivations referenced by interviewees: a desire to evaluate patient experience with RPM and to understand the impact on clinical staffing. While all reported effectiveness measures were process metrics, likely due to ease of collection, it could be even more valuable for HaH leaders to monitor and evaluate the impact of RPM on outcome metrics such as patient safety events or unanticipated escalations. Similarly, to our knowledge, no widespread metrics exist across inpatient settings to inform the development of quality metrics for HaH patient monitoring. Studies have evaluated continuous vital sign monitoring in the inpatient setting with regard to nursing and patient experience, alert burden per patient, and vital sign check completeness.18–22 The lack of standardized reference metrics in the inpatient setting creates a unique opportunity for metrics developed for HaH RPM to inform quality improvement initiatives not only in the care-at-home setting but potentially in the inpatient setting as well. To facilitate metric development, addressing a few key barriers commonly reported by programs could be helpful.
Participants cited a lack of data fidelity and ‘out-of-box’ reporting capabilities from RPM companies as major barriers to the development of quality metrics. Prior work evaluating quality metric development referenced similar challenges around data availability and IT limitations.16,23 These findings are not surprising given the significant cost associated with developing, implementing, and monitoring quality metrics. One study in 2016 estimated the cost of quality measurement initiatives to be $40,069 per physician.24 Given the significant upfront financial and human resource commitments required to start HaH units, HaH leaders could benefit from closer partnerships with governmental agencies (e.g., the FDA’s Digital Health Center of Excellence) and device vendors to develop better out-of-box reporting capabilities and improve data fidelity. Additionally, by working with the Tech and Quality Indicators Councils of the Hospital at Home Users Group in the US (as well as similar international entities), RPM standards and patient-centered quality metrics (e.g., falls per hours ambulating) could be further developed.
Our study has several limitations. We used a small convenience sample of participants, identifying leaders at HaH institutions either known to the study team or through the published literature. Consequently, the majority of HaH institutions interviewed had been established for at least 2 years and were relatively large, with monthly censuses greater than 50 patients. As such, our findings may not be generalizable to the many nascent HaH programs that have only recently started. Moreover, only one HaH unit in our study is located outside the United States, limiting global generalizability. Additionally, the majority of HaH programs represented were based in urban settings, where distance to a patient’s home and internet connectivity may be more favorable than in rural settings, which could have influenced initial RPM quality metric development. Finally, a few participants had prior collegial relationships with the study team, which may have led to social desirability bias and influenced responses.
Conclusion
In conclusion, our research highlights the current state of HaH RPM and associated quality metric use across 9 leading institutions. For HaH leaders and clinicians, our work provides a foundation from which to consider further development of RPM quality metrics and highlights key barriers to anticipate and overcome. For health system researchers, these insights help inform an evaluation framework for larger studies investigating the use of RPM for HaH and its impact on quality and safety, as well as how it might integrate within the broader RPM ecosystem. Ultimately, our findings show that the development of RPM standards and patient-centric quality metrics is urgently needed to guide HaH and RPM leaders in implementing and advancing the use of this technology as a key pillar of high-quality, accessible HaH care.
Acknowledgements
Grant support was provided through a Mass General Brigham Centers of Excellence Research Grant. The authors would like to extend their sincere appreciation to all participants.
Funding
Mass General Brigham Centers of Excellence Research Grant
Data Sharing
All authors had full access to all of the data (including statistical reports and tables) related to the study.
The participants of this study did not give written consent for their data to be shared publicly; due to the sensitive nature of the research, supporting data are not available.
Conflicts of interest
David Whitehead
Consulting fees: Inbound Health
Jared Conley
Advisor: Eolas Medical, Vivalink
Speaking Fees: Becton Dickinson
Author Contributions
DCW, KSZ, and JC had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: DCW, KSZ, JC
Acquisition, analysis, or interpretation of data: DCW, KSZ, JC
Drafting of the manuscript: DCW, JC
Critical revision of the manuscript for important intellectual content: DCW, KSZ, JC
Statistical analysis: DCW, KSZ, JC
Administrative, technical, or material support: DCW, KSZ, JC
Study supervision: JC
