Key Points

Question: How are HaH programs using RPM, what quality metric practices exist, and what are the primary motivations and barriers to measurement?

Findings: All participating programs reported using remote vital sign monitoring, while a minority reported leveraging single-lead EKG and fall detection monitoring. We discovered wide variation in RPM quality metric monitoring, with institutions most often utilizing quality metric(s) related to operational and technical effectiveness. The most common motivation for RPM quality metric development was to understand the patient experience, whereas data fidelity concerns and the limited 'out-of-box' metrics of current devices were the most cited barriers to quality metric development.

Meaning: For HaH leaders and clinicians, our work provides a foundation from which to consider further development of RPM quality metrics. Ultimately, the development of RPM standards and patient-centric quality metrics is urgently needed to guide HaH and RPM leaders in implementing and advancing the use of this technology as a key pillar of high-quality, accessible HaH care.

Introduction

Hospital at home (HaH) care has grown considerably in recent years, catalyzed by the issuance of the Acute Hospital Care at Home (AHCaH) waiver by the Centers for Medicare and Medicaid Services and mounting evidence that appropriately risk-stratified acute care in the home can improve patient outcomes and experience while decreasing costs.1–5 A recent report highlighted that over 300 hospitals across 37 states had been approved to deliver HaH care under the waiver, allowing over 5,000 Medicare patients to benefit.6 With growing interest from public and private payers, as well as increased acceptance by physicians and patients, demand for HaH is expected to increase. To help meet this demand and improve the efficiency and safety of acute care at home, health systems have started to adopt and implement new technologies ranging from remote patient monitoring (RPM) to in-home diagnostics to software that helps distributed teams communicate and coordinate care.7 In particular, RPM holds significant potential to address key challenges in scaling HaH care, including safely expanding to additional patient populations and improving staffing efficiency.8

To date, RPM initiatives have largely focused on improving chronic disease management. Evaluations of remote patient monitoring for chronic disease management have shown cost-effectiveness, reduced acute care use for specific populations, and improved patient self-management and disease knowledge.9–11 RPM adoption in HaH care has been slower, limited in part by underdeveloped technology; however, the pace of adoption is increasing as RPM companies work to address key historical challenges for RPM within HaH, including limited Electronic Health Record (EHR) integration, form factor design limitations, and signal-to-noise challenges.12 As the technical effectiveness of RPM solutions advances, more focused evaluation of the clinical effectiveness of these solutions is important to guide further development.13 To our knowledge, no studies have described the current use of RPM within HaH across institutions or sought to evaluate how the impact of these initiatives is currently assessed. Furthermore, a recent research agenda for HaH, crafted at the World Hospital at Home Congress, highlighted that most research on HaH was conducted during a period when telehealth and RPM were nonexistent or in their infancy, and recommended that research prioritize developing standards for the use of technology in HaH as well as identifying key barriers to its use.14

We sought to describe the use of RPM within HaH units, as well as the use of quality metrics to evaluate these RPM implementations. We characterized the use of quality metrics according to the National Quality Forum's (NQF) telehealth framework, which focuses on evaluating impact across four key domains: access to care, cost, experience, and effectiveness.15 Prior work investigating quality metric use in other technology-enabled care delivery models has revealed significant variation in the content of monitored quality metrics.16 Understanding which domains lack quality metrics or are "under-measured" could help stakeholders prioritize the development of new metrics and better evaluate the impact of the growing use of RPM within HaH. Additionally, we captured HaH programs' motivations for and barriers to quality metric development to inform future efforts in this space.

Methods

Study Design and Setting

This was a qualitative study of nine RPM programs for HaH. HaH is the provision of acute, hospital-level care in a patient's home. In the United States, HaH services are available through academic health systems (typically affiliated with medical schools), community health systems, and private for-profit companies in partnership with health systems. Patients are most often admitted to HaH from the emergency department (ED) or from the inpatient ward. Under the AHCaH waiver, patients admitted to HaH require at least two sets of vital signs per day. Often these vital sign sets are taken by a visiting nurse or community paramedic. However, HaH organizations are increasingly incorporating RPM equipment, which can stay in the home for the duration of the acute home hospitalization to help collect these required sets as well as additional measurements as indicated. We interviewed HaH leaders from diverse health systems regarding their RPM quality monitoring initiatives from March 2023 to June 2023 [Table 1].

Table 1. Characteristics of Hospital at Home Institutions Represented
Characteristic    Participating Institutions, N=9, n (%)    Non-Participating Institutions, N=5, n (%)
Regions Covered
Northeast 3 (33%) 0 (0%)
Southeast 2 (22%) 2 (40%)
Midwest 2 (22%) 1 (20%)
Southwest 2 (22%) 0 (0%)
West 1 (11%) 0 (0%)
International 1 (11%) 2 (40%)
Delivery System Type
Academic Health System 3 (33%) 2 (40%)
Community Health System 5 (56%) 3 (60%)
Private HaH Vendor 1 (11%) 0 (0%)
Setting
Urban 6 (67%) 4 (80%)
Suburban 3 (33%) 1 (20%)
Rural 2 (22%) 0 (0%)
De-centralized 3 (33%) 0 (0%)
RPM for HaH Care Experience
0-1 years 1 (11%) ***
1-2 years 2 (22%) ***
2-4 years 4 (44%) ***
5+ years 2 (22%) ***
Monitoring Modalities
Vitals 9 (100%) ***
Continuous Single Lead EKG 3 (33%) ***
Fall Detection 3 (33%) ***
Monitoring Frequencies [a]
Every 4 hours (q4) 1 (11%) ***
Every 6 hours (q6) 2 (22%) ***
Every 8 hours (q8) 3 (33%) ***
Continuous 2 (22%) ***
Indications for RPM
All Patients 8 (89%) ***
Condition-based 1 (11%) ***
Average Monthly HaH Patient Volume [b]
0-10 0 (0%) ***
10-50 3 (33%) ***
50-100 2 (22%) ***
>100 3 (33%) ***

[a] Not reported by 2 institutions.
[b] Not reported by 1 institution.
*** Not available.

The study team included three practicing emergency physicians (DW, KZ, and JC) with virtual care experience, one of whom (JC) is an expert in remote patient monitoring for HaH. All study team members had training in qualitative research.

Participant Selection

Interview participants were identified using a convenience sample. Participants were required to be leaders at active HaH programs that had implemented remote patient monitoring; there were no other inclusion or exclusion criteria. Potential participants were identified through relationships with members of the study team as well as a review of the HaH literature. The study team emailed invitations to potential participants and ultimately interviewed 16 individuals representing 9 institutions. Group interviews were conducted when there was more than one participant from an institution. Additional interviews were not sought after thematic saturation was achieved.

Interview Guide Development and Interviews

The interview guide was developed with input from all team members. Initial questions focused on elucidating the capabilities of each institution's RPM program for HaH. Subsequent questions focused on capturing the use of RPM quality metrics across the National Quality Forum framework's four domains (access to care, financial impact/cost, experience, and effectiveness), as well as motivations for and barriers to quality measurement.

Interviews started with introductions from the study team and participants, followed by verbal study consent. Interviews typically lasted 60 minutes and were conducted on Microsoft Teams with a minimum of two study team members present. All study team members participated in at least one interview. The Microsoft Teams transcription function was used to create interview transcripts, and study team members also took field notes during interviews. Transcripts were not returned to participants and repeat interviews were not conducted. Participants did not provide feedback on the findings or the final manuscript.

Analysis

We identified RPM quality measures reported by institutional representatives by coding interview transcripts and field notes. RPM quality measures were then grouped according to the corresponding National Quality Forum telehealth framework domain and subdomain. Using a grounded theory approach, we conducted a thematic analysis of motivations for and barriers to quality measurement for HaH RPM programs.17 Two study team members (DCW, JC) reviewed interview transcripts and independently developed a codebook with one-level themes. Coding discrepancies were adjudicated by the third study team member (KZ).

Ethics Approval

Mass General Brigham’s Institutional Review Board reviewed the study protocol and considered it exempt [Protocol Number: 2022P003095].

Results

Interview invitations were sent to potential participants at 14 institutions, and ultimately 16 participants representing 9 institutions accepted the invitation to participate (response rate 64%) [Table 1]. Participants held a variety of positions, ranging from executive-level operations and medical director roles to innovation roles [Table 2]. The majority of institutions interviewed represented community health systems (5/9) and were located in urban geographies (6/9) [Table 1]. Most institutions interviewed had HaH RPM technology implemented for 2 or more years (6/9). At most institutions (8/9), patients were automatically set up with a standardized RPM suite of tools at enrollment, while one institution customized the RPM set-up based on patient condition. All institutions had remote vital sign monitoring capabilities, with the majority monitoring vital signs every 4 to 8 hours. Only 2 institutions monitored vitals continuously. A minority of programs deployed RPM that could capture a continuous single-lead EKG (3/9) or detect falls (3/9). The majority of programs had a monthly HaH patient census greater than 50 patients.

Table 2. Roles of Participants (Self-Reported)
Role    Participants, N=16, n (%)
Medical Director Hospital at Home 4 (25%)
Associate Vice President Virtual Care at Home 1 (6%)
Service Chief Virtual Care of Health System 1 (6%)
Medical Director Virtual Care at Home 1 (6%)
VP Clinical Innovation 1 (6%)
Regional Medical Director, Hospital Quality 1 (6%)
Regional Director, Hospital Quality and Operation 1 (6%)
Associate Director of Hospital at Home Operations 1 (6%)
SVP of Integration and Strategic Operations 1 (6%)
Chief Medical Officer 1 (6%)
VP Quality and Safety 1 (6%)
National Medical Director 1 (6%)
Hospital at Home Clinical Leader 1 (6%)

Regarding implemented RPM quality measures for HaH, most institutions utilized metric(s) within the NQF domain of effectiveness [Table 3]. Metrics within this domain often monitored the operational and technical effectiveness of RPM. Metrics related to the NQF domain of experience were also commonly implemented; within this domain, four institutions focused primarily on evaluating the experience of the patient or caregiver. Only two HaH units reported measuring a metric related to access to care, specifically the percentage of patients declining HaH due to discomfort with technology. No institution reported implementing an RPM quality measure related to the NQF subdomains of system or clinical effectiveness.

Table 3. Remote Patient Monitoring for Hospital at Home Quality Measurement
NQF Domain    Institutions Measuring Metric in Domain, N (%)    NQF Subdomain    Institutions Measuring Metric in Subdomain, N (%)    Examples
Access to Care 2 (22%) Access for patient, family, and/or caregiver 2 (22%) % patients declining HaH due to discomfort with technology
Access for care team 0 (0%)
Access to information 0 (0%)
Financial Impact/Cost 0 (0%) Financial impact to patient, family, and/or caregiver 0 (0%)
Financial impact to care team 0 (0%)
Financial impact to health system/payer 0 (0%)
Financial impact to society 0 (0%)
Experience 4 (44%) Patient, family, and/or caregiver experience 4 (44%) Patient experience with technology - Likert Scale
Patient comfort with technology - Likert scale
Caregiver experience with technology - Likert Scale
Care team member experience 1 (11%) Physician satisfaction with technology - Likert Scale
Community experience 0 (0%)
Effectiveness 5 (56%) System effectiveness 0 (0%)
Clinical effectiveness 0 (0%)
Operational effectiveness 4 (44%) Total RPM alerts per patient
RPM alert to resolution time
Time to alert response
Patient adherence rates to vital sign collection (vitals self-inputted by patient in tablet)
% RPM alerts responded to by HaH team
Technical effectiveness 3 (33%) RPM device battery failure rate
RPM device data transmission failure rate
RPM device offline time

The most common motivation for RPM quality metric development was to understand the patient experience [Table 4]. Participant 7, a medical director of an academic HaH program in the Midwest, commented, 'We are interested in learning more about which patients are not having an easy time with our [RPM] equipment'. Another participant, representing a Northeast academic HaH unit, commented, 'one of the main value adds of remote patient monitoring is the feeling that patients get - they feel watched and cared for'. The second most commonly cited motivation was related to understanding how RPM implementation affected clinical staffing metrics. Participants cited aspirations to improve nursing team efficiency by safely decreasing the need for in-person touch points, as well as the utility of tracking response times to RPM alerts triggered by abnormal vital signs.

Table 4. Barriers to Quality Measurement and Motivations for Quality Measurement for RPM for HaH
Barriers to Quality Measurement    Example Comments and Paraphrased Quotes from Interview Transcripts    Frequency
Data Fidelity ‘There is a noise to signal ratio often. The noise is more on the equipment side. I anticipate with newer products we will want to know more about the accuracy of the equipment’ – Institution 1, Participant 1

‘The more we're able to feel very confident in the data coming into our system, the less we are worried about an extra 5 minute drive or an extra 10 minute drive, an extra 30 mile drive because we feel confident in the diagnostic tools we have’ - Institution 2, Participant 2

‘The difficult portion is what is the right amount of [remote patient monitoring] devices to not be invasive yet get the right amount of data without getting too much or dirty data’ Institution 3, Participant 3

‘Collecting vendor data for quality metrics is difficult. Additionally there is a question of whether that data is reliable’ Institution 3, Participant 3

‘So I'd love to be able to somehow measure compliance with the wearable and correlate that to patient events, however getting that data is difficult without someone right outside the room.’ Institution 4, Participant 4
4/9 (44%)
Limited ‘Out-of-Box’ RPM Vendor Data Reporting Tools ‘They [Remote patient monitoring vendor] do track things like repairs that are needed or you know if a device isn't working as needed, but we don't receive standard reports on it’ – Institution 5, Participant 5

‘The [remote patient monitoring] system does not allow us to indicate/track resolution of actions or actions taken.’ – Institution 6, Participant 6

‘The number one priority is that a lot of these metrics have to be passive and accurate. It cannot require active effort on anybody's part and cannot require data verification.’ Institution 1, Participant 1
3/9 (33%)
Lack of HaH Quality Metric Standards ‘We have struggled to understand how to measure and evaluate our fall rate. We had this big debate over falls and how it is recorded as a quality metric. At home people are up out of bed much more often and thus potentially more prone to fall. We have seen a fall rate 5 times that of what is recorded in the hospital fall literature. Really, the right metric should be falls per hours up out of bed or something else that is new and different for the home hospital world.’ – Institution 3, Participant 3 1/9 (11%)
Connectivity “If you get [too far] from the [remote patient monitoring] tablet, then all of the [measurements] that are coming across just stop. Then when back within Bluetooth range, you get an influx of [biometric] data for the past hour and a half. How do you respond to something that happened an hour and a half ago?” – Institution 6, Participant 6 1/9 (11%)
Motivations for Quality Measurement
Patient Experience ‘One of the main value adds of remote patient monitoring is the feeling that patients get - they feel watched and cared for. Patients repeatedly tell us it felt really good that somebody was watching me like it felt really nice that I had this arm band on my arm. Additionally, it helps reduce false alarm burden for patients and families’ - Institution 2, Participant 2

‘There is a patient expectation that if a patient knows a [vital sign] alert is going off that they know it is being followed up on which motivates us to measure response times.’ – Institution 1, Participant 1

‘Patient experience is one measure we track as we anticipate seeing some impact.’ – Institution 8, Participant 8

‘The motivation for [tracking quality] is to prove the quality of the [remote patient monitoring] system to the patients.’ - Institution 3, Participant 3

‘We are interested in learning more about which patients are not having an easy time with our [remote patient monitoring] equipment’ – Institution 7, Participant 7
5/9 (56%)
Clinical Staffing Metrics ‘I would actually argue that the number of nursing touch points we have is reduced with the model of continuous monitoring and a nurse in the command center so we would be interested in tracking impact on RN ratios.’ Institution 4, Participant 4

‘From my perspective I think one of the main value adds is improving nursing team efficiency which we would like to track.’ Institution 2, Participant 2

‘Quality metric monitoring helps us do quality improvement on our staffing. Are they doing their jobs? We track if staff are answering the alerts and how fast is it taking to respond. It also helps us track productivity’ – Institution 1, Participant 1
3/9 (33%)
Expanding HaH Patient Eligibility ‘I think the other big question is whether better tracking of remote monitoring quality can allow you to take care of a wider range of patients’ - Institution 2, Participant 2

‘A primary motivator is increasing our ability to take patients and increase our census.’- Institution 7, Participant 7.
2/9 (22%)
Ensuring Patient Safety ‘[Quality measurement motivation] is because of safety for sure.’ – Colleen Hole, Atrium

‘We have been told that the ‘ED bounce-back’ rate should never be 0, but with RPM, because in theory we are able to see [patient decline] sooner. We can then intervene quicker and maybe get [the ‘ED bounce-back rate’] from 3% down to 2%. So we track that metric to see if RPM is having an impact.’ Atrium

‘It would be valuable to track metrics across key domains when expanding to new patient populations to make sure that we're at least measuring up to what we think is the baseline.’ – Institution 9, Participant 10
2/9 (22%)
Clinician Experience ‘We also want to measure the provider experience because we want them to be clinically comfortable providing a certain level of care to patients at home or remotely [using RPM].’ – Institution 5, Participant 9 1/9 (11%)
Experience with Prior RPM Tech ‘We had a higher failure rate with some of our older [RPM] equipment so we just got a new set of [RPM] equipment, so [connectivity issues] are definitely being tracked.’ – Institution 7, Participant 7 1/9 (11%)
Patient Engagement ‘The motivation is truly to improve the quality of the system to the patients. We want to show them what we are collecting and what our outcomes are.’ - Institution 3, Participant 3 1/9 (11%)
Clinician and Leadership Buy-In ‘We want to use [quality monitoring] to get buy in from our own providers as well as institutional leaders. …it's something that our executive team is skeptical about at times.’ Institution 3, Participant 3

‘Showing quality metrics and all the data to [leadership] is very important because that proves that we're actually making an impact and identifies areas that we're not making impact in.’ - Institution 3, Participant 3
1/9 (11%)
Financial Impact ‘The primary motivators I would say are both financial and increasing our ability to take patients on our census.’ – Institution 7, Participant 7 1/9 (11%)

Regarding barriers to RPM quality metric development, interviewees most often mentioned concerns surrounding data fidelity (4/9). Participant 1, representing an academic, urban health system, commented on current challenges with the 'noise to signal' ratios of current RPM devices and the potential effect that could have on developing meaningful RPM quality metrics. Another interviewee mentioned difficulty obtaining data from an RPM device vendor as a barrier to developing quality metrics. The second most common barrier cited by interviewees related to underdeveloped data capture and reporting. A few participants mentioned lacking the resources to perform the software development required for seamless RPM quality metric tracking, and expressed a desire for RPM vendors to offer these capabilities as part of their products.

Discussion

This qualitative study of HaH RPM programs at nine institutions found diverse early experiences implementing and evaluating physiologic monitoring in the home. Some programs chose intermittent monitoring (every 4-8 hours) while others leveraged continuous monitoring. Most currently take a one-size-fits-all approach, while at least one institution already tailors RPM technologies to assessed clinical risk and need. Monitoring capabilities also varied across single-lead ECG and fall-detection devices (likely in part due to the patient populations enrolled). For RPM quality measurement, the majority of institutions focused on the NQF domain of effectiveness, while a significant minority captured quality metrics related to patient, family, or caregiver experience. Participants most often mentioned a desire to understand the patient experience as a motivation for RPM quality metric development. Data fidelity concerns as well as the limited 'out-of-box' reporting of metrics offered by current RPM device vendors were commonly cited barriers to quality metric development.

To our knowledge, this is the first study to characterize HaH RPM implementation and describe associated quality metrics. We found wide variation amongst HaH institutions in the content of implemented quality metrics evaluating RPM initiatives. Our prior work evaluating quality metric implementation in the virtual urgent care realm revealed similar variation in measurement content.16 Initial prioritization of quality metrics in the NQF domains of experience and effectiveness may stem from the most common motivations referenced by interviewees: a desire to evaluate patient experience with RPM and to understand the impact on clinical staffing. While all reported effectiveness measures were process metrics, likely due to ease of collection, it could be even more valuable for HaH leaders to monitor and evaluate the impact of RPM on outcome metrics such as patient safety events or unanticipated escalations. Similarly, to our knowledge, no widespread metrics exist across inpatient settings to inform the development of quality metrics for HaH patient monitoring. Studies have evaluated continuous vital sign monitoring in the inpatient setting with regard to nursing and patient experience, alert burden per patient, and vital sign check completeness.18–22 The lack of standardized inpatient metrics to reference creates a unique opportunity: metrics developed for HaH RPM could inform quality improvement initiatives not only in the care-at-home setting but potentially in the inpatient setting as well. To facilitate metric development, addressing a few key barriers commonly reported by programs could be helpful.

Participants cited a lack of data fidelity and limited 'out-of-box' reporting capabilities from RPM companies as major barriers to the development of quality metrics. Prior work evaluating quality metric development referenced similar challenges around data availability and IT limitations.16,23 These findings are not surprising given the significant cost associated with developing, implementing, and monitoring quality metrics. One study in 2016 estimated the cost of quality measurement initiatives to be $40,069 per physician.24 Given the significant upfront financial and human resource commitments required to start HaH units, HaH leaders could benefit from closer partnerships with governmental agencies (e.g., the FDA's Digital Health Center of Excellence) and device vendors to develop better out-of-box reporting capabilities and improve data fidelity. Additionally, by working with the Tech and Quality Indicators Councils of the Hospital at Home Users Group in the US (as well as similar international entities), HaH leaders could further develop RPM standards and patient-centered quality metrics (e.g., falls per hours ambulating).
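
To illustrate how such an exposure-adjusted metric might be constructed, a minimal sketch follows; the formula and the numbers used are hypothetical assumptions for exposition and are not drawn from the study or from any published standard.

\[
\text{Exposure-adjusted fall rate} \;=\; \frac{\text{number of falls}}{\text{total patient-hours out of bed}} \times 1000
\]

Under this hypothetical construction, a HaH patient who is out of bed roughly 8 hours per day and an inpatient who is out of bed roughly 2 hours per day would, at the same underlying per-hour risk of falling, generate a raw per-day fall rate about four times higher at home even though the exposure-adjusted rates are identical, which is consistent with the interviewee's observation that unadjusted fall rates can appear inflated in the home setting.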

Our study has several limitations. We used a small convenience sample of participants, identifying leaders at HaH institutions either known to the study team or through the published literature. Consequently, the majority of HaH institutions interviewed had been established for at least 2 years and were relatively large, with most reporting a monthly census of greater than 50 patients. As such, our findings may not generalize to the many nascent HaH programs that have only recently started. Moreover, only one HaH unit in our study is located outside of the United States, limiting global generalizability. Additionally, the majority of HaH programs represented were based in an urban setting, where distance to a patient's home and internet connectivity may be more favorable than in a rural setting, which could have influenced initial RPM quality metric development. Finally, a few participants had prior collegial relationships with the study team, which may have introduced social desirability bias and influenced responses.

Conclusion

Our research highlights the current state of HaH RPM and associated quality metric use across 9 leading institutions. For HaH leaders and clinicians, our work provides a foundation from which to consider further development of RPM quality metrics, and highlights key barriers to anticipate and plan to overcome. For health system researchers, these insights help inform an evaluation framework for larger studies investigating the use of RPM for HaH and its impact on quality and safety, as well as how it might integrate within the broader RPM ecosystem. Ultimately, this work underscores that the development of RPM standards and patient-centric quality metrics is urgently needed to guide HaH and RPM leaders in implementing and advancing the use of this technology as a key pillar of high-quality, accessible HaH care.


Acknowledgements

Grant support was provided through a Mass General Brigham Centers of Excellence Research Grant. The authors would like to extend their sincere appreciation to all participants.

Funding

Mass General Brigham Centers of Excellence Research Grant

Data Sharing

All authors had full access to all of the data (including statistical reports and tables) related to the study.

The participants of this study did not give written consent for their data to be shared publicly; due to the sensitive nature of the research, supporting data are not available.

Conflicts of interest

David Whitehead
Consulting fees: Inbound Health
Jared Conley
Advisor: Eolas Medical, Vivalink
Speaking Fees: Becton Dickinson

Author Contributions

DCW, KSZ, JC had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: DCW, KSZ, JC

Acquisition, analysis, or interpretation of data: DCW, KSZ, JC

Drafting of the manuscript: DCW, JC

Critical revision of the manuscript for important intellectual content: DCW, KSZ, JC

Statistical analysis: DCW, KSZ, JC

Administrative, technical, or material support: DCW, KSZ, JC

Study supervision: JC