Published on 19.1.2024 in Vol 12 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/47632.
The Goldilocks Dilemma on Balancing User Response and Reflection in mHealth Interventions: Observational Study

1Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, United States

2Center for Health Behavior and Health Education, Vanderbilt University Medical Center, Nashville, TN, United States

3Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, United States

4Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States

Corresponding Author:

Lyndsay A Nelson, PhD


Background: Mobile health (mHealth) has the potential to radically improve health behaviors and quality of life; however, there are still key gaps in understanding how to optimize mHealth engagement. Most engagement research reports only on system use without consideration of whether the user is reflecting on the content cognitively. Although interactions with mHealth are critical, cognitive investment may also be important for meaningful behavior change. Notably, content that is designed to request too much reflection could result in users’ disengagement. Understanding how to strike the balance between response burden and reflection burden has critical implications for achieving effective engagement to impact intended outcomes.

Objective: In this observational study, we sought to understand the interplay between response burden and reflection burden and how they impact mHealth engagement. Specifically, we explored how varying the response and reflection burdens of mHealth content would impact users’ text message response rates in an mHealth intervention.

Methods: We recruited support persons of people with diabetes for a randomized controlled trial that evaluated an mHealth intervention for diabetes management. Support person participants assigned to the intervention (n=148) completed a survey and received text messages for 9 months. During the 2-year randomized controlled trial, we sent 4 versions of a weekly, two-way text message that varied in both reflection burden (level of cognitive reflection requested relative to that of other messages) and response burden (level of information requested for the response relative to that of other messages). We quantified engagement by using participant-level response rates. We compared the odds of responding to each text and used Poisson regression to estimate associations between participant characteristics and response rates.

Results: The texts requesting the most reflection had the lowest response rates regardless of response burden (high reflection and low response burdens: median 10%, IQR 0%-40%; high reflection and high response burdens: median 23%, IQR 0%-51%). The response rate was highest for the text requesting the least reflection (low reflection and low response burdens: median 90%, IQR 61%-100%) yet still relatively high for the text requesting medium reflection (medium reflection and low response burdens: median 75%, IQR 38%-96%). Lower odds of responding were associated with higher reflection burden (P<.001). Younger participants and participants who had a lower socioeconomic status had lower response rates to texts with more reflection burden, relative to those of their counterparts (all P values were <.05).

Conclusions: As reflection burden increased, engagement decreased, and we found more disparities in engagement across participants’ characteristics. Content encouraging moderate levels of reflection may be ideal for achieving both cognitive investment and system use. Our findings provide insights into mHealth design and the optimization of both engagement and effectiveness.

JMIR Mhealth Uhealth 2024;12:e47632

doi:10.2196/47632


Introduction

Background

Mobile health (mHealth) is transforming health delivery as a highly convenient and effective approach for supporting individuals with chronic conditions [1-3]. Delivered via phones, tablets, and wearables, mHealth provides education, motivation, monitoring, and other forms of support to improve health behaviors. SMS text messaging is one form of mHealth that is uniquely poised to benefit everyone, including people who are older, are disadvantaged, and are from traditionally minoritized racial or ethnic backgrounds [4-6]. A critical factor influencing mHealth effectiveness is users’ engagement or interaction with the technology, which is typically measured via system use [7-9]. Across the mHealth literature, engagement tends to be highly variable [10,11], which has spurred a whole body of research that aims to understand predictors of engagement, including user characteristics and intervention features (eg, intervention duration and frequency of sending content) [10-14]. However, very little research has attended to the type of mHealth content that users are expected to engage with [15] and, more specifically, how the content may be requesting more or less cognitive reflection.

The primary goal in having users engage with mHealth content is health behavior change. With respect to mHealth interventions, there is a hyperfocus on wanting the user to interact with the technology (eg, responding to a text message), with less consideration of whether the user is reflecting on the content cognitively (eg, reflecting on past behavior and planning future behavior) [8]. Although interaction with the technology is a critical measure, there is a growing consensus that cognitive investment is also important for meaningful behavior change in many types of mHealth interventions [16-18]. Notably, content may be designed in a way that represents a low response burden, thereby easily eliciting a response (ie, producing high engagement), but such content may not evoke the necessary cognitive reflection required to change behavior [18]. Alternatively, content that is designed to encourage deeper reflection may overwhelm users, which risks them disengaging completely. Understanding how to strike the balance between response and reflection has critical implications for effective engagement (ie, engagement needed to impact outcomes) [19]. To our knowledge, no studies have explored the association between reflection demands and the degree of interaction with an mHealth tool. Understanding the interplay between reflection burden and response burden will help guide the design of interventions seeking to strike this balance.

Objective

Our team previously developed an mHealth intervention (delivered via text messages and phone calls) called Family/Friend Activation to Motivate Self-care (FAMS) [20,21]. FAMS is a diabetes self-management intervention that targets persons with type 2 diabetes and provides the option for persons with diabetes to invite a support person to also receive text messages. We recently evaluated FAMS in a randomized controlled trial (RCT) [22], and during routine monitoring in the first few weeks of the trial, we observed a low response rate to text messages among support persons. Because support persons’ engagement with the text messages was an optional component of the intervention, we determined that this was an opportunity to explore how changing the content of these texts might impact response rates without compromising our ability to evaluate FAMS’ effects. Over the course of the RCT [22], we used a pragmatic approach to vary both the reflection burden and the response burden of the two-way text messages sent to support persons assigned to the intervention. In this observational study, our primary goal was to explore how these variations would impact users’ engagement with text messages, as measured via response rates. We also explored support persons’ characteristics that were associated with response rates for each type of text and described the different responses to each text.


Methods

Study Design and Eligibility

This study was conducted as part of the FAMS 2.0 RCT. The trial design, intervention details, and outcomes for persons with diabetes and support persons have been published previously [22-24]. For the trial, dyads comprising a person with diabetes and their support person were randomized to FAMS or a control condition. We recruited persons with diabetes who were receiving care for type 2 diabetes at Vanderbilt University Medical Center primary care clinics. At enrollment, persons with diabetes were asked to invite a support person to participate with them and receive text messages; however, support person invitation and enrollment were not required. We defined a support person as any family member or friend with whom the person with diabetes would feel comfortable talking about diabetes management and health goals. Eligible support persons were aged ≥18 years, could speak and read English, and had a mobile phone separate from that of the person with diabetes. The only exclusion criterion was the inability to receive and respond to a text after training. For this study, we analyzed data from support persons in dyads that were randomly assigned to the intervention group (FAMS).

Ethical Considerations

The Vanderbilt University Institutional Review Board approved all study procedures (institutional review board number: 200398; approved April 8, 2020), and the trial was registered on ClinicalTrials.gov (trial number: NCT04347291).

Procedure

While enrolling persons with diabetes into the trial, research assistants collected contact information for a potential support person. A research assistant then contacted potential support persons to verify interest and eligibility, obtain verbal informed consent, and ask support persons to complete a baseline survey. Surveys were completed by phone with a research assistant, on the web via an emailed link, or via a mailed paper copy, per participants’ preferences. All survey data were stored in REDCap (Research Electronic Data Capture; Vanderbilt University), a secure web-based platform that supports data capture for research studies [25,26]. In addition to collecting data on sociodemographic characteristics, surveys asked support persons to choose the time of day they wanted to receive text messages. Relevant survey responses were transferred from REDCap to our technology partner, PerfectServe, using an automated application programming interface. PerfectServe used participant information to tailor, schedule, and send text messages to support persons for 9 months. Support persons could earn a total of US $120 for completing all study surveys (through 15 mo for the larger RCT). There was no compensation for receiving or responding to text messages.

The Intervention

Persons with diabetes received daily text message support and monthly coaching sessions, during which they set behavioral diabetes self-management goals (as detailed by Mayberry et al [22]). Support persons received text messages that were designed to increase dialogue about and facilitate their involvement in the diabetes self-management of the persons with diabetes; a one-way message was sent 3 to 4 times per week, and a two-way message (also known as an interactive text message) was sent once per week. One-way messages were either a general text message about providing diabetes self-management support or a text message tailored to the identified diabetes goals of the persons with diabetes. Two-way messages asked support persons about how they supported the health of the persons with diabetes. Support persons who replied to the two-way text received an automated response, thanking them for their answer.

Although an individual support person’s intervention experience lasted 9 months (36 wk), intervention delivery for the trial lasted 2 years. Over the course of those 2 years, we varied both the response burden (the level of information requested for the response relative to that of other messages) and the reflection burden (the level of cognitive reflection requested relative to that of other messages) of the weekly two-way text messages, which were sent to support persons in 6 fixed periods (ie, waves). The waves coincided with the weeks of the trial; they did not coincide with the weeks of each individual support person’s intervention experience. Figure 1 includes the content for each version of the text message, the weeks of the trial when each text was sent (ie, calendar time), and the respective waves. We started the trial (wave 1) by sending a text message that was high in both reflection burden and response burden (high/high). In wave 2, we tested a text that was low in both reflection burden and response burden (low/low), and then in wave 3, we tested a text that involved medium reflection burden and low response burden (medium/low). In wave 4, we retested the high/high message to help determine if the point at which the text was sent during the trial impacted engagement. In wave 5, we sought to delineate the relative impacts of reflection burden and response burden; therefore, we tested a text message that was high in reflection burden and low in response burden (high/low). Finally, we closed out the trial by retesting the low/low text message (wave 6). The decisions about what messages to test were made iteratively based on response rates to the prior message, with the goal of learning how much reflection we could request while still achieving a relatively high response rate.

Figure 1. The four versions of the two-way text message. The content references an example person with diabetes named Tom.

Of note, each support person only received the versions of the two-way text message that were sent during their 36-week trial participation, with most (120/148, 81.1%) receiving 2 or 3 different versions and no participants receiving the same message in 2 separate waves. Because this analysis used data from support persons only, we refer to them as participants henceforth.

Measures

Sociodemographic and Relationship Characteristics

We collected self-reported data on age, gender, race, ethnicity, socioeconomic status (measured based on education [ie, years in school] and annual household income), and health literacy (assessed via the Brief Health Literacy Screen [27]). In addition, we asked whether participants were cohabitating with the persons with diabetes and the frequency with which they provided diabetes-specific helpful involvement to the persons with diabetes at baseline, as assessed via the Family and Friend Involvement in Adults’ Diabetes (FIAD) helpful subscale, support person version [28].

Engagement

We operationalized engagement by using response rate (ie, for each participant, the number of two-way messages responded to divided by the number of two-way messages sent).
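
As an illustration, a participant-level response rate of this kind can be computed from message-level data as in the following minimal sketch (in R, the language used for all analyses in this study); the data frame `texts` and its column names are hypothetical.

```r
# Minimal sketch: participant-level response rate from message-level data.
# Assumes a hypothetical data frame `texts` with one row per two-way message
# sent, a participant ID column `pid`, and `responded` coded 1/0.
library(dplyr)

response_rates <- texts %>%
  group_by(pid) %>%
  summarise(
    n_sent      = n(),                  # two-way messages sent
    n_responses = sum(responded),       # two-way messages responded to
    rate        = n_responses / n_sent  # participant-level response rate
  )

summary(response_rates$rate)  # minimum, quartiles, median, and mean
```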

Analyses

Statistical Analysis Overview

All statistical analyses were performed by using R version 4.2.1 (R Foundation for Statistical Computing). We described participant characteristics via means and SDs or via frequencies and percentages, as appropriate. Except when examining temporality, message waves that included the same version of the two-way text message were grouped together. Because this study was exploratory, we did not perform sample size calculations.

Overall Engagement by Text Message Version

For each version of the text message, we determined the proportion of two-way text messages sent to support persons that received a response by study week (ie, calendar time). We also generated summary statistics (means, medians, and first and third quartiles) for response rates at the participant level; reporting both mean and median provides more detailed information on the distribution of data. If participants withdrew during their intervention experience, we calculated their response rates based on the data available prior to their withdrawal. To account for repeated measures within participants, we used generalized estimating equations with a working independence correlation structure and a logistic link function to compare the odds of responding to two-way text messages across the four versions.
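
A minimal sketch of such a model is shown below, using the geepack package as one possible R implementation of generalized estimating equations; the data frame and variable names are hypothetical.

```r
# Minimal sketch: generalized estimating equations comparing the odds of
# responding across message versions. Assumes the hypothetical message-level
# data frame `texts` also includes a `version` factor
# ("high/high", "low/low", "medium/low", "high/low").
library(geepack)

fit_gee <- geeglm(
  responded ~ version,
  id     = pid,                       # repeated messages clustered within participant
  family = binomial(link = "logit"),  # logistic link
  corstr = "independence",            # working independence correlation structure
  data   = texts
)

exp(coef(fit_gee))  # odds ratios relative to the reference version
```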

Participant Characteristics and Engagement

We used Poisson regression to estimate associations (as incidence rate ratios) between participant characteristics and text message response rates for each version of the text message. We included the number of two-way messages sent to a participant as an offset term in order to account for variation in the number of messages sent to each participant in a given wave; therefore, the exponentiated coefficients from the Poisson regression model compared response rates on a per-message basis. Participant characteristics included age, race and ethnicity (non-Hispanic White vs minoritized race or ethnicity), gender, education (years), annual household income (≥US $50,000 per year), health literacy (Brief Health Literacy Screen), whether the persons with diabetes and support persons were cohabitating, and self-reported baseline helpful involvement (FIAD). Further, we rescaled participants’ age so that a 1-unit change in the model corresponded to a 10-year difference in age (ie, age was expressed in decades), which allowed for easier interpretation of the results. In regression models, the association between age and an outcome can be difficult to interpret when coefficients reflect a single-year change in age (ie, the coefficients end up being very small). Scaling the age variable in this way meant that we compared groups that differed in age by 1 decade rather than 1 year.
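
For illustration, the following minimal sketch fits one such per-version model in R; the data frame `version_data`, its column names, and the simplified covariate coding are hypothetical.

```r
# Minimal sketch: per-version Poisson model with an offset for the number of
# two-way texts sent. Assumes a hypothetical data frame `version_data` with one
# row per participant (reply count `n_responses`, texts sent `n_sent`, covariates).
version_data$age_decades <- version_data$age_years / 10  # age per 10-year difference

fit_pois <- glm(
  n_responses ~ age_decades + minoritized + male + education_yrs +
    income_ge_50k + bhls + cohabitating + fiad + offset(log(n_sent)),
  family = poisson(link = "log"),
  data   = version_data
)

exp(coef(fit_pois))             # incidence rate ratios on a per-message basis
exp(confint.default(fit_pois))  # Wald 95% CIs for the incidence rate ratios
```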

For this analysis, we excluded 5 participants who were missing all baseline data. However, missing covariate values were otherwise addressed via multiple imputation by chained equations (M=500 iterations).
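
A minimal sketch of this imputation-then-pooling workflow is shown below; the use of the mice package (a common R implementation of chained equations), the data frame, the covariates, and the setting of m to match the 500 noted above are all assumptions for illustration.

```r
# Minimal sketch: multiple imputation by chained equations, then pooling a
# per-version Poisson model across imputed data sets. The mice package and
# m = 500 imputed data sets are assumptions, not confirmed study settings.
library(mice)

imp <- mice(version_data, m = 500, seed = 2024, printFlag = FALSE)

fits <- with(imp, glm(
  n_responses ~ age_decades + male + income_ge_50k + offset(log(n_sent)),
  family = poisson(link = "log")
))

pooled <- pool(fits)              # combine estimates via Rubin's rules
summary(pooled, conf.int = TRUE)  # estimates on the log scale; exponentiate for IRRs
```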

Types of Responses

We characterized the responses to each version of the two-way text message. For the low/low, medium/low, and high/low texts, we reported the frequency of responses based on what the respective text requested (eg, “Yes,” “No,” “1,” “2,” “3,” “4,” or “5”). For the high/high text messages, 2 team members reviewed responses and categorized each as being either high effort or low effort. High-effort responses included comments on what went well that week, comments on what could go better next week, or both, and they referred to a diabetes self-management behavior such as diet, exercise, stress management, or communication (eg, “[He] and I got out several times this week walking after work. Our biggest problem is watching portion size when we are eating. Always continue to work on that.”). A low-effort response consisted of only a brief phrase that did not reference a diabetes self-management behavior (eg, “Things went well” and “We were on vacation this week”) or did mention a behavior but was unclear as to what went well or what could go better next week (eg, “Walking”).


Results

Participant Characteristics

In the trial, of the 150 support person participants who were enrolled and randomized to receive the FAMS intervention, 2 withdrew before the intervention started. The remaining 148 were included in the analyses (Table 1). The mean age was 50.3 (SD 14.7) years; 28.4% (42/148) of participants were men, and 33.1% (49/148) reported a minoritized racial or ethnic background. The mean length of education was 14.9 (SD 2.5) years, and 31.1% (46/148) of participants had annual household incomes of <US $50,000. Over half (84/148, 56.8%) were spouses or partners of the persons with diabetes, and 70.3% (104/148) were cohabitating with the persons with diabetes. Further, 9 participants withdrew at some point during the intervention; the analyses below reflect their engagement during the time they participated.

Table 1. Participant characteristics (N=148).

Age [a] (y), mean (SD): 50.3 (14.7)
Gender [b], n (%)
Men: 42 (28.4)
Women: 101 (68.2)
Race and ethnicity, n (%)
Non-Hispanic White: 92 (62.2)
Non-Hispanic Black: 29 (19.6)
Other non-Hispanic races: 12 (8.1)
Hispanic: 8 (5.4)
Missing: 7 (4.7)
Socioeconomic status
Education [c] (y), mean (SD): 14.9 (2.5)
Annual household income (US $), n (%)
<35,000: 25 (16.9)
35,000-49,999: 21 (14.2)
50,000-74,999: 22 (14.9)
75,000-99,999: 24 (16.2)
≥100,000: 40 (27)
Missing or unknown: 16 (10.8)
Health literacy (BHLS [d]), mean (SD) [e]: 13.7 (1.5)
Relationship variables
Relationship type, n (%)
Spouse or partner: 84 (56.8)
Parent: 15 (10.1)
Son or daughter: 22 (14.9)
Grandchild: 7 (4.7)
Friend: 11 (7.4)
Other: 4 (2.7)
Missing: 5 (3.3)
Helpful involvement (FIAD [f]), mean (SD) [g]: 2.7 (0.9)
Cohabitating with person with diabetes [h], n (%): 104 (70.3)

[a] 10 participants did not report their date of birth (ie, age).
[b] 5 participants did not provide data on gender.
[c] 8 participants did not report years of education.
[d] BHLS: Brief Health Literacy Screen.
[e] 5 participants did not have data for the BHLS measure.
[f] FIAD: Family and Friend Involvement in Adults' Diabetes.
[g] 5 participants did not have data for the FIAD helpful subscale.
[h] 8 participants did not have data about cohabitating with the persons with diabetes.

Overall Engagement by Text Message Type

Figure 2 presents the proportion of two-way text messages that received a response within each week of the trial (ie, by calendar time). Notably, text message response rates for waves 1 and 4 (both were high/high message waves) were comparable, as were those for waves 2 and 6 (both were low/low message waves). Table 2 includes descriptive statistics for the overall and participant-level response rates for each version of the two-way text. The median response rates for the high/high, medium/low, and low/low messages were 23% (IQR 0%-51%), 75% (IQR 38%-96%), and 90% (IQR 61%-100%), respectively. When we kept reflection burden high but lowered response burden in the high/low message, the median response rate (10%, IQR 0%-40%) was closest to that for the high/high message, suggesting that reflection burden was responsible for the lower response rates seen with the high/high message.

Figure 2. Text message response rates by week across each wave. Response data were excluded for the first 4 weeks and the last 5 weeks of the trial when <5 individuals were receiving the intervention.

Table 2. Response rates for each version of the two-way text message.

High reflection and high response burdens (high/high): 127 participants included in analysis; participant-specific response rate: mean 33% (SD 33%), median 23% (IQR 0%-51%)
Low reflection and low response burdens (low/low): 123 participants included in analysis; participant-specific response rate: mean 74% (SD 32%), median 90% (IQR 61%-100%)
Medium reflection and low response burdens (medium/low): 125 participants included in analysis; participant-specific response rate: mean 65% (SD 35%), median 75% (IQR 38%-96%)
High reflection and low response burdens (high/low): 55 participants included in analysis; participant-specific response rate: mean 26% (SD 32%), median 10% (IQR 0%-40%)

We also compared the odds of responding to the four versions of the text message (Table 3). When compared to the high/high message, the odds of responding to the low/low message were 53% (95% CI 42%-65%) higher, the odds of responding to the medium/low message were 40% (95% CI 31%-49%) higher, and the odds of responding to the high/low message were 7.5% (95% CI 0.7%-14%) lower. All other pairwise comparisons (Table 3) indicated decreasing odds of responding at increasing levels of reflection burden.

Table 3. Comparison of text message response rates by text message version. Included are odds ratios (ORs) and 95% CIs, along with P values.

Relative to high reflection burden and high response burden:
Low reflection burden and low response burden: OR 1.53 (95% CI 1.42-1.65), P<.001
Medium reflection burden and low response burden: OR 1.40 (95% CI 1.31-1.49), P<.001
High reflection burden and low response burden: OR 0.93 (95% CI 0.86-0.99), P=.03

Relative to low reflection burden and low response burden:
Medium reflection burden and low response burden: OR 0.92 (95% CI 0.86-0.98), P=.01
High reflection burden and low response burden: OR 0.60 (95% CI 0.55-0.66), P<.001

Relative to medium reflection burden and low response burden:
High reflection burden and low response burden: OR 0.66 (95% CI 0.61-0.72), P<.001

Participant Characteristics Associated With Odds of Responding

Table 4 presents estimated incidence rate ratios, along with 95% CIs and P values, from multivariable Poisson regression models that were used to identify participant characteristics predictive of response rate. Younger participants, participants who were not cohabitating with the persons with diabetes, and participants who had a lower socioeconomic status had lower response rates to both the high/high message and the medium/low message compared to those of older participants, participants who were cohabitating, or participants who had a higher socioeconomic status, respectively. The only characteristic associated with response rates for the low/low message was gender, such that men had lower response rates. Further, the only characteristic associated with response rates for the high/low message was age, such that younger age was associated with lower response rates. Across message versions, younger participants had lower response rates to any message with more burden than the low/low message, and participants who were not cohabitating with the persons with diabetes had lower response rates to the higher-burden messages than those of participants who were cohabitating. Race, ethnicity, health literacy, and baseline helpful involvement provided to the person with diabetes did not show consistent patterns of predicting response rates for any message version.

Table 4. Participant characteristics predicting text message response rates for each version of the text. Presented are estimated incidence rate ratios (IRRs) and 95% CIs, along with P values. [a]

Age (y × 10): high/high, IRR 1.14 (1.06-1.23), P=.004 [b]; low/low, IRR 0.99 (0.95-1.03), P=.67; medium/low, IRR 1.08 (1.04-1.14), P=.001 [b]; high/low, IRR 1.28 (1.09-1.51), P=.002 [b]
Race and ethnicity: high/high, IRR 1.07 (0.85-1.35), P=.57; low/low, IRR 0.96 (0.84-1.09), P=.51; medium/low, IRR 0.94 (0.81-1.09), P=.40; high/low, IRR 0.60 (0.36-1.01), P=.052
Gender (men): high/high, IRR 0.62 (0.49-0.78), P<.001 [b]; low/low, IRR 0.78 (0.68-0.89), P<.001 [b]; medium/low, IRR 0.89 (0.78-1.02), P=.10; high/low, IRR 1.28 (0.83-1.98), P=.26
Education: high/high, IRR 0.98 (0.94-1.02), P=.34; low/low, IRR 1.00 (0.98-1.03), P=.77; medium/low, IRR 0.96 (0.94-0.99), P=.007 [b]; high/low, IRR 0.99 (0.90-1.09), P=.87
Income: high/high, IRR 0.71 (0.57-0.89), P=.003 [b]; low/low, IRR 1.05 (0.91-1.21), P=.51; medium/low, IRR 1.14 (0.98-1.33), P=.10; high/low, IRR 0.77 (0.46-1.27), P=.30
BHLS [c]: high/high, IRR 1.04 (0.97-1.12), P=.30; low/low, IRR 1.04 (1.00-1.09), P=.06; medium/low, IRR 1.02 (0.98-1.06), P=.24; high/low, IRR 1.02 (0.86-1.21), P=.80
Cohabitating: high/high, IRR 1.37 (1.09-1.73), P=.007 [b]; low/low, IRR 1.03 (0.90-1.19), P=.64; medium/low, IRR 1.21 (1.05-1.40), P=.008 [b]; high/low, IRR 1.63 (0.93-2.85), P=.09
FIAD [d]: high/high, IRR 1.05 (0.94-1.17), P=.41; low/low, IRR 1.00 (0.93-1.07), P=.99; medium/low, IRR 0.99 (0.93-1.06), P=.79; high/low, IRR 0.90 (0.71-1.14), P=.37

[a] A total of 5 support persons without baseline characteristics were excluded from this analysis: 2 were excluded from the models for the high/high, low/low, and medium/low messages; 1 was excluded from the models for the high/high and medium/low messages; 1 was excluded from the models for the high/high, low/low, and high/low messages; and 1 was excluded from the models for the low/low and medium/low messages.
[b] P<.05.
[c] BHLS: Brief Health Literacy Screen.
[d] FIAD: Family and Friend Involvement in Adults' Diabetes.

Types of Responses

In this section, we report on engagement at the text message level (vs the participant level). For the high/high messages, 1429 texts were sent, and 445 responses were received. Further, 13 responses were excluded from the analysis because the content was uninterpretable or was not relevant to the two-way text prompt. The reviewers categorized each response into the high- or low-effort response category, with 98.6% (responses: 426/432) agreement; of the 426 texts agreed upon, 350 (82.2%) were categorized as high-effort responses, and 76 (17.8%) were categorized as low-effort responses. For the low/low texts, 1791 texts were sent, and 1341 responses were received; almost all of the responses were “yes” responses (n=1239, 92.4%), while only 97 (7.2%) were “no” responses, and 5 (0.4%) were considered “other” responses. For the medium/low texts, 1847 texts were sent, and 1218 responses were received; the most common response was a “5” response (n=681, 55.9%), followed by a “4” response (n=338, 27.8%) and then a “3” response (n=144, 11.8%). Lastly, for the high/low texts, 446 texts were sent, and 109 responses were received; most of the responses received were “OK” responses, as requested in the message (n=86, 78.9%).


Discussion

Principal Results

Despite the potential of mHealth to enhance self-management support and quality of life, there are still key gaps in understanding how to optimize mHealth engagement [16,17,19]. Most engagement research reports only on system use without consideration of the cognitive reflection done in the process of engaging with the content [16,29]. Ideally, we want to encourage reflection that results in meaningful behavior change, but it is unclear how much we can request, with respect to reflection, before users disengage. We varied the reflection and response burdens of two-way text messages to examine how these variations impacted users’ engagement, as assessed via response rates. We found, generally, that as the reflection burden of the message increased, participants’ engagement decreased. Importantly, when the same version of the text was sent at different points in the trial, participants’ engagement was consistent, suggesting that the message itself was key for response rates. The response rates for the high/low message were similar to those for the high/high message, and this supports reflection burden (vs response burden) being the primary driver of lower engagement. We also found evidence that as the reflection burden of the message increased, there were more disparities in engagement across participant characteristics. This finding helps inform who we may lose with content requiring more from users and thus how to ensure mHealth interventions do not widen existing health disparities.

Research focused on promoting mHealth engagement has proliferated in recent years, with the primary goal of increasing system use [30]. Across such studies, we see evidence that content that is more personalized; uses simpler, nontechnical language; and is empowering results in higher engagement [10,19,31-35]. Less research has compared specific types of content and has rarely tested different types within the same study. An exception is a recent study by Klimis et al [15], wherein they used machine learning to demonstrate that text messages with informative (providing health facts or education) and instructional (providing tips or recommendations) message intents were associated with increased engagement, while notification messages that addressed noneducational matters (eg, welcome and exit messages) were associated with reduced engagement [15]. Our study targeted a two-way message and varied the levels of reflection and response burdens in that message. By adjusting the reflection level specifically, we gained unique insight into how engagement with each message variation may ultimately influence behavior change [17,18]. The main way in which our study differs from others in this area of research is that we looked beyond system use as the sole dimension of mHealth engagement. Our goal was not necessarily to see which message resulted in the highest response rate but rather to determine how much reflection we could request from users and achieve a level of interaction that suggested that they were still invested in the content.

Although our results show generally that engagement decreases with more reflection, the nuances in our findings allow us to provide unique recommendations around mHealth design. For instance, it may be best to alternate content with different levels of reflection burden. Although users were more likely to respond to content that was lower in reflection burden, most (350/426, 82.2%) of the responses that we received to the high-reflection messages included a high-effort level of reflection. The act of asking people to reflect stimulates internal thoughts that are difficult to measure without a response [36] but may still occur among some persons who do not respond. Alternating content may help promote periodic responding and reflecting throughout an intervention experience. Another option involves using an adaptive intervention to tailor the content based on each person’s responsiveness. That is, everyone could start receiving content with a high reflection burden, but if a person’s response rate starts to drop, they could then switch to content with a moderate reflection burden. Finally, especially in situations where there is limited flexibility with the mHealth functionality, researchers may consider sending the medium/low message to all participants, given that the content encouraged a moderate level of reflection (more than the low/low message) yet still yielded a high response rate.
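
As a concrete illustration of the adaptive option described above, the following minimal sketch (in R) shows one possible step-down rule; the 4-week window, the 30% threshold, and the function itself are hypothetical choices for illustration and were not used or evaluated in this study.

```r
# Minimal sketch of a hypothetical adaptation rule: step a participant down from
# high- to medium-reflection content if their recent response rate drops. The
# window and threshold are illustrative only.
choose_reflection_level <- function(recent_responded, current_level = "high",
                                    window = 4, threshold = 0.30) {
  recent <- tail(recent_responded, window)            # last `window` weekly texts (1/0)
  if (length(recent) < window) return(current_level)  # not enough data yet; no change
  if (current_level == "high" && mean(recent) < threshold) {
    "medium"                                           # lower the reflection burden
  } else {
    current_level
  }
}

choose_reflection_level(c(1, 0, 0, 0))  # replied to 1 of last 4 texts -> "medium"
```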

Limitations

Our study has several limitations to acknowledge. For instance, our results are based on an SMS text messaging intervention, which is a specific form of mHealth. It is possible that users would have responded differently if the content had been delivered via an app or wearable technology. Importantly, compared to apps and other internet-dependent technologies, SMS text messaging is both lower in cost and more easily accessed, and it tends to have higher rates of engagement [4,37]. In addition, this study recruited persons with diabetes and their support persons from a specific region in Middle Tennessee. We acknowledge that the findings may not be generalizable to individuals living in other locations or settings. Relatedly, the content asked about how the support persons supported the health of the persons with diabetes, and engagement may differ when asking about a user’s own health; however, the marked differences in engagement across message types support broader applications. Another limitation of our work is that we restricted our assessment of engagement to a behavioral measure (ie, responding to the text) and did not have a way to assess participants’ cognitive investment or experience with each version of the text. Based on our analysis of participants’ responses to the high/high message, it appeared that responders were cognitively engaged, but we were not able to compare cognitive engagement across the other messages. In addition, the sample size for the high/low message analysis (n=55) was considerably smaller than those for the other message analyses, which was due to testing the high/low message during only 1 wave near the end of the trial when fewer participants were enrolled. Relatedly, we did not have the time or a sufficient number of participants toward the end of the trial to test the medium/low and high/low text messages in a second wave, and we do not know for certain whether engagement with these texts could be impacted by temporality; however, as engagement with the high/high and low/low texts remained similar across multiple waves, this is unlikely. The ordering of the text messages was variable across participants, and due to the observational nature of this study, we cannot determine the extent to which ordering may have impacted results; however, the average response rates and trends across message versions and waves provide general insights on how these variations may impact engagement. Finally, we did not assess the impact of engagement on outcomes, as this fell outside the scope of our study; however, other studies in digital health have examined this association [38,39].

Conclusions

In order for individuals to benefit from mHealth and achieve desired effects on outcomes, engagement with the mHealth tool is needed. Our results help elucidate how truly complex the nature of engagement is. Although our past work and that of others have demonstrated the importance of behaviorally interacting with mHealth interventions [9,31,40,41], this measure represents one piece of a larger puzzle. Engagement may be best conceptualized as including both a behavioral dimension and a cognitive dimension. Balancing these dimensions may be what is ultimately needed to achieve effective engagement for impacting intended outcomes. Our study contributes to a growing body of research that encourages a more nuanced approach to studying engagement that goes beyond measuring system use. We hope that our findings help advance the field of mHealth and inform intervention design, with the goal of optimizing both engagement and effectiveness.

Acknowledgments

This research was funded by the National Institutes of Health (NIH), National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), through grant R01-DK119282. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. MKR is funded by the National Center for Advancing Translational Sciences (NIH; grant KL2TR002245).

Authors' Contributions

LAN, AJS, LSM, RAG Jr, and MKR significantly contributed to the conception of the study and planned analyses. LAN wrote the manuscript. AJS analyzed the quantitative data, and RAG Jr oversaw data analyses. LML and SM analyzed responses to the high/high message. LSM was the principal investigator of the Family/Friend Activation to Motivate Self-care (FAMS) study. All authors were involved in data interpretation and manuscript revision and approved the final version submitted for publication.

Conflicts of Interest

None declared.

  1. Fagherazzi G, Ravaud P. Digital diabetes: perspectives for diabetes prevention, management and research. Diabetes Metab. Sep 2019;45(4):322-329. [CrossRef] [Medline]
  2. Fan K, Zhao Y. Mobile health technology: a novel tool in chronic disease management. Intelligent Medicine. Feb 2022;2(1):41-47. [CrossRef]
  3. Li R, Liang N, Bu F, Hesketh T. The effectiveness of self-management of hypertension in adults using mobile health: systematic review and meta-analysis. JMIR Mhealth Uhealth. Mar 27, 2020;8(3):e17776. [CrossRef] [Medline]
  4. Willcox JC, Dobson R, Whittaker R. Old-fashioned technology in the era of “Bling”: is there a future for text messaging in health care? J Med Internet Res. Dec 20, 2019;21(12):e16630. [CrossRef] [Medline]
  5. Anderson-Lewis C, Darville G, Mercado RE, Howell S, Di Maggio S. mHealth technology use and implications in historically underserved and minority populations in the United States: systematic literature review. JMIR Mhealth Uhealth. Jun 18, 2018;6(6):e128. [CrossRef] [Medline]
  6. Nelson LA, Greevy RA, Spieker A, et al. Effects of a tailored text messaging intervention among diverse adults with type 2 diabetes: evidence from the 15-month REACH randomized controlled trial. Diabetes Care. Jan 2021;44(1):26-34. [CrossRef] [Medline]
  7. Murray E, Hekler EB, Andersson G, et al. Evaluating digital health interventions: key questions and approaches. Am J Prev Med. Nov 2016;51(5):843-851. [CrossRef] [Medline]
  8. Cole-Lewis H, Ezeanochie N, Turgiss J. Understanding health behavior technology engagement: pathway to measuring digital behavior change interventions. JMIR Form Res. Oct 10, 2019;3(4):e14052. [CrossRef] [Medline]
  9. Nelson LA, Spieker AJ, Mayberry LS, McNaughton C, Greevy RA. Estimating the impact of engagement with digital health interventions on patient outcomes in randomized trials. J Am Med Inform Assoc. Dec 28, 2021;29(1):128-136. [CrossRef] [Medline]
  10. Nelson LA, Coston TD, Cherrington AL, Osborn CY. Patterns of user engagement with mobile- and web-delivered self-care interventions for adults with T2DM: a review of the literature. Curr Diab Rep. Jul 2016;16(7):66. [CrossRef] [Medline]
  11. Killikelly C, He Z, Reeder C, Wykes T. Improving adherence to web-based and mobile technologies for people with psychosis: systematic review of new potential predictors of adherence. JMIR Mhealth Uhealth. Jul 20, 2017;5(7):e94. [CrossRef] [Medline]
  12. Nouri SS, Adler-Milstein J, Thao C, et al. Patient characteristics associated with objective measures of digital health tool use in the United States: a literature review. J Am Med Inform Assoc. May 1, 2020;27(5):834-841. [CrossRef] [Medline]
  13. Buck B, Chander A, Ben-Zeev D. Clinical and demographic predictors of engagement in mobile health vs. clinic-based interventions for serious mental illness. J Behav Cogn Ther. Apr 2020;30(1):3-11. [CrossRef]
  14. Nelson LA, Spieker A, Greevy R, LeStourgeon LM, Wallston KA, Mayberry LS. User engagement among diverse adults in a 12-month text message-delivered diabetes support intervention: results from a randomized controlled trial. JMIR Mhealth Uhealth. Jul 21, 2020;8(7):e17534. [CrossRef] [Medline]
  15. Klimis H, Nothman J, Lu D, et al. Text message analysis using machine learning to assess predictors of engagement with mobile health chronic disease prevention programs: content analysis. JMIR Mhealth Uhealth. Nov 10, 2021;9(11):e27779. [CrossRef] [Medline]
  16. Torous J, Michalak EE, O’Brien HL. Digital health and engagement-looking behind the measures and methods. JAMA Netw Open. Jul 1, 2020;3(7):e2010918. [CrossRef] [Medline]
  17. Yeager CM, Benight CC. If we build it, will they come? issues of engagement with digital health interventions for trauma recovery. Mhealth. Sep 11, 2018;4:37. [CrossRef] [Medline]
  18. Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. Jun 2017;7(2):254-267. [CrossRef] [Medline]
  19. Yardley L, Spring BJ, Riper H, et al. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med. Nov 2016;51(5):833-842. [CrossRef] [Medline]
  20. Mayberry LS, Berg CA, Greevy RA, et al. Mixed-methods randomized evaluation of FAMS: a mobile phone-delivered intervention to improve family/friend involvement in adults’ type 2 diabetes self-care. Ann Behav Med. Mar 16, 2021;55(2):165-178. [CrossRef] [Medline]
  21. Mayberry LS, Berg CA, Harper KJ, Osborn CY. The design, usability, and feasibility of a family-focused diabetes self-care support mHealth intervention for diverse, low-income adults with type 2 diabetes. J Diabetes Res. Nov 7, 2016;2016:7586385. [CrossRef] [Medline]
  22. Mayberry LS, El-Rifai M, Nelson LA, et al. Rationale, design, and recruitment outcomes for the Family/Friend Activation to Motivate Self-care (FAMS) 2.0 randomized controlled trial among adults with type 2 diabetes and their support persons. Contemp Clin Trials. Nov 2022;122:106956. [CrossRef] [Medline]
  23. Roddy MK, Spieker AJ, Nelson LA, et al. Well-being outcomes of a family-focused intervention for persons with type 2 diabetes and support persons: main, mediated, and subgroup effects from the FAMS 2.0 RCT. Diabetes Res Clin Pract. Oct 2023;204:110921. [CrossRef] [Medline]
  24. Nelson LA, Spieker AJ, Greevy RAJ, et al. Glycemic outcomes of a family-focused intervention for adults with type 2 diabetes: main, mediated, and subgroup effects from the FAMS 2.0 RCT. Diabetes Res Clin Pract. Dec 2023;206:110991. [CrossRef] [Medline]
  25. Harris PA, Taylor R, Minor BL, et al. The REDCap Consortium: building an international community of software platform partners. J Biomed Inform. Jul 2019;95:103208. [CrossRef] [Medline]
  26. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research Informatics support. J Biomed Inform. Apr 2009;42(2):377-381. [CrossRef] [Medline]
  27. Wallston KA, Cawthon C, McNaughton CD, Rothman RL, Osborn CY, Kripalani S. Psychometric properties of the Brief Health Literacy Screen in clinical practice. J Gen Intern Med. Jan 2014;29(1):119-126. [CrossRef] [Medline]
  28. Mayberry LS, Berg CA, Greevy RAJ, Wallston KA. Assessing helpful and harmful family and friend involvement in adults' type 2 diabetes self-management. Patient Educ Couns. Jul 2019;102(7):1380-1388. [CrossRef] [Medline]
  29. Short CE, DeSmet A, Woods C, et al. Measuring engagement in eHealth and mHealth behavior change interventions: viewpoint of methodologies. J Med Internet Res. Nov 16, 2018;20(11):e292. [CrossRef] [Medline]
  30. Druce KL, Dixon WG, McBeth J. Maximizing engagement in mobile health studies: lessons learned and future directions. Rheum Dis Clin North Am. May 2019;45(2):159-172. [CrossRef] [Medline]
  31. Cao W, Milks MW, Liu X, et al. mHealth interventions for self-management of hypertension: framework and systematic review on engagement, interactivity, and tailoring. JMIR Mhealth Uhealth. Mar 2, 2022;10(3):e29415. [CrossRef] [Medline]
  32. Alkhaldi G, Modrow K, Hamilton F, Pal K, Ross J, Murray E. Promoting engagement with a digital health intervention (HeLP-Diabetes) using email and text message prompts: mixed-methods study. Interact J Med Res. Aug 22, 2017;6(2):e14. [CrossRef] [Medline]
  33. Nahum-Shani I, Rabbi M, Yap J, et al. Translating strategies for promoting engagement in mobile health: a proof-of-concept microrandomized trial. Health Psychol. Dec 2021;40(12):974-987. [CrossRef] [Medline]
  34. Wei Y, Zheng P, Deng H, Wang X, Li X, Fu H. Design features for improving mobile health intervention user engagement: systematic review and thematic analysis. J Med Internet Res. Dec 9, 2020;22(12):e21687. [CrossRef] [Medline]
  35. Shan R, Sarkar S, Martin SS. Digital health technology and mobile devices for the management of diabetes mellitus: state of the art. Diabetologia. Jun 2019;62(6):877-887. [CrossRef] [Medline]
  36. Irvine L, Melson AJ, Williams B, et al. Real time monitoring of engagement with a text message intervention to reduce binge drinking among men living in socially disadvantaged areas of Scotland. Int J Behav Med. Oct 2017;24(5):713-721. [CrossRef] [Medline]
  37. Mayberry LS, Jaser SS. Should there be an app for that? the case for text messaging in mHealth interventions. J Intern Med. Feb 2018;283(2):212-213. [CrossRef] [Medline]
  38. Delaney T, Mclaughlin M, Hall A, et al. Associations between digital health intervention engagement and dietary intake: a systematic review. Nutrients. Sep 20, 2021;13(9):3281. [CrossRef] [Medline]
  39. Mclaughlin M, Delaney T, Hall A, et al. Associations between digital health intervention engagement, physical activity, and sedentary behavior: systematic review and meta-analysis. J Med Internet Res. Feb 19, 2021;23(2):e23180. [CrossRef] [Medline]
  40. Li Y, Guo Y, Hong YA, et al. Dose-response effects of patient engagement on health outcomes in an mHealth intervention: secondary analysis of a randomized controlled trial. JMIR Mhealth Uhealth. Jan 4, 2022;10(1):e25586. [CrossRef] [Medline]
  41. Spieker AJ, Nelson LA, Rothman RL, et al. Feasibility and short-term effects of a multi-component emergency department blood pressure intervention: a pilot randomized trial. J Am Heart Assoc. Mar 2022;11(5):e024339. [CrossRef] [Medline]


FAMS: Family/Friend Activation to Motivate Self-care
FIAD: Family and Friend Involvement in Adults' Diabetes
mHealth: mobile health
RCT: randomized controlled trial
REDCap: Research Electronic Data Capture


Edited by Lorraine Buis; submitted 31.03.23; peer-reviewed by Tessa Delaney, Xiaofu Liu; final revised version received 29.11.23; accepted 30.11.23; published 19.01.24

Copyright

© Lyndsay A Nelson, Andrew J Spieker, Lauren M LeStourgeon, Robert A Greevy Jr, Samuel Molli, McKenzie K Roddy, Lindsay S Mayberry. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 19.1.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.