
METHODS article

Front. Pharmacol., 07 December 2022
Sec. Drugs Outcomes Research and Policies
This article is part of the Research Topic Recent Advances in Attempts to Improve Medication Adherence – from basic research to clinical practice.

Development and validation of a new non-disease-specific survey tool to assess self-reported adherence to medication

Rønnaug Eline Larsen1*, Are Hugo Pripp2,3, Tonje Krogstad1, Cecilie Johannessen Landmark1,4,5, Lene Berge Holm1,6
  • 1Department of Life Sciences and Health, Faculty of Health Sciences, Norway And The Research Group Medicines and Patient Safety, Oslo Metropolitan University, Oslo, Norway
  • 2Faculty of Health Sciences, Oslo Metropolitan University, Oslo, Norway
  • 3Department of Biostatistics, Oslo Centre of Biostatistics and Epidemiology, University of Oslo, Oslo, Norway
  • 4The National Center for Epilepsy, Oslo University Hospital, Oslo, Norway
  • 5Section for Clinical Pharmacology, Department of Pharmacology, Oslo University Hospital, Oslo, Norway
  • 6Center for Connected Care, Oslo University Hospital, Oslo, Norway

Background: Patients’ non-adherence to medication affects both patients themselves and healthcare systems. Consequences include higher mortality, worsening of disease, patient injuries, and increased healthcare costs. Many existing survey tools for assessing adherence are linked to specific diseases and assessing medication-taking behavior or identifying barriers or beliefs. This study aimed to develop and validate a new non-disease-specific survey tool to assess self-reported medication-taking behavior, barriers, and beliefs in order to quantify the causes of non-adherence and measure adherence.

Methods: The survey tool was developed after literature searches and pilot testing. Validation was conducted by assessing the psychometric properties of content, construct, reliability, and feasibility. Content validity was assessed by subject matter experts and construct validity by performing exploratory factor analysis. Reliability assessment was performed by calculating internal consistency, Cronbach’s alpha and test/retest reliability, intraclass correlation coefficient (ICC), and standard error of measurement (SEm). A receiver operating characteristic (ROC) curve and the Liu method were used to calculate the statistical cut-off score for good versus poor adherence. Survey responses from Norwegian medication users over 18 years recruited via social media were used for the factor analysis and Cronbach’s alpha.

Results: The final survey tool contains 37 causes of non-adherence connected to medication-taking behavior and barriers to adherence and beliefs associated with adherence. The overall result for all 37 items demonstrated reliable internal consistency, Cronbach’s alpha = 0.91. The factor analysis identified ten latent variables for 29 items, explaining 61.7% of the variance. Seven of the latent variables showed reliable internal consistency: medication fear and lack of effect, conditional practical issues, pregnancy/breastfeeding, information issues, needlessness, lifestyle, and avoiding stigmatization (Cronbach’s alpha = 0.72–0.86). Shortage showed low internal consistency (Cronbach’s alpha = 0.59). Impact issues and personal practical issues showed poor internal consistency (Cronbach’s alpha = 0.51 and 0.48, respectively). The test/retest reliability ICC = 0.89 and SEm = 1.11, indicating good reliability. The statistical cut-off score for good versus poor adherence was 10, but the clinical cut-off score was found to be 2.

Conclusion: This survey tool, OMAS-37 (OsloMet Adherence to medication Survey tool, 37 items), was demonstrated to be a valid and reliable instrument for assessing adherence. Further studies will examine the ability of the tool to measure the adherence-enhancing effect of interventions.

1 Introduction

Adherence to medications is the process by which patients take their medication as prescribed, comprising initiation, implementation, and discontinuation (Vrijens et al., 2012). “Increasing the effectiveness of adherence interventions may have a far greater impact on the health of the population than any improvement in specific medical treatments” is an important statement in an influential WHO report from 2003 on medication adherence (Sabaté, 2003). The importance of adherence interventions for patients’ health remains highly relevant, as failure to adhere is a serious problem affecting both patients and healthcare systems, resulting in higher mortality, worsening of disease, more patient injuries, and increased healthcare costs (Sokol et al., 2005; Cutler et al., 2018; Khan and Socha-Dietrich, 2018; Holbrook et al., 2021; Lu et al., 2021; Majeed et al., 2021; Nymoen et al., 2022).

Adherence rates average around 50% but range widely from 0% to more than 100% (Nieuwlaat et al., 2014; Horne et al., 2019). In 2018, the Organization for Economic Co-operation and Development (OECD) reported that estimates from 2010 suggest non-adherence annually contributes to nearly 200,000 premature deaths and costs European governments EUR 125 billion in excess healthcare (Rabia Khan, 2018). In 2004, Norwegian healthcare costs due to incorrect and ineffective medication usage were estimated to be EUR 500 million (Report No. 18 to the Storting, 2004–2005) in a population of 4.6 million people. However, the economic impact of low adherence to medication is difficult to assess due to current research being limited and of mixed quality (Cutler et al., 2018).

The many reasons for non-adherence are thoroughly described in the literature, often showcasing the complexity of adherence behavior (Sabaté, 2003; Hugtenburg et al., 2013; Gast and Mathes, 2019; Horne et al., 2019). One example is the earlier mentioned WHO report, where adherence is viewed as a multidimensional phenomenon determined by the interplay between five different dimensions: patient-related factors, therapy-related factors, social/economic factors, condition-related factors, and health care team and system-related factors (Sabaté, 2003).

It is also widely recognized that non-adherence can be both intentional, e.g., medication deliberately not being taken, and/or unintentional, e.g., medication prevented from being taken by barriers beyond one’s own control. Horne et al. have, in this context, proposed the Perceptions and Practicalities Approach (PAPA) (Horne et al., 2019). In PAPA, intentional causes of non-adherence are linked to motivation, which depends upon perceptions, e.g., beliefs, emotions, and preferences. Unintentional causes of non-adherence are linked to ability, which depends upon practicalities, e.g., capacity, resources, and opportunities. PAPA indicates that adherence is essentially dependent upon individual motivation and ability, which can vary both within and between individuals for different medications and/or timelines. Thus, mapping and quantifying causes of non-adherence are essential in the process of tailoring interventions to enhance adherence.

Patients’ self-reported measures of medication adherence behavior are one of the most common approaches to assessing medication adherence (Simoni et al., 2006; Velligan et al., 2006; Paschal et al., 2008; Kelli Stidham Hall et al., 2010; Garfield et al., 2011; Gonzalez and Schneider, 2011; Stirratt et al., 2015). Self-reporting survey tools are often validated by comparing survey data with invasive methods like monitoring drug concentration, blood sugar, blood pressure, and/or cholesterol (Simoni et al., 2006; Velligan et al., 2006; Paschal et al., 2008; Kelli Stidham Hall et al., 2010; Gonzalez and Schneider, 2011). Assessing self-reporting against adequate clinical measurements opens the possibility of predicting clinical outcomes by measuring adherence behavior. Hence, existing self-reporting survey tools are, to a great extent, connected to specific medications and/or medical diagnoses, although there are several survey tools independent of medication/medical diagnosis (Garfield et al., 2011; Nguyen et al., 2013; Stirratt et al., 2015; Chan et al., 2021) which can be useful, e.g., when assessing non-adherence in general populations. The survey tools differ not only in number of items but, more importantly, also in how they map non-adherence. The comprehensive systematic review by Nguyen et al. (2013), which contains the most used validated self-report adherence scales, and the complementary study by Stirratt et al. (2015) are examples of literature showing how adherence scales focus either on medication-taking behavior and/or barriers to adherence and/or beliefs associated with adherence. As the PAPA indicates, tailored interventions are necessary to increase the effectiveness of adherence interventions. One size does not fit all, and adequate knowledge about the causes of non-adherence is vital for tailoring interventions. However, finding a comprehensive survey tool that focuses on both medication-taking behavior and barriers to adherence and beliefs associated with adherence has proven difficult.

Therefore, the aim of this study was to develop and validate a new non-commercial survey tool independent of patients’ medication type and/or medical diagnosis in order to assess self-reported medication-taking behavior, barriers, and beliefs. The overall goal was to make available an adequate tool for measuring adherence and quantifying causes of non-adherence in various patient groups.

2 Methods

2.1 Development of an online survey tool and questionnaire

The survey tool items are causes of medication-taking behavior, barriers, and beliefs that were identified by literature searches in national (Oria, The Norwegian Electronic Health Library, Norwegian subject libraries, and The Great Norwegian Encyclopedia) and international (PubMed, Google Scholar, and Google) databases. Important search terms were adherence, compliance, concordance, questionnaire, medication, self-report, patient, and equivalent terms in Norwegian. The search terms were chosen based on being relevant keywords for existing survey tools for medication adherence.

General recommendations for developing questionnaires were used in the planning and developing phases of the questionnaire (Robson, 2002; Eberhard-Gran and Winther, 2017).

After identifying the items, the items were divided into the five aforementioned WHO dimensions of adherence (Sabaté, 2003). For each item, the medication user was asked “How often do you not follow the recommendations from your doctor regarding the use of your medication because of [item]?” Each item was then scored on a 4-point Likert scale: “very often”—“often”—“sometimes”—“rarely/never”. The survey tool was built into a questionnaire in Nettskjema (2022). Nettskjema belongs to the University of Oslo and is one of the safest and most used solutions for online data collection for research in Norway.

All of the questions had to be answered to proceed further in the questionnaire, leaving no missing values for completed responses.

Inclusion criteria were Norwegian residents over the age of 18 who had been using medication prescribed and/or recommended by a doctor in the last 12 months. Responders who stated that they were under 18 years, that they had not been using one or more medications prescribed or recommended by a doctor in the last 12 months, or that they were not living in Norway were directed out of the questionnaire before answering the survey tool items.

Responders were also asked demographic questions, such as gender and education, and were asked to choose from a list of diagnoses to provide information on the ailments for which they had been medicated during the last 12 months. In addition, the responders were asked a question about their own perception of their overall adherence (see Section 2.4).

Members of an adherence expert team gave feedback on the content of the different versions of the survey tool via video calls and one-to-one meetings until there were no further comments from the team.

A few adjustments were made after content validation and feedback given in the feasibility pilots (see Section 2.3). After the final version of the survey tool was completed, a technical verification was performed to test the logic of the item order.

2.2 Recruitment

For the feasibility pilots, acquaintances of the researchers were invited to participate by answering the online questionnaire and afterward giving feedback on the availability and usability of the online solution, the time taken to answer, and the clarity of the questions, as well as suggesting causes of non-adherence that were not already included.

For construct validity and internal consistency, data were collected as part of an online survey on medication use. Moderators of several large Norwegian Facebook groups were contacted, and six group moderators replied with consent. An invitation to participate, with general information about the study and an electronic link to the questionnaire, was then posted in these six Facebook groups. The general invitation addressed group members over 18 years who were using or had been using medication in the last 12 months. To participate, the group members were to use the electronic link and would, in this way, remain anonymous. In addition to the survey respondents, data from two pilot studies (not the feasibility pilots) conducted in 2021 using the online questionnaire in Nettskjema were added for construct validation and internal consistency.

For test/retest reliability, respondents were recruited from three medium-sized Facebook groups with an invitation to participate anonymously in the test/retest of the questionnaire.

2.3 Validation strategy

To make sure survey data are trustworthy, survey tools must be validated—not solely through theoretical constructs but also through empirical constructs. Validity, reliability, and feasibility are important elements of validation. Validity expresses the extent to which an instrument measures what it is designed to measure, and reliability expresses the extent to which outcomes are consistent on repeated measures (Kimberlin and Winterstein, 2008; García de Yébenes Prous et al., 2009; Bolarinwa, 2015). Poor feasibility will influence the response rate and/or interpretation/scoring of survey tool items (García de Yébenes Prous et al., 2009).

Choosing a validation strategy depends on what is to be measured and whether the data fit the assumptions of the selected validation methods (García de Yébenes Prous et al., 2009; Bolarinwa, 2015; McNeish, 2018). The chosen validation strategy is shown in Table 1. Each validation method required an independent population, except for construct validity and internal consistency, where the same population was used. The population sizes are shown in Table 1 and further explained in the Results section. Feasibility was tested by piloting.


TABLE 1. Validation strategy for the survey tool.

Content validity, i.e., to what extent the instrument includes most of the dimensions of the concept being studied (García de Yébenes Prous et al., 2009), was tested by feedback on the online survey tool from the earlier-mentioned adherence expert team on language clarity (wording), completeness, item relevance, and (if any) additional causes of non-adherence.

For construct validity, the exploratory factor analysis (EFA) method of principal axis factoring (PAF) with oblique rotation was performed. Construct validity is to what extent the trait or theory of the phenomenon/concept that the instrument is intended to measure is measured (Bolarinwa, 2015).

For test/retest reliability (consistency across time), the intraclass correlation coefficient (ICC) was calculated for a test/retest group using the survey tool online.

Standard error of measurement (SEm) was calculated using the following formula (Portney and Watkins, 2015): SEm = SD_test × √(1 − ICC), where SD_test is the standard deviation of the test.
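As an illustration, the SEm calculation can be sketched as follows (the numeric values are illustrative, not the study’s raw data):

```python
import math

def standard_error_of_measurement(sd_test: float, icc: float) -> float:
    """SEm = SD_test * sqrt(1 - ICC) (Portney and Watkins, 2015)."""
    return sd_test * math.sqrt(1.0 - icc)

# Illustrative values: an ICC of 0.89 combined with a test SD of
# about 3.35 yields an SEm of about 1.11.
sem = standard_error_of_measurement(sd_test=3.35, icc=0.89)
```

Note that a perfectly reliable instrument (ICC = 1) gives SEm = 0, i.e., no measurement error.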

2.4 Measurement of adherence and cut-off score

For each survey tool item, the respondent was asked the following question: “How often do you not follow the recommendations from your doctor regarding the use of your medication because of [item]?” For measurement of the adherence score, string values were converted to numeric values: “very often” = 3, “often” = 2, “sometimes” = 1, and “rarely/never” = 0, making the total minimum adherence score 0 and the maximum adherence score 111.
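A minimal sketch of this string-to-numeric conversion and total score (response strings as described above; the function name is ours, for illustration only):

```python
# Mapping of Likert responses to numeric values, as described above.
LIKERT_SCORES = {
    "very often": 3,
    "often": 2,
    "sometimes": 1,
    "rarely/never": 0,
}

def adherence_score(responses):
    """Total adherence score: sum of item scores (0-111 for 37 items)."""
    return sum(LIKERT_SCORES[r] for r in responses)
```

A respondent answering “rarely/never” to all 37 items scores 0; answering “very often” to all items scores 111.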

In order to identify whether the calculated adherence score relates to what the patients believe about their overall adherence, a self-reported adherence question was added to the questionnaire: “In total, to what extent do you believe you follow the recommendations from your doctor regarding the use of your medication?” For this anchor question, respondents were to score on a 4-point Likert scale. String values were converted into numeric values for measurement of the score: “to a very limited extent” = 4, “to a limited extent” = 3, “to a large extent” = 2, and “to a very large extent” = 1. Thus, poor adherence gives a higher score, which is in line with the calculated adherence score.

Given a significant correlation, a receiver operating characteristic (ROC) curve was to be made to find the statistical cut-off score for adherence. The ROC curve is a graphical plot illustrating the sensitivity (true positive rate) against the 1-specificity (false positive rate) for various threshold settings—here, the threshold settings being the adherence scores. In order to make the ROC curve, the anchor question scores were dichotomized into whether patients believe they follow the recommendations or not: “to a large extent” and “to a very large extent” = following recommendations = 0, “to a limited extent” and “to a very limited extent” = not following recommendations = 1.

Based on the ROC curve, the Liu method was to be used to calculate the empirical optimal cut point by maximizing the product of the sensitivity and specificity. The empirical optimal cut point would be the statistical cut-off score between good adherence and poor adherence.
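The Liu-style search for the empirical optimal cut point can be sketched from scratch with toy data (the study itself used Stata’s implementation; the data and function name below are illustrative):

```python
import numpy as np

def liu_optimal_cut(scores, labels):
    """Return the threshold t maximizing sensitivity(t) * specificity(t),
    where an adherence score >= t predicts the positive class
    (here: not following recommendations, per the dichotomized anchor)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_prod = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels == 1])   # true positive rate
        spec = np.mean(~pred[labels == 0])  # true negative rate
        if sens * spec > best_prod:
            best_t, best_prod = t, sens * spec
    return best_t

# Toy data: label 0 = believes they follow recommendations, 1 = does not.
toy_scores = [1, 2, 3, 4, 8, 9, 10, 11]
toy_labels = [0, 0, 0, 0, 1, 1, 1, 1]
```

For this perfectly separable toy sample, the cut point lands at 8, where both sensitivity and specificity equal 1.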

All data were analyzed by SPSS Statistics (RRID:SCR_016479) version 27. Empirical optimal cut point was calculated in Stata (RRID:SCR_012763) version 17. The chosen significance level alpha was 0.05.

3 Results

3.1 Feasibility

Data from three pilots were used for feasibility. The respondents were recruited by three different student groups at Oslo Metropolitan University (OsloMet), and the data were collected in 2021. The three pilots gave complete data from 39 (12 + 15 + 12) online respondents. The respondents first completed the survey tool online and were afterward interviewed by the researchers for feedback on the availability and usability of the online solution, the time taken to answer, and the clarity of the questions, as well as for suggestions for causes of non-adherence that were not already included. In general, the tested survey tool was feasible, but some feedback was given, especially on the length of some of the items (questions).

The developed survey tool was included in a questionnaire together with sociodemographic and health-related questions. The final questionnaire showed an average responding time of about 10 min for the feasibility pilots.

Just under 80% of the 857 respondents in the survey population used less than 10 min to answer the questionnaire, and over 90% used less than 15 min. Time was measured from the opening of the survey to submitting the survey.

3.2 Content validity

Feedback on content validity was given for different adjusted versions of the survey tool via video calls and one-to-one meetings with the adherence expert team members until there were no more comments from the adherence expert team. Feedback on content from the feasibility pilots was consecutively included in the adjusted versions of the survey tool.

After the feasibility pilots and the content validation by the adherence expert team, the survey tool ended up containing 37 items connected to medication-taking behavior and barriers to adherence and beliefs associated with adherence.

3.3 Construct validity

Completed data from two pilots (n = 121) and the survey group (n = 737) were received, giving a total of 858 respondents. One respondent gave the maximum score on all 37 items, which was deemed unrealistic, and was thus removed. The calculations were conducted on data from 857 respondents, further referred to as the survey group. Data from the survey group were collected from January to March 2021. The pilot data were collected during the spring of 2021. The demographics of the respondents in the survey group are shown in Table 2.


TABLE 2. Demographics of the survey group and test–retest populations.

Pearson correlations were calculated to measure the strength of the linear relationships between the variables, as linear correlation is an assumption for factor analysis. Of the 1,332 correlations, 1,230 showed a significant (p ≤ 0.05) linear correlation.

Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was performed to see if the correlations between the variables were fit for factor analysis. KMO for all items in total was 0.89. A total of 30 items had KMO over 0.8, and seven items had KMO between 0.61 and 0.79 (see Table 3). Since the KMO measure for all of the items was over 0.6, the data were fit for factor analysis. This is supported by Bartlett’s test of sphericity being significant (p ≤ 0.05).
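A from-scratch sketch of the overall KMO statistic, computed here from a small synthetic correlation matrix rather than the study’s data (the function name and matrix are illustrative):

```python
import numpy as np

def kmo_overall(R):
    """Overall Kaiser-Meyer-Olkin measure from a correlation matrix R:
    KMO = sum(r_ij^2) / (sum(r_ij^2) + sum(p_ij^2)) over off-diagonal
    elements, where p_ij are anti-image partial correlations."""
    S = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    partial = -S / d                       # anti-image partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)  # off-diagonal mask
    r2 = np.sum(R[off] ** 2)
    p2 = np.sum(partial[off] ** 2)
    return r2 / (r2 + p2)

# Synthetic 4-item correlation matrix: two independent pairs of
# strongly correlated items. Here the partial correlations equal the
# raw ones, so KMO = 0.5.
R = np.array([[1.0, 0.8, 0.0, 0.0],
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.8],
              [0.0, 0.0, 0.8, 1.0]])
```

Values closer to 1 indicate that items share enough common variance for factor analysis; 0.6 is the commonly used lower bound applied in this study.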


TABLE 3. Kaiser-Meyer-Olkin (KMO) values for each survey item.

EFA was performed to find clusters of inter-correlated variables, so-called latent variables or factors. PAF with oblique (Oblimin) rotation extracted ten latent variables with eigenvalue >1, explaining a total of 61.7% of the variance (see Table 4). In factor analysis, an explained variance of more than 60% is considered acceptable for the construct to be valid (Hair, 2014). Table 5 shows the pattern matrix for the ten latent factors with 29 associated item loadings > +/− 0.4. The remaining eight of the 37 items did not show loadings > +/− 0.4. Rotation converged in 14 iterations.
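The eigenvalue > 1 extraction criterion can be illustrated on a synthetic correlation matrix (the oblique rotation itself is left to dedicated software; the matrix below is illustrative, not the study’s data):

```python
import numpy as np

def n_factors_kaiser(R):
    """Number of factors retained under the eigenvalue > 1 criterion,
    from a correlation matrix R."""
    eigenvalues = np.linalg.eigvalsh(R)  # R is symmetric
    return int(np.sum(eigenvalues > 1.0))

# Synthetic correlation matrix with two clear clusters of items:
# its eigenvalues are {1.7, 1.7, 0.3, 0.3}, so two factors are retained.
R = np.array([[1.0, 0.7, 0.0, 0.0],
              [0.7, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.7],
              [0.0, 0.0, 0.7, 1.0]])
```

Each eigenvalue above 1 means the corresponding factor explains more variance than a single standardized item would on its own.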


TABLE 4. Validation values for factors and items.


TABLE 5. Pattern matrix for PAF extraction, oblimin with Kaiser normalization rotation and loading > +/−0.4.

Factor 1 encompasses almost 25% of the total variance and includes four items, where two items describe fear of medication outcomes (adverse effects and non-tolerance) and two items describe lack of effect. See Table 4 for the percentages of variance. Factor 2 encompasses over 6% of the variance, containing three items regarding conditional practicalities like forgetting and difficulties taking the medication due to timing and/or specific instructions. Factor 3 encompasses 5.1% of the variance and includes the two items directly connected to pregnancy and breastfeeding. Factors 4–10 encompass variances between 4.8 and 2.8%. Factor 4 connects the information issues of not understanding what the doctor/pharmacy staff meant and forgetting how to use the medication. Factor 5 includes three items describing no need for medication, like feeling better, not feeling sick, and thinking that it does not matter whether the medication is used or not. The three items on Factor 6 involve shortage issues like having no medication left, lack of availability in the pharmacy, and financial reasons. The four items of Factor 7 are connected to wanting to avoid stigmatization: two items are about not wanting to be sick, where medication is a stigmatizing reminder, and two items are about feeling clever when taking less than prescribed and not wanting others to know about the medication. Factor 8 involves four lifestyle issues: ethical/religious reasons, preferring alternative treatments, being in principle against medication treatment, and belief that taking medication does not suit the lifestyle. Factor 9 connects the impact of being influenced by media, the internet, friends, family, and others to the difficulties of accessing a pharmacy. Finally, Factor 10 comprises two items regarding personal practicalities of handling the medication.

3.4 Reliability

3.4.1 Internal consistency

The data from the 857 respondents in the survey group used for construct validity were also used for internal consistency.

Cronbach’s α was calculated for internal consistency. The overall result for all 37 items demonstrated a very reliable internal consistency, with Cronbach’s α = 0.91 (see Table 4). Factors 1–5, 7, and 8 showed reliable internal consistency with Cronbach’s α between 0.72 and 0.86. Factor 6 showed low internal consistency with Cronbach’s α = 0.58, and Factors 9 and 10 showed poor internal consistency with Cronbach’s α = 0.51 and 0.48, respectively. Although Factors 6, 9, and 10 per se showed low/poor reliability, removal of any of the factor items had no particular impact on the overall Cronbach’s α of 0.91.
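Cronbach’s α can be computed from scratch as follows (toy data for illustration; the function name is ours):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: two perfectly correlated items give alpha = 1.
toy = [[1, 1], [2, 2], [3, 3]]
```

The statistic rises when items co-vary strongly (total variance dominates the sum of item variances), which is why a very high α can hint at redundant items.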

Exploratory factor analysis was chosen to explore latent variables, not to remove potentially redundant items. Eight of the items had loadings < +/− 0.4 and were thus not included in the factors. Removal of any of these items had no particular impact on the overall Cronbach’s α of 0.91.

The corrected item–total correlation values for the items indicate overall good discrimination between all 37 items and between the items in each factor as all values exceeded 0.2 (See Table 4).

3.4.2 Test/retest reliability

Data were collected during the first half of 2022, with 14 days between publishing the web link for the test and the retest.

A total of 47 respondents answered the test, and 22 of these answered the retest. Two were removed because their test and retest responses were too close together (<7 days), leaving 20 respondents and a response rate of 42.5%. The 20 respondents answered the test and the retest with a median interval of 13 days (range: 8–24 days).

The average measure was ICC = 0.89 and SEm = 1.11, both indicating good reliability (Matheson, 2019). ICC was calculated using a two-way random model and absolute agreement, and SEm using the test standard deviation (SD).

3.4.3 Measurement of adherence and cut-off score

Data from three of the 857 respondents were excluded as they answered “Do not know/not applicable/do not want to answer” to the anchor question, leaving n = 854. The linear regression analysis of the anchor question against the adherence scores showed a significant correlation (p ≤ 0.05) between the two measures of adherence, with an acceptable R-squared = 0.24.

The dichotomization of the anchor question into whether the patients believe they follow the recommendations or not resulted in n = 820 for the group that believes they follow (values for “to a large extent” and “to a very large extent”) and n = 34 for the group that does not believe they follow (values for “to a limited extent” and “to a very limited extent”). The ROC curve based on this dichotomization of the anchor question is shown in Figure 1. The area under the curve (AUC) shows a significant (p ≤ 0.05) high classification accuracy value of 0.86. The empirical optimal cut point for the adherence score scale was 10 (sensitivity = 0.82, specificity = 0.79, and AUC = 0.81), leaving the statistical cut-off score for adherence at 10.


FIGURE 1. ROC curve for the anchor question versus the adherence score. The ROC curve was produced in SPSS.

4 Discussion

This study was conducted to develop a survey tool that measures adherence and quantifies causes of non-adherence independently of patients’ medication type and/or medical diagnosis and to evaluate the psychometric properties and factor structure of the survey tool. As mentioned in Section 1, it has proven difficult to find a comprehensive survey tool that focuses on both medication-taking behavior and barriers to adherence and beliefs associated with adherence. Assessing behavior, barriers, and beliefs is imperative when tailoring interventions for non-adherence and is the main rationale for developing this survey tool.

4.1 Development and validation

The overall result for all 37 items of the survey tool demonstrated a very reliable internal consistency, with Cronbach’s α = 0.91. Cronbach’s α is sensitive to the number of items, and some literature suggests that α should not exceed 0.9. If α exceeds 0.9, it may suggest that some items test the same thing from different angles and should be removed (Tavakol and Dennick, 2011). In our study, the α is approximately 0.9, the removal of any items had no particular impact on the overall α, and the corrected item–total correlation values for all 37 items indicated good discrimination. When quantifying causes of non-adherence, it is important to cover all well-known issues, and the calculations on internal consistency support keeping all 37 items.

EFA was chosen for construct validity to explore underlying factor structures. PAF extracted ten latent factors with eigenvalue >1. Most of the latent factor dimensions are well-known and showed reliable internal consistency: conditional practicalities (Factor 2), being pregnant/breastfeeding (Factor 3), needlessness for medication (Factor 5), wanting to avoid stigmatization (Factor 7), and lifestyle issues (Factor 8). However, the latent dimension of medication fear combined with lack of effect (Factor 1) was interesting and should be further investigated. It is also interesting to note that it is not necessarily lack of information on how to use medication that makes people forget how to use it, but rather that they do not understand the explanations from the doctor or pharmacy staff, i.e., information issues (Factor 4). Shortage (Factor 6) showed low internal consistency even though the combination of issues could be expected, and removal of any of the three items did not improve the α. The impact issue (Factor 9), which combines being influenced by media, the internet, friends, family, and/or others with difficulties in accessing a pharmacy, was unforeseen, and its poor internal consistency was to be expected. The personal practicalities combination (Factor 10) also showed poor internal consistency even though the combination was expected. This could be explained by the low number of respondents choosing options other than “rarely/never” for these two items (56 and 28, respectively).

The survey tool items are divided into the five WHO dimensions (Sabaté, 2003): patient-related factors, therapy-related factors, social/economic factors, condition-related factors, and healthcare team and system-related factors. There were, however, some difficulties in distributing the 37 items among the five dimensions, as several of the items could fit into more than one dimension. Replacing the WHO dimensions with the latent variable dimensions from the performed EFA would be interesting to investigate further.

The average measures of ICC and SEm both indicated good test/retest reliability. The 20 respondents replied to the test and retest with an interval of 8–24 days and a median interval of 13 days. In the literature, there is a wide range of administration intervals used in test/retesting, depending, e.g., upon the stability of the condition involved and the complexity of the patient-reported outcome (Quadri et al., 2013). For this study, the respondents’ medical condition could change over time, so the time frame should not be too long. The interval should, however, be long enough that respondents do not remember their test answers when taking the retest. It was thus decided to analyze the respondents who had replied between 1 and 4 weeks apart. Although the average measures of ICC and SEm showed good test/retest reliability, the sample size of 20 might be a bit low (Terwee et al., 2012).

4.2 Measurement of adherence and cut-off score

The survey tool aims to measure adherence. For every item, the respondent scores “very often,” “often,” “sometimes,” or “rarely/never” on the question “How often do you not follow the recommendations from your doctor regarding the use of your medication because of [item]?” Every item is weighted equally, as the clinical outcome of non-adherence is the same; i.e., if the respondent scores “very often,” it does not matter whether very often not taking the medication is due to forgetting it or to being influenced by others. Not every item is relevant for everyone, however, e.g., the items regarding pregnancy and breastfeeding. The scores are therefore converted from strings to numeric values, and adherence is measured by the total numeric adherence score.

Clinically, it would be considered poor adherence if a patient “often” (2 points) or “very often” (3 points) does not follow the recommendations for a single reason, and it could also be considered poor adherence if a patient “sometimes” (1 point) does not follow the recommendations for several reasons. This indicates that an adherence score ≥2 could be considered poor adherence, whereas a score of 0 or 1 could be considered good adherence.
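The scoring and the clinical cut-off described above can be sketched as follows. The point mapping and the ≥2 rule come from the text; the example responses are hypothetical.

```python
# Response options from the survey tool, mapped to points.
LIKERT_POINTS = {"rarely/never": 0, "sometimes": 1, "often": 2, "very often": 3}

def adherence_score(responses):
    """Total numeric adherence score: the sum of points over all answered items."""
    return sum(LIKERT_POINTS[r] for r in responses)

def is_poor_adherence(responses, cutoff=2):
    """Clinical rule from the text: a total score >= 2 indicates poor adherence."""
    return adherence_score(responses) >= cutoff
```

Note how both routes to the threshold behave the same: one “often” answer (2 points) and two “sometimes” answers (1 + 1 points) each classify as poor adherence, while a single “sometimes” does not.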

The correlation between the adherence score and the anchor question “In total, to what extent do you believe you follow the recommendations from your doctor regarding the use of your medication?” was significant (p ≤ 0.05), and the AUC of the ROC curve showed high classification accuracy. If one considers the anchor question to be the truth (or the respondent’s claimed truth), this demonstrates that the adherence score is a good measure of the degree of adherence. The statistical cut-off score for adherence was calculated from the ROC curve to be 10. Even though the anchor question and the adherence score correlated significantly, the statistically calculated cut-off score could not be used clinically. Respondents scoring between the clinical cut-off of 2 and the statistically calculated cut-off of 10 believed they were following the doctor’s recommendations although, in fact, they were not, showing an overestimation of adherence. This supports the knowledge that self-reporting is subject to social-desirability bias (Kimberlin and Winterstein, 2008; Stirratt et al., 2015).
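The ROC analysis can be illustrated with a small sketch: AUC via the Mann–Whitney U statistic, and a cut-off search. The methods section names the “Lui method”; assuming this refers to Liu’s (2012) optimal cut-point criterion, which maximizes the product of sensitivity and specificity, it can be implemented as below. The toy data are hypothetical, not the study’s data.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen poorly adherent respondent (label 1) outscores an adherent one (label 0)."""
    s, y = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def liu_cutoff(scores, labels):
    """Assumed Liu criterion: the cut-off maximizing sensitivity * specificity,
    classifying score >= cut-off as poor adherence."""
    s, y = np.asarray(scores, float), np.asarray(labels, int)
    best_c, best_prod = None, -1.0
    for c in np.unique(s):
        pred = s >= c
        sens = (pred & (y == 1)).sum() / (y == 1).sum()
        spec = (~pred & (y == 0)).sum() / (y == 0).sum()
        if sens * spec > best_prod:
            best_prod, best_c = sens * spec, c
    return float(best_c)
```

With perfectly separated toy data the AUC is 1.0 and the cut-off falls at the lowest score of the poorly adherent group.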

4.3 Limitations

This study used a 4-point Likert rating scale for both the adherence score questions and the anchor question. Despite much research, no agreement has been reached on the optimal number of Likert response categories for maximizing a scale’s psychometric properties (Chang, 1994; Xu and Leung, 2018; Taherdoost, 2019). The 4-point Likert scale is a forced scale because it lacks a neutral option; it was chosen to force the respondent to form an opinion on each item. An even-numbered scale with more points could have been chosen, but this could exceed respondents’ ability to discriminate and create indistinct measurements. It has, however, been indicated that a 4-point scale can have higher skewness and lower loadings than scales with more points (Xu and Leung, 2018).

Self-reporting is subject to social-desirability bias (Kimberlin and Winterstein, 2008; Stirratt et al., 2015), meaning that respondents answer in a way that presents them favorably in the eyes of others, which does not necessarily reflect reality. For each survey tool item, the respondent was asked: “How often do you not follow the recommendations from your doctor regarding the use of your medication because of [item]?” This phrasing was chosen to relieve the respondent of shame about not adhering to medication by demonstrating various known causes of non-adherence, thereby encouraging more honest scoring.

The performed validations do not include concurrent validity. Owing to structural differences in sampling strategy, sample size, and population, correlating measures across studies can be challenging (Garfield et al., 2011). However, concurrent validity should be investigated further when assessing findings after the use of this new survey tool.

For the content validation, the adherence expert team did not use any rating scale, making the content validation process less well documented and precluding calculation of a content validity index (CVI).

Recruitment was done via Facebook in an attempt to reach many respondents. A systematic review from 2017 (Whitaker et al., 2017) reports growing evidence that Facebook is a useful recruitment tool for health research owing to, e.g., shorter recruitment periods and easier access to hard-to-reach demographic groups. One limitation, however, is internet accessibility: seniors aged 65+ are the smallest demographic group on Facebook (only 4.8%) (OMNICORE, 2022). The age distribution in our study (see Table 2) reflects this and may indicate age bias.

Another bias is that females are more likely to respond to surveys (Smith, 2008). This also applies to our study, as 90.4% of the respondents were female (see Table 2), even though Facebook is used by more males (56%) than females (44%) (OMNICORE, 2022).

There is also a bias in that more educated people are more likely to participate in surveys than less educated people (Smith, 2008). In addition, the survey tool was piloted and validated in the Norwegian language only. In our study, 10.6% of the respondents had education below upper secondary level, and 35.8% had higher education (see Table 2). Norwegian statistics from 2020 show that 24.8% of the population have education below upper secondary level and 35.3% have higher education (SSB, 2020), demonstrating that our respondents, in total, were more educated than the general population in Norway.

It was not possible to calculate the response rate for the construct validity sample. Participants were recruited via Facebook groups, so it is not known how many group members actually saw the invitation, nor how many were eligible for the questionnaire (over 18 years and using medication, or having used medication within the last 12 months).

The survey tool contains three double-barreled questions: “you do not want to be sick and taking medication is a reminder of this,” “you are feeling stigmatized or made sick by having to use medication,” and “you feel medications are harmful, toxic and/or you do not tolerate them.” To avoid misconceptions, in future versions these should be changed to the following: “taking medication is a reminder of being sick,” “you are feeling stigmatized by having to use medication,” and “you feel medications are doing you more harm than good.”

The validated survey tool is named OMAS-37 (OsloMet Adherence to medication Survey tool, 37 items).

Conclusion

This study describes the development and validation of OMAS-37 (OsloMet Adherence to medication Survey tool, 37 items), a self-report adherence survey tool that quantifies the causes of non-adherence and measures adherence. The OMAS-37 proved to be a valid and reliable instrument. It is, to our knowledge, the first non-disease-specific adherence instrument developed to assess self-reported medication-taking behavior, barriers, and beliefs. Further studies will examine the tool’s ability to measure the adherence-enhancing effect of interventions.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author contributions

RL, LH, CJ, and TK contributed substantially to the design and conception of the work. LH was involved in the data collection. RL, AP, and LH performed the analysis, and all authors were involved in the interpretation of the data. AP was the responsible statistician. RL drafted the article, and LH, AP, CJ, and TK critically revised the article. The final version of the manuscript was approved by all the authors, and all authors agree to be accountable for all aspects of the work.

Funding

The PhD project of RL was funded through a scholarship from the Ministry of Education and Research to OsloMet. The contributions from the other authors were funded as a part of their R&D positions at Oslo Metropolitan University.

Acknowledgments

The authors are grateful to the students at OsloMet who were involved in the data collection, especially master’s student Ala Alsammarraie.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Bolarinwa, O. (2015). Principles and methods of validity and reliability testing of questionnaires used in social and health science researches. Niger. Postgrad. Med. J. 22, 195–201. doi:10.4103/1117-1936.173959

Chan, A., Van Dijk, L., Horne, R., Vervloet, M., and Brabers, A. (2021). Development and validation of a self-report measure of practical barriers to medication adherence: The medication practical barriers to adherence questionnaire (MPRAQ). Br. J. Clin. Pharmacol. 87, 4197–4211. doi:10.1111/bcp.14744

Chang, L. (1994). A psychometric evaluation of 4-point and 6-point likert-type scales in relation to reliability and validity. Appl. Psychol. Meas. 18, 205–215. doi:10.1177/014662169401800302

Cutler, R. L., Fernandez-Llimos, F., Frommer, M., Benrimoj, C., and Garcia-Cardenas, V. (2018). Economic impact of medication non-adherence by disease groups: A systematic review. BMJ Open 8, e016982. doi:10.1136/bmjopen-2017-016982

Eberhard-Gran, M., and Winther, C. (2017). Spørreskjema som metode : For helsefagene. Oslo: Universitetsforl.

García De Yébenes Prous, M. J., Rodríguez Salvanés, F., and Carmona Ortells, L. (2009). Validation of questionnaires. Reumatol. Clin. 5, 171–177. doi:10.1016/s2173-5743(09)70115-7

Garfield, S., Clifford, S., Eliasson, L., Barber, N., and Willson, A. (2011). Suitability of measures of self-reported medication adherence for routine clinical use: A systematic review. BMC Med. Res. Methodol. 11, 149. doi:10.1186/1471-2288-11-149

Gast, A., and Mathes, T. (2019). Medication adherence influencing factors - an (updated) overview of systematic reviews. Syst. Rev. 8, 112. doi:10.1186/s13643-019-1014-8

Gonzalez, J. S., and Schneider, H. E. (2011). Methodological issues in the assessment of diabetes treatment adherence. Curr. Diab. Rep. 11, 472–479. doi:10.1007/s11892-011-0229-4

Hair, J. F. (2014). Multivariate data analysis. Harlow: Pearson.

Health System Efficiency (2021). OECD website. Paris, France: OECD.

Holbrook, A. M., Wang, M., Lee, M., Chen, Z., Garcia, M., Nguyen, L., et al. (2021). Cost-related medication nonadherence in Canada: A systematic review of prevalence, predictors, and clinical impact. Syst. Rev. 10, 11. doi:10.1186/s13643-020-01558-5

Horne, R., Cooper, V., Wileman, V., and Chan, A. (2019). Supporting adherence to medicines for long-term conditions: A perceptions and practicalities approach based on an extended common-sense model. Eur. Psychol. 24, 82–96. doi:10.1027/1016-9040/a000353

Hugtenburg, J. G., Timmers, L., Elders, P. J. M., Vervloet, M., and Van Dijk, L. (2013). Definitions, variants, and causes of nonadherence with medication: A challenge for tailored interventions. Patient prefer. Adherence 7, 675–682. doi:10.2147/PPA.S29549

Hall, K. S., White, K. O., Reame, N., and Westhoff, C. (2010). Studying the use of oral contraception: A review of measurement approaches. J. Womens Health 19, 2203–2210. doi:10.1089/jwh.2010.1963

Khan, R., and Socha-Dietrich, K. (2018). Investing in medication adherence improves health outcomes and health system efficiency. Paris, France: OECD.

Kimberlin, C. L., and Winterstein, A. G. (2008). Validity and reliability of measurement instruments used in research. Am. J. Health. Syst. Pharm. 65, 2276–2284. doi:10.2146/ajhp070364

Lu, Z. K., Xiong, X., Brown, J., Horras, A., Yuan, J., and Li, M. (2021). Impact of cost-related medication nonadherence on economic burdens, productivity loss, and functional abilities: Management of cancer survivors in medicare. Front. Pharmacol. 12, 706289. doi:10.3389/fphar.2021.706289

Majeed, A., Rehman, M., Hussain, I., Imran, I., Saleem, M. U., Saeed, H., et al. (2021). The impact of treatment adherence on quality of life among type 2 diabetes mellitus patients – findings from a cross-sectional study. Patient prefer. Adherence 15, 475–481. doi:10.2147/PPA.S295012

Matheson, G. J. (2019). We need to talk about reliability: Making better use of test-retest studies for study design and interpretation. PeerJ 5, 6918. doi:10.7717/peerj.6918

Mcneish, D. (2018). Thanks coefficient alpha, we'll take it from here. Psychol. Methods 23, 412–433. doi:10.1037/met0000144

Nettskjema (2022). Nettskjema, University of Oslo. Available: https://nettskjema.no/?lang=en (Accessed June 20, 2022).

Nguyen, T. M. U., Caze, A. L., and Cottrell, N. (2013). What are validated self-report adherence scales really measuring?: A systematic review. Br. J. Clin. Pharmacol. 77, 427–445. doi:10.1111/bcp.12194

Nieuwlaat, R., Wilczynski, N., Navarro, T., Hobson, N., Jeffery, R., Keepanasseril, A., et al. (2014). Interventions for enhancing medication adherence. Cochrane Database Syst. Rev. 11, CD000011. doi:10.1002/14651858.CD000011.pub4

Nymoen, L. D., Björk, M., Flatebø, T. E., Nilsen, M., Godø, A., Øie, E., et al. (2022). Drug-related emergency department visits: Prevalence and risk factors. Intern. Emerg. Med. 17, 1453–1462. doi:10.1007/s11739-022-02935-9

Omnicore (2022). 63 Facebook statistics you need to know in 2022. Available: https://www.omnicoreagency.com/facebook-statistics/ (Accessed June 22, 2022).

Paschal, A. M., Hawley, S. R., Romain, T. S., and Ablah, E. (2008). Measures of adherence to epilepsy treatment: Review of present practices and recommendations for future directions. Epilepsia 49, 1115–1122. doi:10.1111/j.1528-1167.2008.01645.x

Portney, L. G., and Watkins, M. P. (2015). Foundations of clinical research : Applications to practice. Philadelphia, Pa: F.A. Davis Company, 913.

Quadri, N., Wild, D., Skerritt, B., Muehlhausen, W., and O'Donohoe, P. (2013). A literature review of the variance in interval length between administrations for assessment of test retest reliability and equivalence of pro measures. Value health 16, A40–A41. doi:10.1016/j.jval.2013.03.230

Report No. 18 to the Storting (2004–2005). On course towards more correct use of medicine. Norway: white paper from Norwegian Ministry of Health and Care Services.

Robson, C. (2002). Real world research : A resource for social scientists and practitioner-researchers. Oxford: Blackwell.

Sabaté, E. (2003). Adherence to long-term therapies: Evidence for action - WHO. Available at: http://apps.who.int/iris/bitstream/handle/10665/42682/9241545992.pdf;jsessionid=95AAD45B8B7157327FC83AF158C7BB87?sequence=1 (Accessed June 20, 2022).

Simoni, J. M., Kurth, A. E., Pearson, C. R., Pantalone, D. W., Merrill, J. O., and Frick, P. A. (2006). Self-report measures of antiretroviral Therapy adherence: A review with recommendations for HIV research and clinical management. AIDS Behav. 10, 227–245. doi:10.1007/s10461-006-9078-6

Smith, W. G. (2008). Does gender influence online survey participation?: A record-linkage analysis of university faculty online survey response behavior. Available: https://www.researchgate.net/publication/234742407_Does_Gender_Influence_Online_Survey_Participation_A_Record-Linkage_Analysis_of_University_Faculty_Online_Survey_Response_Behavior (Accessed June 1, 2022).

Sokol, M. C., Mcguigan, K. A., Verbrugge, R. R., and Epstein, R. S. (2005). Impact of medication adherence on hospitalization risk and healthcare cost. Med. Care 43, 521–530. doi:10.1097/01.mlr.0000163641.86870.af

SSB (2020). Educational attainment of the population. Available: https://www.ssb.no/en/utdanning/utdanningsniva/statistikk/befolkningens-utdanningsniva (Accessed June 1, 2022).

Stirratt, M. J., Dunbar-Jacob, J., Crane, H. M., Simoni, J. M., Czajkowski, S., Hilliard, M. E., et al. (2015). Self-report measures of medication adherence behavior: Recommendations on optimal use. Transl. Behav. Med. 5, 470–482. doi:10.1007/s13142-015-0315-2

Taherdoost, H. (2019). What is the best response scale for survey and questionnaire design; review of different lengths of rating scale/attitude scale/Likert scale. Int. J. Acad. Res. Manag. (IJARM) 8, 1–10.

Tavakol, M., and Dennick, R. (2011). Making sense of Cronbach's alpha. Int. J. Med. Educ. 2, 53–55. doi:10.5116/ijme.4dfb.8dfd

Terwee, C. B., Mokkink, L. B., Knol, D. L., Ostelo, R. W. J. G., Bouter, L. M., and De Vet, H. C. W. (2012). Rating the methodological quality in systematic reviews of studies on measurement properties: A scoring system for the COSMIN checklist. Qual. Life Res. 21, 651–657. doi:10.1007/s11136-011-9960-1

Velligan, D. I., Lam, Y.-W. F., Glahn, D. C., Barrett, J. A., Maples, N. J., Ereshefsky, L., et al. (2006). Defining and assessing adherence to oral antipsychotics: A review of the literature. Schizophr. Bull. 32, 724–742. doi:10.1093/schbul/sbj075

Vrijens, B., De Geest, S., Hughes, D. A., Przemyslaw, K., Demonceau, J., Ruppar, T., et al. (2012). A new taxonomy for describing and defining adherence to medications. Br. J. Clin. Pharmacol. 73, 691–705. doi:10.1111/j.1365-2125.2012.04167.x

Whitaker, C., Stevelink, S., and Fear, N. (2017). The use of Facebook in recruiting participants for health research purposes: A systematic review. J. Med. Internet Res. 19, e290. doi:10.2196/jmir.7071

Xu, M. L., and Leung, S. O. (2018). Effects of varying numbers of Likert scale points on factor structure of the Rosenberg Self-Esteem Scale. Asian J. Soc. Psychol. 21, 119–128. doi:10.1111/ajsp.12214

Keywords: non-adherence, measure adherence, assess adherence, patient compliance, reliability, OMAS-37, factor analysis, questionnaire

Citation: Larsen RE, Pripp AH, Krogstad T, Johannessen Landmark C and Holm LB (2022) Development and validation of a new non-disease-specific survey tool to assess self-reported adherence to medication. Front. Pharmacol. 13:981368. doi: 10.3389/fphar.2022.981368

Received: 29 June 2022; Accepted: 16 November 2022;
Published: 07 December 2022.

Edited by:

Bjorn Wettermark, Uppsala University, Sweden

Reviewed by:

Ivana Tadic, University of Belgrade, Serbia
Sina Hafizi, University of Manitoba, Canada

Copyright © 2022 Larsen, Pripp, Krogstad, Johannessen Landmark and Holm. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rønnaug Eline Larsen, ronnaugl@oslomet.no
