PROs in the wild: Assessing the validity of patient reported outcomes in an electronic registry

Federico Cabitza, Linda Greta Dui, Giuseppe Banfi

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Background and objectives: Collecting Patient-Reported Outcomes (PROs) is an important way to obtain first-hand information from patients on the outcomes of the treatments and surgical procedures they have undergone, and hence on the quality of the care provided. However, the quality of PRO data cannot be taken for granted, nor can it be reduced to timeliness and completeness alone. While the reliability of these data can be guaranteed by adopting standard, validated questionnaires used across health care facilities worldwide, the facilities themselves must take responsibility for assessing, monitoring and ensuring the validity of the PROs collected from their patients. Validity is affected by biases hidden in the collected data. This contribution therefore aims to measure bias in PRO data, given the impact these data can have on clinical research and post-marketing surveillance. Methods: We considered the main biases that can affect PRO validity: Response bias, in the forms of Acquiescence bias and Fatigue bias; and Non-Response bias. To assess Acquiescence bias, phone interviews and online surveys were compared, adjusted by age. To assess Fatigue bias, we proposed a specific item about session length and compared PRO scores stratified by the responses to this item. We also calculated intra-patient agreement by devising an intra-interview test-retest. To assess Non-Response bias, we took patients who participated after the saturation of the response-rate curve as a proxy for potential non-respondents and compared the outcomes in these two strata. All methods rely on common statistical techniques and are cost-effective for any facility collecting PRO data. Results: Regarding Acquiescence bias, scores differed significantly between patients reached by phone and those reached by email. Regarding Fatigue bias, stratification by perceived fatigue yielded contrasting results: a relevant difference was found in intra-patient agreement, and an increasing difference in average scores as a function of interview length (or completion time). Regarding Non-Response bias, we found non-significant differences in both scores and variance. Conclusions: In this paper, we present a set of cost-effective techniques to assess the validity of retrospective PRO data and share lessons learnt from their application at a large teaching hospital, specialized in musculoskeletal disorders, that collects PRO data in the follow-up phase of surgery performed therein. The main finding suggests that response bias can affect PRO validity. Further research on the effectiveness of simple and cost-effective solutions is necessary to mitigate these biases and improve the validity of PRO data.
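The strata comparisons described in the Methods (phone vs. online contact for Acquiescence bias, early vs. late responders as a Non-Response proxy) amount to testing whether two groups of scores differ. As an illustrative sketch only, not the paper's actual analysis, such a comparison can be run with a simple permutation test; the scores and sample sizes below are hypothetical.

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=5000, seed=0):
    """Two-sided permutation test for a difference in mean scores
    between two response strata (e.g. phone vs. online surveys,
    or early vs. late responders as a non-response proxy).
    Returns the fraction of shuffles whose mean difference is at
    least as extreme as the observed one (an approximate p-value)."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # break any association with the stratum labels
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical PRO scores (0-100 scale) for two contact modes
phone_scores = [82, 90, 88, 95, 85, 91, 87, 93]
online_scores = [70, 78, 75, 72, 80, 74, 77, 71]

p_value = permutation_test(phone_scores, online_scores)
```

A small p-value here would mirror the paper's Acquiescence-bias finding that scores differ by contact mode; a permutation test is used only because it needs no distributional assumptions and no libraries beyond the standard one.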

Original language: English
Journal: Computer Methods and Programs in Biomedicine
DOI: 10.1016/j.cmpb.2019.01.009
Publication status: Accepted/In press - Jan 1 2019

Keywords

  • Acquiescence bias
  • Fatigue bias
  • Medical registry
  • Non-Response bias
  • Patient reported outcomes
  • Response bias
  • Validity

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Health Informatics

Cite this

Cabitza, F., Dui, L. G., & Banfi, G. (2019). PROs in the wild: Assessing the validity of patient reported outcomes in an electronic registry. Computer Methods and Programs in Biomedicine. DOI: 10.1016/j.cmpb.2019.01.009. ISSN 0169-2607. Elsevier Ireland Ltd. Scopus record: SCOPUS:85060713559, http://www.scopus.com/inward/record.url?scp=85060713559&partnerID=8YFLogxK