

      Public Concern About Monitoring Twitter Users and Their Conversations to Recruit for Clinical Trials: Survey Study

      research-article


          Abstract

          Background

          Social networks such as Twitter offer the clinical research community a novel opportunity for engaging potential study participants based on user activity data. However, the availability of public social media data has led to new ethical challenges about respecting user privacy and the appropriateness of monitoring social media for clinical trial recruitment. Researchers have voiced the need for involving users’ perspectives in the development of ethical norms and regulations.

          Objective

          This study examined the attitudes and level of concern among Twitter users and nonusers about monitoring Twitter users and their conversations to recruit potential clinical trial participants.

          Methods

          We used two online methods to recruit study participants: the open survey was (1) advertised on Twitter and (2) deployed on TurkPrime, a crowdsourcing data acquisition platform, both between May 23 and June 8, 2017. Eligible participants were adults, 18 years of age or older, who lived in the United States. People with and without Twitter accounts were included in the study.

          Results

          While nearly half of the respondents, recruited on Twitter (94/603, 15.6%) and on TurkPrime (509/603, 84.4%), indicated agreement that social media monitoring constitutes a form of eavesdropping that invades their privacy, over one-third disagreed and nearly 1 in 5 had no opinion. A chi-square test revealed a positive relationship between respondents' general privacy concern and their average concern about Internet research (P<.005). We found associations between respondents' Twitter literacy and their concerns about the ability of researchers to monitor their Twitter activity for clinical trial recruitment (P=.001), about whether they consider such monitoring to be eavesdropping (P<.001), and about whether they consider it an invasion of privacy (P=.003). As Twitter literacy increased, so did people's concerns about researchers monitoring Twitter activity. Our data support the previously suggested use of a nonexceptionalist methodology for assessing social media in research, insofar as social media-based recruitment does not need to be considered exceptional and, for most respondents, it is considered preferable to traditional in-person recruitment at physical clinics. The expressed attitudes were highly contextual, depending on factors such as the type of disease or health topic (eg, HIV/AIDS vs obesity vs smoking), the entity or person monitoring users on Twitter, and the information being monitored.
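          The associations above are chi-square tests of independence on categorical survey responses. As a rough illustration only (a minimal sketch with hypothetical counts, not the study's data, assuming SciPy is available), such a test can be run on a contingency table like this:

```python
# Minimal sketch: chi-square test of independence between two categorical
# survey responses, e.g., general privacy concern (rows) vs concern about
# Internet research (columns). Counts are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

observed = [
    [40, 25, 10],   # low general privacy concern
    [20, 55, 30],   # medium general privacy concern
    [ 8, 35, 70],   # high general privacy concern
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.4f}")
```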

          Conclusions

          The data and findings from this study contribute to the critical dialogue with the public about the use of social media in clinical research. The findings suggest that most users do not consider monitoring Twitter for clinical trial recruitment to be inappropriate surveillance or a violation of privacy. However, researchers should remain mindful that some participants might find social media monitoring problematic when it is connected with certain conditions or health topics. Further research should isolate the factors that influence the level of concern among social media users across platforms and populations and inform the development of clearer and more consistent guidelines.

          Related collections

          Most cited references (30)


          Strategies to improve recruitment to randomised trials

Recruiting participants to trials can be extremely difficult. Identifying strategies that improve trial recruitment would benefit both trialists and health research. The objective was to quantify the effects of strategies for improving recruitment of participants to randomised trials; a secondary objective was to assess the evidence for the effect of the research setting (e.g. primary care versus secondary care) on recruitment.

We searched the Cochrane Methodology Review Group Specialised Register (CMR) in the Cochrane Library (July 2012, searched 11 February 2015); MEDLINE and MEDLINE In Process (OVID) (1946 to 10 February 2015); Embase (OVID) (1996 to 2015 Week 06); Science Citation Index & Social Science Citation Index (ISI) (2009 to 11 February 2015); and ERIC (EBSCO) (2009 to 11 February 2015). We included randomised and quasi-randomised trials of methods to increase recruitment to randomised trials, including non-healthcare studies and studies recruiting to hypothetical trials. We excluded studies aiming to increase response rates to questionnaires or trial retention and those evaluating incentives and disincentives for clinicians to recruit participants. We extracted data on: the method evaluated; country in which the study was carried out; nature of the population; nature of the study setting; nature of the study to be recruited into; randomisation or quasi-randomisation method; and numbers and proportions in each intervention group. We used a risk difference to estimate the absolute improvement and the 95% confidence interval (CI) to describe the effect in individual trials. We assessed heterogeneity between trial results. We used GRADE to judge the certainty we had in the evidence coming from each comparison.

We identified 68 eligible trials (24 new to this update) with more than 74,000 participants. There were 63 studies involving interventions aimed directly at trial participants, while five evaluated interventions aimed at people recruiting participants. All studies were in health care. We found 72 comparisons, but just three are supported by high-certainty evidence according to GRADE:

1. Open trials rather than blinded, placebo trials. The absolute improvement was 10% (95% CI 7% to 13%).
2. Telephone reminders to people who do not respond to a postal invitation. The absolute improvement was 6% (95% CI 3% to 9%). This result applies to trials that have low underlying recruitment. We are less certain for trials that start out with moderately good recruitment (i.e. over 10%).
3. Using a particular, bespoke, user-testing approach to develop participant information leaflets. This method involved spending a lot of time working with the target population for recruitment to decide on the content, format and appearance of the participant information leaflet. This made little or no difference to recruitment: the absolute improvement was 1% (95% CI −1% to 3%).

We had moderate-certainty evidence for eight other comparisons; our confidence was reduced for most of these because the results came from a single study. Three of the methods were changes to trial management, three were changes to how potential participants received information, one was aimed at recruiters, and the last was a test of financial incentives. All of these comparisons would benefit from other researchers replicating the evaluation. There were no evaluations in paediatric trials. We had much less confidence in the other 61 comparisons because the studies had design flaws, were single studies, had very uncertain results or were hypothetical (mock) trials rather than real ones.

The literature on interventions to improve recruitment to trials has plenty of variety but little depth. Only 3 of 72 comparisons are supported by high-certainty evidence according to GRADE: having an open trial and using telephone reminders to non-responders to postal invitations both increase recruitment; a specialised way of developing participant information leaflets had little or no effect. The methodology research community should improve the evidence base by replicating evaluations of existing strategies, rather than developing and testing new ones.

What improves trial recruitment?

Key messages: We had high-certainty evidence for three methods to improve recruitment, two of which are effective: (1) telling people what they are receiving in the trial rather than not telling them improves recruitment; (2) phoning people who do not respond to a postal invitation is also effective (although we are not certain this works as well in all trials); (3) using a tailored, user-testing approach to develop participant information leaflets makes little or no difference to recruitment. Of the 72 strategies tested, only 7 involved more than one study. We need more studies to understand whether they work or not.

Our question: We reviewed the evidence about the effect of things trial teams do to try to improve recruitment to their trials. We found 68 studies involving more than 74,000 people.

Background: Finding participants for trials can be difficult, and trial teams try many things to improve recruitment. It is important to know whether these actually work. Our review looked for studies that examined this question using chance to allocate people to different recruitment strategies because this is the fairest way of seeing if one approach is better than another.

Key results: We found 68 studies including 72 comparisons. We have high certainty in what we found for only three of these.

1. Telling people what they are receiving in the trial rather than not telling them improves recruitment. Our best estimate is that if 100 people were told what they were receiving in a randomised trial, and 100 people were not, 10 more would take part in the group who knew. There is some uncertainty though: it could be as few as 7 more per hundred, or as many as 13 more.
2. Phoning people who do not respond to a postal invitation to take part is also effective. Our best estimate is that if investigators called 100 people who did not respond to a postal invitation, and did not call 100 others, 6 more would take part in the trial among the group who received a call. However, this number could be as few as 3 more per hundred, or as many as 9 more.
3. Using a tailored, user-testing approach to develop participant information leaflets did not make much difference. The researchers who tested this method spent a lot of time working with people like those to be recruited to decide what should be in the participant information leaflet and what it should look like. Our best estimate is that if 100 people got the new leaflet, 1 more would take part in the trial compared to 100 who got the old leaflet. However, there is some uncertainty, and it could be 1 fewer (i.e. worse than the old leaflet) per hundred, or as many as 3 more.

We had moderate certainty in what we found for eight other comparisons; our confidence was reduced for most of these because the method had been tested in only one study. We had much less confidence in the other 61 comparisons because the studies had design flaws, were the only studies to look at a particular method, had a very uncertain result or were mock trials rather than real ones.

Study characteristics: The 68 included studies covered a very wide range of disease areas, including antenatal care, cancer, home safety, hypertension, podiatry, smoking cessation and surgery. Primary, secondary and community care were included. The size of the studies ranged from 15 to 14,467 participants. Studies came from 12 countries; there was also one multinational study involving 19 countries. The USA and UK dominated with 25 and 22 studies, respectively. The next largest contribution came from Australia with eight studies.

The small print: Our search updated our 2010 review and is current to February 2015. We also identified six studies published after 2015 outside the search. The review includes 24 mock trials where the researchers asked people about whether they would take part in an imaginary trial. We have not presented or discussed their results because it is hard to see how the findings relate to real trial decisions.
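The "absolute improvement" figures quoted in this review are risk differences between two recruitment arms, reported with 95% confidence intervals. As a rough sketch of that arithmetic (hypothetical counts, not the review's data; a Wald normal-approximation interval is assumed, which is only one of several ways such a CI can be computed, and the helper name is mine):

```python
# Minimal sketch: risk difference between two recruitment strategies with a
# Wald (normal-approximation) 95% CI. Counts are hypothetical, for illustration.
from math import sqrt

def risk_difference_ci(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    """Return (risk difference, CI lower bound, CI upper bound) for two proportions."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# e.g., 180/1000 recruited with telephone reminders vs 120/1000 without
rd, lo, hi = risk_difference_ci(180, 1000, 120, 1000)
print(f"absolute improvement = {rd:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```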

            Interventions to improve recruitment and retention in clinical trials: a survey and workshop to assess current practice and future priorities

            Background: Despite significant investment in infrastructure, many trials continue to face challenges in recruitment and retention. We argue that insufficient focus has been placed on the development and testing of recruitment and retention interventions.

            Methods: In this current paper, we summarize existing reviews about interventions to improve recruitment and retention. We report survey data from Clinical Trials Units in the United Kingdom to indicate the range of interventions used by these units to encourage recruitment and retention. We present the views of participants in a recent workshop and a priority list of recruitment interventions for evaluation (determined by voting among workshop participants). We also discuss wider issues concerning the testing of recruitment interventions.

            Results: Methods used to encourage recruitment and retention were categorized as: patient contact, patient convenience, support for recruiters, monitoring and systems, incentives, design, resources, and human factors. Interventions felt to merit investigation by respondents fell into three categories: training site staff, communication with patients, and incentives.

            Conclusions: Significant resources continue to be invested into clinical trials and other high-quality studies, but recruitment remains a significant challenge. Adoption of innovative methods to develop, test, and implement recruitment interventions is required.

            Electronic supplementary material: The online version of this article (doi:10.1186/1745-6215-15-399) contains supplementary material, which is available to authorized users.

              Stigma, HIV and health: a qualitative synthesis

              Background: HIV-related stigma continues to negatively impact the health and well-being of people living with HIV, with deleterious effects on their care, treatment and quality of life. A growing body of qualitative research has documented the relationship between HIV-related stigma and health. This review aims to synthesize qualitative evidence that explored the intersections of stigma and health for people with HIV.

              Methods: A thematic summary was conducted that was guided by the qualitative metasummary technique developed by Sandelowski and Barraso. Literature searches yielded 8,622 references, of which 55 qualitative studies were identified that illustrated HIV-related stigma in the context of health.

              Results: The metasummary classified qualitative findings into three overarching categories: conceptualizing stigma, which identified key dimensions of HIV-related stigma; experiencing stigma, which highlighted experiences of stigma in the health context; and managing stigma, which described ways in which stigma is avoided or addressed. To better illustrate these connections, the qualitative literature was summarized into the following themes: stigma within health care settings, the role of stigma in caring for one's health, and strategies to address HIV-related stigma in the health context. A number of health care practices were identified, some rooted in institutional practices, others shaped by personal perceptions held by practitioners, that could be stigmatizing or discriminatory towards people with HIV. There existed interconnections between enacted stigma and felt stigma that influenced health care utilization, treatment adherence, and overall health and well-being of people with HIV. Intersectional stigma also emerged as instrumental in the stigma experiences of people living with HIV. A number of strategies to address stigma were identified, including social support, education, self-efficacy, resilience activities, and advocacy.

              Conclusion: This review of the qualitative evidence indicates that HIV-related stigma within health contexts is a broad social phenomenon that manifests within multiple social spheres, including health care environments. Findings from this review indicate that future stigma research should consider the social structures and societal practices, within and outside of health care environments, that perpetuate and reinforce stigma and discrimination towards people with HIV.

              Electronic supplementary material: The online version of this article (doi:10.1186/s12889-015-2197-0) contains supplementary material, which is available to authorized users.

                Author and article information

                Contributors
                Journal
                J Med Internet Res
                J. Med. Internet Res
                JMIR
                Journal of Medical Internet Research
                JMIR Publications (Toronto, Canada)
                1439-4456
                1438-8871
                October 2019
                30 October 2019
                Volume 21, Issue 10: e15455
                Affiliations
                [1] Southern California Clinical and Translational Science Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
                [2] Institute for Health Promotion and Disease Prevention Research, Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
                [3] School of Information Studies, University of Wisconsin-Milwaukee, Milwaukee, WI, United States
                [4] Cedars-Sinai Medical Center, Los Angeles, CA, United States
                [5] Department of Computer Science, Marquette University, Milwaukee, WI, United States
                Author notes
                Corresponding Author: Katja Reuter katja.reuter@gmail.com
                Author information
                https://orcid.org/0000-0002-1559-9058
                https://orcid.org/0000-0002-3622-2858
                https://orcid.org/0000-0001-7022-7049
                https://orcid.org/0000-0002-5054-5362
                https://orcid.org/0000-0001-7472-822X
                https://orcid.org/0000-0003-4229-4847
                Article
                v21i10e15455
                DOI: 10.2196/15455
                PMCID: PMC6914244
                PMID: 31670698
                ©Katja Reuter, Yifan Zhu, Praveen Angyan, NamQuyen Le, Akil A Merchant, Michael Zimmer. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 30.10.2019.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

                History
                : 15 July 2019
                : 11 August 2019
                : 4 October 2019
                : 4 October 2019
                Categories
                Original Paper

                Medicine
                AIDS, cancer, clinical research, clinical trial, crowdsourcing, ethics, HIV, HPV, infoveillance, infodemiology, informed consent, internet, research ethics, Mechanical Turk, MTurk, monitoring, obesity, privacy, public opinion, recruitment, smoking, social media, social network, surveillance, TurkPrime, Twitter
