
Reporting quality in abstracts of meta-analyses of depression screening tool accuracy: a review of systematic reviews and meta-analyses

Authors: Rice DB, Kloda LA, Shrier I, Thombs BD


Affiliations

1 Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada.
2 Department of Psychiatry, McGill University, Montréal, Québec, Canada.
3 Library, Concordia University, Montréal, Québec, Canada.
4 Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montréal, Québec, Canada.
5 Department of Psychology, McGill University, Montréal, Québec, Canada.
6 Department of Medicine, McGill University, Montréal, Québec, Canada.
7 Department of Educational and Counselling Psychology, McGill University, Montréal, Québec, Canada.
8 School of Nursing, McGill University, Montréal, Québec, Canada.

Description

Objective: Concerns have been raised regarding the quality and completeness of abstract reporting in evidence reviews, but this has not been evaluated in meta-analyses of diagnostic accuracy. Our objective was to evaluate the quality and completeness of reporting in abstracts of systematic reviews with meta-analyses of depression screening tool accuracy, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for Abstracts tool.

Design: Cross-sectional study.

Inclusion criteria: We searched MEDLINE and PsycINFO from 1 January 2005 through 13 March 2016 for systematic reviews with meta-analyses, in any language, that compared a depression screening tool to a diagnosis based on a clinical or validated diagnostic interview.

Data extraction: Two reviewers independently assessed the quality and completeness of abstract reporting using the PRISMA for Abstracts tool, with appropriate adaptations made for studies of diagnostic test accuracy. We also assessed bivariate associations between the number of PRISMA for Abstracts items complied with and (1) the journal abstract word limit and (2) A Measurement Tool to Assess Systematic Reviews (AMSTAR) scores of the included meta-analyses.

Results: We identified 21 eligible meta-analyses, of which only two complied with at least half of the adapted PRISMA for Abstracts items. The majority met criteria for reporting an appropriate title (95%), result interpretation (95%) and synthesis of results (76%). Meta-analyses less consistently reported databases searched (43%), associated search dates (33%) and strengths and limitations of evidence (19%). Most meta-analyses did not adequately report a clinically meaningful description of outcomes (14%), risk of bias (14%), included study characteristics (10%), study eligibility criteria (5%), registration information (5%), clear objectives (0%), report eligibility criteria (0%) or funding (0%). Overall meta-analysis quality (AMSTAR) scores were significantly associated with the number of PRISMA for Abstracts items reported adequately (r=0.45).
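A minimal sketch of how a bivariate association like the one reported above could be computed, assuming Pearson's r (the abstract does not name the statistic) and using entirely hypothetical per-review values rather than the study's actual extraction data:

# Hypothetical example: correlating the number of adapted PRISMA for
# Abstracts items complied with against AMSTAR quality scores, one pair
# of values per included meta-analysis. All numbers below are invented
# for illustration only.
from scipy.stats import pearsonr

prisma_item_counts = [5, 7, 4, 9, 6, 8, 3, 10, 5, 7]  # items complied with
amstar_scores = [6, 8, 5, 9, 6, 9, 4, 10, 5, 8]       # AMSTAR score

r, p_value = pearsonr(prisma_item_counts, amstar_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")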

Conclusions: The quality and completeness of reporting in abstracts of meta-analyses of depression screening tool accuracy were suboptimal. Journal editors should endorse PRISMA for Abstracts and allow flexibility in abstract word counts to improve the quality of abstracts.


Keywords: PRISMA for Abstracts; diagnostic test accuracy; meta-analyses; screening


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/27864250/

DOI: 10.1136/bmjopen-2016-012867