Table 2 Characteristics of the training sets and Abstrackr’s predictions for each review

From: The semi-automation of title and abstract screening: a retrospective exploration of ways to leverage Abstrackr’s relevance predictions in systematic and rapid reviews

| Review name | Screening workload, n^a | Training set, n includes/excludes (% includes)^b | Predicted relevant by Abstrackr, n (%) |
|---|---|---|---|
| Systematic reviews | | | |
| Biomarkers | 1812 | 14/186 (7) | 503 (31) |
| Brain injury | 6262 | 11/189 (6) | 2126 (35) |
| Activity and pregnancy | 2928 | 10/190 (5) | 319 (12) |
| Concussion | 1439 | 3/197 (2) | 638 (51) |
| Antipsychotics | 12,156 | 15/185 (8) | 2117 (18) |
| Digital technologies for pain | 2662 | 15/185 (8) | 321 (13) |
| Treatments for bronchiolitis | 5861 | 12/188 (6) | 656 (12) |
| VBAC | 5092 | 25/175 (13) | 1490 (30) |
| Visual acuity | 11,229 | 4/296 (1) | 3639 (33) |
| Experience of bronchiolitis | 651 | 13/187 (7) | 111 (25) |
| Experiences of UTIs | 1493 | 3/197 (2) | 864 (67) |
| Rapid reviews | | | |
| Preterm delivery | 451 | 47/153 (24) | 95 (38) |
| Community gardening | 1536 | 55/145 (28) | 139 (10) |
| Depression safety | 964 | 7/193 (4) | 449 (59) |
| Depression treatments | 1583 | 43/157 (22) | 904 (65) |
| Patient education for cancer | 2413 | 5/195 (3) | 1410 (64) |
| Workplace stress | 767 | 36/164 (18) | 210 (37) |

UTI urinary tract infection; VBAC vaginal birth after caesarean section

^a Retrospective screening workload for each of the two reviewers in systematic reviews, and for the single reviewer in rapid reviews

^b The training sets were 200 records for all reviews, with the exception of the Visual acuity systematic review, for which 300 records were needed for Abstrackr to develop predictions