Melanoma has one of the fastest rising incidence rates of any cancer. It accounts
for a small percentage of skin cancer cases but is responsible for the majority of
skin cancer deaths. History‐taking and visual inspection of a suspicious lesion by
a clinician is usually the first in a series of ‘tests’ to diagnose skin cancer. Establishing
the accuracy of visual inspection alone is critical to understanding the potential contribution of additional tests to assist in the diagnosis of melanoma. Our objective was to determine
the diagnostic accuracy of visual inspection for the detection of cutaneous invasive
melanoma and atypical intraepidermal melanocytic variants in adults with limited prior
testing and in those referred for further evaluation of a suspicious lesion. Studies
were separated according to whether the diagnosis was recorded face‐to‐face (in‐person)
or based on remote (image‐based) assessment. We undertook a comprehensive search of
the following databases from inception up to August 2016: CENTRAL; CINAHL; CPCI; Zetoc;
Science Citation Index; US National Institutes of Health Ongoing Trials Register;
NIHR Clinical Research Network Portfolio Database; and the World Health Organization
International Clinical Trials Registry Platform. We studied reference lists and published
systematic review articles. We included test accuracy studies of any design that evaluated visual
inspection in adults with lesions suspicious for melanoma, compared with a reference
standard of either histological confirmation or clinical follow‐up. We excluded studies
reporting data for ‘clinical diagnosis’ where dermoscopy may or may not have been
used. Two review authors independently extracted all data using a standardised data
extraction and quality assessment form (based on QUADAS‐2). We contacted authors of
included studies where information related to the target condition or diagnostic threshold
was missing. We estimated summary sensitivities and specificities per algorithm and
threshold using the bivariate hierarchical model. We investigated the impact of: in‐person
test interpretation; use of a purposely developed algorithm to assist diagnosis; and
observer expertise. We included 49 publications reporting on a total of 51 study cohorts
with 34,351 lesions (including 2499 cases), providing 134 datasets for visual inspection.
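For readers less familiar with the statistical approach, the bivariate hierarchical model named in the methods is typically specified along the following lines; this is a standard formulation from the diagnostic test accuracy meta-analysis literature, offered as an illustrative sketch rather than the review's exact parameterisation. For study $i$, with $TP_i$ true positives among $n_{1i}$ lesions with melanoma and $TN_i$ true negatives among $n_{0i}$ lesions without melanoma:

$$TP_i \sim \mathrm{Binomial}(n_{1i}, \mathrm{se}_i), \qquad TN_i \sim \mathrm{Binomial}(n_{0i}, \mathrm{sp}_i),$$

$$\begin{pmatrix} \mathrm{logit}(\mathrm{se}_i) \\ \mathrm{logit}(\mathrm{sp}_i) \end{pmatrix} \sim N\!\left( \begin{pmatrix} \mu_{\mathrm{se}} \\ \mu_{\mathrm{sp}} \end{pmatrix}, \begin{pmatrix} \sigma^2_{\mathrm{se}} & \rho\,\sigma_{\mathrm{se}}\sigma_{\mathrm{sp}} \\ \rho\,\sigma_{\mathrm{se}}\sigma_{\mathrm{sp}} & \sigma^2_{\mathrm{sp}} \end{pmatrix} \right),$$

so that the summary sensitivity is $\mathrm{logit}^{-1}(\mu_{\mathrm{se}})$ and the summary specificity is $\mathrm{logit}^{-1}(\mu_{\mathrm{sp}})$. The diagnostic odds ratio reported below combines the two as $\mathrm{DOR} = \mathrm{se}\,\mathrm{sp} / \big((1-\mathrm{se})(1-\mathrm{sp})\big)$, and a relative DOR compares two settings (for example, in-person versus image-based interpretation) as the ratio of their DORs.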
Across almost all study quality domains, the majority of study reports provided insufficient
information to allow us to judge the risk of bias, while in three of four domains
that we assessed we scored concerns regarding applicability of study findings as 'high'.
Selective participant recruitment, lack of detail regarding the threshold for deciding
on a positive test result, and lack of detail on observer expertise were particularly
problematic. Attempts to analyse studies by degree of prior testing were hampered
by a lack of relevant information and by the restricted inclusion of lesions selected
for biopsy or excision. Accuracy was generally much higher for in‐person diagnosis
compared to image‐based evaluations (relative diagnostic odds ratio of 8.54, 95% CI
2.89 to 25.3, P < 0.001). Meta‐analysis of in‐person evaluations that could be clearly
placed on the clinical pathway showed a general trade‐off between sensitivity and
specificity, with the highest sensitivity (92.4%, 95% CI 26.2% to 99.8%) and lowest
specificity (79.7%, 95% CI 73.7% to 84.7%) observed in participants with limited prior
testing (n = 3 datasets). Summary sensitivities were lower for those referred for
specialist assessment but with much higher specificities (e.g. sensitivity 76.7%, 95% CI 61.7% to 87.1%, and specificity 95.7%, 95% CI 89.7% to 98.3%, for lesions selected for excision; n = 8 datasets). These differences may be related to differences in the spectrum of included lesions, in the definition of a positive test result, or in observer expertise. We did not find clear evidence that
accuracy is improved by the use of any algorithm to assist diagnosis in all settings.
Attempts to examine the effect of observer expertise in melanoma diagnosis were hindered
by poor reporting. Visual inspection is a fundamental component of the assessment
of a suspicious skin lesion; however, the evidence suggests that melanomas will be
missed if visual inspection is used on its own. The evidence to support its accuracy
in the range of settings in which it is used is flawed and very poorly reported. Although
published algorithms do not appear to improve accuracy, there is insufficient evidence
to suggest that the ‘no algorithm’ approach should be preferred in all settings. Despite
the volume of research evaluating visual inspection, further prospective evaluation
of the potential added value of using established algorithms according to the prior
testing or diagnostic difficulty of lesions may be warranted. What is the aim of the
review? Melanoma is one of the most dangerous forms of skin cancer. The aim of this
Cochrane Review was to find out how accurately checking suspicious skin lesions (lumps, bumps, wounds, scratches or grazes) with the naked eye (visual inspection) can diagnose melanoma (its diagnostic accuracy). The Review also investigated whether diagnostic
accuracy was different depending on whether the clinician was face to face with the
patient (in‐person visual inspection), or looked at an image of the lesion (image‐based
visual inspection). Cochrane researchers included 19 studies to answer this question.
Why is it important to know the diagnostic accuracy of visual examination of skin
lesions suspected to be melanomas? Not recognising a melanoma when it is present (a
false‐negative test result) delays surgery to remove it (excision), risking cancer
spreading to other organs in the body and possibly death. Diagnosing a skin lesion
(a mole or area of skin with an unusual appearance in comparison with the surrounding
skin) as a melanoma when it is not (a false‐positive result) may result in unnecessary
surgery, further investigations, and patient anxiety. Visual inspection of suspicious
skin lesions by a clinician using the naked eye is usually the first of a series of
‘tests’ to diagnose melanoma. Knowing the diagnostic accuracy of visual inspection
alone is important to decide whether additional tests, such as a biopsy (removing
a part of the lesion for examination under a microscope) are needed to improve accuracy
to an acceptable level. What did the review study? Researchers wanted to find out
the diagnostic accuracy of in‐person compared with image‐based visual inspection of
suspicious skin lesions. Researchers also wanted to find out whether diagnostic accuracy
was improved if doctors used a 'visual inspection checklist' or depending on how experienced
in visual inspection they were (level of clinical expertise). They considered the
diagnostic accuracy of the first visual inspection of a lesion, for example, by a
general practitioner (GP), and of lesions that had been referred for further evaluation,
for example, by a dermatologist (doctor specialising in skin problems). What are the
main results of the review? Only 19 studies (17 in‐person and 2 image‐based) clearly reported whether the test was the first visual inspection of a lesion or a visual inspection following referral (for example, when patients are referred
by a GP to skin specialists for visual inspection).
First in‐person visual inspection (3 studies)
The results of three studies of 1339 suspicious skin lesions suggest that in a group of 1000 lesions, of which 90 (9%) actually are melanoma:
‐ An estimated 268 will have a visual inspection result indicating melanoma is present. Of these, 185 will not be melanoma and will result in an unnecessary biopsy (false‐positive results).
‐ An estimated 732 will have a visual inspection result indicating that melanoma is not present. Of these, seven will actually have melanoma and would not be sent for biopsy (false‐negative results).
Two further studies restricted to 4228 suspicious skin lesions that were all selected to be excised found similar results.
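These per-1000 figures follow directly from the summary sensitivity and specificity reported in the abstract (92.4% and 79.7% for first in-person inspection; 76.7% and 95.7% for lesions selected for excision after referral), applied to an assumed prevalence of 9%. The short sketch below is ours rather than part of the review; it reproduces the counts quoted here and in the referral scenario that follows, with any small discrepancies due only to rounding.

```python
# A minimal sketch (not part of the review) of the 2 x 2 arithmetic behind the
# per-1000-lesion illustrations: counts are derived from a summary sensitivity,
# a summary specificity, and an assumed prevalence of melanoma.

def counts_per_1000(sensitivity, specificity, prevalence=0.09, n=1000):
    """Return (test positive, false positive, test negative, false negative) counts."""
    melanomas = round(n * prevalence)        # lesions that truly are melanoma
    benign = n - melanomas                   # lesions that are not melanoma
    true_pos = round(melanomas * sensitivity)
    false_neg = melanomas - true_pos         # melanomas missed by visual inspection
    true_neg = round(benign * specificity)
    false_pos = benign - true_neg            # benign lesions flagged as melanoma
    return (true_pos + false_pos, false_pos, true_neg + false_neg, false_neg)

# First in-person visual inspection (summary sensitivity 92.4%, specificity 79.7%):
print(counts_per_1000(0.924, 0.797))  # -> (268, 185, 732, 7)

# In-person visual inspection after referral, lesions selected for excision
# (summary sensitivity 76.7%, specificity 95.7%):
print(counts_per_1000(0.767, 0.957))  # -> (108, 39, 892, 21)
```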
In‐person visual inspection after referral, all lesions selected to be excised (8 studies)
The results of eight studies of 5331 suspicious skin lesions suggest that in a group of 1000 lesions, of which 90 (9%) actually are melanoma:
‐ An estimated 108 will have a visual inspection result indicating melanoma is present, and of these, 39 will not be melanoma and will result in an unnecessary biopsy (false‐positive results).
‐ Of the 892 lesions with a visual inspection result indicating that melanoma is not present, 21 will actually be melanoma and would not be sent for biopsy (false‐negative results).
Overall, the number of false‐positive results (diagnosing a skin lesion
as a melanoma when it is not) was observed to be higher and the number of false‐negative
results (not recognising a melanoma when it is present) lower for first visual inspections
of suspicious skin lesions compared to visual inspection following referral. Visual
inspection of images of suspicious skin lesions (2 studies) Accuracy was much lower
for visual inspection of images of lesions compared to visual inspection in person.
Value of visual inspection checklists There was no evidence that use of a visual inspection
checklist or the level of clinical expertise changed diagnostic accuracy. How reliable
are the results of the studies in this review? The majority of included studies diagnosed
melanoma by lesion biopsy and confirmed that melanoma was not present by biopsy or
by follow‐up over time to make sure the skin lesion remained negative for melanoma.
In these studies, biopsy, clinical follow‐up, or specialist clinician diagnosis were
the reference standards (means of establishing final diagnoses). Biopsy or follow‐up
are likely to have been reliable methods for deciding whether patients really had
melanoma. In a few studies, experts diagnosed the absence of melanoma (expert diagnosis),
which is less likely to have been a reliable method for deciding whether patients
really had melanoma. There was lots of variation in the results of the studies in
this review and the studies did not always describe fully the methods they used, which
made it difficult to assess their reliability. Who do the results of this review apply
to? Thirteen studies (68%) were undertaken in Europe, with the remainder conducted in Asia (n = 1), Oceania (n = 4), and North America (n = 1). Mean age ranged from
30 to 73.6 years (reported in 10 studies). The percentage of individuals with melanoma
ranged between 4% and 20% in first visualised lesions and between 1% and 50% in studies
of referred lesions. In the majority of studies, the lesions were unlikely to be representative
of the range of those seen in practice, for example, only including skin lesions of
a certain size or with a specific appearance. In addition, variation in the expertise
of clinicians performing visual inspection and in the definition used to decide whether
or not melanoma was present across studies makes it unclear how visual inspection
should be carried out and by whom in order to achieve the accuracy observed in studies.
What are the implications of this review? Error rates from visual inspection are too
high for it to be relied upon alone. Although not evaluated in this review, other
technologies need to be used to ensure accurate diagnosis of skin cancer. There is
considerable variation and uncertainty about the diagnostic accuracy of visual inspection
alone for the diagnosis of melanoma. There is no evidence to suggest that visual inspection
checklists reliably improve the diagnostic accuracy of visual inspection, so recommendations
cannot be made about when they should be used. Despite the existence of numerous research
studies, further, well‐reported studies assessing the diagnostic accuracy of visual
inspection with and without visual inspection checklists and by clinicians with different
levels of expertise are needed. How up‐to‐date is this review? The review authors
searched for and used studies published up to August 2016.