Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.

Details

Resource 1 Download: BIB_8910DD2E4C90.P001.pdf (1449.07 KB)
State: Public
Version: author
Serval ID
serval:BIB_8910DD2E4C90
Type
Article: article from journal or magazine.
Collection
Publications
Institution
Title
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Journal
PLOS ONE
Author(s)
Soneson C., Gerster S., Delorenzi M.
ISSN
1932-6203 (Electronic)
ISSN-L
1932-6203
Publication state
Published
Issued date
2014
Peer-reviewed
Yes
Volume
9
Number
6
Pages
e100335
Language
English
Notes
Publication types: Journal Article; Publication status: epublish
Abstract
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies.
FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. This focus on estimation bias is a key difference from previous studies, which have mostly examined predictive performance and how it relates to the presence of batch effects.
DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects.
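As a minimal sketch (not the authors' code) of the kind of simulation the DATA paragraph describes, the following assumes a real expression matrix used as background and adds group labels, a tunable level of batch confounding, a batch shift, and a set of truly differential genes. All function and parameter names (simulate, n_per_group, confounding, batch_shift, effect_size) are illustrative assumptions.

```python
# Illustrative sketch of batch-confounded simulation on top of real expression data.
import numpy as np

rng = np.random.default_rng(0)

def simulate(expr, n_per_group=50, n_diff=100, confounding=0.8,
             batch_shift=1.0, effect_size=1.0):
    """expr: genes x samples matrix of real expression values used as background."""
    n_genes, n_avail = expr.shape
    idx = rng.choice(n_avail, size=2 * n_per_group, replace=False)
    X = expr[:, idx].astype(float)
    group = np.repeat([0, 1], n_per_group)          # group 1 ('control') vs group 2 ('treated')

    # Assign batches with a given level of confounding: with probability
    # `confounding`, a sample's batch matches its group, otherwise it is flipped.
    batch = np.where(rng.random(2 * n_per_group) < confounding, group, 1 - group)

    # Add a batch effect (here a simple additive shift for all genes in batch 1).
    X[:, batch == 1] += batch_shift

    # Select some features to be differentially expressed between the two groups.
    diff_genes = rng.choice(n_genes, size=n_diff, replace=False)
    X[np.ix_(diff_genes, np.flatnonzero(group == 1))] += effect_size

    return X, group, batch
```

Varying `confounding` between 0.5 (balanced batches) and 1.0 (complete confounding) reproduces the range of scenarios the abstract refers to.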
METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
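A minimal sketch, assuming scikit-learn, of the nested cross-validation setup the METHODS paragraph describes: feature selection and parameter tuning sit inside the inner loop, the outer loop yields the cross-validated estimate, and that estimate is compared with accuracy on independent data. The SVM classifier, the univariate filter (a stand-in for the Wilcoxon test), and the grid values are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative nested CV: tuning and feature selection inside the inner folds.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.metrics import accuracy_score

def nested_cv_estimate(X_train, y_train, X_indep, y_indep):
    """X_*: samples x features; y_*: binary group labels (0/1)."""
    pipe = Pipeline([
        ("select", SelectKBest(f_classif)),   # univariate filter; Wilcoxon used in the paper
        ("clf", SVC(kernel="linear")),
    ])
    grid = {"select__k": [10, 50, 100], "clf__C": [0.1, 1, 10]}
    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

    search = GridSearchCV(pipe, grid, cv=inner, scoring="accuracy")

    # Outer loop: the cross-validated performance estimate.
    cv_acc = cross_val_score(search, X_train, y_train, cv=outer, scoring="accuracy").mean()

    # Refit on the full training set, then evaluate on independent data for comparison.
    search.fit(X_train, y_train)
    indep_acc = accuracy_score(y_indep, search.predict(X_indep))
    return cv_acc, indep_acc
```

Under batch confounding, the gap between `cv_acc` and `indep_acc` is the estimation bias the study quantifies: cross-validation within the confounded data can look far more optimistic than performance on independent data.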
PubMed
Web of Science
Open Access
Yes
Create date
05/08/2014 18:46
Last modification date
20/08/2019 15:48