Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots
Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found a bias toward slopes that are "too steep" in both cases. This led to the novel insight that participants naturally assessed regression fit using orthogonal distances between the points and the line (i.e., ODR regression) rather than the vertical distances used in common OLS regression. In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Although error lines reduced validation bias, no design produced the desired improvement in accuracy. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.
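To make the OLS versus ODR distinction concrete, the following minimal Python sketch contrasts the two slope estimates on the same synthetic data. It is illustrative only and not taken from the paper's materials; the choice of numpy.polyfit and scipy.odr, the synthetic data, and the noise levels are assumptions for this example.

import numpy as np
from scipy import odr

# Synthetic scatterplot data with noise in both x and y
# (illustrative only; not the stimuli used in the paper's experiments).
rng = np.random.default_rng(0)
x_true = np.linspace(0, 1, 100)
x = x_true + rng.normal(scale=0.05, size=x_true.size)
y = 0.8 * x_true + rng.normal(scale=0.05, size=x_true.size)

# OLS: minimizes vertical distances between the points and the line.
ols_slope, ols_intercept = np.polyfit(x, y, 1)

# ODR (orthogonal distance regression): minimizes perpendicular distances.
linear_model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
odr_fit = odr.ODR(odr.RealData(x, y), linear_model, beta0=[1.0, 0.0]).run()
odr_slope, odr_intercept = odr_fit.beta

# With noise in both coordinates, the ODR slope is typically steeper than
# the OLS slope, which parallels the "too steep" bias discussed in the abstract.
print(f"OLS slope: {ols_slope:.3f}, ODR slope: {odr_slope:.3f}")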
Cite this paper
D. Braun, R. Chang, M. Gleicher and T. v. Landesberger, "Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots," in IEEE Transactions on Visualization and Computer Graphics, 2024.
BibTeX:
@Article{Braun_2024_validation,
  author    = {Braun, Daniel and Chang, Remco and Gleicher, Michael and von Landesberger, Tatiana},
  title     = {Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots},
  journal   = {IEEE Transactions on Visualization and Computer Graphics},
  year      = {2024},
  publisher = {{IEEE}},
  doi       = {10.1109/TVCG.2024.3456305},
}