Semantic segmentation is an important task for a wide range of applications, from medical imaging to autonomous vehicles. However, current state-of-the-art methods require large datasets with per-pixel annotations, which are costly and time-consuming to curate. This paper presents Seg-Diff – an active learning method for estimating a segmentation model's uncertainty on unlabeled images. Our method computes the difference in predictive uncertainty across saved training checkpoints, and aggregates these differences into a scalar uncertainty ranking that can also be visualized as an uncertainty heatmap. Using Seg-Diff to sample images for active learning, we consistently outperform random sampling on the Cityscapes dataset as measured by mean Intersection over Union (mIoU).
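
The checkpoint-difference scoring described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-pixel predictive uncertainty is measured as softmax entropy, that the heatmap is the absolute entropy change between two checkpoints, and that the scalar ranking score is the mean over pixels; the function names and aggregation choice are assumptions.

```python
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    # probs: (H, W, C) softmax outputs for one image from one checkpoint.
    # Returns an (H, W) map of per-pixel predictive entropy.
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def seg_diff_score(probs_ckpt_a, probs_ckpt_b):
    # Uncertainty heatmap: absolute change in per-pixel entropy
    # between two saved training checkpoints (an assumed choice of
    # uncertainty measure; the abstract does not specify one).
    heatmap = np.abs(pixel_entropy(probs_ckpt_a) - pixel_entropy(probs_ckpt_b))
    # Scalar ranking score: mean over all pixels (assumed aggregation);
    # higher scores would be sampled first for annotation.
    return heatmap, float(heatmap.mean())

# Toy usage: two random softmax maps standing in for checkpoint outputs.
rng = np.random.default_rng(0)

def random_probs(h, w, c):
    logits = rng.standard_normal((h, w, c))
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)

heatmap, score = seg_diff_score(random_probs(4, 4, 3), random_probs(4, 4, 3))
```

In an active learning loop, one would compute `score` for every unlabeled image, rank the pool, and send the top-scoring images for per-pixel annotation.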