This paper explores how model design choices can cause or exacerbate algorithmic bias, challenging the common view that data are the predominant source of bias in machine learning systems. The author cites two factors that limit how far bias can be curbed by improving the quality or scope of training data alone: the inherent messiness of real-world data, and the difficulty of anticipating in advance which features of a model will introduce bias. Model designers should therefore consider how choices such as training duration or the use of differential privacy techniques can affect model accuracy for groups underrepresented in the data.
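The differential-privacy point can be made concrete with a minimal sketch, not taken from the paper: per-example gradient clipping plus Gaussian noise in the style of DP-SGD (Abadi et al., 2016) is applied to a toy logistic-regression task in which an underrepresented group's class signal lives on its own feature. All group sizes, ratios, and hyperparameters below are illustrative assumptions.

```python
# Sketch (illustrative, not the paper's code): DP-SGD-style clipping + noise
# tends to degrade accuracy more for a group with few examples, whose small
# aggregate gradient signal is easier for the added noise to drown out.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n_per_class, sep_axis):
    """Two balanced classes separated by +/-1.5 along one feature axis."""
    X = rng.normal(0.0, 1.0, size=(2 * n_per_class, 2))
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    X[:, sep_axis] += np.where(y == 1, 1.5, -1.5)
    return X, y

# Majority group's signal lives on feature 0; the minority group's signal
# lives on feature 1 and has 20x fewer examples (assumed ratio).
X_maj, y_maj = make_group(2000, sep_axis=0)
X_min, y_min = make_group(100, sep_axis=1)
X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])

def train(X, y, dp, clip=1.0, noise_mult=2.0, lr=0.2, epochs=40, batch=64):
    """Logistic regression via SGD; if dp, clip per-example grads + add noise."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        order = rng.permutation(len(y))
        for start in range(0, len(y), batch):
            sel = order[start:start + batch]
            Xs, ys = X[sel], y[sel]
            p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
            gw = (p - ys)[:, None] * Xs          # per-example grad wrt w
            gb = p - ys                          # per-example grad wrt b
            if dp:
                # DP-SGD recipe in miniature: clip each example's gradient
                # to norm <= clip, sum, then add Gaussian noise.
                norms = np.sqrt((gw ** 2).sum(axis=1) + gb ** 2)
                scale = np.minimum(1.0, clip / np.maximum(norms, 1e-12))
                gw *= scale[:, None]; gb *= scale
                gw_tot = gw.sum(0) + rng.normal(0, noise_mult * clip, size=2)
                gb_tot = gb.sum() + rng.normal(0, noise_mult * clip)
            else:
                gw_tot, gb_tot = gw.sum(0), gb.sum()
            w -= lr * gw_tot / len(sel)
            b -= lr * gb_tot / len(sel)
    return w, b

def acc(w, b, X, y):
    return float((((X @ w + b) > 0) == y).mean())

for dp in (False, True):
    w, b = train(X, y, dp)
    print(f"dp={dp}:  majority acc={acc(w, b, X_maj, y_maj):.2f}"
          f"  minority acc={acc(w, b, X_min, y_min):.2f}")
# In typical runs the minority group loses noticeably more accuracy under
# the DP-style updates than the majority group does.
```

The same harness could be reused for the training-duration point by varying `epochs`: with fewer passes over the data, the weight carrying the minority group's signal has had proportionally fewer informative updates, so in this toy setup early stopping would likewise be expected to cost the minority group more.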