When measuring preferences, discrete choice experiments (DCEs) typically assume that respondents consider all of the available information before making decisions. In practice, however, many respondents consider only a subset of the choice attributes, a heuristic called attribute non-attendance (ANA). Failure to account for ANA can bias DCE results, potentially leading to flawed policy recommendations. Conventional latent class logit models are the most common tool for assessing ANA, but they are often not flexible enough to separate non-attendance from respondents' low valuation of certain attributes, resulting in inflated rates of ANA. In this paper, we show that semi-parametric mixtures of latent class models can successfully disentangle inferred non-attendance from respondents' “weaker” taste sensitivities for certain attributes. In a DCE on the job preferences of health workers in Ethiopia, we demonstrate that such models provide more reliable estimates of inferred non-attendance than the alternative methods currently in use. Moreover, since we find statistically significant variation in the rates of ANA exhibited by different health worker cadres, we highlight the need for well-defined attributes in a DCE, so that ANA does not stem from a weak experimental design.
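For illustration, the sketch below shows the kind of two-class latent class logit conventionally used to infer ANA: one class estimates all taste coefficients freely, while the other constrains one coefficient (here a hypothetical "salary" attribute) to zero. The simulated data, attribute names, and parameter values are assumptions for illustration only, not the paper's specification, data, or results.

```python
# Minimal sketch (not the authors' estimator) of a conventional two-class
# latent class logit for inferred attribute non-attendance (ANA).
# All names and values below are illustrative assumptions: in the simulated
# data every respondent attends to salary, but half value it only weakly.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Simulated DCE: N respondents, T choice tasks, J = 2 alternatives, 2 attributes
N, T, J = 300, 8, 2
X = rng.normal(size=(N, T, J, 2))                    # attributes: [salary, workload]
b_salary = np.where(rng.random(N) < 0.5, 1.0, 0.15)  # all attend; half value salary weakly
U = X[..., 0] * b_salary[:, None, None] - 0.8 * X[..., 1]
y = (U + rng.gumbel(size=U.shape)).argmax(axis=2)    # chosen alternative per task

def respondent_loglik(beta):
    """Log-likelihood of each respondent's choice sequence under one taste vector."""
    v = X @ beta                                      # (N, T, J) systematic utilities
    logp = v - logsumexp(v, axis=2, keepdims=True)    # conditional logit choice probabilities
    chosen = np.take_along_axis(logp, y[..., None], axis=2).squeeze(-1)
    return chosen.sum(axis=1)                         # one value per respondent

def neg_mixture_loglik(params):
    b_full = params[:2]                               # class 1: attends to both attributes
    b_ana = np.array([0.0, params[2]])                # class 2: salary coefficient fixed at 0
    pi = 1.0 / (1.0 + np.exp(-params[3]))             # share of the full-attendance class
    ll = np.stack([np.log(pi) + respondent_loglik(b_full),
                   np.log(1.0 - pi) + respondent_loglik(b_ana)])
    return -logsumexp(ll, axis=0).sum()               # respondent-level mixture likelihood

res = minimize(neg_mixture_loglik, x0=np.zeros(4), method="BFGS")
print("estimated share attending to salary:", 1.0 / (1.0 + np.exp(-res.x[3])))
# Because class 2 forces the salary coefficient to zero, weak but nonzero
# sensitivities can be absorbed into that class, inflating apparent ANA;
# this is the conflation the semi-parametric mixture of latent class models
# discussed in the paper is intended to resolve.
```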