Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples: a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on \(L_2\) and \(L_\infty\) distortion metrics. However, although \(L_1\) distortion accounts for the total variation and encourages sparsity in the perturbation, little work has been devoted to crafting \(L_1\)-based adversarial examples. In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem. Our Elastic-net Attacks to DNNs (EAD) feature \(L_1\)-oriented adversarial examples and include the state-of-the-art \(L_2\) attack as a special case. Experimental results on MNIST, CIFAR10, and ImageNet show that EAD can yield a distinct set of adversarial examples and attains attack performance similar to that of state-of-the-art methods across different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training for DNNs, suggesting novel insights into leveraging \(L_1\) distortion in adversarial learning and its security implications for DNNs.
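For concreteness, the elastic-net regularized attack problem mentioned above can be sketched as follows (the notation here is assumed: \(x_0\) denotes the benign image, \(t\) the target label, \(f(x,t)\) a classification attack loss, and \(c, \beta \geq 0\) the regularization parameters trading off the loss and the \(L_1\)/\(L_2\) distortions):
\[
\min_{x}\; c \cdot f(x, t) \;+\; \beta \,\lVert x - x_0 \rVert_1 \;+\; \lVert x - x_0 \rVert_2^2
\qquad \text{subject to } x \in [0, 1]^p .
\]
Setting \(\beta = 0\) removes the \(L_1\) penalty, which is how the state-of-the-art \(L_2\) attack arises as the special case noted above; \(\beta > 0\) promotes sparse, \(L_1\)-oriented perturbations.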