Emotional state has a major impact on human health and on job performance, so a system that can continuously and automatically monitor people's emotions is worth developing. Moreover, speech is the most common medium of human communication and always carries emotional cues, which makes emotion recognition from speech an increasingly important problem. In this paper, we propose a strategy for emotion recognition from speech that combines an evolutionary algorithm (EA) with Empirical Mode Decomposition (EMD) to improve the emotion recognition rate. First, emotional speech signals are decomposed by EMD into several Intrinsic Mode Functions (IMFs), from which the emotional content of a speech signal is extracted. A weighted combination of these IMFs is then used in the subsequent recognition process; one goal of this paper is therefore to find the optimal weight for each IMF so that the combined signal makes the recognition results as accurate as possible. The weights are trained by an evolutionary algorithm to find an optimal combination of IMFs; an EA is used here because evolutionary algorithms have repeatedly achieved outstanding performance in research on optimal design. Mel-Frequency Cepstral Coefficients (MFCCs) are then computed from the weighted combination and used as the features for emotion recognition. An open database, the eNTERFACE 2005 emotion database, is adopted as the training and testing data for the experiments.
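The central idea described above, searching for one weight per IMF with an evolutionary algorithm, can be sketched as follows. This is an illustrative toy rather than the paper's implementation: the names `weighted_combination`, `fitness`, and `evolve` are hypothetical, the IMFs are stand-in sinusoids rather than the output of a real EMD sifting process, and the fitness is a synthetic reconstruction error standing in for the recognition rate of a classifier trained on MFCC features.

```python
import math
import random

random.seed(0)

N = 200
# Toy "IMFs": sinusoids of distinct frequencies stand in for real EMD output.
imfs = [[math.sin(2 * math.pi * f * i / N) for i in range(N)] for f in (8, 3, 1)]
# Hypothetical "emotional" component: 1.0 * IMF1 + 0.5 * IMF2 + 0.0 * IMF3.
target = [1.0 * imfs[0][i] + 0.5 * imfs[1][i] for i in range(N)]

def weighted_combination(imfs, weights):
    """Combine the IMFs sample by sample, one weight per IMF."""
    return [sum(w * imf[i] for w, imf in zip(weights, imfs))
            for i in range(len(imfs[0]))]

def fitness(weights, imfs, target):
    """Toy objective: negative squared error against the target component.
    In the paper's pipeline the objective would instead be the emotion
    recognition rate obtained from MFCC features of the combined signal."""
    combo = weighted_combination(imfs, weights)
    return -sum((c - t) ** 2 for c, t in zip(combo, target))

def evolve(imfs, target, pop_size=30, generations=60, sigma=0.1):
    """Minimal generational EA: truncation selection, one-point crossover,
    Gaussian mutation; the top half survives unchanged (elitism)."""
    n = len(imfs)
    pop = [[random.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, imfs, target), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)                 # one-point crossover
            child = [w + random.gauss(0.0, sigma)        # Gaussian mutation
                     for w in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, imfs, target))

best = evolve(imfs, target)
```

With the random seed fixed, the evolved weight vector should move toward (1.0, 0.5, 0.0), the coefficients used to construct the target; in the actual system the evolved weights would instead emphasize whichever IMFs carry the most emotional information.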