In this paper, we adopt a multimodal framework that combines eye movements and electroencephalography (EEG) to enhance emotion recognition. The main contributions of this paper are twofold. a) We investigate sixteen eye movements related to emotions and identify their intrinsic patterns for three emotional states: positive, neutral, and negative. b) We examine various modality fusion strategies for integrating users' external subconscious behaviors and internal cognitive states, and reveal that the characteristics of eye movements and EEG are complementary for emotion recognition. Experimental results demonstrate that modality fusion significantly improves emotion recognition accuracy in comparison with either single modality. The best accuracy, achieved with the fuzzy integral fusion strategy, is 87.59%, whereas the accuracies of using eye movements and EEG alone are 77.80% and 78.51%, respectively.
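To make the fuzzy integral fusion strategy concrete, the following is a minimal sketch of Sugeno fuzzy-integral fusion over two per-modality classifiers. The fuzzy densities g_eye and g_eeg (how much each modality is trusted), the function name, and the toy scores are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def fuse_fuzzy_integral(h_eye, h_eeg, g_eye=0.5, g_eeg=0.5):
    """Fuse per-class confidences from two classifiers via the Sugeno integral.

    h_eye, h_eeg : arrays of shape (n_classes,), each modality's class scores.
    g_eye, g_eeg : fuzzy densities (the worth assigned to each modality),
                   assumed to be strictly positive here.
    """
    # The Sugeno lambda for two sources solves (1 + lam*g1)(1 + lam*g2) = 1 + lam,
    # which reduces to the closed form below (lam = 0 when g1 + g2 = 1).
    lam = (1.0 - g_eye - g_eeg) / (g_eye * g_eeg)

    fused = np.empty_like(h_eye, dtype=float)
    for c in range(len(h_eye)):
        # Sort the two sources by their confidence for this class (descending).
        pairs = sorted([(h_eye[c], g_eye), (h_eeg[c], g_eeg)], reverse=True)
        # g(A_1) is the density of the top source; g(A_2) covers both sources,
        # built with the Sugeno lambda-rule (equals 1.0 for the full set).
        g_a1 = pairs[0][1]
        g_a2 = g_a1 + pairs[1][1] + lam * g_a1 * pairs[1][1]
        # Sugeno integral: max over i of min(h_(i), g(A_i)).
        fused[c] = max(min(pairs[0][0], g_a1), min(pairs[1][0], g_a2))
    return fused

# Toy example with three emotion classes (positive, neutral, negative):
h_eye = np.array([0.6, 0.3, 0.1])   # eye-movement classifier scores
h_eeg = np.array([0.5, 0.1, 0.4])   # EEG classifier scores
print(fuse_fuzzy_integral(h_eye, h_eeg).argmax())  # index of the fused prediction
```

In this sketch, the fused score for each class takes the larger of the two modalities' confidences, tempered by how much the measure trusts the sources supporting it, which is one way the complementarity of eye movements and EEG can be exploited at decision level.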