Virtual Reality (VR) is a powerful tool for simulating complex, real-world situations and environments, offering researchers unprecedented opportunities to investigate human behaviour under closely controlled laboratory conditions. Facial emotion recognition has attracted considerable interest for interaction in virtual reality, healthcare (e.g., therapeutic applications), and video surveillance. In this paper, we propose a method for facial emotion recognition in immersive virtual environments based on 2D and 3D geometrical features. We use a dataset we collected of 17 subjects performing six basic facial emotions (anger, fear, happiness, surprise, sadness, and neutral), recorded with three devices: Kinect (v1), Kinect (v2), and an RGB HD camera. In addition, we report the performance of the RGB data for facial emotion recognition using the Bagged Trees algorithm. To assess the performance of the proposed system, we use leave-one-subject-out cross-validation and compare the performance of the 2D and 3D data for facial expression recognition. The results show the superior performance of the RGB-D features provided by Kinect (v2); our findings indicate that 2D images alone are not robust enough for facial emotion recognition. The resulting facial emotion models will animate virtual characters that express emotions through facial expressions, with potential applications in chatting, learning, and therapeutic intervention.
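The evaluation protocol described above (a bagged-trees classifier scored with leave-one-subject-out cross-validation over 17 subjects and six emotion classes) can be sketched as follows. This is a minimal illustration assuming scikit-learn and synthetic stand-in features; the feature dimensions, sample counts, and random data are placeholders, not the paper's actual dataset.

```python
# Minimal sketch of leave-one-subject-out cross-validation with a
# bagged-trees classifier (scikit-learn). All data below is synthetic.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects = 17          # as in the paper
samples_per_subject = 12  # hypothetical: recordings per subject
n_features = 20           # hypothetical: geometric feature dimension

# Synthetic feature matrix, emotion labels (6 classes), and subject IDs.
X = rng.normal(size=(n_subjects * samples_per_subject, n_features))
y = rng.integers(0, 6, size=n_subjects * samples_per_subject)
groups = np.repeat(np.arange(n_subjects), samples_per_subject)

# BaggingClassifier bags decision trees by default ("Bagged Trees").
clf = BaggingClassifier(n_estimators=50, random_state=0)

# LeaveOneGroupOut holds out all samples of one subject per fold,
# so no subject appears in both training and test data.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"folds: {len(scores)}, mean accuracy: {scores.mean():.2f}")
```

Grouping the folds by subject ID (rather than random splits) is what makes the protocol subject-independent: accuracy is measured only on faces the model has never seen.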