Most computer vision algorithms operate pixel-wise, extracting features from a small neighborhood around each pixel. Such a feature extraction strategy ignores the real-world context of an object. By taking geometric context into account when classifying regions of a scene, similar features obtained from different regions can be discriminated according to their context. We propose a geometric-context-based scene decomposition method and apply it in a context-aware Augmented Reality (AR) system. The proposed system segments a single image of a scene into a set of semantic classes representing the dominant surfaces in the scene. The classification method is evaluated on an urban driving sequence with labeled ground truth and found to be robust in classifying scene regions into the dominant surface classes. The classified surfaces are then used to generate a 3D scene, which serves as input to the AR system. Viewing the 3D scene through the context-aware AR system enables visual touring from single images and provides an experimental tool for improving our understanding of human visual perception.
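To make the idea of geometric-context decomposition concrete, the following is a minimal, purely illustrative sketch: toy image regions, described by coarse statistics, are assigned to dominant surface classes (sky, vertical, ground) using a region's vertical position in the frame as context. The feature set, thresholds, and function names here are assumptions for illustration only, not the classifier used by the proposed system.

```python
# Hypothetical sketch of context-aware region classification.
# Each region carries coarse statistics; the normalized vertical
# position (0 = top of image, 1 = bottom) supplies geometric context
# that pixel-local features alone would miss.

def classify_region(mean_row, mean_blue):
    """Assign a geometric class from a region's normalized vertical
    position and mean blue intensity (both in [0, 1])."""
    if mean_row < 0.3 and mean_blue > 0.6:
        return "sky"        # high in the frame and strongly blue
    if mean_row > 0.7:
        return "ground"     # low in the frame, regardless of color
    return "vertical"       # facades, trees, obstacles in between

def decompose(regions):
    """Map each region (a dict of statistics) to a surface label."""
    return {r["id"]: classify_region(r["mean_row"], r["mean_blue"])
            for r in regions}

regions = [
    {"id": 0, "mean_row": 0.1, "mean_blue": 0.8},  # high, blue region
    {"id": 1, "mean_row": 0.5, "mean_blue": 0.3},  # mid-frame region
    {"id": 2, "mean_row": 0.9, "mean_blue": 0.2},  # low region
]
labels = decompose(regions)
```

Note how regions 1 and 2 could share similar local color statistics yet receive different labels because their positions in the scene differ; this is the discriminative role that geometric context plays.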