Many applications in mobile and underwater robotics employ 3D vision techniques for navigation and mapping. These techniques usually involve the extraction and 3D reconstruction of scene interest points. In large environments, however, the huge volume of acquired information can pose serious problems for real-time data processing. Moreover, in order to minimize drift, these techniques use data association to close trajectory loops, decreasing the uncertainty in the estimate of the robot's position and increasing the precision of the resulting 3D models. When faced with large numbers of features, the efficiency of data association decreases drastically, affecting the overall performance. This paper proposes a framework that greatly reduces the number of extracted features with minimal impact on the precision of the 3D scene model. This is achieved by minimizing representation redundancy: the geometry of the environment is analyzed and only those features that are both photometrically and geometrically significant are extracted.