Mohammad Tariqul Islam

About me

I am an assistant professor in the Computer Science Department at Southern Connecticut State University. I am interested in research problems arising in computer vision, machine learning, and big data. My active research areas are outdoor scene analysis, image geo-localization, and the analysis of large-scale image data.

Selected Research Projects

While face analysis from images is a well-studied area, little work has explored the dependence of facial appearance on the geographic location from which the image was captured. To fill this gap, we constructed GeoFaces, a large dataset of geotagged face images, and used it to examine the geo-dependence of facial features and attributes, such as ethnicity, gender, or the presence of facial hair. Our analysis illuminates the relationship between raw facial appearance, facial attributes, and geographic location, both globally and in selected major urban areas. Some of our experiments, and the resulting visualizations, confirm prior expectations, such as the predominance of ethnically Asian faces in Asia, while others highlight novel information that can be obtained with this type of analysis, such as the major city with the highest percentage of people with a mustache.
____________________________________________________________________________________________________________________

The facial appearance of a person is a product of many factors, including their gender, age, and ethnicity. Methods for estimating these latent factors directly from an image of a face have been studied extensively for decades. We extend this line of work to include estimating the location where the image was taken. We propose a deep network architecture for making such predictions and demonstrate its superiority to other approaches in an extensive set of quantitative experiments on the GeoFaces dataset. Our experiments show that in 26% of the cases the ground truth location is the topmost prediction, and if we allow ourselves to consider the top five predictions, the accuracy increases to 47%. In both cases, the deep-learning-based approach significantly outperforms random chance as well as another baseline method.
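The top-1 and top-5 figures above are instances of the standard top-k accuracy metric: the fraction of test images whose true location is among the k highest-scoring candidates. As a minimal sketch (with hypothetical toy scores, not the actual GeoFaces results), it can be computed as:

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k largest scores per row
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

# toy scores over 4 candidate locations for 3 faces (illustrative values only)
scores = np.array([
    [0.1, 0.7, 0.1, 0.1],
    [0.4, 0.2, 0.3, 0.1],
    [0.2, 0.2, 0.5, 0.1],
])
labels = np.array([1, 2, 2])
print(top_k_accuracy(scores, labels, 1))  # 2 of 3 correct at top-1
print(top_k_accuracy(scores, labels, 2))  # all 3 covered within the top 2
```

Allowing larger k can only keep or grow the hit set, which is why the reported accuracy rises from 26% at k=1 to 47% at k=5.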
____________________________________________________________________________________________________________________

The time and location at which an image is captured indirectly define the appearance of the scene; for outdoor scenes, however, the more directly relevant variables are the scene structure, local weather conditions, and the position of the sun. In this project, we introduced AMOS+C, a large geo-tagged dataset of archived time-lapse imagery from over 1200 outdoor scenes, annotated with geo-location and weather metadata collected from multiple sources. Through validation via crowdsourcing, we estimated the reliability of this automatically gathered data. We then used the dataset to investigate the value of direct geo-temporal context for the problem of predicting scene appearance, and showed that explicitly incorporating the sun position and weather variables significantly reduces reconstruction error.
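The core comparison can be sketched on synthetic data: fit one appearance model on time-of-day alone and another that also sees sun elevation and cloud cover, then compare reconstruction error. Everything below (the feature names, the scalar "brightness" target, the linear least-squares model) is a hypothetical stand-in for illustration, not the actual AMOS+C pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# synthetic stand-ins for geo-temporal and weather context
time_of_day = rng.uniform(0, 24, n)
sun_elev = np.clip(np.sin((time_of_day - 6) / 12 * np.pi), 0, None)  # daylight curve
cloud = rng.uniform(0, 1, n)                                          # cloud-cover fraction

# scene "brightness" driven directly by sun elevation and cloud cover, plus noise
brightness = 0.8 * sun_elev * (1 - 0.5 * cloud) + rng.normal(0, 0.05, n)

def reconstruction_mse(X, y):
    """Mean squared error of a least-squares linear fit with an intercept term."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ coef) ** 2))

err_time_only = reconstruction_mse(time_of_day[:, None], brightness)
err_with_weather = reconstruction_mse(
    np.column_stack([time_of_day, sun_elev, cloud]), brightness)
print(err_with_weather < err_time_only)  # weather-aware model reconstructs better
```

Because brightness is non-monotonic in clock time but nearly linear in sun elevation, the model given the direct covariates achieves a much lower error, mirroring the qualitative finding of the project.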
____________________________________________________________________________________________________________________

In this project, we propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length, and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear-sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.
____________________________________________________________________________________________________________________

In the VOEIS project, I worked on visualizing massive amounts of environmental monitoring sensor data collected from Kentucky Lake and Flathead Lake. As part of this project, we also collaborated with research groups at the University of Kentucky and Montana State University to create high-resolution mosaics from aerial imagery of the lakes' surroundings.
____________________________________________________________________________________________________________________