Mapping locations with limited infrastructure or heightened danger is difficult. Satellite imagery, traced via community platforms such as OpenStreetMap, is the primary method. Ground-truth data collection, however, is difficult and expensive. My repeated mapping of Mogadishu, for example, required dozens of people, technical support, and months of work. So when a disaster or act of social violence necessitates a rapid 'big picture' understanding of what is happening and where - how can we do this faster? More importantly, how can we do it in a way that is less culturally subjective and more universal?
In my current design research, I seek new methods to quickly understand complex environments. One output of this research is the fusion of machine learning and social network analysis with urban photography. The objective of this work is to move away from the rigidity of discrete computational data within complex environments (which frequently encodes deep cultural bias) and to utilize the fluid and subjective content of photography as data within urban computation.
This work has generated an urban systems map that captures the shape and composition of neighborhoods relative to the variety of source data (see above). It also captures the non-geographic organizational composition of urban systems - such as a measure of robustness. Possible applications include rapid urban assessments, culturally relevant indicator determination for economic or health appraisals, and rapid infrastructure mapping for emergency and humanitarian relief.
3,000 personally collected images from Mogadishu were processed with Caffe. The machine learning classifications were not culturally tuned to the content of the images - that work remains for the next iteration - but were instead derived via a generic, untuned classifier trained on the MIT CSAIL Places database. This work was assisted by Geoffrey Morgan.
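The tagging step can be sketched as follows: each image is reduced to its top-k scene labels from the classifier's score vector. The classifier below is a stub standing in for the actual Caffe/Places model; the label names and score values are illustrative assumptions, not the project's real output.

```python
# Sketch: reduce each image to the k highest-scoring scene labels.
# The "predictions" dict is a stand-in for a Caffe/Places classifier's
# per-image softmax output (hypothetical labels and scores).

def top_k_labels(scores, k=3):
    """Return the k highest-scoring scene labels for one image."""
    return [label for label, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

# Stubbed classifier output for two images (illustrative values only).
predictions = {
    "img_0001.jpg": {"hospital": 0.61, "house": 0.22, "market": 0.09, "road": 0.08},
    "img_0002.jpg": {"security": 0.48, "house": 0.30, "hospital": 0.12, "road": 0.10},
}

tags = {name: top_k_labels(scores) for name, scores in predictions.items()}
# tags["img_0001.jpg"] → ["hospital", "house", "market"]
```

In the actual pipeline these labels come from a deep network rather than a lookup table, but the reduction from scores to a small keyword set per image is the same.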
Image descriptions derived from the machine learning included phrases such as "security, medical, house, hospital." To validate the accuracy of these keywords, samples were randomly compared against on-the-ground urban assessment data I collected in Mogadishu in 2013. The resulting keyword index was then consolidated by keyword co-occurrence (see grey/black visualization), and the three largest concentrations of tagged images were extracted (1,816 images).
A Manhattan-distance analysis was applied to this new index, essentially clustering similar data points by walkability to capture the "neighborhood footprint" of relationships. The result is a map that can be generated instantly from unstructured image data, relaying the distribution, layering, and user-interaction patterns of urban infrastructure.
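One way to read the walkability clustering is as a single-linkage grouping under Manhattan (L1) distance: two images fall in the same neighborhood footprint when one can reach the other through a chain of points, each step within a walkable L1 radius. The sketch below assumes projected grid coordinates and an illustrative radius; it is not the project's exact procedure.

```python
# Sketch: single-linkage clustering by Manhattan distance. Points are merged
# into one cluster whenever they lie within a walkable L1 radius of any
# existing member (the radius value here is purely illustrative).

def manhattan(p, q):
    """L1 ("city block") distance between two grid coordinates."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def cluster(points, radius):
    clusters = []
    for p in points:
        # Find every existing cluster the new point can "walk" into.
        joined = [c for c in clusters if any(manhattan(p, q) <= radius for q in c)]
        merged = [p] + [q for c in joined for q in c]
        clusters = [c for c in clusters if c not in joined] + [merged]
    return clusters

# Hypothetical projected image coordinates, not real Mogadishu data.
pts = [(0, 0), (1, 1), (10, 10), (11, 10)]
groups = cluster(pts, radius=3)
# → two groups: {(0, 0), (1, 1)} and {(10, 10), (11, 10)}
```

The convex footprint of each resulting group then traces the "neighborhood" shape drawn on the map.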