External Contextual Metadata

Descriptions of context in our photographs are often limited to the visual plane, a timestamp, camera specifications and a GPS location. With new media and artificial intelligence, there has been a trend towards automatically describing the photos we take in greater detail. This kind of data is frequently collected and analyzed within corporations, but until now it has not been available to consumers as a single, transmittable unit.

With Gaia Gate, I sought to investigate how we could take this further by integrating ‘External Contextual Metadata’ (ECM), a new category of metadata that describes the context surrounding a digital object rather than the object itself.
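As a rough illustration, an ECM record might bundle readings about the environment at the moment of capture. The sketch below is a minimal example; every field name is hypothetical rather than a fixed schema:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ECMRecord:
    """A sketch of an External Contextual Metadata record.

    All fields are illustrative -- a real deployment would define its
    own vocabulary of contextual signals.
    """
    captured_at: str                       # ISO 8601 timestamp of capture
    weather_summary: str | None = None     # e.g. "light rain, 14 C"
    ambient_noise_db: float | None = None  # sound level around the photographer
    nearby_places: list[str] = field(default_factory=list)  # points of interest

record = ECMRecord(
    captured_at="2023-05-04T16:20:00Z",
    weather_summary="light rain, 14 C",
    ambient_noise_db=62.5,
    nearby_places=["riverside park"],
)
print(json.dumps(asdict(record), indent=2))  # the unit that travels with the photo
```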

The Multimodal Framework

Multimodality refers to multiple modes of perception: moving beyond the pictorial to an enriched mode of 'seeing'. By combining traditional smartphone photography with additional contextual data, we can create multimodal images that represent the many facets of a moment.

These images can be transmitted through social platforms and ultimately rendered through compatible applications.
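One plausible transport (a sketch, not necessarily the mechanism Gaia Gate uses) is to serialize the ECM payload as JSON and store it inside an existing EXIF container, so it travels with the image file itself. The example below uses the piexif library's UserComment helper:

```python
import json
import piexif
import piexif.helper

def embed_ecm(jpeg_path: str, ecm: dict) -> None:
    """Serialize an ECM payload as JSON and store it in the EXIF UserComment tag."""
    exif_dict = piexif.load(jpeg_path)
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = piexif.helper.UserComment.dump(
        json.dumps(ecm)
    )
    # Write the updated EXIF block back into the JPEG in place.
    piexif.insert(piexif.dump(exif_dict), jpeg_path)

def extract_ecm(jpeg_path: str) -> dict:
    """A compatible application recovers the payload the same way."""
    exif_dict = piexif.load(jpeg_path)
    raw = exif_dict["Exif"][piexif.ExifIFD.UserComment]
    return json.loads(piexif.helper.UserComment.load(raw))
```

One design caveat with this approach: many social platforms strip EXIF on upload, so a format intended for social transmission may also need a sidecar payload.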

Mobile Metrics

We can source External Contextual Metadata from a variety of channels on our smartphones: onboard sensors, web APIs, ArcGIS maps, and the outputs of statistical and artificial-intelligence processing algorithms.
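The snippet below sketches how readings from such channels might be merged into a single ECM record; the reader functions are hypothetical stand-ins for platform sensor bindings and web-API calls:

```python
import datetime

def read_pressure_hpa() -> float:
    # Hypothetical stub: on a real device this would call the barometer API.
    return 1013.2

def read_weather_summary() -> str:
    # Hypothetical stub: would query a weather web API for the current location.
    return "light rain"

def collect_ecm(sources: dict) -> dict:
    """Poll each contextual source and stamp the record with a UTC capture time.

    `sources` maps an ECM field name to a zero-argument reader callable.
    """
    record = {
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat()
    }
    for name, read in sources.items():
        try:
            record[name] = read()
        except Exception:
            record[name] = None  # a failed source should not block the capture
    return record

ecm = collect_ecm({
    "ambient_pressure_hpa": read_pressure_hpa,
    "weather_summary": read_weather_summary,
})
```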

Great Potential

ECM lends itself to many novel, cross-disciplinary applications.

The technology could benefit photography, archiving, application development, surveillance and statistical analysis, among other fields.
