External Contextual Metadata
Descriptions of context in our photographs are typically limited to the visual plane, a timestamp, camera specifications, and a GPS location. With new media and artificial intelligence, there has been a trend toward automatically describing the photos we take in greater detail. This kind of data is frequently collected and analyzed within corporations, but until now it has not been available to consumers in the form of a single, transmittable unit.
With Gaia Gate, I sought to investigate how we could take this further by integrating ‘External Contextual Metadata’ (ECM), a new category of metadata that describes the context surrounding a digital object rather than the object itself.
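To make the idea concrete, here is a minimal sketch of what an ECM record could look like. The field names and values are hypothetical illustrations, not Gaia Gate's actual schema; the point is that context surrounding the capture, invisible in the pixels themselves, travels alongside conventional EXIF-style data as one serializable unit.

```python
import json

# Hypothetical ECM record. All field names here are illustrative,
# not a real or proposed standard.
ecm = {
    # Conventional capture metadata:
    "captured_at": "2023-06-14T18:32:05Z",
    "location": {"lat": 51.5072, "lon": -0.1276},
    # Context *around* the photo, not visible in the image itself:
    "weather": {"condition": "light rain", "temp_c": 14.2},
    "ambient": {"sound_level_db": 62, "light_lux": 400},
    "nearby": {"events": ["street market"], "air_quality_index": 31},
}

# Serialized as a single, transmittable unit.
payload = json.dumps(ecm)
```

JSON is used here only because it is a familiar, widely supported container; any compact serialization would serve the same role.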
The Multimodal Framework
Multimodality refers to multiple modes of perception: moving beyond the pictorial to an enriched mode of 'seeing'. By combining traditional smartphone photography with additional contextual data, we can create multimodal images that represent the many facets of a moment.
These images can be transmitted through social platforms and ultimately rendered through compatible applications.
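One way such a multimodal image could travel as a single unit and be unpacked by a compatible renderer is sketched below, using only the standard library. The container layout (`image_b64` plus an `ecm` object) is invented purely for illustration and is not Gaia Gate's actual transport format.

```python
import base64
import json


def pack_multimodal(image_bytes: bytes, ecm: dict) -> str:
    """Bundle pixels and contextual metadata into one JSON container.

    The layout is illustrative only, not a real standard.
    """
    return json.dumps({
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
        "ecm": ecm,
    })


def unpack_multimodal(payload: str) -> tuple[bytes, dict]:
    """What a compatible application would do: recover the pixels
    and the surrounding context together for rendering."""
    obj = json.loads(payload)
    return base64.b64decode(obj["image_b64"]), obj["ecm"]


# Round-trip demo with stand-in image bytes.
img = b"\xff\xd8\xff\xe0 fake jpeg bytes"
payload = pack_multimodal(img, {"weather": "light rain"})
pixels, context = unpack_multimodal(payload)
assert pixels == img and context["weather"] == "light rain"
```

Base64-encoding the image keeps the whole unit text-safe, so it can pass through platforms that only reliably carry text payloads; a production format would likely use a binary container instead.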