To understand how much information metadata can communicate, as well as to get a better grasp of structured data, let's take a look at several different schemas. You'll be able to see how they are purpose-driven, and also see how some of them overlap. This is helpful in understanding how to make use of metadata more effectively.
Named for Dublin, Ohio, where the standard was created in 1995, Dublin Core is a metadata standard that started with 15 fields for describing digital files. It was an early attempt to describe digital objects with a very broad set of characteristics. Although it was not specific to images, Dublin Core, along with the IPTC standard, has formed the foundation of modern metadata. Much of the Dublin Core schema has made its way into the IPTC standard.
If you look at XMP metadata notation, you can still see the Dublin Core. Keywords, for instance, are actually written to a field called dc:subject, which references the old Dublin Core Subject field, and the photographer's name is written to the dc:creator field. This shared syntax is a legacy of the development and merging of metadata standards.
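To make this concrete, here is a sketch that pulls the Dublin Core fields out of a minimal XMP packet using only Python's standard library. The packet itself is hand-written for illustration, not taken from a real file; the namespaces and the dc:creator/dc:subject structure follow the XMP convention of storing the creator as an ordered rdf:Seq and keywords as an unordered rdf:Bag.

```python
import xml.etree.ElementTree as ET

# Hand-written, minimal XMP packet (illustrative, not from a real file).
XMP = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
      xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:creator><rdf:Seq><rdf:li>Jane Photographer</rdf:li></rdf:Seq></dc:creator>
   <dc:subject><rdf:Bag>
     <rdf:li>sunset</rdf:li><rdf:li>beach</rdf:li>
   </rdf:Bag></dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

NS = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
      "dc": "http://purl.org/dc/elements/1.1/"}

root = ET.fromstring(XMP)
# The dc: prefix is the Dublin Core legacy described above.
creator = root.find(".//dc:creator/rdf:Seq/rdf:li", NS).text
keywords = [li.text for li in root.findall(".//dc:subject/rdf:Bag/rdf:li", NS)]
print(creator)   # Jane Photographer
print(keywords)  # ['sunset', 'beach']
```

Any software that understands the dc namespace URI can read these fields, regardless of which application wrote them.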
EXIF is a standard used by camera manufacturers to store camera-created information in the file. It was created by the Japan Electronic Industries Development Association (JEIDA) in 1995, and the most recent revision was made in 2016. Figure 4-15 shows some of the EXIF values that are written to a Nikon D750 file and an iPhone photo. EXIF information is broken down into two broad types:
- Media encoding – This includes information about the structure of the file, the color model that’s used, as well as the resolution and pixel dimensions of the image. This stuff is pretty invisible to most users, and is used by programs to decode the image file.
- Camera settings – This includes the date and time, as well as the exposure, lens, and color-balance settings. These settings are generally understood by imaging and DAM software, and can help organize the images. EXIF can also contain GPS data, which we’ll look at later in the chapter.
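Under the hood, the EXIF specification assigns each of these settings a numeric tag ID, and software maps the IDs to readable names. The sketch below uses a handful of real tag IDs from the Exif spec; the sample values are invented for illustration.

```python
# A few real EXIF tag IDs from the Exif specification, mapped to their names.
EXIF_TAGS = {
    0x829A: "ExposureTime",
    0x829D: "FNumber",
    0x8827: "ISOSpeedRatings",
    0x9003: "DateTimeOriginal",
    0x920A: "FocalLength",
}

# Raw tag->value pairs as a reader might pull them from a file's EXIF block.
# Values are invented; rationals are stored as (numerator, denominator).
raw = {
    0x829A: (1, 250),                # 1/250 sec
    0x829D: (28, 10),                # f/2.8
    0x8827: 400,                     # ISO 400
    0x9003: "2016:05:14 18:22:03",   # EXIF date format
    0x920A: (500, 10),               # 50mm
}

def decode(raw_tags):
    """Translate numeric tag IDs into a human-readable dict."""
    return {EXIF_TAGS.get(tag, f"Unknown(0x{tag:04X})"): value
            for tag, value in raw_tags.items()}

for name, value in decode(raw).items():
    print(f"{name}: {value}")
```

The `Unknown(...)` fallback is where interoperability breaks down in practice: a proprietary tag ID that isn't in a reader's table can only be passed through or ignored, which is the problem described next.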
Although the EXIF specification is standardized, it is implemented differently in different camera models. Therefore, not every EXIF field will exist in every file, and even when a field is present, the data may not be readable by third-party software. Sometimes this is because a camera manufacturer wants to hide the information, and sometimes it's just a by-product of the way the camera writes the file. Camera, computer, and software makers are attempting to fix this situation through the Metadata Working Group.
Metadata Working Group
The Metadata Working Group (MWG) is yet another initiative that seeks to standardize the interchange of metadata. The effort was spearheaded by Microsoft, Adobe, Apple, Canon, Nokia, and Sony. The group focused on rules for metadata handling, as well as some of the thornier problems like face recognition and other tags that apply to a specific part of the image.
The metadata handling rules that the MWG published serve as guidelines for software manufacturers when creating derivative files or otherwise processing images. Because so many schemas are in use, and no single application understands them all, the MWG worked to create a standardized treatment of this information.
The other big problem the MWG tackled is called region tagging. When Facebook or Lightroom or your camera draws a box around a face and allows you to add a name, you are tagging a region of the image. This is a useful capability and will be an area of significant growth in the computational tagging era. As AI and ML tools can better identify individual faces or objects in an image, it will be handy to have those tags refer directly to the part of the image that has been identified.
While it's relatively easy to draw a box on an image by using x-y pixel coordinates, it's harder to make this information durable. The x-y positions need to be recalculated when an image is resized, rotated, or cropped. The solution the MWG came up with prescribes a process for maintaining these tags even in the event of this kind of transformation of the image. While this is not universally supported, at least we have an agreed-upon method to preserve the data.
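The core idea can be sketched in a few lines: store the region as fractions of the image dimensions rather than as raw pixels, so the box survives a resize. The function names and tuple layout here are illustrative, not the actual MWG XMP schema.

```python
def normalize(box_px, width, height):
    """Convert a pixel box (x, y, w, h) to fractions of the image size."""
    x, y, w, h = box_px
    return (x / width, y / height, w / width, h / height)

def to_pixels(box_norm, width, height):
    """Map a normalized box back onto a (possibly resized) image."""
    x, y, w, h = box_norm
    return (round(x * width), round(y * height),
            round(w * width), round(h * height))

# A face tagged at (800, 400, 200, 200) on a 4000x3000 original...
region = normalize((800, 400, 200, 200), 4000, 3000)

# ...still lands on the face after the image is resized to 1000x750.
print(to_pixels(region, 1000, 750))  # (200, 100, 50, 50)
```

Resizing comes free with this representation; cropping and rotation still require the coordinates to be rewritten, which is why the MWG guidance spells out a process rather than just a storage format.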