Category Archives: Blog

  • Understanding color in digital images

    This week, we’ll be looking at the way color is represented in digital images. Good digital asset management requires that color be handled correctly so that files reproduce predictably.

    At the heart of digital imaging is the translation of visual information into numbers. A digital image is basically a rectangle of dots, each representing a color or tone – commonly referred to as a bitmap or a pixmap. Most traditional digital images share a few characteristics: the color model, color profiles and bit depth. Let’s look at each of these.

    COLOR MODELS 

    There are a number of different ways of turning color into numbers. Some use the physics of light waves, some rely on the way the eye perceives color, and some are built around the way ink combines to create colors. Each of these color models is useful in different ways. The vast majority of computer-based digital images use the RGB model. 

    Let’s look at some of the options.

    Indexed color 

    The earliest digital images were often made with indexed color. In this arrangement, the color of each pixel is chosen from a list of possible colors. This can range from 1-bit color (two colors, usually pure black and pure white) up to 8-bit color (256 distinct colors). Indexed color is very economical in file size, but it does a poor job of rendering photographic images. It was soon superseded by RGB color as the standard method.
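
    As an illustration, here’s a minimal sketch of producing an indexed-color image with the Pillow library in Python (the file names are placeholders):

        from PIL import Image

        # Open a 24-bit RGB photo and reduce it to 8-bit indexed color.
        rgb = Image.open("photo.jpg").convert("RGB")
        indexed = rgb.quantize(colors=256)  # builds a 256-entry palette ("P" mode)
        indexed.save("photo_indexed.png")

        # Each pixel is now a single index into the palette rather than a
        # full RGB triplet, which is why indexed files are so economical.
        print(rgb.mode, indexed.mode)  # RGB P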

    RGB 

    RGB is a color model that employs the three primary colors of light: red, green and blue. Mixed in varying proportions, these can produce a vast range of colors. Conventional RGB digital images assign a numerical value for red, green and blue to each pixel. The table below shows examples of the relationship between RGB values and the colors they produce.

    RGB is the native underlying color model for most digital cameras, computer monitors, and lots of image editing software. There are a lot of variations of exactly how to implement an RGB color space, as we will see in the next post.

    Here’s a diagram that simulates RGB color. As you add one color to another, the resulting color is a brighter combination of the two. When all three colors are added together, you get white light. This is often referred to as Additive Color.

    RGB values can describe an enormous range of visible colors. The higher the number, the brighter that particular primary. Here are some sample colors and their 8-bit RGB values:

        Red        255,   0,   0
        Green        0, 255,   0
        Blue         0,   0, 255
        Yellow     255, 255,   0
        Magenta    255,   0, 255
        Cyan         0, 255, 255
        Black        0,   0,   0
        White      255, 255, 255
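
    To make the numbers concrete, here’s a small Pillow sketch that writes a few of these sample values into pixels (the output file name is a placeholder):

        from PIL import Image

        # In an RGB image, every pixel is a (red, green, blue) triplet,
        # with each channel running from 0 (off) to 255 (full brightness).
        img = Image.new("RGB", (2, 2))
        img.putpixel((0, 0), (255, 0, 0))      # pure red
        img.putpixel((1, 0), (0, 255, 0))      # pure green
        img.putpixel((0, 1), (0, 0, 255))      # pure blue
        img.putpixel((1, 1), (255, 255, 255))  # all three at full strength: white
        img.save("rgb_samples.png")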

    Grayscale images 

    In addition to RGB, it’s also very common to run across grayscale images: ones that have brightness information, but no color information. In a grayscale image, each pixel has a single value ranging from black to white. Grayscale is often used for black and white imagery (though RGB color is also frequently used). Grayscale is also used for alpha channels and masks.
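
    Here’s a quick grayscale conversion sketch with Pillow, whose “L” mode uses the documented ITU-R 601 luma weights (file names are placeholders):

        from PIL import Image

        # Convert an RGB image to 8-bit grayscale ("L" mode).
        # Pillow computes L = 0.299*R + 0.587*G + 0.114*B for each pixel.
        gray = Image.open("photo.jpg").convert("L")
        print(gray.getpixel((0, 0)))  # a single 0-255 brightness value, no color
        gray.save("photo_gray.png")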

    CMYK color  

    Commercial printing, like that which is used for books and magazines, is typically done with a combination of cyan, magenta, yellow and black ink. The first three are the primary colors in pigments, and black is added for extra punch, since it’s difficult to get a deep black with CMY inks. 

    CMYK color is, therefore, a color model that is useful for preparing images for print reproduction. In CMYK color, the numbers appear to run in reverse compared to RGB. Low numbers indicate less ink, so they describe lighter colors. And high numbers indicate a lot of ink and therefore darker colors. 

    Because it has four color channels instead of three, a CMYK version of an image is one third larger than an equivalent RGB image. An uncompressed 8-bit 3000 × 2000 pixel image, for example, occupies about 18 MB in RGB (three bytes per pixel) but about 24 MB in CMYK (four bytes per pixel).
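
    You can verify the size arithmetic with Pillow. Note that Pillow’s built-in CMYK conversion is a naive one – real prepress conversion uses ICC profiles – but the channel count is the same (the file name is a placeholder):

        from PIL import Image

        rgb = Image.open("photo.jpg").convert("RGB")
        cmyk = rgb.convert("CMYK")

        # Uncompressed, each channel takes one byte per pixel.
        print(len(rgb.tobytes()))   # width x height x 3 bytes
        print(len(cmyk.tobytes()))  # width x height x 4 bytes: one third larger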

    Here is a simulation of CMYK color. Since ink absorbs light, the result gets darker as you combine colors. This is the opposite of RGB color and is often referred to as Subtractive Color.

    Other color models 

    There are a number of other color models, including CIELAB, which is modeled on the physiology of the human eye, and YCbCr, which is used for video.
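
    As a small example, Pillow can convert an image to YCbCr, the luma/chroma model used inside JPEG and most video systems (the file name is a placeholder):

        from PIL import Image

        # Split an image into a luma (brightness) channel and two chroma channels.
        ycbcr = Image.open("photo.jpg").convert("YCbCr")
        y, cb, cr = ycbcr.split()
        print(y.size, cb.size, cr.size)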

    In the next post, we will look at color profiles, which are specific flavors of the color models described above.

  • The Evolution of Imaging Applications

    This is the fourth post in a set of four which outline the nature of images, formats and applications.

    The final component of imaging evolution is the software environment used to create and make use of images. The development of the digital image has tracked seismic advances in software and hardware. Let’s take a look at several eras in computing that have driven the use of images to new levels.

    The personal computer

    Digital images were primarily used by a handful of news organizations until the 1980s, when it became possible to work with images on low-cost computers. This spurred a need for interchangeable file formats rather than proprietary, application-specific ones.

    The desktop publishing revolution

    As the tools to create print and web materials matured, consistent reproduction became an urgent requirement. This led to important standardization, such as the formalization of the TIFF and JPEG specifications and the color management effort of the ICC.

    The digital camera

    In the early 2000s, digital cameras created a brand new set of requirements for image files. Cameras that created raw files spurred the development of non-destructive, read-only applications. And the proliferation of digital images created an asset management problem that was previously only relevant to major media companies. 

    Mobile photography

    As camera-equipped smartphones became nearly universal in the 2010s, the number of photos taken rose exponentially, as did the range of purposes photos were put to. Imaging environments were extended to or integrated with cloud-based applications. And social media platforms have become essential partners in understanding the meaning and value of images.

    Cloud workflow

    The migration of computing workflow from “my stuff on my computer” to “my stuff everywhere” has driven new requirements in bandwidth and connectivity. Images are turning into cloud-hosted connected objects, which requires proxy images and application-specific renderings.

    Computational photography

    The newest frontier of image evolution is the productization of computational photography, primarily through advanced mobile phones. Images now include depth information to create portrait renderings, as well as Augmented Reality integration in mobile phone software. 

    Next week, we’ll take a deeper dive into the characteristics of digital images, starting with color rendering.

  • The Evolution of Formats

    This is the third post in a set of four which outline the nature of images, formats and applications.

    As the complexity of digital images has grown, new formats have appeared to enable new capabilities. Formats have always been largely purpose-driven: they are created as standardized containers that facilitate saving particular types of data, usually in service of a particular workflow. Some are designed as general-purpose containers, while others are tailored to a narrow purpose.

    Let’s look at the evolution of formats in light of the purposes they serve in the digital photography environment. 

    Interchangeable formats

    The Compuserve GIF format was introduced in the late 1980s and became one of the most useful formats for image interchange between computer systems and programs. This was followed by the TIFF and JPEG standards in the 1990s. 

    Highly efficient image storage

    The JPEG standard was created to enable high quality images to be saved and stored with full color and an efficient compression scheme. It became a hugely successful standard. 

    High quality image storage

    Other applications were more oriented to the creation of a standard format that could store images with the highest quality possible. TIFF was created for this purpose.

    Consistent color

    In the 1990s, the International Color Consortium (ICC) created a method to standardize the way color is encoded and decoded, and to convert color for a consistent appearance on different devices. This capability was added to JPEG through the official standard, and to TIFF by unofficial consensus.

    Multi-part master file storage

    As images came to depend on multiple embedded files, masks, alpha channels and other components, it became commonplace to store as many of these components in a single file as possible. Adobe specified the methods for doing this in the PSD format specification, and they have generally been adopted into TIFF usage, even though they are not an official part of the TIFF standard.

    Digital camera originals

    Digital camera raw images introduced the need for a dedicated file type that could support the encoding these files required, since raw image data is encoded very differently than “standard” image data. The TIFF/EP format was designed for this use; some formats, like Nikon’s NEF, are quite similar to it, while others take a more independent approach.

    Digital camera master files

    Digital cameras led to a wide variety of approaches to file encoding and structure, with many of them undocumented. The DNG format was created to bring standardization to raw image storage. Not only does it allow for the standardized storage of camera original information, DNG also provides a way to store items that are useful for good post-production workflow. 

    Mobile/Cloud native capabilities

    Taking a cue from the capabilities of DNG, the HEIF format provides a flexible storage container that can accommodate multiple versions of an image, depth data, and even alternate versions created for particular uses. DNG was also updated to include some cloud workflow components, particularly the use of proxy files.

    Most of these advances in images and formats became necessary as the capabilities of hardware and software evolved. In the next post, we will examine how applications have evolved along the same timeline.

  • The Evolution of Digital Image Data

    In photomechanical imaging, the characteristics of the image are self-evident: it includes the tonal and color information, along with a grain or dot structure. This corresponds to the most basic digital images, which were originally just a rectangle full of colored dots representing tone, color and resolution. 

    As technology evolved, digital images have become far more complex. At first, this was playing catch-up with physical images. For more than a decade now, digital images have included components that extend our understanding of exactly what an image is. Here are some examples of that evolution. 

    Color management

    The numbers used to describe colors can be calculated in many different ways. Color management is a system for specifying exactly what is meant by a numerical value. It enables standardized color rendering across devices that may have very different color signatures (or very different models of rendering color).
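
    As a sketch of what a color-managed conversion looks like in code, here is Pillow’s ImageCms module (a wrapper around LittleCMS) converting pixel values from sRGB to another RGB space. The profile file name is a placeholder for a profile installed on your system:

        from PIL import Image, ImageCms

        img = Image.open("photo.jpg").convert("RGB")

        # Profiles define what the numbers mean in each color space.
        src_profile = ImageCms.createProfile("sRGB")
        dst_profile = ImageCms.getOpenProfile("AdobeRGB1998.icc")

        # Recalculate pixel values so the image looks the same in the new space.
        converted = ImageCms.profileToProfile(img, src_profile, dst_profile)
        converted.save("photo_adobergb.jpg")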

    Compression

    It’s always been important to shrink digital images down as small as possible for any given use. By standardizing ways to compress and expand image data, storage and transmission bandwidth are conserved. 
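
    A simple illustration with Pillow – saving the same image with lossy and lossless compression (file names are placeholders):

        from PIL import Image

        img = Image.open("photo.tif").convert("RGB")

        # Lossy JPEG: much smaller, but some image data is thrown away.
        img.save("photo_q85.jpg", quality=85)

        # Lossless PNG: larger, but every pixel survives a round trip.
        img.save("photo.png")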

    Multiple images

    For several decades, it’s been possible to place multiple images into a single file to create a new synthesized image. This allows images to be composited and also to be adjusted and re-edited. 

    Transparency

    While most digital images are rectangles, adding the capability to make some pixels transparent allows an image to take on any conceivable shape. 
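
    In code terms, transparency is usually a fourth channel. Here’s a minimal Pillow sketch using RGBA mode (file names are placeholders):

        from PIL import Image

        # RGBA adds an alpha value to every pixel: 0 is fully transparent,
        # 255 is fully opaque.
        img = Image.open("photo.jpg").convert("RGBA")
        img.putpixel((0, 0), (255, 0, 0, 0))  # a red pixel that is invisible
        img.save("photo_alpha.png")  # PNG keeps transparency; baseline JPEG cannot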

    Adjustment instructions

    Adding color and tonal adjustment instructions to an image allows the “photo finishing” to be attached to the image itself, and for it to be readjusted non-destructively. It also allows for multiple interpretations of the same image to be “part” of a single original image. 

    Masks and alpha channels

    Digital images can contain other images – often a monochromatic image that can be used to mask parts of the primary image. This can be used to create transparency in an image, to selectively apply adjustment instructions to regions of an image, or to blend multiple images together.
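
    Here’s a minimal sketch of mask-based blending with Pillow; it assumes the two source images share the same size and mode, and all file names are placeholders:

        from PIL import Image

        fg = Image.open("foreground.jpg")
        bg = Image.open("background.jpg")
        mask = Image.open("mask.png").convert("L")  # single-channel grayscale mask

        # Where the mask is white, the foreground shows through;
        # where it is black, the background does.
        result = Image.composite(fg, bg, mask)
        result.save("blended.jpg")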

    Curved and spherical models

    Digital images may also be created as inherently non-flat photos that make use of curved or spherical geometry. Spherical images can be viewed in a VR environment or in a flat display that is designed to compensate for the curvature of the image. 

    Motion and audio

    Adding multiple sequential images to an image file can create a motion image; such hybrid image files have been around for decades. Likewise, it’s been possible to attach an audio file to an image for a very long time.

    Camera raw images

    Digital cameras created a new type of digital image – one that preserved camera-native information for flexible reprocessing at any point in the future. 

    At this point, the nature of digital images changed radically. Entirely new image editing methods were needed, as were ways to properly store this unprocessed data. 

    Depth information

    Mobile phones and other computational cameras are now creating depth information as an inherent part of the image itself. This is useful for all kinds of purposes, including synthetic depth of field, Augmented Reality features, facial recognition and other visual analysis. 

    Application-specific usage

    Mobile imaging, raw photography and cloud workflow have created a need for even more complex image objects. This includes the need for proxy images and application-specific rendering, all as part of a single object. 

    All of the items above can be integral parts of the image itself, since they inform the nature, understanding and use of the image. Many images will continue to be a simple rectangle of colored dots, but an increasing number will be more complex digital-native objects. In the next post, we will take a look at the ways that formats have evolved along with the images themselves.

  • Images, Formats and Applications

    This week we’re going to examine the relationship between images, formats and applications.  (Note that in this four-part series of posts we use the terms “image” and “photography” in a very expansive sense. It can include both still and moving images, 3-D, and computational imaging.)

    It’s essential to separate your understanding of an image from the file format that may be used to save the image data. While the image is dependent on a file format to exist in digital form, the underlying image is really independent from the file that houses the bits at any particular time. This has been a part of photography since the advent of printable negatives.

    We’ve long understood the photographic image to be distinct from any particular reproduction of that image. The same image may be embodied as a slide, a print or printed in a book, to name a few. 

    The negative, slide or print is a copy of the image, but it’s only one possible copy. It may represent an optimized copy, or a degraded copy, or it may have some special purpose, but the image itself can have an existence that is independent of any single copy. 

    In the digital world, the relationship between the underlying image and the digital media object is more complex because the digital image, when stored, is unrendered. It’s just a set of 1s and 0s. In order to understand how formats relate to images, it’s essential to look at the evolution of three interlocking parts: the image, the formats we use to store the images, and the software that creates, displays or edits the images.

    In most cases, the evolution of images, formats and software environments does not make the old ones obsolete and useless. Quite the contrary: most of the evolution outlined in this series can be seen as the creation of new uses for images, in addition to the existing ones. As the import and use of images in all forms of communication expand, the traditional uses expand as well.

    The fact that people have become accustomed to engaging visuals in their web and mobile communications means that they also demand excellent visuals in print communications. Those tools developed for high quality and consistent color in print usage therefore continue to have a high level of importance. 

    In the following posts, we’ll outline the evolution of this three-part relationship of image, format and application.

  • Why Do I Need a Separate Tool for File Distribution?

    It’s common for a master library application to have some ability to distribute files—and some distribution tools can also function as a library—so why should you use different tools? There are several good reasons, but they all fall under the heading of “it’s very difficult to make something that does both jobs well.”

    Collection and Distribution are cloud-native processes

    In most cases, distribution happens over the internet. This is often true even when access happens at your place of business. Access by mobile devices typically routes through the internet, even for files stored in the same location. As soon as you leave the building—or need to grant outside access—you’re generally using the internet. 

    Collection of media is also typically something that happens over the internet. Files may be emailed or sent by text, but it’s better to submit them through an online tool that allows for some level of tagging at the point of submission.

    Running a secure, internet-accessible library inside your own network is a difficult thing even for good IT departments. Managing users, access controls and permissions—while defending against hacking and other threats—is a full-time job. Using a cloud service for this work takes this burden off you. 

    Use cloud for everything?

    Okay, so can’t I use a cloud service as my main library? Maybe. It depends on what you need to store and how you are going to use it. For simple libraries that are not too large, and don’t have a lot of production requirements, a cloud service may be sufficient. 

    But there are plenty of instances where it makes sense to store raw captures, production files, and sensitive material in an entirely separate place from the distribution copy of the media. Reasons for this include:

    • A cloud service might be needlessly slow and expensive for an entire library of master video files.
    • In most cases the set of media you want to keep is much larger than the set of files you need to distribute. This is typically true even for the production process.
    • The source media—video and image capture files—are also typically very large and unsuitable for general distribution.
    • The very structure of cloud services makes them slower for image optimization and project-making, and the optimization tools are typically less capable than the ones that run locally. (Face it: most professional creative work runs through Adobe software on a desktop at some point in the creative process.)
    • Finally, I think it’s best practice to have a master copy of your media library stored on a device in your own possession. Preservation of media should be a multi-pronged process. Having your own full copy of your media library is a prudent step. 

    A firewall

    Using a separate distribution service also creates a firewall between the user-accessible copy and your primary media archive. Not only does this provide protection against malware or other attack, it is also a structural defense against access to files that are not intended for distribution.

    Next week we’re going to dive into the nuts and bolts of media files and formats. It’s important to have a good grasp of some of these details as you configure your media library.

  • Software Independence

    It’s very intuitive to think of your media collection as being “in” a piece of DAM software, but I think it’s important to understand it a different way. The software that you use to view and organize your collection is really pointing to a set of files and hosting some information about the media. 

    While I may do my organizational work in a particular software environment, the images and much of that organizational work should be able to live independently of the software I’m using at the present time. This includes the files themselves and all the information and curation that the metadata represents.

    Data portability

    Some data may be easily exported to a new hosting application. Some may be exportable, but hard to replicate elsewhere, and some information may be effectively locked in an application forever.

    The relationships that are created by sharing, integrating, embedding and linking may be very difficult to migrate from one application to another. As you consider which applications and services to make use of – and especially how to integrate one with another – you’ll want to consider the limitations you might be creating for yourself and the extent of your “partnership” with any particular piece of software.

    One of our fundamental principles at Tandem Vault is to provide a way for you to export all work you do in our system. It’s your material and your data, and you should be able to take it with you if and when you decide to leave. 

  • Guidelines for Sound Digital Asset Management

    Regardless of the exact details of your system, there are some best practices that everyone should strive for in collection management.

    Standardize 

    As you create and adjust workflow, it’s important to standardize as much of it as possible. Having standard practices helps to prevent mistakes, and also helps you to recognize and recover from mistakes when they do happen. The very process of standardizing practices helps you understand them better. Finally, having standard practices allows you to migrate more easily to new software, hardware and methods when the need for migration arises.

    You can standardize many parts of the workflow, including work order, software, metadata usage and more. 

    Simplify 

    You should try to keep things as simple and straightforward as possible. This may seem like an ironic statement, given the complexity in the media ecosystem, but it’s a really important point. Creating a modern forward-compatible media library can be a complicated endeavor, requiring the integration of many elements. Wherever possible though, you’ll want to simplify. Whether you are deciding on the storage methods you use, the metadata to describe your images, or the software that runs the library, try the simplest approach first, and add complexity as it becomes necessary. 

    One of the main reasons to outline the many components of the entire ecosystem is to help you understand which features you need and which are not important for you. 

    Don’t rely on your (or anyone else’s) memory 

    What you know about your media collection is an essential part of its value. But any information that exists only in the memory of specific people is less valuable than information that is stored as metadata in the library. Not only are you likely to eventually forget some relevant details, but it’s much harder to make use of what you know than of what is written down.

    Attaching information to the media not only helps you remember better, it also allows you – and other people and programs – to make use of the data programmatically. Having a consolidated library provides a durable, centralized place to store important information, adding value both to the media and to the information itself.

    Be comprehensive 

    The more universal your cataloging structures and practices are, the more value and efficiency you can get from your media. Consistency in organization allows for faster and more reliable searching of your collection, and collecting related images together maximizes the value of each individual image. 

    Build for the future 

    In creating a DAM system, you need to allow for growth. Some of this can be foreseen, such as storage needs: you’ll want to make sure that your library can grow as new media comes in. Integrating a library with outside services is also a growing need, and choosing library software that allows for flexible integration will help extend your use of that application.

    Do it once… 

    Everything you do to tag or curate a media collection is an opportunity to add more structure to the collection, and thus to increase its value. When you rate for quality, or make useful tags, or curate media into groups, you are adding knowledge. By doing this work inside your collection management application, you make the work easy to find and repurpose. 

    … but don’t overdo it 

    Once you see the control that good management gives you over your collection, you might find yourself going “DAM happy”. You need to strike a balance between what’s useful and what’s a waste of time. Noting who is in a photo is very useful; labeling each image “looking right,” “looking center,” or “looking left” is probably overkill. The methodology I present starts with the tasks that offer the highest return for your work, and gradually works down through less cost-effective tasks. 

    Watch out for migration triggers 

    Throughout the life of your media collection, there will be events that trigger a need for migration. This could be the need to move to a new software package, or moving to new storage hardware, or some other change in workflow. You’ll want to be on the lookout for these, and plan for successful transition. 

    In Wednesday’s post, we’ll outline the need to understand the media and the metadata as independent from any particular application. 

    This post is adapted from The DAM Book 3.0 which lays out these principles in comprehensive form.

  • Consolidate and Unify

    Any project to implement good asset management will begin with consolidating the material as much as possible. This means you’ll want to create central storage, and bring as much of the relevant material together as possible. 

    Benefits of consolidation

    Consolidating the media is the first step to creating a secure, resilient archive of the media. It’s an essential step in creating good storage and backup. It’s also essential for the eventual migrations that are required for long-term maintenance. 

    Consolidating the files will also help you to tag the files in a universal manner. Tagging from within a single environment, such as one catalog, allows you to be consistent in the keywords and other metadata you use. 

    Consolidating all your media begins with a discovery process to understand how much space all your stuff takes up. Once you have a good idea of that, you’ll need to get enough unified storage so that you can bring it all together. This is typically true, even if you expect to use a cloud-based tool for your primary user access. In most cases it’s faster, easier, cheaper and better to do this locally before uploading to the cloud.

    Unification

    In addition to bringing everything together in one place, it’s also advantageous to unify your DAM practices as much as possible. This includes consistent use of metadata, standardizing media formats, having a consistent naming convention, and normalization of rights information. All of these unifications will pay off in the medium and long term, and some will give dividends in the short term as well.

    Unifying your existing collection will also help you build practices for the future. It’s hard to get your policies exactly right until you have road tested them. And there’s no better road test than one you actually have to drive on (in the car you’ll be driving).   

    Consolidation challenges

    Consolidation can be a daunting step if the media files are spread across many devices (worse yet, many users). It can take time to work through everything, and you might find that the collection is really big. It can also be problematic as new files continue to be created after you have transferred the existing contents. 

    At the opening stage of consolidation, it’s best to make sure you are gathering at least one of everything, and not to worry too much about duplication. If there is duplication that is easy to identify, you can fix it as you go along, but you’ll also have the opportunity to fix it later, once your collection has been consolidated and cataloged. 

    While this was originally written for photographers working on their own collections, the basic principle holds for all kinds of media collections. Consolidation is key to preservation and effective use of the media. 

    Remember the 80/20 rule

    As you consolidate, it’s good to keep the 80/20 rule in mind. In most cases, there is a large percentage of media that can be gathered easily (80%, perhaps) and a smaller percentage that will be hard to find (20%, perhaps). Don’t let perfect be the enemy of good by delaying until 100% of your media has been consolidated. Make a good-faith effort to gather as much as you can, and get to work.

    Next week, we’ll continue our examination of the building blocks of great Digital Asset Management. 

  • DAM Hierarchy of Needs

    In 1943, Abraham Maslow published a hierarchy of human needs, starting with basic survival and moving all the way up to self-actualization. This is a pretty useful metaphor for the way you can approach collection management. In building your DAM ecosystem, I propose an alternate hierarchy of needs – one that starts with the security of the assets, is followed by discoverability, and eventually peaks in curation and distribution of the media. 

    Preserve the media

    We are moving from an imperfect present to a more perfect future. The most basic need is to get to the future with your media collection intact. That primary goal influences everything else. At times, you’ll need to make some choices between expediency and security. I recommend opting for protection and preservation.  

    Ensure forward compatibility

    We want to bring our images with us into the future. Ensuring that we can do this requires centralization of the archive, occasional migration to new formats or storage, and the use of software and techniques that don’t send you down a dead-end road. 

    Find media when you need it

    While preserving images is the main goal, it’s not the only important one; you need to be able to find images when you want them. If you can’t find an image, you can’t use it, no matter how securely it has been stored and backed up. Images need to be cataloged and tagged.

    Make the images look right

    Sometimes images look great right out of the camera, but many times they need some additional optimization. The approach you take to image optimization will have important ramifications for your entire collection management. Using (mostly) non-destructive, parametric, read-only image editors, we can construct a workflow that provides maximum flexibility. (I’ll have more on this a little later.)

    Curate: Make cool stuff with your media

    Curation takes place in the upper reaches of the hierarchy. Selecting just the right image to illustrate some point, or putting media together to tell a story is the process of curation. If we’ve done our work properly on the lower levels, we remove the busywork from curation, and we spend our time making and refining selections, crafting our photographic speech.

    Distribute, share, integrate, embed

    In order to tell a story, or do any other communication with imagery, you’ll need to make your images available for others to see. This might be a simple export, or it could be some type of persistent connectivity. You’ll want as much information as possible to be retained in the catalog, since the usage history of your media is some of the most valuable data over time.