Photography & Color Theory, Part 2: The Science of Primary Colors

Rating: 5.00 based on 4 Ratings
  By Nathaniel Eames
In part one of this series on photography and color theory, which I advise you to read first, we discussed why the RGB (Red, Green, Blue) color wheel is more versatile and important for photographers than the RYB (Red, Yellow, Blue) color wheel often used in painting and taught in schools. To recap, RGB is more versatile because it allows for a broader range of colors through color mixing than RYB, thanks to its more accurate and isolated stimulation of the color detectors in the human eye known as cones. RGB is more important for a photographer to master because it is how your digital camera captures light, how your computer screen recreates that light, and how Lightroom and Photoshop manipulate that light.

In part two of this series, we’ll go into a bit more detail about how RGB works and how it relates to CMYK, then give a brief overview of how these color standards relate to your camera and ultimately to your task as a photographer. Get ready for some lite science (pun intended)!

The human eye is made up of rods and cones. Rods see in black and white and are great for low-light vision and detecting contrast. Cones are less sensitive but see in color. Interestingly, in very dark environments you actually only see in black and white because there isn't enough light to stimulate your eyes' cones. The cones are divided into three groups (usually), distributed throughout the back of your eyes. Each of the three cone types is sensitive to a different spectrum of color, and the relative stimulation of each of the three cone types tells your brain what color you are seeing. Below is a graph (yay graphs!) of the sensitivity of each cone type (on the y-axis) vs the wavelength of light (on the x-axis).

As you can see, the three cone types are called S, M, and L cones. The S cones are sensitive to deep Blue through Cyan-Green light with a peak around Blue. The M cones are sensitive to Blue through Red light with a peak around Green. And the L cones are sensitive to Blue through deep Red light (almost all visible color) with a peak around Yellow.

The idea of any color theory is to isolate the stimulation of each cone type, then mix these unique stimulations to trick the eye into thinking it is seeing a specific color. It’s important to note that the isolation of a cone is what we’re after, not the maximum stimulation of that cone. The biggest challenge the human eye presents to color theorists is the almost complete overlap in the sensitivity of the M and L cones. Isolating any one of these cones is difficult, and using Yellow light to stimulate the L cone doesn’t isolate it as much as using Red light, even though the L cone is more sensitive to Yellow than Red. One of the best, if not the best, primary color combinations humanity has devised to isolate our own cones is Red for the L cone, Green for the M cone, and Blue for the S cone, or RGB.
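The isolation-versus-stimulation trade-off above can be sketched in code. This is a deliberately crude model: the peak wavelengths (roughly 420nm for S, 534nm for M, 564nm for L) are common textbook estimates, but the Gaussian shape and the shared 50nm width are invented purely for illustration and don't match real cone response data.

```python
import math

# Crude, hypothetical model of cone sensitivity as Gaussian curves.
# Peak wavelengths are rough textbook estimates; the 50nm width is
# an assumption made only so the example runs.
PEAKS_NM = {"S": 420, "M": 534, "L": 564}
WIDTH_NM = 50

def cone_response(wavelength_nm):
    """Relative stimulation of each cone type by monochromatic light."""
    return {cone: math.exp(-((wavelength_nm - peak) / WIDTH_NM) ** 2)
            for cone, peak in PEAKS_NM.items()}

yellow = cone_response(570)  # L responds strongly, but so does M
red = cone_response(650)     # L responds weakly, but M barely at all

# Red yields a much higher L-to-M ratio than Yellow, so it isolates
# the L cone better even though it stimulates the L cone less.
print(red["L"] / red["M"], yellow["L"] / yellow["M"])
```

Even in this toy model you can see the article's point: 650nm Red light produces a weaker absolute L response than 570nm Yellow light, but a far better ratio of L to M stimulation.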

As you can see, using Yellow as a primary instead of Green doesn't isolate the M cone from the L cone as effectively. But going beyond Green towards Cyan begins to stimulate the S cone on top of the M and L cones, and is also less effective. Before modern science could accurately track the human eye's cone sensitivity (and before we even realized there were cones in the eye, or that light was a wave, for that matter), people were guessing at the best primary colors by eye, and they actually got pretty close with RYB.

Piet Mondrian, “Composition in Red, Yellow, and Blue” (1942)

So, now that you have a far deeper understanding of your own eyeball than you probably ever needed or wanted, let's go a little deeper into your brain, shall we? As it turns out, there are multiple ways to see the same color, because your eye relays the same signal to your brain no matter which of those ways produced it.

Let's say you wanted to see the color Yellow. You have two options: you could look at light with a wavelength around 570nm, pure Yellow light; or you could look at a bright light with a wavelength of 650nm (Red) and a dimmer light with a wavelength of 532nm (Green) at the same time. Both of these methods would stimulate your M and L cones in the same way, so your brain would have no way of knowing that in the second, two-color method there is actually no "real" Yellow light (light with a wavelength of 570nm) present at all. This is what makes Yellow a secondary color in the additive RGB system.
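The two-light trick can be mimicked with simple channel arithmetic. This is an illustrative sketch using 8-bit RGB triples, where each channel stands in for the stimulation of one cone type; the helper name `mix_lights` is my own, not a standard function.

```python
def mix_lights(*lights):
    """Additively mix RGB light sources channel by channel, clipping at 255."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

red = (255, 0, 0)    # ~650nm light: mostly stimulates L cones
green = (0, 255, 0)  # ~532nm light: mostly stimulates M cones

print(mix_lights(red, green))  # (255, 255, 0), the RGB code for Yellow
```

To a screen, and to your brain, that mixed (255, 255, 0) is indistinguishable from "real" 570nm Yellow.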

Copyright Nicolas Raymond

The three secondary colors are Cyan, Magenta, and Yellow. These come from the combined stimulation of two of the primary RGB colors. White is what you get when you combine all three primaries, thus stimulating all of your cones; and Black is the result of using none of the primaries, thus stimulating none of your cones (otherwise known as a lack of light, i.e. darkness).
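There's a tidy symmetry here worth spelling out: each secondary color is the complement of the one primary it doesn't contain, meaning the two together stimulate all three cone types and make White. A small sketch, again with 8-bit values and a helper name of my own choosing:

```python
def complement(color):
    """The color that, added to this one, yields White (255, 255, 255)."""
    return tuple(255 - channel for channel in color)

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(complement(red))    # (0, 255, 255)   Cyan    = Green + Blue
print(complement(green))  # (255, 0, 255)   Magenta = Red + Blue
print(complement(blue))   # (255, 255, 0)   Yellow  = Red + Green
```

This complement relationship is exactly why Cyan, Magenta, and Yellow reappear as the primaries of the subtractive CMYK system we'll meet in part 3.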

Now, let’s loop this all back to photography!

As we discussed, your camera sees in RGB, meaning it has three types of "cones" on its sensor that are sensitive to the Red, Green, and Blue wavelengths. Unfortunately, a camera's color sensitivity is not very similar to the human eye's, for a number of complicated reasons having to do with the chemistry of silicon, how light gets converted into an electrical signal, the spectral filtration of the glass in a lens, and so on. Below is a graph of channel sensitivity vs wavelength just like the one of the human eye we studied earlier, but this time showing the sensitivity of a Fujifilm digital camera with a CMOS sensor.

Fujifilm RGB color filter array (CFA) sensitivity graph

For our purposes, we can ignore the fact that there are three versions of each line. Obviously, this graph is not shaped like the graph for a human eye. This discrepancy accounts for a lot of the color difference between a photograph and what you see in real life. While camera companies spend millions of dollars studying this difference and creating software that accounts for it (which is why you should care about your camera's image processor), they can't get it right all the time.

That’s where you come in. As a photographer, a lot of what you do is make up for the difference between the colors your camera sees and the colors you see. Both your brain and the camera’s software adjust for the luminosity, overall color temperature and tint, contrast, white point and black point, saturation, and many other relative elements of color automatically. There is absolutely no way for your camera to guess at the exact settings in your brain, and it can’t even see the same light anyway. It’s your job as a photographer to make up the difference as you see fit.

Copyright Matt Laskowski

Of course, you do not by any means need to make your photos look realistic. In fact, the best photos tend to live on the border of real-world vision and complete fantasy. But regardless of the reality you want yours to convey, you need to know how to convert your camera's RGB vision to your own.

In part 3 of this admittedly dense series on color theory, we'll go over the basic concept of how to convert color visions in photo-editing software, tell you why you can select RGB or CMYK when opening a Photoshop file, and maybe even touch on the science behind that fourth "K" color channel that we've been ignoring this whole time. See you then!

I'm a writer and photographer living in Brooklyn, specializing in product, architectural, and fine art photography. I have studied art in multiple mediums around the world and graduated with a degree in philosophy, art, and physics. Though I have been a practicing film photographer since I was 13 years old, I am also a tech geek who keeps up to date on the latest advancements in the industry.
