Posted on April 17, 2019
The annual Grand Canyon monsoon is known for its spectacular electrical storms, but let’s not forget the rainbows that often punctuate these storms. A rainbow requires rain, sunlight, and the right viewing angle—given the ephemeral nature of a monsoon thunderstorm, it’s usually safe to assume the sun isn’t far behind. To experience a rainbow after a Grand Canyon monsoon storm, all it takes is some basic knowledge, a little faith, and some good fortune.
To help with the knowledge part, I’m sharing the how-and-why of rainbows, excerpted from my just updated Rainbow article in my Photo Tips section. For the faith and good fortune part, read “The story of this image” at the bottom of this post.
Most people understand that a rainbow is light spread into various colors by airborne water drops. Though a rainbow can seem like a random, unpredictable phenomenon, the natural laws governing rainbows are actually quite specific and predictable, and understanding these laws can help photographers anticipate a rainbow and enhance its capture.
Energy generated by the sun bathes Earth in continuous electromagnetic radiation, its wavelengths ranging from extremely short to extremely long (and every wavelength in between). Among the broad spectrum of electromagnetic solar energy we receive are ultra-violet rays that burn our skin, infrared waves that warm our atmosphere, and a very narrow range of wavelengths the human eye sees.
These visible wavelengths are captured by our eyes and interpreted by our brain. When our eyes take in light composed of the full range of visible wavelengths, we perceive it as white (colorless) light. Color registers when some wavelengths are more prevalent than others. For example, when light strikes an opaque (solid) object such as a tree or rock, some of its wavelengths are absorbed; the wavelengths not absorbed are scattered (reflected). Our eyes capture this scattered light and send the information to our brain, which interprets it as color. When light strikes water, some is absorbed, some passes through to reveal the submerged world, and some is reflected by the surface.
To understand the interaction of water and light that creates a rainbow, it’s simplest to visualize what happens when sunlight strikes a single drop. Light entering a water drop refracts (bends), with different wavelengths refracting different amounts, which separates the originally homogeneous white light into the myriad colors of the spectrum.
But simply separating the light into its component colors isn’t enough to create a rainbow—if it were, we’d see a rainbow whenever light strikes water. Seeing the rainbow spectrum caused by refracted light requires that the refracted light be returned to our eyes somehow.
A raindrop isn’t flat like a sheet of paper; it’s spherical, like a ball. Light that was refracted (and separated into multiple colors) as it entered the front of the raindrop continues through to the back of the raindrop, where some is reflected. Red light reflects back at about 42 degrees, violet light reflects back at about 40 degrees, and the other spectral colors reflect back between 40 and 42 degrees. What we perceive as a rainbow is this reflection of the refracted light—notice how the top color of the primary rainbow is always red, the longest visible wavelength, and the bottom color is always violet, the shortest visible wavelength.
Every raindrop struck by sunlight creates a rainbow. But just as the reflection of a mountain peak on the surface of a lake is visible only when viewed from the angle the reflection bounces off the lake’s surface, a rainbow is visible only when you’re aligned with the 40-42 degree angle at which the raindrop reflects the spectrum of rainbow colors.
Fortunately, viewing a rainbow requires no knowledge of advanced geometry. To locate or anticipate a rainbow, picture an imaginary straight line originating at the sun, entering the back of your head, exiting between your eyes, and continuing down into the landscape in front of you—this line points to the “anti-solar point,” an imaginary point exactly opposite the sun. With no interference, a rainbow would form a complete circle, skewed 42 degrees from the line connecting the sun and the anti-solar point—with you at the center. (We don’t see the entire circle because the horizon usually gets in the way.)
Because the anti-solar point is always at the center of the rainbow’s arc, a rainbow will always appear exactly opposite the sun (the sun will always be at your back). It helps to remember that your shadow always points toward the anti-solar point. So when you find yourself in direct sunlight and rain, locating a rainbow is as simple as following your shadow and looking skyward—if there’s no rainbow, the sun’s probably too high.
Sometimes a rainbow appears as a majestic half-circle, arcing high above the distant terrain; other times it’s merely a small circle segment hugging the horizon. As with the direction of the rainbow, there’s nothing mysterious about its varying height. Remember, every rainbow would form a full circle if the horizon didn’t get in the way, so the amount of the rainbow’s circle you see (and therefore its height) depends on where the rainbow’s arc intersects the horizon.
While the center of the rainbow is always in the direction of the anti-solar point, the height of the rainbow is determined by the height of the anti-solar point, which will always be exactly the same number of degrees below the horizon as the sun is above the horizon. It helps to imagine the line connecting the sun and the anti-solar point as a teeter-totter, with you at its pivot point: as one end rises above you, the other drops below you. That means the lower the sun, the more of the rainbow’s circle you see and the higher it appears above the horizon; conversely, the higher the sun, the less of its circle is above the horizon and the flatter (and lower) the rainbow will appear.
Assuming a flat, unobstructed scene (such as the ocean), when the sun is on the horizon, so is the anti-solar point (in the opposite direction), and half of the rainbow’s 360-degree circumference will be visible. But as the sun rises, the anti-solar point drops—when the sun is more than 42 degrees above the horizon, the anti-solar point is more than 42 degrees below the horizon, and the only way you’ll see a rainbow is from a perspective above the surrounding landscape (such as on a mountaintop or on a canyon rim).
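The geometry above can be sketched numerically. This is a minimal illustration in Python, using a planar approximation of the sky’s spherical geometry and an idealized flat horizon (the function names are mine, invented for the sketch), not a rigorous optics model:

```python
import math

RAINBOW_RADIUS_DEG = 42.0  # angular radius of the primary bow (red edge)

def rainbow_top_altitude(sun_altitude_deg):
    """Altitude of the top of the primary bow above the horizon.

    The bow is centered on the anti-solar point, which sits as many
    degrees below the horizon as the sun is above it.
    """
    return RAINBOW_RADIUS_DEG - sun_altitude_deg

def visible_arc_fraction(sun_altitude_deg):
    """Approximate fraction of the bow's full circle above a flat horizon.

    Treats the geometry as planar, a rough approximation at these angles.
    """
    h = sun_altitude_deg
    if h >= RAINBOW_RADIUS_DEG:
        return 0.0  # anti-solar point too far below the horizon: no bow
    if h <= -RAINBOW_RADIUS_DEG:
        return 1.0  # e.g. seen from the air with the sun high overhead
    half_angle = math.acos(h / RAINBOW_RADIUS_DEG)
    return half_angle / math.pi

# Sun on the horizon: a full half-circle, arcing 42 degrees high.
print(rainbow_top_altitude(0))   # 42.0
print(visible_arc_fraction(0))   # 0.5
# Sun 30 degrees up: only a low 12-degree arc hugging the horizon.
print(rainbow_top_altitude(30))  # 12.0
```

Note how the numbers echo the text: at a 42-degree sun the bow vanishes below a flat horizon, and the lower the sun, the taller the visible arc.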
Of course landscapes are rarely flat. Viewing a scene from above, such as from atop Mauna Kea or from the rim of the Grand Canyon, can reveal more than half of the rainbow’s circle. From an airplane, with the sun directly overhead, all of the rainbow’s circle can be seen, with the plane’s shadow in the middle.
Not all of the light careening about a raindrop goes into forming the primary rainbow. Some of the light slips out the back of the raindrop to illuminate the sky, and some is reflected inside the raindrop a second time. The refracted light that reflects a second time before exiting creates a secondary, fainter rainbow skewed about 50 degrees from the anti-solar point. Because of this second reflection, the colors of the secondary rainbow are reversed from the primary rainbow’s.
And if the sky between the primary and secondary rainbows appears darker than the surrounding sky, you’ve found “Alexander’s band.” It’s caused by all the light machinations I just described—instead of all the sunlight simply passing through the raindrops to illuminate the sky, some of the light was intercepted, refracted, and reflected by the raindrops to form our two rainbows, leaving less light for the sky between the rainbows.
Understanding the optics of a rainbow has practical applications for photographers. Not only does it help you anticipate a rainbow before it happens, it also enables you to find rainbows in waterfalls.
Unlike a rainbow caused by rain, which requires you to be in exactly the right position to capture the incongruous convergence of rainfall and sunshine, a waterfall rainbow can be predicted with clock-like precision—just add sunshine.
Yosemite is my location of choice, but there’s probably a waterfall or two near you that will deliver. Just figure out when the waterfall gets direct sunlight early or late in the day, then put yourself somewhere on the line connecting the sun and the waterfall. And if you have an elevated vantage point, you’ll find that the sun doesn’t even need to be that low in the sky.
Understanding rainbow optics can even help you locate rainbows that aren’t visible to the naked eye. A “moonbow” (lunar rainbow) is a rarely witnessed and wonderful phenomenon that follows all the natural rules of a daylight rainbow. But instead of resulting from direct sunlight, a moonbow is caused by sunlight reflected by the moon.
Moonlight isn’t bright enough to fully engage the cones in your eyes that reveal color, though in bright moonlight you can see the moonbow as an arcing monochrome band. But a camera on a sturdy tripod can use its virtually unlimited shutter duration to accumulate enough light to bring out a moonbow in full living color. Armed with this knowledge, all you need to do is put yourself in the right location at the right time.
Following a nice sunrise at the always beautiful Point Imperial, the Grand Canyon Monsoon photo workshop group spent two hours near Bright Angel Point photographing a spectacular electrical storm that delivered multiple lightning captures to everyone in the group. When the storm moved too close and drove us to safety (we’re resilient and adventuresome, not stupid), it would have been easy to call it a day and tally our bounty. I mean, who likes getting rained on? Photographers, that’s who.
Don Smith and I herded our group into the cars and headed to Cape Royal Road, where we could follow the Grand Canyon’s East Rim above Marble Canyon all the way to Cape Royal. Knowing that monsoon showers are fairly localized, the plan was to drive out of the cell that was dumping on us at the lodge and either shoot back at it, or (more likely) find another cell firing out over the canyon. In the back of my mind though was the hope for a rainbow above the canyon—dropping in the west, the sun was perfectly positioned for rainbows in the east.
The rainbow appeared just after we passed the Point Imperial Road junction, arcing high above the forest. Climbing through the trees toward the rim (and its views of Marble Canyon), my urgency intensified with the rainbow’s vivid color, but we were stuck behind a meandering tourist who clearly had different priorities. As tempted as I was to pass him, I knew that would be a mistake with three more cars following me. So we poked along at a glacial pace. After what felt like hours, we screeched to a halt at the Vista Encantada parking area with the rainbow hanging in there—I swear everyone was out of the car and scrambling for their gear before I came to a complete stop.
With a full rainbow above an expansive view, I opted for my Sony 12-24 lens on my a7RII, but immediately began to question that choice. While Vista Encantada offers a very pretty view, it’s not my favorite scene to photograph because of the less-than-photogenic shrubbery in the foreground—a telephoto lens definitely would have worked better to eliminate the foreground, but I wanted more rainbow. So after a few failed attempts to find a composition at the conventional vista, I sprinted into the woods to find something better. This turned out to be a wise choice, as the shrubs here were replaced with (much more photogenic) mature evergreens.
In a perfect world I’d have found an unobstructed view into the Grand Canyon, but as photographers know, the world is rarely perfect. Committed to my wide lens, I decided to use the nearby evergreens as my foreground, moving back just far enough for the rainbow to clear their crowns. Composing wide enough to include the trees top-to-bottom also allowed me to include all of the rainbow—suddenly my 12-24 lens choice was genius!
After finishing at Vista Encantada we continued down the road and photographed another rainbow from Roosevelt Point, then wrapped up the day with a sunset for the ages at Cape Royal. A great day indeed, all thanks to monsoon weather that would have kept most tourists indoors.
Click an image for a closer look and to view slide show.
Posted on April 14, 2019
I’m often asked if I placed a leaf, moved a rock, or “Photoshopped” a moon into an image. Usually the tone is friendly curiosity, but sometimes it’s tinged with hints of suspicion that can border on accusation. While these questions are an inevitable part of being a photographer today, I suspect that I get more than my share because I aggressively seek out naturally occurring subjects to isolate and emphasize in my frame. But regardless of the questioner’s tone, my answer is always a cheerful and unapologetic, “No.”
We all know photographers who have no qualms about staging their scenes to suit their personal aesthetics. The rights and wrongs of that are an ongoing debate I won’t get into, other than to say that I have no problem when photographers arrange their scenes openly, with no intent to deceive. But photography must be a source of pleasure, and my own photographic pleasure derives from discovering and revealing nature, not manufacturing it. I don’t like arranging scenes because I have no illusions that I can improve nature’s order, and am confident that there’s enough naturally occurring beauty to keep me occupied for the rest of my life.
Order vs. chaos
As far as I’m concerned, nature is inherently ordered. In fact, in the grand scheme, “nature” and “order” are synonyms. But humans go to such lengths to control, contain, and manage the natural world that we’ve created a label for our failure to control nature: Chaos. Despite its negative connotation, what humans perceive as “chaos” is actually just a manifestation of the universe’s inexorable push toward natural order.
Let’s take a trip
Imagine all humans leave Earth for a scenic tour of the Milky Way. While we’re gone, no lawns are mowed, no buildings maintained, no fires extinguished, no floods controlled, no Starbucks built. Let’s say we return in 100 Earth years*. While the state of things would no doubt be perceived as chaotic, the reality is that our planet would in fact be closer to its natural state. And the longer we’re away, the more human-imposed “order” would be replaced by natural order.
What does all this have to do with raindrops on a poppy?
Read the story of this saturated shoot in my All Wet blog post
Venturing outdoors with a camera and the mindset that nature is inherently ordered makes me feel like a treasure hunter—I know the treasure is there, I just have to find it. Patterns and relationships hidden by human interference and the din of 360 degree multi-sensory input, further obscured by human bias, snap into coherence when I find the right perspective.
Finding water droplets to photograph can be as simple as picking a subject and squirting it with a spray bottle of water or (better still) glycerin. But what fun is that? If I’d been staging this, I probably would have insisted on an open poppy, maybe with more and bigger drops. But that’s not what Nature gave me this soggy afternoon. So I photographed this raindrop-festooned poppy (and many others) the old-fashioned way—within minutes I was as wet as the poppy, and (to quote the immortal Cosmo Kramer) lovin’ every minute of it.
Click an image for a closer look and to view slide show.
Posted on April 7, 2019
For the last 10 (or so) years I’ve traveled to the Grand Canyon during the Southwest summer monsoon to photograph lightning. Not only have I captured hundreds of lightning strikes and lived to tell about it (yay), I’ve learned a lot. A couple of years ago I added an article sharing my insights on photographing lightning to my photo tips section. With lightning season upon (or almost upon) us here in the United States, I’ve updated my article with new images and additional info. You can still find the article (with updates) in my Photo Tips section, but I’m re-posting it here in my regular blog feed as well.
Read the story of this image at the bottom of this post, just above the gallery of lightning images.
Let’s start with the given that lightning is dangerous, and if “safety first” is a criterion for intelligence, photographers are stupid. So combining photographers and lightning is a recipe for disaster.
Okay, seriously, because lightning is both dangerous and unpredictable, before attempting anything that requires you to be outside during an electrical storm, it behooves you to do your homework. And the more you understand lightning, how to avoid it and stay safe in its presence, the greater your odds of living to take more pictures. Not only will understanding lightning improve your safety, a healthy respect for lightning’s fickle power will also help you anticipate and photograph lightning.
Lightning is an electrostatic discharge that equalizes the negative/positive polarization between two objects. In fact, when you get shocked touching a doorknob, you’ve been struck by a tiny bolt of lightning. The cause of polarization during electrical storms isn’t completely understood, but it’s generally accepted that extreme vertical convective air motion is responsible. (Convection is the circular up/down flow caused when less dense warm air rises, cools and becomes denser with elevation, and ultimately becomes cool and dense enough to fall; it’s also what causes the bubbling in boiling water.) Convection in a thunderstorm carries positively charged molecules upward and negatively charged molecules downward. Because opposite charges attract each other, the extreme polarization (positive charge at the top of the cloud, negative charge near the ground) is quickly (and violently) equalized: lightning.
With lightning comes thunder, the sound of air expanding explosively when heated by a 50,000-degree jolt of electricity. The visual component of the lightning bolt travels to you at the speed of light, over 186,000 miles per second (virtually instantaneous regardless of your distance on Earth). But lightning’s aural component, thunder, only travels at the speed of sound, a little more than 750 miles per hour—nearly a million times slower than light. Since the thunder originates at the same instant as the lightning flash, and we know how fast each travels, we can compute the approximate distance of the strike. At 750 miles per hour, thunder covers about a mile every five seconds: dividing the time between the lightning’s flash and the thunder’s crash by five gives you the lightning’s distance in miles; divide the interval by three for the distance in kilometers. If five seconds pass between the lightning and the thunder, the lightning struck about one mile away; fifteen seconds means it’s about three miles away.
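The flash-to-bang rule above reduces to simple division. Here’s a minimal Python sketch (the function name is mine; the speed of sound varies slightly with temperature, so treat the result as an estimate):

```python
def lightning_distance(flash_to_bang_seconds):
    """Estimate lightning distance from the flash-to-thunder delay.

    Sound covers roughly a mile in five seconds (a kilometer in three);
    light arrives essentially instantly.
    """
    miles = flash_to_bang_seconds / 5.0
    km = flash_to_bang_seconds / 3.0
    return miles, km

print(lightning_distance(5))   # (1.0, ~1.7): about a mile away
print(lightning_distance(15))  # (3.0, 5.0): about three miles away
```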
The 30 (or so) people killed by lightning in the United States each year had one thing in common with the rest of us: they didn’t believe they’d be struck by lightning when they started whatever it was they were doing when they were struck. The only sure way to be safe in an electrical storm is to be in a fully enclosed structure or metal-framed vehicle, away from open windows, plumbing, wiring, and electronics.
While there’s no completely safe way to photograph lightning, it doesn’t hurt to improve your odds of surviving to enjoy the fruits of your labor. (Unfortunately, photographing lightning usually requires being outside.) Most lightning strikes within a six-mile radius of the previous strike. So if fewer than thirty seconds elapse between the flash and the bang, you’re too close. And since “most” doesn’t mean “all,” it’s even better to allow a little margin for error. Thunder isn’t usually audible beyond ten miles—if you can hear the thunder, it’s safe to assume that you’re in lightning range.
But if you absolutely, positively must be outside with the lightning crashing about you, or you simply find yourself caught outside with no available shelter, there are a few things you can do to reduce the chance you’ll be struck:
Photographing lightning at night is mostly a matter of pointing your camera in the right direction with a multi-second shutter speed and hoping the lightning fires while your shutter’s open—pretty straightforward. Photographing daylight lightning is a little more problematic. It’s usually over before you can react, so without a lightning sensor to recognize lightning and click your shutter, success is largely dumb luck (few people are quick enough to see it and click). And using a neutral density filter to stretch the exposure time out to 20 or 30 seconds sounds great in theory, but a lightning bolt with a life measured in milliseconds, captured in an exposure measured in multiple seconds, will almost certainly lack the contrast necessary to be even slightly visible.
Lightning Trigger: The best tool for the job
Most lightning sensors (all?) attach to your camera’s hot shoe and connect via a special cable to the camera’s remote-release port. When engaged, the sensor fires the shutter (virtually) immediately upon detecting lightning, whether or not the lightning is visible to the eye or camera. With many lightning sensors from which to choose, before I bought my first one I did lots of research. I ended up choosing the sensor that was the consensus choice among photographers I know and trust: Lightning Trigger from Stepping Stone Products in Dolores, CO. At around $350 (including the cable), the Lightning Trigger is not the cheapest option, but after leading many lightning-oriented photo workshops, I can say with lots of confidence that lightning sensors are not generic products, and the internal technology matters a lot. Based on my own results and observations, the Lightning Trigger is the only one I’d use and recommend (I get no kickback for this). On the other hand, if you already have a lightning sensor you’re happy with, there’s no reason to switch.
I won’t get into lots of specifics about how to set up the Lightning Trigger because it’s simple and covered fairly well in the included documentation. But you should know that one of the things that sets the Lightning Trigger apart from many others is its ability to put your camera in the “shutter half pressed” mode, which greatly reduces shutter lag (see below). But that also means that connecting the Trigger will probably disable image review on your LCD, so you won’t be able to review your captures without disconnecting—a simple but sometimes inconvenient task. You also probably won’t be able to adjust your exposure with the Lightning Trigger connected.
The Lightning Trigger documentation promises at least a 20 mile range, and after many years using mine at the Grand Canyon, I’ve seen nothing that causes me to question that. It also says you can expect the sensor to fire at lightning that’s not necessarily in front of you, or lightning you can’t see at all, which I can definitely confirm. For every click with lightning in my camera’s field of view, I get many clicks caused by lightning I didn’t see, or that were outside my camera’s field of view. But when visible lightning does fire somewhere in my composition, I estimate that the Lightning Trigger clicked the shutter at least 95 percent of the time (that is, even though I got lots of false positives, the Lightning Trigger missed very few bolts it should have detected). Of these successful clicks, I actually captured lightning in at least 2/3 of the frames.
The misses are a function of the timing between lightning and camera—sometimes the lightning is just too fast for the camera’s shutter lag. In general, the more violent the storm, the greater the likelihood of bolts of longer duration, and multiple strokes that are easier to capture. And my success rate has increased significantly beyond 2/3 since switching from a Canon 5DIII to Sony mirrorless (more on this in the Shutter Lag section).
The Lightning Trigger documentation recommends shutter speeds between 1/4 and 1/20 second—shutter speeds faster than 1/20 second risk completing the exposure before all of the secondary strokes fire; slower shutter speeds tend to wash out the lightning. To achieve daylight shutter speeds between 1/4 and 1/20 second, I use a polarizer, with my camera at ISO 50 and aperture at f/16 (and sometimes smaller). Of course exposure values will vary with the amount of light available, and you may not need such extreme settings when shooting into an extremely dark sky. The two stops of light lost to a polarizer helps a lot, and a 4- or 6-stop neutral density filter is even better with fairly bright skies (but if you’re using a neutral density filter, try to avoid shutter speeds longer than 1/4 second).
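The stop arithmetic behind this is simple: each stop of light a filter absorbs doubles the shutter time needed at the same aperture and ISO. A minimal Python sketch, with the base exposure of 1/80 second purely hypothetical:

```python
def slowed_shutter(base_shutter_s, filter_stops):
    """Shutter time after adding filters, at the same aperture and ISO.

    Each stop of light lost doubles the required shutter time.
    """
    return base_shutter_s * (2 ** filter_stops)

base = 1 / 80  # hypothetical metered exposure without filters

print(slowed_shutter(base, 2))  # 0.05 -> 1/20 s with a ~2-stop polarizer
print(slowed_shutter(base, 4))  # 0.2  -> 1/5 s with a 4-stop ND filter
```

In this hypothetical, either filter lands the shutter speed inside the recommended 1/4 to 1/20 second window.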
Lightning is fast, really, really fast, so the faster your camera’s shutter responds after getting the command from the trigger device, the more success you’ll have. The delay between the click instruction (whether from your finger pressing the shutter button, a remote release, or a lightning sensor) and the shutter firing is called “shutter lag.”
The less shutter lag you have, the better your results will be. The two most important shutter lag factors are:
In addition to a lightning sensor and fast camera, you’ll need:
Getting the shot
Lightning is most likely to strike in or near the gray curtains (clearly recognizable as distant rain) that hang beneath dark clouds. In addition to visible rain curtains, the darkest and tallest clouds are usually the most likely to fire lightning. Here are a few more points to consider:
Do as I say (not as I do)
Be aware that electrical storms can move quite quickly, so you need to monitor them closely. Sometimes this simply means adjusting your composition to account for shifting lightning; other times it means retreating to the car if the cell threatens your location. No shot is worth your life.
About this image
On the first evening of last year’s second Grand Canyon Monsoon photo workshop, Don Smith and I took the group to Point Imperial for a sunset shoot. Based on the forecast we had little hope for lightning, but one thing I’ve learned over the many years of photographing the monsoon here is that the forecast isn’t the final word. We got another reminder of this that evening.
The view from Point Imperial is both expansive and different from other Grand Canyon vistas, stretching east across the Painted Desert and north to the Vermillion Cliffs. As the group made their way down to the vista platform, in the corner of my eye I thought I saw a lightning strike far to the north. A second bolt confirmed my discovery and soon we had the entire group lined up with cameras pointed and triggers ready.
With everyone in business, I set up my tripod and attached my Lightning Trigger to my Sony a7RIII. This lightning was close to 30 miles away, maybe farther than any lightning I’ve tried to photograph, so I hauled out my Sony 100-400 GM lens and zoomed in as tight as I could. I didn’t have to wait long to confirm that my Lightning Trigger would catch strikes this distant—it didn’t hurt that these were massive bolts, many with multiple pulses and forks.
Everyone was thrilled, so thrilled that it didn’t immediately register that the storm was moving our direction. I started at 400mm, but by the time I captured this frame I was at just a little more than 100mm. That’s still a pretty safe distance, but with night almost on us and another cell moving in from the east, we decided to take our winnings and go home.
One final note: If you check my exposure settings, you’ll see that my shutter speed here was 0.4 seconds, well outside the 1/20-1/4 second range I suggest. But if you look at the other settings, you’ll see that I’d opened up to f/7.1, and had cranked my ISO to 400, an indication that twilight was settling in. Successful lightning photography is all about contrast, and the darker the sky, the better the bolt stands out, even in a longer exposure. Had we stayed past dark (and lived), we could have jettisoned the Lightning Triggers and used multi-second exposures.
Join Don Smith and me in our next Grand Canyon Monsoon Photo Workshop
Read my article in Outdoor Photographer magazine, Shooting the Monsoon
Click an image for a closer look and slide show. Refresh the window to reorder the display.
Posted on April 5, 2019
Last Monday seemed like the perfect day for a poppy shoot in the foothills. I had the afternoon wide open—with the California media buzzing about this year’s “superbloom,” plus a forecast promising ideal conditions (calm wind and thin clouds), I couldn’t help dreaming about my own images of poppy-saturated fields. What could possibly go wrong?
Getting on the road proved a little more problematic than anticipated, but by 2 p.m. I was on my way, encouraged forward by an occasional poppy beside the freeway. Adding to my optimism, the aforementioned clouds were just right: thick enough to diffuse the sunlight, but not so dark that they’d close the sun-loving poppies. I exited the freeway as soon as possible, opting to drive the 2-lane roads that follow the hills’ natural contours. While my preferred route isn’t the most direct, it is the most scenic, winding me through oak-studded hills deeply greened by this year’s copious winter rain. Though this drive takes a little more than an hour, the time passes quickly with so much pastoral beauty filling my windshield.
I knew the poppies in Northern California were starting late due to our relatively late winter, but was fairly confident I’d allowed enough time for the golden hillsides to kick in. In a good spring, poppies dot the entire route, but by the time I was southbound on scenic Highway 49, I started realizing I hadn’t seen any poppies since leaving Sacramento. Soon I was pretty resigned to the fact that this year’s superbloom was limited to Southern California, and wondered if I’d find any poppies at all. Then it started to rain.
As easy as it would have been to cut my losses and turn around, I simply changed my expectations. With fresh memories of a brief but rewarding raindrop experience in Yosemite, I realized I didn’t need to find entire hillsides covered with poppies, that even a single poppy could be nice. So, rather than zipping along Highway 49 at 50 MPH (-ish) looking for golden slopes, I started exploring some of the quieter tributary roads and quickly realized that there was a sprinkling of poppies out.
I ended up spending two hours photographing a small patch of poppies I found on a dead-end road near Jackson. It rained the entire time, but with rain gear in my car for just these situations, I stayed warm and dry. My camera? Not so much. I tried working with an umbrella, but after a few minutes realized I was one arm short and just decided to test the water resistance of my Sony a7RIII. I’m happy to say that it passed with flying colors, as did the Sony 100-400 GM.
In the two weeks since I shot those raindrops in Yosemite, I’ve been plotting how to get even closer. On the Yosemite shoot I added extension tubes to my 100-400; this afternoon I returned to the extension tubes, but added my 2X teleconverter (which, I might add, handled the rain perfectly as well). I thought I’d try a few lens/extension-tube/teleconverter configurations, but I was having so much fun that I ended up shooting this way the entire time.
On a rainy day, light is already limited. But adding a teleconverter and extension tubes compounds the light problem. Because f/stop is a ratio with focal length as the numerator and lens opening as the denominator, adding a teleconverter increases the focal length without enlarging the lens opening, resulting in less light reaching the sensor. A 2x teleconverter cuts two stops of light, which means my 100-400 that’s normally wide open at f/5.6 at 400mm becomes f/11 at (the teleconverted) 800mm (400mm x 2). And adding extension tubes increases the distance between the lens and the sensor, which also raises the effective f-stop and further reduces the light reaching the sensor. To compensate for all this missing light, I shot everything this afternoon at either ISO 1600 or ISO 3200.
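Here’s a rough sketch of that arithmetic in Python (a thin-lens approximation; real lenses deviate a bit, and the function names are mine, not any camera API):

```python
import math

def effective_f_stop(base_f, tc_factor=1.0, tube_mm=0.0, focal_mm=100.0):
    """Approximate working f-stop with a teleconverter and extension tubes.

    A teleconverter multiplies the f-number directly; extension tubes
    raise it by roughly (focal + extension) / focal.
    """
    return base_f * tc_factor * (1.0 + tube_mm / focal_mm)

def stops_lost(base_f, effective_f):
    """Light lost, in stops, going from base_f to effective_f."""
    return 2 * math.log2(effective_f / base_f)

# 100-400 at 400mm f/5.6 with a 2x teleconverter, no tubes:
f_eff = effective_f_stop(5.6, tc_factor=2.0, focal_mm=400.0)
print(round(f_eff, 1))            # 11.2 -> the "f/11" in the text
print(stops_lost(5.6, f_eff))     # 2.0 stops of light gone
```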
One of the cool things about this kind of photography is how different the world looks through the viewfinder. I love putting my eye to the viewfinder, moving the lens around, and changing focus slowly to see what snaps into view. In this case I was looking for a poppy I could isolate from its surroundings, but that also had something nearby (usually another flower) I could soften enough to complement without competing. Sometimes I had a general idea of a subject before looking through my camera, other times I’d just explore with my lens until something stopped me.
Because depth of field shrinks not only with focal length, but also with focus distance, every frame I clicked this afternoon had a paper-thin range of sharpness. With such a shallow depth of field, none of these images would have been possible without a tripod. With my composition set, I’d pick a focus point (usually, but not always, a prominent raindrop), focus in my viewfinder until I was “certain” it was sharp, then instantly debunk that “certainty” by magnifying the image in my viewfinder. This little exercise quickly taught me that with such a small margin for error, the best I could reliably achieve without magnifying the view was almost sharp enough, making pre-click magnification an essential part of my focus workflow (instead of just a cursory focus-check).
Each time I do this kind of photography I learn something. In this case it was how far away I could be and still fill my frame with a poppy. All of the images I captured this afternoon were from four to six feet away.
I wrapped up when the sky darkened further and the rain started coming down pretty hard. I couldn’t believe I’d been out there two hours, and spent most of the drive strategizing new ideas for the next time.
Click an image for a closer look and to view slide show.
Posted on March 10, 2019
With advanced exposure and metering capabilities, cameras seem to be getting “smarter” every year. So smart, in fact, that for most scenes, getting the exposure right is a simple matter of pointing your camera and clicking the shutter button. That’s fine if all you care about is recording a memory, but not only is there more to your exposure decision than getting the right amount of light in your picture, there are many reasons to over- or underexpose a picture. For the creative control that elevates your images above the millions of clicks being cranked out every day, surrendering one of photography’s most important responsibilities to your camera overlooks an undeniable truth…
Your camera is stupid
Sorry—so is mine. And while I can easily cite many examples, right now it’s just important that you understand that your camera thinks the entire world is a middle tone. Regardless of what its meter sees, without intervention your camera will do everything in its power to make your picture a middle tone. Sunlit snowman? Lump of coal at the bottom of your Christmas stocking? It doesn’t matter—if you let your camera decide the exposure, it will turn out gray.
Modern technology offers faux-intelligence to help overcome this limitation. Usually called something like “matrix” or “evaluative” metering, this solution compares a scene to a large but finite internal database of choices, returning a metering decision based on the closest match. It works pretty well for conventional, “tourist” snaps, but often struggles in the warm or dramatic light artistic photographers prefer, and knows nothing of creativity. If you want to capture more than documentary “I was here” pictures, you’re much better off taking full control of your camera’s metering and exposure. Fortunately, this isn’t nearly as difficult as most people fear.
Laying the foundation
The amount of light captured for any given scene varies with the camera’s shutter speed, f-stop, and ISO settings. Photographers measure captured light in “stops,” much as a cook uses a cup (of sugar or flour or almonds or whatever) to measure ingredients in a recipe. Adding or subtracting “stops” of light by increasing or decreasing the shutter speed, f-stop, or ISO makes a scene brighter or darker.
The beauty of metering is that a stop of light is a stop of light is a stop of light, whether you control it with the shutter speed, the f-stop, or the ISO.
But while an aperture stop adds/subtracts the same amount of light as a shutter speed or ISO stop, the resulting picture can vary significantly based on which exposure variable combination you choose. Your shutter speed choice determines whether motion in the frame is blurred or frozen, while the aperture choice determines the picture’s depth of field. And while an ISO stop also adds/subtracts the same amount of light as shutter speed and aperture without affecting motion and depth, image quality decreases as the ISO increases. So getting the light right is only part of the exposure objective—you also need to consider how you want to handle any motion in the scene, and how much depth of field to capture.
For example, let’s say you’re photographing autumn leaves in a light breeze. You got the exposure right, but the leaves are blurred. To freeze that motion, you halve the time the shutter is open (a faster shutter speed), but doing so also reduces the light reaching the sensor by one stop. To replace that lost light, you could open your aperture by a stop (change the f-stop), double the ISO, or make a combination of fractional f-stop and ISO adjustments that total one stop. That’s a creative choice your camera isn’t capable of.
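The give-and-take above can be sketched in a few lines of Python (a hypothetical illustration; the baseline settings are arbitrary): doubling shutter time or ISO adds a stop, and each full f-stop change adds or subtracts one too, so any combination summing to zero captures the same amount of light.

```python
import math

def stops(shutter_s, f_number, iso, base=(1/60, 8, 100)):
    """Exposure relative to a baseline, in stops: doubling the shutter
    time or the ISO adds one stop; light varies with aperture area, so
    the f-number contributes 2 * log2 of its ratio to the baseline."""
    b_t, b_f, b_iso = base
    return (math.log2(shutter_s / b_t)
            + 2 * math.log2(b_f / f_number)
            + math.log2(iso / b_iso))

# Baseline: 1/60s at f/8, ISO 100
print(round(stops(1/60, 8, 100), 2))    # 0.0 -- the starting exposure

# Halving the shutter time (1/60 -> 1/120) to freeze motion costs a stop...
print(round(stops(1/120, 8, 100), 2))   # -1.0

# ...which you can buy back by doubling the ISO...
print(round(stops(1/120, 8, 200), 2))   # 0.0

# ...or by opening the aperture one full stop (f/8 -> roughly f/5.6)
print(round(stops(1/120, 8 / math.sqrt(2), 100), 2))  # 0.0
```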
Today’s cameras have the ability to measure, or “meter” the light in a scene before the shutter clicks. In fact, most cameras have many different ways of evaluating a scene’s light. Your camera’s metering mode determines the amount of the frame the meter “sees.” The larger the area your meter measures, the greater the potential for a wide range of tones. Since most scenes have a range of tones from dark shadows to bright highlights, the meter will take an average of the tones it finds in its metering zone.
Metering mode options range from “spot” metering a very small part of the scene, to “matrix” (also known as “evaluative”), which looks at the entire scene and actually tries to guess at what it sees. Each camera manufacturer offers a variety of modes and there’s no consensus on name and function (different function for the same name, same function for different names) among manufacturers, so it’s best to read your camera’s manual to familiarize yourself with its metering modes.
Since I want as much control as possible, I prefer spot metering because it’s the most precise, covering the smallest possible area of the frame: an imaginary circle in the center three (or so) percent (depending on the camera) of what’s visible in the viewfinder. With spot metering, I can target the part of the frame I deem most important and base my exposure decision on the reading there.
Spot metering isn’t available in all cameras. In some cameras, the most precise (smallest metering area) metering mode available is “partial,” which covers a little more of the scene, somewhere around ten percent.
Don’t confuse the metering mode with the exposure mode. While the metering mode determines what the meter sees, the exposure mode determines the way the camera handles that information. Most DSLR (digital single lens reflex) and mirrorless cameras offer manual, aperture priority, shutter priority, and a variety of program or automatic exposure modes. Serious landscape photographers usually forego the full automatic/program modes in favor of manual (my preference) or aperture/shutter priority modes that offer more control.
If you select aperture or shutter priority mode, you specify the aperture (f-stop) or shutter speed, and the camera sets the shutter speed or aperture that delivers a middle tone based on what the meter sees. But you’re not done. Unless you really do want the middle tone result the camera desires (possible but far from certain), you then need to adjust the exposure compensation (usually a button with a +/- symbol) to specify the amount you want your subject to be above or below a middle tone.
For example, if you point your spot meter at a bright, sunlit cloud, the camera will only give your picture enough light to make the cloud a middle tone—but if you’ve only given your scene enough light to make a white cloud gray, it stands to reason that the rest of your picture will be too dark. To avoid this, you would adjust exposure compensation to instruct your camera to make the cloud brighter than a middle tone by adding two stops of light (or however much light you want to give the cloud to make it whatever tone you think it should be).
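As a back-of-the-envelope sketch (hypothetical Python, not camera firmware; the 1/1000s meter reading is invented for the example), the correction is just doubling the metered shutter time once per stop of positive compensation:

```python
def compensated_shutter(metered_shutter_s, compensation_stops):
    """Shutter time after applying exposure compensation: each positive
    stop doubles the metered time, each negative stop halves it."""
    return metered_shutter_s * 2 ** compensation_stops

# Suppose the spot meter wants 1/1000s to render a sunlit cloud as a
# middle tone; +2 stops (4x the light) brightens it toward white
print(compensated_shutter(1/1000, +2))  # 0.004, i.e. 1/250s
```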
Rather than aperture priority, I prefer manual mode because I never want my camera making decisions for me. And once it’s mastered (a simple task), I think manual metering is easier. In manual mode, after setting my aperture (based on the depth of field I want), I point my spot-meter zone (the center 3% of the scene in my viewfinder) at the area I want to meter on and dial in whatever shutter speed gives me the amount of light I think will make that subject (where my meter points) the tone I want. That’s it. (In manual mode you can ignore the exposure compensation button.)
Trust your histogram
I see many people base exposure decisions on the brightness of the image on the LCD. The typical approach is some variation of: 1) Guess at the exposure settings 2) Click 3) Look at the picture on the LCD 4) Adjust 5) Repeat. Not only is this approach lazy, it’s a waste of time and woefully inaccurate.
I call it lazy because these photographers (but of course I don’t mean you) don’t care enough about their craft to apply a skill that only takes minutes to learn (see above), a skill that will serve them best in the most difficult exposure situations. But that’s not the real problem—the real problem is the inaccuracy introduced by trusting the image on your LCD.
LCDs vary in brightness because viewing conditions change. With a brightness adjustment in every camera’s menu, many photographers simply turn their brightness to maximum because it’s easier to see, especially in sunlight, and a bright picture usually looks better. Other photographers use an auto-brightness setting that adjusts with ambient light—the more light it detects, the brighter the display.
Regardless of your LCD’s brightness setting, the variation in the brightness of the screen and/or the ambient light makes the image on the LCD a very unreliable exposure indicator. When people tell me their images are usually too dark on their computer or in prints, the first thing I do is check the brightness of their camera’s LCD—if it’s set to maximum, they’ve likely been fooled into thinking the exposure was brighter than it actually was.
How do you fix this? Simple: Learn to read a histogram, and never use your camera’s LCD for exposure decisions again. The histogram is as simple as it is useful.
One more time
So let’s review. Start by selecting your metering mode (the way your meter “sees” the scene: spot, partial, matrix, and so on), then take your camera out of auto exposure mode and put it in manual (my recommendation) or aperture priority (if you prefer) mode. (Remember, I’m a landscape photographer, so I never use shutter priority; if you’re shooting action and don’t like manual exposure, you probably want to consider shutter priority to better control the motion in your frame.)
Before metering, set your camera to whatever aperture you decide your composition calls for. Then meter, remembering that your camera isn’t telling you what the exposure should be, it’s telling you the exposure that will make what it sees a middle tone. Finally, correct the meter’s middle-tone bias by dialing in the shutter speed (in manual mode) or exposure compensation (in aperture priority) that gives the correct exposure.
After you click, check your histogram to be sure you got the exposure right.
What’s the correct exposure? That’s a creative choice that’s entirely up to you—feel free to play until you’re comfortable with your results. And the more you do it, the easier it gets.
Below are some sample images and the thought process I followed to get the exposure.
Now get to work
Don’t wait to apply all this for the first time until you really, really want the shot. Instead, find a time when the results don’t matter and play with your camera to find out how much control you have over exposure. In fact, you can do this right now in your backyard or even sitting right there in your recliner. Meter something nearby, set an exposure, and click. Look at the result, adjust the exposure, and click again. Watch your histogram, and watch how its shape shifts right as you increase the exposure, or left as you decrease it. Continue doing this until you’re confident in your ability to make a scene brighter or darker, and can consistently achieve the exposure you expect.
About this image
It’s not too difficult to figure out how Iceland’s Diamond Beach got its name. A black sand beach on Iceland’s south coast, just downstream from Glacier Lagoon, Diamond Beach is dotted with glistening blocks of ice ranging in size from a refrigerator ice cube to an entire refrigerator.
As spectacular as Diamond Beach was on my first visit, it was also unlike anything I’d ever seen, so it took me a little while to figure out how I wanted to shoot it. I tried a few frames that used long shutter speeds to blur the motion of the waves around the ice, but when the sun appeared, I saw another opportunity.
With my Sony 12-24 G lens on my Sony a7RIII, I set up just a couple of feet from one of the larger icebergs, went all the way out to 12mm, and waited for the sun to peek above the ice, hoping to capture a sunstar. As I waited, I tweaked my exposure settings, dialing my aperture to f/18 to maximize my depth of field and to enhance the sunstar effect. When the sun appeared I was in business; as it rose, I dropped my camera lower to keep just a small sliver of sun visible above the ice.
Any frame that includes the sun is a frame with lots of dynamic range. To get my exposure right for this image, I relied entirely on the histogram and ignored the image on the LCD, with its nearly black shadows and white sky. Despite what the LCD showed, the histogram told me I’d captured all the tones, and I confirmed that as soon as I started processing in Lightroom.
Click an image for a closer look, and a slide show.
Posted on March 3, 2019
As aggressively as I seek creative ways to express nature with my camera, and as important as I think that is, sometimes a scene is so beautiful that it’s best to just get out of the way and let the scene speak for itself. I had one of those experiences last month at Tunnel View in Yosemite.
There’s a reason Tunnel View is one of the most photographed vistas in the world: El Capitan, Half Dome, Cathedral Rocks, Bridalveil Fall—each would be a landscape icon by itself; put them all together in one view and, well…. But the view this evening was truly transcendent, even by Yosemite standards. In Yosemite Valley below, trees and granite still glazed with the snowy vestiges of a departing storm seemed to throb with their own luminance. And above Half Dome a full moon rose through a sky that had been cleansed of all impurities by the departing storm, an otherworldly canvas of indigo, violet, and magenta.
On these crystal-clear, winter-twilight moonrises, the beauty rises with the moon, reaching a crescendo about 20 minutes after sunset, after which the color quickly fades and the landscape darkens. Unfortunately, at some point before the crescendo, the dynamic range becomes so extreme that no camera (not even the dynamic range monster Sony a7RIII) can simultaneously extract usable detail from a daylight-bright moon and dark landscape.
I’d driven to Yosemite solely to photograph this moonrise, an eight-hour roundtrip for 40 minutes of photography. Starting with the moon’s arrival about 20 minutes before sunset, I’d juggled three camera bodies and two tripods, first shooting ultra long, then gradually widening to include more of the snowy landscape. Already my captures had more than justified the time and miles the trip would cost me, but watching the moon traverse the deepening hues of Earth’s shadow, I wasn’t ready to stop.
I’ve learned that with a scene this spectacular, conveying the majesty doesn’t require me to pursue the ideal foreground, or do creative things with motion, light, or depth of field. In fact, I’ve come to realize that sometimes a scene can be so beautiful that creative interpretations can dilute or distract from the very beauty that moves me. On this evening in particular, I didn’t want to inject myself into that breathtaking moment, I just wanted to share it.
To simplify my images, I opted for a series of frames that used tried-and-true compositions that I’d accumulated after years (decades) of photographing here, the compositions I suggest as “starters” for people who are new to Yosemite, or use myself to jump-start my inspiration: relatively tight horizontal and vertical frames of El Capitan, Half Dome, or Bridalveil Fall; El Capitan and Half Dome; or Half Dome and Bridalveil Fall. In the image I share above I concentrated on Half Dome and Bridalveil Fall, capping my frame with the wispy fringes of a large cloud that hovered above Yosemite Valley.
Simplifying my compositions had the added benefit of freeing all of my (limited) brain cells to concentrate on the very difficult exposure. The margin for error when photographing a moon this far after sunset is minuscule—if you don’t get the exposure just right, there’s no fixing it in Photoshop later: too dark and there’s too much noise in the shadows; too bright and lunar detail is permanently erased. The problem starts with the understandable inclination to expose the scene to make the landscape look good on the LCD, pretty much guaranteeing that the moon will be toast. Compounding this problem is the histogram, which most of us have justifiably come to trust as the final arbiter for all exposures. But when a twilight moon (bright moon, dark sky) is involved, even the histogram will fail you because the moon is such a small part of the scene, it barely (if at all) registers on the histogram.
Rather than the histogram, for these dark-sky moon images I monitor my LCD’s highlight alert (“blinking highlights”), which is usually the only way to tell that the moon has been overexposed. If the moon is flashing, I know I’ve given the scene too much light and need to back off until the flashing stops—no matter how dark the foreground looks. This is where it’s essential to know your camera, and how far you can push its exposure beyond where the histogram and highlight alert warn you that you’ve gone too far.
When I’m photographing a full moon rising into a darkening sky, I push the exposure to the point where my highlight alert just starts blinking (only the brightest parts of the moon, not the entire disk, are flashing), then I give it just a little more exposure. I know my Sony a7RIII well enough to be confident that I can give it a full stop of light beyond this initial flash point and still recover the highlights later. The shadows? In a scene like this they’ll look nearly black, a reality my histogram will confirm, but I never cease to be amazed by how much detail I can pull out of my a7RIII’s shadows in Lightroom and Photoshop.
I continued shooting for several minutes after this frame, and discovered later that even my final capture contained usable highlights and shadows. I chose this image, captured nearly five minutes before I quit, because it contained the best combination of color, lunar detail, and clean (relatively noise-free) Yosemite Valley.
Posted on February 24, 2019
Roll over, Ansel
Several years ago, while thumbing through an old issue of “Outdoor Photographer” magazine, I came across an article on Lightroom processing. It started with the words:
“Being able to affect one part of the image compared to another, such as balancing the brightness of a photograph so the scene looks more like the way we saw it rather than being restricted by the artificial limitations of the camera and film is the major reason why photographers like Ansel Adams and LIFE photographer W. Eugene Smith spent so much time in the darkroom.”
While it’s true that Ansel Adams and W. Eugene Smith were indeed darkroom masters, statements like this only perpetuate the myth that the photographer’s job is to reproduce the scene “the way we saw it.” And because I imagine that using Ansel Adams himself to peddle this notion must send Ansel rolling in his grave, I’ll start by quoting the Master himself:
Do these sound like the thoughts of someone lamenting the camera’s “artificial limitations” and photography’s inability to duplicate the world the “way we saw it”? Take a look at just a few of Ansel Adams’ images and ask yourself how many duplicate the world as we see it: nearly black skies, exaggerated shadows and/or highlights, and skewed perspectives that intentionally emphasize one subject over another, and on and on. And no color! (Not to mention the fact that every image is a two-dimensional rendering of a three-dimensional world.) Ansel Adams wasn’t trying to replicate scenes more like he saw them, he was trying to use his camera’s unique (not “artificial”) vision to show us aspects of the world he wanted us to see, qualities we might otherwise miss or fail to appreciate.
The rest of the OP article contained solid, practical information for anyone wanting to come closer to replicating Ansel Adams’ traditional darkroom techniques in the contemporary digital darkroom. But the assertion that photographers are obligated to photograph the world as they saw it baffles me.
You’ve heard me say this before
The camera’s vision isn’t artificial, it’s different. Dynamic range, focus, motion, and depth are all rendered differently in a camera than they are to the human eye. And while the human experience of any scene is 360 degrees, a still image is constrained by a rectangular box. Forcing images to be more human-like doesn’t just deny the camera’s unique ability to expand viewers’ perception of the world, it’s literally impossible. Which is why I’ve always felt that the best photographers are the ones who embrace their camera’s vision rather than trying to “fix” it.
For example, limiting dynamic range allows us to emphasize color and shapes that get lost in the clutter of human vision; a narrow range of focus can guide the eye and draw attention to particular elements of interest and away from distractions; and the ability to accumulate light over a photographer-controlled interval exposes color and detail hidden by darkness, and conveys motion in an otherwise static medium.
But what about that rectangular box that constrains the world of a still image? I can think of no better way to excise distractions and laser-focus viewers’ attention on the target subject than taking advantage of the camera’s finite world. While many nature photographers default to their wide angle lenses to expand the visual box surrounding their landscape images and save their long lenses for wildlife, a telephoto lens is an essential landscape tool. The world can be a busy place—in even the most spectacular of vistas, so much is happening visually that going wide in a still photo to include as much beauty as possible introduces many extraneous features, and risks shrinking the scene’s most compelling elements to virtual insignificance.
The best way to overcome wide angle scene dilution is to forego the conventional view (the first thing everyone sees), identify the aspects of the scene that make it special, and isolate them with a telephoto lens. Whether it’s a striking mountain or tree, backlit poppy, or rising moon, isolation enlarges the target subject and removes any ambiguity about what the image is about. And an intimate, up-close perspective of a subject more commonly seen from a distance can be truly mesmerizing.
About this image
I stood atop two feet of packed snow at Tunnel View, more than eight miles from Half Dome, and ten miles from the ridge that would be ground zero for the moonrise that had drawn me in the first place. Along with two other photographers who also seemed aware of the moon’s plans, I had the best (least obstructed) Tunnel View vantage point to myself. Rising full moon or not, before me the table was set for a spectacular Yosemite feast: Brand new snow glazed every exposed surface, and in the pristine winter air, Tunnel View’s veritable who’s who of Yosemite landmarks—El Capitan, Clouds Rest, Half Dome, Sentinel Rock, Sentinel Dome, Cathedral Rocks, and Bridalveil Fall—seemed etched into the scene. Above, dark clouds boiled atop El Capitan, while wispy fog radiated from the valley floor.
Occasionally a tourist would wander up and request help identifying Horsetail’s microscopic filament on El Capitan’s vast granite; one or two even pointed at Bridalveil Fall and asked if that was Horsetail Fall. A couple of people, blissfully oblivious to the Horsetail Fall phenomenon, simply wanted their picture taken with this iconic Yosemite backdrop.
About 150 feet down the wall to my right, at least two-dozen photographers on tripods were inexplicably crammed into a significantly less desirable view. While that vantage point gave them an acceptable sightline to Horsetail Fall (as did my own), the rest of the magnificent Tunnel View vista was partially obscured by trees. The only explanation I could muster for their odd choice was that the first to arrive for some reason set up there, and each subsequent photographer assumed that since others had set up there, this must be the spot.
While Horsetail Fall was irrelevant to my objective this evening, the overnight snow still clinging to the trees was an undeniable bonus. Getting to Tunnel View had been an adventure, worse even than I’d expected, and I was glad that I’d allowed ample time. The difficulty started with a 30-minute (Horsetail Fall gawker infused) queue at the Arch Rock entrance station. My suspicion that these were mostly inexperienced photographers and tourists (who’d just read an article or seen a news segment and decided to check it out) was confirmed when I was forced to navigate a slalom course of slipping, sliding, spinning cars that had ignored the very clearly communicated chain controls. The serious photographers, those who had photographed Horsetail Fall before, or who had the sense to research the phenomenon well in advance, had been in position for the five-minute show for hours.
With the moon’s imminent arrival upon a scene that already bordered on visual overload, my plan to ensure that the main purpose of my visit didn’t get swallowed by Tunnel View’s conventional post-storm majesty was to start, while the moon was still right on the horizon, with extremely tight compositions. As the moon rose, I planned to widen my focal length, gradually including more scene and turning the moon into more of an accent.
To achieve this, I was flanked by two tripods, and had three camera bodies fired up and ready for action: my Sony a7RIII, a7RII, and a6300. Atop my Really Right Stuff TVC-24L tripod was my a6300 loaded with my Sony 100-400 GM and Sony 2X teleconverter. This combination gave me a 300-1200mm full-frame equivalent focal range (because the a6300 is a 1.5-crop APS-C sensor). When including the rising moon required reducing my focal length below 800mm, I’d switch to my higher resolution, full frame Sony a7RII. And because the moon would rise just about 20 minutes before sunset, I also had to be aware of the possibility that Horsetail Fall would fire up. To handle that possibility, and to cover all my general wide composition needs, mounted on my RRS TQC-14 tripod was my Sony a7RIII and Sony 24-105 f/4 lens.
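The focal-length bookkeeping here is simple multiplication; here’s a hedged sketch in Python (the teleconverter and crop-factor values come from the setup described above, everything else is illustrative):

```python
def full_frame_equivalent(focal_mm, teleconverter=1.0, crop_factor=1.0):
    """Full-frame-equivalent field of view: the teleconverter multiplies
    the actual focal length, and the crop factor narrows the view the
    same way a proportionally longer lens would on a full-frame body."""
    return focal_mm * teleconverter * crop_factor

# Sony 100-400 GM at 400mm + 2x teleconverter on the 1.5-crop a6300
print(full_frame_equivalent(400, teleconverter=2.0, crop_factor=1.5))  # 1200.0
```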
I pointed my a6300/100-400 at the point where I expected the moon to appear about 20 minutes before sunset, zoomed all the way out to 800mm (1200mm full-frame equivalent), metered, focused, and waited. I started clicking almost immediately after seeing the moon’s leading edge nudge through the trees, refining my composition slightly after each click until I had the right balance of moon and Half Dome. It always surprises me how quickly the moon moves, speed that’s magnified tremendously at such an extreme focal length. Spending the next 40 minutes frantically changing focal lengths, switching lenses and camera bodies, re-metering and re-focusing, and bouncing between tripods, I felt like the percussionist in a jazz band.
When the moon climbed far above Yosemite Valley and the dynamic range between the daylight-bright moon and nighttime landscape made photography impossible, I paused before packing up my gear and just marveled at the beauty. Horsetail Fall had caught a few late rays of sunlight but never did completely light up. I thought about the disappointment of frigid photographers who had waited patiently in the valley below for a show that didn’t happen, and counted my blessings.
Click an image for a closer look and slide show. Refresh the window to reorder the display.