Posted on April 18, 2022
Put me firmly in the camp of those who prefer reading the book to watching the movie. Watching a movie, my gaze is fixed as the scene unfurls before my eyes at a predetermined pace—if something requires scrutiny or triggers my imagination, I have to pause or rewind (often not an option—or at the very least, a source of irritation to others in the room). Reading a book, I’m free to pause, ponder, revisit, and imagine to my heart’s content.
This ability to control the pace of my relationship with the world may explain why I prefer still photography to video capture. Acknowledging that video provides expressive opportunities that still photography can’t, I can’t help but think the power of still photography is under-appreciated.
One important distinction is that the motion in a video is applied by the medium; in a still image, the source of the motion is my own eyes. And while a video dictates the pace of my relationship with the scene, entering the world of a photograph gives my eyes the freedom to linger and explore the scene’s nooks and crannies, to savor its nuances at my own pace.
This still photography bias could be explained by the fact that in virtually all aspects of my life, “think fast” is rarely my default response. Rather, given a choice, I prefer analysis and comprehension to instant reaction. This evaluate-first mindset might also explain why my favorite sport is baseball (which many consider “too slow”), and why I prefer chess and Scrabble to video games (the last video game I played was Pong).
So I guess it should be no surprise that, as a landscape photographer, my subjects don’t move. I love having the time to craft a scene—to position myself, frame my subjects, and manage the exposure variables (that control motion, light, and depth)—confident that when I’m finally ready, my subject will still be there.
But, as we all know (and as Spider-Man reminded us), with great power comes great responsibility. To succeed, we photographers must be sensitive to our viewer’s experience. Is it clear what the picture is about? Is there a place for the eye to land, and/or a path for the eye to follow?
Just plopping a viewer into the scene without any clues about what to do there is an invitation to a quick exit. Which is why I try in every image to include visual signals that guide my viewer’s eye and make it as clear as possible what they’re supposed to be doing in the world I’m offering. And once they’re there and have examined whatever it is I’m trying to show them, they’re much more inclined to explore further and discover more of the scene’s subtleties.
Visual signals can take many forms. One popular device that I very consciously try not to think about is “leading lines.” Not because I think they’re inherently bad or wrong, but because we’ve heard about them to the point of eye-glazing cliché, and I fear that many photographers (and photography contest judges) have given them too much power—at the expense of other similarly, or even more, important factors. I’m not saying that my images don’t use leading lines, I’m just saying that I only use them when they work organically, without conscious thought.
That said, I am drawn to diagonals—a rock, shoreline, leaning tree trunk, fallen log, and so on. While these diagonals can indeed connect objects and lead viewers’ eyes, I’m more interested in the diagonal’s power to simultaneously move the viewers’ eyes across two planes of my scene: up/down and left/right.
And any line, whether horizontal, vertical, or diagonal, doesn’t need to be an actual visible line—virtual lines work too. To understand the concept of a virtual line, I think in terms of “visual weight”: any object in my frame that, by virtue of its mass, brightness, position, or some other quality, creates enough visual gravity to pull a viewer’s eye in its direction. I try to avoid visually heavy objects that pull the eye away from the important parts of my frame, and to pair visually heavy objects that the viewer can subconsciously connect into a virtual line.
Another visual aid that I sometimes employ is a virtual frame—some object within the boundaries of my actual frame that holds my viewer’s eye in the scene, or nudges it back into the scene the way the cushion on a pool table bumps the ball back into the action.
About this image
In last week’s Yosemite Moonbow and Wildflowers workshop (in which we got neither moonbows nor wildflowers, but nevertheless enjoyed absolutely spectacular photography conditions), we made the 1 1/2 mile walk up to Mirror Lake. This is one of those hikes that’s as much about the journey as it is about the destination. Along the way I kept my eyes peeled for opportunities to pair Half Dome with churning Tenaya Creek. With Half Dome virtually straight up, for most of the way I was thwarted by the dense forest canopy, but as the trail steepened for its final ascent to the lake, I found a small gap I thought might work.
After climbing down among the jumbled boulders separating the trail from the creek, I pulled out my Sony a7RIV and attached my Sony 12-24 f/2.8 GM lens. While my Sony 16-35 GM would probably have worked here, I loved the extra room the 12-24 gave me to compose this scene that was beautiful from top to bottom.
With a little scrambling I was able to frame Half Dome with a pair of leaning tree trunks, dropping low to avoid blocking any of its face with a rogue branch. Not only did the leaning trunks provide a nice diagonal to move the eye, they also created a virtual frame to hold the eye in the scene. From my position I was also able to use the rushing creek to create a second diagonal. At 12mm, I was able to include many of the nearby rocks, the closest of which were no more than 2 feet away. These rocks made a great virtual frame across the bottom of the scene.
At 12mm and f/16, I knew I had plenty of focus wiggle room to achieve full front-to-back sharpness, and focused on a rock just a couple of feet into my scene. I wanted to put a slight blur in the water, but the 12-24 isn’t really filter-friendly (it can be done, but requires an expensive and awkward filter system that I haven’t found enough need for), so I couldn’t use a neutral density filter. Fortunately, the water here is so fast that getting the amount of blur I wanted wouldn’t be a problem. It turned out that ISO 50 and f/16 gave me enough blur to smooth the motion without losing its definition—exactly what I wanted.
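The “focus wiggle room” at 12mm and f/16 can be quantified with the standard hyperfocal-distance approximation. This is a sketch of that calculation, not anything from the post itself; it assumes a full-frame sensor with the conventional 0.030 mm circle of confusion:

```python
# Hyperfocal distance sketch -- thin-lens approximation.
# Assumption: full-frame sensor, circle of confusion c = 0.030 mm.
def hyperfocal_mm(focal_mm: float, aperture: float, coc_mm: float = 0.030) -> float:
    """H = f^2 / (N * c) + f. Focusing at H keeps everything from
    H/2 to infinity acceptably sharp."""
    return focal_mm ** 2 / (aperture * coc_mm) + focal_mm

h = hyperfocal_mm(12, 16)
print(f"Hyperfocal distance at 12mm, f/16: {h:.0f} mm (~{h / 304.8:.1f} ft)")
```

With a hyperfocal distance of roughly a foot, focusing on a rock a couple of feet into the scene renders essentially everything from the nearest rocks to Half Dome acceptably sharp, which is why an ultra-wide lens stopped down to f/16 leaves so much margin for error.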
It’s always interesting when I discover that I’d photographed a seemingly random scene before and used a similar composition. On the one hand, it’s a reminder to be careful not to get in a compositional rut, but on the other hand, it’s a confirmation that my compositional process is not random and likely reflective of my personal style.
One more thing
It’s interesting to compare these two images, captured almost exactly 5 years apart. The first one used 13mm, while last week’s was 12mm. The angle of view was similar but not identical, and I was closer to the creek in the first shot. The biggest difference between the two is the amount of sky and the amount of motion blur. Though I have no specific memory of my thoughts when I approached the earlier image, I know my process well enough to know exactly why they’re different. In the early image the sky was lousy (blank blue), and I composed to minimize it; in last week’s image I had clouds that were nice enough to justify including them. And the early image came after sunset (as I was walking back from Mirror Lake), so it was dark enough that it would have been very difficult to get anything but completely blurred water in the extremely fast (and much higher) creek.
Click an image for a closer look, and to view a slide show.
Posted on June 1, 2017
Most photographers know that Ansel Adams visualized his prints, and the darkroom work necessary to create them, before clicking the shutter. This ability to look into the future of each capture is part of what set Ansel Adams apart from his peers.
But Adams’ extensive darkroom work is often cited by digital photographers defending their over-processed images. We’ve all heard (and perhaps even uttered ourselves) statements like, “Ansel Adams spent more time in the darkroom than he did in the field,” or “Ansel Adams would love Photoshop.” Perhaps true, but using Ansel Adams’ darkroom mastery to justify extreme Photoshop processing misses a significant point: Adams’ mental picture of the ultimate print was founded on a synergistic relationship between his vision and his camera’s vision, coupled with a master’s control of capture variables like composition, light, motion, and depth. In other words, Adams’ gift wasn’t merely his darkroom skills, it was an overarching vision that enabled him to make decisions now based on invisible realities he knew he’d encounter later.
I bring this up because I’m concerned about many photographers’ Photoshop-centric “fix it later” approach that seriously undervalues capture technique. This mindset ranges from simple over-reliance on the LCD for exposure with no real understanding of the histogram or how metering works (shoot-review-adjust, shoot-review-adjust, shoot-review-adjust, until the picture looks okay), to photographers who channel their disappointment with an image into an overzealous Photoshop session, pumping color, adding “effects,” or inserting/removing objects until they achieve the ooooh-factor they crave.
The better approach is to understand the potential in a scene, anticipate the processing that will be required to make the most of it, and shoot accordingly. In other words, Photoshop should inform capture decisions, not fix them.
Every image ever shot, film or digital, was processed. Just as the processing piece was easy to ignore when the exposed film you sent to a lab magically returned as prints or slides, many digital shooters, forgetting that a jpeg capture is processed by their camera, brag that their jpeg images are “Exactly the way I shot them.” Trust me, they’re not.
Whether you shoot monochrome film, Fuji Velvia slides, or low-compression jpeg, there’s nothing inherently “pure” about your image. On the other hand, digital landscape photographers who understand that processing is unavoidable, rather than relinquish control of their finished product to black-box processing algorithms in the camera, usually opt for the control provided by raw capture and hands-on processing.
Unfortunately, Photoshop’s power makes it difficult for many to know where to draw the processing line. And every photographer draws that line in a different place—one man’s “manipulation” is another’s “masterpiece.” Photoshop isn’t a panacea; its main function should be to complement the creativity already achieved in the camera, and not to fix problems created (or missed) at capture.
While I’m not a big Photoshop user, I readily acknowledge that it’s an amazing tool that’s an essential part of my workflow. I particularly appreciate that Photoshop gives me the ability to achieve things that are possible with black and white film and a decent darkroom, but difficult-to-impossible with the color transparencies I shot for over 25 years.
I was in Yosemite on a “secret mission” (my inner 10-year-old just loves saying that) for Sony, trying out the yet-to-be-announced (at the time) Sony 12-24 f4 G lens. Among the many places in Yosemite that are especially conducive to ultra-wide photography is Mirror Lake and its view of Half Dome from directly below, and that’s where I started.
Walking up the trail to Mirror Lake, I skirted Tenaya Creek in less than ideal light, scouting potential scenes for later. On the walk back after sunset, I returned to this scene that I’d found and mentally composed earlier. Despite already having an idea of how I wanted to shoot it, there’s quite a bit going on here, so it took some time and a bit of rock scrambling to get all the elements to work together: Half Dome, Tenaya Creek, the nearby evergreen, and the creekside boulders.
While most of the scene was in deep shade, the sky was still relatively bright. Capturing this much dynamic range in an unprocessed jpeg (or color transparency) would have been impossible—my highlights would have been too bright, the color in the sky would have been washed out, and the shadow detail would have been lost to blackness. And that’s exactly what I saw in the jpeg that popped onto my LCD. But despite the crappy looking jpeg on the back of my camera, my histogram told me all my color and detail was there in my raw file.
With a good histogram, I adjusted my ISO up and down, compensating with a corresponding shutter speed adjustment, to get different blur effects in the creek. Opening the raw file in Lightroom, I simply pulled the Highlights slider to the left and the Shadows slider to the right to confirm my successful exposure. While the exposure adjustment was essential, once that was done, there was very little processing left to do. And as much as he enjoyed the darkroom, I suspect Ansel would have embraced any technology that gave him more time outdoors with his camera.
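The ISO/shutter trade described above is exposure reciprocity: at a fixed aperture, each halving of ISO requires doubling the shutter time to hold the same exposure. This sketch illustrates the arithmetic with hypothetical numbers (the post doesn’t give the actual settings used that evening):

```python
# Exposure reciprocity sketch: with aperture fixed, exposure is held
# constant when (shutter time x ISO) stays constant.
def equivalent_shutter(base_shutter_s: float, base_iso: int, new_iso: int) -> float:
    """Shutter time that preserves exposure when ISO changes, aperture fixed."""
    return base_shutter_s * base_iso / new_iso

# Hypothetical example: if ISO 400 metered at 1/60 s, dropping to ISO 50
# (three stops) stretches the shutter to 8/60 s -- eight times the motion blur.
slow = equivalent_shutter(1 / 60, base_iso=400, new_iso=50)
print(f"Equivalent shutter at ISO 50: {slow:.3f} s")
```

Walking the ISO up and down while compensating with shutter speed, as described above, lets the photographer audition different amounts of water blur without changing the overall exposure or the depth of field.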