Posted on January 6, 2019
A couple of weeks ago I wrote about how to photograph the moon big, the bigger the better, to overcome its tendency to (appear to) shrink in a wide angle image. But the moon doesn’t need to be big to be a striking addition to a landscape photo.
To balance a landscape frame, I think in terms of “visual gravity” (or “visual weight”): how much the scene’s various elements might pull the viewer’s eye. Unlike conventional gravity, which is a constant determined by an object’s mass (period, end of story), visual gravity is a more subjective quality that is a function of the characteristics of an object, such as its size, brightness, contrast, or color. Thinking in terms of the visual gravity of the various elements in my scene, I (usually) try to avoid any hemisphere of the frame feeling significantly heavier than its corresponding hemisphere (top/bottom, left/right).
Certainly any object as bright (and contrasty) as the moon will pull the eye. But after noticing that many objects at least as bright or contrasty as the moon somehow lack the moon’s ability to pull the eye, I realized I’d been missing an essential component of visual gravity: emotional connection. There is just something about the emotional pull of the moon that draws the human eye far more than its more tangible physical qualities might suggest.
For years I’ve tried to leverage the moon’s emotional weight, using it to elevate a relatively ordinary scene, or to add a simple accent that takes an already beautiful scene to the next level. Last month I got just such an opportunity at Valley View in Yosemite. This was the first night of my annual Yosemite Winter Moon photo workshop. I’d planned moonrises for the other three nights of the workshop, but hadn’t really plotted the first night because the moon would be so high at sunset, and during the moon’s twilight “sweet spot” (when the sky is dark enough for good contrast, but the landscape still has enough light to photograph) the moon wouldn’t align with Half Dome from any of Yosemite Valley’s Half Dome vantage points.
Nevertheless, I chose Valley View for sunset knowing that the moon might make a nice accent above Cathedral Rocks and Bridalveil Fall. As soon as we arrived it was clear the conditions had aligned for us on this chilly December evening. In the distance Bridalveil Fall disappeared into a blanket of dense fog hovering above Bridalveil Meadow, while the moon mingled with wispy clouds in the twilight pastels overhead. And at our feet, the Merced River made a perfect mirror.
I knew that capturing all this beauty required a fairly wide composition that would certainly shrink the moon. Because a horizontal composition that included the moon and its reflection would have to be so wide that it would shrink everything (and include a lot of less interesting foreground trees), I opted for a vertical composition that emphasized the scene’s primary elements: the moon, Cathedral Rocks, and Bridalveil Fall.
For this shot I went wide with my Sony 24-105 G lens on my Sony a7RIII body. Once I had the general arrangement of my frame worked out, I moved along the riverbank until everything felt balanced. I used the trees on the left to block the empty sky, and the trees on the right to balance them. And I’ve always liked the small diagonal tree a little left of center, and think in this composition it makes a good counterbalance for the visual weight of Bridalveil Fall.
Is the moon the primary subject the way it would likely be in a telephoto image? Certainly not. I know some people might think the moon is too small in this composition, but for someone like me, with a lifelong relationship with the night sky, the moon makes a perfect accent. And in this image I think just that little pinch of moon is enough to balance a frame that would otherwise be a little heavy on the left.
Posted on October 14, 2018
What’s the point?
It seems like one of photography’s great mysteries is achieving proper focus: the camera settings, where to place the focus point, even the definition of sharpness are all sources of confusion and angst. If you’re a tourist just grabbing snapshots, everything in your frame is likely at infinity and you can just put your camera in full auto mode and click away. But if you’re a photographic artist trying to capture something unique with your mirrorless or DSLR camera and doing your best to include important visual elements at different distances throughout your frame, you need to stop letting your camera decide your focus point and exposure settings.
Of course the first creative focus decision is whether you even want the entire frame sharp. While some of my favorite images use selective focus to emphasize one element and blur the rest of the scene, most (but not all) of what I’ll say here is about using hyperfocal techniques to maximize depth of field (DOF). I cover creative selective focus in much greater detail in another Photo Tip article: Creative Selective Focus.
Beware the “expert”
I’m afraid that there’s some bad, albeit well-intended, advice out there that yields just enough success to deceive people into thinking they’ve got focus nailed, a misperception that often doesn’t manifest until an important shot is lost. I’m referring to the myth that you should focus 1/3 of the way into the scene, or 1/3 of the way into the frame (two very different things, each with its own set of problems).
For beginners, or photographers whose entire scene is at infinity, the 1/3 technique may be a useful rule of thumb. But taking the 1/3 approach to focus requires that you understand DOF and the art of focusing well enough to adjust your focus point when appropriate, and once you achieve that level of understanding, you may as well do it the right way from the start. That ability becomes especially important in those scenes where missing the focus point by just a few feet or inches can make or break an image.
Back to the basics
Understanding a few basic focus truths will help you make focus decisions:
Depth of field discussions are complicated by the fact that “sharp” is a moving target that varies with display size and viewing distance. But it’s safe to say that all things equal, the larger your ultimate output and closer the intended viewing distance, the more detail your original capture should contain.
To capture detail a lens focuses light on the sensor’s photosites. Remember using a magnifying glass to focus sunlight and ignite a leaf when you were a kid? The smaller (more concentrated) the point of sunlight, the sooner the smoke appeared. In a camera, the finer (smaller) a lens focuses light on each photosite, the more detail the image will contain at that location. So when we focus we’re trying to make the light striking each photosite as concentrated as possible.
In photography we call that small circle of light your lens makes for each photosite its “circle of confusion.” The larger the CoC, the less concentrated the light and the more blurred the image will appear. Of course if the CoC is too small to be seen as soft, either because the print is too small or the viewer is too far away, it really doesn’t matter. In other words, areas of an image with a large CoC (relatively soft) can still appear sharp if small enough or viewed from far enough away. That’s why sharpness can never be an absolute term, and we talk instead about acceptable sharpness that’s based on print size and viewing distance. It’s actually possible for the same image to be sharp for one use, but too soft for another.
So how much detail do you need? The threshold for acceptable sharpness is pretty low for an image that just ends up on an 8×10 calendar on the kitchen wall, but if you want that image large on the wall above the sofa, achieving acceptable sharpness requires much more detail. And as your print size increases (and/or viewing distance decreases), the CoC that delivers acceptable sharpness shrinks correspondingly.
Many factors determine a camera’s ability to record detail. Sensor resolution of course—the more resolution your sensor has, the more important it becomes to have a lens that can take advantage of that extra resolution. And the more detail you want to capture with that high resolution sensor and tack-sharp lens, the more important your depth of field and focus point decisions become.
The foundation of a sound approach to maximizing sharpness for a given viewing distance and image size is hyperfocal focusing, an approach that uses viewing distance, f-stop, focal length, and focus point to ensure acceptable sharpness.
The hyperfocal point is the focus point that provides the maximum depth of field for a given combination of sensor size, f-stop, and focal length. Another way to say it is that the hyperfocal point is the closest you can focus and still be acceptably sharp to infinity. When focused at the hyperfocal point, your scene will be acceptably sharp from halfway between your lens and focus point all the way to infinity. For example, if the hyperfocal point for your sensor (full frame, APS-C, 4/3, or whatever), focal length, and f-stop combination is twelve feet away, focusing there will give you acceptable sharpness from six feet (half of twelve) to infinity—focusing closer will soften the distant scene; focusing farther will keep you sharp to infinity but extend the area of foreground softness.
Because the hyperfocal variable (sensor size, focal length, f-stop) combinations are too numerous to memorize, we usually refer to an external aid. That used to mean awkward printed tables with long columns and rows displayed in microscopic print—the more precise the data, the smaller the print. Fortunately, those have been replaced by smartphone apps that deliver more precise information in a much more accessible and readable form. We plug in all the variables and out pops the hyperfocal point distance, along with other useful information.
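As a rough sketch of the arithmetic such an app performs under the hood, here is the standard thin-lens hyperfocal approximation. The 0.03 mm circle of confusion is the common full-frame default; treat it (and the example settings) as assumptions, not gospel:

```python
# Hyperfocal distance from the standard approximation:
# H = f^2 / (N * c) + f, with everything in millimeters.
def hyperfocal_m(focal_mm: float, f_stop: float, coc_mm: float = 0.03) -> float:
    """Closest focus distance (in meters) that keeps infinity acceptably sharp."""
    h_mm = (focal_mm ** 2) / (f_stop * coc_mm) + focal_mm
    return h_mm / 1000.0

# Example: 24mm at f/11 on a full-frame sensor gives roughly 1.8 m (about 6 feet).
# Focus there and everything from about 0.9 m to infinity is acceptably sharp.
h = hyperfocal_m(24, 11)
near_limit = h / 2
```

Longer focal lengths and wider apertures push the hyperfocal point farther away, which is why wide lenses stopped down are so forgiving.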
You’re not as sharp as you think
Since people’s eyes start to glaze over when CoC comes up, they tend to use the default returned by the smartphone app. But just because the app tells you you’ve nailed focus, don’t assume that your work is done. An often overlooked aspect of hyperfocal focusing is that the app makes assumptions that aren’t necessarily right, and in fact are probably wrong.
The CoC your app uses to determine acceptable sharpness is a function of sensor size, display size, and viewing distance. But most apps’ hyperfocal tables assume that you’re creating an 8×10 print that will be viewed from a foot away—maybe valid 40 years ago, but not in this day of mega-prints. The result is a CoC three times larger than the eye can actually resolve.
That doesn’t invalidate hyperfocal focusing, but if you use published hyperfocal data from an app or table, your images’ DOF might not be as ideal as you think it is for your use. If you can’t specify a smaller CoC in your app, I suggest that you stop down a stop or so more than the app/table indicates. On the other hand, stopping down to increase sharpness is an effort of diminishing returns, because diffraction increases as the aperture shrinks and eventually will soften the entire image—I try not to go more than a stop smaller than my data suggests.
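To make the print-size effect concrete, here is a hedged sketch of how the acceptable CoC shrinks as the print grows. It assumes the common rules of thumb that the eye resolves about 0.2 mm at a 250 mm (10-inch) viewing distance and that the sensor’s long edge is 36 mm (full frame); both are assumptions, not fixed facts:

```python
def required_coc_mm(print_long_mm: float,
                    view_dist_mm: float = 250.0,
                    sensor_long_mm: float = 36.0) -> float:
    """Sensor-level circle of confusion needed for acceptable print sharpness."""
    # Blur the eye can tolerate on the print, scaled by viewing distance.
    print_blur_mm = 0.2 * (view_dist_mm / 250.0)
    # How much the sensor image is enlarged to reach print size.
    enlargement = print_long_mm / sensor_long_mm
    return print_blur_mm / enlargement

# A 10-inch print viewed at arm's length needs ~0.028 mm: the classic 0.03 mm default.
# A 30-inch print at the same distance needs a CoC roughly a third that size,
# which is why the app defaults are too generous for mega-prints.
small_print = required_coc_mm(254)   # 10 inches
big_print = required_coc_mm(762)     # 30 inches
```

Tripling the print size at a fixed viewing distance cuts the acceptable CoC to a third, which is exactly the mismatch described above.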
Keeping it simple
As helpful as a hyperfocal app can be, whipping out a smartphone for instant in-the-field access to data is not really conducive to the creative process. I’m a big advocate of keeping photography as simple as possible, so while I’m a hyperfocal focus advocate in spirit, I don’t usually use hyperfocal data in the field. Instead I apply hyperfocal principles in the field whenever I think the margin of error gives me sufficient wiggle room.
Though I don’t often use the specific hyperfocal data in the field, I find it helps a lot to refer to hyperfocal tables when I’m sitting around with nothing to do. So if I find myself standing in line at the DMV, or sitting in a theater waiting for a movie (I’m a great date), I open my iPhone hyperfocal app and plug in random values just to get a sense of the DOF for a given f-stop and focal length combination. I may not remember the exact numbers later, but enough of the information sinks in that I accumulate a general sense of the hyperfocal DOF/camera-setting relationships.
Finally, something to do
Unless I think I have very little DOF margin for error in my composition, I rarely open my hyperfocal app in the field. Instead, once my composition is worked out, I determine the closest object I want sharp—the closest object with visual interest (shape, color, texture), regardless of whether it’s a primary subject—and estimate its distance.
Of course these distances are very subjective and will vary with your focal length and composition (not to mention the strength of your pitching arm), but you get the idea. If you find yourself in a focus situation with a small margin for error and no hyperfocal app (or you just don’t want to take the time to use one), the single most important thing to remember is to focus behind your closest subject. Because you always have sharpness in front of your focus point, focusing on the closest subject wastes sharpness in front of it at the expense of distant sharpness. By focusing a little behind your closest subject, you increase the depth of your distant sharpness while (if you’re careful) keeping your foreground subject within the zone of sharpness in front of the focus point.
And finally, foreground softness, no matter how slight, is almost always a greater distraction than slight background softness. So, if it’s impossible to get all of your frame sharp, it’s usually best to ensure that the foreground is sharp.
Why not just automatically set my aperture to f/22 and be done with it? I thought you’d never ask. Without delving too far into the physics of light and optics, let’s just say that there’s a not so little light-bending problem called “diffraction” that robs your images of sharpness as your aperture shrinks—the smaller the aperture, the greater the diffraction. Then why not choose f/2.8 when everything’s at infinity? Because lenses tend to lose sharpness at their aperture extremes, and are generally sharper in their mid-range f-stops. So while diffraction and lens softness don’t sway me from choosing the f-stop that gives the DOF I want, I try to never choose an aperture bigger or smaller than I need.
Now that we’ve let the composition determine our f-stop, it’s (finally) time to actually choose the focus point. Believe it or not, with this foundation of understanding we just established, focus becomes pretty simple. Whenever possible, I try to have elements throughout my frame, often starting near my feet and extending far into the distance. When that’s the case, I stop down and focus on an object slightly behind my closest subject (the more distant my closest subject, the farther behind it I can focus).
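The "focus behind your closest subject" advice is easy to see in numbers. Here is a sketch using the usual thin-lens DOF approximations; the 1.8 m hyperfocal distance is an assumed example value (roughly 24mm at f/11 full frame), not a measurement:

```python
def dof_limits_m(focus_m: float, hyperfocal: float):
    """Near and far limits of acceptable sharpness for a given focus distance."""
    near = hyperfocal * focus_m / (hyperfocal + focus_m)
    if focus_m >= hyperfocal:
        far = float("inf")   # focused at or beyond the hyperfocal point
    else:
        far = hyperfocal * focus_m / (hyperfocal - focus_m)
    return near, far

H = 1.8  # assumed hyperfocal distance in meters

# Focus ON the closest subject at 1.5 m: sharp from ~0.82 m to only ~9 m.
on_subject = dof_limits_m(1.5, H)

# Focus BEHIND it, at the hyperfocal point: sharp from 0.9 m to infinity,
# and the 1.5 m subject is still comfortably inside the sharp zone.
behind_subject = dof_limits_m(H, H)
```

Focusing on the nearest subject squanders half a meter of sharpness in front of it while the distance goes soft; shifting the focus point slightly behind it buys back infinity at almost no foreground cost.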
When I’m not sure, or if I don’t think I can get the entire scene sharp, I err on the side of closer focus to ensure that the foreground is sharp. Sometimes before shooting I check my DOF with the DOF preview button, allowing time for my eye to adjust to the limited light. And when maximum DOF is essential and I know my margin for error is small, I don’t hesitate to refer to the DOF app on my iPhone.
A great thing about digital capture is the instant validation of the LCD—when I’m not sure, or when getting it perfect is absolutely essential, after capture I pop my image up on the LCD, magnify it to maximum, check the point or points that must be sharp, and adjust if necessary. Using this immediate feedback to make instant corrections really speeds the learning process.
Sometimes less is more
The depth of field you choose is your creative choice, and no law says you must maximize it. Use your camera’s limited depth of field to minimize or eliminate distractions, create a blur of background color, or simply to guide your viewer’s eye. Focusing on a near subject while letting the background go soft clearly communicates the primary subject while retaining enough background detail to establish context. And an extremely narrow depth of field can turn distant flowers or sky into a colorful canvas for your subject.
There’s no substitute for experience
No two photographers do everything exactly alike. Determining the DOF a composition requires, the f-stop and focal length that achieves the desired DOF, and where to place the point of maximum focus, are all part of the creative process that should never be left up to the camera. The sooner you grasp the underlying principles of DOF and focus, the sooner you’ll feel comfortable taking control and conveying your own unique vision.
About this image
Yosemite may not be New England, but it can still put on a pretty good fall color display. A few years ago I arrived at Valley View on the west side of Yosemite Valley just about the time the fall color was peaking. I found the Merced River filled with reflections of El Capitan and Cathedral Rocks, framed by an accumulation of recently fallen leaves still rich with vivid fall color.
To emphasize the colorful foreground, I dropped my tripod low and framed up a vertical composition. I knew my hyperfocal distance at 24mm and f/11 would be 5 or 6 feet, but with the scene ranging from the closest leaves at about 3 feet away out to El Capitan at infinity, I also knew I’d need to be careful with my focus choices. For a little more margin for error I stopped down to f/16, then focused on the nearest rocks, which were a little less than 6 feet away. As I usually do when I don’t have a lot of focus wiggle room, I magnified the resulting image on my LCD and moved the view from the foreground to the background to verify front-to-back sharpness.
Posted on September 2, 2018
My relationship with Yosemite rainbows goes all the way back to my childhood, when a rainbow arcing across the face of Half Dome made my father more excited than I believed possible for an adult. I look back on that experience as the foundation of my interest in photography, my relationship with Yosemite, and my love for rainbows. So, needless to say, photographing a rainbow in Yosemite is a pretty big deal for me.
A few years ago the promise (hope) of lightning drove me to Yosemite to wait in the rain on a warm July afternoon. But after sitting for hours on hard granite, all I got was wet. It became pretty clear that the storm wasn’t producing any lightning, but as the sky behind me started to brighten while the rain continued falling over Yosemite Valley, I realized that conditions were ripe for a rainbow. Sure enough, long after I would have packed up and headed home had I been focused solely on lightning, this rainbow was my reward.
The moral of my story is that despite all appearances to the contrary, rainbows are not random—when sunlight strikes raindrops, a rainbow occurs, every time. The reason we don’t always see the rainbow isn’t that it isn’t happening, it’s that we’re not in the right place. And that place, geometrically speaking, is always the same. Of course sometimes seeing the rainbow requires a superhero ability like levitation or teleportation, but when we’re armed with a little knowledge and anticipation, we can put ourselves in position for moments like this.
I can’t help with the anticipation part, but here’s a little knowledge infusion (excerpted from the Rainbow article in my Photo Tips section).
Energy generated by the sun bathes Earth in continuous electromagnetic radiation, its wavelengths ranging from extremely short to extremely long (and every wavelength in between). Among the broad spectrum of electromagnetic solar energy we receive are ultra-violet rays that burn our skin and longer infrared waves that warm our atmosphere. These wavelengths bookend a very narrow range of wavelengths the human eye sees.
Visible wavelengths are captured by our eyes and interpreted by our brain. When our eyes take in light consisting of the full range of visible wavelengths, we perceive it as white (colorless) light. We perceive color when some wavelengths are more prevalent than others. For example, when light strikes an opaque (solid) object such as a tree or rock, some of its wavelengths are absorbed; the wavelengths not absorbed are scattered. Our eyes capture this scattered light and send the information to our brain, which interprets it as a color. When light strikes water, some is absorbed and scattered by the surface, enabling us to see the water; some light passes through the water’s surface, enabling us to see what’s in the water; and some light is reflected by the surface, enabling us to see reflections.
(From this point on, for simplicity’s sake, it might help to visualize what happens when light strikes a single raindrop.)
Light traveling from one medium to another (e.g., from air into water) refracts (bends). Because different wavelengths refract by different amounts, the originally homogeneous white light separates into the multiple colors of the spectrum.
But simply separating the light into its component colors isn’t enough to create a rainbow–if it were, we’d see a rainbow whenever light strikes water. Seeing the rainbow spectrum caused by refracted light requires that the refracted light be returned to our eyes somehow.
A raindrop isn’t flat like a sheet of paper, it’s spherical, like a ball. Light that was refracted (and separated into multiple colors) as it entered the front of the raindrop, continues through to the back of the raindrop, where some is reflected. Red light reflects back at about 42 degrees, violet light reflects back at about 40 degrees, and the other spectral colors reflect back between 42 and 40 degrees. What we perceive as a rainbow is this reflection of the refracted light–notice how the top color of the primary rainbow is always red, and the bottom color is always violet.
Every raindrop struck by sunlight creates a rainbow. But just as the reflection of a mountain peak on the surface of a lake is visible only when viewed from the angle the reflection bounces off the lake’s surface, a rainbow is visible only when you’re aligned with the 40-42 degree angle at which the raindrop reflects the spectrum of rainbow colors.
Fortunately, viewing a rainbow requires no knowledge of advanced geometry. To locate or anticipate a rainbow, picture an imaginary straight line originating at the sun, entering the back of your head, exiting between your eyes, and continuing down into the landscape in front of you–this line points to the “anti-solar point,” an imaginary point exactly opposite the sun. With no interference, a rainbow would form a complete circle, skewed 42 degrees from the line connecting the sun and the anti-solar point–with you at the center. (We don’t see the entire circle because the horizon gets in the way.)
Because the anti-solar point is always at the center of the rainbow’s arc, a rainbow will always appear exactly opposite the sun (the sun will always be at your back). It’s sometimes helpful to remember that your shadow always points toward the anti-solar point. So when you find yourself in direct sunlight and rain, locating a rainbow is as simple as following your shadow and looking skyward–if there’s no rainbow, the sun’s probably too high.
Sometimes a rainbow appears as a majestic half-circle, arcing high above the distant terrain; other times it’s merely a small circle segment hugging the horizon. As with the direction of the rainbow, there’s nothing mysterious about its varying height. Remember, every rainbow would form a full circle if the horizon didn’t get in the way, so the amount of the rainbow’s circle you see (and therefore its height) depends on where the rainbow’s arc intersects the horizon.
While the center of the rainbow is always in the direction of the anti-solar point, the height of the rainbow is determined by the height of the anti-solar point, which will always be exactly the same number of degrees below the horizon as the sun is above the horizon. It helps to picture the line connecting the sun and the anti-solar point as a teeter-totter with you at the pivot: as one seat rises above you, the other drops below you. That means the lower the sun, the more of the rainbow’s circle you see and the higher it appears above the horizon; conversely, the higher the sun, the less of its circle is above the horizon and the flatter (and lower) the rainbow will appear.
Assuming a flat, unobstructed scene (such as the ocean), when the sun is on the horizon, so is the anti-solar point (in the opposite direction), and half of the rainbow’s 360 degree circumference will be visible. But as the sun rises, the anti-solar point drops—when the sun is more than 42 degrees above the horizon, the anti-solar point is more than 42 degrees below the horizon, and the only way you’ll see a rainbow is from a perspective above the surrounding landscape (such as on a mountaintop or on a canyon rim).
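All of this geometry reduces to one subtraction. A sketch, assuming the 42-degree primary-bow radius and a flat horizon:

```python
def rainbow_top_elevation_deg(sun_elevation_deg: float,
                              bow_radius_deg: float = 42.0) -> float:
    """Elevation of the top of the primary bow above a flat horizon.

    The bow is centered on the anti-solar point, which sits as many degrees
    below the horizon as the sun is above it, so the top of the arc is simply
    the bow radius minus the sun's elevation. A result at or below zero means
    the bow never clears a flat horizon.
    """
    return bow_radius_deg - sun_elevation_deg

# Sun on the horizon: the bow tops out at 42 degrees (a full half-circle).
# Sun at 30 degrees: a low 12-degree arc hugging the horizon.
# Sun above 42 degrees: a negative result, i.e. no bow from flat ground.
```

The same subtraction with 50 degrees in place of 42 locates the fainter secondary bow described below.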
Of course landscapes are rarely flat. Viewing a scene from above, such as from atop Mauna Kea in Hawaii or from the rim of the Grand Canyon, can reveal more than half of the rainbow’s circle. From an airplane, with the sun directly overhead, all of the rainbow’s circle can be seen, with the plane’s shadow in the middle.
Not all of the light careening about a raindrop goes into forming the primary rainbow. Some of the light slips out the back of the raindrop to illuminate the sky, and some is reflected inside the raindrop a second time. The refracted light that reflects a second time before exiting creates a secondary, fainter rainbow skewed 50 degrees from the anti-solar point. Since this is a second reflection, the order of the colors in the secondary rainbow is reversed.
And if the sky between the primary and secondary rainbows appears darker than the surrounding sky, you’ve found “Alexander’s band.” It’s caused by all the light machinations I just described–instead of all the sunlight simply passing through the raindrops to illuminate the sky, some of the light was intercepted, refracted, and reflected by the raindrops to form our two rainbows, leaving less light for the sky between the rainbows.
Posted on March 24, 2018
I’m afraid that making a living as a photographer sometimes means exchanging time to take pictures for time to make money. On the other hand, my schedule is mine alone, which means when there’s something I really, really want to photograph, such as a moonrise or fresh snow in Yosemite, I can usually arrange my schedule to make it happen. The moon shoots I can plan a year or more in advance, but snow requires a little more vigilance and flexibility.
Early this month, with hints of snow coming to Yosemite Valley, I started clearing space in my schedule. At 4000 feet, Yosemite Valley is often right on the snow-line, so a swing of just a couple hundred feet in either direction can mean the difference between snow and soggy. After watching the weather reports vacillate between snow and rain all week (and adjusting plans more than once), my buddy Mark and I took a chance and made the drive to Yosemite, visions of snowflakes dancing in our heads.
Waiting at the traffic-light-controlled, one-lane detour around the Ferguson Slide on Highway 140, I watched dozens of westbound headlights file past the four or five eastbound taillights idling at the light in front of us. With a storm imminent, it occurred to me that we were participating in a kind of changing of the guard, where the evacuating tourists are replaced by a much smaller contingent of what could only be photographers.
We arrived in Yosemite Valley at about the same time as the rain, circled the valley, secured a cheap room at Yosemite Valley Lodge (in Yosemite, any night with plumbing and solid walls for $150 is in fact a steal), and went to dinner. When the rain continued through dinner and all the way until bedtime, I began to fear the weather report had vacillated once more in the wrong direction.
Peeking out the window at around 4:00 a.m. and seeing more rain, I grudgingly turned off the alarm I’d optimistically set for 6:00 a.m. and went back to sleep. The next thing I knew, Mark was waking me at 6:10 to report six inches of fresh snow, and it was still falling. By 6:15 we were bundled and searching for my car in a parking lot filled with identical white lumps.
The rest of the morning was a blur as Mark and I darted from pristine location to pristine location, marveling at how a few hours of snow can completely transform months of accumulated grime and a thirsty forest dotted with dead and dying trees. For those few hours, Yosemite was new again.
At our first stop, El Capitan Meadow, we photographed El Capitan and Cathedral rocks battling the clouds for dominance. Down the road at Valley View, the snow continued falling but the granite was winning and I soon found myself admiring the reflection of Cathedral Rocks and Bridalveil Fall just upriver from the parking area.
Normally the thin branches overhanging this vantage point are a distraction to avoid, but glazed with snow, they had the potential to make a perfect frame. The reflection was the easy part, but somehow I had to figure out how to feature it and the branches without the branches obliterating the rest of the scene.
To separate Bridalveil Fall and Cathedral Rocks from the glazed branches, I splayed my tripod’s legs and dropped it to the ground, then scooted up to the river’s edge. That still left a few branches dangling too low, so I pushed my camera out even farther by extending one tripod leg into the river. I was aided immensely by the articulating screen of my Sony a7RIII—while I still needed to sit in the snow to get low enough to compose and control my camera, I very much appreciated the ability to sit and look down at my LCD rather than sprawl on my stomach in the snow to get my eye to the viewfinder.
When photographing a scene that includes a reflection and nearby objects, it’s important to remember that the focus point of a reflection is the focus point of the reflective subject, not the reflective surface. (I’ll pause here for a few seconds to let you process this because it’s important.) In this case I was at 16mm; at f/11 that gave me a hyperfocal distance of less than four feet; with the branches about five feet away, front-to-back sharpness wouldn’t be a problem, even focused at infinity. Nevertheless, I chose f/14 for this shot, not for more depth of field, but to (along with ISO 50) stretch my shutter speed enough to smooth a few small ripples in the reflection.
Excitement about a scene can overwhelm good sense—we see something that moves us, and quickly point the camera and click with more enthusiasm than thought. While this approach may indeed record memories and impress friends, it almost certainly denies the scene the attention it deserves. I was indeed very excited about this scene, but between the depth of field, reflection, overhanging branches, moving water, dominant background subjects, not to mention the awkwardness of my position, I had many moving parts to consider.
Rather than attempt perfection on the first click, I addressed the obvious stuff (covered above) with a “rough draft” click. Draft image in hand, I popped my camera off the tripod, stood (ahhhhh), and evaluated my result. I immediately saw two things to address: first, I wanted Cathedral Rocks better framed by the branches; second, I wanted the mid-river, snow-capped rocks away from the right edge of my frame.
I returned my camera to live-view, dropped to ground-level, and replaced the camera on my tripod. Because I hadn’t touched the tripod, the scene on my live-view LCD was the very scene I’d just reviewed—making my prescribed adjustments was a simple matter of panning right a couple of inches and pushing the tripod a little farther into the river. Click.
I love my job.
Posted on January 28, 2018
The downside of turning your passion into your profession is that so many decisions are no longer based on the pleasure they bring. Since my early 20s, I’d been very happy as an amateur photographer, picking my photo destinations and the images I clicked for the sheer joy of it. But I knew becoming a professional photographer risked preempting that joy with photography decisions designed to pay the bills.
For that reason, part of my decision to become professional a dozen or so years ago included a personal vow to only photograph what I want to photograph, and to never take a picture just because I thought it would make money. I was able to blend my years of photography experience with my prior career in technical communications (tech writing, training, and support) to create a photography business based on photo workshops, not image sales. Of course I do sell images too, but I’ve always viewed image sales as a bonus rather than something I rely on.
I’m thinking about this right now because this image reminds me how little time I actually have to work on my images. I’d totally forgotten about this afternoon from last April, when a storm cleared to reveal a dusting of fresh snow on the granite surrounding Yosemite Valley. As we stood marveling at the majesty, a ray of sun burst through the clouds to paint a vivid rainbow in the mist gathered beneath Bridalveil Fall.
It’s finds like this that remind me of the hundreds (thousands?) of images waiting to be processed and shared, some going back more than ten years. This isn’t a complaint—I can’t imagine a better life than mine. In fact, instead of lamenting the inability to reap the fruits of my labor, I find comfort in the knowledge of these images’ existence. Even if I never process and share them, they’re a reminder of my good fortune. If there’s a lesson here, maybe it’s that, for me at least, the true joy of photography isn’t the images and the acclaim they evoke, it’s simply the act of capturing them.
Posted on March 21, 2017
One perk of being a photographer is the opportunity to experience normally crowded locations in relative peace. That’s because the best nature photography usually happens at most people’s least favorite times to be outside: during crazy weather and after dark. A couple of weeks ago in Yosemite I got the opportunity to enjoy both.
After spending Sunday guiding a couple around Yosemite Valley in a snowstorm, I dropped them back at (the hotel formerly known as) The Ahwahnee with nothing but the drive home on my mind. But winding through the valley in the fading twilight I saw signs of clearing skies and made a snap decision to check out the scene at Tunnel View.
I found the vista at Tunnel View gloriously empty. By the time I’d set up my camera and tripod the darkness was nearly complete, but as my eyes adjusted I could make out large, black holes in the once solid clouds overhead. Soon stars dotted the blackness above El Capitan and the white stripe of Bridalveil Fall. Each time light from the waxing gibbous moon slipped through the shifting clouds, the entire landscape lit up as if someone had flipped a switch.
Because the best parts of the view were in a narrow strip starting with the snow-glazed trees beneath me and continuing through the scene and up into the star-studded sky, I opted for a vertical composition. To include as much foreground and sky as possible, I went nearly as wide as my 16-35 lens would allow, more or less centering El Capitan and Bridalveil Fall to give the snow and stars equal billing.
Being completely comfortable with my a7RII’s high ISO performance, I didn’t stress about the ISO 1250 that allowed me to stop down to a slightly sharper f/5.6 (virtually every lens is a little sharper stopped down from its largest aperture). Night focus with the Sony a7RII is extremely easy, easier than any camera I’ve ever used that isn’t an a7S/a7SII. Often I manually focus on the stars and use focus peaking* to tell me I’m sharp; in this case I back-button auto-focused on the contrast between the moonlit snow and dark granite near Bridalveil Fall. I chose a shutter speed long enough to capture motion blur in the rapidly moving clouds, knowing the potential for visible star streaking was minimized by my extremely wide focal length.
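The connection between focal length and star streaking can be sketched with the so-called "500 rule"—a common rule of thumb (my assumption here; the post doesn’t name it) that estimates the longest full-frame exposure, in seconds, before stars visibly trail as roughly 500 divided by the focal length:

```python
# "500 rule" rule of thumb: longest exposure (seconds) before stars
# visibly streak on a full-frame sensor is about 500 / focal length.
# This is an approximation; high-resolution sensors may demand less.

def max_shutter_seconds(focal_mm: float) -> float:
    """Estimate the longest star-friendly exposure for a focal length."""
    return 500.0 / focal_mm

print(max_shutter_seconds(16))    # prints 31.25 -- a 20s exposure is comfortable
print(max_shutter_seconds(200))   # prints 2.5  -- telephotos streak quickly
```

At 16mm the estimate is over 30 seconds, which is why a 20-second exposure keeps the stars as points while still blurring the clouds.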
My favorite thing about that evening? The 20 seconds my shutter was open, when I didn’t have anything to do but stand there and enjoy the view in glorious silence.
Posted on March 14, 2017
A week or so ago I had the good fortune to be in Yosemite for the most recent snowfall there. All week the National Weather Service had been waffling a bit on the snow—based on the forecast, I probably wouldn’t have made the trip. But I was there anyway, guiding a fun couple from England for the weekend. Following a nice but unspectacular Saturday, we woke Sunday morning to find the world dipped in white.
The snow fell all day, at times so hard that it was difficult to see more than a couple hundred yards, other times dwindling to a few flakes per minute. During one of the lulls we made our way to Tunnel View for the obligatory shot there. Despite hundreds (thousands?) of pictures of this view, after surveying the scene for a few minutes I couldn’t resist pulling out my camera and tripod.
My general feeling is that people tend to go too wide with their Tunnel View images, shrinking the main features (El Capitan, Half Dome, Bridalveil Fall) to include less exciting granite left of El Capitan and right of Cathedral Rocks/Bridalveil Fall. That’s why I opt to tighten my horizontal Tunnel View compositions on the left and right, or isolate one or two of the three primary subjects with a telephoto. And when something exciting is happening in the sky (moon, clouds, or color) or foreground (fog, snow, rainbow), I’ll often compose vertically and bias my composition to favor the most compelling part of the scene.
With so many Tunnel View images in my portfolio, that afternoon I consciously set aside my long-held composition biases in favor of something I don’t already have. Of course the feature that most set the scene apart was the snow, so I set out to find the best way to emphasize it. Because the snow level that day was right around 4000 feet, also the elevation of Yosemite Valley, even the three hundred or so feet of elevation gain at Tunnel View resulted in much more snow virtually at my feet than on the distant valley floor. My Sony/Zeiss 16-35 f/4 lens, a great lens that I usually find too wide for Tunnel View, was perfect for highlighting the foreground snow.
Dialing my focal length to about 20mm allowed me to maximize the foreground snow while including minimal less-than-interesting gray sky. Of course going this wide meant shrinking the scene’s “big three” and adding lots of extraneous middle-ground on the left and right. To mitigate that problem I used the snowy pine on the left, often an obtrusive distraction to be dealt with, as a frame for that side of the scene. Not only did the tree block less interesting features, it actually enhanced the snowy effect I sought. On the right the diagonal ridge added a touch of visual motion (diagonal lines are so much stronger visually than horizontal and vertical lines), and it didn’t hurt that much of the bland granite there was covered with snow.