Posted on October 14, 2018
What’s the point?
It seems like one of photography’s great mysteries is achieving proper focus: the camera settings, where to place the focus point, even the definition of sharpness are all sources of confusion and angst. If you’re a tourist just grabbing snapshots, everything in your frame is likely at infinity and you can just put your camera in full auto mode and click away. But if you’re a photographic artist trying to capture something unique with your mirrorless or DSLR camera and doing your best to have important visual elements at different distances throughout your frame, you need to stop letting your camera decide your focus point and exposure settings.
Of course the first creative focus decision is whether you even want the entire frame sharp. While some of my favorite images use selective focus to emphasize one element and blur the rest of the scene, most (but not all) of what I’ll say here is about using hyperfocal techniques to maximize depth of field (DOF). I cover creative selective focus in much greater detail in another Photo Tip article: Creative Selective Focus.
Beware the “expert”
I’m afraid that there’s some bad, albeit well-intended, advice out there that yields just enough success to deceive people into thinking they’ve got focus nailed, a misperception that often doesn’t manifest until an important shot is lost. I’m referring to the myth that you should focus 1/3 of the way into the scene, or 1/3 of the way into the frame (two very different things, each with its own set of problems).
For beginners, or photographers whose entire scene is at infinity, the 1/3 technique may be a useful rule of thumb. But taking the 1/3 approach to focus requires that you understand DOF and the art of focusing well enough to adjust your focus point when appropriate, and once you achieve that level of understanding, you may as well do it the right way from the start. That ability becomes especially important in those scenes where missing the focus point by just a few feet or inches can make or break an image.
Back to the basics
Understanding a few basic focus truths will help you make focus decisions:
Depth of field discussions are complicated by the fact that “sharp” is a moving target that varies with display size and viewing distance. But it’s safe to say that, all things being equal, the larger your ultimate output and the closer the intended viewing distance, the more detail your original capture should contain.
To capture detail a lens focuses light on the sensor’s photosites. Remember using a magnifying glass to focus sunlight and ignite a leaf when you were a kid? The smaller (more concentrated) the point of sunlight, the sooner the smoke appeared. In a camera, the finer (smaller) a lens focuses light on each photosite, the more detail the image will contain at that location. So when we focus we’re trying to make the light striking each photosite as concentrated as possible.
In photography we call that small circle of light your lens makes for each photosite its “circle of confusion.” The larger the CoC, the less concentrated the light and the more blurred the image will appear. Of course if the CoC is too small to be seen as soft, either because the print is too small or the viewer is too far away, it really doesn’t matter. In other words, areas of an image with a large CoC (relatively soft) can still appear sharp if small enough or viewed from far enough away. That’s why sharpness can never be an absolute term, and we talk instead about acceptable sharpness that’s based on print size and viewing distance. It’s actually possible for the same image to be sharp for one use, but too soft for another.
So how much detail do you need? The threshold for acceptable sharpness is pretty low for an image that just ends up on an 8×10 calendar on the kitchen wall, but if you want that image large on the wall above the sofa, achieving acceptable sharpness requires much more detail. And as your print size increases (and/or viewing distance decreases), the CoC that delivers acceptable sharpness shrinks correspondingly.
Many factors determine a camera’s ability to record detail. Sensor resolution of course—the more resolution your sensor has, the more important it becomes to have a lens that can take advantage of that extra resolution. And the more detail you want to capture with that high resolution sensor and tack-sharp lens, the more important your depth of field and focus point decisions become.
The foundation of a sound approach to maximizing sharpness for a given viewing distance and image size is hyperfocal focusing, an approach that uses viewing distance, f-stop, focal length, and focus point to ensure acceptable sharpness.
The hyperfocal point is the focus point that provides the maximum depth of field for a given combination of sensor size, f-stop, and focal length. Another way to say it is that the hyperfocal point is the closest you can focus and still be acceptably sharp to infinity. When focused at the hyperfocal point, your scene will be acceptably sharp from halfway between your lens and focus point all the way to infinity. For example, if the hyperfocal point for your sensor (full frame, APS-C, 4/3, or whatever), focal length, and f-stop combination is twelve feet away, focusing there will give you acceptable sharpness from six feet (half of twelve) to infinity—focusing closer will soften the distant scene; focusing farther will keep you sharp to infinity but extend the area of foreground softness.
Because the hyperfocal variable (sensor size, focal length, f-stop) combinations are too numerous to memorize, we usually refer to an external aid. That used to be awkward printed tables with long columns and rows displayed in microscopic print (the more precise the data, the smaller the print). Fortunately, those have been replaced by smartphone apps that deliver more precise information in a much more accessible and readable form. We plug in all the variables and out pops the hyperfocal point distance and other useful information.
It usually goes something like this:
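Under the hood, these apps all compute the same widely published approximation (the formula isn’t spelled out in this article, but it’s standard optics): H = f²/(N·c) + f, where f is the focal length, N is the f-stop, and c is the circle of confusion. A minimal sketch in Python, assuming the conventional full-frame CoC of 0.03mm:

```python
def hyperfocal_mm(focal_mm, f_stop, coc_mm=0.03):
    """Standard hyperfocal approximation: H = f^2 / (N * c) + f."""
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

MM_PER_FOOT = 304.8

# 24mm lens at f/11 on a full-frame sensor
h = hyperfocal_mm(24, 11)
print(f"hyperfocal point: {h / MM_PER_FOOT:.1f} ft")           # ~5.8 ft
print(f"sharp from {h / 2 / MM_PER_FOOT:.1f} ft to infinity")  # ~2.9 ft
```

Focus at that distance and everything from half the hyperfocal distance to infinity falls within acceptable sharpness, exactly the twelve-feet/six-feet relationship described above.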
You’re not as sharp as you think
Since people’s eyes start to glaze over when CoC comes up, they tend to use the default returned by the smartphone app. But just because the app tells you you’ve nailed focus, don’t assume that your work is done. An often overlooked aspect of hyperfocal focusing is that the app makes assumptions that aren’t necessarily right, and in fact are probably wrong.
The CoC your app uses to determine acceptable sharpness is a function of sensor size, display size, and viewing distance. But most apps’ hyperfocal tables assume that you’re creating an 8×10 print that will be viewed from a foot away—maybe valid 40 years ago, but not in this day of mega-prints. The result is a CoC three times larger than the eye’s ability to resolve.
That doesn’t invalidate hyperfocal focusing, but if you use published hyperfocal data from an app or table, your images’ DOF might not be as ideal as you think it is for your use. If you can’t specify a smaller CoC in your app, I suggest that you stop down a stop or so more than the app/table indicates. On the other hand, stopping down to increase sharpness is an effort of diminishing returns, because diffraction increases as the aperture shrinks and eventually will soften the entire image—I try not to go more than a stop smaller than my data suggests.
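To see how much the assumed CoC matters, compare the hyperfocal point at the common 0.03mm full-frame default against a stricter value for a large print. The 0.01mm figure below is a hypothetical large-print standard (one third of the default, following the 3× mismatch described above), not a published constant:

```python
def hyperfocal_mm(focal_mm, f_stop, coc_mm):
    # H = f^2 / (N * c) + f
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

MM_PER_FOOT = 304.8

# 24mm lens at f/11: a typical app default CoC vs. a CoC one third that size
loose = hyperfocal_mm(24, 11, 0.03)   # what a typical app reports
strict = hyperfocal_mm(24, 11, 0.01)  # hypothetical large-print standard
print(f"default CoC:  sharp from {loose / 2 / MM_PER_FOOT:.1f} ft")   # ~2.9 ft
print(f"strict CoC:   sharp from {strict / 2 / MM_PER_FOOT:.1f} ft")  # ~8.6 ft

# stopping down about a stop claws back much of that lost foreground depth
stopped = hyperfocal_mm(24, 16, 0.01)
print(f"strict, f/16: sharp from {stopped / 2 / MM_PER_FOOT:.1f} ft")  # ~5.9 ft
```

The exact numbers matter less than the shape of the result: a demanding print size can push the true hyperfocal point several feet beyond what the app’s default suggests, which is why the extra stop helps.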
Keeping it simple
As helpful as a hyperfocal app can be, whipping out a smartphone for instant in-the-field access to data is not really conducive to the creative process. I’m a big advocate of keeping photography as simple as possible, so while I’m a hyperfocal focus advocate in spirit, I don’t usually use hyperfocal data in the field. Instead I apply hyperfocal principles in the field whenever I think the margin of error gives me sufficient wiggle room.
Though I don’t often use the specific hyperfocal data in the field, I find it helps a lot to refer to hyperfocal tables when I’m sitting around with nothing to do. So if I find myself standing in line at the DMV, or sitting in a theater waiting for a movie (I’m a great date), I open my iPhone hyperfocal app and plug in random values just to get a sense of the DOF for a given f-stop and focal length combination. I may not remember the exact numbers later, but enough of the information sinks in that I accumulate a general sense of the hyperfocal DOF/camera-setting relationships.
Finally, something to do
Unless I think I have very little DOF margin for error in my composition, I rarely open my hyperfocal app in the field. Instead, once my composition is worked out, I determine the closest object I want sharp (the closest object with visual interest: shape, color, texture), regardless of whether it’s a primary subject, and focus slightly beyond it.
Of course these distances are very subjective and will vary with your focal length and composition (not to mention the strength of your pitching arm), but you get the idea. If you find yourself in a small margin for error focus situation without a hyperfocal app (or you just don’t want to take the time to use one), the single most important thing to remember is to focus behind your closest subject. Because you always have sharpness in front of your focus point, focusing on the closest subject gives you unnecessary sharpness at the expense of distant sharpness. By focusing a little behind your closest subject, you’re increasing the depth of your distant sharpness while (if you’re careful) keeping your foreground subject within the zone of sharpness in front of the focus point.
And finally, foreground softness, no matter how slight, is almost always a greater distraction than slight background softness. So, if it’s impossible to get all of your frame sharp, it’s usually best to ensure that the foreground is sharp.
Why not just automatically set my aperture to f/22 and be done with it? I thought you’d never ask. Without delving too far into the physics of light and optics, let’s just say that there’s a not so little light-bending problem called “diffraction” that robs your images of sharpness as your aperture shrinks—the smaller the aperture, the greater the diffraction. Then why not choose f/2.8 when everything’s at infinity? Because lenses tend to lose sharpness at their aperture extremes, and are generally sharper in their mid-range f-stops. So while diffraction and lens softness don’t sway me from choosing the f-stop that gives the DOF I want, I try to never choose an aperture bigger or smaller than I need.
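The diffraction penalty can be roughed out with the Airy disk diameter, d ≈ 2.44·λ·N, a standard optics approximation (not from this article). Assuming green light at 550nm:

```python
def airy_disk_mm(f_stop, wavelength_mm=550e-6):
    """Approximate Airy disk diameter (first minimum): d = 2.44 * wavelength * N."""
    return 2.44 * wavelength_mm * f_stop

for n in (8, 11, 16, 22):
    print(f"f/{n}: {airy_disk_mm(n):.4f} mm")
# At f/22 the disk is ~0.0295 mm, already about the size of the standard
# 0.03 mm full-frame CoC, so diffraction alone consumes the sharpness budget.
```

That back-of-the-envelope math is why f/22 is a last resort rather than a default.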
Now that we’ve let the composition determine our f-stop, it’s (finally) time to actually choose the focus point. Believe it or not, with this foundation of understanding we just established, focus becomes pretty simple. Whenever possible, I try to have elements throughout my frame, often starting near my feet and extending far into the distance. When that’s the case I stop down and focus on an object slightly behind my closest subject (the more distant my closest subject, the farther behind it I can focus).
When I’m not sure, or if I don’t think I can get the entire scene sharp, I err on the side of closer focus to ensure that the foreground is sharp. Sometimes before shooting I check my DOF with the DOF preview button, allowing time for my eye to adjust to the limited light. And when maximum DOF is essential and I know my margin for error is small, I don’t hesitate to refer to the DOF app on my iPhone.
A great thing about digital capture is the instant validation of the LCD—when I’m not sure, or when getting it perfect is absolutely essential, after capture I pop my image up on the LCD, magnify it to maximum, check the point or points that must be sharp, and adjust if necessary. Using this immediate feedback to make instant corrections really speeds the learning process.
Sometimes less is more
The depth of field you choose is your creative choice, and no law says you must maximize it. Use your camera’s limited depth of field to minimize or eliminate distractions, create a blur of background color, or simply to guide your viewer’s eye. Focusing on a near subject while letting the background go soft clearly communicates the primary subject while retaining enough background detail to establish context. And an extremely narrow depth of field can turn distant flowers or sky into a colorful canvas for your subject.
There’s no substitute for experience
No two photographers do everything exactly alike. Determining the DOF a composition requires, the f-stop and focal length that achieves the desired DOF, and where to place the point of maximum focus, are all part of the creative process that should never be left up to the camera. The sooner you grasp the underlying principles of DOF and focus, the sooner you’ll feel comfortable taking control and conveying your own unique vision.
About this image
Yosemite may not be New England, but it can still put on a pretty good fall color display. A few years ago I arrived at Valley View on the west side of Yosemite Valley just about the time the fall color was peaking. I found the Merced River filled with reflections of El Capitan and Cathedral Rocks, framed by an accumulation of recently fallen leaves still rich with vivid fall color.
To emphasize the colorful foreground, I dropped my tripod low and framed up a vertical composition. I knew my hyperfocal distance at 24mm and f/11 would be 5 or 6 feet, but with the scene ranging from the closest leaves at about 3 feet away out to El Capitan at infinity, I also knew I’d need to be careful with my focus choices. For a little more margin for error I stopped down to f/16, then focused on the nearest rocks which were a little less than 6 feet away. As I usually do when I don’t have a lot of focus wiggle room, I magnified the resulting image on my LCD and moved the view from the foreground to the background to verify front-to-back sharpness.
Posted on September 27, 2018
As we enter the fall color photography season, I’m revisiting and revising previous articles. This is the second in the series.
Vivid color and crisp reflections make autumn my favorite season for creative photography. While most landscape scenes require showing up at the right time and hoping for the sun and clouds to cooperate, photographing fall color is often a simple matter of circling the scene until the light’s right. For the photographers who understand this, and know how to control exposure, depth, and motion with their cameras, great fall color images are possible any time of day, in any light.
Backlight, backlight, backlight
The difference between the front-lit and backlit sides of fall foliage is the difference between dull and vivid color. When illuminated by direct sunlight, the side of a leaf opposite the sun throbs with color, as if it has its own source of illumination, while the same leaf’s lit side appears flat—if you ever find yourself thinking that the fall color seems washed out, check the other side of the tree.
While the backlight glow isn’t as pronounced in shade or overcast, when the leaves are illuminated by light that’s spread evenly across the sky, even diffuse light is far more pronounced on one side of the leaves than the other, giving the side of a leaf that faces away from the sky (the side getting less light) a subtle but distinct glow when compared to its skyward side.
Isolate elements with a telephoto for a more intimate fall color image
Big fall color scenes are great, but a telephoto or macro enables you to highlight and emphasize elements and relationships. Train your eye to find leaves, groups of leaves, or branches that stand out from the rest of the scene. Zoom close, using the edges of the frame to eliminate distractions and frame subjects. And don’t concentrate so much on your primary subject that you miss complementary background or foreground elements to balance the frame and provide an appealing canvas for your subject.
Selective depth of field is a great way to emphasize/deemphasize elements in a scene
Limiting depth of field with a large aperture on a telephoto lens can soften a potentially distracting background into a complementary canvas of color and shape. Parallel tree trunks, other colorful leaves, and reflective water make particularly effective soft background subjects. For an extremely soft background, reduce your depth of field further by adding an extension tube to focus closer.
Underexpose sunlit leaves to maximize color
Contrary to what many believe, fall foliage in bright sunlight is still photographable if you isolate backlit leaves against a darker background and slightly underexpose them. The key here is making sure the foliage is the brightest thing in the frame, and to avoid including any sky in the frame. Photographing sunlit leaves, especially with a large aperture to limit DOF, has the added advantage of an extremely fast shutter speed that will freeze wind-blown foliage.
Slightly underexposing brightly lit leaves not only emphasizes their color, it turns everything that’s in shade to a dark background. And if your depth of field is narrow enough, points of light sneaking between the leaves and branches to reach your camera will blur to glowing jewels.
A sunstar is a great way to liven up an image in extreme light
If you’re going to be shooting backlit leaves, you’ll often find yourself fighting the sun. Rather than trying to overcome it, turn the sun into an ally by hiding it behind a tree. A small aperture (f/16 or smaller is my general rule) with a small sliver of the sun’s disk visible creates a brilliant sunstar that becomes the focal point of your scene. Unlike photographing a sunstar on the horizon, hiding the sun behind a terrestrial object like a tree or rock enables you to move with the sun.
When you get a composition you like, try several frames, varying the amount of sun visible in each. The smaller the sliver of sun, the more delicate the sunstar; the more sun you include, the bolder the sunstar. You’ll also find that different lenses render sunstars differently, so experiment to see which lenses and apertures work best for you.
Polarize away the foliage’s natural sheen
Fall foliage has a reflective sheen that dulls its natural color. A properly oriented polarizer can erase that sheen and bring the underlying natural color into prominence. To minimize the scene’s reflection, slowly turn the polarizer until the scene is darkest (the more you try this, the easier it will be to see). If you have a hard time seeing the difference, concentrate your gaze on a single leaf, rock, or wet surface.
A polarizer isn’t an all-on or all-off proposition. Slowly dial the polarizer’s ring and watch the reflection change until you achieve the effect you desire. This is particularly effective when you want your reflection to share the frame with submerged features such as rocks, leaves, and grass.
Blur water with a long exposure
When photographing in overcast or shade, it’s virtually impossible to freeze the motion of rapid water at any kind of reasonable ISO. Rather than fight it, use this opportunity to add silky water to your fall color scenes. There’s no magic shutter speed for blurring water—in addition to the shutter speed, the amount of blur will depend on the speed of the water, your distance from the water, your focal length, and your angle of view relative to the water’s motion. When you find a composition you like, don’t stop with one click. Experiment with different shutter speeds by varying the ISO (or aperture as long as you don’t compromise the desired depth of field).
Reflections make fantastic complements to any fall color scene
By autumn, rivers and streams that rushed over rocks in spring and summer meander at a leisurely, reflective pace. Adding a reflection to your autumn scene can double the color and add a sense of tranquility. The recipe for a reflection is still water, sunlit reflection subjects, and a shaded reflective surface.
When photographing leaves floating atop a reflection, it’s important to know that the focus point for the reflection is the focus point of the reflected subject, not the reflective surface. This seems counterintuitive, but try it yourself: focus on the leaves with a wide aperture and watch the reflection go soft. Achieving sharpness in both your floating leaves and the reflection requires an extremely small aperture and careful focus point selection. Often the necessary depth of field exceeds the lens’s ability to capture it—in this case, I almost always bias my focus toward the leaves and let the reflection go soft.
Nothing communicates impending winter like fall color with snow
Don’t think the first snow means your fall photography is finished for the year. Hardy autumn leaves often cling to branches, and even retain their color on the ground through the first few storms of winter. An early snowfall is an opportunity to catch fall leaves etched in white, an opportunity not to be missed. And even after the snow has been falling for a while, it’s possible to find a colorful rogue leaf to accent an otherwise stark winter scene.
Posted on September 2, 2018
My relationship with Yosemite rainbows goes all the way back to my childhood, when a rainbow arcing across the face of Half Dome made my father more excited than I believed possible for an adult. I look back on that experience as the foundation of my interest in photography, my relationship with Yosemite, and my love for rainbows. So, needless to say, photographing a rainbow in Yosemite is a pretty big deal for me.
A few years ago the promise (hope) of lightning drove me to Yosemite to wait in the rain on a warm July afternoon. But after sitting for hours on hard granite, all I got was wet. It became pretty clear that the storm wasn’t producing any lightning, but as the sky behind me started to brighten while the rain continued falling over Yosemite Valley, I realized that conditions were ripe for a rainbow. Sure enough, long after I would have packed up and headed home had I been focused solely on lightning, this rainbow was my reward.
The moral of my story is that despite all appearances to the contrary, rainbows are not random—when sunlight strikes raindrops, a rainbow occurs, every time. The reason we don’t always see the rainbow is not that it isn’t happening, it’s that we’re not in the right place. And that place, geometrically speaking, is always the same. Of course sometimes seeing the rainbow requires a superhero ability like levitation or teleportation, but when we’re armed with a little knowledge and anticipation, we can put ourselves in position for moments like this.
I can’t help with the anticipation part, but here’s a little knowledge infusion (excerpted from the Rainbow article in my Photo Tips section).
Energy generated by the sun bathes Earth in continuous electromagnetic radiation, its wavelengths ranging from extremely short to extremely long (and every wavelength in between). Among the broad spectrum of electromagnetic solar energy we receive are ultra-violet rays that burn our skin and longer infrared waves that warm our atmosphere. These wavelengths bookend a very narrow range of wavelengths the human eye sees.
Visible wavelengths are captured by our eyes and interpreted by our brain. When our eyes take in light consisting of the full range of visible wavelengths, we perceive it as white (colorless) light. We perceive color when some wavelengths are more prevalent than others. For example, when light strikes an opaque (solid) object such as a tree or rock, some of its wavelengths are absorbed; the wavelengths not absorbed are scattered. Our eyes capture this scattered light and send the information to our brain, which interprets it as color. When light strikes water, some is absorbed and scattered by the surface, enabling us to see the water; some light passes through the water’s surface, enabling us to see what’s in the water; and some light is reflected by the surface, enabling us to see reflections.
(From this point on, for simplicity’s sake, it might help to visualize what happens when light strikes a single drop.)
Light traveling from one medium to another (e.g., from air into water) refracts (bends). Different wavelengths refract different amounts, separating the originally homogeneous white light into the multiple colors of the spectrum.
But simply separating the light into its component colors isn’t enough to create a rainbow–if it were, we’d see a rainbow whenever light strikes water. Seeing the rainbow spectrum caused by refracted light requires that the refracted light be returned to our eyes somehow.
A raindrop isn’t flat like a sheet of paper, it’s spherical, like a ball. Light that was refracted (and separated into multiple colors) as it entered the front of the raindrop, continues through to the back of the raindrop, where some is reflected. Red light reflects back at about 42 degrees, violet light reflects back at about 40 degrees, and the other spectral colors reflect back between 42 and 40 degrees. What we perceive as a rainbow is this reflection of the refracted light–notice how the top color of the primary rainbow is always red, and the bottom color is always violet.
Every raindrop struck by sunlight creates a rainbow. But just as the reflection of a mountain peak on the surface of a lake is visible only when viewed from the angle the reflection bounces off the lake’s surface, a rainbow is visible only when you’re aligned with the 40-42 degree angle at which the raindrop reflects the spectrum of rainbow colors.
Fortunately, viewing a rainbow requires no knowledge of advanced geometry. To locate or anticipate a rainbow, picture an imaginary straight line originating at the sun, entering the back of your head, exiting between your eyes, and continuing down into the landscape in front of you–this line points to the “anti-solar point,” an imaginary point exactly opposite the sun. With no interference, a rainbow would form a complete circle, skewed 42 degrees from the line connecting the sun and the anti-solar point–with you at the center. (We don’t see the entire circle because the horizon gets in the way.)
Because the anti-solar point is always at the center of the rainbow’s arc, a rainbow will always appear exactly opposite the sun (the sun will always be at your back). It’s sometimes helpful to remember that your shadow always points toward the anti-solar point. So when you find yourself in direct sunlight and rain, locating a rainbow is as simple as following your shadow and looking skyward–if there’s no rainbow, the sun’s probably too high.
Sometimes a rainbow appears as a majestic half-circle, arcing high above the distant terrain; other times it’s merely a small circle segment hugging the horizon. As with the direction of the rainbow, there’s nothing mysterious about its varying height. Remember, every rainbow would form a full circle if the horizon didn’t get in the way, so the amount of the rainbow’s circle you see (and therefore its height) depends on where the rainbow’s arc intersects the horizon.
While the center of the rainbow is always in the direction of the anti-solar point, the height of the rainbow is determined by the height of the anti-solar point, which will always be exactly the same number of degrees below the horizon as the sun is above the horizon. It helps to imagine the line connecting the sun and the anti-solar point as a teeter-totter, with you as the pivot: as one seat rises above you, the other drops below you. That means the lower the sun, the more of the rainbow’s circle you see and the higher it appears above the horizon; conversely, the higher the sun, the less of its circle is above the horizon and the flatter (and lower) the rainbow will appear.
Assuming a flat, unobstructed scene (such as the ocean), when the sun is on the horizon, so is the anti-solar point (in the opposite direction), and half of the rainbow’s 360 degree circumference will be visible. But as the sun rises, the anti-solar point drops—when the sun is more than 42 degrees above the horizon, the anti-solar point is more than 42 degrees below the horizon, and the only way you’ll see a rainbow is from a perspective above the surrounding landscape (such as on a mountaintop or on a canyon rim).
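The teeter-totter geometry reduces to simple arithmetic: over a flat horizon, the top of the primary bow sits roughly 42 degrees minus the sun’s elevation. A sketch of that relationship (the clamp at zero assumes a ground-level view with nothing below the horizon):

```python
def rainbow_top_degrees(sun_elevation_deg, bow_radius_deg=42.0):
    """Height of the primary bow's top above a flat horizon, seen from ground level.
    The anti-solar point sits as far below the horizon as the sun is above it,
    and the bow is a circle of ~42 degrees around that point."""
    return max(bow_radius_deg - sun_elevation_deg, 0.0)

print(rainbow_top_degrees(0))   # sun on the horizon: bow tops out at 42.0 degrees
print(rainbow_top_degrees(20))  # mid-morning sun: 22.0 degrees
print(rainbow_top_degrees(50))  # sun above 42 degrees: 0.0 (no bow from the ground)
```

This is why the follow-your-shadow trick sometimes turns up nothing: if the sun is higher than about 42 degrees, the whole bow is below a flat horizon.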
Of course landscapes are rarely flat. Viewing a scene from above, such as from atop Mauna Kea in Hawaii or from the rim of the Grand Canyon, can reveal more than half of the rainbow’s circle. From an airplane, with the sun directly overhead, all of the rainbow’s circle can be seen, with the plane’s shadow in the middle.
Not all of the light careening about a raindrop goes into forming the primary rainbow. Some of the light slips out the back of the raindrop to illuminate the sky, and some is reflected inside the raindrop a second time. The refracted light that reflects a second time before exiting creates a secondary, fainter rainbow skewed 50 degrees from the anti-solar point. Since this is a reflection, the order of the colors in the secondary rainbow is reversed.
And if the sky between the primary and secondary rainbows appears darker than the surrounding sky, you’ve found “Alexander’s band.” It’s caused by all the light machinations I just described–instead of all the sunlight simply passing through the raindrops to illuminate the sky, some of the light was intercepted, refracted, and reflected by the raindrops to form our two rainbows, leaving less light for the sky between the rainbows.
Posted on August 24, 2018
Yesterday Nikon finally jumped into the mirrorless game with its Z6 and Z7 announcement, a welcome development that can only keep pushing everyone’s mirrorless technology forward.
I made the switch to mirrorless about four years ago and haven’t looked back. At the beginning mirrorless was touted for its compactness, and while mirrorless bodies (and to a lesser extent, lenses) are more compact, it turns out that, for me at least, it’s the mirrorless viewfinder that has hooked me: with real-time exposure simulation, focus assist (peaking), highlight alert (zebras), and a pre-capture histogram, I don’t think I could go back to a DSLR.
While I shoot with the Sony a7RIII and am very much committed to the Sony mirrorless universe, I’m not going to get into the “my camera can beat up your camera” debate—Nikon makes great cameras and I’m sure their mirrorless bodies will be no exception. In fact, the Z7 looks like it compares very closely to the Sony a7RII, which is a fantastic camera that I still carry as a backup and don’t hesitate to use when the situation calls for it.
As happy as I am with my mirrorless conversion, I do have some insights that might spare Nikon shooters some of the transition pains I went through when I switched from Canon DSLRs (1DSIII and 5DIII) to the Sony a7R series of mirrorless bodies.
None of these points is a reason to not get a Nikon Z6 or Z7, but for me it would be a reason not to pre-order. Instead, if it were me, I’d wait and let others discover the frustrations so I could go into the non-trivial transition from DSLR to mirrorless with realistic expectations.
I’m guessing that current Nikon shooters will probably endure fewer frustrations than I had with my first mirrorless body, the Sony a7R—Sony was still trying to figure out the whole interface thing that Nikon has nailed (I’ve never been a fan of Nikon’s interface, but Nikon shooters like it and that’s what matters). On the other hand, I was probably more forgiving than Nikon shooters might be because the a7R image quality was so much better for my needs than the Canon 5DIII it replaced. Dynamic range is king in the landscape world, and the a7R gave me 2-3 stops more dynamic range than my 5DIII—slow transition plan notwithstanding, I literally didn’t click another frame after my first a7R shoot.
While I expect the Z6/Z7 bodies will be ergonomically more mature than my original a7R, Nikon’s full frame bodies already deliver exceptional image quality, so most Nikon full-frame DSLR shooters transitioning from the D800/810/850 won’t have the euphoria of much better image quality that sustained me until the release of Sony’s a7RII and (especially) a7RIII.
On the other hand…
(Full disclosure: I’m a Sony Artisan of Imagery)
These Nikon mirrorless cameras are great for committed Nikon shooters who are deeply invested in the Nikon ecosystem and have no plans to replace their entire lens lineup. But for any photographer planning to make the full jump to mirrorless, native lenses and all, I think Sony is (at least) several years ahead of Nikon, and given their resources and commitment, will remain at least that far ahead for many years.
One of the early complaints about the Sony mirrorless system was its lack of lenses compared to Nikon and Canon, but valid as that criticism was, that disadvantage has shrunk virtually to the point of irrelevance, and Sony is already far along on many more native FE-mount lenses. Sony is several laps ahead of everyone else in the mirrorless world; with deep pockets and its foot hard on the mirrorless pedal, I don't see that lead shrinking anytime soon.
As good as it is for a first generation offering, the Nikon Z7 is much closer to the 3-year old Sony a7RII than it is to the (already 1-year old) a7RIII, and for sports and wildlife (and anything else that moves), it isn’t even in the same league as the (more than 1-year old) Sony a9.
I have no idea how or when Sony will respond to the mirrorless offerings from Nikon and (soon) Canon, but I’m guessing it won’t be long, and am pretty confident that will be a great day to be a Sony shooter. Competition is great for all of us, and Nikon just gave the mirrorless wave a huge boost that I’m looking forward to riding as far as it takes me.
A few words about this image
I can’t tell you that this is my favorite Sony mirrorless image, but it would definitely be on the list. I chose it for this post because it’s one of the few Sony images I have that used a Canon lens with the Metabones adapter.
Leading a workshop in Yosemite a few years ago, I guided the group to a meadow flooded by the Merced River during a particularly extreme spring runoff year. My widest lens at the time was my Sony/Zeiss 16-35 f/4 (which I love, BTW), but the scene called for something wider. When the photographer assisting me offered to let me use his Canon 11-24 f/4 with my Metabones adapter, I snatched it before he could change his mind. Given that everything in the scene was stationary, I was able to bypass any adapter-induced autofocus frustration and take the time to manually focus (it didn't hurt that depth of field at 11mm is extremely forgiving).
I’d never used a lens that wide and was so excited by the extra field of view that I returned from Yosemite fully prepared to purchase the Canon lens, adapter or not. Fortunately for my budget (and my back), I let the lens sit in my shopping cart long enough for sanity to prevail. Not only was the Canon lens quite expensive, it weighed a ton, and I had a feeling it wouldn’t be long before Sony offered something similar. Those instincts were rewarded a year later when Sony released a 12-24 f/4 G lens that is just as sharp and half the size (and much less money).
Click an image for a closer look and slide show. Refresh the window to reorder the display.
Posted on April 15, 2018
(With apologies to The Hollies.)
The road is long, with many a winding turn…
But that’s no excuse to cut corners. Probably the question I am most asked on location is some variation of, “What lens should I use?” While I’m always happy to answer questions, this one always makes me cringe because the implicit question is, “Which lenses can I leave behind?”
What many photographers fail to realize is that the “proper” lens is determined by the photographer, not by the scene. While there is often a consensus on the primary composition at a location, that usually only means the first composition everyone sees. But if your goal is to capture something unique, those are just the compositions to avoid. And as every photographer knows, the best way to guarantee you’ll need a lens is to not pack it. I’m not suggesting that you lug Hermione’s purse to every shoot—just try to remember that your images will last far longer than your discomfort.
In my Canon life, my personal rule of thumb was to always carry lenses that cover 16-200mm, regardless of the scene, then add “specialty” lenses as my plans dictated: macro for wildflowers, fast and wide prime for night, and super telephoto for a moon. That meant the 16-35, 24-105, and 70-200 were permanent residents of my Canon bag, and my 100-400, 100 macro, or wide and fast prime came along when I needed them.
Shooting Sony mirrorless, with its more compact bodies and lenses, I now carry a much wider focal range in a lighter camera bag. My new baseline (always with me) lens lineup is the Sony 12-24 G, 24-105 G, and 100-400 GM, plus the Sony 2x teleconverter. My macro and night lenses still stay behind (though they’re usually in the car), but in my bag I always have lenses to cover 12-800mm, a significant advantage over my Canon 16-200 configuration.
It’s kind of a cliché in photography to say “It’s the photographer, not the equipment.” And as much as I agree in principle, sometimes the equipment does help. Wherever I am, I regularly find compositions beyond 200mm, compositions I never would have considered before. And the 12-24 lens has enabled me to approach familiar scenes with a completely fresh eye.
A recent example came on a snowy day in Yosemite early last month. Moving fast to keep up with the rapidly changing clouds and light, I stopped at El Capitan Bridge, directly beneath El Capitan. Having shot this scene for years (decades), I was quite familiar with the perspective. So wide is the top-to-bottom, left-to-right view of El Capitan here, even at 16mm I’ve always had to choose between all of El Capitan or all of the reflection, never both. I never dreamed I’d be able to get El Capitan and its reflection in a single frame. But guess what….
Standing above the river near the south side of the bridge, I framed up a vertical composition and saw that at 12mm I could indeed fit El Capitan and the reflection, top to bottom. Whoa. With very little margin for error on any side of the frame, I moved around a bit to get the scene balanced, eventually framing the right side with the snowy trees lining the Merced. My elevated perch above the river allowed me to shoot straight ahead (no up or down tilt of the camera) and avoid the extreme skewing of the trees that’s so common at wide focal lengths.
12mm provides so much depth of field that I could focus anywhere in the scene and get front-to-back sharpness; the flat light made exposure similarly simple. With composition, focus, and exposure set, all I had to do was watch the clouds and click the shutter, my heart filled with gladness….
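The "focus anywhere" freedom at 12mm follows from the standard hyperfocal distance formula, H = f²/(N·c) + f. Here's a minimal sketch of that arithmetic (the f/11 aperture and 0.03mm full-frame circle of confusion are my assumed values for illustration, not settings stated for this image):

```python
# Hyperfocal distance: H = f^2 / (N * c) + f
# f = focal length (mm), N = f-number, c = circle of confusion (mm)
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Distance beyond which everything to infinity is acceptably sharp
    when focused there. Near limit of DOF is then roughly H / 2."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# At 12mm and f/11 on full frame (c ~ 0.03mm):
h = hyperfocal_mm(12, 11)  # roughly 450mm, i.e. about half a meter
```

With a hyperfocal distance of about half a meter, focusing anywhere past arm's length renders everything from a couple of feet to infinity acceptably sharp, which is why focus placement at 12mm is so forgiving.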
Posted on April 11, 2018
Are you insane?
Albert Einstein defined insanity as doing the same thing over and over, but expecting different results. Hmmm. For some reason this reminds me of the thousands of good landscape photographers with hundreds of beautiful images they can’t sell. These photographers have a good eye for composition, own all the best equipment, know when to be at the great locations, and are virtual gurus with state-of-the-art processing software. Yet they haven’t achieved (their definition of) success.
Conducting photo workshops gives me pretty good insight into the mindset of serious amateur photographers, the photographers serious enough to spend time and money to rise before sunrise and stay out after dark to photograph the world’s most beautiful landscapes in frequently miserable conditions. I’m struck that many of these photographers have serious aspirations for their photography, but are so mesmerized by technology that they’ve turned over control of the most important aspects of their craft to their camera. Their solution to photographic failure is to buy more equipment, visit more locations, and master more software. But the most overlooked tool is the one on top of their shoulders.
Knowledge vs. understanding
Just as a new camera won’t make you a better photographer, simply upgrading your photography knowledge won’t do it either—knowledge is nothing more than ingested and regurgitated information. Understanding, on the other hand, (among other things) gives you the ability to use information to create new knowledge and solve problems.
Many photographers invest far too much energy acquiring knowledge, and far too little energy understanding what they just learned. For example, it’s not enough to know that a longer shutter speed or bigger aperture means a brighter image if that knowledge doesn’t translate into an understanding of how to manage motion, depth, and light with your camera. It’s one thing to know you need more light on your sensor, but something altogether different to know whether to add it with a longer shutter speed, larger aperture, or higher ISO—a choice that makes a huge difference in the finished product.
Automatic modes in most cameras handle static, midday light beautifully, yet struggle in the limited light, extreme dynamic range, and harsh conditions that artistic photographers seek. The auto modes have become so good that they have created the illusion of control in the minds of many photographers. I see many excellent photographers whose profound faith in their technology has caused a critical deficiency in two fundamental photographic principles:
Books and internet resources are a great place to start learning these principles (here’s my Photo Tip article), but the knowledge you gain there won’t turn to understanding until you get out with your camera and learn to manage a scene’s motion, depth, and light in creative ways that set your photography apart.
My metering philosophy is to approach every scene at ISO 100 (my Sony a7RIII’s best ISO) and f/11 (the best combination of lens sharpness and depth of field with minimal diffraction)—I control the light with my shutter speed and only deviate from my baseline ISO and f-stop when the scene variables dictate. For example, when I want more or less depth of field, I’ll choose a different f-stop, or when I can’t get a proper exposure at the shutter speed that gives me the motion effect I want (blurred or sharp), I’ll adjust the ISO.
This Yosemite sunset from last February was about Half Dome, the clouds, the light, and the reflection in the Merced River. After finding my composition, the scene variables to consider when determining my exposure settings were:
The blur effect I wanted would require at least a one second exposure time, so I dropped my ISO down to 50 (as low as it goes). Keeping my aperture at f/11, I dialed my shutter speed with an eye on the histogram—when the histogram indicated I’d pushed my highlights as far as I could without clipping, my shutter speed was 1 second. This gave me the proper exposure with sufficient motion blur, but I decided a little more motion blur would be even better. To double the shutter speed to 2 seconds, I stopped down one stop to f/16 and tried one more frame. In this case the benefit of the extra motion blur far outweighed any diffraction and lost sharpness (which experience has shown would have been minimal with my Sony 16-35 GM lens), so that’s the frame I selected.
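The swap described above (1 second at f/11 to 2 seconds at f/16) is just exposure reciprocity: the light reaching the sensor at a fixed ISO is proportional to shutter time divided by the f-number squared. A quick sketch of that arithmetic:

```python
# Relative exposure at fixed ISO: proportional to t / N^2
# (shutter time over f-number squared)
def relative_exposure(shutter_s, f_number):
    return shutter_s / f_number ** 2

base = relative_exposure(1.0, 11)    # 1 second at f/11
longer = relative_exposure(2.0, 16)  # 2 seconds at f/16
# Stopping down f/11 -> f/16 cuts the light about one stop; doubling
# the shutter to 2 seconds adds it back, so the two exposures are
# nearly identical (the tiny gap exists because f/16 is a rounded
# stop -- an exact full stop from f/11 would be f/15.6).
```

The same reasoning works in any direction: trade one stop of shutter for one stop of aperture (or ISO) and the exposure stays put while the motion blur, depth of field, or noise changes.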
Insanity is in the mind of the beholder
If landscape photography gives you what you want, then by all means, continue doing what you’re doing. But if you’re having a hard time achieving a photographic goal, the solution is likely not doing more of what you’re already doing. Instead, try reevaluating your comprehension of fundamental photographic principles that you might not have thought about for years (or ever). Get out of your camera’s auto exposure modes and take control of your scene’s variables. You’ll know you’re there when you know how to get the result you want, or know why it’s simply not possible.
Do I really think you’re insane for doing otherwise? Of course not. But I do think you’ll feel a little more sane if you learn to take more control of your camera.
Posted on April 6, 2018
Even though your spellcheck says it doesn’t exist, I promise you that a moonbow is a very real thing indeed (and I have the pictures to prove it). Some argue that “lunar rainbow” is more the technically correct designation, but since that moniker just doesn’t convey the visual magic, I’m sticking with moonbow.
This won’t be on the test
Because a moonbow is a rainbow, all the natural laws governing a rainbow apply. But all the moonbow’s physics can be summarized to:
1) Your shadow always points toward the center of the moonbow (put your back to the moon and note the direction your shadow points)
2) The higher the moon, the lower the moonbow and the less of it you’ll see
3) When the moon is above 42 degrees (assuming flat terrain), the moonbow disappears below the horizon
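Points 2 and 3 above reduce to one line of geometry: the bow is a 42-degree circle centered on the antilunar point, so over flat terrain the top of the arc sits at 42 degrees minus the moon's altitude. A minimal sketch:

```python
# The moonbow is centered on the antilunar point (opposite the moon),
# with a radius of 42 degrees. Over flat terrain, the top of the arc
# sits at (42 - moon altitude) degrees above the horizon.
def bow_top_altitude_deg(moon_altitude_deg):
    """Altitude of the highest point of the bow; zero or negative
    means the entire bow is below the horizon."""
    return 42.0 - moon_altitude_deg

bow_top_altitude_deg(10)  # 32.0: low moon, tall arc
bow_top_altitude_deg(42)  # 0.0: the bow just sinks below the horizon
```

This is why moonbow chasers favor the hour or two after moonrise, when a low moon puts the arc high in the sky.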
Each spring, Sierra snowmelt surges into Yosemite Creek, racing downhill and plunging 2,500 feet in three mist-churning steps as Yosemite Falls. Shortly after sunset on spring full moon nights, light from the rising moon catches the mist, which separates and bends it into a shimmering arc. John Muir called this phenomenon a “mist bow,” but it’s more commonly known today as a moonbow.
While a bright moonbow is visible to the naked eye as a shimmering silver band, it isn’t bright enough for the human eye to register color. But thanks to a camera’s ability to accumulate light, the moonbow’s vivid color shines in a photograph.
I just returned from the first of two moonbow workshops scheduled for this spring, but haven’t had time to process this year’s moonbow images. The above image was captured a few years ago near the bridge at the base of Lower Yosemite Fall. Not only was it crowded (the moonbow is no longer much of a secret), but wind and mist also made the necessary 20- to 30-second exposures an exercise in persistence. I did capture the moonbow, though, and as you can see, I now have photographic proof that the Big Dipper is the true source of Yosemite Falls.