Posted on October 11, 2020
(Or, Channeling My Inner Oz)
With virtually every still camera now equipped with video capability, the last few years have brought an explosion of nature videos. When done well, videos can be extremely powerful, conveying motion and engaging both eyes and ears to reveal the world in a manner that’s closer to the human experience than a still image is. But like other sensory media whose demise has been anticipated following the arrival of something “better,” (with apologies to Mark Twain) let me say that the rumors of still photography’s death have been greatly exaggerated.
Just as I enjoy reading the book more than watching the movie, I prefer the unique perspective of a still image. Though motion in a video may feel more like being there, a still image gives me the freedom to linger and explore a scene’s nooks and crannies, to savor its nuances at my own pace.
In a video my eyes are essentially fixed as the scene moves before them. In a still image, my eyes do the moving, drawn instantly to a dominant subject, or perhaps following lines, real or implied, in the scene the way a hiker follows a trail. But also like a hiker, I can choose to venture cross-country through a still image and more closely scrutinize whatever looks interesting.
The photographer needs to be aware of a still image’s inherent lack of motion, and more importantly, how to overcome that missing component by moving the viewer’s eyes with compositional choices. With this in mind, I usually like my images to have an anchor point, a place for the viewer’s eye to start and/or finish. To do this, I identify the scene’s anchor and other potential elements that might draw the eye, then position myself and frame the scene so those secondary elements guide the eye to (or frame) the primary subject.
But sometimes a scene stands by itself, as if every square inch fits together like a masterful tapestry. When nature gifts a scene like this, rather than imposing myself by offering visual clues to move my viewer’s eye, I like to step back and channel the Wizard of Oz. Specifically, what Dorothy must have felt when she first opened the door of her ramshackle, monochrome world onto the color and wonder of Oz. That’s how these scenes make me feel, and that’s the feeling I want my images to convey.
In a scene filled edge to edge with the awe and wonder of discovery, the last thing the viewer wants is to be told where to go and what to do. (And just look at all the trouble Dorothy got into when she started following the Yellow Brick Road.)
By getting out of the way and letting the scene speak for itself, my viewer has the freedom to explore the entire frame. Of course that’s easier said than done, but in the simplest terms possible, my sole job is to find balance and avoid distractions.
As much as aspiring photographers would love a composition formula that dictates where to locate each element in their frame, moving the eye, finding balance, and avoiding distractions ultimately come down to feel. Please bear with me as I try to put into words how this inherently intuitive process manifests for me.
To explain the concept of balance and motion in a still image, I use what I call “visual weight” (I’ll just shorten it to VW), which I define as any object’s ability to pull the viewer’s eye—think of it as gravity for the eye.
An object’s VW is subjective, based on a variety of moving targets that include (to a greater or lesser degree) an object’s size, brightness, color, shape, and position in the frame. VW can also be affected by each viewer’s personal connection to the elements in the scene.
Take a wide angle moon for example. The moon is small and colorless (not much VW), but also bright with lots of contrast (high VW). Then factor in the viewer’s personal connection to the moon. If I’m more drawn to the moon than someone else, the moon’s visual weight would be greater to me. Since I can’t worry about what others think when I compose a shot, what you see in my images reflects the VW that a scene’s elements hold for me, and probably explains why I have so many moon images.
After many years (decades) of doing this, visual balance usually happens intuitively, without conscious thought. But until you reach this point, I have a mental exercise you can apply to your own images, preferably as they appear in your camera’s viewfinder or on its LCD.
Imagine a flat board perfectly balanced horizontally on a fulcrum (like the tip of a pen)—to maintain its equilibrium, any added weight must be counterbalanced by a corresponding weight elsewhere on the board. Visual weight is the virtual equivalent: think of your frame as a print (a stiff, metal print rather than a floppy, paper print) balanced on a fulcrum. Any visible element that pulls the eye tips the frame from horizontal (makes it out of balance) and must be counterbalanced by an element with corresponding visual weight.
Because of the subjective nature of visual weight, your choices might differ from mine. That’s okay—it’s important to be true to your own instincts, which will in fact improve with practice.
The VW concept applies to eliminating distractions too. Without getting too deep into the weeds (there are lots of potential distractions in a scene, and ways to deal with them, but that’s a blog for a different day), the idea is to avoid objects that pull the eye away from the essence of the scene (as you see it), or that simply overpower the scene. In the image at the top of this post, flying monkeys emerging from the Merced River might be pretty cool (and could even gain me some notoriety), but they would not serve my goal to convey a sense of wonder and awe and would in fact be a distraction.
Other potential distractions besides flying monkeys are things like branches and rocks that jut into the scene, creating the sense that they’re part of a different scene, just outside the frame. Another common distraction is objects that are mostly in the scene, but trimmed by the edge of the frame. Since it’s virtually impossible to avoid cutting something off on the edge of most frames in nature, I just try to minimize the damage by being very conscious of what’s cut off and how it’s cut, usually trying to cut boldly, down the middle, when possible. I’ve always felt that objects jutting into a scene, or slightly trimmed by the edge, feel like mistakes, while something cut strongly down the middle feels more intentional.
Yosemite seems to be filled with more than its share of scenes that don’t need my help assembling a composition. At most scenes I start with the simplest composition and work my way to something more complex. I can usually tell a scene stands by itself when I end up deciding my early compositions are the way to go.
I’d driven to Yosemite on this November morning chasing a fortuitously timed storm that was forecast to drop snow on peak fall color. The day started gray and cold, the valley floor white with wet snow beneath dark clouds that blanketed all of Yosemite’s distinctive features. But by late morning the clouds brightened and started to lift, slowly unpeeling Yosemite Valley’s soaring granite walls and monoliths.
I happened to be at Valley View when the show started in earnest. Because the scene contained everything I was there to photograph—Yosemite icons (El Capitan, Cathedral Rocks, Bridalveil Fall) decorated with snow, fall color, reflection—I started with this composition that took it all in in a pretty straightforward manner. Standing right at river’s edge, I chose horizontal framing because it was the best way to include the icons without diluting them with too much sky and water. Though I didn’t want to go too wide, because there was so much happening top-to-bottom, from clouds to reflection, I went a little wider than I usually do.
The lower half of the scene had lots of rocks that I worked to avoid cutting off, finally finding framing that kept my edges completely clean (not always possible). The small rock in the lower left was a little closer to the edge than I’d have liked, but going any wider would have introduced spindly branches along the left edge—I chose the lesser of two evils. Likewise, the small rock on the bottom right was also closer to the edge than I preferred, but an entire herd of disorganized rocks massed just beneath my frame prevented me from composing lower. The top of my frame I set just below a distracting (bright) hole in the clouds. I’d have cut the rock on the middle right if I’d had to, but was fortunate that there was a small break between it and another gang of rocks just off the frame on the right.
The visual balance was more by feel (as it often is). Looking at the image now, I see that offsetting the gap separating El Capitan and Cathedral Rocks, placing it a little left of center, makes the frame feel more balanced than if I’d centered it, but I don’t remember consciously deciding this. To my eye the balance works because El Capitan, the brilliant color, and the striking reflection hold more visual weight than the granite, waterfall, and reflection on the other side, so giving that side more of the frame compensates for its (slightly) lacking VW.
I wish I could defend my decision to use f/20, but I can’t. I only use f/20 when I absolutely have to—or when I was using it for an earlier scene and forgot to set it back to my default f/8 to f/11 range (which is no doubt what happened here).
One more thing
Even though this image is from 2012, it’s brand new, discovered yesterday while mining my raw file archives. The amazing thing to me is that the scene is quite similar, and the composition virtually identical, to an image taken the following year. When I see similar compositions in scenes from entirely different shoots, it tells me that my instincts are guiding me. In both situations these images were my starting point, and I went on to play with more creative compositions later in the shoot. But it just goes to show that sometimes it’s best to let the scene speak for itself.
Letting Nature Speak for Itself
Posted on October 4, 2020
This morning, while going through unprocessed images looking for something to blog about, I came across this image from last June in New Zealand. I realize the world probably doesn’t need any more pictures of this tree (which is why I’d never processed it), but after nearly two months of smoky skies that have robbed California of anything close to a normal sunset, sunrise/sunset color seemed to be a worthy topic, and this image definitely got my juices flowing.
Following a morning that had started with a beautiful sunrise reflection at Mirror Lakes in Fiordland National Park, Don Smith and I (well, technically it was our driver) pulled the van carrying our New Zealand workshop group into Wanaka a couple of hours before sunset. We had a sunset spot in mind, but with a little time to spare we decided to give the group a quick preview of our sunrise subject, the iconic lone willow tree of Lake Wanaka. We never left.
It was pretty apparent from the instant of our arrival that the ingredients for a spectacular sunset were in place: clouds, clean air, and a clear spot on the western horizon to let sunlight through. Of course nothing in nature is guaranteed, but based on what we saw, Don and I made a calculated decision to alter our plan. Even though our original sunset spot would benefit from the same conditions, we decided that, because the opportunity to photograph this tree was one of the prime reasons most of the group signed up for the workshop in the first place, and sunrise conditions are never a sure thing, staying would give our group the best opportunity for a memorable experience here. Boy did we make the right call.
For this image I used my Breakthrough 6-stop neutral-density polarizer (X4 Dark CPL) to smooth a slight chop rippling the lake. Not only did the resulting 30-second exposure soften the lake surface, it added an ethereal blur to the distant clouds and fog.
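If you want to do the filter math yourself, each stop of neutral density doubles the required exposure time, so six stops multiply it by 2^6 = 64. A minimal sketch in Python (the half-second base exposure is purely illustrative; the unfiltered metered value isn’t stated here):

```python
# Each stop of neutral density doubles the required exposure time,
# so an n-stop ND multiplies the base exposure by 2**n.
def nd_exposure_seconds(base_seconds: float, stops: int) -> float:
    """Exposure time needed once an ND filter of the given stops is added."""
    return base_seconds * (2 ** stops)

# Hypothetical half-second base exposure behind a 6-stop ND:
print(nd_exposure_seconds(0.5, 6))  # 32.0, in the ballpark of the 30-second exposure above
```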
Sunrise was in fact completely washed out by fog, but that didn’t mean it was a failure, just different….
And speaking of sunrise/sunset color, I’ve revised my Photo Tips article on that very topic and added it below. So if you want to know why the sky is blue and sunsets are red, read on.
A sunset myth
If your goal is a colorful sunset/sunrise and you have to choose between pristine or hazy air, which would you choose? If you said clean air, you’re in the minority. You’re also right. Despite some pretty obvious evidence to the contrary, it seems that the myth that a colorful sunset requires lots of particles in the air persists. But if particles in the air were necessary for sunset color, Los Angeles would be known for its vivid sunsets and Hawaii’s main claim to fame would be its beaches. (Okay, and maybe its luaus. And waterfalls. And pineapples. And Mai Tais. And…. Well, maybe lots of great stuff, but not its sunsets.)
So what is the secret to a great sunset? Granted, a cool breeze, warm surf, and a Mai Tai are a good start, but I’m thinking more photographically than recreationally. I look for a mix of clouds (to catch the color) with an opening for the sun to pass through and light the clouds. But even with a nice mix of clouds and sky, sometimes the color fizzles. Often the missing ingredient, contrary to common belief, is clean air—the cleaner the better.
Light and color
Understanding sunset color starts with understanding how sunlight and the atmosphere interact to color the sky. Visible light reaches our eyes in waves of varying length. The color we perceive is a function of wavelength, ranging from short to long: violet, indigo, blue, green, yellow, orange, and red. (These color names are arbitrary labels we’ve assigned to the colors we perceive at various wavelength points along the visible portion of the electromagnetic spectrum—there are an infinite number of wavelength-dependent colors between each of these colors.)
Because a beam of sunlight passing through a vacuum (such as space) moves in a straight line (we won’t get into relativity and the effect of gravity on a beam of light), all its wavelengths reach our eyes simultaneously and we perceive the light as white. When a beam of sunlight encounters something (like Earth’s atmosphere), its light can be absorbed or scattered, depending on the wavelength and the properties of the interfering medium, and we see as color the remaining wavelengths that reach our eyes.
For example, when sunlight strikes a leaf, all of its wavelengths except those that we perceive as green are absorbed, while the green wavelengths bounce to our eyes.
Color my world
Since our atmosphere is not a vacuum, sunlight is changed simply by passing through it. In an atmosphere without impurities (such as smoke and dust), light interacts only with air molecules. Air molecules are so small that they scatter only a very narrow range of wavelengths. This atmospheric scattering acts like a filter that scatters the violet and blue wavelengths first, allowing the longer wavelengths to pass through. When our sunlight has traveled through a relatively small amount of atmosphere (as it does when the sun is overhead), the wavelengths that reach our eyes are the just-scattered violet and blue wavelengths, and our sky looks blue (the sky appears more blue than violet because our eyes are more sensitive to blue light).
On the other hand, because the longer orange and red wavelengths are less easily scattered, they travel a much greater distance through the atmosphere. When the sun is on the horizon, its light has passed through much more atmosphere than it did when it was directly overhead, so the only light reaching our eyes at sunrise or sunset has been stripped of its shorter (blue and violet) wavelengths by its lengthy journey, leaving only the longer, orange and red wavelengths to color our sky. Sunset! (Or sunrise.)
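For the technically curious, the wavelength-dependent scattering described above is Rayleigh scattering, whose strength varies with the inverse fourth power of wavelength. A quick back-of-the-envelope sketch (the 450nm and 650nm values are just typical wavelengths for blue and red light):

```python
def rayleigh_scattering_ratio(short_nm: float, long_nm: float) -> float:
    """How much more strongly the shorter wavelength scatters, under
    Rayleigh's inverse-fourth-power law: scattering ~ 1 / wavelength**4."""
    return (long_nm / short_nm) ** 4

# Blue light (~450 nm) versus red light (~650 nm):
print(round(rayleigh_scattering_ratio(450, 650), 1))  # 4.4: blue scatters over four times more strongly
```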
Pollution dampens the filtering process. Rather than only scattering specific colors, light that encounters a molecule larger than its wavelength is more completely scattered—in other words, instead of scattering only the blue and violet wavelengths, polluted air catches some orange and reds too. Anyone who has blended a smoothie consisting of a variety of brightly colored ingredients (such as strawberries, blueberries, cantaloupe, and kale—uhh, yum?) knows the smoothie’s color won’t be nearly as vivid as any of its ingredients, not even close. Instead you’ll end up with a brownish or grayish muck that might at best be slightly tinted with the color of the predominant ingredient. Midday light that interacts with large particles in the atmosphere is similarly muddied, while polluted sunrise and sunset light has already had much of its red stripped out.
Verify this for yourself the next time a storm clears as the sun sets, and compare the color you see to the color on a hazy, summer evening in the city.
Tips for maximizing sunset color in a photograph
Any time rain has cleared the atmosphere and the remaining clouds are mixed with sunlight, there’s a good chance for vivid sunrise or sunset color. I have a few go-to locations near home, and at my frequently visited photo spots (Yosemite, Grand Canyon, Death Valley, Hawaii, and so on), that I beeline to when there’s a chance for color in the sky.
When I’m on location and preparing my shot before the sunset show begins, I look for clouds receiving direct sunlight. This is the light that will most likely color up at sunset, starting with an amber glow that transitions to pink, red, and eventually a deep orange.
An often overlooked color opportunity when the air is clean is the horizon opposite the sun after sunset or before sunrise. When the sun is below the horizon, the opposite horizon reveals the transition between the blues of night and the pinks of the sun’s first or last rays, often the best color of the day. This is especially true when there are no clouds in the direction of the sun. Photographing this twilight color with your back to the sun’s horizon has the added advantage of being much less contrasty and easier to manage with a camera.
Maximizing sunset color in your images requires careful exposure and composition decisions. By far the most frequent problem is overexposure—giving the scene more light than necessary. In scenes of such extreme contrast, your camera can’t capture the entire range of light your eyes see. And of course your camera has no idea what you’re photographing, so if you leave the exposure decision up to automatic metering, you’ll likely end up with a compromise exposure that tries to pull detail out of the shadows at the expense of color in the sky.
Since it’s the color you’re most interested in capturing, it’s usually best to spare the color in the highlights and let your shadows darken. This usually requires some planning—finding striking foreground subjects that stand out against the brighter sky, or water to reflect the sky’s color.
When you’ve found your sunset subject and are ready to shoot, base your exposure decisions on your camera’s histogram, not the way the picture looks on the LCD (never a reliable gauge of actual exposure). Remember, since your camera can’t capture what your eyes see anyway, the amount of light you give your scene is a creative decision. After you’ve exposed, make sure you check your RGB histogram to ensure that you haven’t clipped one of your color channels (most likely the red channel).
You can read more about metering in my Manual Exposure article.
For example: Sentinel Dome, Yosemite
Sentinel Dome in Yosemite provides a 360 degree view of Yosemite and surrounding Sierra peaks. Among the many reasons it’s such a great sunset spot is that from atop Sentinel Dome you can see what’s happening on the western horizon and plan your shoot long before sunset arrives. On this summer evening I was up there shortly after an afternoon rain shower. Though the air was crystal clear, lots of clouds remained—and there was an opening on the western horizon for the sun to slip through just before disappearing for the night.
Rather than settle for a more standard Half Dome composition, I wandered around a bit in search of an interesting foreground. I ended up targeting this group of dead pines on Sentinel’s northeast slope, a couple of hundred feet down from the summit. It was no coincidence that sunset that night, one of the most vivid I’ve ever seen, came shortly after a storm had cleansed the atmosphere. Not only did the clouds fire up, the color was so intense that its reflection colored the granite, trees, and pretty much every other exposed surface.
For example: Hilltop Oaks, Sierra Foothills
I was driving the Sierra foothills east of Sacramento looking for the right subject to put with this fiery sunset. Earlier in the sunset it had simply been a matter of finding a photogenic tree (or trees), but with the sun more than 15 minutes below the horizon, the foreground was so dark I needed a subject to silhouette against the sky—anything else would have been lost in the rapidly blackening shadows. These trees showed up just in the nick of time.
Color like this comes late (or, at sunrise, early), in the direction of the sun long after most people have gone to dinner (or while they’re still in bed). Everything in this scene that’s not sky is black, which is why my subject needed to stand out against the sky. I was so happy with my discovery that these trees have become go-to subjects for me—browse my galleries and count how many times you see one or both of them (often with a crescent moon).
For example: South Tufa, Mono Lake
The air on the Sierra’s east side is much cleaner than the air on the more populated west side, and the clouds that form as the prevailing westerly wind descends the Sierra’s precipitous east side are both unique and dramatic. Mono Lake makes a particularly nice subject for the Eastern Sierra’s brilliant sunrise/sunset shows. Not only does it benefit from the clean air and photogenic clouds, Mono Lake’s tufa formations and often glassy surface make a wonderful foreground. The openness of the terrain surrounding Mono Lake allows you to watch the entire sunrise or sunset unfold. Many times over the course of a sunrise or sunset I’ve photographed in every direction.
The image here was captured at the start of a particularly vivid sunrise. The air was clean, with just the right mix of clouds and clear sky; perfectly calm air allowed the lake’s surface to smooth to glass. I find that the more I can anticipate skies like this, the better prepared I am when something spectacular happens. In this case I was at the lake well before the color started, but because it looked like all the sunrise stars were aligning, I was able to plan my composition and settings well before the color started.
Posted on September 27, 2020
Photography is the futile attempt to squeeze a three-dimensional world into a two-dimensional medium. But just because it’s impossible to truly capture depth in a photograph, don’t think you shouldn’t consider the missing dimension when crafting an image. For the photographer with total control over his or her camera’s exposure variables (which exposure variable to change and when to change it), this missing dimension provides an opportunity to reveal the world in unique ways, or to create an illusion of depth that recreates much of the thrill of being there.
The Illusion of Depth
Sometimes a scene holds so much near-to-far beauty that we want to capture every inch of it. While we can’t actually capture the depth our stereo vision enjoys, we can take steps to create the illusion of depth. Achieving this is largely about mindset—it’s about not simply settling for a primary subject, no matter how striking it is. When you find a distant subject to feature in an image, scan the scene and position yourself to include complementary foreground and middle-ground subjects. Likewise, when you want to feature a nearby object in an image, position yourself to include complementary background and middle-ground subjects.
Creative Selective Focus
Most photographers go to great lengths to achieve full front-to-back sharpness, an art in itself. But sometimes I like to solve the missing depth conundrum with what I call creative selective focus: An intentionally narrow depth of field with a carefully chosen focus point to flatten a scene’s myriad out-of-focus planes onto the same thin plane as the sharp subject. This technique can soften distractions into a blur of color and shape, or simply guide the viewer’s eye to the primary subject and soften the background to complementary context.
When I apply creative selective focus to autumn leaves or spring flowers, I usually take the extreme approach, blurring the background into pure color and shape. In the images below, the soft background serves as a canvas for the primary subject.
But sometimes I like my soft background to have enough resolution to be more recognizable. When I take this approach, my goal is to signal the part of the scene I want to emphasize by making it sharp, and to use the soft but still recognizable background for context that tells the viewer something about the location.
A few years ago I wrote an article on this very topic for “Outdoor Photographer” magazine. You can read a slightly updated version of this article in my Photo Tips section: Selective Focus.
About this image: Creekside Color, Mill Creek, Eastern Sierra
With dense aspen groves, reflective beaver ponds, towering peaks, and even a waterfall, Lundy Canyon, just north and west of Mono Lake, has long been one of my favorite fall color locations.
I spent this overcast autumn morning wandering the banks of Mill Creek. The thick growth here often makes this easier said than done, but the rewards of battling my way through trees and shrubs usually make it worth the scrapes and scratches I always seem to go home with.
Even though it was less than 30 feet from the road, I heard this cascade long before I saw it. Once I got my eyes on it, I had to battle further to get a clear view. I especially liked the red leaves, a relative rarity in California, and wanted to feature them. Here I positioned myself so the leaves framed the creek, and turned my polarizer to reduce the leaves’ glossy sheen.
I used a range of f-stops for a variety of background sharpness options. This one used f/32 (maybe my all-time record for smallest aperture), which gave me enough DOF to make the creek easily recognizable, but also resulted in a 4-second exposure. (Clearly wind was not a factor this morning.)
Here’s my Photo Tips article on using hyperfocal focus techniques to enhance your images’ illusion of depth: Depth of Field.
Playing With Depth
Posted on September 20, 2020
This is an updated version of the “Big Moon” article from my Photo Tips section,
plus the story of this image (below)
Nothing draws the eye quite like a large moon, bright and bold, above a striking foreground. But something happens when you try to photograph the moon—somehow, a moon that looks to the eye like you could reach out and pluck it from the sky shrinks to a small white speck in a photo. While a delicate accent of moon is great when properly framed above a nice landscape, most photographers like their moons BIG.
Some photographers resort to cheating, plopping a telephoto moon into a wide angle landscape. But armed with basic knowledge bolstered by a little planning, capturing a large moon isn’t hard.
Every time there’s a “supermoon,” we’re bombarded with news stories implying that the moon will suddenly double or triple in size, followed by faked images intended to confirm the impossible. But crescent or full, super or not, the moon’s size in an image is almost entirely a function of the focal length the photographer used—photograph it at 16mm and the moon registers as a tiny dot; photograph it at 600mm and your moon dominates the frame.
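The focal length arithmetic is easy to check: the moon subtends roughly half a degree of sky, so its diameter on the sensor depends only on focal length. A rough sketch assuming a 0.52-degree angular diameter and a full-frame sensor (24mm on its short side); both numbers are typical values, not anything specific to these images:

```python
import math

def moon_diameter_on_sensor_mm(focal_length_mm: float,
                               angular_diameter_deg: float = 0.52) -> float:
    """Approximate diameter of the moon's image on the sensor, in millimeters."""
    return 2 * focal_length_mm * math.tan(math.radians(angular_diameter_deg) / 2)

# Compare a wide angle, a modest telephoto, and a long telephoto:
for f in (16, 200, 600):
    d = moon_diameter_on_sensor_mm(f)
    print(f"{f}mm: {d:.2f}mm on sensor ({d / 24:.0%} of the frame's short side)")
```

At 16mm the moon covers well under 1% of the frame height; at 600mm it's nearly a quarter, which is why focal length dominates everything else.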
But a landscape image with a large moon requires more than just a long focal length. If big was all that mattered, you could attach your camera to a telescope, point skyward, and capture a huge moon (not that there’s anything wrong with that). But without a landscape to go with your huge moon, no one would know whether you took the picture on a mountainside in Yosemite, atop a glacier in New Zealand, or beside the garbage cans in your driveway.
“Big moon” is a subjective label, but I don’t usually use it unless my focal length was 200mm or longer. And while a 200mm lens is okay for the moon, for me the moon doesn’t really start to jump out of the frame until I approach 400mm.
Prime lenses are super sharp and fast, but for my moon photography I prefer a telephoto zoom for focal length flexibility that enables me to adjust my composition to include or exclude foreground elements. As a Sony Alpha shooter, my default big moon lens that’s almost always in my bag is my Sony 100-400 GM. The Sony 200-600 is sometimes too long, and it’s too big to live in my bag fulltime, but when I know I’ll be photographing the moon rising (or setting) above a location that’s several miles from my foreground subjects, I’ll replace the 100-400 in my bag with the 200-600. And when I want to go nuclear on the moon with either lens, I add the Sony 2X Teleconverter.
Not a Sony shooter? No problem, all the major camera manufacturers offer similar options.
The camera you use makes a difference too. The more resolution you have, the more you can crop (increase the size of the moon) without noticeable quality loss. And since an APS-C sensor has a 1.5x crop factor built in, until I got my Sony a7RIV, I’d often use my APS-C Sony a6300 to maximize the size of the moon in my images. But now that I have the full frame Sony a7RIV, with 61 megapixels I actually have more resolution in APS-C mode than I had with my a6300.
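The arithmetic behind that claim: an APS-C crop keeps only 1/1.5² of a full-frame sensor's area, and therefore of its pixels. A quick check (the megapixel counts and the 1.5x crop factor are nominal round numbers):

```python
def apsc_mode_megapixels(full_frame_mp: float, crop_factor: float = 1.5) -> float:
    """Megapixels left when a full-frame sensor is cropped to APS-C.
    The crop applies to both width and height, so the pixel count
    falls by the square of the crop factor."""
    return full_frame_mp / crop_factor ** 2

# A nominal 61 MP full-frame sensor in APS-C mode, versus a nominal 24 MP APS-C body:
print(round(apsc_mode_megapixels(61), 1))  # about 27 MP, still more than 24 MP
```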
My own rule for full moon photography is that I must capture both lunar and landscape detail. But a full moon rises at sunset and sets at sunrise, and a crescent moon is only visible shortly before sunrise or after sunset. So your camera’s dynamic range is a very important consideration. The darker the sky, the better the moon looks, but the darker the sky, the darker the foreground too. For me it’s time to go home when the foreground becomes so dark that making it bright enough to capture usable detail means blowing out the moon. So the more dynamic range I have, the darker the sky can be. While I don’t know of a camera with as much dynamic range as my a7RIV, all of today’s cameras have pretty decent dynamic range.
And finally, given the extreme focal lengths you’ll be dealing with, don’t even think about trying to shoot a big moon without a sturdy tripod.
Often the most difficult part of including a large moon with a specific landscape subject is finding a vantage point far enough back to fit the subject and the moon. But the farther back from your foreground subject you can position yourself, the longer the focal length you can use, and the bigger the moon will be.
For example, I love photographing a big moon rising behind Half Dome in Yosemite. But at Yosemite’s popular east-side locations, even 200mm is too close to get the moon and all of Half Dome in my frame. And while Yosemite’s most distant east-facing Half Dome vistas are up to 10 miles away, Half Dome is so large that even at that distance the longest focal length that will include the moon and all of Half Dome isn’t much more than 400mm.
A little easier for me is including a big moon with smaller foreground objects like a prominent tree. Near my home in Northern California are rolling hills topped by solitary oaks that make perfect moon foregrounds when I can shoot up so they’re against the sky. And since these trees are much smaller than Half Dome, even vantage points that are less than a mile away are doable.
Location, location, location
As your focal length increases, your compositional margin for error shrinks. You can’t expect to go out on the evening of a full or crescent moon, look to the horizon, and automatically put the moon in the frame with your planned foreground subject.
Even when the moon and your foreground do align, once the moon appears, you’ll only have a few minutes before it rises out of your telephoto frame. This means extreme telephoto images that include both the moon and a foreground subject are only possible when the moon is right on the horizon, making proper timing essential.
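To get a feel for just how few minutes you have, you can estimate the moon’s transit through a telephoto frame. A rough Python sketch, assuming a full-frame sensor and an apparent lunar motion of about 0.24° per minute (the true vertical rate varies with latitude and the moon’s azimuth, so treat the result as a ballpark):

```python
import math

def vertical_fov_deg(focal_mm, sensor_height_mm=24.0):
    """Vertical field of view of a full-frame camera at a given focal length."""
    return math.degrees(2 * math.atan(sensor_height_mm / (2 * focal_mm)))

def minutes_to_cross_frame(focal_mm, apparent_rate_deg_per_min=0.24):
    """Rough time for the moon to climb through the vertical frame."""
    return vertical_fov_deg(focal_mm) / apparent_rate_deg_per_min
```

At 1200mm the vertical field of view is only about 1.1 degrees, which works out to roughly five minutes before the moon climbs out of the frame.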
Like the sun, the moon traces a different path across the sky each day. This path changes with each lunar cycle (from full, to new, back to full)—whether the moon is full or crescent, a location that perfectly aligns the moon and foreground one month will probably be nowhere close the next.
Coordinating all the moving parts (moon phase and position, foreground subject alignment, subject distance, and rise/set timing) requires some planning and plotting. When I started photographing the moon, in the days before smartphones and apps that do the heavy lifting, I had to refer to tables to get the moon’s phase and position in the sky, manually plot the alignment, then apply the Pythagorean theorem to figure the timing of the moon’s arrival above (or disappearance behind) the terrain.
Today there are countless apps that will do this for you. Apps like The Photographer’s Ephemeris and PhotoPills (to name just two of many) are fantastic tools that give photographers access to moonrise/set data for any location on Earth. There is a bit of a learning curve (so don’t wait until the last minute to plan your shoot), but they’re infinitely easier than the old-fashioned way.
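The heart of that manual plotting is one observation: the moon becomes visible above the terrain once its altitude exceeds the angle the ridge line subtends from your position. Here’s a trigonometric Python sketch of that idea (the climb rate is a crude placeholder, since the moon’s actual vertical rate depends on latitude and its path; the real apps account for all of this):

```python
import math

def terrain_clearance_angle_deg(terrain_height_m, distance_m):
    """Altitude angle of the ridge line as seen from the shooting position."""
    return math.degrees(math.atan2(terrain_height_m, distance_m))

def minutes_until_moon_clears(terrain_height_m, distance_m,
                              moon_climb_rate_deg_per_min=0.20):
    """Rough delay between the published moonrise time and the moon
    actually topping the terrain (climb rate is an assumed placeholder)."""
    return terrain_clearance_angle_deg(terrain_height_m, distance_m) / moon_climb_rate_deg_per_min
```

For a ridge 1,000 meters above you at 10 miles, the clearance angle is only a few degrees, but that still translates to a delay of 15–20 minutes after the published moonrise.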
Depth of field
With subjects so far away, it’s easy to forget about depth of field. But extreme focal lengths mean extremely limited depth of field. Depth of field isn’t a concern when Half Dome is your closest subject and it’s ten miles distant, but when your foreground is an oak tree on a hill that’s a mile away, you absolutely need to consider the hyperfocal distance.
For example, at 800mm and f/11 (with a full frame sensor), the hyperfocal distance is about a mile-and-a-quarter (look it up)—focus on the tree and the moon will be soft; focus on the moon and the tree is soft. But if you can focus on something that’s a little beyond the tree, at maybe one-and-a-half miles away, the image will be sharp from front to back.
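If you’d rather not look it up, the mile-and-a-quarter figure checks out against the standard hyperfocal formula, H = f²/(N·c) + f, assuming the common 0.03 mm circle-of-confusion value for a full-frame sensor:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres.
    0.03 mm is a common circle-of-confusion assumption for full frame."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

METERS_PER_MILE = 1609.34

h_m = hyperfocal_mm(800, 11) / 1000   # roughly 1,940 meters
h_miles = h_m / METERS_PER_MILE       # roughly 1.2 miles
```

Focusing at (or just beyond) the hyperfocal distance renders everything from half that distance to infinity acceptably sharp, which is why focusing a little past the tree saves both the tree and the moon.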
When I’m not sure of my subject distance, I estimate as best I can, focus on a point beyond my foreground subject, then review my image magnified to check sharpness. If my focus point is in my frame, great, but I won’t hesitate to remove my camera from the tripod to focus on something in another direction that’s the right distance (if you do this, to prevent refocusing, be sure you use back-button focus or are in manual focus mode when you click your shutter). It’s always best to get the focus sorted out before the moon arrives, a good reason to arrive at a new location well in advance of the moon’s arrival.
When the moon is a small accent to a wide scene, it’s often enough to just show up on its full or crescent day and shoot it somewhere above your subject. But because the margin of error is so small, planning for a big moon image is best done months in advance.
I identify big-moon candidate locations near home and on the road, and am always on the lookout for more. My criteria are a prominent subject that stands out against the sky, with a distant east- or west-facing vantage point. Over the years I’ve assembled a mental database ranging from hilltop trees near home, to landscape icons like Half Dome, Mt. Whitney, and Zabriskie Point (Death Valley).
With my subjects identified, I do my plotting (I still do it the old-fashioned way) and mark my calendar for the day I want to be there. That often means waiting close to a year for the alignment I want. And if the weather or schedule doesn’t cooperate, my wait can be longer than that.
About this image
On the penultimate evening of last February’s Yosemite Winter Moon photo workshop, I assembled the group on the granite above Tunnel View to wait for the moonrise we’d been thinking about all workshop. Sunset was at 5:30, and I expected the moon to appear behind Cloud’s Rest a little before 5:35, which meant the sky and landscape would already be starting to darken. The exposure for a post-sunset full moon is trickier than many people realize because capturing detail in both the daylight-bright moon and the rapidly fading landscape requires vigilant scrutiny of the camera’s histogram and highlight alert (blinking highlights). To get everyone up to speed, I used nearly full rising moons on the workshop’s first two nights to teach them to trust their camera’s exposure aids and ignore the image on the LCD (kind of like flying a plane on instruments). With two moonrises under their belts, by this evening I was confident everyone was ready.
I was ready too. In my never-ending quest to photograph the moon as large as possible, I went all-in—none of that wimpy-ass 200mm glass for me, for this moonrise I used every resource in my bag. I set up two tripods: mounted on one was my Sony a7RIII and 100-400 GM lens; on the other tripod was my Sony a7RIV and 200-600, doubled by the 2X teleconverter: 1200mm. But I wasn’t done. Normally I shoot full frame and crop later (for more compositional flexibility), but just for fun, on this night I decided to put my camera in APS-C mode so I could compose the scene at a truly ridiculous 1800mm—I just couldn’t resist seeing what 1800mm looked like in my viewfinder.
While waiting for the moon the group enjoyed experimenting with different compositions using the warm sunset light illuminating Half Dome and El Capitan. I used the time to test the focus at this unprecedented focal length. Waiting for an event like this with a group is one of my favorite things about photo workshops, and this evening was no exception. Between questions and clicks, we traded stories, laughed, and just enjoyed the spectacular view.
The brilliant sliver of the moon’s leading edge peeked above Cloud’s Rest at 5:33. It is truly startling to realize how quickly the moon moves through the frame at 1800mm, so everything after that was kind of a blur. Adjusting compositions and tweaking exposure and focus on two bodies, I felt like the percussionist in a jazz band, but I somehow managed to track the moon well enough to keep it framed in both cameras.
Though I just processed this image yesterday, it’s the earlier of the two big moon images I’ve processed from that shoot. Which one do you like best?
Posted on September 13, 2020
This is the second installment of my two-part fall color series
Read part one: The Why, How, and When of Fall Color
Vivid color and crisp reflections make autumn my favorite season for creative photography. While most landscape scenes require showing up at the right time and hoping for the sun and clouds to cooperate, photographing fall color can be as simple as circling your subject until the light’s right. For photographers armed with an understanding of light and visual relationships, and the ability to control exposure, depth, and motion with their camera’s exposure variables, fall color possibilities are virtually unlimited.
Backlight, backlight, backlight
The difference between the front-lit and backlit sides of fall foliage is the difference between dull and vivid color. Glare and reflection make the side of a leaf facing its light source, whether that leaf is in direct sunlight or simply faces an overcast sky, appear flat. But the other side of the same leaf, the side that’s opposite the light from the sun or sky, glows with color.
In the image below (Autumn Reflection, Merced River, Yosemite), my camera has captured the sky-facing side of most of the leaves. But I’ve captured the underside of the leaves on the top-right of the branch—even though it’s an overcast day, can you see how these backlit leaves glow compared to the others?
The moral of this story? If you ever find yourself disappointed that the fall color seems washed out, check the other side of the tree.
Isolate elements for a more intimate fall color image
Big fall color scenes are great, but isolating your subject with a telephoto, and/or by moving closer, enables you to highlight and emphasize specific elements and relationships.
- Train your eye to find leaves, groups of leaves, or branches that stand alone from the rest of the tree or scene, or that stand out against a contrasting background.
- Zoom close, using the edges of the frame to eliminate distractions and frame subjects.
- Don’t concentrate so much on your primary subject that you miss complementary background or foreground elements that can balance the frame and provide an appealing canvas for your primary subject.
Selective depth of field is a great way to emphasize/deemphasize elements in a scene
Limiting depth of field by composing close with a large aperture and/or telephoto lens can soften a potentially distracting background into a complementary canvas of color and shape. Parallel tree trunks, other colorful leaves, and reflective water make particularly effective soft background subjects. For an extremely soft background, reduce your depth of field further by adding an extension tube to focus even closer.
Underexpose sunlit leaves to maximize color
Contrary to what many believe, fall foliage in bright sunlight is still photographable if you isolate backlit leaves against a darker background and slightly underexpose them. The key here is making sure the foliage is the brightest thing in the frame, and to avoid including bright sky in the frame. Photographing sunlit leaves, especially with a large aperture to limit DOF, has the added advantage of an extremely fast shutter speed that will freeze wind-blown foliage.
Slightly underexposing brightly lit leaves not only emphasizes their color, it turns everything that’s in shade to a dark background. And if your depth of field is narrow enough, points of light sneaking between the leaves and branches to reach your camera will blur to glowing jewels.
A sunstar is a great way to liven up an image in extreme light
If you’re going to be shooting backlit leaves, you’ll often find yourself fighting the sun. Rather than trying to overcome it, turn the sun into an ally by hiding it behind a tree. A small aperture (f/16 or smaller is my general rule) with a small sliver of the sun’s disk visible creates a brilliant sunstar that becomes the focal point of your scene. Unlike photographing a sunstar on the horizon, hiding the sun behind a terrestrial object like a tree or rock enables you to move with the sun.
When you get a composition you like, try several frames, varying the amount of sun visible in each. The smaller the sliver of sun, the more delicate the sunstar; the more sun you include, the bolder the sunstar. You’ll also find that different lenses render sunstars differently, so experiment to see which lenses and apertures work best for you.
When photographing in overcast or shade, it’s virtually impossible to freeze the motion of rapid water at any kind of reasonable ISO. Rather than fight it, use this opportunity to add silky water to your fall color scenes. There’s no magic shutter speed for blurring water—in addition to the shutter speed, the amount of blur will depend on the speed of the water, your distance from the water, your focal length, and your angle of view relative to the water’s motion.
All blurs aren’t created equal. When you find a composition you like, don’t stop with one click. Experiment with different shutter speeds by varying the ISO (or aperture as long as you don’t compromise the desired depth of field).
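Because exposure scales linearly with ISO at a fixed aperture, the arithmetic behind varying shutter speed via ISO is simple: halve the ISO and you double the shutter time for the same exposure. A minimal Python sketch of that trade (the metered values in the comments are just illustrative):

```python
def equivalent_shutter(base_shutter_s, base_iso, new_iso):
    """Shutter time (seconds) that preserves the same exposure at the
    same aperture when only the ISO changes."""
    return base_shutter_s * base_iso / new_iso

# Example: a scene metered at 1/4 second at ISO 400 needs a full
# second at ISO 100 -- four times the motion blur, same exposure.
one_second = equivalent_shutter(0.25, base_iso=400, new_iso=100)
```

In practice I just count stops in my head, but the principle is the same: each one-stop ISO drop buys a doubling of the blur.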
Reflections make fantastic complements to any fall color scene
By autumn, rivers and streams that rushed over rocks in spring and summer meander at a leisurely, reflective pace. Adding a reflection to your autumn scene can double the color, and also add a sense of tranquility. The recipe for a reflection is still water, sunlit reflection subjects, and a shaded reflective surface.
When photographing leaves floating atop a reflection, it’s important to know that the focus point for the reflection is the focus point of the reflected subject, not the reflective surface. This seems counterintuitive, but try it yourself—focus on the leaves with a wide aperture and watch the reflection go soft; then focus on the reflection and watch the leaves go soft.
A wide focal length often provides sharpness from the nearby leaves to the infinite reflection, but sometimes achieving sharpness in your floating leaves and the reflection requires careful hyperfocal focus. And sometimes the necessary depth of field exceeds the camera’s ability to capture it—in this case, I almost always bias my focus toward the leaves and let the reflection go a little soft.
Don’t forget the polarizer
I can’t imagine photographing fall color without a polarizer. Fall foliage has a reflective sheen that dulls its natural color, so a properly oriented polarizer can erase that sheen and bring the underlying natural color into prominence. Not only are reflections on the foliage a problem; reflections on nearby water and rocks can also pull the eye and distract from your primary subject.
To minimize the scene’s reflection, slowly turn the polarizer until the scene is darkest (the more you try this, the easier it will be to see). If you have a hard time seeing the difference, concentrate your gaze on a single leaf, rock, or wet surface.
A polarizer isn’t an all-on or all-off proposition. When photographing a scene with still water, it’s often possible to maximize a reflection in the water without dialing up the reflection on the leaves. To achieve this, dial the polarizer’s ring and watch the reflection change until you achieve the effect you desire. This technique is particularly effective when you want your reflection to share the frame with submerged features such as rocks, leaves, and grass. In the image below, I turned my polarizer just enough to reveal the nearby submerged rocks without removing the reflection of the mountain and trees.
Nothing communicates the change of seasons like fall color with snow
Don’t think the first snow means your fall photography is finished for the year. Hardy autumn leaves often cling to branches, and even retain their color on the ground through the first few storms of winter. An early snowfall is an opportunity to catch fall leaves etched in white, an opportunity not to be missed. And even after the snow has been falling for a while, it’s possible to find a colorful rogue leaf to accent an otherwise stark winter scene.
About this image
People sometimes accuse me of adding or positioning leaves in my frame. Those who know me know I don’t do that, but that doesn’t protect me from their (good natured) abuse. For those who don’t know me and who don’t believe I found this leaf like that, I don’t really know what to say, except to explain that the joy I get from photography comes from discovering natural beauty, and a manufactured scene that isn’t natural has zero appeal to me. (I think this is also why I don’t do composites.) I don’t think it’s wrong to place elements in a frame (or to blend multiple images), as long as it’s done honestly—it’s just not something that interests me. But anyway…
I don’t really understand why people think it’s so unusual to find a leaf (or two, or three…) isolated from its surroundings. I aggressively look for small scenes like this, so it should be no surprise that I have a lot of them in my portfolio. While the position of the leaves in my images is randomly determined by nature (or maybe by the unscrupulous photographer who preceded me at the scene), there’s nothing random about my position when I capture these scenes.
Probably my favorite place to photograph isolated leaves is Bridalveil Creek, just beneath Bridalveil Fall in Yosemite. The entire area is decorated with an assortment of deciduous trees that deposit their leaves liberally among the rocks and cascades each fall. And unlike Yosemite’s other waterfalls, Bridalveil Fall runs year-round. Even in autumn, when it’s often barely more than a trickle, there’s enough water to cascade, splash, and pool among the rocks.
Another great thing about Bridalveil Creek is that its location just beneath Cathedral Rocks and Leaning Tower means it gets very little direct sunlight in autumn. So even when the sun’s out, I can spend hours photographing here in the full shade that’s ideal for this type of photography.
On this cloudy October morning I was doing my usual thing, bounding about on the rocks upstream from the trail looking for single leaves to isolate in my frame. Many of the cascades here are active enough to splash and wet the rocks, so when a descending leaf hits a wet rock just right, it sticks like glue. I didn’t see this leaf land and stick, but I’ve seen it happen enough to know this isn’t that unusual.
This cascade was about 20 feet away, above a pool that was deeper than I wanted to wade, so I went to my 70-200 lens. I spent a little time casually working this scene, circling, framing it from a variety of positions using different focal lengths. But when I got to this spot and saw the smooth curves and dark flowing into light, my mind immediately went to the Yin and Yang symbol (okay, so maybe you need to use your imagination a bit). I dropped down a bit and refined my composition, then started working on the exposure.
Not only was this spot in full shade, the morning was overcast. With my polarizer on to cut the sheen on the rocks and leaves, I knew that slowing the water enough to capture any detail was virtually impossible, so I went all-in on the motion blur and just turned the water a homogenous white. It turns out this decision actually enhanced the yin/yang effect I was going for.
To better understand the science and timing of fall color, read
A Gallery of Fall Color
Posted on September 6, 2020
Autumn is right around the corner. To get things started, I’ve updated a previous post that demystifies the why, how, and when of fall color.
Few things get a photographer’s heart racing more than the vivid yellows, oranges, and reds of autumn. And the excitement isn’t limited to photographers—to appreciate that reality, just try navigating New England backroads on a Sunday afternoon in the fall.
Despite all the attention, the annual autumn extravaganza is fraught with mystery and misconception. Showing up at the spot that guy in your camera club told you was peaking at this time last year, you might find the very same trees displaying lime green mixed with just hints of yellow and orange, and watch the old guy behind the counter at the inn shake his head and tell you, “It hasn’t gotten cold enough yet—the color’s late this year.” Then, the next year, when you check into the same inn on the same weekend, you find just a handful of leaves clinging to exposed branches—this time as the old guy hands you the key to your room he utters, “That freeze a couple of weeks ago got the color started early this year—you should have been here last week.”
While these explanations may sound reasonable, they’re not quite accurate. Because the why and when of fall color is complicated, observers resort to memory, anecdote, and lore to fill knowledge voids with partial truth and downright myth. And while we still can’t predict fall color the way we do the weather, science has provided a pretty good understanding of the fall color process.
A tree’s color
The leaves of deciduous trees contain a mix of green, yellow, and orange pigments. During the spring and summer growing season, the volume and intensity of the green chlorophyll pigment overpowers the orange and yellow pigments and the tree stays green. Even though chlorophyll is quickly broken down by sunlight, the process of photosynthesis that turns sunlight into nutrients during the long days of summer continuously replaces the spent chlorophyll.
As the days shrink toward autumn, things begin to change. Cells at the abscission layer at the base of each leaf’s stem (the knot where the leaf connects to the branch) begin the process that will eventually lead to the leaf dropping from the tree: Thickening of cells in the abscission layer blocks the transfer of carbohydrates from the leaves to the branches, and the movement of minerals to the leaves. Without these minerals, the leaves’ production of chlorophyll dwindles and finally stops, leaving just the yellow and orange pigments. Voilà—fall color!
The role of sunlight and weather
Contrary to popular belief, the timing of the onset of this fall color chain reaction depends much more on daylight than it does on temperature and weather. Triggered by a genetically programmed day/night-duration threshold (and contrary to innkeeper-logic), the trees in any given region will commence their transition from green to color at about the same time each year, when the day length drops to a certain point.
Nevertheless, though it doesn’t trigger the process, weather does play a significant part in the intensity, duration, and demise of the color season. Because sunlight breaks down the green chlorophyll, cloudy days after the suspension of chlorophyll creation will slow the chlorophyll’s demise and the coloring process that follows. And while the yellow and orange pigments are present and pretty much just hanging out while they wait all summer for the chlorophyll to relinquish control of the tree’s color, a tree’s red and purple pigments are manufactured from sugar stored in the leaves—the more sugar, the more vivid a tree’s red. Ample moisture, warm days, and cool (but not freezing) nights after the chlorophyll replacement has stopped are most conducive to the creation and retention of the sugars that form the red and purple pigments.
On the other hand, freezing temperatures destroy the color pigments, bringing a premature end to the color display. Drought can stress trees so much that they drop their leaves before the color has a chance to manifest. And wind and rain can wreak havoc with the fall display—go to bed one night beneath a canopy of red and gold, wake the next morning to find the trees bare and the ground blanketed with color.
Since the fall color factors come in a virtually infinite number of possible variations and combinations, the color timing and intensity can vary a lot from year to year. Despite expert advice that seems to promise precise timing for the fall color, when planning a fall color trip, your best bet is to try to get there as close as possible to the middle of the color window, then cross your fingers.
About this image
Looking for something to do in this COVID-constrained world, I dialed my way-back machine all the way back to 2005 and landed on this image. I wish I could tell you I have a memory of its capture, but I don’t. I do, however, have lots of general memories of photographing fall color at Bridalveil Creek in Yosemite, just below Bridalveil Fall. Since I’ve never visited Yosemite in autumn without shooting here, when I set out to find a fall color image in my archives, I specifically targeted my Bridalveil Creek shoots.
I started by digging up another image from this trip that I’ve always liked, but felt was too soft to share. Given that I virtually never take a single frame of a nice scene, I was pretty confident that I’d find something similar, and crossed my fingers that the sharpness problem was a one-off that I quickly corrected. This is actually the very next image I clicked, and I was very pleased to confirm that it is indeed sharp.
This image is a perfect example of my approach to intimate fall color scenes: Look for color to juxtapose with another feature in the scene. Often that’s a single leaf (no, I do not place leaves, ever), but in this case I accented a nice little cascade with a group of fallen leaves that were plastered against water-soaked granite. And when there’s water motion in the scene, I usually shoot it at a variety of shutter speeds to give myself multiple motion effects to choose between. Looking through my captures from this shoot, I can tell that’s exactly what I did. This image is a 1-second exposure, long enough to blur the cascade, but not so long that I obliterated all detail. And though I have no memory of it, I know I used a polarizer because I always use a polarizer when photographing fall color, and I can tell that the sheen has been removed from the rocks, leaves, and water.
Click an image for a closer look, and to view a slide show.
Posted on August 30, 2020
The feel at the Grand Canyon is expansive; standing amidst Yosemite’s towering monoliths, the feel is more intimate
I love photographing weather, and because Yosemite’s and the Grand Canyon’s distinctions affect the way their weather is experienced, their weather very much factors into the way I photograph them. In Yosemite Valley I feel like I’m actually in the weather, which is why, for better or worse, when a storm rages in Yosemite, I like to venture out into it. From swirling clouds to fresh snow, these adventures are the source of many of my favorite Yosemite images.
At the Grand Canyon, on the other hand, the best photography happens when I feel like I’m photographing someone else’s weather, so when a storm approaches, I try to retreat to a place where I can observe it from a distance. Even when lightning doesn’t make this a safety choice, I like to stand back and observe the weather. Standing on the rim, I can be high and dry beneath bland skies while photographing some of the most exquisite beauty I’ve ever seen. Often that’s lightning, rainbows, or a vivid sunrise/sunset, but sometimes it’s just the play of clouds and light in and around the layered red rocks and tributary canyons.
Last month my brother and I traveled to the Grand Canyon, primarily to photograph lightning and Comet NEOWISE. NEOWISE came through wonderfully, but the lightning not so much. I lost track of the number of times I trained my camera on a promising cell that didn’t deliver, but thankfully lightning is not a prerequisite for great Grand Canyon photography.
This image is the product of one such disappointing lightning shoot. I’d watched the cell move toward the rim from the south and would have bet money that it was bringing lightning with it. I set up my tripod, mounted my Sony a7RIV and Lightning Trigger, and waited with my eyes locked on the rain curtain, willing the lightning to manifest with all the effort I could muster. But alas, as happened far too frequently on this trip, the lightning fizzled. Lightning or not, though, I couldn’t help appreciating the drama unfolding when a band of heavy rain sped across the canyon. It only took about four minutes for this rain band to span the width of my frame and fizzle as it approached Wotan’s Throne on the North Rim (just out of the frame on the right).
I’d be lying if I said I rushed back to my room and instantly downloaded and processed this image, but working on the images from this trip, this moment stuck in the back of my mind. After I’d gone through the lightning images (exactly one worthy of processing), and the NEOWISE shoots (far more productive), I did another pass looking for some of the beautiful clouds and light that had blessed us, including this wet cell’s brief sprint across the canyon. When I found it I was pleased to see that the moment was indeed as dramatic as I remembered.
Vive la différence
Click an image for a closer look, and to view a slide show.
Posted on August 23, 2020
That my hometown topped 110 degrees several days last week isn’t especially newsworthy—100+ degrees happens maybe 20 times in an average Sacramento summer, and we hit 110 for a day or two every two or three years. But adding thunderstorms to the extreme temperatures is indeed unprecedented for California. And with the thunderstorms came the fires that have filled the sky with thick smoke and given the state an end of days vibe.
The fires are still burning, torching our forests and hills to the tune of 1,000,000+ acres burned, with no end in sight. I’m fortunate to live near the Sacramento–San Joaquin River Delta, where we don’t really need to worry about fire (but you might want to check on me if you hear about floods in Sacramento). Even though the closest fire is about 30 miles away, the smoke here is oppressive, at times so thick that it’s not safe to go outside.
To say this year has been a challenge for all of us would be an understatement. We each have our own way of coping, and one thing that has helped me maintain my sanity during the pandemic is getting out and walking the neighborhood several times each day. I’ll start a typical day with a pretty brisk 3 to 5 mile walk, then throughout the day, whenever I start to feel a little cabin fever setting in, I’ll take a more leisurely 1 or 2 mile walk—by the end of most days I’ve logged 8 to 10 miles, then I go to bed, wake up, and do it again.
But with the heat and smoke driving me inside 24×7, by the middle of last week I was beginning to feel a little crazy. So on one particularly smoky day (they all run together), I loaded my camera gear into the car, put the AC on recirculate, and headed to the hills. I had no illusions that I’d escape the smoke, but I just needed to see something different. The plan was to find some oaks against the sky and make some pictures of the orange sun.
I’d hoped to find trees far enough from the road that I could supersize the sun with my Sony 200-600, but after driving around a bit searching for elevated trees that I could align with the sun, I settled for this pair that was maybe 100 yards away. There was no parking here, and the rutted shoulder dipped steeply and only offered about a foot more than a car-width between the pavement and barbed-wire fence, but I squeezed in, thankful for my Outback’s AWD.
The smell of smoke hit me the second I opened my door, but I ignored the burning in my eyes and throat and got to work (I’m blessed to be in good health, with no respiratory problems). I grabbed my tripod from the back of the car, attached my Sony a7RIV, mounted my Sony 100-400, and crossed the road to set up as far from the trees as possible. It was about 45 minutes before sunset, but already the light felt like twilight. I thought I’d have about 30 minutes of shooting before the sun dipped below the hill, but framing up my first shot I realized that the sun was being swallowed by the smoke. Less than three minutes after I took this picture the sun was gone without a trace, not even a bright patch in the smoke, and I was done.
California feels like ground-zero for climate change, so when I hear people’s indefensible explanations for why it’s not real (or why humans aren’t responsible), I get a little irritated. From many of the comments I’ve heard, it’s pretty clear that some people just don’t understand it well enough to have an opinion, so a couple of years ago I wrote a blog explaining climate change in the simplest terms possible. I updated and re-shared this blog on my Facebook page a few days ago, and while the response was largely positive, I did get some pushback from a couple of people who still don’t realize that the debate is over. So I’ve appended it to the bottom of this post (beneath the Sun and Smoke gallery). If you have doubts about climate change, please take the time to read it. And if you still have doubts, before you push back, please be prepared to answer two questions:
- Do you not believe the greenhouse effect is real?
- Or do you not believe that humans are adding enough greenhouse gases to our atmosphere to make a difference?
Sun and Smoke
Click an image for a closer look, and to view a slide show.
Humans, we have a problem
Earth’s climate is changing, and the smoking gun belongs to us. Sadly, in the United States policy lags insight and reason, and the world is suffering.
Climate change science is complex, with many moving parts that make it difficult to communicate to the general public. Climate change also represents a significant reset for some of the world’s most profitable corporations. Those colliding realities created a perfect storm for fostering the doubt and confusion that persists among people who don’t understand climate science and the principles that underpin it.
I’m not a scientist, but I do have enough science background (majors in astronomy and geology before ultimately earning my degree in economics) to trust the experts and respect the scientific method. I also spent 20 years doing technical communication in the tech industry (tech writing, training, and support) for companies large and small. So I know that the fundamentals of climate change don’t need to intimidate, and the more accessible they can be to the general public, the better off we’ll all be.
Recently it feels like I’ve been living on the climate change front lines. On each visit to Yosemite, more dead and dying trees stain forests that were green as recently as five years ago. And throughout the Sierra (among other places), thirsty evergreens, weakened by drought, are under siege by insects that now thrive in mountain winters that once froze them into submission. More dead trees means more fuel, making wildfires not just more frequent, but bigger and hotter.
Speaking of wildfires, for a week last month I couldn’t go outside without a mask thanks to smoke from the Camp Fire that annihilated Paradise (70 miles away). I have friends who evacuated from each of this November’s three major California wildfires (Camp, Hill, and Woolsey), and last December the Thomas Fire forced a two-week evacuation of Ojai, where my wife and I rent a small place (to be near the grandkids). Our cleanup from the Thomas fire took months, and we still find ash in the most unexpected places (and we were among the lucky who had a home to clean).
The debate is dead
Despite its inevitable (and long overdue) death, the climate change debate continues to stagger on like a mindless zombie. We used to have to listen to the skeptics claim that our climate wasn’t changing at all, so I guess hearing them acknowledge that okay-well-maybe-the-climate-is-changing-but-humans-aren’t-responsible can be considered progress.
Despite what you might read on social media or fringe websites, climate change alternative “explanations” like “natural variability” and “solar energy fluctuations” have been irrefutably debunked by rigorously gathered, thoroughly analyzed, and closely scrutinized data. (And don’t get me started on the whole “scientists motivated by grant money” conspiracy theory.)
Science we all can agree on
One thing that everyone does agree on is the existence of the greenhouse effect, which has been used for centuries to grow plants in otherwise hostile environments.
As you may already know, a greenhouse’s transparent exterior allows sunlight to penetrate and warm its interior. The heated interior radiates at longer wavelengths (infrared) that don’t escape as easily through the greenhouse’s ceiling and walls. That means more heat is added to a greenhouse than exits it, so the interior is warmer than the environment outside.
There’s something in the air
Perhaps the most common misperception about human-induced climate change is that it’s driven by all the heat we create when we burn stuff. But that’s not what’s going on, not even close.
Our atmosphere behaves like a greenhouse, albeit with far more complexity. The sun bathes Earth with continuous electromagnetic radiation that includes infrared, visible light, and ultraviolet. Solar radiation not reflected back to space reaches Earth’s surface to heat water, land, and air. Some of this heat makes it back to space, but much is absorbed by molecules in Earth’s atmosphere, forming a virtual blanket that makes Earth warmer than it would be without an atmosphere. In a word: habitable.
Because a molecule’s ability to absorb heat depends on its structure, some molecules absorb heat better than others. The two most common molecules in Earth’s atmosphere, nitrogen (N2: two nitrogen atoms) and oxygen (O2: two oxygen atoms), are bound so tightly that they don’t absorb heat. Our atmospheric blanket relies on other molecules to absorb heat: the greenhouse gases.
Also not open for debate is that Earth warms when greenhouse gases in the atmosphere rise, and cools when they fall. The rise and fall of greenhouse gases has been happening for as long as Earth has had an atmosphere. So our climate problem isn’t that our atmosphere contains greenhouse gases, it’s that human activity changes our atmosphere’s natural balance of greenhouse gases.
Earth’s most prevalent greenhouse gas is water vapor. But water vapor responds quickly to temperature changes, leaving the atmosphere relatively fast as rain or snow, while other greenhouse gases hold their heat far longer.
The two most problematic greenhouse gases are carbon dioxide (CO2: one carbon atom bonded with two oxygen atoms) and methane (CH4: one carbon atom bonded with four hydrogen atoms). The common denominator in these “problem” gases is carbon. (There are other, non-carbon-based, greenhouse gases, but for simplicity I’m focusing on the most significant ones.)
Carbon exists in many forms: as a solo act like graphite and diamond, and in collaboration with other elements to form more complex molecules, like carbon dioxide and methane. When it’s not floating around the atmosphere as a greenhouse gas, carbon in its many forms is sequestered in a variety of natural reservoirs called “carbon sinks,” where it does nothing to warm the planet.
Oceans are Earth’s largest carbon sink. And since carbon is the fundamental building block of life on Earth, all living organisms, from plants to plankton to people, are carbon sinks as well. The carbon necessary to form greenhouse gases has always fluctuated naturally between the atmosphere and natural sinks like oceans and plants.
For example, a growing tree absorbs carbon dioxide from the atmosphere, keeping the carbon and expelling oxygen (another simplification of a very complex process)—a process that stops when the tree dies. As the dead tree decomposes, some of its carbon is returned to the atmosphere as methane, but much of it returns to the land where it is eventually buried beneath sediments. Over tens or hundreds of millions of years, some of that sequestered carbon is transformed by pressure and heat to become coal.
Another important example is oil. For billions of years, Earth’s oceans have been host to simple-but-nevertheless-carbon-based organisms like algae and plankton. When these organisms die they drop to the ocean floor, where they’re eventually buried beneath sediment and other dead organisms. Millions of years of pressure and heat transform these ancient deposits into oil.
Coal and oil (hydrocarbons), as significant long-term carbon sinks, were quite content to lounge in comfortable anonymity as continents drifted, mountains lifted and eroded, and glaciers advanced and retreated. Through all this slow motion activity on its surface, Earth’s temperatures ebbed and flowed and life evolved accordingly.
Enter humans. We have evolved, migrated, and built civilizations based on a relatively stable climate. And since the discovery of fire we humans have burned plants for warmth and food preparation. Burning organic material creates carbon dioxide, thereby releasing sequestered carbon into the atmosphere. Who knew that such a significant advance was the first crack in the climate-change Pandora’s Box?
For thousands of years the demand for fuel was met simply by harvesting dead plants strewn about on the ground, and the reintroduction of carbon to the atmosphere was minimal. But as populations expanded and technology advanced, so did humans’ thirst for fuel to burn.
We nearly killed off the whales for their oil before someone figured out that those ancient, subterranean metamorphosed dead plants burn really nicely. With an ample supply of coal and oil and a seemingly boundless opportunity for profit, coal and oil soon became the driving force in the world’s economy. Suddenly, hundreds of millions of years worth of sequestered carbon was being reintroduced to our atmosphere as fast as it could be produced—with a corresponding acceleration in greenhouse gases (remember, when we burn hydrocarbons, we create carbon dioxide).
Compounding the fossil-fuel-as-energy problem is the extreme deforestation taking place throughout the world. Not only does burning millions of forest and jungle acres each year instantly reintroduce sequestered carbon to the atmosphere, it destroys a significant sink for present and future carbon.
Scientists have many ways to confirm humans’ climate change culpability. The most direct is probably the undeniable data showing that for millennia carbon dioxide in Earth’s atmosphere hovered rather steadily around 280 parts per million (ppm). Then, starting with the onset of the Industrial Revolution in the late 18th century, atmospheric carbon dioxide began a steady rise and today sits somewhere north of 400 ppm, with a bullet.
Humans don’t get a pass on atmospheric methane either. While not nearly as abundant in Earth’s atmosphere as carbon dioxide, methane is an even more powerful greenhouse gas, trapping about 30 times more heat than its more plentiful cousin. Methane is liberated to the atmosphere by a variety of human activities, from the decomposition of waste (sewage and landfill) to agricultural practices that include rice cultivation and bovine digestive exhaust (yes, that would be cow farts).
While the methane cycle is less completely understood than the carbon dioxide cycle, the increase of atmospheric methane also correlates to fossil fuel consumption. Of particular concern (and debate) is the cause of the steeper methane increase since the mid-2000s. Stay tuned while scientists work on that….
For humans, the most essential component of Earth’s habitability is the precarious balance between water’s three primary states: gas (water vapor), ice, and liquid. Since the dawn of time, water’s varied states have engaged in a complex, self-correcting choreography of land, sea, and air inputs—tweak one climate variable here, and another one over there compensates.
Earth’s climate remains relatively stable until the equilibrium is upset by external input like solar energy change, volcanic eruption, or (heaven forbid) a visit from a rogue asteroid. Unfortunately, humans incremented the list of climate catalysts by one with the onset of the Industrial Revolution, and our thirst for fossil fuels.
As we’re learning firsthand in realtime, even the smallest geospheric tweak can initiate a self-reinforcing chain reaction with potentially catastrophic consequences for humanity’s long-term wellbeing. For example, a warmer planet means a warmer ocean and less ice, which means more liquid water and water vapor. Adding carbon dioxide to water vapor kicks off a feedback loop that magnifies atmospheric heat: More carbon dioxide raises the temperature of the air—>warmer air holds more water vapor—>more water vapor warms the air more—>and so on.
But that’s just the beginning. More liquid water swallows coastlines; increased water vapor means more clouds, precipitation, and warmer temperatures (remember, water vapor is a greenhouse gas). Wind patterns and ocean currents shift, changing global weather patterns. Oh yeah, and ice’s extreme albedo (reflectivity) bounces solar energy back to space, so shrinking our icecaps and glaciers means less solar energy returned to space and even more solar energy to warm our atmosphere, which only compounds the problems.
Comparing direct measurements of current conditions to data inferred from tree rings, ice and sediment cores, and many other proven methods, makes it clear that human activity has indeed upset the climate balance: our planet is warming. What we’re still working on is how much we’ve upset it (so far), what’s coming, and where the tipping point is (or whether the tipping point is already in our rearview mirror).
We do know that we’re already experiencing the effects of these changes, though it’s impossible to pinpoint a single hurricane, fire, or flood and say this one wouldn’t have happened without climate change. And contrary to the belief of many, not everyone will be warmer. Some places are getting warmer, others are getting cooler; some are wetter, others are drier. The frequency and intensity of storms is changing, growing seasons are changing, animal habitats are shifting or shrinking, and the list goes on….
We won’t fix the problem by simply adjusting the thermostat, building dikes and levees, and raking forests. Until we actually reduce greenhouse gases in our atmosphere, things will get worse faster than we can adjust. But the first step to fixing a problem is acknowledging we have one.
About this image
The Camp Fire had been burning for ten days, devouring Paradise and filling the air in Sacramento with brown smoke so thick that at times not only could we not see the sun, we couldn’t see the end of the block. But on this afternoon, when an orange ball of sun burned through the smoke I donned a mask, grabbed my camera bag, and headed for the hills.
I have a collection of go-to foothill oak trees for sun and moonsets, but most of these trees are too close to my shooting position for the extreme telephoto image I had in mind. Too close because at this kind of focal length, the hyperfocal distance is over a mile. So I made my way to a quiet country road near Plymouth where I thought the trees might just be distant enough to work. But I’m less familiar with this location than many of my others, so I didn’t know exactly how the trees and sun would align. Turning onto the road, I drove slowly, glancing at the sun and trees until they lined up. Because there wasn’t a lot of room to park on either side, I was pleased that the shoulder at the location that worked best was just wide enough for my car.
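To put a rough number on that mile-plus hyperfocal distance, here’s a quick sketch using the standard hyperfocal formula. The specific values are assumptions for illustration (an 800mm effective focal length from a 400mm lens with a 2X teleconverter, f/8, and a 0.03mm full-frame circle of confusion), not the exact settings from this shoot:

```python
# Hyperfocal distance: H = f^2 / (N * c) + f
# f = focal length (mm), N = f-number, c = circle of confusion (mm).
# All values below are illustrative assumptions, not the shoot's settings.
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

h = hyperfocal_mm(800, 8)      # ~2.67 million mm
miles = h / 1_609_344          # millimeters per mile
print(round(miles, 2))         # well over a mile
```

Focus at (or beyond) that distance and everything from half the hyperfocal distance to infinity is acceptably sharp, which is why nearby trees just don’t work at extreme telephoto focal lengths.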
Envisioning a maximum telephoto shot, I added my Sony 2X teleconverter to my Sony 100-400 GM lens. While my plan was to use my 1.5-crop Sony a6300, when I arrived the sun was high enough that that combination provided too much magnification, so I started with my full frame Sony a7RIII. But as soon as the sun dropped to tree level I switched to the a6300 and zoomed as tight as possible.
When I started the sun was still bright enough that capturing its color made the trees complete silhouettes, with no detail or color in the foreground. But as the setting sun sank into increasingly thick smoke, it became redder and redder and my exposure became easier. It always surprises me how fast the sun and moon move relative to the nearby horizon, so I found myself running around to different positions to get the right sun and tree juxtaposition as the sun fell. The smoke near the horizon was so thick that it swallowed the sun before it actually set.
Later I plotted my location and the sun’s position on a map and realized that I was pointing right at San Francisco, about 100 miles away, with a large swath of the Bay Area in between. Then I thought about this air that was thick enough to completely obscure the sun, and the millions of people who had been breathing that air for weeks.
I’d be lying if I said I don’t like this image—it’s exactly what I was going for. But I’d be very happy if I never got another opportunity to photograph something like this.
Click an image for a closer look and slide show. Refresh the window to reorder the display.
Posted on August 16, 2020
In a previous life I spent a dozen or so years doing technical support. In this job a key role was convincing people that, despite all failures and error messages to the contrary, they are in fact smarter than their computers. Most errors occur because the computer just didn’t understand: If I misspel a wurd, you still know what I meen (rite?). Not so with a computer. A computer can’t anticipate, reason, or create; assigned a task, it will blithely continue repeating a mistake, no matter how egregious, until it is instructed otherwise, fails, or destroys itself.
All this applies equally to today’s “smart” cameras, which are essentially computers at their core. But no matter how advanced its technology, a camera just can’t compete with your brain. Really.
For example, if I’d allowed my camera to decide the exposure for this crescent moon scene from 2016, I’d have ended up with a useless mess: While this image is all about color and shape, automatic exposure, deciding that the foreground hillside is important, would have brightened the scene enough to expose distracting detail and completely wash out the color in the sky. But I knew better. Wanting to simplify the scene, I manually metered and banished the extraneous foreground detail to the black shadows, capturing only the moon’s delicate shape and a solitary oak silhouetted against the indigo twilight.
Digital cameras become more technologically advanced each year, and their auto-exposure and -focus capabilities are quite good, good enough that nobody should feel they must switch to manual if they fear it will diminish the pleasure they get from photography. But if your photographic pleasure comes from getting the best possible images, it would benefit you to spend a little time mastering manual metering (and hyperfocal focus), then using that knowledge to override your camera’s programmed inclinations. It might help to know that in my photo workshops I teach (but never require) manual metering to all who are interested, and most who try it are surprised by how easy and rewarding it is to take control of their camera.
Trust your histogram
Exposure control starts by learning to use a histogram, a graph of the tones in an image (read more about histograms). Not only does every digital camera show us a histogram of the scene we just photographed, modern cameras (all mirrorless for sure, and all of the latest DSLRs that I know of) display the histogram for the scene we’re currently metering, before the shutter is clicked.
Instead of clicking and hoping as we did in the film days, or clicking, checking, and adjusting as we did in the pre-live-view histogram days, a histogram displayed before we shoot provides advance knowledge of the image’s exposure. For those who know how to read a histogram, manual exposure has never been easier—just monitor the histogram as you prepare your shot and dial the exposure until the histogram looks right. Click.
Setting up your live-view histogram
To ensure a valid pre-capture histogram (on your DSLR’s live-view screen, or your mirrorless camera’s live-view or viewfinder screen), make sure you are in whatever your camera manufacturer calls exposure simulation. When the camera simulates exposure, rather than always showing the ideal exposure on the live-view screen, it attempts to emulate the exposure settings you’re using. Here is a far from comprehensive guide to the exposure simulation designation used by the major camera manufacturers (though I can’t guarantee that all cameras from the same manufacturer do it the same way):
- Canon: Exposure Simulation (enabled)
- Fuji: Preview Exp. in Manual Mode (off)
- Olympus: Live-view Boost (off)
- Nikon: Exposure Preview (selected in the Info menu)
- Sony: Setting Effect (on)
On most cameras the metering mode (the way the camera’s meter views the scene—not to be confused with exposure mode, which is the way the camera sets the exposure) doesn’t affect the pre-capture histogram, but to be safe, instead of spot or partial metering, I choose a metering mode that uses the entire frame. (With my Sony mirrorless bodies, I set my metering mode to Entire Screen Average.)
Once you’ve turned on exposure simulation, you need to figure out how to display the histogram. Most cameras, mirrorless or DSLR, offer multiple live-view screen options that display a variety of information about the scene you’re photographing. On most cameras, only one or two of these screens displays the histogram—finding it is usually a simple matter of cycling through the various displays until the histogram appears. To minimize the number of screens I need to scroll through to get to the information I need (such as the histogram or level), I always go into my camera’s menu system and disable the live-view screens I don’t use.
Using your live-view histogram
Using my pre-capture histogram, I start the metering process as I always have. In manual exposure mode, I start with my camera’s best ISO (100 for my Sony a7RIV) and the best f-stop for my composition (unless motion, such as wind or star motion, forces me to compromise my ISO and/or f-stop). With ISO and f-stop set, I slowly adjust my shutter speed with my eye on the histogram in my viewfinder (or LCD).
Most mirrorless bodies offer highlight warnings in their pre-capture view (often called “zebras”). While these alerts aren’t nearly as reliable as the histogram and should never be relied on for final exposure decisions, I use their appearance as a reminder to check my histogram. The first time I meter a scene, my current exposure settings (based on my prior scene) can be far from what the current scene requires—in this case, I quickly adjust my shutter speed until the zebras appear (if my prior exposure was too dark) or disappear (if my prior exposure was too bright), then refine the exposure more slowly while watching the histogram.
In a low or moderate contrast scene, I’ll have room on both the shadows and highlights sides of the histogram—a pretty easy scene to expose. But in a high dynamic range scene (dark shadows and bright highlights), the difference between the darkest shadows and brightest highlights might stretch the histogram beyond its boundaries. When the dynamic range is so great that I have to choose between saving the highlights or the shadows, I almost always bias my exposure choice toward sparing the highlights, carefully dialing the exposure until the histogram bumps against the right side—at that point I stop adding exposure, even if my shadows are cut off (black).
Because the post-capture histogram is more reliable than the pre-capture histogram, when high dynamic range gives me little margin for error, I verify my exposure by checking the post-capture histogram. Here’s where the RGB (red, green, blue) histogram becomes important. While the luminosity (white) histogram gives you the detail you captured, it doesn’t tell you if you lost color. Washed out color is always a risk when you push the histogram all the way to the right, so it’s best to check the post-capture RGB histogram to ensure that none of the image’s color channels are clipped.
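The check the RGB histogram performs can be sketched in a few lines: flag any channel with pixels piled up at the top of the range, since a blown channel means lost color even when the luminosity histogram looks safe. The tiny “image” below is made-up data for illustration:

```python
# Sketch of an RGB clipping check: report any channel with pixels at
# the top of the range (blown highlights in that channel).
# The sample pixels are invented for illustration.
def clipped_channels(pixels, max_val=255):
    # pixels: list of (r, g, b) tuples
    names = ("red", "green", "blue")
    return [names[c] for c in range(3)
            if any(p[c] >= max_val for p in pixels)]

image = [(255, 180, 90), (240, 200, 120), (255, 210, 100)]
print(clipped_channels(image))  # → ['red']
```

Here the red channel is clipped while green and blue still hold detail—the kind of single-channel blowout (common in red sunset skies) that a luminosity-only histogram can hide.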
An often overlooked aspect of mastering in-camera metering is simply learning how your camera reports exposure. Not only does every camera interpret and display its exposure information differently, the histogram returned is based on the jpeg, so raw shooters always have more information than their camera reports—it’s important to know how much more. With my Sony a7Rx bodies, I know I’m usually safe pushing my histogram’s exposure graph up to a full stop beyond the left or right (shadows and highlights) boundary—I have no problem using every available photon.
A few more words about this image
In addition to taking control of the exposure for this image, roaming a hilly cow pasture in the foothills east of Sacramento gave me full freedom of movement to control the new moon’s position relative to the tree. As the sky darkened and the moon dropped, I literally ran up and down the hill to capture as many moon/tree/frame relationships as possible before the moon disappeared.
This is the week (August 16-21, 2020) to photograph a crescent moon. My recommendation is Monday morning on the eastern horizon before sunrise, and Wednesday or Thursday low in the west after sunset.
A Crescent Moon Gallery
Posted on August 9, 2020
As soon as I announced that I’d purchased the just-announced Sony a7SIII, people started asking why I wanted a 12 megapixel camera when I already have a 61 megapixel Sony a7RIV (two, actually). When I hear these questions, I realize the myth that megapixels are a measure of image quality is still alive. The truth is, megapixels are a reflection of image size, not image quality. In fact, for any given technology, the fewer the megapixels, the better the image quality.
Without getting too deep into the weeds of noise and clarity in a digital image, it’s safe to say that the more efficient a sensor is at capturing light, and the less heat the sensor generates, the better it will perform in these areas. How do you make a sensor more efficient? Well, you start with bigger photosites to catch more light. And how to keep the sensor cool? Give your photosites more room to breathe. But how do you make your photosites both bigger and farther apart without increasing the size of the sensor? It doesn’t take a rocket scientist to conclude that reducing the number of photosites is the only way to achieve both of these objectives.
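A rough pixel-pitch comparison makes the tradeoff concrete. Assuming a full-frame sensor roughly 36mm wide and approximate published resolutions (about 4240 pixels across for a 12 megapixel body, about 9504 for a 61 megapixel body—ballpark figures, not spec-sheet precise):

```python
# Back-of-the-envelope pixel pitch for a ~36mm-wide full-frame sensor.
# Horizontal resolutions are approximate published values.
def pitch_microns(sensor_width_mm, horizontal_pixels):
    return sensor_width_mm / horizontal_pixels * 1000

a7s_pitch = pitch_microns(36, 4240)    # ~12 MP body: ~8.5 microns
a7r4_pitch = pitch_microns(36, 9504)   # ~61 MP body: ~3.8 microns
```

All else equal, each photosite on the 12 megapixel sensor is more than twice as wide—roughly five times the light-gathering area—which is the whole low-light argument in one number.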
So why do the manufacturers keep giving us more photosites? (My last rhetorical question, I promise.) Well first, advances in technology make it possible to cram more photosites onto a fixed-size sensor without compromising image quality (and in fact, often while still improving image quality). But more important than that is the sad, simple truth that megapixels sell cameras.
Don’t get me wrong, I think megapixel count is great and am all for as many megapixels as I can have—as long as they don’t come at the expense of image quality. The more megapixels you have, the more you can crop, and the larger you can print. While cropping is a nice safety net, the goal should be to get the composition right at capture. And before chasing more megapixels, you should ask yourself how large you need to print, and how many megapixels you need to do it. Whenever this question comes up, I think about an image that I have printed 24×36 and hanging in my home. It’s an extreme close-up of a raindrop festooned dogwood flower, with Bridalveil Fall in the background. I can stand six inches from this 24×36 print and not feel like it’s missing any detail, from its delicate spider web filaments to the small dust particles suspended in the raindrops. All this was captured as a jpeg on my first DSLR, a 6 megapixel Canon 10D.
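The arithmetic behind that 24×36 print is worth a quick sketch. Assuming the Canon 10D’s roughly 3072×2048 pixel output (an approximate figure for illustration), the pixels-per-inch at print size works out to:

```python
# Print resolution sketch: pixels per inch when a ~3072 x 2048
# (6 MP) capture is printed at 36 x 24 inches. The resolution is
# an approximate figure for illustration.
def print_ppi(pixels_long_edge, print_inches_long_edge):
    return pixels_long_edge / print_inches_long_edge

ppi = print_ppi(3072, 36)
print(round(ppi, 1))  # → 85.3
```

Around 85 ppi is far below the 240–300 ppi often cited for close-viewing prints, yet the print holds up—a reminder that viewing distance, subject, and print quality matter at least as much as raw pixel counts.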
So given all this, you may be wondering why my primary camera is a 61 megapixel Sony a7RIV, with a second a7RIV as my backup. Well, like I said, all things equal, more megapixels are better than fewer megapixels, and for the vast majority of the natural light landscapes, on a tripod, that I photograph, my a7RIV bodies give me cleaner, higher resolution images than I ever dreamed possible. The dynamic range is the best I’ve ever seen, and my high ISO images are as good as any primary body I’ve ever owned. They’re so good, in fact, that last year I set aside my dedicated night camera, my 12 megapixel Sony a7SII, in favor of the a7RIV. I was getting such good results after dark with the a7RIV, I figured I could sacrifice a little low light performance to lighten my bag.
And for the most part I was satisfied—I’ve now used it enough at night to know the a7RIV is hands down the best night camera I’ve used that’s not an a7S (original, or a7SII). But photographing Comet NEOWISE last month in Yosemite, I started to wonder if I might have been too quick to jettison the a7SII. My images were clean enough, but if I could get even less noise…
If you follow me regularly you know that I’m a one-click shooter—if I can’t get an image with one click, I don’t shoot it. That doesn’t mean I think it’s wrong to composite night images, but that approach doesn’t give me satisfaction, and I don’t like the artificial look of images that have clearly been blended. The analogy I like to use is the difference between applying a little make-up (dodging/burning and noise reduction in Photoshop), and submitting to cosmetic surgery (blending multiple exposures captured at different times, or with completely different focus and exposure settings). (There’s also a third option that’s more of a Frankenstein solution that involves assembling images from two different scenes, that I don’t even consider real photography.) My one-click approach means I have to live with more noise in my night images, but anyone viewing them knows that that truly is what my camera saw.
So anyway… For my Grand Canyon trip a couple of weeks ago, I decided to dust off the a7SII and give it a shot at Comet NEOWISE. My plan was to concentrate on the park’s east vistas to get away from the lights of Grand Canyon Village. Desert View was closed, but all the other vistas—west to east: Grandview, Moran, Lipan, and Navajo Points—were open for business. So during the day, while I chased lightning out on the east end, at each stop I made a point of firing up my astronomy apps to figure out where the comet would be after dark.
Knowing that at about an hour after sunset, NEOWISE would be in the northwest sky just a few degrees west of the Big Dipper (which would be dropping and rotating closer to due north as the night wore on), I decided that Grandview Point would be the best place to get it above the canyon. After it rotated farther north, I liked the way NEOWISE aligned with the canyon from the more eastern vistas. On that first night I got about 45 minutes of clear enough skies before the clouds returned.
For this trip I’d brought two tripods so I could simultaneously shoot with both the a7SII and a7RIV. On the a7SII I mounted my Sony 20mm f/1.8 G lens; on the a7RIV was the Sony 24mm f/1.4 GM lens. For both cameras I had long exposure noise reduction turned on (because with the Sonys it does make a difference for exposures measured in seconds). LENR doubles the capture time, which gave me at least 30 seconds between each shot, making it really easy to switch back and forth between cameras.
Having both cameras set up side-by-side like this, I was reminded what a nighttime monster the a7SII is—even though the a7RIV had a slightly faster lens, I could see the dark scene much better with the a7SII. I wouldn’t know how much cleaner the a7SII files would be until I looked at them on my computer, but what a joy that camera is to work with in the dark.
I went with relatively few compositions, but varied my exposures for each for more processing options later. To focus, I just picked a star in my viewfinder, magnified it to the maximum, and dialed my focus ring until the star became the smallest dot possible. And even though that’s usually enough to ensure a sharp image, each time I focused I verified sharpness by magnifying the captured image in my viewfinder and checking the detail in the canyon.
I was thrilled by how much light the 20% waxing crescent moon cast on the scene. While the moonlight wasn’t noticeable to my eye, and didn’t seem to wash out the stars at all, it did cast enough light to bring out more canyon detail in my images. The small meteor that scooted through the Big Dipper during this frame was a welcome bonus that surprised me when I reviewed the image later.
When I finally got back to the room and looked at my images from that night a little more closely, the a7SII images were noticeably cleaner, so much so that when I went back out to photograph the comet the next night, I didn’t even set up the a7RIV. Is the a7RIV bad for night photography? Absolutely not. In fact, to capture 61 megapixel, high ISO, long exposure images as clean as the a7RIV does feels like cheating. But given my one-shot paradigm, and the fact that 12 megapixels is more than enough resolution for pretty much any use I can think of (for me—you need to decide for yourself how much resolution you need), for dark sky night photography, my vote goes to the a7SII’s cleaner files and ease of use.
Some of my fellow Sony Artisans got to preview the a7SIII, but since it’s primarily billed as a video camera and I don’t really do video (yet), I’ll have to wait until mine arrives at the end of September (fingers crossed). But the reports from my colleagues about the a7SIII’s high ISO performance have me salivating.
An a7S/a7SII Gallery