Mastering Focus (Hyperfocal and Otherwise)

Gary Hart Photography: Floating Leaves, Valley View, Yosemite

Floating Autumn Leaves, Valley View, Yosemite
Canon EOS-1Ds Mark II
Canon 24-105 f/4 L
1/15 second
F/16
ISO 100

What’s the point?

Achieving proper focus seems to be one of photography’s great mysteries: the camera settings, where to place the focus point, even the definition of sharpness are all sources of confusion and angst. If you’re a tourist just grabbing snapshots, everything in your frame is likely at infinity and you can put your camera in full auto mode and click away. But if you’re a photographic artist trying to capture something unique with your mirrorless or DSLR camera, doing your best to include important visual elements at different distances throughout your frame, you need to stop letting your camera decide your focus point and exposure settings.

Of course the first creative focus decision is whether you even want the entire frame sharp. While some of my favorite images use selective focus to emphasize one element and blur the rest of the scene, most (but not all) of what I’ll say here is about using hyperfocal techniques to maximize depth of field (DOF). I cover creative selective focus in much greater detail in another Photo Tip article: Creative Selective Focus.

Beware the “expert”

I’m afraid that there’s some bad, albeit well-intended, advice out there that yields just enough success to deceive people into thinking they’ve got focus nailed, a misperception that often doesn’t manifest until an important shot is lost. I’m referring to the myth that you should focus 1/3 of the way into the scene, or 1/3 of the way into the frame (two very different things, each with its own set of problems).

For beginners, or photographers whose entire scene is at infinity, the 1/3 technique may be a useful rule of thumb. But taking the 1/3 approach to focus requires that you understand DOF and the art of focusing well enough to adjust your focus point when appropriate, and once you achieve that level of understanding, you may as well do it the right way from the start. That ability becomes especially important in scenes where missing the focus point by just a few feet or inches can make or break an image.

Where would you focus here? Of course 1/3 of the way into a scene that stretches for miles won’t work, and neither will 1/3 of the way into a frame with a diagonal foreground.

Back to the basics

Understanding a few basic focus truths will help you make focus decisions:

  • A lens’s aperture is the opening that allows light to reach your sensor—the bigger this opening, the more light gets in, but also the smaller your DOF.
  • Aperture is measured in f-stops, which is the lens’s focal length divided by the aperture’s diameter; the higher the f-number, the smaller the aperture and the greater the DOF. So f/8 is actually a bigger aperture (with less DOF) than f/11. This understanding becomes second nature, but if you’re just learning it’s helpful to think of it this way: the higher the f-number, the greater the depth of field. Though they’re not exactly the same thing, photographers usually use f-stop and aperture interchangeably.
  • Regardless of its current f-stop setting, a camera maximizes the light in its viewfinder by always showing you the scene at the lens’s widest aperture. All this extra light makes it easier to compose and focus, but unless your exposure is set for the widest aperture (which it shouldn’t be unless you have a very specific reason to limit your depth of field), the image you capture will have more DOF than you see in the viewfinder. The consequence is that you usually can’t see how much of your scene is in focus when you compose. Most cameras have a DOF preview button that temporarily closes the lens down to the f-stop you have set—this shows the scene at its actual DOF, but can also darken the viewfinder considerably (depending on how small your aperture is), making it far more difficult to see the scene.
  • For any focus point, there’s only one (infinitely thin) plane of maximum sharpness, regardless of the focal length and f-stop—everything in front of and behind the plane containing your focus point (and parallel to the sensor) will be some degree of less than maximum sharpness. As long as the zone of less than perfect sharpness isn’t visible, it’s considered “acceptably sharp.” When that zone becomes visible, that portion of the image is officially “soft.” When photographers speak of sharpness in an image, they’re really talking about acceptable sharpness.
  • The zone of acceptable sharpness extends a greater distance beyond the focus point than it does in front of the focus point. If you focus on that rock ten feet in front of you, rocks three feet in front of you may be out of focus, but a tree fifty feet away could be sharp. I’ll explain more about this later.
  • While shorter focal lengths may appear to provide more depth of field, believe it or not, DOF doesn’t actually change with focal length. What does change is the size of everything in the image, so as your focal length increases, your functional or apparent DOF decreases. So you really aren’t gaining more absolute DOF with a shorter focal length; the softness just won’t be as visible. When photographers talk about DOF, they’re virtually always talking about apparent DOF—the way the image looks. (That’s the DOF definition I use here too.)
  • The closer your focus point, the narrower your DOF (range of front-to-back sharpness). If you focus your 24mm lens on a butterfly sunning on a poppy six inches from your lens, your DOF is so narrow that it’s possible parts of the poppy will be out of focus; if you focus the same lens on a tree 100 feet away, the mountains behind the tree are sharp too.
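The f-stop arithmetic in the bullets above is simple enough to sketch. This small Python snippet (purely illustrative, not part of any camera workflow) computes the physical diameter of the aperture opening from focal length and f-number, confirming that f/8 is a larger opening than f/11 on the same lens:

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Physical diameter of the aperture opening, in millimeters.

    By definition, f-number = focal length / aperture diameter,
    so diameter = focal length / f-number.
    """
    return focal_length_mm / f_number

# On a 100mm lens, f/8 is a physically larger opening than f/11:
d_f8 = aperture_diameter_mm(100, 8)    # 12.5 mm
d_f11 = aperture_diameter_mm(100, 11)  # ~9.1 mm
```

This is also why the same f-number means different physical openings on different lenses: f/8 on a 200mm lens is a 25 mm opening, but only a 3 mm opening on a 24mm lens.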
Whitney Arch Moonset, Alabama Hills, California

Moonset, Mt. Whitney and Whitney Arch, Alabama Hills, California
With subjects throughout my frame, from close foreground to distant background, it’s impossible to get everything perfectly sharp. Here in the Alabama Hills near Lone Pine, California, I stopped down to f/16 and focused on the most distant part of the arch. This ensured that all of the arch would be perfectly sharp, while keeping Mt. Whitney and the rest of the background “sharp enough.”

Defining sharpness

Depth of field discussions are complicated by the fact that “sharp” is a moving target that varies with display size and viewing distance. But it’s safe to say that, all things being equal, the larger your ultimate output and the closer the intended viewing distance, the more detail your original capture should contain.

To capture detail a lens focuses light on the sensor’s photosites. Remember using a magnifying glass to focus sunlight and ignite a leaf when you were a kid? The smaller (more concentrated) the point of sunlight, the sooner the smoke appeared. In a camera, the finer (smaller) a lens focuses light on each photosite, the more detail the image will contain at that location. So when we focus we’re trying to make the light striking each photosite as concentrated as possible.

In photography we call that small circle of light your lens makes for each photosite its “circle of confusion.” The larger the CoC, the less concentrated the light and the more blurred the image will appear. Of course if the CoC is too small for its blur to be visible, either because the print is small or the viewer is far away, it really doesn’t matter. In other words, areas of an image with a relatively large CoC (relatively soft) can still appear sharp if displayed small enough or viewed from far enough away. That’s why sharpness can never be an absolute term, and we talk instead about acceptable sharpness based on print size and viewing distance. It’s actually possible for the same image to be sharp for one use, but too soft for another.

So how much detail do you need? The threshold for acceptable sharpness is pretty low for an image that just ends up on an 8×10 calendar on the kitchen wall, but if you want that image large on the wall above the sofa, achieving acceptable sharpness requires much more detail. And as your print size increases (and/or viewing distance decreases), the CoC that delivers acceptable sharpness shrinks correspondingly.

Many factors determine a camera’s ability to record detail. Sensor resolution, of course—the more resolution your sensor has, the more important it becomes to have a lens that can take advantage of that extra resolution. And the more detail you want to capture with that high resolution sensor and tack-sharp lens, the more important your depth of field and focus point decisions become.

Hyperfocal focus

The foundation of a sound approach to maximizing sharpness for a given viewing distance and image size is hyperfocal focusing, an approach that uses viewing distance, f-stop, focal length, and focus point to ensure acceptable sharpness.

The hyperfocal point is the focus point that provides the maximum depth of field for a given combination of sensor size, f-stop, and focal length. Another way to say it is that the hyperfocal point is the closest you can focus and still be acceptably sharp to infinity. When focused at the hyperfocal point, your scene will be acceptably sharp from halfway between your lens and focus point all the way to infinity. For example, if the hyperfocal point for your sensor (full frame, APS-C, 4/3, or whatever), focal length, and f-stop combination is twelve feet away, focusing there will give you acceptable sharpness from six feet (half of twelve) to infinity—focusing closer will soften the distant scene; focusing farther will keep you sharp to infinity but extend the area of foreground softness.
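For readers curious about the math the apps are doing, the standard hyperfocal formula is H = f²/(N·c) + f, where f is focal length, N the f-number, and c the circle of confusion. A minimal Python sketch, assuming the common 0.03 mm full-frame CoC default (your app may use a different value):

```python
def hyperfocal_mm(focal_length_mm: float, f_number: float,
                  coc_mm: float = 0.03) -> float:
    """Hyperfocal distance in mm: focus here and acceptable sharpness
    runs from half this distance to infinity. 0.03 mm is a common
    full-frame circle-of-confusion default; smaller sensors (or bigger
    prints) call for a smaller value."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

h = hyperfocal_mm(24, 11)  # ~1769 mm, roughly 5.8 feet
near_limit = h / 2         # acceptably sharp from ~2.9 feet to infinity
```

Note how the formula captures the relationships described above: stopping down (larger N) or zooming wider (smaller f) pulls the hyperfocal point closer, increasing depth of field.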

Because the hyperfocal variable (sensor size, focal length, f-stop) combinations are too numerous to memorize, we usually refer to an external aid. That used to mean awkward printed tables with long columns and rows displayed in microscopic print; the more precise the data, the smaller the print. Fortunately, those have been replaced by smartphone apps with more precise information in a much more accessible and readable form. We plug in the variables and out pops the hyperfocal distance and other useful information.

It usually goes something like this:

  1. Identify the composition
  2. Determine the closest thing that must be sharp (right now I’m assuming you want sharpness to infinity)
  3. Dig the smartphone from one of the 10,000 pockets it could be in
  4. Open the hyperfocal app and plug in the sensor size (usually previously set by you as the default), f-stop, and focal length
  5. Up pops the hyperfocal distance (and usually other info of varying value)

You’re not as sharp as you think

Since people’s eyes start to glaze over when CoC comes up, they tend to use the default returned by the smartphone app. But just because the app tells you you’ve nailed focus, don’t assume that your work is done. An often overlooked aspect of hyperfocal focusing is that the app makes assumptions that aren’t necessarily right, and in fact are probably wrong.

The CoC your app uses to determine acceptable sharpness is a function of sensor size, display size, and viewing distance. But most apps’ hyperfocal tables assume that you’re creating an 8×10 print that will be viewed from a foot away—maybe valid 40 years ago, but not in this day of mega-prints. The result is a CoC three times larger than the eye’s ability to resolve.

That doesn’t invalidate hyperfocal focusing, but if you use published hyperfocal data from an app or table, your images’ DOF might not be as ideal as you think it is for your use. If you can’t specify a smaller CoC in your app, I suggest that you stop down a stop or so more than the app/table indicates. On the other hand, stopping down to increase sharpness is an exercise in diminishing returns, because diffraction increases as the aperture shrinks and eventually softens the entire image—I try not to go more than a stop smaller than my data suggests.
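If your app does let you set the CoC, a rough approach is to shrink the default in proportion to how much larger than an 8×10 your print will be, and scale it back up in proportion to viewing distance. This linear scaling is an approximation of the underlying geometry, not an exact standard, but a sketch might look like:

```python
def scaled_coc_mm(base_coc_mm: float, print_scale: float,
                  viewing_scale: float = 1.0) -> float:
    """Scale a default circle of confusion for bigger prints or
    closer viewing (an approximation, not an exact standard).

    base_coc_mm:   the app's default (e.g., 0.03 mm for full frame)
    print_scale:   how many times larger than the assumed 8x10 print
    viewing_scale: viewing distance relative to the assumed ~1 foot
    """
    return base_coc_mm * viewing_scale / print_scale

# A print three times larger than 8x10, viewed from the same distance,
# needs a CoC about one third of the 0.03 mm default:
coc = scaled_coc_mm(0.03, 3)  # 0.01 mm
```

Plugging a smaller CoC into the hyperfocal formula pushes the hyperfocal point farther out, which is exactly why big prints demand more careful focus than the default tables suggest.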

Keeping it simple

As helpful as a hyperfocal app can be, whipping out a smartphone for instant in-the-field access to data is not really conducive to the creative process. I’m a big advocate of keeping photography as simple as possible, so while I’m a hyperfocal focus advocate in spirit, I don’t usually use hyperfocal data in the field. Instead I apply hyperfocal principles in the field whenever I think the margin of error gives me sufficient wiggle room.

Though I don’t often use the specific hyperfocal data in the field, I find it helps a lot to refer to hyperfocal tables when I’m sitting around with nothing to do. So if I find myself standing in line at the DMV, or sitting in a theater waiting for a movie (I’m a great date), I open my iPhone hyperfocal app and plug in random values just to get a sense of the DOF for a given f-stop and focal length combination. I may not remember the exact numbers later, but enough of the information sinks in that I accumulate a general sense of the hyperfocal DOF/camera-setting relationships.

Finally, something to do

Unless I think I have very little DOF margin for error in my composition, I rarely open my hyperfocal app in the field. Instead, once my composition is worked out, I determine the closest object I want sharp—the closest object with visual interest (shape, color, texture), regardless of whether it’s a primary subject—and go from there:

  • If I want to be sharp to infinity and my closest foreground object (that needs to be sharp) is close enough to hit with my hat, I need a fair amount of DOF. If my focal length is pretty wide, I might skip the hyperfocal app, stop down to f/16, and focus a little behind my foreground object. But if I’m at a fairly long focal length, or my closest object is within arm’s reach, I have very little margin for error and will almost certainly refer to my hyperfocal app.
  • If I could hit my foreground object with a baseball and my focal length is 50mm (or so) or less, I’ll probably go with f/11 and just focus on my foreground object. But as my focal length increases, so does the likelihood that I’ll need to refer to my hyperfocal app.
  • If it would take a gun to reach my closest object (picture a distant peak), I choose an f-stop between f/8 and f/11 and focus anywhere in the distance.

Of course these distances are very subjective and will vary with your focal length and composition (not to mention the strength of your pitching arm), but you get the idea. If you find yourself in a small-margin-for-error focus situation without a hyperfocal app (or you just don’t want to take the time to use one), the single most important thing to remember is to focus behind your closest subject. Because you always have some sharpness in front of your focus point, focusing on the closest subject gives you unnecessary sharpness at the expense of distant sharpness. By focusing a little behind your closest subject, you increase the depth of your distant sharpness while (if you’re careful) keeping your foreground subject within the zone of sharpness in front of the focus point.
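The asymmetry behind this advice falls out of the standard thin-lens depth-of-field equations. This Python sketch (using the textbook hyperfocal formula and an assumed 0.03 mm full-frame CoC) computes the near and far limits of acceptable sharpness for a given focus distance, and shows that the sharp zone behind the focus point is deeper than the zone in front of it:

```python
def dof_limits_mm(focus_mm: float, focal_length_mm: float,
                  f_number: float, coc_mm: float = 0.03):
    """Near and far limits of acceptable sharpness (thin-lens
    approximation). The far limit is infinity once the focus
    distance reaches the hyperfocal distance."""
    h = focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm
    near = (focus_mm * (h - focal_length_mm)
            / (h + focus_mm - 2 * focal_length_mm))
    far = (focus_mm * (h - focal_length_mm) / (h - focus_mm)
           if focus_mm < h else float("inf"))
    return near, far

# 24mm lens at f/11, focused 1 meter away:
near, far = dof_limits_mm(1000, 24, 11)
# The sharp zone extends much farther behind the focus point
# (far - 1000) than in front of it (1000 - near).
```

That imbalance is why focusing a little behind the closest subject, rather than directly on it, spends your limited depth of field where it does the most good.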

And finally, foreground softness, no matter how slight, is almost always a greater distraction than slight background softness. So, if it’s impossible to get all of your frame sharp, it’s usually best to ensure that the foreground is sharp.

Some examples

Sunset Palette, Half Dome from Sentinel Dome, Yosemite

A hat’s toss away: The closest pool was about 6 feet from my lens. I stopped down to f/20 (smaller than I generally like to go) and focused on the back of the pool on the left, about 10 feet away.

A baseball throw away: The little clump of wildflowers (lower right) was about 35 feet away and the trees started another 35 feet beyond that. With a focal length of 55mm, I dialed to f/11 and focused on the most distant foreground tree, getting everything from the flowers to Half Dome sharp.

Gary Hart Photography: Tree and Crescent, Sierra Foothills, California

Honey, fetch my rifle: With everything here at infinity, I knew I could focus on the trees or moon, confident that the entire frame would be sharp. In this case I opted for f/8 to minimize diffraction while staying in my lens’s sharpest f-stop range, and focused on the tree.

Why not just automatically set my aperture to f/22 and be done with it? I thought you’d never ask. Without delving too far into the physics of light and optics, let’s just say that there’s a not so little light-bending problem called “diffraction” that robs your images of sharpness as your aperture shrinks—the smaller the aperture, the greater the diffraction. Then why not choose f/2.8 when everything’s at infinity? Because lenses tend to lose sharpness at their aperture extremes, and are generally sharper in their mid-range f-stops. So while diffraction and lens softness don’t sway me from choosing the f-stop that gives me the DOF I want, I try never to choose an aperture bigger or smaller than I need.

Now that we’ve let the composition determine our f-stop, it’s (finally) time to actually choose the focus point. Believe it or not, with the foundation of understanding we just established, focus becomes pretty simple. Whenever possible, I try to have elements throughout my frame, often starting near my feet and extending far into the distance. When that’s the case, I stop down and focus on an object slightly behind my closest subject (the more distant my closest subject, the farther behind it I can focus).

When I’m not sure, or if I don’t think I can get the entire scene sharp, I err on the side of closer focus to ensure that the foreground is sharp. Sometimes before shooting I check my DOF with the DOF preview button, allowing time for my eye to adjust to the limited light. And when maximum DOF is essential and I know my margin for error is small, I don’t hesitate to refer to the DOF app on my iPhone.

A great thing about digital capture is the instant validation of the LCD—when I’m not sure, or when getting it perfect is absolutely essential, after capture I pop my image up on the LCD, magnify it to maximum, check the point or points that must be sharp, and adjust if necessary. Using this immediate feedback to make instant corrections really speeds the learning process.

Sometimes less is more

The depth of field you choose is your creative choice, and no law says you must maximize it. Use your camera’s limited depth of field to minimize or eliminate distractions, create a blur of background color, or simply to guide your viewer’s eye. Focusing on a near subject while letting the background go soft clearly communicates the primary subject while retaining enough background detail to establish context. And an extremely narrow depth of field can turn distant flowers or sky into a colorful canvas for your subject.

In this image of a dogwood blossom in the rain, I positioned my camera to align Bridalveil Fall with the dogwood and used an extension tube to focus extremely close. The narrow depth of field caused by focusing so close turned Bridalveil Fall into a background blur (I used f/18 to keep the fall a little more recognizable), allowing viewers to feast their eyes on the dogwood’s and raindrop’s exquisite detail.
An extension tube on a macro lens at f/2.8 gave me depth of field measured in fractions of an inch. The gold color in the background is more poppies, but they’re far enough away that they blur into nothing but color. The extremely narrow depth of field also eliminated weeds and rocks that would have otherwise been a distraction.

There’s no substitute for experience

No two photographers do everything exactly alike. Determining the DOF a composition requires, the f-stop and focal length that achieves the desired DOF, and where to place the point of maximum focus, are all part of the creative process that should never be left up to the camera. The sooner you grasp the underlying principles of DOF and focus, the sooner you’ll feel comfortable taking control and conveying your own unique vision.

About this image

Gary Hart Photography: Floating Leaves, Valley View, Yosemite

Floating Autumn Leaves, Valley View, Yosemite

Yosemite may not be New England, but it can still put on a pretty good fall color display. A few years ago I arrived at Valley View on the west side of Yosemite Valley just about the time the fall color was peaking. I found the Merced River filled with reflections of El Capitan and Cathedral Rocks, framed by an accumulation of recently fallen leaves still rich with vivid fall color.

To emphasize the colorful foreground, I dropped my tripod low and framed up a vertical composition. I knew my hyperfocal distance at 24mm and f/11 would be 5 or 6 feet, but with the scene ranging from the closest leaves at about 3 feet away out to El Capitan at infinity, I also knew I’d need to be careful with my focus choices. For a little more margin for error I stopped down to f/16, then focused on the nearest rocks, which were a little less than 6 feet away. As I usually do when I don’t have a lot of focus wiggle room, I magnified the resulting image on my LCD and moved the view from the foreground to the background to verify front-to-back sharpness.

Workshop Schedule || Purchase Prints


Playing with Depth: A Gallery of Focus

Click an image for a closer look and slide show. Refresh the screen to reorder the display.

Improve Your Fall Color Photography

Gary Hart Photography: Autumn Snow, El Capitan, Yosemite

Autumn Snow, El Capitan, Yosemite
Canon EOS-5D Mark III
24-105L
1/15 second
F/16
ISO 100


As we enter the fall color photography season, I’m revisiting and revising previous articles. This is the second in the series.


Improve Your Fall Color Photography

Vivid color and crisp reflections make autumn my favorite season for creative photography. While most landscape scenes require showing up at the right time and hoping for the sun and clouds to cooperate, photographing fall color is often a simple matter of circling the scene until the light’s right. For the photographers who understand this, and know how to control exposure, depth, and motion with their cameras, great fall color images are possible any time of day, in any light.

Backlight, backlight, backlight

The difference between the front-lit and backlit sides of fall foliage is the difference between dull and vivid color. When illuminated by direct sunlight, the side of a leaf opposite the sun throbs with color, as if it has its own source of illumination, while the same leaf’s lit side appears flat—if you ever find yourself thinking that the fall color seems washed out, check the other side of the tree.

While the backlight glow isn’t as pronounced in shade or overcast, when the leaves are illuminated by light that’s spread evenly across the sky, even diffuse light is more pronounced on one side of the leaves than the other, giving the side of a leaf that’s opposite the sky (the side getting less light) a subtle but distinct glow compared to its skyward side.

Forest Autumn, Yosemite

Forest Autumn, Yosemite

Isolate elements with a telephoto for a more intimate fall color image

Big fall color scenes are great, but a telephoto or macro enables you to highlight and emphasize elements and relationships. Train your eye to find leaves, groups of leaves, or branches that stand out from the rest of the scene. Zoom close, using the edges of the frame to eliminate distractions and frame subjects. And don’t concentrate so much on your primary subject that you miss complementary background or foreground elements to balance the frame and provide an appealing canvas for your subject.

Solitary Leaf, Bridalveil Creek, Yosemite

Selective depth of field is a great way to emphasize/deemphasize elements in a scene

Limiting depth of field with a large aperture on a telephoto lens can soften a potentially distracting background into a complementary canvas of color and shape. Parallel tree trunks, other colorful leaves, and reflective water make particularly effective soft background subjects. For an extremely soft background, reduce your depth of field further by adding an extension tube to focus closer.

Autumn Bouquet, Zion National Park

Autumn Bouquet, Zion National Park

Underexpose sunlit leaves to maximize color

Contrary to what many believe, fall foliage in bright sunlight is still photographable if you isolate backlit leaves against a darker background and slightly underexpose them. The key here is making sure the foliage is the brightest thing in the frame, and avoiding any sky in the frame. Photographing sunlit leaves, especially with a large aperture to limit DOF, has the added advantage of an extremely fast shutter speed that will freeze wind-blown foliage.

Leaves and Reflection, Convict Lake, Eastern Sierra

Slightly underexposing brightly lit leaves not only emphasizes their color, it turns everything that’s in shade to a dark background. And if your depth of field is narrow enough, points of light sneaking between the leaves and branches to reach your camera will blur to glowing jewels.

Gary Hart Photography, Autumn Light, Yosemite

Autumn Light, Yosemite

A sunstar is a great way to liven up an image in extreme light

If you’re going to be shooting backlit leaves, you’ll often find yourself fighting the sun. Rather than trying to overcome it, turn the sun into an ally by hiding it behind a tree. A small aperture (f/16 or smaller is my general rule) with a small sliver of the sun’s disk visible creates a brilliant sunstar that becomes the focal point of your scene. Unlike photographing a sunstar on the horizon, hiding the sun behind a terrestrial object like a tree or rock enables you to move with the sun.

When you get a composition you like, try several frames, varying the amount of sun visible in each. The smaller the sliver of sun, the more delicate the sunstar; the more sun you include, the more bold the sunstar. You’ll also find that different lenses render sunstars differently, so experiment to see which lenses and apertures work best for you.

Autumn Light, North Rim, Grand Canyon

Autumn Light, North Rim, Grand Canyon

Gary Hart Photography, Autumn Glow, Yosemite

Autumn Glow, Cook’s Meadow, Yosemite

Polarize away the foliage’s natural sheen

Fall foliage has a reflective sheen that dulls its natural color. A properly oriented polarizer can erase that sheen and bring the underlying natural color into prominence. To minimize the scene’s reflection, slowly turn the polarizer until the scene is darkest (the more you try this, the easier it will be to see). If you have a hard time seeing the difference, concentrate your gaze on a single leaf, rock, or wet surface.

Fallen Color, Rock Creek Canyon, Eastern Sierra

A polarizer isn’t an all-on or all-off proposition. Slowly dial the polarizer’s ring and watch the reflection change until you achieve the effect you desire. This is particularly effective when you want your reflection to share the frame with submerged features such as rocks, leaves, and grass.

Morning Reflection, North Lake, Eastern Sierra

Blur water with a long exposure

When photographing in overcast or shade, it’s virtually impossible to freeze the motion of rapid water at any kind of reasonable ISO. Rather than fight it, use this opportunity to add silky water to your fall color scenes. There’s no magic shutter speed for blurring water—in addition to the shutter speed, the amount of blur will depend on the speed of the water, your distance from the water, your focal length, and your angle of view relative to the water’s motion. When you find a composition you like, don’t stop with one click. Experiment with different shutter speeds by varying the ISO (or aperture as long as you don’t compromise the desired depth of field).

Leaf, Bridalveil Creek, Yosemite

Autumn Leaf, Bridalveil Creek, Yosemite

Reflections make fantastic complements to any fall color scene

By autumn, rivers and streams that rushed over rocks in spring and summer meander at a leisurely, reflective pace. Adding a reflection to your autumn scene can double the color and add a sense of tranquility. The recipe for a reflection is still water, sunlit reflection subjects, and a shaded reflective surface.

When photographing leaves floating atop a reflection, it’s important to know that the focus point for the reflection is the focus distance of the reflected subject, not the reflective surface. This seems counterintuitive, but try it yourself—focus on the leaves with a wide aperture and watch the reflection go soft. Achieving sharpness in both your floating leaves and the reflection requires an extremely small aperture and careful focus point selection. Often the necessary depth of field exceeds the lens’s ability to capture it—in this case, I almost always bias my focus toward the leaves and let the reflection go soft.

Autumn Reflection, El Capitan, Yosemite

Fallen Leaves, Valley View, Yosemite

Nothing communicates impending winter like fall color with snow

Don’t think the first snow means your fall photography is finished for the year. Hardy autumn leaves often cling to branches, and even retain their color on the ground through the first few storms of winter. An early snowfall is an opportunity to catch fall leaves etched in white, an opportunity not to be missed. And even after the snow has been falling for a while, it’s possible to find a colorful rogue leaf to accent an otherwise stark winter scene.

Fall into Winter, Bridalveil Fall, Yosemite

First Snow, El Capitan, Yosemite

First Snow, El Capitan, Yosemite




To better understand the science and timing of fall color, read

A simple how and when of fall color



A Gallery of Fall Color

Click an image for a closer look and slide show. Refresh the window to reorder the display.

:: More photography tips ::

Better than a Pot of Gold

Gary Hart Photography: Summer Rainbow, Yosemite Valley

Summer Rainbow, Yosemite Valley

My relationship with Yosemite rainbows goes all the way back to my childhood, when a rainbow arcing across the face of Half Dome made my father more excited than I believed possible for an adult. I look back on that experience as the foundation of my interest in photography, my relationship with Yosemite, and my love for rainbows. So, needless to say, photographing a rainbow in Yosemite is a pretty big deal for me.

A few years ago the promise (hope) of lightning drove me to Yosemite to wait in the rain on a warm July afternoon. But after sitting for hours on hard granite, all I got was wet. It became pretty clear that the storm wasn’t producing any lightning, but as the sky behind me started to brighten while the rain continued falling over Yosemite Valley, I realized that conditions were ripe for a rainbow. Sure enough, long after I would have packed up and headed home had I been focused solely on lightning, this rainbow was my reward.

The moral of my story is that despite all appearances to the contrary, rainbows are not random—when sunlight strikes raindrops, a rainbow occurs, every time. The reason we don’t always see the rainbow isn’t that it isn’t happening; it’s that we’re not in the right place. And that place, geometrically speaking, is always the same. Of course sometimes seeing the rainbow would require a superhero ability like levitation or teleportation, but when we’re armed with a little knowledge and anticipation, we can put ourselves in position for moments like this.

I can’t help with the anticipation part, but here’s a little knowledge infusion (excerpted from the Rainbow article in my Photo Tips section).

LET THERE BE LIGHT

Energy generated by the sun bathes Earth in continuous electromagnetic radiation, its wavelengths ranging from extremely short to extremely long (and every wavelength in between). Among the broad spectrum of electromagnetic solar energy we receive are ultra-violet rays that burn our skin and longer infrared waves that warm our atmosphere. These wavelengths bookend a very narrow range of wavelengths the human eye sees.

Visible wavelengths are captured by our eyes and interpreted by our brain. When our eyes take in light consisting of the full range of visible wavelengths, we perceive it as white (colorless) light. We perceive color when some wavelengths are more prevalent than others. For example, when light strikes an opaque (solid) object such as a tree or rock, some of its wavelengths are absorbed; the wavelengths not absorbed are scattered. Our eyes capture this scattered light and send the information to our brain, which interprets it as a color. When light strikes water, some is absorbed and scattered by the surface, enabling us to see the water; some light passes through the water’s surface, enabling us to see what’s in the water; and some light is reflected by the surface, enabling us to see reflections.

(From this point on, for simplicity’s sake, it might help to visualize what happens when sunlight strikes a single raindrop.)

Light traveling from one medium to another (e.g., from air into water) refracts (bends). Different wavelengths refract by different amounts, separating the originally homogeneous white light into the multiple colors of the spectrum.

But simply separating the light into its component colors isn’t enough to create a rainbow–if it were, we’d see a rainbow whenever light strikes water. Seeing the rainbow spectrum caused by refracted light requires that the refracted light be returned to our eyes somehow.

A raindrop isn’t flat like a sheet of paper; it’s spherical, like a ball. Light that was refracted (and separated into multiple colors) as it entered the front of the raindrop continues through to the back of the raindrop, where some is reflected. Red light reflects back at about 42 degrees, violet light reflects back at about 40 degrees, and the other spectral colors reflect back between 42 and 40 degrees. What we perceive as a rainbow is this reflection of the refracted light–notice how the top color of the primary rainbow is always red, and the bottom color is always violet.

FOLLOW YOUR SHADOW

Every raindrop struck by sunlight creates a rainbow. But just as the reflection of a mountain peak on the surface of a lake is visible only when viewed from the angle the reflection bounces off the lake’s surface, a rainbow is visible only when you’re aligned with the 40-42 degree angle at which the raindrop reflects the spectrum of rainbow colors.

Fortunately, viewing a rainbow requires no knowledge of advanced geometry. To locate or anticipate a rainbow, picture an imaginary straight line originating at the sun, entering the back of your head, exiting between your eyes, and continuing down into the landscape in front of you–this line points to the “anti-solar point,” an imaginary point exactly opposite the sun. With no interference, a rainbow would form a complete circle, skewed 42 degrees from the line connecting the sun and the anti-solar point–with you at the center. (We don’t see the entire circle because the horizon gets in the way.)

Because the anti-solar point is always at the center of the rainbow’s arc, a rainbow will always appear exactly opposite the sun (the sun will always be at your back). It’s sometimes helpful to remember that your shadow always points toward the anti-solar point. So when you find yourself in direct sunlight and rain, locating a rainbow is as simple as following your shadow and looking skyward–if there’s no rainbow, the sun’s probably too high.
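Since the geometry never changes, locating a rainbow reduces to simple arithmetic. Here’s a minimal Python sketch of the rule described above (the function name and conventions are my own, not from any library): the anti-solar point sits 180 degrees from the sun’s azimuth and exactly as far below the horizon as the sun is above it, and from flat ground no part of the primary bow clears the horizon once the sun climbs above 42 degrees.

```python
def rainbow_direction(sun_azimuth_deg, sun_altitude_deg):
    """Where to look for the primary rainbow, given the sun's position."""
    # The anti-solar point is exactly opposite the sun...
    antisolar_azimuth = (sun_azimuth_deg + 180) % 360
    # ...and exactly as far below the horizon as the sun is above it.
    antisolar_altitude = -sun_altitude_deg
    # The primary bow is a 42-degree circle around the anti-solar point,
    # so from flat ground none of it clears the horizon once the sun
    # is above 42 degrees.
    visible = sun_altitude_deg < 42
    return antisolar_azimuth, antisolar_altitude, visible

# Sun low in the west (azimuth 270, altitude 10): look due east.
print(rainbow_direction(270, 10))  # (90, -10, True)
```

In practice this is exactly the “follow your shadow” rule: your shadow points at the anti-solar azimuth.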

HIGH OR LOW

Sometimes a rainbow appears as a majestic half-circle, arcing high above the distant terrain; other times it’s merely a small circle segment hugging the horizon. As with the direction of the rainbow, there’s nothing mysterious about its varying height. Remember, every rainbow would form a full circle if the horizon didn’t get in the way, so the amount of the rainbow’s circle you see (and therefore its height) depends on where the rainbow’s arc intersects the horizon.

While the center of the rainbow is always in the direction of the anti-solar point, the height of the rainbow is determined by the height of the anti-solar point, which will always be exactly the same number of degrees below the horizon as the sun is above the horizon. It helps to imagine the line connecting the sun and the anti-solar point as a teeter-totter, with you as the pivot: as one seat rises above you, the other drops below you. That means the lower the sun, the more of the rainbow’s circle you see and the higher it appears above the horizon; conversely, the higher the sun, the less of its circle is above the horizon and the flatter (and lower) the rainbow will appear.

Assuming a flat, unobstructed scene (such as the ocean), when the sun is on the horizon, so is the anti-solar point (in the opposite direction), and half of the rainbow’s 360 degree circumference will be visible. But as the sun rises, the anti-solar point drops–when the sun is more than 42 degrees above the horizon, the anti-solar point is more than 42 degrees below the horizon, and the only way you’ll see a rainbow is from a perspective above the surrounding landscape (such as on a mountaintop or on a canyon rim).
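The teeter-totter arithmetic can be sketched in a few lines of Python. The apex height is exact (42 degrees minus the sun’s altitude); the visible-fraction estimate is a rough flat-horizon approximation I’m adding for illustration, not a precise formula:

```python
import math

def rainbow_apex_and_fraction(sun_altitude_deg, radius_deg=42):
    """Apex height of the primary bow, and a rough fraction of its circle visible."""
    # The bow's apex sits radius_deg above the anti-solar point, which
    # is sun_altitude_deg below the horizon (the teeter-totter).
    apex = radius_deg - sun_altitude_deg
    if apex <= 0:
        return apex, 0.0  # sun above 42 degrees: no bow from flat ground
    # Flat-horizon approximation: half the circle shows when the sun is
    # on the horizon, shrinking to nothing as the sun climbs to 42 degrees.
    h = max(0, min(sun_altitude_deg, radius_deg))
    fraction = math.acos(h / radius_deg) / math.pi
    return apex, fraction

print(rainbow_apex_and_fraction(0))   # (42, 0.5): sun on the horizon
```

With the sun on the horizon the apex stands a majestic 42 degrees high and half the circle shows; by the time the sun reaches 42 degrees, nothing is left above a flat horizon.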

Of course landscapes are rarely flat. Viewing a scene from above, such as from atop Mauna Kea in Hawaii or from the rim of the Grand Canyon, can reveal more than half of the rainbow’s circle. From an airplane, with the sun directly overhead, all of the rainbow’s circle can be seen, with the plane’s shadow in the middle.

DOUBLE YOUR PLEASURE

Not all of the light careening about a raindrop goes into forming the primary rainbow. Some of the light slips out the back of the raindrop to illuminate the sky, and some is reflected inside the raindrop a second time. The refracted light that reflects a second time before exiting creates a secondary, fainter rainbow skewed 50 degrees from the anti-solar point. Since this is a reflection, the order of the colors in the secondary rainbow is reversed.

And if the sky between the primary and secondary rainbows appears darker than the surrounding sky, you’ve found “Alexander’s band.” It’s caused by all the light machinations I just described–instead of all the sunlight simply passing through the raindrops to illuminate the sky, some of the light was intercepted, refracted, and reflected by the raindrops to form our two rainbows, leaving less light for the sky between the rainbows.


Rainbows

Click an image for a closer look and slide show. Refresh the window to reorder the display.

Some Advice for Nikon Shooters (from a Sony Shooter)

Gary Hart Photography: Spring Reflection, El Capitan and Three Brothers, Yosemite

Spring Reflection, El Capitan and Three Brothers, Yosemite
Sony a7R II
Canon 11-24 f/4L with Metabones IV adapter @11mm
1/60 second
F/8
ISO 100

Yesterday Nikon finally jumped into the mirrorless game with its Z6 and Z7 announcement, a welcome development that can only keep pushing everyone’s mirrorless technology forward.

I made the switch to mirrorless about four years ago and haven’t looked back. At the beginning mirrorless was touted for its compactness, and while mirrorless bodies (and to a lesser extent, lenses) are more compact, it turns out that, for me at least, it’s the mirrorless viewfinder that has hooked me: with real-time exposure simulation, focus assist (peaking), highlight alert (zebras), and a pre-capture histogram, I don’t think I could go back to a DSLR.

While I shoot with the Sony a7RIII and am very much committed to the Sony mirrorless universe, I’m not going to get into the “my camera can beat up your camera” debate—Nikon makes great cameras and I’m sure their mirrorless bodies will be no exception. In fact, the Z7 looks like it compares very closely to the Sony a7RII, which is a fantastic camera that I still carry as a backup and don’t hesitate to use when the situation calls for it.

As happy as I am with my mirrorless conversion, I do have some insights that might spare Nikon shooters some of the transition pains I went through when I switched from Canon DSLRs (1DSIII and 5DIII) to the Sony a7R series of mirrorless bodies.

  • The mirrorless viewfinder is different from a DSLR viewfinder and will take some getting used to. I don’t know what the Nikon viewfinder will be like, but I’m sure it will be quite good—large, bright, and everything you’d want in an electronic viewfinder (EVF). Even so, you might be surprised at how long it takes you to get used to it (but you will). It just feels different to view a video of the world. The cool thing is, EVF technology is relatively new and will only continue to improve, while there’s not a lot more that can be done for a conventional DSLR viewfinder.
  • Beware of lens adapter hype. My original conversion plan was to use the Sony mirrorless body to supplement my Canon system, to continue using my Canon glass on the Sony body with a Metabones adapter, and gradually convert my lenses as my budget allowed. And while my adapted Canon lenses did indeed do the job, the experience was far from painless (not all that was advertised) and I wasn’t really satisfied until I was using 100% native Sony glass. Some of the problems are a function of the lens—generally the better (and newer) the lens, the closer to native performance it delivers. But as a landscape shooter, autofocus speed isn’t as big a deal to me as it is to anyone whose subjects are in motion, so adapter sluggishness will likely be an even bigger problem for those shooters. On the other hand, I suspect that since it’s Nikon making an adapter for their lenses to work with their bodies, it will be pretty good from the get-go—but I wouldn’t bet my house on it. And adapter performance likely won’t be as good as using native glass—best-case scenario, some won’t notice a difference, but those for whom focus responsiveness and autofocus speed are essential should prepare for some frustration. (And I won’t begin to speculate about worst-case.)
  • You’ll miss that second card slot more than you might imagine. Making my living from my images, having two memory card slots for instant image backups saved me a couple of times, and gave me tremendous peace of mind all the time. If your DSLR doesn’t have a second slot, the missing slot might not be a big deal to you, but if you’re as failsafe-obsessed as I am, you might be surprised by how much you’ll long for that second slot. All it takes is one corrupted, damaged, or lost card to make you a convert to the second card slot paradigm.
  • The battery life will drive you crazy. Looking at the specs, the Z7 battery life is about the same as the a7R and a7RII, and nowhere near the Nikon full frame and Sony a7RIII (or the a7III or a9) battery life. I was willing to live with burning through multiple batteries in a single day because of all the other mirrorless benefits, and because the Sony batteries were small enough that carrying four or five at all times (I mean on my person, not just in the car or hotel) wasn’t a big deal. But it looks like the Nikon batteries are twice the size of the original Sony batteries, so there goes your size/weight benefit. I predict this will be the biggest complaint we hear about these cameras (as it was with the early a7 bodies)—that is, assuming the adapter is good.
  • Learn how to clean your sensor. Without a mirror to protect it, your naked mirrorless sensor will be exposed to the elements each time you change a lens. Fortunately, sensor cleaning is simple and not nearly as dangerous as many try to make you believe.

None of these points is a reason to not get a Nikon Z6 or Z7, but for me it would be a reason not to pre-order. Instead, if it were me, I’d wait and let others discover the frustrations so I could go into the non-trivial transition from DSLR to mirrorless with realistic expectations.

I’m guessing that current Nikon shooters will probably endure fewer frustrations than I had with my first mirrorless body, the Sony a7R—Sony was still trying to figure out the whole interface thing that Nikon has nailed (I’ve never been a fan of Nikon’s interface, but Nikon shooters like it and that’s what matters). On the other hand, I was probably more forgiving than Nikon shooters might be because the a7R image quality was so much better for my needs than the Canon 5DIII it replaced. Dynamic range is king in the landscape world, and the a7R gave me 2-3 stops more dynamic range than my 5DIII—slow transition plan notwithstanding, I literally didn’t click another frame after my first a7R shoot.

While I expect the Z6/Z7 bodies will be ergonomically more mature than my original a7R, Nikon’s full frame bodies already deliver exceptional image quality, so most Nikon full-frame DSLR shooters transitioning from the D800/810/850 won’t have the euphoria of much better image quality that sustained me until the release of Sony’s a7RII and (especially) a7RIII.

On the other hand…

(Full disclosure: I’m a Sony Artisan of Imagery)

These Nikon mirrorless cameras are great for committed Nikon shooters who are invested in the Nikon ecosystem and have no plans to replace their entire lens lineup. But for any photographer planning to make the full jump to mirrorless, native lenses and all, I think Sony is (at least) several years ahead of Nikon, and given their resources and commitment, will remain at least that far ahead for many years.

One of the early complaints about the Sony mirrorless system was its lack of lenses compared to Nikon and Canon, but valid as that criticism was, that disadvantage has shrunk virtually to the point of irrelevance, and Sony is already very far along on many more native Sony FE-mount lenses. Sony is several laps ahead of everyone else in the mirrorless world—with deep pockets and its foot hard on the mirrorless pedal, I don’t see that lead shrinking much anytime soon.

As good as it is for a first generation offering, the Nikon Z7 is much closer to the 3-year old Sony a7RII than it is to the (already 1-year old) a7RIII, and for sports and wildlife (and anything else that moves), it isn’t even in the same league as the (more than 1-year old) Sony a9.

I have no idea how or when Sony will respond to the mirrorless offerings from Nikon and (soon) Canon, but I’m guessing it won’t be long, and am pretty confident that will be a great day to be a Sony shooter. Competition is great for all of us, and Nikon just gave the mirrorless wave a huge boost that I’m looking forward to riding as far as it takes me.

A few words about this image

I can’t tell you that this is my favorite Sony mirrorless image, but it would definitely be on the list. I chose it for this post because it’s one of the few Sony images I have that used a Canon lens with the Metabones adapter.

Leading a workshop in Yosemite a few years ago, I guided the group to a meadow flooded by the Merced River during a particularly extreme spring runoff year. My widest lens at the time was my Sony/Zeiss 16-35 f/4 (which I love, BTW), but the scene called for something wider. When the photographer assisting me offered to let me use his Canon 11-24 f/4 with my Metabones adapter, I snatched it before he could change his mind. Given that everything in the scene was stationary, I was able to bypass any adapter-induced autofocus frustration and take the time to manually focus (it didn’t hurt that depth of field at 11mm is extremely forgiving).

I’d never used a lens that wide and was so excited by the extra field of view that I returned from Yosemite fully prepared to purchase the Canon lens, adapter or not. Fortunately for my budget (and my back), I let the lens sit in my shopping cart long enough for sanity to prevail. Not only was the Canon lens quite expensive, it weighed a ton, and I had a feeling it wouldn’t be long before Sony offered something similar. Those instincts were rewarded a year later when Sony released a 12-24 f/4 G lens that is just as sharp and half the size (and much less money).


A Sony Mirrorless Gallery

Click an image for a closer look and slide show. Refresh the window to reorder the display.

 

He Ain’t Heavy,…

… He’s My Sony 12-24 f/4 G

Gary Hart Photography: Storm Clouds, El Capitan, Yosemite

Snowstorm Reflection, El Capitan, Yosemite
Sony a7R III
Sony 12-24 f/4 G
1/50 second
F/10
ISO 100

(With apologies to The Hollies.)

The road is long, with many a winding turn…

But that’s no excuse to cut corners. Probably the question I am most asked on location is some variation of, “What lens should I use?” While I’m always happy to answer questions, this one always makes me cringe because the implicit question is, “Which lenses can I leave behind?”

What many photographers fail to realize is that the “proper” lens is determined by the photographer, not by the scene. While there is often a consensus on the primary composition at a location, that usually only means the first composition everyone sees. But if your goal is to capture something unique, those are just the compositions to avoid. And as every photographer knows, the best way to guarantee you’ll need a lens is to not pack it. I’m not suggesting that you lug Hermione’s purse to every shoot—just try to remember that your images will last far longer than your discomfort.

In my Canon life, my personal rule of thumb was to always carry lenses that cover 16-200mm, regardless of the scene, then add “specialty” lenses as my plans dictated: macro for wildflowers, fast and wide prime for night, and super telephoto for a moon. That meant the 16-35, 24-105, and 70-200 were permanent residents of my Canon bag, and my 100-400, 100 macro, or wide and fast prime came along when I needed them.

Shooting Sony mirrorless, with its more compact bodies and lenses, I now carry a much wider focal range in a lighter camera bag. My new baseline (always with me) lens lineup is the Sony 12-24 G, 24-105 G, and 100-400 GM, plus the Sony 2x teleconverter. My macro and night lenses still stay behind (but they’re usually in the car), but in my bag I always have lenses to cover 12-800mm, a significant advantage over my Canon 16-200 configuration.

It’s kind of a cliché in photography to say “It’s the photographer, not the equipment.” And as much as I agree in principle, sometimes the equipment does help. Wherever I am, I regularly find compositions beyond 200mm, compositions I never would have considered before. And the 12-24 lens has enabled me to approach familiar scenes with a completely fresh eye.

A recent example came on a snowy day in Yosemite early last month. Moving fast to keep up with the rapidly changing clouds and light, I stopped at El Capitan Bridge, directly beneath El Capitan. Having shot this scene for years (decades), I was quite familiar with the perspective. So wide is the top-to-bottom, left-to-right view of El Capitan here that even at 16mm I’ve always had to choose between all of El Capitan or all of the reflection, never both. I never dreamed I’d be able to get El Capitan and its reflection in a single frame. But guess what….

Standing above the river near the south side of the bridge, I framed up a vertical composition and saw that at 12mm I could indeed fit El Capitan and the reflection, top to bottom. Whoa. With very little margin for error on any side of the frame, I moved around a bit to get the scene balanced, eventually framing the right side with the snowy trees lining the Merced. My elevated perch above the river allowed me to shoot straight ahead (no up or down tilt of the camera) and avoid the extreme skewing of the trees that’s so common at wide focal lengths.

12mm provides so much depth of field that I could focus anywhere in the scene and get front-to-back sharpness; the flat light made exposure similarly simple. With composition, focus, and exposure set, all I had to do was watch the clouds and click the shutter, my heart filled with gladness….
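That forgiving depth of field at 12mm is easy to quantify with the standard hyperfocal formula, H = f²/(N·c) + f. A quick sketch (assuming the commonly used 0.03mm circle of confusion for a full frame sensor):

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in meters: H = f^2 / (N * c) + f."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000

# At 12mm and f/10 (this image's settings), H is about half a meter;
# focusing there keeps everything from roughly H/2 (~0.25m) to
# infinity acceptably sharp.
print(round(hyperfocal_m(12, 10), 2))   # 0.49
```

Compare that to 24mm at the same aperture, where H jumps to well over a meter: ultra-wide focal lengths really do make focus nearly foolproof.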


A Sony 12-24 Gallery

Click an image for a closer look and slide show. Refresh the window to reorder the display.

Permanent Change

Gary Hart Photography: Gray and White, El Capitan Through the Clouds, Yosemite

Gray and White, El Capitan Through the Clouds, Yosemite
Sony a7R III
Sony 12-24 f/4 G
1/40 second
F/8
ISO 800

Surrounded by towering granite walls that seem so permanent, Yosemite Valley is America’s poster-park for enduring beauty. But in the grand geological scheme, there’s nothing permanent about Yosemite. In my lifetime Yosemite has been visibly altered by drought, flood, and rockslides (not to mention human interference). Predating my arrival, Yosemite’s Anglo conquerors had a profound effect on the flora and fauna that prevailed in its prior centuries under Native care. And predating all human contact, glaciers performed their carve-and-polish magic on Yosemite’s granite.

But Yosemite’s history of change goes back much farther than that. Though it’s just a drop in the 4 1/2 billion-year bucket of Earth’s existence, let’s flip the calendar back to 100 million years before the glaciers scoured the area we call Yosemite, when layers of sediment deposited beneath a vast sea had been injected with magma that cooled to become granite. This subterranean granite was gradually uplifted by a slow-motion collision of tectonic plates that formed the mountains we call the Sierra Nevada. (Yes, I know this is a gross simplification of a very complex process.)

That’s a time-lapse I’d pay money to see, but lacking an actual 100-million-year time-lapse, I think Yosemite’s clouds make a wonderful metaphor for the park’s constant change. In fact, Yosemite storms are subject to the same laws of nature that build and erode mountains. Each is the environment’s response to heat, moisture, pressure, and gravity—albeit on a different clock. Different in many ways, these natural processes are also interconnected: Just as the mountains have a profound effect on weather patterns, the weather is the prime force in the mountains’ erosion.

A month ago I got to watch the special choreography of Yosemite’s clouds and granite. Drawn by the promise of snow, I arrived as the storm built during daylight’s last couple of hours. Continuing to build under the cover of darkness, the storm was in full force by the morning’s first light. I woke to find snow covering every exposed surface, while overhead the mesmerizing dance of form and flow played out atop unseen air currents.

My first stop that morning was El Capitan Meadow. In summer, gawkers tailgate here to watch climbers monkey their way to the top of El Capitan. On this frigid morning El Capitan’s summit was a memory beneath a gray shroud, so I turned my camera to earthbound subjects within the small radius of my vision. In intense storms like this, ephemeral glimpses of Yosemite’s icons are a coveted reward that keeps experienced Yosemite photographers glancing skyward. Ever the optimist, despite a seemingly impenetrable low ceiling, I directed frequent glances in El Capitan’s direction as I worked.

The first suggestion of El Cap’s outline above the trees looked more like the faintest hint of a shadow in the clouds. I recognized what could be about to happen and quickly made my way to a better vantage point, watching until the shadow darkened and vague granitic detail appeared. Anticipating further clearing, I worked fast to beat the monolith’s inevitable reabsorption, switching lenses and framing a wide shot. To minimize tree-tilting perspective distortion, I raced across the road to increase my distance from the forest, raising my vantage point by scaling a snow mound piled atop a low fence by snowplows. With a breeze blowing the trees, I’d been shooting all morning at ISO 800, and the morning’s flat and constant light meant there was no need to adjust my exposure. When the clouds parted just enough to frame El Capitan’s nose, I focused on the nearby trees and clicked several frames before the hole snapped shut.

An image like this is as much an opportunity to capture Yosemite’s snowy splendor as it is a revelation of something special about El Capitan. And that morning, my only thoughts about the clouds were wishes they’d disappear to show more granite. But as I started working on this image at home, I couldn’t help thinking about how often clouds provide the change Yosemite photographers seek in this (seemingly) unchanging place. That got me thinking about the nearby scar from last August’s tragic rockslide. On a clear day from the right vantage point, the scar is clearly visible on El Capitan’s east flank–another reminder that the only thing in Yosemite that’s permanent is change.


Yosemite’s Clouds

Click an image for a closer look and slide show. Refresh the window to reorder the display.

A few words about the “supermoon”

Sunset Moonrise, Yosemite Valley, Yosemite
Sony a7R II
Sony 70-200 f/4
1/10 second
F/8
ISO 200

I used to resist using the supermoon label because it’s more of a media event than an astronomical event, and it creates unrealistic expectations. But since the phenomenon appears to be with us to stay, I’ve changed my approach and decided to take advantage of the opportunity to educate and encourage.

What’s the big deal?

So just what is so “super” about a “supermoon?” Maybe another way of asking the question would be, if I hadn’t told you that the moon in this image is in fact a supermoon, would you be able to tell? Probably not. So what’s the big deal? And why do we see so many huge moon images every time there’s a supermoon? So many questions….

Celestial choreography: Supermoon explained

To understand what a supermoon is, you first have to understand that all orbiting celestial bodies travel in an ellipse, not a circle. That’s because, for two (or more) objects to have the gravitational relationship an orbit requires, each must have mass. And if they have mass, each has a gravitational influence on the other. Without getting too deep into the gravitational weeds, let’s just say that the mutual influence the earth and moon have on each other causes the moon’s orbit to deviate ever so slightly from the circle it seems to be (without precise measurement): an ellipse. And because an ellipse isn’t perfectly round, as it orbits earth, the moon’s distance from us depends on its position in its orbit.

An orbiting object’s closest approach to the object it orbits is “perigee”; its greatest distance is “apogee.” And the time it takes an object to complete one revolution of its orbit is its “period.” For example, earth’s period is one year (365.25-ish days), while the moon’s period is a little more than 27 days.

But if the moon reaches perigee every 27 days, why don’t we have a supermoon every month? That’s because we’ve also added “syzygy” to the supermoon definition. In addition to being a great Scrabble word, syzygy is the alignment of celestial bodies—in this case the alignment of the sun, moon, and earth (not necessarily in that order). Not only does a supermoon need to be at perigee, it must also be at syzygy.

Syzygy happens twice each month, once when the moon is new (sun-moon-earth), and again when it’s full (sun-earth-moon). (While technically a supermoon can also be a new moon, it’s the full moon that gets all the press because a new moon isn’t visible.) Since the earth revolves around the sun as the moon revolves around earth, the moon has to travel a couple extra days each month to achieve syzygy. That’s why the moon reaches perigee every 27 days, but syzygy comes every 29.5 days, and the moon’s distance from earth is different each time syzygy is achieved.
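The 27-versus-29.5-day gap falls right out of combining the two motions. A quick back-of-the-envelope check (the day counts are round approximations):

```python
# Sidereal month: one lunar orbit measured against the stars.
sidereal_month = 27.32   # days (approximate)
earth_year = 365.25      # days
# Because earth advances around the sun, the moon must travel a bit
# farther each month to realign sun, earth, and moon (syzygy).
# Combining the two angular rates gives the synodic month,
# the full-moon-to-full-moon cycle.
synodic_month = 1 / (1 / sidereal_month - 1 / earth_year)
print(round(synodic_month, 1))   # 29.5
```

Those extra two-and-a-half days are exactly the “couple extra days” of travel described above.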

The view from earth: Supermoon observed

While perigee, apogee, and period are precise terms that can be measured to the microsecond, a supermoon is a non-scientific, media-fueled phenomenon loosely defined as a full moon that happens to be at or near perigee. To you, the viewer, a full moon at perigee (the largest possible supermoon) will appear about 14% larger and 30% brighter than a full moon at apogee (its greatest distance from earth). The rather arbitrary consensus definition of a supermoon is a full moon that is within 90 percent of its closest approach to earth.
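Those 14% and 30% figures follow directly from the orbit’s geometry: apparent diameter scales with 1/distance, and apparent brightness (the disk’s area) with 1/distance². A quick sketch using round-number distances:

```python
perigee_km = 356_500   # approximate closest earth-moon distance
apogee_km = 406_700    # approximate farthest earth-moon distance

# Apparent diameter scales inversely with distance, so the size ratio
# between a perigee moon and an apogee moon is just apogee/perigee.
ratio = apogee_km / perigee_km

print(round((ratio - 1) * 100))        # 14  (% larger in diameter)
print(round((ratio ** 2 - 1) * 100))   # 30  (% brighter, by disk area)
```

Against the moon’s mean distance rather than apogee, the gains are roughly half that, which is part of why a supermoon is so hard to spot by eye.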

I really doubt that the average viewer could look up at even the largest possible supermoon and be certain that it’s different from an average moon. And all those mega-moon photos that confuse people into expecting a spectacular sight when there’s a supermoon? They’re either composites—a picture of a large moon inserted into a different scene—or long telephoto images. I don’t do composites, but they’re a creative choice that I’m fine with others doing as long as they’re clearly identified as composites.

For an image that’s not a composite, the moon’s size in the frame is almost entirely a function of the focal length used. I have no idea whether most of the moons in the full moon gallery below were super, average, or small. The images in this and my previous blog post were indeed super, taken within minutes of each other last Sunday evening, at completely different focal lengths.

Every full moon is super

A rising or setting full moon is one of the most beautiful things in nature. But because a full moon rises around sunset and sets around sunrise, most people are eating dinner or sleeping, and seeing it is usually an accident. So maybe the best thing to come of the recent supermoon hype is that it’s gotten people out, cameras or not, to appreciate the beauty of a full moon. If you like what you saw (or photographed), mark your calendar for every full moon and make it a regular part of your life—you won’t be sorry.



A full moon gallery (super and otherwise)

Click an image for a closer look and slide show. Refresh the window to reorder the display.

 
