WHY REVERSE LENSES?
For magnified imaging, lenses are better used reversed. The diagram shows how, in normal use, the subject is larger than the on-sensor image: normal lenses are designed and corrected for the angles the light rays make on entry and exit under those circumstances. When the image on the sensor is larger than the subject, the lens corrections are more effective with the lens reversed – biggest item closest to the larger lens surface.
With a 24mm lens used in this way you achieve a magnification (M) of 105/24 ≈ 4.4. In basic terms, light rays from a small area of the subject are spread out by the lens system to cover a bigger area – in fact (4.4)² times greater. Thus there is a much smaller “effective aperture” than the one marked on the lens, and this affects not just image brightness (and thus exposure) but also the onset of diffraction as the effective aperture gets smaller.
With TTL metering there is no need to make calculations – the camera looks after that – but it is useful to know a rough effective aperture (compared with the marked one) so you don’t inadvertently close down to where diffraction softens everything. In practice, I find it is better to keep the effective aperture to around f16–f20 and not stop down more: a rough mental calculation, working backwards, is all that is needed.
Within the practical realm of x1–x5 photography the effective aperture is obtained, roughly (I stress this is ‘ball-park’ and here is not the place to go into it), by multiplying the marked aperture by the magnification on the front lens. So, with the 24mm lens set to f4 and a magnification of 4.4, you’d have an effective aperture smaller than f16, before you even considered making further adjustments on the prime lens.
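The rule of thumb above can be sketched as a quick calculation. The figures (105mm prime, reversed 24mm, f4) come from the text; the ×M multiplication is the ball-park approximation described, not an exact optical result:

```python
# Rough effective-aperture estimate for a reversed front lens, using the
# ball-park rule from the text: effective aperture ~ marked aperture x M.
# Real coupled-lens systems are messier, so treat these as rough figures.

def magnification(prime_focal_mm: float, reversed_focal_mm: float) -> float:
    """Approximate magnification of a reversed lens mounted on a prime."""
    return prime_focal_mm / reversed_focal_mm

def effective_aperture(marked_f: float, m: float) -> float:
    """Ball-park effective aperture: marked aperture times magnification."""
    return marked_f * m

m = magnification(105, 24)           # 105mm prime with a reversed 24mm
n_eff = effective_aperture(4, m)     # reversed 24mm set to f4

print(f"magnification ~ x{m:.1f}")           # ~ x4.4
print(f"effective aperture ~ f{n_eff:.1f}")  # ~ f17.5, already past f16
```

A quick mental version of the same sum (aperture × magnification, then compare with f16–f20) is all that is needed in the field.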
Going Deeper – the problems
The practical approach above is the best one when dealing with lenses, even if, in theory, one can solve any lens problem from a series of equations. These become nightmarishly tedious, with numerous variables, when there are even a few lens elements, let alone the number in modern lenses. In practice, lens designers use the mathematical equivalent of a ‘suck it and see’ principle, tracing the paths of different rays through a lens system from edge to centre with calculations a computer can perform in their millions every second, until they reach an ‘optimal’ compromise.
Little, if any, lens optics remains in any school physics syllabus, and most university courses have abandoned it. This may be why there are some mistaken ideas floating around the internet, voiced with great certainty (as always). Coupling lenses creates a far-from-simple system where the entrance pupil of the prime lens and the exit pupil of the front lens do not coincide, so assumptions about how light passes through are just that – assumptions. In this branch of physics, ‘intelligent guesses’ help with ideas of what might be happening, and we often need no more: the empirical approach works best (i.e. ‘suck it and see’).
Diffraction and the perception of sharpness
At some future stage I shall write a detailed post on this, for there seems to be more confusion than ever, thanks to wrong assumptions about sensor site sizes and separations, mixed with a liberal helping of old ideas about resolution and line pairs. Sharpness is subjective; resolution can be measured, provided criteria are set for what you mean by ‘resolved’ for two points close together.
I have lived for decades with attempts to circumnavigate the softening effects of diffraction at small apertures – mainly by fooling the eye. Nowadays, with sharpening algorithms, one can to all intents and purposes reverse the perception of image degradation due to diffraction by around 2 stops’ worth (my visual estimation). Note this is the APPEARANCE, for these algorithms make edges more clearly defined by adding darkened pixels at edge transitions. The diffraction phenomenon and sharpening modes have no relation whatsoever. It is what you see that counts.
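To illustrate the edge-darkening point, here is a minimal one-dimensional unsharp-mask sketch – my own toy example, not any camera’s or editor’s actual algorithm. It subtracts a blurred copy from the original, which pushes pixels on the dark side of an edge darker and those on the bright side brighter, exaggerating the transition without recovering any real detail:

```python
# Toy 1-D unsharp mask: sharpened = original + amount * (original - blurred).
# Illustrative only -- real sharpening tools are far more sophisticated.

def blur(signal):
    """Simple 3-tap blur (0.25, 0.5, 0.25) with edge replication."""
    out = []
    for i in range(len(signal)):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, len(signal) - 1)]
        out.append(0.25 * left + 0.5 * signal[i] + 0.25 * right)
    return out

def unsharp_mask(signal, amount=1.0):
    """Exaggerate edges by adding back the difference from a blurred copy."""
    blurred = blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 10, 10, 10]   # a hard brightness edge
print(unsharp_mask(edge))      # [0.0, 0.0, -2.5, 12.5, 10.0, 10.0]
```

The overshoot values (−2.5 and 12.5 either side of the edge) are the “darkened pixels at edge transitions” mentioned above: a change in appearance, not a reversal of the underlying diffraction.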
One thing that becomes important in this kind of imaging is the difference between depth of field and depth of detail. This concept is important in photomacrography and is covered in Kodak publication N.128, which looks dated (B&W) but contains information that would be difficult to find elsewhere and might be of interest to some readers.
Image Composites – stacked images
A more recent solution (also to be the subject of a future post) is to create stacks of images shot at the optimum lens aperture (f5.6–f8), with the focus changed slightly between shots. These can be merged with Helicon Focus software: wonderful for stationary subjects but a challenge with living insects. Sadly, many of the pictures one sees online are of ‘recently deceased’ specimens, and it shows. Maybe people think “they’re only insects”. I saw one questioner ask how best to kill insects for stacking shots. Is it back to the days of killing bottles and pins, or should someone just try harder and be more patient, perhaps…?
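As a rough guide to why stacking is needed at all, the classical close-up approximation DOF ≈ 2·c·N·(m+1)/m² shows how thin the in-focus slab becomes at these magnifications. The sketch below uses that textbook formula; the circle-of-confusion value and the subject depth are my own assumed figures, not numbers from the text, and everything here is ball-park:

```python
# Ball-park depth of field at high magnification, and how many frames a
# focus stack would need. Uses the classical close-up approximation
# DOF ~ 2 * c * N * (m + 1) / m^2. The circle of confusion (c) and the
# subject depth below are assumed illustrative values.

import math

def depth_of_field_mm(coc_mm: float, f_number: float, m: float) -> float:
    """Approximate total depth of field at magnification m."""
    return 2 * coc_mm * f_number * (m + 1) / (m ** 2)

def frames_needed(subject_depth_mm: float, dof_mm: float, overlap=0.3) -> int:
    """Frames for a stack, stepping by (1 - overlap) * DOF between shots."""
    step = dof_mm * (1 - overlap)
    return math.ceil(subject_depth_mm / step)

dof = depth_of_field_mm(coc_mm=0.02, f_number=5.6, m=3)  # x3 at f5.6
print(f"DOF at x3, f5.6: {dof:.2f} mm")   # ~0.10 mm in focus per shot
print(f"frames for a 5 mm subject: {frames_needed(5, dof)}")
```

Around a tenth of a millimetre in focus per frame is why even a small insect can demand dozens of exposures – and why living, moving subjects are such a challenge.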
The lighting will inevitably be flash (covered here) – a single gun held close, acting as a broad source, works amazingly well if it is lightly diffused. You might have to make a reverse lens hood – a rear lens cap with the centre punched out – because reflections from some of the chrome surfaces on the rear of the lens can create flare. You don’t need DTTL control of flash – experiment and look at the LCD. Exposure can be controlled via the camera, though flash durations this close will be so short that the length of cable (if used) matters, since the pulses are nanoseconds in duration (in the realm of the time electrical signals take to travel down cables). You just dial in compensation – reducing flash exposure if too bright, increasing it if too dark.
Focusing and Vibration
A 2x–5x magnification is all that is practical (just) in the field, where the slightest movement of the subject (or camera and lens) becomes an earthquake. Thus, switch off the autofocus and focus either by moving camera plus lens or by moving the subject (often easier).
I tend to operate on the principle (used in the best microscopes, where camera and add-ons are fixed) of moving the subject: fix the camera assembly to a focus slide or support and clamp that to a table (you can use a very rigid tripod as well). With your specimens on twigs etc., and whilst looking through the viewfinder, slide them into place, holding them with both hands steadied on the table. You can also use a bean bag, which is excellent at ground level for absorbing vibration. With shots of moving creatures, let them come into view (ants on a stem) where you have pre-focused, and use a cable or wireless release.
It is not the easiest photography you will ever have done – don’t be too ambitious. Work first with a 105mm macro, or with a 135mm telephoto or a zoom and a reversed 50mm lens, so you get a feel for what 2x on the sensor is like. Finding the subject through a stacked system is sometimes the main problem – but the results are worth it.
I have been experimenting (the only useful thing to come out of the worst weather locally in years) with a new focusing slide: take one already-cannibalised microscope stand and an angle grinder (gently). I’ll show some pics when I talk about making your very own optical bench – almost portable – and photographing small life forms, including those in water: take two glass mounts from 6cm x 6cm slides and make a micro-aquarium.
Next macro-post: reversing lenses onto bellows or extension tubes for better macro performance