Introduction—Defining Light

The typical dictionary definitions of ‘light’ (noun) are: ‘the natural agent that stimulates sight and makes things visible; also, a source of illumination, especially an electric lamp’; and, finally, ‘a device that makes something start burning, as a match, lighter, or flame.’ However, the more technical, scientific definition can be obtained from the International Lighting Vocabulary (ILV) published by the International Commission on Illumination (the CIE). The definition found in the ILV has a number (#17–659) and reads:

‘light

1. characteristic of all sensations and perceptions that is specific to vision;

2. radiation that is considered from the point of view of its ability to excite the human visual system.’

The CIE provides two interesting notes to this formal definition of light: ‘NOTE 1—This term has 2 meanings that should be clearly distinguished. When necessary to avoid confusion between these 2 meanings the term ‘perceived light’ may be used in the first sense’ and ‘NOTE 2—Light is normally, but not always, perceived as a result of the action of a light stimulus on the visual system.’ [This latter note was added even before the photoreceptive retinal ganglion cells (pRGCs) were known; of course, the pRGCs also influence vision by controlling pupil size and lid position1 and may even have a type of retinal imaging role in vestibular function.2]

Background—Sources of Light

Although ‘light’ refers to visible radiant energy, it may also refer to sources of illumination, such as sunlight or artificial sources such as lamps and luminaires (ie, lamp fixtures). One might think of sunsets or even the nighttime sky! Throughout almost all of humankind’s evolution, there was only natural sunlight or fire (including candles, flame torches, and later oil lamps). But today, and over the past century, electrically powered lamps have dominated our nighttime environments in the developed countries. Since the 1820s–1830s, gas lamps and (later) incandescent (red-rich) lamps have dominated our indoor environment at night. Open flames and incandescent sources are described technically as having low color temperatures, typically 2800 kelvins (K), and are rich in longer visible (orange, red) wavelengths and in near-infrared and infrared radiation. By contrast, the midday Sun is rich in shorter wavelengths, with a color temperature of about 6500 K. Sunlight becomes red-rich when the Sun is low in the sky, and this significant change in spectrum often goes unnoticed because of selective chromatic adaptation by our visual system.

Since the 1950s, fluorescent lamps (generally rich in green light and line spectra) have been widely used in indoor lit environments, at least in office and commercial settings, but rather infrequently in the home, with perhaps one exception: the kitchen (in the USA experience). But the ‘revolution’ in optics during the 1960s, fostered largely by the invention of the laser, led to other optical technologies, including the development of new types of lenses and filters, holography, and light-emitting diodes (LEDs). LEDs were far more energy efficient than incandescent sources but initially were capable of emitting only very narrow wavelength bands, that is, single-color visible LEDs, until the invention of multi-chip LEDs and blue- or violet-pumped phosphor-converted LEDs to produce ‘white’ light.

In this century, governmental emphasis on energy conservation led to pressure to employ compact fluorescent lamps (CFLs) and ‘white’ LEDs for illumination. Solid-state lighting by LEDs, which are even more energy efficient than CFLs, is now beginning to dominate the marketplace. However, both the early CFLs and ‘white’ LEDs have very blue-rich spectral power distributions (Figure 1). Some consumers began to rebel against such blue-rich lamps and demanded less ‘harsh,’ less ‘cold-bluish’ light sources, and you will now find some LEDs and CFLs with greatly reduced blue emission. Nevertheless, over the past 60 years there has been an ever-increasing color temperature of artificial sources and an increase in nighttime ‘light pollution.’ The night sky of Western Europe as seen from space shows the enormous impact of electric lighting (Figure 2).

Figure 1

Relative spectral power distributions. Traditional tungsten lamps (······) had little short-wavelength light emission compared with ‘white’ fluorescent (–––) and LED (——) lamps. Most white LEDs have an absence of deep red and near-infrared emissions.

Figure 2

The nighttime lights of Western Europe can be seen from outer space, showing the enormous impact of artificial lighting on the night sky (from NASA).

Atmospheric optics significantly alters sunlight and sometimes provides wonderful displays of color, including the Green Flash (a great rarity)! The atmosphere acts like a mild prism: the refractive index varies slightly with wavelength, distorting the Sun’s image when it is low on the horizon. Different colors are bent by different amounts, and the Sun’s image is refracted upward by ~0.6° at the horizon, slightly more than the Sun’s ~0.5° angular diameter, so that the true Sun has actually set before its refracted image sets! The red image sets first, followed by the green, which is seen for only a fraction of a second; blue light does not appear because it has been scattered out.3

Historical views

Since primitive times, humans have wondered just ‘What is light?’ Biblically (King James ‘Authorized Version’, Cambridge Edition), Genesis 1 : 3 (Day 1) reads: ‘And God said, Let there be light: and there was light.’ Many great minds developed theories of light (Figure 3). Classical Greek thinking on ‘What is light?’ led Plato (428–348 BC) to the theory that light originated as ‘feeling rays’ from the eyes, directed at whatever one observes. He apparently drew on the fact that light is produced within the eye by pressure phosphenes. Although today this notion seems strange, this description dominated Western thought for nearly two millennia. In the seventeenth century a controversy arose as to whether light was a wave or a stream of particles. Sir Isaac Newton argued here in Cambridge that Grimaldi’s diffraction phenomena simply demonstrated a new form of refraction. Newton argued that the geometric nature of the laws of refraction and reflection could only be explained if light were composed of ‘corpuscles’ (particles), as waves did not travel in straight lines. After joining the Royal Society of London in 1672, Newton stated that the forty-fourth of a series of experiments he had just conducted had proven that light consisted of corpuscles, not waves. However, on the continent the wave theory of light appeared to hold sway. Christiaan Huygens, a Dutch physicist (physics in that century was termed ‘natural philosophy’), published his Traité de la Lumière in 1690, which supported the wave theory. Not until Sir Thomas Young clearly demonstrated wave interference (Experiments and Calculations Relative to Physical Optics, 1804)4 was the wave theory fully accepted, and it held sway at least until the end of the nineteenth century. Another prominent physicist at Cambridge was James Clerk Maxwell, who in the mid-nineteenth century derived his universal rules of electricity and magnetism that predicted electromagnetic waves and the electromagnetic spectrum (Figure 4). Indeed, around 1800 the existence of ultraviolet and infrared radiation had been discovered by Ritter5 and Herschel,6 respectively.

Figure 3

Many great minds have theorized on the nature of light from Plato to Maxwell and Einstein. Of course, Einstein need not be shown as his image is universally known.

Figure 4

Electromagnetic waves and the electromagnetic (E–M) spectrum. (a) (top) A geometrical representation of an oscillating E–M wave with E (electric) and H (magnetic) fields. (b) (below) Familiar regions of the E–M spectrum.

At the turn of the twentieth century (1899–1901), a crisis developed in classical physics. Physicists had to deal with a very big puzzle: in some experiments, such as interference and diffraction, light behaved as waves; in other experiments, such as the photoelectric effect, light appeared to behave as if composed of particles. The photoelectric effect was observed in some metals when exposed to a beam of light, but only shorter wavelengths would produce a photocurrent in the metal, whereas longer-wavelength (red) light, even at high intensity, would not. This curious observation strongly supported the quantum theory of radiation. Some German physicists theorized that a single photon (particle of light) has a quantum energy Qν that is directly proportional to the frequency f (sometimes symbolized by the Greek letter ν) of the wave:

Qν = h × f,

where h is known as ‘Planck’s Constant.’ This led to the concept of ‘wave–particle duality.’
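As a simple numerical illustration of this relationship (a sketch of my own, not part of the original argument), the quantum energy of a blue photon can be compared with that of a red one; the ~2.3 eV work function used below is an assumed, illustrative value for a metal such as sodium.

```python
# Minimal numeric illustration of Q = h * f for a blue and a red wavelength.
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light in vacuum, m/s
eV = 1.602e-19         # joules per electron-volt

for wavelength_nm in (450, 650):
    f = c / (wavelength_nm * 1e-9)     # frequency, Hz
    Q = h * f                          # photon energy, J
    print(f"{wavelength_nm} nm: f = {f:.2e} Hz, Q = {Q / eV:.2f} eV")

# A 450-nm photon carries ~2.8 eV, a 650-nm photon only ~1.9 eV.
# If a metal's work function is ~2.3 eV (assumed, illustrative), blue photons
# can eject electrons while red photons cannot, however intense the red beam,
# which is the observation that motivated the quantum description.
```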

Physicists eventually reached a consensus that light could be characterized simultaneously as both a stream of particles and a wave. Some aspects of quantum theory are quite strange, and we shall not delve deeper; even Einstein had problems accepting quantum theory. Yet it was Einstein who theorized that the speed of light in a vacuum could not be exceeded, and who (in 1916) predicted ‘stimulated emission of radiation,’ which was the theoretical basis for the laser.7

Most people know that the velocity of light in a vacuum is a constant: about 300 000 km/s. It is very slightly slower in air and slower still in denser media, for example, ~225 000 km/s inside the eye. The ratio of the speed of light in a vacuum to that in a medium is the index of refraction, n. Just a few months ago, a team at the Ecole Polytechnique Fédérale de Lausanne claimed that they had produced the first photograph of light behaving as both particles and waves! I am not sure that I understood their experimental technique, but it will be interesting to see whether other laboratories can reproduce their results and confirm their interpretation of their images. Figure 5 provides a scale for comparing the dimension of one wavelength of light.

Figure 5

Wavelength as a matter of scale. A single retinal melanin granule or red blood cell has dimensions of the order of one wavelength from a neodymium laser (1.064 μm=1064 nm).
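Before moving on, a minimal numeric check of the refractive-index relation mentioned above, using the rounded speed inside the eye quoted in the text (the vacuum value is the standard figure):

```python
# n = c_vacuum / c_medium, computed from the speeds quoted in the text.
c_vacuum = 299_792   # km/s, speed of light in vacuum
c_eye = 225_000      # km/s, approximate speed of light in the ocular media

n_eye = c_vacuum / c_eye
print(f"n (ocular media) ~ {n_eye:.2f}")   # ~1.33, close to that of water
```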

Quantum theory and stimulated emission

At the atomic scale, photons are emitted when an electron jumps to a lower-energy orbital of the atom. Stimulated emission of a photon can occur only if an initial photon of exactly the right energy passes by an already excited atom. Atoms are generally excited by absorbing a photon, which raises the atom to a higher energy level; the atom then normally returns to a lower level by spontaneously emitting a photon, unless stimulated emission intervenes first. With a properly constructed resonant cavity, a cascade of stimulated emissions can occur, producing a laser beam. The real benefit of a laser source is its ultra-high radiance (brightness). Virtually all applications of a laser, from laser pointers, laser rangefinders, and CD writing and reading to laser fusion, are only possible because of the ultra-high radiance of a laser. A 1-mW laser pointer has a brightness (radiance) at least 10 times greater than the Sun.
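That last comparison can be checked with a rough, order-of-magnitude sketch; the beam diameter and divergence below are assumed typical values for a pointer, not figures from the text.

```python
# Order-of-magnitude comparison of a 1-mW laser pointer's radiance with the Sun's.
import math

P = 1e-3                      # laser power, W (1 mW pointer)
beam_diam = 1e-3              # assumed beam diameter at the exit aperture, m
divergence = 1e-3             # assumed full-angle beam divergence, rad

area = math.pi * (beam_diam / 2) ** 2          # emitting area, m^2
solid_angle = math.pi * (divergence / 2) ** 2  # solid angle of the beam, sr
L_laser = P / (area * solid_angle)             # radiance, W m^-2 sr^-1

# Solar radiance: ~1361 W/m^2 at the Earth divided by the Sun's solid angle
# (~6.8e-5 sr), giving roughly 2e7 W m^-2 sr^-1.
L_sun = 1361 / 6.8e-5

print(f"Laser pointer radiance ~{L_laser:.1e} W/m^2/sr")
print(f"Solar radiance         ~{L_sun:.1e} W/m^2/sr")
print(f"Ratio ~{L_laser / L_sun:.0f}x")
```

Even with these conservative assumed values, the pointer's radiance comes out one to two orders of magnitude above the solar radiance, consistent with the ‘at least 10 times’ statement.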

What are the limits of the visible spectrum?

There really are no agreed limits to the visible spectrum. The CIE defines ‘visible radiation’ (ILV term number 17–1402) as ‘any optical radiation capable of causing a visual sensation directly’. The CIE definition adds the following note: ‘There are no precise limits for the spectral range of visible radiation since they depend upon the amount of radiant power reaching the retina and the responsivity of the observer. The lower limit is generally taken between 360 and 400 nm and the upper limit between 760 and 830 nm’. The limits of visibility have long been a personal interest. As a young scientist of about 24 years of age, I performed an experiment to determine the shortest wavelength that I could see, after reviewing much earlier reports on the subject.8, 9, 10 I could image the slit of a double monochromator down to 310 nm, and I was certain that I was truly imaging 310 nm and not stray light of longer wavelengths, since placing a number of spectral filters in the beam produced no change in detection threshold. But today, at age 74, I cannot even see 400 nm very easily! As I have aged, the buildup of UV-absorbing proteins (many of them fluorophores) in my intact crystalline lenses blocks most UV-A (315–400 nm) wavelengths, and I experience more haze from lens fluorescence than when I was younger. Everyone can experience lens fluorescence11 from the UV-A (315–400 nm), and Zuclich et al12 quantified UV-A lens fluorescence and how little it varies with age. Weale13 estimated that lens fluorescence interfered with visual performance. Insects are quite sensitive to UV, and this is the basis of UV insect light traps. Bees are believed to make use of the polarized UV in skylight to navigate, but humans presumably do not knowingly make use of the polarized violet sky, despite some polarizing features of the human cornea producing Haidinger’s brushes.14 During World War II, concerns arose that preexposure to ultraviolet decreased night vision,15 but even the renowned vision scientist George Wald argued with a University of Rochester graduate student that this finding was ridiculous because the crystalline lens blocked retinal UV-A exposure. Apparently Professor Wald did not think logarithmically in this case: nearly 1% of UV-A is transmitted, and with the higher photon energies of the shorter UV wavelengths it was not implausible that UV-A radiation could affect rod photoreceptors.16 A small tempest continued, with Wolf17 confirming the decrease in night vision; even later, Wald18 argued this was neither a significant nor a permanent effect. Tan19 later measured the grayish vision of aphakic individuals, which confirmed the secondary UV-A response peaks of each cone photoreceptor.

Seeing infrared ‘light’

After several curious stories about soldiers seeing infrared lasers in the 1970s, my group demonstrated visual detection to nearly 1100 nm (J Opt Soc Amer 1976). Figure 6 shows the extension of the spectral responsivity of vision well into the infrared. This was not an easy experiment. We separated the laser by 8 m from the observer to reduce pump light (pump light rapidly decreased with distance but the laser-beam irradiance did not), and we employed narrow-band infrared filters, stacked until adding another filter produced no change in the measured threshold (Figure 7). It was interesting that, as with other visible wavelengths, color identification was difficult at threshold for a point source,20 but if we exceeded threshold and, particularly, if we expanded the source size from a ‘point’, we could always see red, suggesting that the red cones were activated. Additionally, we conducted experiments that confirmed reports from night field observations that one would see ‘green’ light from within the beam of a short-pulsed Nd:YAG laser several kilometers down-range. If one directly observed the 1064-nm near-infrared emission of a Q-switched (~10–20 ns) Nd:YAG laser, one saw green light which, when color-matched against a CW monochromator source, appeared as 532-nm green. This demonstrated to us that second-harmonic generation was occurring within ocular tissues, probably at the retina. A second harmonic was not seen with the ruby (694 nm) laser, demonstrating the low efficiency of this non-linear process.

Figure 6

Photopic spectral sensitivity of the human eye V(λ) extended into the infrared (after Sliney et al25). The circles are larger than the SD of measured thresholds for detecting a point source.

Figure 7

Experimental arrangement used in 1970 experiments of infrared visual sensitivity (Sliney et al25).

In a paper published last December, Palczewska et al21 argued that infrared vision is the result of two-photon isomerization; however, as they employed only trains of femtosecond (10⁻¹⁵ s) pulses from an infrared laser, they could not rule out other non-linear processes. Their experiments were good, but in my view their interpretations appear flawed, as they ignored the impact of their laser’s peak power, some 67 000 times the average power. They could not assume that their 200-fs, 75-MHz laser was equivalent to a continuous source (its duty cycle was only 1.5 × 10⁻⁵), so non-linear effects were not surprising. Their 1-mW average power entering the eye actually corresponded to a peak power of ~66 W, producing a retinal irradiance >13 MW/cm2 in a minimal retinal spot size of ~25 μm!
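A back-of-the-envelope sketch of that duty-cycle argument, using only the pulse width, repetition rate, average power, and spot size quoted above:

```python
# Check of the duty-cycle and peak-power figures quoted in the text.
import math

pulse_width = 200e-15      # s (200 fs)
rep_rate = 75e6            # Hz (75 MHz)
P_avg = 1e-3               # W (1 mW average power entering the eye)
spot_diam_cm = 25e-4       # cm (~25 um minimal retinal spot)

duty_cycle = pulse_width * rep_rate            # ~1.5e-5
P_peak = P_avg / duty_cycle                    # ~66 W

spot_area = math.pi * (spot_diam_cm / 2) ** 2  # cm^2
E_peak = P_peak / spot_area                    # W/cm^2

print(f"duty cycle      ~{duty_cycle:.1e}")
print(f"peak power      ~{P_peak:.0f} W")
print(f"peak irradiance ~{E_peak / 1e6:.0f} MW/cm^2")  # consistent with >13 MW/cm^2
```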

We can conclude that the visibility of light outside the well-accepted range of about 380–780 nm depends upon the brightness (radiance) of the source, but it is limited to approximately 310 nm at the short-wavelength end (and that only in childhood) and to perhaps ~1100 nm in the near-infrared. A true dividing line simply does not exist between ‘visible’ and infrared; the visibility of an infrared A (IR-A) wavelength depends only on the brightness (radiance) of the source compared with the ambient luminance.

CIE photobiological spectral bands

The CIE developed some useful short-hand notations for photobiology in the 1930s. These were: UV-C, from 100 to 280 nm (highly actinic and germicidal, with a short-wavelength border at the ‘soft-X-ray’ region); UV-B, between 280 and 315 nm, with actinic and photocarcinogenic effects; and UV-A, between 315 and 400 nm, which is only weakly actinic but has a major role in photodynamic effects and photosensitizers. The visible spectrum intentionally overlaps the UV-A (from ~360–380 to 400 nm in the deep violet) and extends well into the near-infrared (IR-A) spectral band, which begins at 780 nm. To the surprise of research photobiologists, the borders of these CIE spectral bands have sometimes created controversy in the industrial sector. There is actually a rather infamous ‘standard’ published by the International Organization for Standardization (ISO) that attempted to change the traditional CIE definition of UV-A that had existed for >75 years (ISO 20473:2007). The ISO technical committee TC172 (optics) prepared this spectral-band standard by redefining UV-A as ending below 380 nm rather than at the CIE’s 400 nm, and attempted to place a sharp border with the visible spectrum at 380 nm.22 Key ophthalmic-industry members of the committee favored this change because ophthalmic lenses and sunglasses could then meet much more lenient criteria for ‘UV blocking!’

The CIE identifies three infrared spectral bands, based largely on spectral variations in the absorption of infrared by water. The IR-A ranges from 780 to 1400 nm (metavisible wavelengths); these wavelengths are well transmitted by water and reach the retina through the ocular media. As noted earlier, a very weak visual stimulus exists even at 1100 nm, and IR-A penetrates deeply into biological tissues and is thus used in diagnostics and in skin treatments. The infrared B (IR-B) ranges between 1.4 μm (1400 nm) and 3.0 μm (middle infrared); these wavelengths do not reach the retina but penetrate as much as a few mm into the skin and ocular tissues. The infrared C (IR-C) is a vast spectral domain, extending from 3.0 μm to 1000 μm (1 mm); these far-infrared wavelengths are absorbed very superficially (<1 mm). The extreme infrared C is also referred to as terahertz (THz) radiation.
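For readers who like the band edges in one place, the sketch below collects the CIE photobiological bands described in the last two paragraphs into a small lookup table (an illustrative summary of my own, not an official CIE listing); note that the visible range is deliberately left overlapping its neighbors.

```python
# CIE photobiological spectral bands as described above (edges in nm).
CIE_BANDS = [
    ("UV-C", 100, 280),
    ("UV-B", 280, 315),
    ("UV-A", 315, 400),
    ("visible (approx.)", 380, 780),   # intentionally overlaps UV-A and IR-A
    ("IR-A", 780, 1400),
    ("IR-B", 1400, 3000),
    ("IR-C", 3000, 1_000_000),         # 3 um to 1 mm
]

def bands_for(wavelength_nm: float) -> list[str]:
    """Return every CIE band containing the given wavelength (bands overlap)."""
    return [name for name, lo, hi in CIE_BANDS if lo <= wavelength_nm < hi]

print(bands_for(390))    # ['UV-A', 'visible (approx.)']
print(bands_for(1064))   # ['IR-A']
```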

Measuring light—the CIE standardized radiometric and photometric terms

The CIE defines two separate systems for measuring light: the photometric and the radiometric. The radiometric system is based upon fundamental physical units (Table 1). The photometric system is used in lighting design and illumination engineering and is based upon an approximate, but standardized, spectral response of daylight (photopic) vision, V(λ), with units of lumens (luminous flux Φv), lux (lm/m2, for illuminance Ev), candelas (lm/sr, for luminous intensity Iv), and nits (cd/m2, for luminance Lv, ie, ‘brightness’). The radiometric system is employed by physicists to quantify radiant energy independent of wavelength; photometric quantities are meaningful only for visible light, whereas radiometric quantities and units apply in the ultraviolet and infrared spectral regions as well.23 Detailed terms, quantities, and units are provided online in the CIE electronic ILV at http://eilv.cie.co.at/, and these are used widely in international (ISO and IEC) standards.

Table 1 Quick summary of useful radiometric quantities and their units
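To make the link between the two systems concrete, here is a minimal sketch (my own illustration, not from the text) of how monochromatic radiant power converts to luminous flux through V(λ) and the 683 lm/W maximum luminous efficacy; the V(λ) entries are rounded values from the CIE 1924 photopic curve.

```python
# Monochromatic luminous flux = 683 lm/W * V(lambda) * radiant power.
V = {450: 0.038, 555: 1.000, 650: 0.107}   # rounded photopic V(lambda) values

def luminous_flux(radiant_power_W: float, wavelength_nm: int) -> float:
    """Luminous flux in lumens for a monochromatic source."""
    return 683.0 * V[wavelength_nm] * radiant_power_W

for wl in (450, 555, 650):
    print(f"1 W at {wl} nm -> {luminous_flux(1.0, wl):.0f} lm")

# 1 W of 555-nm light yields 683 lm; the same radiant power at 450 or 650 nm
# yields far fewer lumens, which is why photometric units only make sense
# for visible light.
```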

Calculating retinal exposures

The retinal irradiance Er is directly proportional to the radiance (brightness) L of the source being viewed. In W/cm2, the retinal irradiance is:

Er = 0.27 × L × τ × de²,

where L is the radiance in W/cm2/sr, τ is the transmittance of the ocular media, and de is the pupil diameter in cm. Two persons looking at the same scene can easily have pupil sizes sufficiently different that their retinal irradiances differ by a factor of 2 (100%)!
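A minimal sketch of that pupil dependence, using an assumed daylight scene radiance and ocular transmittance (illustrative figures only, not measurements from the text):

```python
# Retinal irradiance Er = 0.27 * L * tau * de^2 (L in W/cm^2/sr, de in cm).
def retinal_irradiance(L, tau, pupil_diam_cm):
    """Er in W/cm^2."""
    return 0.27 * L * tau * pupil_diam_cm ** 2

L_scene = 3e-3    # assumed daylight scene radiance, W/(cm^2*sr)
tau = 0.9         # assumed ocular-media transmittance

for d_mm in (2.0, 2.8):
    Er = retinal_irradiance(L_scene, tau, d_mm / 10)
    print(f"pupil {d_mm} mm -> Er ~ {Er * 1e3:.3f} mW/cm^2")

# A pupil only ~1.4x wider (2.8 vs 2.0 mm) doubles the retinal irradiance,
# since Er scales with the square of the pupil diameter.
```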

The retinal illuminance (a photometric measure) is measured in trolands (td) and is the luminance L (cd/m2) of the source viewed multiplied by the pupillary area (in mm2). This unit has been widely used in studies of ‘flash blindness’ and in some areas of vision research. The retinal irradiance from ambient outdoor illumination is of the order of 0.02–0.1 mW/cm2, and these levels are just comfortable to view. The retinal illuminance outdoors is ~5 × 10⁴ td. Directly viewing the midday Sun’s image, with a million-fold greater radiance than the blue sky or most of the outdoor surround, can result in a retinal irradiance of ~6 W/cm2, or ~3 × 10⁷ td for a 1.6-mm pupil. Flash-blindness studies normally cite ~10⁷ td·s as a ‘full bleach’, which would occur in one-third of a second.
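And a correspondingly small sketch of the troland calculation for the outdoor case quoted above; the luminance and pupil diameter are assumed, typical values.

```python
# Retinal illuminance in trolands = luminance (cd/m^2) * pupil area (mm^2).
import math

def trolands(luminance_cd_m2, pupil_diam_mm):
    pupil_area_mm2 = math.pi * (pupil_diam_mm / 2) ** 2
    return luminance_cd_m2 * pupil_area_mm2

# Assumed bright outdoor scene (~5000 cd/m^2) viewed through a ~3.5-mm pupil.
print(f"{trolands(5000, 3.5):.1e} td")   # ~5e4 td, as quoted above
```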

Conclusion

It is hard to predict future developments in our understanding and use of light; hopefully they will not bring more blue light at night. Our understanding of the multiple roles of the pRGCs has only begun. Although incorrectly labeled by some as ‘non-visual photoreceptors’, they have important roles, together with the other photoreceptors, in pupillary constriction (improving visual acuity and depth of focus) and in setting upper-lid position to reduce sky glare and shield the inferior retina in bright daylight.1 There is also evidence that pRGCs have some direct roles in imaging.2 We know little about what occurs if they are overdriven.24