Easy Leaf Area: Automated Digital Image Analysis for Rapid and Accurate Measurement of Leaf Area
9 July 2014
Hsien Ming Easlon, Arnold J. Bloom

Accurate, rapid, and nondestructive leaf area estimates are critical in many plant physiological and ecological experiments. Now-ubiquitous digital scanners and cameras, in conjunction with digital image processing software, have largely replaced older methods that used light obstruction to estimate leaf area. ImageJ, the most common software used for leaf area measurement, calculates leaf area from a threshold-based pixel count (Orsini et al., 2010; Warman et al., 2011; Juneau and Tarasoff, 2012; Carins Murphy et al., 2012; Schneider et al., 2012; Easlon et al., 2014). ImageJ, however, can require significant user input and often has difficulty distinguishing leaves from their background using thresholding alone (Davidson, 2011). Physically masking soil with paper collars before photographing leaves, or removing the background in software (e.g., with the GNU Image Manipulation Program; Kimball and Mattis, 2012), can remove background artifacts from images before ImageJ analysis, but these approaches add considerable processing time to leaf area measurements (Campillo et al., 2008; Warman et al., 2011; Juneau and Tarasoff, 2012).

We developed Easy Leaf Area software to rapidly estimate leaf area from Arabidopsis (DC.) Heynh. images against complex backgrounds with little user input. Easy Leaf Area uses a combination of thresholding, color ratios, and connected component analysis to rapidly measure leaf area in individual images in seconds or batch process hundreds of images in minutes; results are saved to a spreadsheet-ready CSV file. Each analyzed image is also saved in lossless TIFF format to provide a visual record of leaf area measurement and to facilitate additional analyses (Figs. 1C, F; 2C, E). Easy Leaf Area was written in Python ( http://www.python.org/), a free and open-source programming language with image processing and mathematical tools, and is easy to modify to suit specific experimental requirements; e.g., a “Crop Cover” version of the program was written to facilitate measurement of projected leaf area and percent crop canopy cover.

METHODS AND RESULTS

Easy Leaf Area uses a red calibration area of known size in each image as a scale to calibrate leaf area estimates regardless of image source, eliminating the need to assess camera distance and focal length or to measure ruler length manually (Baker et al., 1996). Total counts of green leaf pixels and red calibration pixels are used to estimate leaf area, according to: leaf area = (green pixel count) × (calibration area/red pixel count). When possible, the calibration area should be kept in the same plane as the leaves to avoid perspective distortion. Leaf area and calibration area should also be located in similar regions of the image to minimize errors from lens distortion. Errors due to camera setup and lens distortion can be quantified by analyzing the areas of squares in photographs of the ‘distortion sheet’ of green squares surrounding a red square of the same area (available for download at  https://github.com/heaslon/Easy-Leaf-Area/blob/master/DistortionSheet.jpg). A camera phone (iPhone 4, Apple, Cupertino, California, USA) image of the ‘distortion sheet’ taken without a tripod at a camera distance of 20 cm had a mean distortion of 0.17% (standard error [SE] ± 0.006). A digital single-lens reflex (DSLR) camera (18–55-mm lens, 25-mm focal length, ƒ/4; EOS Rebel T2i, Canon, Melville, New York, USA) image of the ‘distortion sheet’ at a camera distance of 30 cm had a mean distortion of −2.94% (SE ± 0.008) due to significant barrel distortion. Alternatively, destructively harvested leaves can be scanned on a flatbed scanner to eliminate leaf overlap and minimize perspective and lens distortions. Scanner images (MFC-J425w, Brother International, Bridgewater, New Jersey, USA) had a mean distortion of 0.02% (SE ± 0.003).

Leaf area analyses typically rely on thresholding of either grayscale images or the blue channel of RGB (red, green, and blue) images to distinguish leaf and calibration areas from their background (O'Neal et al., 2002; Bylesjo et al., 2008; Davidson, 2011). Easy Leaf Area uses thresholding combined with individual pixel RGB ratios to improve this process. For both green leaf pixel and red calibration pixel identification, two simple criteria are used. First, a minimum green or red threshold (i.e., a minimum green or red 8-bit RGB value [0–255]) is selected, and any pixels with lower green or red values are not counted as leaf or calibration pixels. The second criterion uses ratios of green/red (G/R) and green/blue (G/B) or red/green and red/blue RGB values to determine which of the remaining pixels are leaf or calibration pixels. Pixel color ratios are similar to the modified excessive green index used in Lee and Lee (2011), but we found independent manipulation of G/R and G/B necessary for our Arabidopsis image set.
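The pixel classification and area scaling described above can be illustrated with a minimal Python sketch (not the authors' implementation). It assumes the image is loaded as an RGB array with Pillow and NumPy; the function name and the default thresholds and ratios are illustrative placeholders, and the red calibration criteria in particular are assumptions rather than published values.

```python
import numpy as np
from PIL import Image

def estimate_leaf_area(image_path, calibration_cm2,
                       g_min=75, g_r_ratio=2.0, g_b_ratio=1.8,
                       r_min=75, r_g_ratio=2.0, r_b_ratio=2.0):
    """Count leaf (green) and calibration (red) pixels, then scale the
    leaf pixel count by the known calibration area (in cm2)."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Criterion 1: minimum 8-bit channel value (0-255).
    # Criterion 2: channel ratios; np.maximum(..., 1) avoids division by zero.
    leaf = ((g >= g_min) &
            (g / np.maximum(r, 1) >= g_r_ratio) &
            (g / np.maximum(b, 1) >= g_b_ratio))
    calib = ((r >= r_min) &
             (r / np.maximum(g, 1) >= r_g_ratio) &
             (r / np.maximum(b, 1) >= r_b_ratio))

    # leaf area = green pixel count x (calibration area / red pixel count)
    return leaf.sum() * calibration_cm2 / calib.sum()
```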

Fig. 1.

Raw and processed photographs of Arabidopsis. Unprocessed images (A, D), images after greenest and reddest pixel selection (B, E), and images after final automated processing (C, F) with the delete background option selected. Areas recolored green were identified as leaves and areas recolored red were identified as calibration area. Darker nongreen components in the final image (F) fit pixel threshold and color ratio criteria, but were below the minimum component size, and so were not included in leaf area calculations.

[Figure 1 image: f01_01.jpg]

Easy Leaf Area uses an original algorithm based on Arabidopsis rosette images taken with a camera phone (iPhone 4, Apple) to automatically determine leaf area selection criteria without user input. This algorithm is derived from the relationship between the RGB values of the greenest leaf pixels and the optimal selection criteria for each image in a set of 50 Arabidopsis images of near-isogenic lines (NILs) from the NIL library described in Fletcher et al. (2013) (these NILs are based on chromosomal introgressions at quantitative trait loci for stomatal conductance or δ13C from the Kas-1 accession in a Tsu-1 accession background) and naturalized Arabidopsis growing on the University of California, Davis, campus. G/R and G/B ratios for pixels of Arabidopsis leaves photographed under a variety of lighting and background settings were extracted with a modified version of Easy Leaf Area. Optimal selection criteria were determined from the 20 lowest green (G), G/R, and G/B values of leaves in each image. The greenest leaf pixels in each image were identified using initial criteria of 75 minimum green (G), 1.8 green/blue ratio (G/B), and 2.0 green/red ratio (G/R). If these initial criteria identified fewer than 200 leaf pixels, G/R and G/B were iteratively reduced by 6% until more than 200 leaf pixels were identified (Fig. 1B, E). There were strong correlations between greenest leaf pixel means and optimal selection criteria means for minimum G threshold (R2 = 0.899, p < 0.001), G/R (R2 = 0.883, p < 0.001), and G/B (R2 = 0.776, p < 0.001). The algorithm uses linear regressions of these relationships to estimate the optimal minimum G threshold, G/R ratio, and G/B ratio from the 200+ greenest leaf pixels in an image (Fig. 1B, E). For our Arabidopsis image set, the algorithm uses the following equations to calculate automated selection criteria:

[Equation 1: linear regressions mapping the greenest-pixel means (minimum G, G/R, G/B) to the automated selection criteria; rendered as image e01_01.gif in the original article.]
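A hedged sketch of this automated calibration step is shown below. The initial criteria and the 6% relaxation step come from the text above, but the regression slopes and intercepts are hypothetical placeholders (the published coefficients appear only in the equation image above), and the function names are assumptions.

```python
import numpy as np

def greenest_pixels(rgb, g_min=75, g_b=1.8, g_r=2.0, target=200, step=0.06):
    """Select the 'greenest' seed pixels, relaxing the ratio criteria by
    6% per iteration until at least `target` pixels qualify."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    while True:
        mask = ((g >= g_min) &
                (g / np.maximum(r, 1) >= g_r) &
                (g / np.maximum(b, 1) >= g_b))
        # stop once enough seed pixels are found (or the ratios bottom out)
        if mask.sum() >= target or (g_r < 1.0 and g_b < 1.0):
            return rgb[mask]
        g_r *= 1.0 - step
        g_b *= 1.0 - step

def automated_criteria(seed_pixels):
    """Map mean seed-pixel statistics to selection criteria via linear
    regressions; the slopes and intercepts below are hypothetical
    placeholders, not the published coefficients."""
    r, g, b = seed_pixels.T.astype(float)
    g_mean = g.mean()
    gr_mean = (g / np.maximum(r, 1)).mean()
    gb_mean = (g / np.maximum(b, 1)).mean()
    g_thresh = 0.5 * g_mean + 10.0   # hypothetical slope/intercept
    gr_ratio = 0.6 * gr_mean + 0.2   # hypothetical slope/intercept
    gb_ratio = 0.6 * gb_mean + 0.2   # hypothetical slope/intercept
    return g_thresh, gr_ratio, gb_ratio
```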

The same process was used to automatically calculate red calibration area selection criteria. The exact equations used to calculate automated selection criteria for the calibration area are available in the Python code ( https://github.com/heaslon/Easy-Leaf-Area). The accuracy of the automatic algorithm can be visually assessed for any leaf image. Pixels identified as leaf area or calibration area are recolored pure green or red for visual confirmation of leaf and calibration area identification; the background pixels can also be deleted for easier visual inspection (Fig. 1B). For images that do not conform to the Arabidopsis automatic algorithm, manual adjustment of selection criteria using software sliders can be used to optimize them. These manual settings and the RGB values of the greenest leaf pixels can be saved to a new calibration file to calibrate the algorithm for an image set. During batch or individual image processing, pixel counts and leaf areas are output along with recolored images saved in lossless TIFF format to provide a record of leaf area measurement and to facilitate additional analyses (Figs. 1C, F; 2C, E).
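The recolor-and-record step could look roughly like the following sketch (again, not the program's actual code). The function name, the white background fill, and the CSV column order are assumptions; only the pure green/red recoloring, the lossless TIFF record, and the spreadsheet-ready CSV output are described in the text.

```python
import csv
import numpy as np
from PIL import Image

def save_outputs(rgb, leaf_mask, calib_mask, leaf_area_cm2,
                 tiff_path, csv_path, image_name, delete_background=False):
    """Recolor identified pixels for visual checking and append results
    to a spreadsheet-ready CSV file."""
    out = rgb.astype(np.uint8).copy()
    if delete_background:
        out[~(leaf_mask | calib_mask)] = (255, 255, 255)  # blank background (assumed white)
    out[leaf_mask] = (0, 255, 0)    # pure green = counted as leaf area
    out[calib_mask] = (255, 0, 0)   # pure red = counted as calibration area
    Image.fromarray(out).save(tiff_path, format="TIFF")  # lossless visual record

    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([image_name, int(leaf_mask.sum()),
                                int(calib_mask.sum()), leaf_area_cm2])
```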

The above method can result in many small groups of background pixels being misidentified as leaves, especially in unmasked images of leaves with soil in the background (Figs. 1D, 2A), but these small groups can be filtered out prior to area calculation through connected component analysis (Figs. 1F, 2C). Connected component analysis identifies and labels connected leaf pixels as separate components. Small, nonleaf components are filtered out if they are smaller than a user-selected minimum leaf size, as in the sketch below. Individual components can also be labeled with pixel counts if the area of multiple leaf components in a single image is desired.
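A minimal sketch of this filtering step, using SciPy's connected component labeling rather than the program's own routine, is shown below; `min_size` stands in for the user-selected minimum leaf size.

```python
import numpy as np
from scipy import ndimage

def filter_small_components(leaf_mask, min_size=100):
    """Remove connected groups of candidate leaf pixels smaller than
    `min_size` pixels (e.g., soil specks that passed the color test)."""
    labels, n_components = ndimage.label(leaf_mask)                  # label connected pixels
    sizes = ndimage.sum(leaf_mask, labels, range(1, n_components + 1))
    keep_labels = 1 + np.flatnonzero(np.asarray(sizes) >= min_size)  # labels large enough to keep
    return np.isin(labels, keep_labels)                              # mask without small components
```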

A Windows executable “ela.exe” for automated leaf area measurement was built using PyInstaller (