Airphoto Interpretation

Recognition Elements

Airphoto interpretation is the process of viewing airphotos, identifying geographic features based on image characteristics, and relating those characteristics to known ground conditions in order to infer information that is not directly visible in the photo. For example, an experienced interpreter can distinguish between high- and low-income areas on an airphoto by looking at lot and building size and at associations between features, such as the presence of swimming pools or lots backing onto a golf course.

Several image characteristics may be used to identify features and interpret ground conditions. These include pattern, shape, tone, texture, shadow, associated features, and size.

Patterns can help to identify natural, agricultural and urban features. Natural patterns often reflect surficial or bedrock geology or dominant geomorphic processes. For example, evidence of glaciation may be found in the scraping of topsoil from bedrock or in depositional features such as drumlins or moraines. Patterns can also be used to differentiate agricultural features. For example, orchards and vineyards show distinctive spatial patterns. Fields subjected to circular irrigation are also clearly evident on airphotos, as are settlement patterns derived from splitting large blocks of land into smaller farms. In urban landscapes, patterns can help distinguish between residential, commercial and industrial areas and may even allow you to differentiate residential areas based on their age.

Shape is particularly important in interpretation of urban images. Shapes can help distinguish between different building types. Roof shape often provides a clear indication of the type of structure which may help in identifying its function.

Tone can be a useful image characteristic but can also be problematic, since it varies considerably across an image, in part because tone is affected by the shadows of objects in the image. Airphotos are usually taken in late morning, so the sun angle is typically from the southeast. Because of radial displacement of objects from the nadir, we see the sides of objects as well as their tops in the airphoto. However, in the southeast quadrant of the image we see the shadowed northwest sides of objects, producing a darker tone, while in the northwest quadrant we see the sunlit sides, producing a lighter tone. This variation in tone can make interpretation more difficult. Tone may be used to delineate drainage networks, since wetter soils will have a darker tone, but darker tones could also be caused by the presence of organic soils.

Texture is particularly important in interpreting vegetation types. Not only is it possible to distinguish between broad forest classes such as deciduous vs coniferous forest, but an experienced interpreter can also distinguish varieties of trees, e.g. red maple vs sugar maple or cherry trees vs peach trees, based on the texture of the image. Detailed information about forest stands can be interpreted from airphotos, as has been done in producing Ontario’s Forest Resource Inventory (FRI) maps. These maps are derived primarily through image interpretation with some field checking and describe the age and species composition of individual forest stands.

Shadows can reveal the type of structure or allow differentiation of tree types. Shadows are more pronounced on low sun angle photographs, making identification of feature types easier. However, shadows may hide detail in the image and affect its tone, which may make interpretation more difficult.

Many types of features can be easily identified by examining associated features. For example, a public school and a high school may be similar flat-roofed buildings, but it may be possible to identify the high school by its association with an adjacent football field and track. Similarly, a light industrial building and a shopping plaza may be difficult to distinguish based on building type alone, but the shopping plaza will be associated with a larger parking area than the industrial building.

The size of objects can also aid in interpretation. A cemetery and a campground may appear similar on an airphoto image since both show a regular spatial pattern of paths/roads and rectangular objects. In this case, the size of the objects can be used to aid in interpretation, although it may be necessary to determine the scale of the image to arrive at the correct interpretation.
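To illustrate how image scale converts a photo measurement into a ground size, here is a quick calculation (the measurements and scales below are hypothetical, chosen only for illustration):

```python
def ground_size_m(photo_size_cm, scale_denominator):
    """Ground size in metres of an object measured on the photo.

    photo_size_cm: size of the object as measured on the print, in cm.
    scale_denominator: the N in a 1:N photo scale.
    """
    return photo_size_cm * scale_denominator / 100.0  # cm -> m

# The same 0.03 cm measurement represents very different ground sizes
# at different scales -- plot-sized at 1:10,000, campsite-sized at 1:50,000.
print(ground_size_m(0.03, 10_000))  # 3.0 m
print(ground_size_m(0.03, 50_000))  # 15.0 m
```

The point is that the same photo measurement is ambiguous until the scale is known, which is why determining scale can be a prerequisite to interpretation.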

3-D Airphoto Interpretation

Because of the overlap between successive airphotos along a flight line, it is possible to view airphotos stereoscopically, i.e. in three dimensions. However, stereoscopic viewing is limited to the area of overlap between the images.

Stereoscopic viewing is based on binocular vision. Each of our eyes sees a scene from a slightly different perspective. Our brains reconstruct the two images recorded by our eyes into a three dimensional view of the scene. The same thing is possible with airphotos (or other images) provided that when we view the airphotos, each eye is focused on a single image.

There are several methods that can be used to ensure that each eye sees only one of a pair of images. Early 3-D movies relied on the use of anaglyphs. To see the movie in 3-D, the audience was required to wear glasses with red and green lenses. The coloured lenses filter out different colours, so each eye sees a different image, which the brain reconstructs into a perspective view. Polarized light projection operates in a similar way: by changing the direction of polarization, each eye views a different image.

In airphoto interpretation, stereoscopic viewing is usually assisted by the use of pocket or mirror stereoscopes. Both operate on the same principle. Mirror stereoscopes have the advantage of being able to view larger images than is possible with a pocket stereoscope, which is limited by the approximately 6 cm distance between our eyes. We look at a pair of overlapping airphotos through lenses that force each eye to see only one of the pair of photos. Once again, our brain reconstructs the three dimensional view from the pair of images.
Pocket Stereoscope

Mirror Stereoscope

Depth perception is a function of the parallax angle, which is the angle between the lines of sight from each eye to an object in a pair of stereo images. The parallax angle decreases with distance from the object. Because of radial displacement of objects in the image, the top of an object appears to be at a different depth than its bottom.
Parallax Angle

In setting up airphotos for stereoscopic viewing, care must be taken to avoid pseudoscopic vision. Pseudoscopic vision can occur in two ways: if the order of the airphotos is reversed or if the shadows in the image point away from the observer. Both of these conditions will cause the 3-D image to appear to be inverted.
Pseudoscopic Vision

A final problem with three dimensional viewing of airphotos is vertical exaggeration. Objects in the image appear taller than in reality and slopes appear steeper. This exaggeration can sometimes aid in interpretation but is somewhat disorienting to inexperienced viewers. Vertical exaggeration occurs because of the difference in geometry between taking the airphotos and viewing them. Vertical exaggeration varies with camera focal length and % overlap between successive images. Vertical exaggeration can be calculated as:
VE = (B / H) / (b / h)
where B is the air base; H is the height of the aircraft above the ground; b is the eye base (approximately 6 cm); and h is the distance from the eye at which the stereo model is perceived (approximately 45 cm).
Vertical Exaggeration
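The formula above is simple enough to sketch in code. The eye base and viewing distance defaults come from the definitions just given; the flight parameters in the example are hypothetical:

```python
def vertical_exaggeration(B, H, b=0.06, h=0.45):
    """VE = (B / H) / (b / h), with all lengths in metres.

    B: air base (ground distance between successive exposure stations)
    H: flying height of the aircraft above the ground
    b: eye base (~6 cm); h: perceived distance to the stereo model (~45 cm)
    """
    return (B / H) / (b / h)

# A hypothetical flight with a 1,380 m air base at 3,000 m flying height:
print(round(vertical_exaggeration(1380, 3000), 2))  # about 3.45
```

A value well above 1 like this means slopes in the stereo model look several times steeper than they are on the ground.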

Multi-Concept

Many applications of airphoto interpretation require interpretation of multiple images. This can include use of multi-scale images, multi-temporal images and multi-spectral images.

Multi-scale interpretation requires a series of images at different scales, taken at the same time. Although simultaneous acquisition is difficult, if not impossible, it is often possible to acquire images from different sources that were taken at approximately the same time, i.e. within a few days of one another. Multi-scale images could include satellite-based Landsat MSS, Landsat Thematic Mapper or SPOT images, airborne MEIS or CASI images, and airphotos taken from different flying heights or using different camera lenses. In general, in interpreting multi-scale images, we use the larger scale images to interpret smaller scale imagery. Alternatively, smaller scale imagery may be used for reconnaissance purposes and larger scale imagery for more detailed analysis within selected sub-areas of the smaller scale image.

Multi-temporal images are used to analyze landscape change over time. Examples could include examining changes in river systems or sand dunes, monitoring crops over a growing season to forecast crop yields, or monitoring urban growth. In these applications, we are using images of the same area acquired at different points in time.

Multi-spectral imagery is often used to aid in interpretation of specific types of features. For example, colour IR film clearly distinguishes water from land and is useful for distinguishing between different vegetation types, which may be hard to interpret from normal black and white or colour airphotos. In this case, we can select spectral bands that are best suited to identifying the types of features we are interested in. We can also combine spectral bands to create a new index that may be more revealing than the individual bands alone.
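One widely used example of such a band combination (not specific to this post) is the Normalized Difference Vegetation Index (NDVI), which contrasts near-infrared and red reflectance; healthy vegetation reflects strongly in the near-infrared, so vegetated pixels score high while water scores near zero or negative. A minimal per-pixel sketch with hypothetical reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for a single pixel.

    nir, red: reflectance in the near-infrared and red bands (0 to 1).
    """
    total = nir + red
    return (nir - red) / total if total else 0.0

print(ndvi(0.50, 0.08))  # vegetation: strongly positive
print(ndvi(0.03, 0.05))  # water: slightly negative
```

In practice the same calculation is applied to every pixel of co-registered band images, producing a new index image that separates vegetation from other cover types more cleanly than either band alone.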

There are numerous potential applications of airphoto interpretation. Airphoto interpretation has been widely used as the basis for land use classification and mapping, and for mapping changes in land use over time. In developing countries that often do not have reliable population databases, airphoto interpretation can be used to estimate housing density. By calculating the housing density for representative sample areas within an airphoto image, reliable estimates of housing density can be obtained for other similar areas in the image. If information is available on average household size, then this method can be extended to produce estimates of population density.
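The extension from a sampled housing density to a population estimate is simple arithmetic; a sketch with hypothetical figures:

```python
def population_estimate(houses_in_sample, sample_area_km2,
                        total_area_km2, avg_household_size):
    """Scale a housing density from a sample area up to a population
    estimate for a larger area of similar housing."""
    density = houses_in_sample / sample_area_km2  # houses per km^2
    return density * total_area_km2 * avg_household_size

# 240 houses counted in a 2 km^2 sample, applied to 15 km^2 of
# similar housing, with an average household of 5 people:
print(population_estimate(240, 2.0, 15.0, 5))  # 9000.0
```

The estimate is only as good as the assumption that the sampled area is representative of the rest of the image, which is why the method is restricted to visually similar areas.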

Airphotos have often been used in transportation studies and can be used to identify vehicle types, estimate traffic flows, identify parking problems on city streets, estimate parking lot usage, and even to measure the speed of vehicles on a highway.
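Speed estimation works because the time between successive exposures along a flight line is known: a vehicle's displacement measured on the photos, converted to a ground distance via the scale, divided by that interval, gives its speed. A sketch with hypothetical numbers:

```python
def vehicle_speed_kmh(displacement_cm, scale_denominator, interval_s):
    """Speed in km/h of a vehicle, from its apparent displacement
    between two photos of known scale taken interval_s seconds apart."""
    ground_m = displacement_cm * scale_denominator / 100.0  # cm -> m
    return ground_m / interval_s * 3.6  # m/s -> km/h

# A car that moves 0.5 cm between 1:5,000 photos taken 1 s apart:
print(vehicle_speed_kmh(0.5, 5000, 1.0))  # about 90 km/h
```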

Airphotos are regularly used in the aftermath of natural disasters such as earthquakes, volcanic eruptions or floods, to guide relief efforts. Insurance companies also make use of airphotos to assess damage and verify insurance claims.

Some municipalities use airphotos to identify building code violations and enforce compliance with permitting procedures. Most municipalities require building permits for any construction project larger than a small backyard shed. New construction can be identified on an airphoto and permit records can be checked to verify that a building permit was issued for the project. This type of application requires large scale imagery such as 1:5,000.

Airphoto interpretation has often been used to aid in locating businesses or public facilities such as schools, fire stations or libraries. By specifying a set of criteria that represent desirable locations for the business or public facility, airphoto interpretation can be used to identify sites that satisfy project requirements. In a similar manner, airphoto interpretation can be used to do avoidance screening. The objective here is to identify areas where development cannot occur. This could include areas of steep slopes, organic soils, buffer zones around marshes, rivers, shorelines or the tops of steep slopes, ecologically sensitive areas, conflicting land uses, class 1 and 2 agricultural land, or gravel deposits. An experienced interpreter can quickly identify these constraint areas on an airphoto, often by tracing their outlines on an acetate overlay. While this type of analysis is increasingly being done using geographic information systems, manual airphoto interpretation can be much faster than the time required to develop the GIS database.


About Rashid Faridi

I am Rashid Aziz Faridi, Writer, Teacher and a Voracious Reader.
This entry was posted in Remote Sensing 101.
