Thursday, December 10, 2015

Lab 8: Spectral Signature Analysis

Goals and Objectives:
The main goal of this lab is to gain experience in measuring and interpreting the spectral reflectance of various Earth surface materials captured in satellite images. Specifically, we will learn how to collect spectral signatures from remotely sensed images, graph them, and analyze them to verify whether they pass the spectral separability test, which is a prerequisite for image classification. By the end of this lab, we will be able to collect and properly analyze spectral signature curves for various Earth surface features in any multispectral image.
Methods:
In this lab, Erdas Imagine was used to analyze eau_claire_2000.img. Once the image is displayed, zoom in to Lake Wissota. Under the Drawing tab, click the Polygon tool and outline the standing water of Lake Wissota. Then, under the Raster tab, click Supervised and then Signature Editor, and click Create New Signature to create a signature for standing water. Finally, click Display Mean Plot Window to graph the standing water's spectral bands. Spectral signatures were then collected in the same way for the following features: moving water, vegetation, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and concrete surface. After the data was collected and analyzed, Erdas Imagine was closed.
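The signature-collection step can be illustrated outside of Erdas Imagine: a spectral signature is just the mean pixel value in each band over the pixels inside the drawn polygon. Below is a minimal sketch, using a synthetic 6-band image and a rectangular stand-in for the lake polygon (all values are hypothetical, not the Eau Claire data):

```python
import numpy as np

# Hypothetical 6-band image chip (rows x cols x bands) and an AOI mask;
# in Erdas Imagine the Signature Editor does this per-polygon automatically.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(50, 50, 6)).astype(float)

mask = np.zeros((50, 50), dtype=bool)
mask[10:30, 15:35] = True  # stand-in for the Lake Wissota polygon

# The "signature" is the mean pixel value in each band over the AOI --
# the curve that Display Mean Plot Window graphs.
signature = image[mask].mean(axis=0)
print(signature.shape)  # one mean value per band -> (6,)
```

Plotting `signature` against band number reproduces the mean plot window's curve for that feature.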
Results:
Upon viewing the plot for standing water, the band with the highest reflectance is band 1, with a mean value of about 77, while the lowest reflectance occurs in bands 4 through 6, with mean values near 0. This is because water strongly absorbs NIR and MIR wavelengths, so reflectance in those bands is low. It also makes sense that water appears blue, since water reflects the blue band (band 1) most strongly, as displayed in Fig. 1.
Fig. 1
Following the first signature, the highest and lowest reflecting bands were recorded for the remaining eleven features (listed as highest, lowest): moving water = 1, 6; vegetation = 4, 3; riparian vegetation = 4, 3; crops = 4, 3; urban grass = 5, 3; dry soil = 4/5, 3; moist soil = 4, 3; rock = 1, 6; asphalt highway = 5, 4; airport runway = 5, 4; concrete surface = 5, 4. For vegetation, band 4 has the highest reflectance, which makes sense because band 4 is the near-infrared band, and the internal leaf structure of healthy vegetation reflects NIR strongly. The lowest reflecting bands for vegetation are bands 3 and 6; chlorophyll absorbs red light (band 3) to convert it into energy. When comparing moist and dry soil, the greatest variation occurs in band 5. Moist soil absorbs this band more than dry soil because of its water content, and since water's reflectance in band 5 is nearly zero, the discrepancy between dry and moist soil is large. These differences can be viewed in Figure 2.
Fig. 2
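The moist-versus-dry-soil comparison above amounts to finding the band where two signatures diverge the most, which is the informal separability check this lab performs by eye. A minimal sketch, with illustrative signature values (not the measured data):

```python
# Hypothetical mean signatures for six bands; values are illustrative only.
dry_soil   = [60, 70, 85, 110, 130, 95]
moist_soil = [45, 52, 63, 80, 70, 88]

# Band (1-indexed) where the two curves diverge the most -- the band
# that best separates the two classes.
diffs = [abs(a - b) for a, b in zip(dry_soil, moist_soil)]
best_band = diffs.index(max(diffs)) + 1
print(best_band)  # with these illustrative numbers, band 5
```

With real signatures exported from the Signature Editor, the same comparison identifies which band to inspect when two classes look confusable.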
Upon viewing all spectral signatures on one plot (Fig. 3), many similarities and differences can be observed. The crop and soil signatures are fairly similar to each other because both surfaces absorb and reflect similar wavelengths. However, the moist soil curve sits slightly lower on the graph because moist soil contains water, which absorbs each band more strongly than dry soil or crops do. The same reasoning applies to vegetation and riparian vegetation: riparian vegetation contains more water and therefore absorbs more, particularly in the green band, than the regular vegetation. Surfaces such as the airport runway are vastly different from the vegetation because they reflect highly in most bands.
Fig. 3
If I were asked to develop a four-channel sensor that collects data for identifying most of these surfaces, I would include bands 5, 4, and 3. These bands show the greatest variability among the signatures, so the maximum amount of information can be extracted from these spectral channels. They also cover most of the visible range and extend into the infrared, which provides additional information about the given signatures.
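The band-selection argument above can be sketched as ranking bands by how much the class signatures spread out in them: a band in which the classes differ widely carries more discriminating information. The signature table below is illustrative, not the measured Eau Claire data:

```python
import numpy as np

# Hypothetical signature table: rows = surface classes, columns = bands 1-6.
signatures = np.array([
    [77, 50,  30,   5,   2,   1],  # standing water
    [40, 45,  35, 120,  90,  60],  # vegetation
    [60, 70,  85, 110, 130,  95],  # dry soil
    [90, 95, 100, 105, 140, 120],  # concrete
], dtype=float)

# Rank bands by the spread of class means within them; a high standard
# deviation across classes suggests a useful channel for separation.
variability = signatures.std(axis=0)
ranked = np.argsort(variability)[::-1] + 1  # 1-indexed band numbers
print(ranked[:4])  # -> [5 4 6 3]
```

Even with made-up numbers, the idea matches the lab's conclusion: the bands where the curves fan out (here 5 and 4) are the ones worth keeping on a small sensor.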
Sources:
Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Thursday, December 3, 2015

Lab 7: Photogrammetry

Goals and Objectives:
The main goal for this lab is to develop skills in performing photogrammetric tasks on aerial photographs and satellite images. Specifically, this lab helps in understanding the mathematics behind the calculation of photographic scales, measurement of areas and perimeters of features, and calculating relief displacement. This lab will also cover an introduction to stereoscopy and performing orthorectification on satellite images.
Methods:
In this lab, JPEG images were used for the calculation of photographic scales and the measurement of areas, perimeters, and relief displacement of features. Erdas Imagine was also used to explore stereoscopy and orthorectification, and polaroid glasses were used to view the stereoscopic images. The process began with measurements of scale and relief displacement. First, open the image Eau Claire_West-se.img and measure the distance from A to B with a ruler in inches. Given that the actual distance is 8822.47 ft, convert this to inches, then reduce the fraction of the measured distance (2.7") over the actual distance (105869.64"). Next, view ec_east-sw.img and calculate its scale in a similar way: subtract the elevation of Eau Claire from the altitude at which the photograph was taken. After converting the units in the equation, the scale of the photo can be found. Now open Erdas Imagine. Using the Polygon tool under the Measure icon, outline the specified body of water; when completed, this function reports the perimeter and area of the body of water in hectares and acres, meters and miles. We then calculated the relief displacement in ec_west-se.img, given that the height of the aerial camera above the datum is 3,980 ft and the scale of the aerial photograph is 1:3209. With the equation d = (h × r) / H, where h is the object's height, r is the radial distance from the principal point, and H is the flying height above the datum, we can find the displacement of the object in the image. Next, stereoscopy was analyzed with the use of polaroid glasses. With Erdas Imagine running, create a Stereoscopy_output folder in a personal Lab 7 folder. Bring the images ec_city.img and ec_dem2.img into two separate viewers. Click Terrain, then Anaglyph, to open the Anaglyph Generation window. The Input DEM should be ec_dem2.img, and the Input Image should be ec_city.img. Put the output image in the Stereoscopy_output folder and name it ec_anaglyph.img.
After the vertical exaggeration is increased to 2, run the model and view the image with polaroid glasses. Next, orthorectification was performed in Erdas Imagine. Bring the images spot_pan.img and spot_panb.img into the same viewer. Create a subfolder in your Lab 7 folder and label it Orthorectification_output. In the Toolbox bar, open IMAGINE Photogrammetry. Create a New Block File named Sat_ortho, saved in the Orthorectification_output subfolder. Change the Geometric Model Category to Polynomial-based Pushbroom and select SPOT Pushbroom. Under Block Property Setup, select Set. In the Custom tab, select UTM under Projection Type, Clarke 1866 under Spheroid Name, and NAD27 (CONUS) under Datum Name; the UTM Zone should be 11. Then click OK in Block Setup. Next, add imagery to the block and define the sensor model: add a frame to the Images folder and verify the parameters of the SPOT pushbroom sensor. After this, activate the point measurement tool and collect GCPs: click Start Point Measurement Tool, select Classic Point Measurement Tool, bring in the desired images for orthorectification, and place GCPs in the specified areas. After 12 GCPs are created, set the Type and Usage to Full and Control, respectively. Then add a second image to the block and collect its GCPs. Next, run automatic tie point collection, triangulation, and orthorectification resampling: verify that all 35 automatically placed points are accurate and make changes if necessary, then perform the triangulation process. After this is complete, the orthorectification resampling process can begin. Once all the parameters are set, run the process and view the final orthorectified image in a new viewer.
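One common way to judge the quality of the triangulation step described above is the root-mean-square error of the point residuals, which the triangulation report summarizes. A small sketch of that calculation, using hypothetical residuals (these are not values from the lab):

```python
import math

# Hypothetical (x, y) residuals in pixels for four control points after
# triangulation; a low RMSE means the points fit the sensor model well.
residuals = [(0.3, -0.2), (-0.1, 0.4), (0.2, 0.1), (-0.3, -0.3)]

rmse = math.sqrt(sum(dx * dx + dy * dy for dx, dy in residuals) / len(residuals))
print(round(rmse, 3))  # 0.364
```

If residuals like these came out large for a particular point, that is the point whose placement should be revisited before resampling.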
Results:
After calculating the scale of Eau Claire_West-se.img, we find that it is approximately 1:40,000. The scale of ec_east-sw.img is approximately 1:39,000. When measuring the specified lake (Fig. 1), we found its area to be 37.9 ha, or 93.66 acres, and its perimeter to be 4,108.17 m, or 2.55 mi.
Fig. 1
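The scale and unit arithmetic behind these results can be checked in a few lines (the conversion factors are standard; the measured inputs are the ones given in the Methods):

```python
# Representative-fraction scale from a measured photo distance and a
# known ground distance, as in the Eau Claire_West-se.img step.
photo_distance_in = 2.7             # measured on the photo with a ruler
ground_distance_in = 8822.47 * 12   # 8,822.47 ft -> 105,869.64 in

scale_denominator = ground_distance_in / photo_distance_in
print(round(scale_denominator))     # 39211, i.e. roughly 1:40,000

# The Measure tool's hectare/meter output converted to acres/miles.
area_acres = 37.9 * 2.47105         # 1 ha = 2.47105 acres
perimeter_mi = 4108.17 / 1609.344   # 1 mi = 1,609.344 m
print(round(area_acres, 2), round(perimeter_mi, 2))  # 93.65 2.55
```

The computed denominator of about 39,211 is what rounds to the reported 1:40,000.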
To find the relief displacement of objects in an image, the displacement equation d = (h × r) / H is used. First, the tower's photo height of 0.5" is multiplied by the scale denominator of 3209 to get its actual height, h = 1604.5". With a radial distance r = 10.5" and a flying height H = 3980 ft × 12 = 47,760", the displacement is d = (1604.5 × 10.5) / 47,760 ≈ 0.352". This means the tower should be displaced 0.352" toward the principal point. Next, when a stereoscopic image is produced, the elevation features in Eau Claire become more pronounced, and objects such as buildings appear to "pop" out, revealing their elevation. These pronounced features are closer to reality because they represent height more accurately, which is difficult to capture on a two-dimensional surface. One factor responsible for the differences between the initial view of the city and the anaglyph image is the combination of red and blue offset for each eye, which the brain interprets as height when the two colors are overlaid. After the orthorectified image is created, it is apparent that the two images overlay nearly identically, as if they were one (Fig. 2).
Fig. 2
The image helps us more accurately conceptualize the area where the two images meet and is a better representation of the given area.
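The relief-displacement arithmetic reported above can likewise be verified directly, using the numbers from this lab:

```python
# Relief displacement d = (h * r) / H, with the lab's measurements.
photo_height_in = 0.5                    # tower height measured on the photo
scale_denominator = 3209                 # photo scale 1:3209
h = photo_height_in * scale_denominator  # actual object height, 1604.5 in
r = 10.5                                 # radial distance from principal point, in
H = 3980 * 12                            # flying height above datum, 47,760 in

d = (h * r) / H
print(round(d, 2))  # 0.35 in, matching the reported .352" displacement
```

Note that d grows with the object's height and its distance from the principal point, which is why tall features near the photo's edge show the most displacement.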
Sources:
National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005.
Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010.
Spot satellite images are from Erdas Imagine, 2009.
Digital elevation model (DEM) for Palm Spring, CA is from Erdas Imagine, 2009.
National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.