Thursday, December 10, 2015

Lab 8: Spectral Signature Analysis

Goals and Objectives:
The main goal of this lab is to gain experience in measuring and interpreting the spectral reflectance of various Earth surface materials captured in satellite images. Specifically, we will learn how to collect spectral signatures from remotely sensed images, graph them, and analyze them to verify whether they pass the spectral separability test, a prerequisite for image classification. At the end of this lab, we will be able to collect and properly analyze spectral signature curves for various Earth surface features in any multispectral image.
Methods:
In this lab, ERDAS Imagine was used to analyze eau_claire_2000.img. Once the image is displayed, zoom in to Lake Wissota. Under the Drawing tab, click the Polygon tool and outline the standing water of Lake Wissota. Then, under the Raster tab, click Supervised and then Signature Editor, and click Create New Signature to create a signature for the standing water. After that, click Display Mean Plot Window to graph the standing water's spectral bands. Spectral signatures were then collected in the same way for the following features: moving water, vegetation, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and concrete surface. After the data was collected and analyzed, ERDAS Imagine was closed.
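Conceptually, the mean plot produced by the Signature Editor is just a per-band average of the pixel values inside the drawn polygon. A minimal numpy sketch of that idea (the array names, shapes, and values below are assumptions for illustration, not ERDAS internals):

```python
import numpy as np

def mean_signature(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return the mean value of each band over the masked (polygon) pixels."""
    return np.array([band[mask].mean() for band in image])

# Example with fake data: a 6-band image and a small rectangular "polygon".
rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(6, 100, 100)).astype(float)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True           # stands in for the Lake Wissota outline
print(mean_signature(image, mask))  # one mean value per band
```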
Results:
Upon viewing the plot for standing water, it can be observed that the band with the highest reflectance is band 1, with a mean value of about 77, whereas the bands with the lowest reflectance are bands 4 and 6, with mean values near 0. This is because water absorbs a high amount of NIR and MIR energy, so reflectance is low in those bands. Water appears blue, and this makes sense because water reflects the blue band (band 1) most strongly, as displayed in Fig. 1.
Fig. 1
Following the first signature, the highest and lowest reflecting bands were recorded for the next eleven features (listed as highest, lowest): moving water = 1, 6; vegetation = 4, 3; riparian vegetation = 4, 3; crops = 4, 3; urban grass = 5, 3; dry soil = 4/5, 3; moist soil = 4, 3; rock = 1, 6; asphalt highway = 5, 4; airport runway = 5, 4; concrete surface = 5, 4. When viewing the highest and lowest reflectance of vegetation, band 4 has the highest reflectance, which makes sense because band 4 is the near-infrared band, and the internal structure of healthy leaves scatters NIR strongly. The lowest reflecting bands for vegetation were bands 3 and 6: chlorophyll absorbs red light (band 3) for photosynthesis, and the water in the leaves absorbs the middle-infrared band (6). When comparing the moist and dry soil, band 5 is where the greatest variation takes place. Moist soil absorbs this band more than dry soil due to the moist soil's water content; water's reflectance in band 5 is nearly zero, which creates a greater discrepancy between dry and moist soil. These differences can be viewed in Figure 2.
Fig. 2
Upon viewing all spectral signatures on one plot (Fig. 3), many similarities and differences can be observed. The crop and soil signatures are fairly similar to each other, since they absorb and reflect in the same regions of the spectrum. However, the moist soil sits slightly lower on the graph because it contains water, which absorbs each band more strongly than dry soil or crops do. The same reasoning applies to vegetation and riparian vegetation: riparian vegetation contains more water and therefore absorbs more than the normal vegetation, particularly in the infrared bands. Surfaces such as the airport runway are vastly different from the vegetation because they reflect highly across most bands.
Fig. 3
If I were asked to develop a four-channel sensor that collects data for the identification of most of these surfaces, the most important channels would be bands 5, 4, and 3. These bands show the greatest variability between signatures, so the maximum amount of discriminating information can be extracted from these spectral channels. This selection also spans from the visible range into the infrared, which provides more information on the given signatures.
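As a rough illustration of this band-selection reasoning, the per-band spread across the collected signatures can be computed directly. The signature values below are placeholders, not the lab's actual measurements:

```python
import numpy as np

# Bands with the largest spread across features separate the classes best.
signatures = {
    "standing water": [77, 40, 30, 1, 1, 0],
    "vegetation":     [30, 35, 20, 90, 60, 25],
    "dry soil":       [50, 55, 60, 80, 85, 70],
    "concrete":       [70, 75, 80, 78, 95, 85],
}
values = np.array(list(signatures.values()))  # shape: (features, bands)
spread = values.std(axis=0)                   # per-band std across features
ranking = np.argsort(spread)[::-1] + 1        # band numbers, most separable first
print("bands ranked by separability:", ranking)
```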
Sources:
Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Thursday, December 3, 2015

Lab 7: Photogrammetry

Goals and Objectives:
The main goal of this lab is to develop skills in performing photogrammetric tasks on aerial photographs and satellite images. Specifically, this lab helps in understanding the mathematics behind the calculation of photographic scales, the measurement of areas and perimeters of features, and the calculation of relief displacement. This lab will also cover an introduction to stereoscopy and to performing orthorectification on satellite images.
Methods:
In this lab, JPEG images were used for the calculation of photographic scales and the measurement of areas, perimeters, and relief displacement of features. ERDAS Imagine was also used to analyze the functions of stereoscopy and orthorectification, and Polaroid glasses were used to view stereoscopic images.

The process began with measurements of scale and relief displacement. First, open the image Eau Claire_West-se.img and measure the distance from A to B with a ruler in inches. Given that the actual distance is 8,822.47 ft., convert this to inches, then reduce the fraction of the measured distance (2.7") over the actual distance (105,869.64"). Now view ec_east-sw.img and calculate its scale in a similar way: subtract the elevation of Eau Claire from the altitude at which the photograph was taken and, after converting the units in the equation, the scale of the photo can be found. Now open ERDAS Imagine. Using the Polygon tool under the Measure icon, outline the specified body of water. This function, when completed, reports the perimeter and area of the body of water in hectares and acres, meters and miles. We then calculated the relief displacement in ec_west-se.img with the parameters that the height of the aerial camera above the datum is 3,980 ft and the scale of the aerial photograph is 1:3209. With the equation d = (h × r) / H, where h is the height of the object, r is the radial distance from the principal point to the top of the object, and H is the flying height above the datum, we can find the displacement of the object in the image.

Next, stereoscopy is analyzed with the use of Polaroid glasses. With ERDAS Imagine running, create a Stereoscopy_output folder in a personal Lab 7 folder. Bring in the images ec_city.img and ec_dem2.img in two separate viewers. Click on Terrain, then Anaglyph, to open the Anaglyph Generation window. The Input DEM should be ec_dem2.img, and the Input Image should be ec_city.img. Put the output image in the Stereoscopy_output folder and name it ec_anaglyph.img. After the vertical exaggeration is increased to 2, run the model and view the image with Polaroid glasses.

Finally, orthorectification is implemented in ERDAS Imagine. Bring in the images spot_pan.img and spot_panb.img in the same viewer. Create a subfolder in your Lab 7 folder and label it Orthorectification_output. In the Toolbox bar, open IMAGINE Photogrammetry. Create a New Block File and name the image Sat_ortho under the Orthorectification_output subfolder. Change the Geometric Model Category to Polynomial-based Pushbroom and select SPOT Pushbroom. Under Block Property Setup, select Set. Select UTM under the Projection Type in the Custom tab, Clarke 1866 under Spheroid Name, and NAD27 (CONUS) under Datum Name; the UTM Zone should be 11. Then click OK on Block Setup. Next, add imagery to the Block and define the sensor model: add a frame to the Images folder and verify the parameters of the SPOT pushbroom sensor. After this, activate the point measurement tool and collect GCPs. Click the Start Point Measurement Tool and select Classic Point Measurement Tool. Bring in the desired images for orthorectification and place GCPs in the specified areas. After 12 GCPs are created, set the Type and Usage to Full and Control respectively. Then add a second image to the block and collect its GCPs. Then view the automatic tie point collection and use triangulation and orthorectification resampling. Verify that all of the 35 automatically placed GCPs are accurate and make changes if necessary. Perform the triangulation process.
After this is complete, we can finally start the orthorectification resampling process. After all the parameters are set, run the process and view the final orthorectified image in a new viewer.
Results:
After calculating the scale of Eau Claire_West-se.img, you find that the scale is roughly 1:40,000. After calculating the scale of ec_east-sw.img, you find that the scale is roughly 1:39,000. When finding the area and perimeter of the specified lake (Fig. 1), we found that the area of the lake was 37.9 ha, or 93.66 acres, and that the perimeter was 4,108.17 m, or 2.55 mi.
Fig. 1
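The representative-fraction arithmetic behind the first scale calculation can be verified with a few lines of Python, using the distances given in the methods:

```python
# scale = photo distance / ground distance, with both in the same units.
photo_in = 2.7               # measured A-to-B distance on the photo (inches)
ground_ft = 8822.47          # actual A-to-B ground distance (feet)
ground_in = ground_ft * 12   # convert to inches: 105,869.64"
scale_denominator = ground_in / photo_in
print(f"scale ~ 1:{scale_denominator:,.0f}")  # ~1:39,211, i.e. roughly 1:40,000
```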
When finding the relief displacement of objects in images, the displacement equation is used. The object's real-world height is first recovered from the photo measurement: 0.5" × 3,209 = 1,604.5". Entering the numbers into d = (h × r) / H gives d = (1,604.5" × 10.5") / (3,980 ft × 12 in/ft) ≈ 0.352". This means that the tower should be displaced 0.352" toward the principal point to correct for relief displacement. Next, when a stereoscopic image is produced, the elevation features in Eau Claire are more pronounced, and objects such as buildings appear to almost "pop" out to show their elevation. These pronounced features are closer to reality because they represent height more accurately, which is difficult to capture on a two-dimensional surface. A factor responsible for the differences between the initial observations of the city and the anaglyph image is the assignment of red and blue to each eye: when the two offset color channels are overlaid, the brain interprets the offset as height. After the creation of the orthorectified image, it is apparent that the two images overlay nearly identically, as if they were one (Fig. 2).
Fig. 2
The image helps us more accurately conceptualize the area where the two images meet and is a better representation of the given area.
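To double-check the relief displacement arithmetic above, here is the same calculation as a short script (a sketch of the d = (h × r) / H equation, using the lab's numbers):

```python
def relief_displacement(h_in: float, r_in: float, H_in: float) -> float:
    """h: object height; r: radial distance from the principal point to the
    top of the object; H: flying height above the datum (all in inches)."""
    return (h_in * r_in) / H_in

h = 0.5 * 3209   # photo measurement (0.5") scaled to real height: 1,604.5"
r = 10.5         # radial distance measured on the photo (inches)
H = 3980 * 12    # flying height, feet converted to inches
print(f"d = {relief_displacement(h, r, H):.4f} in")  # 0.3527" -> ~0.352"
```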
Sources:
National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005.
Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010.
Spot satellite images are from Erdas Imagine, 2009.
Digital elevation model (DEM) for Palm Spring, CA is from Erdas Imagine, 2009.
National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.

Thursday, November 19, 2015

Lab 6: Geometric Correction

Goals and Objectives:
The main goal of this lab is to introduce a very important image preprocessing exercise known as geometric correction. It is structured to develop skills in the two major types of geometric correction normally performed on satellite images as part of preprocessing, before the extraction of biophysical and sociocultural information from the imagery.
Methods:
In this lab, ERDAS Imagine 2015 was used to analyze imagery of the Chicago area and Sierra Leone. The process began with image-to-map rectification. After ERDAS Imagine is opened, bring Chicago_2000.img into one viewer and Chicago_drg.img into the other. Navigate to Multispectral and click on Control Points. Select Polynomial under Select Geometric Model and accept the default Image Layer. Add Chicago_drg.img as the reference DRG image. After maximizing the window and accepting all defaults, the window should look like this.

Clear the existing GCPs from the Multipoint Geometric Correction window and fit the images to frame. Click on the Create GCP tool and add a GCP to the input image (Chicago_2000.img), and another at the same location in the reference image (Chicago_drg.img), as directed. Repeat this process with two more points in the directed areas. You may have to change the color of the GCPs to make them visible on the image. After the third GCP is added, the model solution will change from "model has no solution" to "model solution is current". When this occurs, add a fourth GCP in its directed area on the input image only; the corresponding GCP on the reference image is added automatically. Zoom in on the individual GCPs and make micro-adjustments until the final Root Mean Square (RMS) error is below 2.0. The RMS error can be found in the bottom right-hand corner of the window. This process is necessary to reduce positional errors in the final image. The finished product should appear as below:
After this process is complete, click the Display Resample Image Dialog button. Save the rectified image to the Lab 6 folder, name it Chicago_2000gcr.img, and accept all other parameters. Run the operation and bring the image into ERDAS. The next process used in this lab is image-to-image rectification. Bring sierra_leone_east1991.img into the first viewer and Sierra_Leone_east1991grf.img into the other. Go through the same process as in part one under Multispectral, except change the polynomial order to three under Polynomial Model Properties. Use the same process as in Part 1 to add 12 GCPs to the image. After 10 GCPs have been added to both the input image and the reference image, the remaining two GCPs will be added to the reference image automatically. Once the GCPs have been placed, adjust the individual points to reduce the RMS error to less than one, an acceptable level of error. The final product should look like this:
Click the Display Resample Image Dialog button. Save the output image as sl_east_gcc.img and change the resample method to bilinear interpolation. Accept the other defaults and run the operation. When the operation is completed, bring the finished product up in ERDAS and compare the rectified image to the reference image.
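For reference, the RMS error that ERDAS reports during these adjustments is the root mean square of the GCP residuals. A small sketch of that computation, with made-up residual values rather than the lab's:

```python
import math

# Each residual is the (dx, dy) offset, in pixels, between where the
# transformation places a GCP and where it was actually digitized.
residuals = [(0.4, 0.3), (0.2, 0.5), (0.6, 0.1), (0.3, 0.4)]
rmse = math.sqrt(sum(dx**2 + dy**2 for dx, dy in residuals) / len(residuals))
print(f"RMS error: {rmse:.3f} pixels")  # adjust GCPs until this is acceptable
```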
Results:
After viewing the first rectified image from the image-to-map process, it is apparent that Chicago_drg.img provided a digital planimetric map, which is the source of accurate ground control points. The image-to-map rectification method converts the data file coordinates of the input image (in this case Chicago_2000.img) to the grid and coordinate system of a reference map (Chicago_drg.img). The image pixel coordinates are rectified/transformed using their map coordinate counterparts, which results in a planimetric image. The four ground control points are spread around the image rather than concentrated in one area so as to maximize the portion of the image that is geometrically corrected; if the points were concentrated, the image would only be corrected well over a small area.

The model used in the first geometric correction exercise is a first-order polynomial, which requires a minimum of only three ground control points. This model uses a simple y = b + ax approach, a linear equation that fits a plane to the data and is less accurate than higher-order polynomials. Next, when using a 3rd-order polynomial, the map coordinate system of the reference image is UTM, and the transformation requires at least 10 GCPs. The final image, after all the GCPs are placed and the geometric correction is run, is a fairly spatially accurate rectified image, much more accurate than the original two images. Bilinear interpolation was selected for this process instead of nearest neighbor because the polynomial used in the second exercise is not linear; since the first transformation was linear, nearest neighbor was acceptable to use there.
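The GCP minimums cited above follow directly from the number of coefficients a polynomial transformation of order t must solve for, (t + 1)(t + 2) / 2. A quick check:

```python
def min_gcps(order: int) -> int:
    # Number of coefficients in a 2D polynomial of the given order.
    return (order + 1) * (order + 2) // 2

for t in (1, 2, 3):
    print(f"order {t}: at least {min_gcps(t)} GCPs")
# order 1: 3 GCPs (the first exercise); order 3: 10 GCPs (the second)
```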
Sources:
Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.
Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.

Thursday, November 12, 2015

Lab 5: Lidar Remote Sensing

Goals and Objectives:
The main goal of this lab is to learn about Lidar data structure and processing. The specific objectives for this lab are the processing and retrieval of various surface and terrain models, and the processing and creation of an intensity image and other derivative products from a point cloud. In this lab, we will work with Lidar point clouds in LAS file format.
Methods:
In this lab, ArcMap and ERDAS Imagine 2015 were used to analyze information pertaining to the Eau Claire area. The process began with point cloud visualization in ERDAS Imagine, where lidar point cloud files were added to access their information. Once the point clouds were loaded and displayed in ERDAS, ArcMap was opened. After navigating to the Tile Index in Lab 5, QuarterSections_1.shp was displayed and observed. After this, ERDAS and ArcMap were closed.
Next, an LAS dataset was generated and lidar point clouds were explored with ArcGIS. First, a folder connection was created by opening ArcMap and then ArcCatalog. From here, we connected to the Lab_5/LAS folder and created a new LAS Dataset. After renaming the file Eau_Claire_City.lasd, we navigated to LAS Dataset Properties. Under LAS Files, click Add Files and add the individual LAS files; the files are added to ArcMap. View Statistics and click Calculate to see the statistics for all the files, which can be viewed under the LAS Files tab. Next, assign coordinate information to the LAS dataset by first clicking on the XY Coordinate System. Consult the metadata for Lab 5 by selecting Edit with Notepad++. In ArcMap, view the XY Coordinate System, navigate to NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet), and Apply. Go to the Z Coordinate tab, navigate to NAVD 1988 US Feet, and Apply. Units are now applied to the image. Close the Properties window and bring in the image of Eau Claire, which should appear as the image below.
Zoom in to individual tiles to view their detail. Examine the Surface pull-down menu and observe its features.
Click on the Contour listing and change the contour interval by entering different index factors. Return to Layer Properties, change the properties under Filter, and observe their differences. Back in the main viewer, under the LAS Dataset, set the points to Elevation and First Return. Click on the LAS Dataset Profile View tool and use it to view the bridge pictured below.
Finally, we explore the generation of lidar derivative products. After accessing Workspace under Geoprocessing, we accessed the information for Lab 5. Then use the LAS Dataset to Raster tool to create a DSM image, and then use the Hillshade tool with the same process to create a hillshade image.
Now turn the LAS Dataset back on and turn off the DSM and hillshade products. Then set the filter to Ground and generate a digital terrain model, or bare-earth raster. Run the operation and observe the finished DTM product. Lastly, a first-return image is created based on intensity: use the LAS Dataset to Raster tool with the Intensity setting selected. After the image is created, view it in ERDAS due to its superior viewing capabilities over ArcMap. The image appears as seen below, and the whole sequence is sketched as a script after this paragraph.
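For reference, the derivative-product steps above could also be scripted with arcpy rather than run through the GUI. The sketch below is an assumption-laden outline (hypothetical paths, cell size, and parameter choices), not the lab's actual workflow:

```python
import arcpy

arcpy.CheckOutExtension("3D")
arcpy.env.workspace = r"C:\Lab_5"   # hypothetical workspace
lasd = "Eau_Claire_City.lasd"

# DSM: elevation of all returns, averaged into 2-ft cells.
arcpy.LasDatasetToRaster_conversion(
    lasd, "dsm.tif", "ELEVATION", "BINNING AVERAGE LINEAR",
    "FLOAT", "CELLSIZE", 2)

# Hillshade derived from the DSM.
arcpy.HillShade_3d("dsm.tif", "dsm_hillshade.tif")

# DTM (bare earth): restrict to ground-classified points (class code 2),
# mirroring the Ground filter set in the GUI, then rasterize.
arcpy.MakeLasDatasetLayer_management(lasd, "ground_lyr", class_code=[2])
arcpy.LasDatasetToRaster_conversion(
    "ground_lyr", "dtm.tif", "ELEVATION", "BINNING AVERAGE LINEAR",
    "FLOAT", "CELLSIZE", 2)

# Intensity image instead of elevation.
arcpy.LasDatasetToRaster_conversion(
    lasd, "intensity.tif", "INTENSITY", "BINNING AVERAGE NONE",
    "INT", "CELLSIZE", 2)
```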
Results:
When accessing the LAS files in ArcMap, the statistics display that the Min Z and Max Z for the entire LAS dataset are 517.85 and 1845.92 respectively. Considering whether these values are realistic for the city of Eau Claire, the highest value makes sense, but the lowest value seems questionable, because the sensor may not be able to sense through multiple layers of an area lying below 517.85. At that point in the lab, the unit of measurement of these numbers was unknown; after consulting the metadata, however, the horizontal coordinate system is D_North_American_1983 with a unit of feet, and the vertical coordinate system is the North American Vertical Datum of 1988, also in feet. Viewing the range of the image, the X and Y ranges are 20,995.8 feet and 13,347.02 feet respectively.

There are some areas with limited numbers of points when viewing this image zoomed in. These sparse areas could be due to the large amount of data/land the sensor must record in a brief amount of time from its high altitude, which results in wide point spacing during the on-the-fly interpolation. As for the distribution of slope for natural land surface features and man-made features, the man-made features have sharp edges and regular patterns of elevation, whereas the natural surface features appear more random and sporadic.

When choosing a filter under Layer Properties, the Ground and Non-Ground options make use of classification, First Return relies on return number, and All (the default) uses both. These options differ because return number does not classify what the object is, whereas the classification does. Looking back at the LAS dataset's point spacing, the average nominal point spacing (NPS) of the point clouds comes to about 1.485 after calculation.

Viewing the nature of features in the first-return hillshade derived product, the product is in grayscale and illustrates the tops of every object in the image; it shows great detail and texture in every building, tree, road, etc. After creating the bare-earth raster, the image appears much smoother, and objects such as buildings and trees are removed. What is left in the DTM is what the processing classifies as the ground in the image; strictly the terrain. The visual impact of removing vegetation and buildings from the map view is the contrast between the smooth DTM and the detailed first-return image. Viewing the intensity image, the intensity values record the strength of the laser return, which for topographic lidar is typically in the near-infrared portion of the spectrum. Overall, however, the intensity image differs in its spectral characteristics: it is visibly sharper when viewed in ERDAS, but it is only grayscale, instead of a full spectrum of colors.
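The nominal point spacing mentioned above can be approximated from a dataset's extent and point count as roughly sqrt(area / points). A sketch, using the X/Y ranges from the results and a hypothetical point count:

```python
import math

x_range_ft = 20995.8
y_range_ft = 13347.02
point_count = 127_000_000  # hypothetical total, not the dataset's statistic
nps = math.sqrt((x_range_ft * y_range_ft) / point_count)
print(f"approximate NPS: {nps:.3f} ft")  # ~1.485 ft with these numbers
```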
Sources:
Lidar point cloud and tile index are from Eau Claire County, 2013.
Eau Claire County Shapefile is from Mastering ArcGIS 6th Edition data by Margaret Price, 2014.

Thursday, October 29, 2015

Lab 4: Miscellaneous Image Functions

Goals and Objectives:
The main goal of this lab is to be able to delineate a study area from a satellite image, demonstrate how the spatial resolution of images is optimized for viewing, introduce radiometric enhancement techniques for optical images, link satellite images to Google Earth for use as ancillary imagery, explore image mosaicking, and expose students to binary change detection through graphical modeling. At the end of this lab, students will be able to demonstrate image preprocessing, enhance images for interpretation, delineate any study area from a larger satellite image scene, mosaic multiple image scenes, and build a simple model for remote sensing analysis.
Methods:
In this lab, ERDAS Imagine 2015, Google Earth, and ArcMap were used to analyze information pertaining to the Eau Claire area. Image subsetting (creation of an AOI) was performed on images of Eau Claire using the Raster tools in ERDAS. The images were subsetted with the use of an area-of-interest shapefile to highlight two counties. Using the Raster and Subset & Chip tools, the output image is a subset of the original, creating an AOI covering Eau Claire and Chippewa Counties.

Open two viewers of Eau Claire in 2000, where one image is panchromatic. Using pan-sharpening and resolution merge, the nearest neighbor technique is used to pan-sharpen the 30-meter reflective image to 15 meters.
Using simple radiometric enhancement techniques, haze reduction can be implemented through the Haze Reduction tool. Google Earth can be linked to ERDAS using the Connect to Google Earth tool: load an image of Eau Claire, connect to Google Earth, and use Match GE to View to obtain the same spatial extent as the image. By synchronizing views, the same scene can be viewed in Google Earth, with certain advantages. Resampling was then explored on an image of Eau Claire. Using the Resample Pixel Size tool, the nearest neighbor method resamples the pixel size from 30x30 to 15x15. The process is then repeated with bilinear interpolation instead of nearest neighbor. Next, image mosaicking is implemented with multiple images. Using Mosaic Express, two images are selected and run under default parameters. The following image is produced:
MosaicPro is then used on the same two images. By clicking the MosaicPro option, the active area can be computed manually to seam the two images together. Selecting the Use Histogram Matching option and selecting Overlap Areas allows only the overlapped areas to be corrected, preserving the brightness values for the other areas of the images. Clicking Process and then Run Mosaic produces this image:

Finally, a difference image is created using binary change detection on two Eau Claire images. By clicking Two Image Functions under Functions in the Raster tab, the two Eau Claire images can be entered as their respective inputs. The operator is changed from (+) to (-), and the layers for the images are reduced to four. Image differencing is then run. By viewing the histogram in the image's metadata, the standard deviation and mean are observed to obtain the upper and lower bounds with the equation mean + 1.5 × standard deviation, displayed below:

After this, the Spatial Modeler was used with Model Maker. Two raster objects were created as inputs to a function, a third raster object was placed as the output, and all the objects were connected using connectors. The two Eau Claire images were used for the input raster objects. The function used for the two images is "image1 - image2 + 127". Run the model to view the final image, then examine its metadata and histogram. To find the upper and lower bounds, the equation mean + 3 × standard deviation is used. Model Maker is then used to show the pixels that changed between 1991 and 2011 using the change/no-change threshold, i.e., the upper and lower bounds found above. Input the differenced image as the input raster, run the function as an Either IF OR conditional with the change/no-change threshold value, and run the model. The map is then displayed in ERDAS, showing the pixels that changed between the two images. ArcMap is then used to more easily display these changes by overlaying them on the 1991 image, as seen below:

Results:
Through the use of the Subset & Chip tool, a subset could be taken from a larger image in order to observe specific areas. In the pan-sharpened images, the finished product appears smoother and displays greater detail in areas such as roads and farm fields than the harsher, more pixelated originals.
With haze reduction, images appear much darker and clearer than the original. Bodies of water are nearly black, and clouds become nearly imperceptible.
When ERDAS is linked to Google Earth, Google Earth can be used as a tool for association in image interpretation. When an image becomes too pixelated when zoomed in ERDAS, the Google Earth view has much sharper quality, which can be used to identify buildings and other structures that otherwise couldn't be determined in ERDAS.
Images were then resampled using the nearest neighbor method. There is very little difference in appearance between the resampled image and the original; this is because nearest neighbor assigns each output pixel the brightness value of the single closest input pixel, effectively reproducing the same image. Bilinear interpolation, however, uses the brightness values of the four closest input pixels in a 2x2 window to calculate each output pixel's value, which creates a smoother, less blocky image when zoomed than the original.
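A small sketch of the two resampling rules just described, applied at a single output location (illustrative only; ERDAS's implementation also handles edges, bands, and georeferencing):

```python
import numpy as np

def nearest_neighbor(img: np.ndarray, x: float, y: float) -> float:
    # Copy the value of the single closest input pixel.
    return img[int(round(y)), int(round(x))]

def bilinear(img: np.ndarray, x: float, y: float) -> float:
    # Weight the four surrounding pixels in a 2x2 window by proximity.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[10., 20.], [30., 40.]])
print(nearest_neighbor(img, 0.4, 0.4))  # 10.0 (closest pixel wins)
print(bilinear(img, 0.5, 0.5))          # 25.0 (average of the 2x2 window)
```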
Mosaic Express is introduced next. Through the use of Mosaic Express, the images are seamed together, but there is not a smooth color transition between the two at the boundary: the top layer is clearly darker than the bottom, and it is apparent where the two images intersect. MosaicPro, however, creates a nearly seamless transition between images, and the colors match between the two images much more closely than with Mosaic Express. The MosaicPro image is more accurate because only the overlapping areas were adjusted, instead of the entirety of both images.
After mosaicking, image differencing is used to compare the two Eau Claire images. Using the metadata and histogram of the difference image, the lower and upper bounds were determined with the given equation: the lower bound is -24.47 and the upper bound is 71.8.
Finally, image differencing is implemented to show the difference between Eau Claire in 1991 and 2011. The areas of change shown in red above lie close to urban centers, showing the activity and growth of these areas and the change in the land over the last 20 years. However, these areas are fairly scattered, with only a few concentrated areas of change. This could be a result of deforestation, changes in crops, or the expansion of urban areas.
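A numpy sketch of the binary change detection built in Model Maker, using random stand-in arrays for the 1991 and 2011 images (the +127 offset and mean ± 3σ threshold follow the equations given in the methods):

```python
import numpy as np

rng = np.random.default_rng(1)
image1 = rng.integers(0, 255, size=(4, 200, 200)).astype(float)  # stands in for 2011
image2 = rng.integers(0, 255, size=(4, 200, 200)).astype(float)  # stands in for 1991

diff = image1 - image2 + 127          # the Model Maker function
mean, std = diff.mean(), diff.std()
lower, upper = mean - 3 * std, mean + 3 * std

# "Either IF OR": 1 where the difference falls outside the threshold, else 0.
changed = ((diff < lower) | (diff > upper)).any(axis=0).astype(np.uint8)
print(f"changed pixels: {changed.sum()}")
# With random stand-ins nothing exceeds 3 sigma; real imagery flags genuine change.
```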
Source: Cyril Wilson, Geog 338, Fall 2015