Wednesday, October 29, 2014

Lab 7: Digital Change Detection

Goals

The objective of this lab exercise is to conduct digital change detection. This skill is extremely important and can be applied to a variety of different fields. For example, digital change detection can be used to monitor vegetation health, urban growth, pollution and more. In this particular lab we will learn how to perform a quick qualitative change detection, how to quantify post-classification change, and how to design a model which maps detailed "from-to" changes in land use/land cover over a specific period of time.

Methods

Qualitative Change Detection

While there are a few different ways to produce a change detection image, one which is relatively simple and quick is qualitative change detection. To do this, I put the near-infrared bands of two images from different dates into the red, green and blue color guns in ERDAS Imagine 2013. This causes the pixels which changed over time to become highlighted in a different color than the rest of the image, so that I can draw qualitative conclusions about the changes in the study area over time. An example can be seen in Fig. 1. This example looks at changes in features within Eau Claire and Chippewa Counties from 1991 to 2011.



(Fig. 1) The bright red regions are where features changed within Eau Claire and Chippewa Counties between 1991 and 2011.
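The same idea can be sketched outside of ERDAS with a few lines of array math. The snippet below is only a minimal illustration using placeholder arrays in place of the real co-registered NIR bands (the array names and sizes are assumptions, not the lab data); it stacks the two dates into different color guns so that changed pixels show up as a color shift.

```python
import numpy as np

# Placeholder NIR bands for the two dates; in practice these would be
# the co-registered near-infrared bands from the 1991 and 2011 images.
nir_1991 = np.random.randint(0, 256, (100, 100)).astype(np.uint8)
nir_2011 = np.random.randint(0, 256, (100, 100)).astype(np.uint8)

# Red gun <- 2011 NIR, green and blue guns <- 1991 NIR.  Unchanged pixels
# render as gray, while pixels whose NIR response changed between the two
# dates stand out as red (brighter in 2011) or cyan (brighter in 1991).
composite = np.dstack([nir_2011, nir_1991, nir_1991])
```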

Calculating Quantitative Changes in Multidate Classified Images

In this portion of the lab, I assessed the quantitative changes in the land use/land cover classes in the Milwaukee Metropolitan area from 2001 to 2006. The first step of this process involved bringing both the 2001 and 2006 images into ERDAS Imagine (Fig. 2). These images have already been classified, which simplifies the comparison.


(Fig. 2) This image shows LULC classification of the Milwaukee Metropolitan Area in 2001 (left) and 2006 (right).

The next step, however, was more complicated. I had to determine the percent change in the land use/land cover (LULC) classes between the two images. To do this I created an Excel document which contained the area (in hectares) of each LULC class. To calculate the area I needed to convert the histogram values (pixel counts) of the images into square meters, then square meters into hectares. The histogram values come from the attribute tables of each of the images (Fig. 3); the same is done for the 2006 image. Then, I subtracted the 2001 values from the 2006 values, divided by the 2001 values and multiplied by 100 to determine the percent change.


(Fig. 3) Attributes of the 2001 image are used to determine the area in hectares of each LULC class within the image.
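To make the area and percent-change arithmetic explicit, here is a small sketch of the same calculation done in Excel, using made-up histogram counts (the numbers below are placeholders, not the Milwaukee values) and the 30 m x 30 m pixel size of the classified images.

```python
import numpy as np

# Placeholder histogram counts (pixels per LULC class) as read from the
# attribute tables of the 2001 and 2006 classified images.
counts_2001 = np.array([120000, 450000, 300000, 95000, 40000], dtype=float)
counts_2006 = np.array([118000, 430000, 310000, 112000, 38000], dtype=float)

pixel_area_m2 = 30 * 30                          # 30 m x 30 m pixels
ha_2001 = counts_2001 * pixel_area_m2 / 10000    # square meters -> hectares
ha_2006 = counts_2006 * pixel_area_m2 / 10000

# Percent change in each class relative to the 2001 baseline.
pct_change = (ha_2006 - ha_2001) / ha_2001 * 100
print(np.round(pct_change, 1))
```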

Developing a "from-to" Change Map of Multidate Images

Producing a map to show the changes of the LULC classes over the study area is a bit more complicated. The "from-to" change map looks specifically at LULC classes which change from one class to another over the period we are studying (2001 to 2006). To measure these changes we will be using the Wilson-Lula algorithm, which involves a complex model that can be seen in Fig. 4. In this exercise I focused on the areas which changed from: agriculture to urban/built-up, wetlands to urban/built-up, forest to urban/built-up, wetland to agriculture and agriculture to bare soil. The second set of formulas applied to the images uses conditional (either-or) functions to isolate each of these class transitions.


(Fig. 4) The model maker which shows the process involved in creating a "from-to" change detection image.
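The conditional (either-or) logic at the heart of the from-to model can also be expressed directly as pixel-wise comparisons. The sketch below uses invented class codes and random rasters purely for illustration (the actual NLCD class codes and the Wilson-Lula model structure differ); each mask flags the pixels that belonged to one class in 2001 and another in 2006.

```python
import numpy as np

# Illustrative class codes only; the real NLCD codes are different.
FOREST, AG, URBAN, BARE, WETLAND = 2, 3, 4, 5, 6

lulc_2001 = np.random.randint(1, 7, (200, 200))   # stand-in for the 2001 raster
lulc_2006 = np.random.randint(1, 7, (200, 200))   # stand-in for the 2006 raster

# Each "from-to" transition is an either-or test applied pixel by pixel.
ag_to_urban      = (lulc_2001 == AG)      & (lulc_2006 == URBAN)
wetland_to_urban = (lulc_2001 == WETLAND) & (lulc_2006 == URBAN)
forest_to_urban  = (lulc_2001 == FOREST)  & (lulc_2006 == URBAN)
wetland_to_ag    = (lulc_2001 == WETLAND) & (lulc_2006 == AG)
ag_to_bare       = (lulc_2001 == AG)      & (lulc_2006 == BARE)

# Combine the masks into a single coded change raster (0 = no tracked change).
change = (1 * ag_to_urban + 2 * wetland_to_urban + 3 * forest_to_urban
          + 4 * wetland_to_ag + 5 * ag_to_bare)
```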

Results


While the qualitative methods of change detection are useful for gaining general information about areas which changed over a period of time, no quantitative conclusions can be drawn from these types of images. Using a "from-to" change detection model we can draw conclusive data from the output image (as can be seen in Fig. 5). The changed pixels are difficult to see at the scale of the four-county map in Figure 5, but conclusions can still be drawn from it.


Sources

The data used in this lab exercise contains images from the Earth Resources Observation and Science Center, US Geological Survey and the ESRI U.S. Geodatabase.

The 2001 and 2006 National Land Cover Datasets were provided by the following: 

Homer, C., Dewitz, J., Fry, J., Coan, M., Hossain, N., Larson, C., Herold, N., McKerrow, A., VanDriel, J.N., and Wickham, J. 2007. Completion of the 2001 National Land Cover Database for the Conterminous United States. Photogrammetric Engineering and Remote Sensing, Vol. 73, No. 4, pp. 337-341.

Fry, J., Xian, G., Jin, S., Dewitz, J., Homer, C., Yang, L., Barnes, C., Herold, N., and Wickham, J. 2011. Completion of the 2006 National Land Cover Database for the Conterminous United States. PE&RS, Vol. 77, No. 9, pp. 858-864.

Tuesday, October 28, 2014

Lab 6: Classification Accuracy Assessment

Goals

The main goal of this lab is to perform a classification accuracy assessment of an image which has previously undergone land use/land cover classification. It is necessary to conduct an accuracy assessment when producing an image classification, as it is an important part of drawing conclusions from the output image. In order to complete an accuracy assessment we will be collecting ground reference points as testing samples and using those samples to perform the accuracy assessment.

Methods

The first step in the process of accuracy assessment is to generate ground reference testing samples. Prior to an image being assessed for accuracy, testing samples need to be collected. These points can be collected via field work before the actual land classification begins. Ground reference points are coordinate samples which are collected by some form of GPS. These points can also be created with the help of a high resolution image or aerial photograph. 

For the first part of this lab exercise I will be creating ground reference points from an aerial image in ERDAS Imagine 2013. The coded, unsupervised classification image from lab 4 is the image whose accuracy will be assessed. Next, I will open the accuracy assessment window under the raster tab and open the lab 4 image in it. Then I will specify the reference image as the 2005 image provided in this lab (Fig. 1). After this has been done, it's time to generate the random points on the reference image which will later be used to determine the accuracy of the classified image produced in lab 4. To do this I will select the "add random points utility" option, which will generate random points throughout the reference image. Once this is complete I can compare the reference values to the class values within the classified image.


(Fig. 1) The image whose accuracy I will be assessing is shown on the left, while the reference image used to classify the random points in the accuracy assessment is on the right.

When setting the conditions in the random points dialog, I will change the number of points to 125, select the stratified random distribution parameter and set the minimum points to 15. It is also important to highlight the classes which we used for land use/land cover classification in previous labs (water, forest, agriculture, urban/built-up and bare soil). Once all this is done it is time to click OK, and the random points will be generated (Fig. 2).



(Fig. 2) This image shows the accuracy assessment tool which has generated 125 random ground reference points for the analyst to use to determine the accuracy of the image on the left.
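Conceptually, the stratified random option allocates the 125 points among the highlighted classes in proportion to their area while honoring the minimum of 15 per class. The sketch below is a rough approximation of that behavior with a synthetic classified raster; it is not the ERDAS implementation, just the idea.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic classified raster with classes 1-5, standing in for the
# coded unsupervised classification image from lab 4.
classified = rng.integers(1, 6, size=(500, 500))

n_total, n_min = 125, 15
classes, counts = np.unique(classified, return_counts=True)

# Allocate points in proportion to class area, but never below the minimum.
alloc = np.maximum(np.round(n_total * counts / counts.sum()).astype(int), n_min)

points = []
for cls, n in zip(classes, alloc):
    rows, cols = np.nonzero(classified == cls)
    pick = rng.choice(len(rows), size=n, replace=False)
    points.extend(zip(rows[pick], cols[pick], [cls] * n))
```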

Once the random ground reference points have been created it is time to classify them so that they can be evaluated. To do this I will first highlight the first 10 reference points and set the view to show only the selected points. This makes it easier to find each of the points. Starting with number 1, I will zoom into each point so I can determine the LULC class it belongs to. After I have determined which class it belongs to, I will enter the corresponding number in the reference column, using the same codes as in previous labs (1-water, 2-forest, 3-agriculture, 4-urban/built-up, 5-bare soil). This process is repeated until all the data points have been classified (Fig. 3).


(Fig. 3) Each of the random ground reference points has been assigned a reference number corresponding to a LULC class, which will be used in the accuracy assessment process.

After that is done, it is time to produce the accuracy report to see how accurate our original image classification was. The accuracy report contains a great deal of information, including the error matrix, overall accuracy, producer's and user's accuracy and the Kappa statistic (Fig. 4).


(Fig. 4) The accuracy report produced after completing the accuracy assessment has a great deal of data.
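The quantities in the accuracy report all come from the error matrix, so they are easy to reproduce by hand. The following sketch computes overall accuracy, producer's and user's accuracies and the Kappa statistic from made-up reference and classified labels (random placeholders, not my actual 125 points).

```python
import numpy as np

# Placeholder reference (analyst-assigned) and classified codes for 125
# points; codes follow 1-water, 2-forest, 3-agriculture, 4-urban, 5-bare soil.
reference  = np.random.randint(1, 6, 125)
classified = np.random.randint(1, 6, 125)

n_classes = 5
# Error matrix: rows = classified class, columns = reference class.
matrix = np.zeros((n_classes, n_classes), dtype=int)
for c, r in zip(classified, reference):
    matrix[c - 1, r - 1] += 1

total = matrix.sum()
overall_accuracy = np.trace(matrix) / total

producers = np.diag(matrix) / matrix.sum(axis=0)   # correct / reference totals
users     = np.diag(matrix) / matrix.sum(axis=1)   # correct / classified totals

# Kappa compares observed agreement with the agreement expected by chance.
expected = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / total**2
kappa = (overall_accuracy - expected) / (1 - expected)
```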

For the rest of this lab I will conduct the same process in order to compare the accuracy of unsupervised and supervised classification. In the example above, the image being assessed was created by unsupervised classification, so I will conduct the exact same process with an image I produced using supervised classification. Once the process is complete I can compare the data in the two accuracy reports to determine which method is more accurate.

Results

After completing two separate accuracy assessments, one for the unsupervised and one for the supervised classification, I was able to determine that the supervised classification method is more accurate. While the difference was not as obvious as I initially expected, there was definitely a difference between the two accuracy assessments. Not only was the overall accuracy greater in the image which had undergone supervised classification, but the user's and producer's accuracies were greater for the LULC classes as well.

In completing this lab I now have the knowledge and skills to assess the accuracy of a land use/land cover classification. With this understanding, I can determine that the images I produced in previous labs are not accurate enough to be used in real-world applications, as both were only about 60% accurate.

Sources

Data for this laboratory exercise was collected from the following sources: Earth Resources Observation and Science Center (Landsat satellite image), US Geological Survey and the United States Department of Agriculture (USDA) National Agriculture Imagery Program (high resolution image). All data was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.

Tuesday, October 14, 2014

Lab 5: Pixel-Based Supervised Classification

Goals and Background

The main goal of this laboratory exercise is to teach the image analyst to extract both biophysical and sociocultural information from remotely sensed images using the pixel-based supervised classification method. In the previous lab we used unsupervised classification methods to produce various land use/land cover classes in Eau Claire and Chippewa Counties; in this lab we will be using a different classification method in the same study area. Eau Claire and Chippewa Counties have a variety of land use/land cover features. We will specifically classify water, urban/built-up areas, agriculture, bare soil and forest. To do so we will be employing training samples in order to produce meaningful results.

Methods

The first portion of the lab requires the collection of training samples for the supervised classification. Training samples are basically spectral signatures which we will collect. In previous labs we relied on spectral signatures from the ASTER and USGS spectral libraries, however in this lab we will be collecting our own. The reason behind doing this is to "train" a maximum likelihood classifier which will be used in the supervised classification in order to classify features which cover the Chippewa and Eau Claire Counties.

We will be collecting multiple training samples for the following land use/land cover surface features: water, forest, agriculture, urban/built-up and bare soil. It is important to collect multiple samples in order to make sure that you have a variety of sample types so that the maximum likelihood classifier will be able to make the best classification. No two features have exactly the same reflectance values, and therefore it is important to provide the model with samples from many different features in your study area. For this reason we will be collecting a minimum of 12 samples of water, 11 of forested areas, 9 of agricultural land, 11 of urban/built-up areas and 7 of bare soil.

The first step in this process is to open the original image in ERDAS Imagine 2013. We will start by collecting training points for water. To do so we will zoom into areas in our image where water is present and collect the spectral signature. In order to collect the signature we will need to use the Draw tool, specifically the polygon tool, to select a portion of the area. Once a polygon has been drawn it is time to open the Signature Editor under the supervised classification icon. We will be creating a new signature from an AOI. Making sure that the polygon which contains our sample is still selected, we will add the sample's spectral signature to the editor. Then we will change the signature name to make the samples easier to identify, in this case changing it to "water 1" (Fig. 1). This process is then repeated for the rest of the water samples that need to be collected. It is important to make sure that samples are taken from various regions of the original image and not all from the same body of water.


(Fig. 1) A water training sample is created using the polygon drawing tool. The selected area's spectral signature is then added to the signature editor.

Once this is complete the next step is to repeat this process for forested areas, agricultural land, urban/built-up areas and bare soil. These features can be more difficult to identify in a false color image. To ensure that we are selecting the correct features, we can pair and link our image with Google Earth so that samples are taken from the correct land use/land cover areas. Once the image is paired with Google Earth it is time to collect the required number of training samples for each of the land use/land cover classes listed above. The process for creating each sample is the same (Fig. 2).


(Fig. 2) This image shows the collection of a training sample for the urban/built-up land use/land cover classification. It was identified as a building with a tin roof at the airport in Eau Claire via Google Earth.

The next step is to evaluate the quality of the training samples you took. It's important to do this before they are used to train a supervised classifier. The quality is evaluated based on the separability among pixels: the greater the spectral separability, the more accurate the resulting classification will be. Before we calculate the separability for our samples we will first look at the signature mean plots for all the land use/land cover classes we collected training points for. To do this we will highlight all of the water signature samples and click the "Display Mean Plot Window" button at the top of the signature editor window. To make the graph clearer, we can right click on the color tab while all the water samples are highlighted and change the color to blue. (We will do the same for forest but make it green, agriculture will be pink, urban areas will be red and bare soil will be sienna.) Then we will be able to view the signature mean plot. We can look at all the samples at once this way, but it might be necessary to select the option to "Scale Chart to Fit Current Signatures" so we can see the mean values of all our samples for all six bands (Fig. 3). This information helps us judge how good our training samples were just by looking at the pattern of the graphs.


(Fig. 3) This signature mean plot shows the mean reflectance values of all the water training samples collected earlier in the lab for each of the six reflective bands.
 
 
After going through all the signature mean plots and making sure there are no lines which are drastically out of place we can highlight all of our training samples and open a single signature mean plot to display all of them (Fig. 4).
 
 
 
(Fig. 4) This signature mean plot shows the mean spectral reflectance for each band of all the training samples collected in this laboratory exercise.
 
Finally, we highlight all the signatures and evaluate the separability in the signature editor (Fig. 5). Once transformed divergence is selected as the distance measurement, we click OK to generate the separability report (Fig. 6).
 




(Fig. 5) This image shows the window used to evaluate the separability using the training samples created earlier in the lab which were stored in the signature editor. 
 
 
(Fig. 6) The generated separability report for the training samples we collected.
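Transformed divergence itself is a statistic computed from each pair of signature means and covariance matrices, scaled so that 2000 is the ceiling (pairs near 2000 are well separated). The sketch below evaluates the standard formula for one hypothetical pair of 6-band signatures; the means and covariances are randomly generated stand-ins, not the signatures collected in this lab.

```python
import numpy as np

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """Transformed divergence between two class signatures (0-2000 scale)."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    diff = (mean_i - mean_j).reshape(-1, 1)
    # Divergence between two multivariate normal class models.
    d = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
        + 0.5 * np.trace((ci_inv + cj_inv) @ (diff @ diff.T))
    # Transformed divergence saturates at 2000 (perfect separability).
    return 2000.0 * (1.0 - np.exp(-d / 8.0))

# Hypothetical 6-band means and covariances for two merged signatures.
rng = np.random.default_rng(0)
mean_a, mean_b = rng.random(6) * 50, rng.random(6) * 50 + 60
a, b = rng.random((6, 6)), rng.random((6, 6))
cov_a, cov_b = a @ a.T + np.eye(6), b @ b.T + np.eye(6)

print(transformed_divergence(mean_a, cov_a, mean_b, cov_b))
```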
 
The final step for this process is to merge the like signatures (i.e. all the water samples, all the forest samples) together. To do so, highlight all the training points for a class and, under Edit, select the option to merge. A new class will appear at the bottom of the list, which should be renamed according to the respective land use/land cover class and given the correct color. Once this is done for all the classes the original points can be deleted, leaving you with the final five signatures. These can then be plotted on a signature mean plot (Fig. 7).
 

 
(Fig. 7) This signature mean plot shows the final 5 land use/land cover classes which include: water, forest, agriculture, urban/built-up and bare soil.
 
Now it is time to perform the supervised classification itself. This is done by opening the supervised classification window, using the original image as the input raster and the merged signature file created from the training points as the input signature file. It is important at this step to make sure that no non-parametric rule is set and that the parametric rule is set to maximum likelihood. Once the classification is complete, your final output image will be produced (Fig. 8).
 
 
(Fig. 8) The final output image using supervised classification based on the collection of training points and their merged spectral signatures.
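Under the hood, the maximum likelihood rule assigns each pixel to whichever class's multivariate normal model (defined by the merged signature's mean and covariance) gives it the highest likelihood. The sketch below shows that decision rule on synthetic data; the signatures and image values are random placeholders, and ERDAS's implementation includes details not reproduced here.

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mean, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        diff = pixels - mean
        # Log-likelihood (discriminant) under this class's normal model.
        g = -0.5 * logdet - 0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(g)
    return np.argmax(np.stack(scores), axis=0)

# Placeholder merged signatures (mean and covariance per class) and a
# 6-band image flattened to (n_pixels, 6).
rng = np.random.default_rng(1)
means, covs = [], []
for _ in range(5):
    means.append(rng.random(6) * 100)
    a = rng.random((6, 6))
    covs.append(a @ a.T + np.eye(6))
image = rng.random((1000, 6)) * 100

labels = ml_classify(image, means, covs)   # 0-4 -> water ... bare soil
```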
 
Results
 
After completing this laboratory exercise we gained an understanding of the differences between unsupervised and supervised classification. While the supervised classification was much more time consuming, it provided a much more accurate output image than those produced from unsupervised classification. Since supervised classification uses spectral signatures collected by the analysts themselves, it is more accurate than simply letting the algorithms in ERDAS cluster the pixels. By merging many training points together we are able to produce an accurate spectral signature for a variety of different features in the original image and use those signatures to produce the classification.
 


Sources
 
All the Landsat satellite images used in this laboratory exercise are from Earth Resources Observation and Science Center and United States Geological Survey. (Data was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.)

Tuesday, October 7, 2014

Lab 4: Unsupervised Classification

Goals

The goal of this lab is to help the image analyst extract information about both biophysical and sociocultural features from remotely sensed images. To do so, we will use unsupervised classification algorithms. The two main objectives of this laboratory exercise are to execute unsupervised classification by supplying the correct inputs to such an algorithm, and to recode the resulting spectral clusters into useful land use/land cover information.

Methods

Experimenting with Unsupervised ISODATA Classification Algorithm
This process uses the ISODATA (Iterative Self-Organizing Data Analysis Technique) algorithm to produce a classified image covering both Eau Claire and Chippewa Counties.

The first step in this process is to set up an unsupervised classification algorithm. Once the original image is opened in ERDAS Imagine 2013, select the unsupervised classification tool under the raster toolbar. We will use our original image as the input file and change the number of classes to 10-to-10. This means that the algorithm will classify the brightness values within the image into a total of 10 different categories. Next, set the maximum number of iterations to 250. This means that the algorithm will run up to a total of 250 times in order to make sure it does not group unlike features together (Fig. 1). Once this is complete, run the model and compare the input and output images (Fig. 2).




(Fig. 1) This image shows the input image as well as the Unsupervised Classification window that is being used to classify the data in the original image. The settings have been adjusted as described above.


(Fig. 2) The input image can be seen on the left while the output image is on the right. The output image has undergone an unsupervised classification algorithm. 
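ISODATA is closely related to k-means clustering (it adds rules for splitting and merging clusters between iterations), so the settings above can be illustrated with a short clustering sketch. The snippet below uses scikit-learn's k-means on a synthetic 6-band image as a rough stand-in for the ERDAS ISODATA tool; it is not the same algorithm, only the same basic idea.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 6-band image flattened to (n_pixels, n_bands).
rng = np.random.default_rng(0)
pixels = rng.random((10000, 6)) * 255

# 10 clusters and up to 250 iterations, mirroring the dialog settings.
kmeans = KMeans(n_clusters=10, max_iter=250, n_init=1, random_state=0)
clusters = kmeans.fit_predict(pixels)        # cluster id for every pixel
cluster_image = clusters.reshape(100, 100)   # back to image dimensions
```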

The next step of the process is to recode the clusters produced by the unsupervised classification algorithm into meaningful classes which will show the land use/land cover of Eau Claire County. To do so, we will open the image attributes and recode the classes. The best way to do this is to select each cluster individually and change its color to a bright yellow so it stands out (Fig. 3). Then we decide which category it belongs to and re-color it. The class names and colors we will be using are as follows: Water-Blue, Forest-Dark Green, Agriculture-Pink, Urban/built-up-Red and Bare soil-Sienna.


(Fig.3) This image displays the image which had previously undergone the unsupervised classification and has been recoded according to the class names and colors as described above.

Improving the Accuracy of Unsupervised Classification

One of the main problems that arose from the first classification process is that it was difficult to identify the differences between the forest and agriculture land use/land cover classes. Since this is the case, it is not clear which classes are correct and which are incorrectly classified. For this reason, we will be running the same process as we did above, however, we will be changing the minimum and maximum number of classes to 20-to-20 as opposed to the 10-to-10 we did in the above section. This should help to make a more accurate map when it comes to classifying the unsupervised clusters. The same process is repeated, the algorithm is run and the analyst recodes the image in order to select the appropriate classes for the clusters. Once this is complete we can compare the output image from the previous section and the new output image to see if changing the number of classes had an effect on the map and classifications (Fig. 4).




(Fig. 4) The first output image can be seen on the right which used 10-to-10 classes to perform the classification process and on the left is the image produced after using the 20-to-20 classification.

As can be seen in the comparison of the two images in Fig. 4, by increasing the number of classes in the unsupervised classification algorithm, the various LULC (land use/land cover) classes can be more accurately determined. The original output image was challenging to classify because the 10 clusters which the algorithm produced grouped much of the agriculture and forest land together. This led to the image being dominated by agriculture (represented by the color pink), which is not the correct distribution of land in Eau Claire County. After more classes were added it was easier to accurately identify LULC classes because the clusters were more accurate.

Another way that we can further organize the classes is to change the column properties in the raster attributes. This allows us to choose the order in which the columns are displayed when we view the attributes of the data.

Then we will recode the LULC classes in order to make it easier to generate maps from this data. To do so we will select the thematic tab under raster and select the recode tool. The next step is to change the new values so that all like LULC classes are grouped together. For example, the map has a number of classes which were classified as agriculture; rather than having multiple classes for the same LULC, we will change the values for all the agriculture classes to 3. (Water will be 1, forest will be 2, urban/built-up will be 4 and bare soil will be 5.) This makes it much easier when we go to create a professional-looking map using this data. In the last step we will use ArcMap to produce a map which presents the classification data in a more understandable manner for viewers (Fig. 5).




(Fig. 5) The image above shows how the classification work done in ERDAS Imagine can be used to create a visually pleasing map which is easier to interpret by the viewer. 
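The recode step is essentially a lookup from old cluster values to the five final class codes. The sketch below applies a hypothetical recode table (the cluster-to-class assignments are invented; the real ones come from the visual interpretation described above) to a synthetic clustered raster.

```python
import numpy as np

# Hypothetical recode table: 20 cluster ids -> final LULC codes
# (1-water, 2-forest, 3-agriculture, 4-urban/built-up, 5-bare soil).
recode_table = {1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3, 9: 3, 10: 3,
                11: 4, 12: 4, 13: 4, 14: 5, 15: 5, 16: 2, 17: 3, 18: 3,
                19: 4, 20: 5}

clusters = np.random.randint(1, 21, (200, 200))   # stand-in clustered image

# Build a lookup array indexed by the old cluster value and apply it.
lookup = np.zeros(21, dtype=int)
for old, new in recode_table.items():
    lookup[old] = new
recoded = lookup[clusters]
```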

Results

As a result of completing this laboratory exercise the image analyst learned a number of important techniques for classifying remotely sensed data, particularly for land use/land cover. Unsupervised classification was used to produce all the above images; to better understand how the algorithms used in this process work, we made changes to see how they affect the images and therefore the classification. In the first portion of the lab we used 10-to-10 classes in the algorithm and tried to classify the LULC (land use/land cover) classes from there, but it was difficult. After completing the second portion of the lab exercise, where we used 20-to-20 classes in the algorithm, we gained more insight into how the number of classes can affect the overall accuracy of the classification of the image. In the end we learned that the greater the number of classes used in the unsupervised classification algorithm, the easier it is to identify LULC classes.

Sources

Data for this lab was collected from the United States Geological Survey and the Earth Resources Observation and Science Center.

Thursday, October 2, 2014

Lab 3: Radiometric and Atmospheric Correction

Goals and Background

The main goal of this lab is to practice atmospherically correcting remotely sensed images. Throughout this lab the image analyst will further develop their skills in atmospheric correction using multiple methods. These methods include: absolute atmospheric correction using empirical line calibration, absolute atmospheric correction using enhanced image based dark object subtraction, and relative atmospheric correction using multidate image normalization.

Methods

Absolute Atmospheric Correction: Using ELC (Empirical Line Calibration)

Throughout this part of the lab we will be using the ELC method, which matches the remotely sensed data to in situ data collected over the area of interest at the same time the image is taken. ELC is calculated with the following equation: CRk = DNk * Mk + Lk, where CRk is the corrected output pixel value for band k, DNk is the raw pixel value of the band to be corrected, Mk is a multiplicative term which affects the brightness values, and Lk is an additive term.

The Mk value acts as the gain and Lk acts as the offset; they come from the regression equations fitted between the spectral reflectance measurements of the in situ data and the spectral reflectance measurements of the sensor for the same areas.
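In other words, for each band a line is fitted through the paired (image value, reference reflectance) samples, and its slope and intercept become Mk and Lk. The sketch below shows that fit for a single band with invented sample values (the numbers are placeholders, not measurements from this lab).

```python
import numpy as np

# Placeholder paired samples for one band: raw image values for the
# sampled surfaces (e.g. road, forest, rooftop, water) and the matching
# reflectance values taken from the spectral libraries.
image_dn   = np.array([72.0, 41.0, 180.0, 15.0])
library_rf = np.array([0.18, 0.07, 0.55, 0.02])

# Least-squares regression gives the gain (Mk) and offset (Lk) for the
# band, so that CRk = DNk * Mk + Lk.
m_k, l_k = np.polyfit(image_dn, library_rf, 1)

# Apply the correction to every pixel of that band.
band = np.random.randint(0, 256, (100, 100)).astype(float)
corrected = band * m_k + l_k
```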

In order to perform this type of atmospheric correction, we will use the Spectral Analysis Work Station in ERDAS Imagine 2013. The first step is to then open an analysis image, which is the image you want to correct. Once it has been added, then the next step is to click on the "edit atmospheric correction" option and select empirical line as the method (Fig. 1).




(Fig. 1) The atmospheric adjustment tool in the spectral analysis workstation is used to conduct the ELC atmospheric correction.

Once this window is open it is time to begin collecting samples and identifying references from various spectral libraries in order to conduct the ELC. To do this we will first take a sample of a road. We need to find a road surface feature in our image, select the color grey and then use the "create a point selector" tool to carefully select the middle of a road feature. This will add a line to the graph in the bottom right corner of the atmospheric adjustment tool window. The next task is to add the in situ spectral reflectance signature of an asphalt surface from the ASTER spectral library. Once this has been added you can see the difference between the signature of the image you are correcting and the spectral signature for that particular surface feature type (Fig. 2).




(Fig. 2) This image shows the spectrum plot for asphalt. The sample is from a road on our original image while the reference is from the ASTER spectral library. 

We will then continue this process for surface features including vegetation/forest, aluminum rooftop and water. Now we will execute the ELC atmospheric correction. The spectral analysis workstation has created the regression equations for each of the bands in the image. After saving the regression information, we will go back to the spectral analysis workstation and select preprocess and then atmospheric adjustment. After this has been run you will end up with the final output image which has been atmospherically corrected using the empirical line calibration method (Fig. 3).


(Fig. 3) The original image has been atmospherically corrected using the ELC (empirical line calibration) method.

Absolute Atmospheric Correction: Using Enhanced Image Based Dark Object Subtraction

The next method of atmospheric correction we will be using is the DOS (dark object subtraction) method. This process employs a number of parameters to atmospherically correct an image, including: sensor gain, offset, solar zenith angle, atmospheric scattering, solar irradiance, absorption and path radiance. This method requires two steps: first, the conversion of the image taken by the satellite to an at-satellite spectral radiance image; second, the conversion of the at-satellite image to true surface reflectance.

Step 1:
For step one we will first open model maker in ERDAS Imagine 2013. We will create 6 independent models in the same model maker window to save time. The input raster image will be each individual band from the original image we want to atmospherically correct. The formula is the one seen in Fig. 4; much of the data it requires can be found in the image metadata. Once the formula is added to the model maker and the output images are saved in the correct place, the model can be run (Fig. 5).


(Fig. 4) The formula used to convert the original satellite image to an at-satellite spectral radiance image. (Formula provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.)



(Fig.5) The model maker module for the first step of the DOS method should include individual models for each of the 6 bands using the individual data found in the band metadata in the formula from Fig. 4.

Step 2:
Step two is basically the same process as step 1; however, the formula is different. The formula (Fig. 6) includes measuring path radiance, which is the distance from the origin of the histogram to the start of the actual histogram. This formula also uses the solar zenith angle, which is a constant value for all the bands, and the distance between the Sun and Earth. This distance depends on the day of the year and can be found in a chart which lists these values. The next step is to set up another model maker window with 6 individual models the same way we did in step 1, only using the new formula shown in Fig. 6. The new model will convert the at-satellite spectral radiance to true surface reflectance (Fig. 7).


(Fig. 6) This formula is used in the second step of the DOS method of atmospheric correction which will convert at-satellite spectral radiance to the true surface reflectance values.


(Fig. 7) The model maker module for the second step of the DOS method should include individual models for each of the 6 bands using the individual data found in the band metadata in the formula from Fig. 6.
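To show how the two steps fit together, here is a sketch using the standard Landsat-style conversion equations (digital numbers to radiance via a gain and offset, then radiance to reflectance with the path radiance subtracted). All of the constants below are placeholders; the real values come from the formulas in Fig. 4 and Fig. 6, the image metadata, the ESUN table and the Earth-Sun distance chart.

```python
import numpy as np

# Placeholder per-band constants (not the values used in this lab).
gain, bias = 0.7657, -2.29        # sensor rescaling gain and offset
esun       = 1036.0               # exoatmospheric solar irradiance
d          = 0.9983               # Earth-Sun distance in astronomical units
theta_s    = np.radians(35.2)     # solar zenith angle
l_haze     = 4.5                  # path radiance read from the band histogram

dn = np.random.randint(1, 256, (100, 100)).astype(float)   # stand-in band

# Step 1: digital numbers -> at-satellite spectral radiance.
radiance = gain * dn + bias

# Step 2: at-satellite radiance -> true surface reflectance, with the
# histogram-derived path radiance (dark object) subtracted.
reflectance = (np.pi * (radiance - l_haze) * d**2) / (esun * np.cos(theta_s))
```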

Final Step:
After both of the above steps have been completed it is time to stack the 6 images produced from the second step. Once they have been stacked you can see the final output image compared to the original (Fig. 8). 



(Fig. 8) The image above shows the original image on the left and the atmospherically corrected image on the right using the DOS method.

Relative Atmospheric Correction: Using Multidate Image Normalization

The last method we will be using for atmospheric correction in this lab is multidate image normalization. This is a method usually used when absolute atmospheric correction is not possible due to a lack of in situ data or metadata for a remotely sensed image. To perform this type of atmospheric correction we will need to have both the Chicago 2000 image and the Chicago 2009 image open at the same time in ERDAS Imagine 2013. First we will link and synchronize the images to find various surface features. Once we find the first point, a rooftop at O'Hare International Airport, we will open the spectral profile tool under the multispectral option. Making sure to unlink and unsync the two images before creating a profile point in the same place in each image, we will collect the first spectral profile for each of the images. This means each image will need its own spectral profile. We will continue to collect more profile points from various surface features on the map. It is important that the points are taken at the same place in both images. For this lab we will be taking a total of 15 spectral signature points: 5 in Lake Michigan, 5 in urban/built-up areas, and 4 in internal lakes, in addition to the original point taken at the O'Hare Airport. After all the points have been taken, the final spectral profiles should have all the point data on the individual graphs for the Chicago 2000 and Chicago 2009 images (Fig. 9).


(Fig. 9) Once all the spectral signatures have been collected from both of the images they will have spectral profiles which contain all 15 points for each of the images. 

The next step is to click on the tabular data view in the spectral profile window. This data shows the actual pixel values of the samples collected from the images. To organize this data we will create a chart in Microsoft Excel that contains 15 rows (for the 15 samples collected) and 6 columns (for each of the six bands that make up each image). We will make two separate charts, one for the Chicago 2000 tabular data and the other for the Chicago 2009 tabular data (Fig. 10). These charts will contain the mean values for each band.

Next, we will create a scatter plot for each band which includes the mean values from both the 2000 image and the 2009 image. (A total of 6 graphs will be produced.) For each of the graphs we add a regression line; the slope represents the gain and the y-intercept represents the bias. This data is then used in the formula for correcting the image via the normalization method (Fig. 10). The next step is to open model maker once again and create 6 models in the same window, as has been done earlier in the lab. Each band of the Chicago 2009 image will act as a separate input image, and the formula from Fig. 10 will be used with the respective data for each band (Fig. 11). After the model maker has been run, the next step is to stack the layers to produce the final output image. Once this is done you can compare the original Chicago 2000 image to the atmospherically corrected image (Fig. 12).


(Fig. 10) This is the formula used when conducting relative atmospheric correction using the multidate image normalization method.
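The regression that Excel produces for each band can be reproduced in a few lines: fit a line through the 15 paired sample means, take the slope as the gain and the intercept as the bias, and apply them to the whole 2009 band. The values below are invented stand-ins for the tabular data, not the numbers collected in this lab.

```python
import numpy as np

# Placeholder mean pixel values for one band at the 15 sample points.
chicago_2009 = np.array([45, 43, 47, 44, 46, 120, 118, 125, 122, 119,
                         30, 33, 28, 31, 70], dtype=float)
chicago_2000 = np.array([50, 48, 52, 49, 51, 132, 128, 137, 134, 130,
                         35, 38, 32, 36, 78], dtype=float)

# The regression line through the scatter plot gives this band's gain
# (slope) and bias (y-intercept).
gain, bias = np.polyfit(chicago_2009, chicago_2000, 1)

# Apply the normalization to the full 2009 band so its radiometry
# matches the 2000 base image.
band_2009 = np.random.randint(0, 256, (100, 100)).astype(float)
normalized = gain * band_2009 + bias
```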


(Fig. 11) The model maker used for the multidate image normalization method uses each of the individual bands from the Chicago 2009 image and the formula from Fig. 10 to produce the output image.


(Fig. 12) The original image can be seen on the left while the atmospherically corrected image (using the multidate image normalization method) can be seen on the right.

Results
After completing this lab, the image analyst has the skills to perform both absolute and relative atmospheric correction. These methods include empirical line calibration, enhanced image based dark object subtraction and multidate image normalization. Each method has its own strengths and weaknesses; however, the analyst now has the knowledge to apply these correction methods to remotely sensed data of all kinds.

Sources
The data used throughout this lab exercise was collected from the following sources: the Landsat satellite images are from the Earth Resources Observation and Science Center, United States Geological Survey. All data was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.