The main goal of this laboratory exercise is to teach the image analyst to extract both biophysical and sociocultural information from remotely sensed images using pixel-based supervised classification. In the previous lab we used unsupervised classification to produce land use/land cover classes for Eau Claire and Chippewa Counties; in this lab we will apply a different classification method to the same study area. Eau Claire and Chippewa Counties contain a variety of land use/land cover features. Specifically, we will classify water, urban/built-up areas, agriculture, bare soil, and forest, employing training samples to produce meaningful results.
Methods
The first portion of the lab requires the collection of training samples for the supervised classification. Training samples are, in essence, spectral signatures collected by the analyst. In previous labs we relied on spectral signatures from the ASTER and USGS spectral libraries; in this lab we will collect our own. These samples are used to "train" a maximum likelihood classifier, which the supervised classification then uses to classify the features covering Chippewa and Eau Claire Counties.
We will be collecting multiple training samples for the following land use/land cover surface features: water, forest, agriculture, urban/built-up, and bare soil. Collecting multiple samples ensures a variety of sample types so that the maximum likelihood classifier can make the best possible classification. No two features have exactly the same reflectance values, so it is important to provide the model with samples drawn from many different features across the study area. For this reason we will collect a minimum of 12 samples of water, 11 of forested areas, 9 of agricultural land, 11 of urban/built-up areas, and 7 of bare soil.
The first step in this process is to open the original image in ERDAS Imagine 2013. We will start by collecting training points for water. To do so, we zoom into areas of the image where water is present and collect the spectral signature there. Collecting a signature requires the Draw tool, specifically the polygon tool, to select a portion of the area. Once a polygon has been drawn, we open the Signature Editor under the supervised classification icon and create a new signature from an AOI. With the polygon containing our sample still selected, we add the sample's spectral signature to the editor. We then change the signature name to make it easier to identify, in this case to "water 1" (Fig. 1). This process is repeated for the remaining water samples. It is important to take samples from various regions of the original image rather than all from the same body of water.
(Fig. 1) A water training sample is created using the polygon drawing tool. The selected area's spectral signature is then added to the signature editor.
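ERDAS Imagine computes each AOI's signature internally when the polygon is added to the Signature Editor. As a rough sketch of what that step does, the hypothetical helper below (all array values are synthetic, not from the lab data) averages the pixels inside a polygon mask, band by band, to produce one mean signature:

```python
import numpy as np

def aoi_signature(image, mask):
    """Mean spectral signature of the pixels inside an AOI mask.

    image: (bands, rows, cols) array of reflectance values
    mask:  (rows, cols) boolean array, True inside the drawn polygon
    """
    bands = image.shape[0]
    # Collect every pixel inside the AOI and average per band
    return np.array([image[b][mask].mean() for b in range(bands)])

# Tiny synthetic 6-band image standing in for the Landsat scene
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 0.3, size=(6, 20, 20))

mask = np.zeros((20, 20), dtype=bool)
mask[5:10, 5:10] = True  # the "polygon" drawn over a water body

sig = aoi_signature(image, mask)
print(sig.shape)  # one mean value per band -> (6,)
```

Each drawn polygon yields one such vector; the Signature Editor stores these (along with per-band covariances) as named signatures like "water 1".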
Once this is complete the next step is to repeat this process for forested areas, agricultural land, urban/built-up areas and bare soil. These features can be more difficult to identify in a false color image. In order to ensure that we are selecting the correct features we can pair and link our image with Google Earth to make sure we take samples from the correct land cover/land use areas. Once the image is paired with Google Earth it is time to collect the number of training samples of the various land cover/land use areas as listed above. It is the same process to create each sample (Fig. 2).
(Fig. 2) This image shows the collection of a training sample for the urban/built-up land use/land cover classification. It was identified as a building with a tin roof at the airport in Eau Claire via Google Earth.
The next step is to evaluate the quality of the training samples before they are used to train the supervised classifier. Quality is evaluated based on the separability among pixels: the greater the spectral separability, the more accurate the resulting classification. Before calculating separability, we will first look at the signature mean plots for all the land use/land cover classes we collected training points for. To do this, we highlight all of the water signature samples and click the "Display Mean Plot Window" button at the top of the Signature Editor window. To make the graph clearer, we can right-click the color tab while all the water samples are highlighted and change the color to blue. (We will do the same for the other classes: forest will be green, agriculture pink, urban areas red, and bare soil sienna.) Then we can view the signature mean plot. All the samples can be viewed at once this way, though it may be necessary to select "Scale Chart to Fit Current Signatures" to see the mean values of all our samples across the six bands (Fig. 3). This information helps us judge how accurate our training samples are just by looking at the pattern of the graphs.
(Fig. 3) This signature mean plot shows the mean reflectance values of all the water training samples collected earlier in the lab for each of the six reflective bands.
After reviewing all the signature mean plots and confirming that no lines are drastically out of place, we can highlight all of our training samples and open a single signature mean plot to display them together (Fig. 4).
(Fig. 4) This signature mean plot shows the mean spectral reflectance for each band of all the training samples collected in this laboratory exercise.
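The visual check above (looking for lines that are "drastically out of place") can also be expressed numerically. This hypothetical sketch (the function name, threshold, and sample values are all assumptions, not part of the lab) flags any sample whose per-band mean drifts well away from its class average:

```python
import numpy as np

def flag_outlier_samples(signatures, z_thresh=2.0):
    """Flag training samples whose band means drift from the class average.

    signatures: (n_samples, n_bands) array of per-sample band means.
    Returns indices of samples more than z_thresh standard deviations
    from the class mean in any band.
    """
    mean = signatures.mean(axis=0)
    std = signatures.std(axis=0) + 1e-9   # avoid divide-by-zero
    z = np.abs(signatures - mean) / std
    return np.where((z > z_thresh).any(axis=1))[0]

# Eleven well-behaved synthetic "water" samples plus one drawn over mixed pixels
water = np.tile([0.06, 0.05, 0.04, 0.02, 0.01, 0.01], (11, 1))
water += np.random.default_rng(1).normal(0, 0.002, water.shape)
bad = np.array([[0.20, 0.18, 0.15, 0.30, 0.25, 0.22]])
samples = np.vstack([water, bad])

print(flag_outlier_samples(samples))  # -> [11], the mixed-pixel sample
```

A flagged sample would be deleted from the Signature Editor and re-collected over a purer area.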
The last step of the evaluation is to highlight all the signatures and evaluate their separability, which is done in the Signature Editor (Fig. 5). Once Transformed Divergence is selected as the distance measure, we click OK to generate the separability report (Fig. 6).
(Fig. 5) This image shows the window used to evaluate the separability using the training samples created earlier in the lab which were stored in the signature editor.
(Fig. 6) The generated separability report for the training samples we collected.
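The transformed divergence values in the report come from a standard formula based on each pair of class means and covariances. As a minimal sketch (the three-band means and covariances below are made-up stand-ins, not values from the lab), transformed divergence scales plain divergence onto a 0-2000 range, where values near 2000 indicate easily separable classes:

```python
import numpy as np

def transformed_divergence(m1, c1, m2, c2):
    """Transformed divergence between two class signatures.

    m1, m2: mean vectors; c1, c2: covariance matrices.
    Returns a value in [0, 2000]; values near 2000 indicate
    classes that are easy to separate.
    """
    c1i, c2i = np.linalg.inv(c1), np.linalg.inv(c2)
    dm = (m1 - m2).reshape(-1, 1)
    # Divergence: a covariance-difference term plus a mean-difference term
    d = 0.5 * np.trace((c1 - c2) @ (c2i - c1i)) \
        + 0.5 * np.trace((c1i + c2i) @ dm @ dm.T)
    return 2000.0 * (1.0 - np.exp(-d / 8.0))

# Two well-separated hypothetical signatures (water vs. bare soil)
m_water = np.array([0.06, 0.04, 0.02])
m_soil = np.array([0.25, 0.30, 0.35])
cov = np.eye(3) * 1e-4  # tight, identical covariances for simplicity

td = transformed_divergence(m_water, cov, m_soil, cov)
print(f"{td:.0f}")  # -> 2000, i.e. excellent separability
```

Pairs that score low in the report (well under 1900 or so) are candidates for re-collecting training samples or merging into a single class.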
The final step for this process is to merge the like signatures (i.e., all the water samples, all the forest samples) together. To do so, highlight the training points for a class and, under Edit, select Merge. A new class will appear at the bottom of the list, which should be renamed after its land use/land cover class and given the correct color. Once this is done for all the classes, the original points can be deleted, leaving the final five signatures. These can then be plotted on a signature mean plot (Fig. 7).
(Fig. 7) This signature mean plot shows the final 5 land use/land cover classes which include: water, forest, agriculture, urban/built-up and bare soil.
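Merging signatures in the Signature Editor amounts to pooling every pixel from a class's samples into one mean and one covariance. A minimal sketch of that pooling, using synthetic pixel sets rather than the lab's actual data:

```python
import numpy as np

def merge_signatures(pixel_sets):
    """Merge several per-sample pixel sets into one class signature.

    pixel_sets: list of (n_i, bands) arrays, one per training sample.
    Pooling all the pixels yields a single mean vector and covariance
    matrix for the whole class.
    """
    pixels = np.vstack(pixel_sets)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    return mean, cov

rng = np.random.default_rng(2)
# Three hypothetical "water" samples drawn over different lakes
samples = [rng.normal([0.06, 0.04, 0.02], 0.01, size=(50, 3))
           for _ in range(3)]

mean, cov = merge_signatures(samples)
print(mean.shape, cov.shape)  # (3,) (3, 3)
```

Merging many samples per class is what lets one signature represent the full spectral variety of, say, water across the whole study area.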
Now it is time to perform the supervised classification itself. This is done by inputting the original image, along with the signature file created from the merged training points, into the Supervised Classification window. At this step it is important to make sure that no non-parametric rule is set and that the parametric rule is set to Maximum Likelihood. Once the classification completes, the final output image is produced (Fig. 8).
(Fig. 8) The final output image using supervised classification based on the collection of training points and their merged spectral signatures.
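The maximum likelihood rule assigns each pixel to the class whose Gaussian model (mean and covariance from the merged signatures) gives it the highest likelihood. A minimal two-band, two-class sketch with invented values follows; ERDAS applies the same discriminant to every pixel of the full image:

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian likelihood.

    pixels: (n, bands); means/covs: per-class mean vectors and covariances.
    Uses the standard maximum likelihood discriminant:
        g_i(x) = -ln|C_i| - (x - m_i)^T C_i^{-1} (x - m_i)
    """
    scores = []
    for m, c in zip(means, covs):
        ci = np.linalg.inv(c)
        d = pixels - m
        maha = np.einsum('ij,jk,ik->i', d, ci, d)  # Mahalanobis distances
        scores.append(-np.log(np.linalg.det(c)) - maha)
    return np.argmax(np.stack(scores), axis=0)

# Two hypothetical classes: water (dark) and bare soil (bright)
means = [np.array([0.05, 0.03]), np.array([0.30, 0.35])]
covs = [np.eye(2) * 1e-4, np.eye(2) * 1e-4]

pixels = np.array([[0.04, 0.03], [0.31, 0.36], [0.06, 0.02]])
print(max_likelihood_classify(pixels, means, covs))  # -> [0 1 0]
```

Unlike a simple minimum-distance rule, the covariance term lets a spectrally variable class (such as urban/built-up) claim pixels farther from its mean.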
Results
After completing this laboratory exercise we gained an understanding of the differences between unsupervised and supervised classification. While the supervised classification was much more time consuming, it produced a more accurate output image than those generated by unsupervised classification. Because supervised classification uses spectral signatures collected by the analyst, it is more accurate than simply letting ERDAS's algorithms cluster the pixels. By merging many training points together, we were able to produce an accurate spectral signature for each of the varied features in the original image and use those signatures to drive the classification.
Sources
All the Landsat satellite images used in this laboratory exercise are from Earth Resources Observation and Science Center and United States Geological Survey. (Data was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.)