Thursday, November 20, 2014

Lab 10: Object-Based Classification

Goals

The main goal of this lab is to perform object-based classification using eCognition, a state-of-the-art object-based image processing program. Throughout this lab I used this program to segment an image into homogeneous spatial and spectral clusters (objects) and to select the correct sample objects in order to train a nearest neighbor classifier. Finally, I executed the object-based classification with the nearest neighbor algorithm and corrected the output where necessary. In the previous labs I used different image classification methods on the same study area, so I can compare my results and determine which is the best and most accurate classification method for LULC (land use/land cover). 

Methods

The first step in this lab exercise was to create a new project in eCognition Developer64. Once this was done I mixed the layers in order to adjust the image to appear in false color. Next, I used the process tree to create image objects. I opened the process tree and right-clicked inside the dialog to choose the option 'append new', which is the first step toward creating a new process. After executing the first process I right-clicked on it and selected the option 'insert child', where I edited the segmentation parameters. I selected multiresolution segmentation from the list of algorithms and changed the shape to 0.2 and the compactness to 0.4, leaving the scale parameter at the default value of 10. After clicking 'execute', the process ran and the multi-resolution segmentation image appeared as the output (Fig. 1). 


(Fig. 1) Multi-resolution segmentation image which was produced using the process tree tool.
The process of creating the image in figure 1 creates polygons which automatically delineate various LULC portions based on their brightness values. The next step was to create LULC classes. To do this I selected the option 'class hierarchy' from the classification tab and right-clicked in the window to insert a class. I then entered the following classes and selected the corresponding colors: forest (dark green), agriculture (pink), urban/built-up (red), water (blue), and green vegetation/shrub (light green). 
Then it was time to declare the sample objects. To create them I opened the sample editor tool in eCognition. I then selected agriculture in the active class dialog in order to enter samples for that particular class. To collect the samples themselves, I selected a polygon in my image; the sample editor then marked the object's values with red arrows (Fig. 2). Once I decided a polygon was a good sample for the class I double-clicked on it, and the sample editor changed its color to black to show it was selected. I added a few more samples to the agriculture class, then changed the active class to each of my other feature classes and repeated the process. 

(Fig. 2) The sample editor shows the selected polygon feature from the false color image on the right before it is double-clicked and becomes a sample object for that class.
Once I had collected samples for each of the LULC classes used in this lab, I needed to apply the nearest neighbor classification. To do so, I went back to the process tree, chose 'append new', and created a classification process. I then right-clicked on the classification process to insert a new child. Within this edit process window I selected the classification algorithm, set all of the classes as active, and clicked 'execute' to run the process. After the process was complete I was able to see my final output image, the result of the object-based classification.
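eCognition performs the nearest neighbor assignment internally, but the underlying idea is simple: each image object is summarized by feature values (for example, mean brightness per band) and assigned the class of its closest sample object in feature space. A minimal sketch in Python, where the feature numbers and class set are hypothetical values made up for illustration:

```python
import numpy as np

# Hypothetical mean-band features of the sample objects (rows) for each class.
samples = {
    "water":       np.array([[0.05, 0.04, 0.03], [0.06, 0.05, 0.03]]),
    "forest":      np.array([[0.10, 0.25, 0.15], [0.12, 0.28, 0.14]]),
    "agriculture": np.array([[0.20, 0.40, 0.30], [0.22, 0.38, 0.32]]),
}

def nearest_neighbor(obj_features):
    """Assign an image object to the class of its closest sample object."""
    best_class, best_dist = None, np.inf
    for cls, feats in samples.items():
        # Euclidean distance in feature space to the nearest sample of this class.
        d = np.linalg.norm(feats - obj_features, axis=1).min()
        if d < best_dist:
            best_class, best_dist = cls, d
    return best_class
```

In eCognition the feature space and distance function are configurable; this sketch simply uses Euclidean distance on mean band values.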

Results

The resulting output image of Eau Claire and Chippewa Counties using object-based classification was quite accurate (Fig. 3). While there was some difficulty in the original sampling, which required me to do some manual editing, it was a useful method. The best part about this method of LULC classification is that the polygons are drawn for you rather than the user having to draw them. This minimizes errors from overlapping features and makes the process easier for the user.


(Fig. 3) Output image showing the LULC of the Eau Claire and Chippewa Counties using object-based classification.  

Sources

The data used in this lab exercise was provided from the following sources: Landsat satellite images from the Earth Resources Observation and Science Center and the USGS. All data was distributed by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire. 

Wednesday, November 12, 2014

Lab 9: Advanced Classifiers 2

Goal

The goal of this lab exercise is to gain a better understanding of two different, more advanced classification algorithms. Unlike more basic methods such as unsupervised and supervised classification, these advanced methods use additional algorithms and ancillary data to increase overall classification accuracy. Throughout this lab I performed expert system/decision tree classification using ancillary data and also developed an artificial neural network in order to perform a more advanced classification of an image.

Methods

Expert System Classification
Expert system classification is a very effective method which uses both spectral data and ancillary data. It is usually used to improve images which have already been classified spectrally. There are two main steps in the process: building a knowledge engineer/knowledge base, which is used to train a knowledge classifier, and then using that knowledge base to reclassify an already classified image which needs to be improved. To increase the classification accuracy, other ancillary data can also be used in this second step. 

The first step is to produce a knowledge engineer, a tool available in ERDAS Imagine 2010. Once the interface is opened there are three main components used in the engineering process. The first are hypotheses, which target the LULC (land use/land cover) classes you plan to produce. Rules then serve as the method of communication between the original classified image and the ancillary data through the use of a function. Variables include both the original classified image and the ancillary data. To perform an expert system classification using the knowledge engineer interface, I first created a hypothesis representing an LULC class of the original classified image. I then added a rule which applied a variable (the original classification image) to the hypothesis. Once the variable properties were applied I changed the value of water from 0 to 1, since the water feature class in this first example has a value of 1 in the image used as the variable in this process (Fig. 1). 


(Fig. 1) The use of the knowledge engineer tool to apply hypotheses and rules to provide more advanced LULC classification.

This process was then applied to the rest of the LULC classes in the original image. The next step was to use ancillary data to develop a knowledge base. Based on the qualitative analysis I conducted on the original classified image, I noticed that there were some errors in the original classification. For this reason it was important to use ancillary data to properly classify the regions which showed errors. One example of how I separated these overlapping classes was by dividing the urban LULC class into residential and other urban LULC classes. To do this I created a new hypothesis and a new variable within the new rule corresponding to my hypothesis; this is where I inputted the ancillary data. Once this new hypothesis and rule had been created, I applied the argument based on the ancillary data to the previously classified image. To do this I added a reciprocal/counter argument on the urban LULC class, which causes the separation of the original class. The same process was done to separate agriculture from green vegetation and green vegetation from agriculture. The final knowledge engineer file contained 8 different hypotheses (Fig. 2).


(Fig. 2) The final knowledge engineer includes a total of 8 hypotheses used to produce the expert system classification.
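Underneath the knowledge engineer's hypotheses and rules, this step amounts to conditional reclassification: pixels of one class are split according to an ancillary layer. A minimal sketch in Python with NumPy, using hypothetical class codes and a tiny made-up raster:

```python
import numpy as np

# Hypothetical class codes for the original and refined classifications.
URBAN, RESIDENTIAL, OTHER_URBAN = 3, 6, 7

classified = np.array([[3, 3, 1],
                       [3, 2, 3]])   # original classified image
ancillary  = np.array([[1, 0, 0],
                       [1, 0, 0]])   # ancillary layer: 1 = residential zone

# Rule: urban pixels inside the residential zone become RESIDENTIAL;
# urban pixels outside it become OTHER_URBAN (the counter-argument on URBAN).
refined = classified.copy()
refined[(classified == URBAN) & (ancillary == 1)] = RESIDENTIAL
refined[(classified == URBAN) & (ancillary == 0)] = OTHER_URBAN
```

Non-urban pixels pass through unchanged, just as classes without a counter-argument are left intact by the knowledge classifier.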

After the knowledge engineer file was saved, I performed the expert system classification. To do this I opened the knowledge classifier tool and inputted the knowledge file I created. Once that was done I was able to produce a final output image and compare it to the original classified image (Fig. 3).


(Fig. 3) The original classification image is shown on the left, while the image on the right is the re-classified image produced from the knowledge engineer.


Neural Network Classification
Neural network classification is a method of image classification which mimics the way the human brain processes information in order to develop LULC classes. An artificial neural network (ANN) uses ancillary data along with reflective remotely sensed data to develop weights in the hidden layers of the network. In this process I trained a neural network to perform the image classification in the study area, which in this lab is the University of Northern Iowa campus.

In the first portion of the lab I performed neural network classification using predefined training samples. To do this I used the program ENVI to restore predefined ROIs (regions of interest) to the image using the ROI tool. These ROIs are what I used to train the neural network. To train the neural network to perform the classification, I selected classification-supervised-neural network. I then selected the 3 ROIs, chose the logistic radio button, and changed the number of training iterations to 1000. Once this was done I could study the output image and determine how accurate the LULC classification was. 
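ENVI's neural network classifier is more elaborate, but the core mechanism is a small multilayer network trained by backpropagation with logistic (sigmoid) activations, matching the 'logistic' option and the 1000 training iterations used above. A minimal sketch in Python with NumPy on made-up two-band training pixels (all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pixels: 2 spectral features, 2 classes (one-hot rows).
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer; the weights are what training adjusts.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 2))

for _ in range(1000):                 # 1000 training iterations, as in the lab
    H = sigmoid(X @ W1)               # hidden-layer activations (logistic)
    P = sigmoid(H @ W2)               # output-layer activations
    # Backpropagate the squared error and take a gradient step.
    dP = (P - Y) * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ dP
    W1 -= 0.5 * X.T @ dH

pred = sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
```

Each iteration nudges the weights so the network's outputs move toward the one-hot class targets, which is why more iterations (and a larger training rate) improved my accuracy in the results below.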

Results

The expert system classification proved to be a much more accurate classification method compared to previous methods I have used in other labs. Because of the application of ancillary data, the LULC classes could be more accurately determined and represented in the study area. My initial application of the neural network classification using predefined training samples was not very accurate; however, as I increased the training rate and the number of iterations, the accuracy improved. 

Sources

The data used in this lab was collected from the following sources: Landsat satellite images from the Earth Resources Observation and Science Center and the USGS. Quickbird high resolution image of a portion of the University of Northern Iowa campus from the Department of Geography at the University of Northern Iowa. All data was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire. 

Thursday, November 6, 2014

Lab 8: Advanced Classifiers 1

Goal

This lab exercise utilizes robust algorithms which have been developed and tested to improve the accuracy of remotely sensed image classification compared to traditional methods such as supervised and unsupervised classification. Within this lab I learned how to apply a fuzzy classifier to help solve the common problem of mixed pixels in an image, and how to divide mixed pixels into fractional parts in order to perform linear spectral unmixing. 

Methods

Linear Spectral Unmixing
In this portion of the lab I used the technique of linear spectral unmixing to produce a subpixel fractional map of my original image. For this particular lab I used the Environment for Visualizing Images (ENVI) software, which is similar to ERDAS Imagine in that both are image processing packages. To perform the linear mixture model, the 'pure' pixels (endmembers) must have their spectral reflectances measured. Ideally, I would collect ground-based spectra in order to produce these 'pure' pixels, because even endmembers taken from high spatial resolution images might contain multiple components. Endmembers can also be gathered from high resolution images of the study area itself; however, this may require a principal component (PC) or minimum noise fraction (MNF) transform to identify the individual 'pure' pixels among multiple surface components.


(Fig. 1) The starting/original image used in this portion of the lab exercise, opened in the ENVI software.


To produce endmembers from an ETM+ image, I first chose the option transform-principal components-forward PC rotation-compute new statistics and rotate within the ENVI interface. Once the principal component input file window was open I used the original image as the input image and accepted all of the default parameters. Once this and the forward PC parameter models were run, a total of six principal component images, one for each of the reflective bands which make up the original image, were produced. I then loaded Band 1 of the output image in gray scale (Fig. 2). 


(Fig. 2) This image shows the PC output image in gray scale which was created using the software, ENVI.
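The forward PC rotation ENVI performs here is a standard principal component transform: compute the covariance between the bands, take its eigenvectors, and rotate the pixel vectors so that PC band 1 carries the most variance. A minimal sketch in Python with NumPy on a random stand-in image (the image and its six bands are simulated, not the lab data):

```python
import numpy as np

# Simulated reflectance image: rows x cols x 6 reflective bands.
rng = np.random.default_rng(1)
image = rng.random((50, 50, 6))

# Flatten to (pixels, bands) and compute the band covariance statistics.
pixels = image.reshape(-1, 6)
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)

# The eigenvectors of the covariance matrix define the PC rotation;
# sorting by eigenvalue puts the most variance into PC band 1.
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]
pcs = (pixels - mean) @ vecs[:, order]

pc_image = pcs.reshape(50, 50, 6)   # six PC "bands", like ENVI's output
```

Displaying the first plane of `pc_image` in gray scale corresponds to loading PC Band 1, as in Fig. 2.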
 
The next step is to view the 2D scatter plots which I used to create the various endmembers for the LULC classes. To do this I went back to the original image that I loaded into ENVI and selected the tool 2D Scatter Plot and Scatter Plot Band Choice. For the first plot I chose PC Band 1 for Band X and PC Band 2 for Band Y. The next step was to collect the endmembers. To do so, I selected the class-items 1:20 option from the dialog at the top of the scatter plot and first selected the green color. I then drew a circle around the three vertices at the far right end of the scatter plot. Once I right-clicked inside the circle, the areas which were selected on the scatter plot appeared as green on the original map image. I then repeated this using a yellow and a blue color. From this process I was able to classify these regions of the image as specific LULC classes (Fig. 3): green-bare soil, yellow-agriculture, and blue-water. Next, I did the same process, but this time I created a scatter plot with PC Band 3 for Band X and PC Band 4 for Band Y. I then used the color purple and drew circles in the areas where I expected to find the urban LULC class (Fig. 4). The final step was to save the ROI points and create a single file containing all the endmembers.


(Fig. 3) This 2D scatterplot shows the isolated endmembers created using ENVI. The colors represent the following LULC class: green-bare soil, yellow-agriculture, blue-water.




(Fig. 4) This 2D scatterplot shows the isolated endmember of the urban/built-up LULC class, represented by the color purple.


After all of this was done, it was time to implement the linear spectral unmixing. To do this I selected the option spectral-mapping methods-linear spectral unmixing and added my original image and my combined ROI file to the dialog. This created my fractional images (Fig. 5), which show the values of my LULC classes in the red, blue and green bands.


(Fig. 5) Viewer #2 shows reflectance in the red band, which illustrates the bare soil LULC class having greatest reflectance. Viewer #3 shows reflectance in the blue band, which illustrates the water LULC class as having the greatest reflectance. And viewer #4 shows reflectance in the green band, which illustrates the agriculture LULC class as having greatest reflectance because it has healthy, green vegetation.
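Linear spectral unmixing itself is a least-squares problem: each pixel's spectrum is modeled as a linear combination of the endmember spectra, and the solved coefficients are the fractional abundances shown in the viewers above. A minimal sketch in Python with NumPy, using made-up four-band endmember spectra for bare soil, agriculture, and water:

```python
import numpy as np

# Hypothetical endmember spectra, one column per endmember
# (bare soil, agriculture, water) over four bands.
E = np.array([[0.30, 0.10, 0.05],
              [0.35, 0.40, 0.04],
              [0.40, 0.20, 0.03],
              [0.45, 0.50, 0.02]])

# A mixed pixel constructed as 50% soil, 30% agriculture, 20% water.
pixel = E @ np.array([0.5, 0.3, 0.2])

# Solve pixel = E @ fractions in the least-squares sense; the solution
# recovers the fractional abundance of each endmember in the pixel.
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Doing this for every pixel yields one fractional image per endmember, which is what ENVI displays in the separate viewers.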

Fuzzy Classification
Fuzzy classification is used to perform the same task as linear spectral unmixing: its main goal is to correctly identify mixed pixel values when performing accuracy assessments. This method takes into consideration that there are mixed pixels within the image and that it is not possible to perfectly assign mixed pixels to a single land cover category. This particular method uses membership grades, where a pixel's value is assigned based on whether it is closer to one class compared to the others. There are two main steps in the process of performing fuzzy classification: estimation of the fuzzy parameters from training data and a fuzzy partition of spectral space. 

In the first part, I collected training signatures in order to perform the fuzzy classification. I had done this in previous lab exercises (specifically lab 5) however this time I collected samples in areas where there are mixtures of land cover as well as in areas where the land cover is homogeneous. I collected a total of 4 training samples of the LULC (land use/land cover) class water, 4 of the forest class, 6 of agriculture, 6 of urban/built-up and 4 of bare soil. 
The next step is to actually perform the fuzzy classification. To do this I opened the supervised classification window, selected the option to apply the fuzzy classification, and named the output distance file to be produced. I also made sure that the parametric rule was set to maximum likelihood and the non-parametric rule to feature space. Once this model had been run, the next step was to run a fuzzy convolution, which uses the distance file created in the previous step to produce the final output image (Fig. 6).


(Fig. 6) The image shows the fuzzy convolution image on the left and the final fuzzy classification image on the right.
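The membership grades at the heart of fuzzy classification can be illustrated with a simple distance-based scheme: a pixel receives a grade for every class, higher the closer it lies to that class's mean signature, and the grades sum to one. A minimal sketch in Python with NumPy and hypothetical two-band class means (this inverse-distance weighting is an illustration, not ERDAS's exact maximum-likelihood formulation):

```python
import numpy as np

# Hypothetical class mean signatures in two bands.
means = {"water":  np.array([0.05, 0.03]),
         "forest": np.array([0.10, 0.30]),
         "urban":  np.array([0.35, 0.25])}

def memberships(pixel):
    """Fuzzy membership grades: closer class means get higher grades; grades sum to 1."""
    dists = {c: np.linalg.norm(pixel - m) for c, m in means.items()}
    inv = {c: 1.0 / (d + 1e-9) for c, d in dists.items()}
    total = sum(inv.values())
    return {c: v / total for c, v in inv.items()}

grades = memberships(np.array([0.12, 0.28]))
```

A pure pixel gets a grade near 1 for one class, while a mixed pixel spreads its grades across several classes; the fuzzy convolution step then uses these distances over a neighborhood to assign the final class.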


Results

The process of using the ENVI software to perform linear spectral unmixing was quite tedious at times but yielded very accurate results. I thought this method was much more accurate in properly assigning pixels to the correct LULC class than the fuzzy classification method, because the fuzzy classification classified far more urban/built-up area than actually exists in Eau Claire and Chippewa Counties. Because of this exaggeration of the urban areas, it incorrectly classified agriculture and bare soil LULC as urban. The linear spectral unmixing method was much more accurate, as I was able to look at the pixel classification within each band of the image and determine its accuracy.

Sources

The data for this lab was obtained from the following sources: Landsat satellite images from the Earth Resources Observation and Science Center and the United States Geological Survey. All data was provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.