Section A: Enhancing Very High-Resolution Images for Feature Extraction


Feature extraction can be defined as the use of image processing techniques to identify and categorize image regions using their spatial or spectral properties. Usually, a specific type of feature – such as waterbodies, buildings, or roads – can be extracted from very high-resolution imagery using digitization or automatic/semi-automatic techniques. Enhancing these images greatly improves the ability to identify and extract different features. Using ArcMap’s Image Analysis window, it is possible to apply these enhancements on the fly without creating a new raster.

Original Image (left) vs. Enhanced Image (right)

Learning Objectives

By going through this exercise, you will become familiar with techniques of image enhancement on very high-resolution imagery using ArcMap, and you will be able to:

– Perform enhancement on a VHR satellite image
– Extract different features using manual and semi-automatic methods

Data Inputs


File Name                      Data Type
VHR_Ba_25062019_GE1.tif        Multiband Raster


Very high-resolution satellite imagery goes through different pre-processing operations such as pan-sharpening, orthorectification and reprojection before reaching end-users. Sometimes this processing may introduce unwanted artifacts. For this exercise we have received a satellite image which has already gone through all the processing mentioned above. If you zoom in to feature level, you will notice quite a few artifacts.

Unwanted artifacts in satellite images


Please zoom into different objects and document your observations. Which objects or landcover types have the greatest number of artifacts, and how might this affect your classification?

  • Open ArcMap and load the image “VHR_Ba_25062019_GE1.tif” from the location X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Input\Raster


What is the spatial resolution of the image?

When was the image captured?

How many spectral bands are present in this image?

At this step we are going to use a few image enhancement techniques which may restore some of the sharpness lost during pre-processing. These methods usually employ a technique called kernel-based filtering, or convolution. In this method, a new value for each pixel is calculated from its neighbouring pixels. All of these functions can be applied to our original image using the Image Analysis window.
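The kernel idea can be sketched in a few lines of Python. This is only an illustration of how a “Sharpen” convolution recalculates each pixel from its neighbours, not ArcMap’s exact implementation; edge pixels are left unchanged for simplicity.

```python
# Classic 3x3 sharpening kernel: weights sum to 1, so flat areas are
# unchanged while local contrast (edges) is amplified.
SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

def convolve3x3(image, kernel):
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]           # copy; border pixels stay as-is
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            acc = 0
            for kr in range(3):
                for kc in range(3):
                    acc += kernel[kr][kc] * image[r + kr - 1][c + kc - 1]
            out[r][c] = acc
    return out

# A uniform image passes through unchanged; a bright pixel gets boosted.
flat = [[10] * 3 for _ in range(3)]
print(convolve3x3(flat, SHARPEN)[1][1])   # -> 10
```

Try replacing the kernel weights with a smoothing kernel (all 1/9) to see the opposite effect.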

  • In the Main Menu tool bar, go to the Windows menu > Image Analysis

The Image Analysis window gives you access to a huge number of remote sensing functions through its “Function Editor” functionality.

Image Analysis Tools
  • From the Image Analysis window, click to select “VHR_Ba_25062019_GE1.tif”
  • Under the Processing panel, click the fx icon (Add Function)
  • Right click on the Identity function > Insert Function > Click “Convolution Function”

Convolution Function
  • Inside the Raster Function Properties, select “Sharpen” and click OK
  • Click OK inside the Function Template Editor to apply the function

At this stage you should see the result of ArcMap’s on-the-fly processing as a temporary raster layer named “Func_VHR_Ba_25062019_GE1.tif”.

  • Rename the raster layer “Func_VHR_Ba_25062019_GE1.tif” to “Sharpen_VHR_Ba_25062019_GE1.tif” and compare it with the original image


Compare “Sharpen_VHR_Ba_25062019_GE1.tif” with the original image “VHR_Ba_25062019_GE1.tif”. Do you notice any changes? Will this be good enough if you want to manually digitise buildings?

Original Image (left) vs. Sharpened Image (right)


Apply different convolution filters to the original image. You can apply a combination of convolution filters in the Function Editor to get optimal results for building digitisation. (Hint: try the Laplacian, Sharpening and Smoothing filters, and perhaps reproduce something like the image below!)
Please remember that all the layers you create using the raster Function Template Editor are temporary; you will need to use the “Copy Raster” tool to create a raster file.

  • Select your best image-enhancement layer, create a new file using the “Copy Raster” tool, and save it as “VHR_Ba_25062019_Enhanced.tif” in X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Raster


Now that we are familiar with different image enhancements, let us summarise which enhancement works better for different types of feature extraction.

Feature            What type of enhancement?            How to extract?
Types of Enhancement and Features Extraction

Section B: Advanced Image Classification Using Very High-Resolution Satellite Image


Digital image classification is the process of automatic or semi-automatic interpretation of imagery based on given conditions. Digital image classification uses the quantitative spectral information contained in an image, which is related to the composition or condition of the target surface. There are two kinds of image classification schemes – supervised and unsupervised. Supervised classification uses the spectral signatures obtained from training samples to classify an image. Unsupervised classification finds spectral classes (or clusters) in a multiband image without the analyst’s intervention. With the ArcGIS Spatial Analyst extension, there is a full suite of tools in the Multivariate toolset to perform supervised and unsupervised classification. The classification process is a multi-step workflow; therefore, the Image Classification toolbar has been developed to provide an integrated environment to perform classifications with the tools.

Learning Objectives

By going through this exercise, you will become familiar with techniques of digital image classification using ArcMap and you will be able to:

  • Perform image segmentation
  • Perform unsupervised image classification using Pixel vs Object based methods
  • Perform interactive supervised classification using Pixel vs Object based methods
  • Relate unsupervised classes to landcover

Data Inputs


File Name                      Data Type
VHR_Ba_25062019_GE1.tif        Multiband Raster


  • Open “M2_Digital_Image_Classification.mxd” from the following location X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Workspace

In this workspace you will find a raster, VHR_Ba_25062019_GE1.tif, arranged under the “Raster\” group layer. This is a small scene clipped from the GeoEye-1 very high-resolution satellite sensor. GeoEye-1 has 4 spectral windows (bands), delivered here at 50 cm spatial resolution. The original data is provided as 16-bit digital numbers (DN). The 4 bands of the GeoEye-1 sensor capture the blue, green, red and near-infrared parts of the light reflected from objects on the earth’s surface. The diagram below shows a spectral reflectance curve which demonstrates the reflectance behaviour of different landcovers across the bands of the GeoEye-1 sensor. Now, if you look at the map visualisation of “VHR_Ba_25062019_GE1”, you will find the colour assignment is the following: Red = RED BAND (B1), Green = GREEN BAND (B2), Blue = BLUE BAND (B3). So the scene looks like the colours our eyes perceive, and hence it is called a true or natural colour composite.

Spectral Bands GeoEye1 Sensor

You will need to turn on the Spatial Analyst extension for this exercise to work.

Workspace for the section
  • Right click on the layer “VHR_Ba_25062019_GE1.tif” > Copy the Layer
  • Right click on the Raster group layer and paste the layer
  • Right click on the new layer “VHR_Ba_25062019_GE1.tif” > Properties > General > Rename it to “VHR_Ba_25062019_Natural”
  • Right click on the layer “VHR_Ba_25062019_Natural” > Copy the Layer
  • Change the band combination of this duplicate “VHR_Ba_25062019_Natural” to Red = NEAR INFRARED BAND (B4), Green = RED BAND (B1), Blue = GREEN BAND (B2) from Layer Properties > Symbology tab > RGB Composite
Changing band combination
  • Rename the layer to “VHR_Ba_25062019_False”
Natural Colour (left) vs. False Colour (right)


Compare “VHR_Ba_25062019_Natural” to “VHR_Ba_25062019_False” and answer the following questions; explore the area to detect different landcover types.

What is the colour of vegetation in the Natural and False Colour composites? Which composite/combination helps us to identify vegetation most clearly?

What will be the colour of urban area in False Colour composite?

List the main thematic landcover categories that cover your area of interest.

Section C: Unsupervised Image Classification – Pixel Based

Unsupervised classification finds the spectral classes (or clusters) in a multiband image without the analyst’s intervention. The Image Classification toolbar aids in unsupervised classification by providing access to the tools to create the clusters, capability to analyse the quality of the clusters, and access to classification tools.


  • Go to ArcToolbox > Spatial Analyst > Multivariate > Iso Cluster Unsupervised Classification
Iso Cluster Unsupervised Classification
  • Set the following:
    – Input raster bands => “VHR_Ba_25062019_GE1.tif”
    – Number of classes => 5
    – Output file name => “GE1_unsup_C5.tif” in the location X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Raster
  • Use default values for the other parameters > Click OK
  • Move “GE1_unsup_C5.tif” under the “Classified\Unsupervised” group layer

Now you will get a raster file with 5 code values.
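Under the hood, clustering tools like Iso Cluster iteratively migrate class means. The sketch below is a deliberately simplified one-band k-means (Iso Cluster uses the more elaborate ISODATA algorithm on all bands); the pixel values and starting means are invented for illustration.

```python
# Simplified one-band k-means: assign each pixel to the nearest cluster
# mean, recompute the means, and repeat until they stabilise. The final
# cluster index plays the role of the "raster code" in the output.
def kmeans_1d(values, means, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in means]
        for v in values:
            nearest = min(range(len(means)), key=lambda i: abs(v - means[i]))
            clusters[nearest].append(v)
        means = [sum(c) / len(c) if c else m
                 for c, m in zip(clusters, means)]
    labels = [min(range(len(means)), key=lambda i: abs(v - means[i]))
              for v in values]
    return labels, means

pixels = [12, 14, 13, 80, 82, 79, 200, 205]   # e.g. water / soil / roofs
labels, means = kmeans_1d(pixels, means=[0, 100, 255])
print(labels)   # -> [0, 0, 0, 1, 1, 1, 2, 2]
```

Note that the codes carry no landcover meaning by themselves, which is why the next step asks you to match each code to a probable landcover.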

Unsupervised Classification – 5 Classes
  • Match raster code to most applicable landcover:
Raster Codes            Probable Landcover


Is there any landcover missing? Why did this happen?


As the above example shows, unsupervised classification with only a few classes cannot separate all the landcover classes required. We are going to perform the same exercise with 15 classes.

  • Go to ArcToolbox > Spatial Analyst > Multivariate > Iso Cluster Unsupervised Classification
  • Set the following:
    – Input raster bands = “VHR_Ba_25062019_GE1.tif”
    – Number of classes = 15
    – Output file name as “GE1_unsup_C15.tif” in the following location X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Raster
  • Use default values for the other parameters
  • Click OK

Now you will get a raster file with 15 coded values.

Unsupervised Classification – 15 classes
Landcover            Raster Codes
Bare Soil


Were you able to produce better landcover classes with 15 unsupervised classes?

Can you please describe the challenges that you faced during this part of the exercise?

Is there anything about the product you would like to improve?

Section D: Supervised Classification – Object Based

As you can easily conclude from the unsupervised classification of the “VHR_Ba_25062019_GE1.tif” image, pixel-based classification creates a noisy output. This is why remote sensing experts sometimes resort to object-based analysis, where pixels are grouped into objects with similar “spectral”, “texture” or “shape” characteristics, reducing the number of isolated pixels and resulting in a smoother classification.

Segmentation of raster for generating objects

Perform Image Segmentation

ArcMap provides real-time segmentation functionality in the Image Analysis window. It can identify features or segments in your imagery by grouping adjacent pixels with similar spectral characteristics. You can control the amount of spatial and spectral smoothing to help derive the features of interest. However, this tool only works with 3-band, 8-bit rasters. We are going to learn how to perform a series of raster analyses using Image Analysis processing chains.

  • In the Main Menu tool bar, go to the Windows menu> Image Analysis.
  • From the Image Analysis window, click to select “VHR_Ba_25062019_GE1.tif”
  • Under the Processing panel, click the fx icon (Add Function)
  • Right click on the Identity function > Insert Function > Click “Stretch”
Opening Stretch Function
  • Inside the Raster Function Properties > “Stretch” tab, keep everything default; in the General tab, change the output pixel type to 8-bit unsigned
Applying Stretch Function
  • Click OK inside the Function Template Editor to apply the function
  • You will notice a new raster layer “Func_VHR_Ba_25062019_GE1.tif” has been created; rename it to “VHR_8bit.tif” and move it under Segmentation
Visualize results

This raster layer is now 8-bit, but we still need to select only 3 of the original 4 bands. From our investigation we realised that the NEAR INFRARED (B4), RED (B1) and GREEN (B2) bands provide better contrast for separating features. This colour combination is sometimes referred to as a “colour-infrared combination.”
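The two function-chain steps can be sketched numerically: a linear stretch from 16-bit DNs down to 8-bit (0–255), then extraction of bands 4, 1 and 2 as the new R, G, B. The tiny pixel values and the assumed 0–2047 input range are invented for illustration; the real Stretch function offers several stretch types.

```python
# Min-max linear stretch: rescale digital numbers from [lo, hi] to [0, 255].
def stretch_to_8bit(dns, lo, hi):
    return [round(255 * (min(max(dn, lo), hi) - lo) / (hi - lo)) for dn in dns]

# bands stored as band-number -> pixel values (tiny 1-D example)
bands16 = {1: [300, 600], 2: [400, 800], 3: [350, 700], 4: [900, 1800]}
bands8 = {b: stretch_to_8bit(v, lo=0, hi=2047) for b, v in bands16.items()}

# Extract Bands step: keep NIR (4), Red (1), Green (2) as the new R, G, B
cir = [bands8[4], bands8[1], bands8[2]]
print(bands8[4])   # NIR rescaled into the 0-255 range
```

The order of the chain matters: stretching first guarantees the segmentation tool receives the 8-bit input it requires.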

  • From the Image Analysis window, click to select “VHR_8bit”
  • Under the Processing panel, click the fx icon (Add Function)
  • Right click on the “Stretch” function > Insert Function
  • Click “Extract Bands Function” and, inside the Raster Function Properties, keep bands 1, 2 and 4, then click OK
Extract band function

Congratulations, you have just created your first raster function chain! Now let us add segmentation to it.

  • From the Image Analysis window, click to select “Func_VHR_8bit”
  • In the Function Template Editor, right click on the “Extract Bands” function > Insert Function > Click the “Segment Mean Shift” function
Segment Mean Shift – Raster Function Properties
  • You are presented with a few parameters, but for now keep the defaults and click OK
  • Click OK again to close the Function Template Editor window

You will notice a new raster layer “Func_VHR_8bit” has been created.

  • Rename to “Segment_VHR_8bit”
  • Move it under Segmentation
Original Colour-Infrared (left) vs. Segmented Image (right)


Zoom in and compare “Segment_VHR_8bit” with “VHR_Ba_25062019_False”.
Can you summarise your observations? Did the segmentation group similar pixels into the same object? Is any improvement needed?


  • Right click on the “Segment_VHR_8bit” raster layer
  • Go to the function tab > Double click on “Segment Mean Shift Function”
  • When the Raster Function Window pops up, play with the values to get a good segmentation output which suits your needs
Learn more about optimizing segmentation


Please remember that all the layers you create using the raster Function Template Editor are temporary. Now that we know the optimal parameters, we are going to use the Segment Mean Shift tool from ArcToolbox.

  • Press CTRL+F to bring up the search box > Type “segment mean shift” > Press Enter
  • Inside the Segment Mean Shift tool:
    – Select “VHR_8bit” as the input
    – Enter values for the Spectral Detail, Spatial Detail and Minimum Segment Size parameters
    – For Band Indexes, enter “1 2 4”
Segment mean shift
  • Name the Output Raster Dataset “VHR_Ba_25062019_Segment.tif” and save it in X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Raster
  • Click ok
Visualize output raster


Supervised classification requires the image analyst to choose an appropriate classification scheme, and then identify training sites in the imagery that best represent each class. A simple land cover classification scheme might consist of a small number of classes, such as urban, water, wetlands, forest, grass/crops. The classification algorithm then uses spectral characteristics of the training sites to classify the remainder of the image.
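A minimal sketch of this idea is a nearest-mean (minimum distance) classifier: each training class contributes a mean spectral signature, and every remaining pixel takes the class with the closest signature. The class names, band order and sample values below are invented, and ArcMap’s interactive tool uses a more elaborate statistical classifier, so this is only the general principle.

```python
# Build a mean signature per class from training pixels, then label new
# pixels by smallest squared spectral distance to those signatures.
def class_means(training):
    """training: {class_name: [pixel tuples]} -> {class_name: mean vector}"""
    return {name: [sum(band) / len(band) for band in zip(*pixels)]
            for name, pixels in training.items()}

def classify(pixel, means):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda name: dist2(pixel, means[name]))

training = {                       # (green, red, nir) training samples
    "water":      [(40, 30, 10), (45, 35, 12)],
    "vegetation": [(60, 40, 200), (65, 45, 210)],
}
means = class_means(training)
print(classify((50, 32, 15), means))   # -> water
```

This also explains why training samples matter so much: a poorly chosen sample shifts the class signature and misclassifies every pixel near it.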

Decide the classification scheme

As you are quite familiar with the dominant landcovers in the Ba region, please decide which landcover classes we should try to extract. Below is a generic landcover classification scheme that can be used.

Land Cover Type            Raster Code
Bare Soil                  4
Classification Scheme for Supervised Classification

Image Classification Toolbar

The Image Classification toolbar contains interactive tools for creating training samples and signature files. Some commonly used classification tools from the Multivariate toolset of the Spatial Analyst toolbox are also exposed through this toolbar.

  • In the menu, click Customize > Toolbars > Image Classification
  • Select “VHR_Ba_25062019_Segment.tif”
Image Classification Toolbar
  • This step ensures the training samples are always collected from “VHR_Ba_25062019_Segment.tif”, no matter what is displayed on the screen.

The Training Sample Manager

The Training Sample Manager is the mechanism for managing training samples. With it, you can edit class names and values, merge and split classes, delete classes, change display colours, load and save training samples, evaluate training samples, and create a signature file. The manager is accessible from the Image Classification toolbar by clicking the Training Sample Manager button. The following image shows the dialog box of the manager.

  • Click the training sample manager button . The training sample manager window will come up.

Now we are going to capture some water areas in the image as training samples. In the Training Sample Manager, you can draw a rectangle, polygon or circle. As we have already selected the segmented image in the Image Classification toolbar, we can also use segments to collect training samples.

  • Zoom to an area in the image with water pixels
  • Click the Select Segment icon from image classification window

Opening Select Segment Tool
  • Double click in the middle of a small water area in the top right corner; a waterbody training sample will be collected
  • In the Training Sample Manager, rename the sample from “Class 1” to “Water” > Change the colour to blue

Alternatively, you can also draw a rectangle, polygon, or circle around the water pixels, but make sure to capture only pixels that are water; avoid any you are unsure about.

Training Sample Manager – Collecting samples
  • Capture or draw a few more water areas
  • Select all the water class samples (Ctrl or Shift)
  • Click the Merge Training Samples button. All selected samples will merge into one
  • Rename the class to “Water” > Assign a blue colour


Try to select samples that clearly belong to a landcover category and are easily distinguishable from other landcover types. Avoid taking too many samples, which may add complexity and result in inferior classification results.

  • Repeat the process to train the other landcover types in the same way.
Training Sample Manager – Merging samples

Once done, use the up and down buttons in the Training Sample Manager to make sure the landcover class values match your classification scheme.

Training Sample Manager – Resetting class values
  • Click the Save Training Samples button in the Training Sample Manager to save the training samples as a shapefile.
  • Name the file “VHR_Segment_Sup_7C.shp” and place it in the following location X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Vector

Training Sample Manager – Saving Training Sample

Evaluation of separability of classes

  • Save your ArcMap project (MXD) using CTRL+S
  • Select all classes from the training sample manager
  • Click on Show Scatterplots. The following scatterplot window will come up:
Analyzing the scatterplots

If you have selected a lot of training samples, this process may take a while; please be patient.

From the scatterplot example above, you can see that water can be separated easily in just one scatterplot, but there is much less separability between the other landcover classes, and urban landcover is not easily separated in any of the scatterplots.
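What your eye does when reading the scatterplots can be given a crude numeric counterpart: for a single band, two classes are easier to separate when their means are far apart relative to their spreads. Real software uses measures such as transformed divergence; the ratio and the sample values below are only an illustration.

```python
# Simple separability ratio: distance between class means divided by the
# sum of their standard deviations (larger -> easier to separate).
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def separability(a, b):
    return abs(mean(a) - mean(b)) / (std(a) + std(b))

# Hypothetical NIR samples: water is dark, vegetation very bright,
# urban is bright but highly variable (mixed roofs, roads, shadow).
water_nir = [8, 10, 12, 9]
veg_nir = [190, 210, 205, 195]
urban_nir = [60, 120, 90, 150]

print(separability(water_nir, veg_nir) > separability(water_nir, urban_nir))
```

The wide spread of the urban samples is exactly why urban landcover looks smeared across every scatterplot.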


Observe your own scatterplots and answer the following questions:

Which pair of bands would you use to separate the landcovers?

Which landcover class has the most mixed pixels?

How would you overcome the mixed pixel problem?

Interactive Image Classification

  • From the Image Classification toolbar, run Interactive Supervised Classification. You will get a temporary result based on the training you have performed.
Running interactive supervised classification
  • Use the Swipe tool to find areas of misclassification and digitise new training samples in those areas
  • Merge the new samples with the correct landcover samples as shown below:
Iteratively classifying samples
First Classification (left) vs Revised classification (right) with better results
  • In the Training Sample Manager window, save the training samples as a new file, “VHR_Ba_25062019_Sup_7C_1.shp”
  • Run interactive supervised classification again and observe the change
  • Repeat this process until you are happy with the result

Landcover is very much a localised phenomenon; adding classes such as shadow and light grass, and splitting the urban cover into roads and buildings, greatly improves the outcome for this specific area. You can follow an 8-class landcover classification scheme like the one below.

Example of 8-class landcover classification scheme

This whole process is very much iterative and may not always produce good results immediately. Do not hesitate to try out new samples and new schemes as you deem appropriate. But for every change, save your training samples in a new file so you can revert if something goes wrong.

“Patience is key and with experience the whole process will be much faster”.

Exporting Classification Results

  • Once you obtain satisfactory results, search for the Copy Raster tool in the Data Management toolbox
  • Save the classified raster as “VHR_Segment_Sup_7C_Final.tif” using the Copy Raster tool to the following location X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Raster
  • Remove all the temporary classification layers from the Table of Contents


Which classification is faster? Explain your thoughts.

Which classification appears more accurate? Explain your thoughts.

If you want to assess the accuracy of these image classifications, what can you do?

Section E: Performing Accuracy Assessment on The Classified Image (Optional)


Accuracy assessment is an important part of any classification project. It compares the classified image to another data source that is considered accurate, or to ground truth data. Ground truth data are traditionally collected by going to the field with GPS devices. Alternatively, experts also use information from high-resolution imagery in Google Earth and Bing Maps. Even though these offer higher detail, they still view the scene from the top, and there is no control over the observation time and date. However, a novel approach is to combine high-resolution satellite base maps and street-view imagery to make a robust assessment of ground truth. This approach allows a user to collect samples over a vast geographical area without having to travel to the field.

Learning Objectives

By going through this exercise, you will be able to:
– Generate accuracy assessment samples
– Prepare ground truth samples
– Compute confusion matrix

Performing Accuracy Assessment

  • Open “M2_Digital_Image_Classification.mxd” from the previous chapter
  • Open the Create Accuracy Assessment Points tool (use the search window to find this tool)
  • Insert the following information:
    – Target Field = Classified
    – Input Raster = “VHR_Segment_Sup_7C_Final.tif” in X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Raster
    – Sampling Strategy = Equalized Stratified Random (generates a set of accuracy assessment points where each class has the same number of points)
    – Number of Points = 50
    – Output Accuracy Assessment Points = “VHR_Segment_Sup_7C_Final_acc.shp” in X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Vector

Now you will find 50 accuracy assessment points in the file VHR_Segment_Sup_7C_Final_acc.shp, where the values from the classified image have already been inserted. The ground truth values, however, are kept as -1 (missing values), which we shall need to enter and edit.

Performing Accuracy Assessment
  • Select the points inside a waterbody
  • Right click on the “GrndTruth” column in the attribute table of the “VHR_Segment_Sup_7C_Final_acc.shp” file
  • Use the Field Calculator
  • Fill the values with 1 (the landcover code for water)
Reassigning values with Field Calculator
  • Repeat the above process to fill in all the values in the GrndTruth column of the attribute table of the “VHR_Segment_Sup_7C_Final_acc.shp” file


  • Run the “Compute Confusion Matrix” tool from the Spatial Analyst toolbox > Segmentation and Classification toolset to conclude on the accuracy of your classification, and save the output as “VHR_Segment_Sup_7C_Final_CM.dbf” in the folder X:\UNOSAT_ADV_FJI\Training_Material\M2\Practical\Data_Output\Vector
Confusion matrix example
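The quantities the tool reports can be computed by hand from the classified/ground-truth pairs: user’s accuracy is read along a classified row (commission errors), producer’s accuracy along a ground-truth column (omission errors). The two-class labels below are invented purely for illustration.

```python
# Build a confusion matrix {classified: {truth: count}} from paired labels.
def confusion(classified, truth, classes):
    m = {c: {t: 0 for t in classes} for c in classes}
    for c, t in zip(classified, truth):
        m[c][t] += 1
    return m

classes = ["water", "urban"]
classified = ["water", "water", "urban", "urban", "water"]
truth      = ["water", "water", "urban", "water", "water"]
m = confusion(classified, truth, classes)

# Overall accuracy: diagonal sum over total points.
overall = sum(m[c][c] for c in classes) / len(truth)
# User's accuracy for water: correct "water" over all points classified water.
users_water = m["water"]["water"] / sum(m["water"].values())
# Producer's accuracy for water: correct "water" over all true water points.
producers_water = m["water"]["water"] / sum(m[c]["water"] for c in classes)
print(overall, users_water, producers_water)
```

Here one true water point was classified as urban, so water’s producer’s accuracy drops below its user’s accuracy; keep this distinction in mind for the questions below.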


Which indicator is most robust for understanding the accuracy of the classification?

What is the difference between user’s and producer’s accuracy?

Tabulate the different landcover areas in square kilometres and as percentages.
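The tabulation follows directly from the pixel size: at the 50 cm resolution stated earlier, each cell covers 0.25 m², so class area = pixel count × 0.25 / 1 000 000 km². The pixel counts below are made up for illustration; real counts come from the classified raster’s attribute table.

```python
# Convert per-class pixel counts into area (km^2) and percentage share.
CELL_AREA_M2 = 0.5 * 0.5          # 50 cm pixels -> 0.25 m^2 per cell

counts = {"water": 1_200_000, "vegetation": 2_000_000, "urban": 800_000}
total = sum(counts.values())

for landcover, n in counts.items():
    km2 = n * CELL_AREA_M2 / 1e6          # m^2 -> km^2
    pct = 100 * n / total
    print(f"{landcover}: {km2:.2f} km2 ({pct:.1f}%)")
```

A spreadsheet built from the raster attribute table works just as well; the arithmetic is identical.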