
Merging DEMs or other raster datasets and clipping Raster dataset to vector line?


I have two DEMs that I am going to create a hillshade from, and ultimately I want one hillshade clipped to a county boundary. I am running into a couple of problems, though.

When I try to merge the two hillshades created from the DEMs, ArcMap crashes. When I try to merge the two DEMs, ArcMap crashes. I am assuming that if I managed to get the DEMs merged, the hillshade created would cause ArcMap to crash, too.

I am using the PLTS Merge Raster Datasets tool, because for some reason the Spatial Analyst tool Raster Calculator does not appear as selectable.

Once I am able to do this and get a hillshade of the two DEMs, is there a specific function for clipping the raster dataset to the county line (which is a vector feature class)?


I've never used the PLTS tools, and the help doesn't really give much info on how the Merge Raster Datasets tool works, but the Mosaic to New Raster geoprocessing tool, in the Data Management toolbox under Raster Dataset, should work fine, and it doesn't require an ArcInfo license or Spatial Analyst.

To clip your resulting mosaic without Spatial Analyst, use the Clip geoprocessing tool, also in Data Management under Raster Processing, and use the county line feature as the Output Extent. If you have the county feature selected and enable the "Use Input Features for Clipping Geometry" option, it will clip to the actual border. If you don't select the feature, it will clip the raster to the bounding box of the feature.
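If you prefer to script it, here is a rough arcpy sketch of the same two steps. All paths and layer names are placeholders, and you may want to check the parameter order against the tool documentation:

```python
import arcpy

arcpy.env.workspace = r"C:\data"  # assumed workspace

# Mosaic the two DEMs into a new raster dataset
# (Data Management > Raster > Raster Dataset > Mosaic To New Raster).
arcpy.MosaicToNewRaster_management(
    input_rasters=["dem_west.tif", "dem_east.tif"],        # placeholder names
    output_location=r"C:\data",
    raster_dataset_name_with_extension="dem_merged.tif",
    pixel_type="32_BIT_FLOAT",
    number_of_bands=1)

# Clip the mosaic to the county polygon
# (Data Management > Raster > Raster Processing > Clip).
# "ClippingGeometry" clips to the polygon outline instead of its bounding box.
arcpy.Clip_management(
    in_raster="dem_merged.tif",
    rectangle="#",                                # let the tool derive the extent
    out_raster="dem_county.tif",
    in_template_dataset="county_boundary.shp",    # placeholder feature class
    nodata_value="-9999",
    clipping_geometry="ClippingGeometry")
```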


Actually, you can find an alternative in the GDAL FOSS software package. Assuming that the raster files are in a format GDAL can read, you could process them using the following steps:

  1. create a virtual raster using the gdalbuildvrt utility; by building a virtual raster you can save on disk space and processing time
  2. use the gdalwarp utility to cut the raster using a prepared cutline in any OGR-supported format

For more information you can check this page: http://linfiniti.com/2009/09/clipping-rasters-with-gdal-using-polygons/. You can also find the GDAL/OGR tools and other FOSS for the Windows operating system in the OSGeo4W binary distribution.
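If you would rather drive GDAL from Python than from the command line, a minimal sketch of the same two steps might look like this (filenames are placeholders):

```python
from osgeo import gdal

# Step 1: build a virtual mosaic of the two DEMs (no data duplication on disk).
vrt = gdal.BuildVRT("dems.vrt", ["dem_a.tif", "dem_b.tif"])
vrt = None  # close the dataset so the .vrt is flushed to disk

# Step 2: warp the virtual mosaic, clipping it with the county polygon as a cutline.
gdal.Warp(
    "dem_clipped.tif",
    "dems.vrt",
    cutlineDSName="county.shp",   # any OGR-readable vector format
    cropToCutline=True,
    dstNodata=-9999)
```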


To clip your hillshade, use Extract By Mask (assuming you have Spatial Analyst) and specify your county feature class as the "feature mask data."
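In script form, a minimal Spatial Analyst sketch (with placeholder paths) would be:

```python
import arcpy
from arcpy.sa import ExtractByMask

arcpy.CheckOutExtension("Spatial")

# Clip the hillshade to the county polygon and save the result.
clipped = ExtractByMask("hillshade_merged.tif", "county_boundary.shp")
clipped.save(r"C:\data\hillshade_county.tif")
```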


Concerning crashes… This may or may not be germane to your specific problem. Whenever it appears that I am doing things right and ArcGIS is inexplicably crashing, my first two actions are:

  • make sure I am logged in as Administrator on the computer.
  • export my data from within ArcGIS (basically just resaving it).

Of course, double check to be sure your Spatial Analyst and any other workflow-related extensions are turned on. Note: I am rolling full ArcInfo licenses, and do not know the behavior in ArcView or ArcEditor.


One word: GlobalMapper. Your raster processing nightmares will be over, all for a whopping $350. What you are trying to accomplish is cake in GM.

EDIT:

Here is how you clip a raster to a vector polygon in GlobalMapper (ver. 13.2 used here):

  1. Load your raster and vector polygon into GlobalMapper

  2. Select the polygon using the Feature Info tool, then turn it off in the Overlay Control Center (otherwise it will be included in your export):

  3. Go to either File|Raster/Image format or File|Export Elevation Grid format (I'm going to do Raster/Image)

  4. Select your export format from the dropdown. Next, in the Export Options dialog, on the Export Bounds tab, select Crop to Selected Area Feature(s):

  5. Continue with the export; you will get a raster with the extent of your clip polygon.


Lesson 3. Activity: Plot Spatial Raster Data in Python

Throughout these chapters, one of the main focuses has been opening, modifying, and plotting all forms of spatial data. The chapters have covered a wide array of these types of data, including all types of vector data, elevation data, imagery data, and more! You’ve used a combination of multiple libraries to open and plot spatial data, including pandas, geopandas, matplotlib, earthpy, and various others. Earth analytics often combines vector and raster data to support more meaningful analysis of a research question. Plotting these various types of data together can be challenging, but also very informative.
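As a reminder of the basic pattern, here is a minimal sketch that overlays a vector boundary on a raster using rasterio, geopandas, and matplotlib. The file paths are placeholders, and both layers are assumed to share the same CRS:

```python
import geopandas as gpd
import matplotlib.pyplot as plt
import rasterio
from rasterio.plot import show

boundary = gpd.read_file("county_boundary.shp")

with rasterio.open("hillshade.tif") as src:
    fig, ax = plt.subplots(figsize=(8, 8))
    show(src, ax=ax, cmap="gray")                # raster layer
    boundary.boundary.plot(ax=ax, color="red")   # vector outline on top
    ax.set_title("Hillshade with county boundary")
    plt.show()
```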


How can you remove DEM edge artifacts in ArcGIS?

As you can see, there are clear tiling artifacts from the original DEM scan; because of this my flow direction model is getting all jacked up, as the interpreted flow direction changes along this tiling edge, as seen here: https://imgur.com/a/60PXwXn

What is the source for the DEM tiles? I'd first start by checking if the tiles have equivalent values for overlapping regions. Just open raster calculator and subtract one from the other. The result should be a raster whose only value is zero. If you get something else, then the problem is with the data itself.

Looks like a sensor/processing issue to me, in two different directions. I don't imagine the tiles would only be 7 or 8 pixels wide.

How to deal with it depends on how critical the original data is to you. A simple moving average would smooth it out to allow for a flow direction - here's a 25x25 pixel mean filter.
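For reference, a mean filter like that can be applied in a few lines of Python; this is a sketch assuming a single-band GeoTIFF DEM (placeholder filenames, no special nodata handling):

```python
import rasterio
from scipy.ndimage import uniform_filter

with rasterio.open("dem_tiled.tif") as src:
    dem = src.read(1).astype("float32")
    profile = src.profile

smoothed = uniform_filter(dem, size=25)  # 25x25 pixel moving average

profile.update(dtype="float32")
with rasterio.open("dem_smoothed.tif", "w", **profile) as dst:
    dst.write(smoothed, 1)
```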

But if you want to retain the original DEM resolution then you'll need to get more creative to identify the original pattern of sensor/processing error and compensate for it. It's been a while since I did this (it was using PCA, but many years ago), so I won't pretend to know the current tools.


Methods

Concept of local climate zones

LCZ typology has been adopted as a baseline description of global urban areas into recognisable types that are formally defined as ‘regions of uniform surface cover, structure, material, and human activity that span hundreds of meters to several kilometres in horizontal scale’, exclude ‘class names and definitions that are culture or region specific’, and are characterized by ‘a characteristic screen-height temperature regime that is most apparent over dry surfaces, on calm, clear nights, and in areas of simple relief’ 27 . Seventeen LCZ types exist, 10 of which are considered ‘urban’ (Fig. 1), and all are associated with characteristic urban canopy parameters (UCP, Table 1). In the current study, two ‘urban’ LCZ classes are not considered: LCZ 7 (lightweight lowrise) referring to informal settlements hardly present in CONUS, and LCZ 9 (sparsely built) characterised by a high abundance of natural land-cover which thus behaves thermally as a natural land-cover.

Urban (1–10) and natural (A–G) Local Climate Zone definitions (adapted from Table 2 in Stewart and Oke 27 , default LCZ colors according to Bechtel et al. 31 ).

An automated (offline) LCZ mapping procedure was suggested by Bechtel et al. 31 , and adopted by the WUDAPT project to create consistent LCZ maps of global cities. To facilitate the expansion of the coverage of LCZ maps, Demuzere et al. 32,34 introduced the transferability concept of labeled Training Areas (TAs) 32 and the use of Google’s Earth Engine cloud computing environment 33 , which allows for up-scaling the default WUDAPT approach to the continental scale (e.g. Europe 34 ). In this approach, the three key operations from the original WUDAPT protocol remain unchanged: (1) the preprocessing of earth observation data as input features for the random forest classifier, (2) the digitization and preprocessing of appropriate training areas, and (3) the application of the classification algorithm and the accuracy assessment 45 . These steps are described in more detail below.

Input data

Earth observation data

The default WUDAPT workflow typically uses a number of single Landsat 8 (L8) tiles as input to the random forest classifier 31 . Here we adopt the approach of Demuzere et al. 34 , by using 41 input features from a variety of sensors and time periods. L8 mean composites (2016–2018) are made for the full year and the summer/winter half-year, and include the blue (B2), green (B3), red (B4), near-infrared (B5), shortwave infrared (B6 and B7) and thermal infrared (B10 and B11) bands. In addition, a number of spectral indices are derived (as composites covering the period 2016–2018): the minimum and maximum Normalized Difference Vegetation Index (NDVI), the Biophysical Composition Index (BCI 46 ) using the Tasseled Cap transformation coefficients from DeVries et al. 47 , the mean Normalized Difference BAreness Index (NDBAI 48 ), the mean Enhanced Built-up and Bare land Index (EBBI 49 ), the mean Normalized Difference Water Index (NDWI 50 ), the mean Normalized Difference Built Index (NDBI 51 ) and the Normalized Difference Urban Index (NDUI 52 ). Synthetic aperture radar (SAR) imagery (2016–2018) is included as well, as this feature was previously found to be key 32,34,53 . In line with Demuzere et al. 32,34 , this study uses the Sentinel-1 VV single co-polarization backscatter filtered by the Interferometric Wide swath acquisition mode and both ascending and descending orbits, composited into a single image (hereafter referred to as S1). From the S1 backscatter composite, entropy and Geary's C (a local measure of spatial association 54 ) images are calculated, using a square 5x5 pixel kernel and a 9x9 spatial neighborhood kernel, respectively. Finally, some other datasets are included such as the Global Forest Canopy Height product (GFCH 55 ), the GTOPO30 digital terrain model (DTM) and derived slope and aspect from the U.S. Geological Survey's Earth Resources Observation and Science (EROS) Center, the ALOS World 3D global digital surface model (DSM) dataset 56,57 and a digital elevation model (DEM) by subtracting the DTM from the DSM. Note that the full set of features is processed on a resolution of 100 meters, following the default mapping resolution suggested by Bechtel et al. 31 . The reader is referred to Demuzere et al. 34 for more details.
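To make the index derivation concrete, here is a small sketch of a few of the listed spectral indices using their standard normalized-difference formulas; the band arrays are placeholders standing in for the Landsat 8 composites described above (b3 = green, b4 = red, b5 = NIR, b6 = SWIR1):

```python
import numpy as np

# Placeholder composite bands; in practice these would come from the
# 2016-2018 Landsat 8 mean composites.
b3, b4, b5, b6 = (np.random.rand(100, 100) for _ in range(4))

def normalized_difference(a, b):
    """(a - b) / (a + b), guarding against division by zero."""
    denom = a + b
    return np.where(denom == 0, 0.0, (a - b) / denom)

ndvi = normalized_difference(b5, b4)   # vegetation: NIR vs. red
ndwi = normalized_difference(b3, b5)   # water: green vs. NIR
ndbi = normalized_difference(b6, b5)   # built-up: SWIR1 vs. NIR
```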

Training data

TA data are generally created by urban experts 31 , a time-demanding procedure, both because of the intrinsic nature of the task (i.e., the extent and heterogeneity of urban areas) and the ability of the urban expert to identify and digitize TAs consistently 58,59 . Here, expert TAs are used from nine U.S. cities: Phoenix and Las Vegas 60 , Salt Lake City 61 , Chicago and New York 62,63 , Houston, Washington D.C., Philadelphia and Los Angeles. The expert TAs from these cities are supplemented with polygons covering LCZ classes E (bare rock or paved) and F (bare soil or sand) to fully capture the spectral signature of the hot desert regions in the southwestern parts of CONUS.

A limitation of the expert TA procedure is that data are collected from only 9 cities (due to the time-demanding procedure described above). To address this limitation, additional training data are created based on a crowd-sourcing platform: MTurk (https://www.mturk.com). MTurk is highly scalable and allows for collecting a large number of urban and natural TAs across CONUS. The following process was used to collect MTurk TAs. First, MTurk participation is limited to workers with a Masters Qualification (i.e., users who have demonstrated high performance on MTurk in previous tasks) from English speaking countries (to avoid confusion from the instructions). The MTurk workers are shown a tutorial and asked to classify a satellite image (500 by 500 m) of an urban or natural area. For each satellite image (https://www.google.com/earth), users are also shown the corresponding Google Street View images (https://www.google.com/streetview/) within the 500 by 500 m area. Based on the satellite and Street View images, MTurk workers are asked to classify the area as a single LCZ. Locations are selected based on: (1) U.S. Environmental Protection Agency Air Quality Monitoring Sites (which are located in all major metropolitan areas) and (2) a supplement of manually chosen locations for LCZs that were under-sampled (from the 60 largest Urban Areas), to ensure that a wide range of built and natural environments are included. For each location, responses are obtained from at least ten unique MTurk workers; only when at least 70% of these workers agree on the classification (defined as the same LCZ or a near-neighbor LCZ) are the MTurk TAs included in the final training dataset.

Three different approaches (using TAs in the nine ‘expert’ cities) are used to compare the consistency between EX and MTurk TAs based on the degree of spatial overlap of the TA polygons: (1) full match where the MTurk TA falls completely within the EX TA, (2) match by centroid where the centroid of the MTurk TA is within the EX TA, and (3) match by intersection where the MTurk TA and EX TA intersect at some point in space. Using each approach we assessed to what degree the EX and MTurk TAs represent the same LCZ. The match percentage was 100% (n [number of matched polygons] = 8) for the full match approach, 87% (n = 69) for the match by centroid, and 65% (n = 141) for the match by intersection (Supplementary Table S1). While differences occur, the degree of consistency is actually higher compared to the results of HUMINEX (HUMan INfluence EXperiment 58,59 ), which indicated large discrepancies between training area sets from multiple ‘experts’, yet nevertheless led to strong improvements in overall accuracy when all sets were used together. Combining expert and crowd-sourced data is therefore a reasonable approach to diversify training data for developing LCZ classification models.

As a final TA preprocessing step, the surface area of large polygons is reduced to a radius of approximately 300 m, following Demuzere et al. 32,34 . These large polygons typically represent homogeneous areas such as water bodies and forests 58,59 , a characteristic that is neither needed nor wanted, as this leads to more imbalanced training data and computational inefficiency of the classifier. In addition, because of the imbalance of the MTurk TA sample (Fig. 2), the number of MTurk TAs in all classes other than LCZ 6 (open lowrise) is increased five-fold, by randomly sampling five small polygons (100 x 100 m) from each original MTurk polygon (black boxes in Fig. 2), excluding LCZ 6 polygons. This results in a more balanced training set consisting of 13,216 polygons (10,323 MTurk and 2,893 EX TAs, Fig. 2).

Number of expert (EX) and Amazon Mechanical Turk (MTurk) training areas used in the CONUS LCZ classification. Black boxes refer to the amount of original imbalanced MTurk TAs.

Classification procedure, quality assessment, and post-processing

As a final step in WUDAPT’s LCZ classification protocol, the random forest algorithm 64 is applied, using the earth observation data and the labeled TAs 30,31 as inputs. The accuracy of the resulting maps is then assessed in two ways, via (1) a pixel-based ‘random-sampling’ and (2) a polygon-based ‘city hold out’ procedure.

The random-sampling procedure is based on the default automated WUDAPT cross-validation procedure outlined in Bechtel et al. 45 and is performed as described in Demuzere et al. 34 . Ten pixels are randomly sampled from each TA (resulting in a total of 132,160 pixels). From the resulting TA pixel pool, 50% is selected as the training set and the other 50% as test set based on a stratified (LCZ type) random sampling. This exercise is then repeated 25 times allowing us to provide confidence intervals around the accuracy metrics described in more detail below.
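A schematic of this repeated stratified split, using scikit-learn and placeholder arrays in place of the real TA pixel pool, might look like the following (the feature count and class labels are illustrative only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 41)          # placeholder: 41 input features per pixel
y = np.random.randint(0, 15, 1000)    # placeholder: LCZ class labels

scores = []
for seed in range(25):                # 25 repetitions of the stratified 50/50 split
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    rf.fit(X_tr, y_tr)
    scores.append(accuracy_score(y_te, rf.predict(X_te)))

print(np.mean(scores), np.percentile(scores, [2.5, 97.5]))  # mean OA and spread
```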

This strategy might lead to a biased accuracy assessment because of the potential spatial correlation in the train and test samples. Therefore, a second approach is applied in line with the methodology used in Demuzere et al. 32 . In this polygon-based city hold out procedure, TAs from all but one of the cities are used to train the classifier, while the remaining TAs from the held-out city are used for the accuracy assessment. This is then repeated for all nine expert TA cities. As the information for training is independent of that used for testing, no bootstrapping is performed. The variability around the accuracy is, in this case, provided by the variable mapping quality for the different target cities. This city hold out approach is equivalent to cross-learning or model-transferability experiments in other recent studies 62,65,66,67 .
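The city hold-out procedure corresponds to a leave-one-group-out split, sketched below with the same kind of placeholder arrays plus a per-pixel city label:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.rand(900, 41)           # placeholder features
y = np.random.randint(0, 15, 900)     # placeholder LCZ labels
city = np.random.randint(0, 9, 900)   # one group label per expert-TA city

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=city):
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[train_idx], y[train_idx])
    oa = accuracy_score(y[test_idx], rf.predict(X[test_idx]))
    print(f"held-out city {city[test_idx][0]}: OA = {oa:.2f}")
```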

For both quality assessment approaches, the following accuracy measures are used: overall accuracy (OA), overall accuracy for the urban LCZ classes only (OAu), overall accuracy of the built versus natural LCZ classes only (OAbu), a weighted accuracy (OAw), and the class-wise metric F1 32,34,58,68,69,70 . The overall accuracy denotes the percentage of correctly classified pixels. OAu reflects the percentage of correctly classified pixels from the urban LCZ classes only, and OAbu is the overall accuracy of the built versus natural LCZ classes only, ignoring their internal differentiation. The weighted accuracy (OAw) is obtained by applying weights to the confusion matrix and accounts for the (dis)similarity between LCZ types 58,70 . For example, LCZ 1 is most similar to the other compact urban types (LCZs 2 and 3), leaving these pairs with higher weights compared to e.g., an urban and natural LCZ class pair. This results in penalizing confusion between dissimilar types more than confusion between similar classes. Finally, the class-wise accuracy is evaluated using the F1 metric, which is a harmonic mean of the user's and producer's accuracy 68,69 .
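In code, the overall, weighted, and class-wise measures can be expressed directly from a confusion matrix; the similarity weight matrix W below is a placeholder for the published LCZ weights, not the actual values:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def overall_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    return np.trace(cm) / cm.sum()

def weighted_accuracy(y_true, y_pred, W):
    """W[i, j] in [0, 1] encodes the (dis)similarity between LCZ classes i and j."""
    cm = confusion_matrix(y_true, y_pred)
    return (cm * W).sum() / cm.sum()

def classwise_f1(y_true, y_pred):
    # Harmonic mean of user's and producer's accuracy, per class.
    return f1_score(y_true, y_pred, average=None)
```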

According to Bechtel et al. 31 , the ideal scale for classification differs from the scale defined by the LCZ concept. More specifically, the optimal resolution for a pixel-based classification should be systematically higher than the preferred LCZ scale (hundreds of meters to kilometres) 27 , to account for non-regular and rectangular shapes of the patches. Consequently, single pixels do not constitute an LCZ and have to be removed. In the classical WUDAPT workflow, the granularity is reduced by a majority post-classification filter with a default radius of 300 m. This however has several shortcomings. Firstly, it does not account for distance, i.e. the center pixel is weighted as heavily as a pixel at the border of the filter mask. Secondly, it does not account for differences in the typical patch size between classes; consequently, linear features like rivers tend to be removed. Finally, it produces some artifacts. Therefore, a different filtering approach was chosen here. For each class the likelihood was defined by convolution of the binary membership mask derived from the initial map (1 if the pixel is assigned to class i, 0 otherwise) with a Gaussian kernel with standard deviation σi and kernel size ≥ 2σi, resulting in a likelihood map per class. Subsequently the class with the highest likelihood was chosen for each pixel. Since the typical patches differ in size between LCZs, σi values of 100 m for LCZ 1, 250 m for LCZ 8 and 10, and 150 m for all remaining urban classes were chosen. For the natural classes, 25 m was chosen for water and 75 m for all other classes. Since these numbers were derived by experts, they introduce a priori knowledge to the procedure and deserve further investigation and adjustment in the future. In particular, optimal σi are assumed to differ between cities and continents.
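A compact sketch of this filtering step, assuming the initial map is a 2D array of class labels and using class-specific sigma values given in metres at the 100 m mapping resolution, could read:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_majority_filter(lcz_map, sigmas_m, resolution_m=100):
    """Smooth per-class membership masks and keep the most likely class per pixel."""
    classes = sorted(sigmas_m)
    likelihood = np.zeros((len(classes),) + lcz_map.shape, dtype="float32")
    for k, cls in enumerate(classes):
        mask = (lcz_map == cls).astype("float32")     # binary membership mask
        sigma_px = sigmas_m[cls] / resolution_m       # metres -> pixels
        likelihood[k] = gaussian_filter(mask, sigma=sigma_px)
    return np.asarray(classes)[np.argmax(likelihood, axis=0)]
```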

Urban canopy parameter and population data

The LCZ scheme is considered to be a universal classification that not only provides a common platform for knowledge exchange and a pathway to model applications in cities with little data infrastructure, but also provides a numerical description of urban canopy parameters (UCPs) that are key in urban ecosystem processes 71 . These UCPs, among others, include the building footprints (BF), average building height (BH), impervious surface area (ISA), the sky view factor (SVF), and the anthropogenic heat flux (AHF). Class-specific, globally generic, UCP ranges are provided in Table 1, and are especially useful in areas where such information is not available/incomplete and/or available at poor spatial/temporal resolutions 30,34 . CONUS does have such datasets available, which allows us to 1) evaluate the LCZ map with these independent datasets and 2) potentially fine-tune the existing generic UCP ranges provided by Stewart and Oke 27 . As outlined above, the LCZ typology is chiefly a description of land-cover but some of the types can be linked to land use and population. For example, compact highrise (LCZ 1) and midrise (LCZ 2) are generally associated with downtown commercial districts in most US cities, although they also include tall residential apartment blocks. Compact lowrise (LCZ 3) is typically associated with densely occupied neighborhoods close to city centers, many of which were built in the early-twentieth century. Open types of all heights (LCZ 4–6) can be linked to the suburban residential areas. Finally, the large lowrise (LCZ 8) and heavy industry (LCZ 10) types are associated with storage units and large emitting facilities, respectively. In other words, one can expect each type to be associated with different populations. To evaluate the LCZs on the basis of this proposition, LCZ types are benchmarked against resident population counts as well.

Building footprints

The Bing Maps team at Microsoft released a nation-wide vector building dataset in 2018 72 . This dataset is generated from aerial images available to Bing Maps using deep learning methods for object classification. This dataset includes over 125 million building footprint polygons in all 50 U.S. States in GeoJSON vector format. The dataset is distributed separately for each state and has a 99.3% precision and 93.5% pixel recall accuracy. Since vector layers are highly challenging for large-scale analysis, Heris et al. 73,74 converted the dataset to a raster format with 30 m spatial resolution in line with the National Land-Cover Dataset (NLCD) grid 29 , providing six building footprint summary variables for each cell. Our study uses the total footprint coverage per grid cell, with values ranging from 0 m 2 (0%, no buildings) to 900 m 2 (100%, completely built).

Building height

To our knowledge, there is currently no publicly-available, recent and high-quality building height (BH, m) dataset that spans the continental United States. Therefore, building height is taken from Falcone 75 , who provides a categorical mapping of estimated mean building heights, by census block group, for the continental United States. The data were derived from the NASA Shuttle Radar Topography Mission, which collected ‘first return’ (top of canopy and buildings) radar data at 30-m resolution in February 2000. Non-urban features were filtered out, so that height values refer to object heights where urban development is present, e.g., buildings and other man-made structures (stadiums, towers, monuments). Due to difficulties in mapping exact building heights, information was aggregated on 216,291 census block groups across CONUS. In turn, block height values were grouped into six categories according to their statistical distribution, labeled ‘Low’, ‘Low-Medium’, ‘Medium’, ‘Medium-High’, ‘High’, and ‘Very High’. Using the building heights and footprints for 85,166 buildings in San Francisco (representative for 2010), the data quality was assessed (correlation of 0.8), and the mean and standard deviation (SD) of actual heights were calculated for block groups where actual building height data were available. This procedure resulted in the following mean (SD) height values: ‘Low’: too few observations to be meaningful, ‘Low-Medium’: 11.5 m (3.2 m), ‘Medium’: 13.1 m (3.1 m), ‘Medium-High’: 16.3 m (4.4 m), ‘High’: 21.7 m (8.2 m), and ‘Very High’: 35.3 m (14.2 m).

The procedure described above makes it clear that this dataset serves as a first-order proxy for actual building height data. Data were taken in the year 2000, which does not correspond to the year 2017 representative for this CONUS LCZ map. As such, benchmarking the LCZ map against this building height dataset neglects a net 6.7% increase in developed urban land, derived as the difference between the NLCD 2016 and 2001 developed land-cover classes 76 . Also, Falcone’s 75 building heights are categorical and reflect the observed variability in San Francisco, which is not necessarily representative for all other CONUS urban areas. The spatial footprint is defined by census block groups, which vary in shape and scale as their original goal is to sample the population. The impact of these limitations is assessed using more recent, high-resolution and freely available datasets from the metropolitan areas of Austin, Boston, Des Moines, Los Angeles and New York, covering over 5 million buildings (Supplementary Table S2).

Impervious surface area

Impervious surface is taken from the National Land-Cover Database (NLCD) 2016 product 29,77 , which provides the percent of each 30 m pixel covered by developed impervious surface (range 0 to 100%). These authors created the dataset in four steps: (1) training data development using nighttime light products, (2) impervious surface modeling using regression tree models and Landsat imagery, (3) comparison of initial model outputs to remove false estimates due to high reflectance from non-urban areas and to retain 2011 impervious values unchanged from 2011 to 2016, and (4) final editing and product generation (see Section 6.1 in Yang et al. 29 for more details).

Sky view factor

Information on sky view factor (SVF) is available for 22 U.S. cities (Atlanta, Baltimore, Boston, Buffalo, Cleveland, Denver, El Paso, Fresno, Las Vegas, Miami, Orlando, Philadelphia, Portland, Richmond, Salt Lake City, San Diego, San Francisco, San Jose, Seattle, Tampa, Tucson, and Washington D.C.) and is obtained from Google Street View (GSV) images that are examined using a deep learning approach 78,79 . A complete sample of GSV locations in each city is retrieved through the Google Maps API; for all locations, an image cube is downloaded in the form of six 90-degree field-of-view images that face upwards, downwards, north, east, south, and west. The images are segmented by a convolutional neural network that was fine-tuned with GSV 90-degree images from cities around the world to yield six classes: sky, trees, buildings, impervious and pervious surfaces, and non-permanent objects 79 . Here, only the SVF is used, which is obtained by projecting the segmented upper half of the image cube into a hemispherical fish-eye to calculate the SVF using sky and non-sky pixels 80 . GSV images are inherently biased towards street locations, and thus greatly under-sample open spaces, including parks, golf courses, backyards, and natural areas in general 78 . Benchmarking with SVF data (ranging between 0 and 100%) is, therefore, only done for the urban LCZ classes within the CONUS domain.

Anthropogenic heat flux

Annual mean anthropogenic heat flux (AHF, Wm −2 ) data are provided by Dong et al. 81 , which are available globally at a spatial resolution of 30 arc-seconds. This product includes four heating components: energy loss, heating from the commercial, residential, and transportation sectors, heating from the industrial and agricultural sectors, and heating from human metabolism.

Population

Resident population counts representative for 2015 are provided by the Global Human Settlement global population grid (GHS-POP) 82,83 . These data are disaggregated from CIESIN’s GPWv4 84 census or administrative units to grid cells with a resolution of 250 m, a manipulation that is informed by the distribution and density of built-up as mapped in the Global Human Settlement Layer dataset 3,5,83 . For other global and continental population datasets, and their fitness-for-use, the reader is referred to Leyk et al. 85 .


The National Imagery Transmission Format (NITF) standard is a raster format defined by the NITF Standards Technical Board. The Joint Interoperability Test Command (JITC) certifies systems implementing the NITF format for compliance with the standard. The NITF/NSIF Module provides JITC-compliant support for the NITF file format and it is required for compliant NITF support in ENVI. The ENVI 5.0 NITF/NSIF Module was tested by the JITC and has been certified to complexity level 7 for NITF 2.1 and complexity level 6 for NITF 2.0 (the highest for each format). ENVI 5.1 is in compliance with these standards.

Contact the JITC for detailed information about the NITF certification program, including functional read/write breakdown and testing anomalies.

The NITF/NSIF Module requires a separate license. Please contact your sales representative.

A valid NITF dataset provides a main header identifying the file as a NITF dataset and describing the contents of the file. The header is usually followed by one or more data segments. Each data segment consists of a segment subheader identifying the type and properties of the data, followed by the data itself.

See the following sections:

Main Header

A NITF dataset may contain any or all types of segments available for that version, but every NITF dataset must contain a main header. The main NITF header describes the entire file, including origination information, security information, file version and size, and the number and type of all data segments contained in the NITF dataset.

Security Segments

The NITF format was designed to contain information deemed sensitive, so it includes header data describing the status of any information that is not available to the general public. The main file header contains security information describing the security level of the entire NITF dataset, and each segment also contains security information in its subheader, as the confidentiality of data within a file may vary. The security level of the entire file (T = Top Secret, S = Secret, C = Confidential, R = Restricted, U = Unclassified) is the same as or higher than that of the most restricted segment in the file. NITF 2.0 uses the same fields as NITF 1.1 to contain security information, while NITF 2.1 deprecated some security information fields and added new fields.

These changes are described in the following table. For a detailed description of these security fields, consult the NITF specifications to determine which metadata are relevant to the version of your NITF file.

NITF 1.1/2.0 Security Fields

Releasing Instructions
Declassification Type
Declassification Date
Declassification Exemption
Downgrade
Downgrade Date
Classification Text
Classification Authority Type

Classification Authority
Classification Reason
Security Source Date

When converting between NITF formats, the security fields will be mapped in accordance to Appendix G of the Implementation Practices Of The National Imagery Transmission Format Standard (IPON), Coordination Draft Version 0.3A, 14 Jan 2005.

Image Segments

Image segments contain raster data, typically image data, intended for display or analysis. Each image segment contains a single image consisting of one or more bands of data (NITF 2.0 allows one, three, or four bands of data in an image, and NITF 2.1 allows up to 999 bands).

All bands within an image segment must have the same data type, dimensions, storage order, and map information, although these characteristics can vary across different image segments. Each image segment may contain specific display instructions, including color lookup tables for single-band images and default display bands for multi-band images.

Images can be stored in integer data types in NITF 2.0 and in integer and real data types in NITF 2.1. Images can also be compressed using a variety of algorithms including JPEG DCT, Vector Quantization, Bi-level, JPEG 2000 NPJE (NITF 2.1 only), and JPEG 2000 EPJE (NITF 2.1 only). Images can be broken into blocks, providing an orderly set of subimages (or subarrays). Additional information describing the collection, intended use, wavelengths, and comments can also be stored with the image.
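Outside ENVI, a quick way to inspect these structures is the GDAL NITF driver; this sketch assumes a placeholder filename and relies on GDAL's convention of exposing header fields as "NITF_"-prefixed metadata and additional image segments as subdatasets:

```python
from osgeo import gdal

ds = gdal.Open("example.ntf")
print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)   # first image segment

for key, value in ds.GetMetadata().items():             # header fields
    if key.startswith("NITF_"):
        print(key, value)

# Additional image segments, if present, are listed as subdatasets.
for name, description in ds.GetSubDatasets():
    print(name, description)
```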

Masks

Mask information stored in image segments identifies pixels that are invalid or otherwise not intended for display.

Images that are rotated or have gaps can also contain a mask indicating which portions of the image should not be used for display or analysis. Two types of image masks are used in NITF files:

  • Blocked image masks are used to mask entire blocks of image data.
  • Transparent pixel masks are used for masking individual pixels or groups of pixels within an image block.

When an image segment containing masked blocks or pixels is displayed, pixels from images or graphics underneath the image segment show through and are displayed even though they would ordinarily be obscured. If a transparent pixel occurs with nothing displayed under it, or if for any other reason there is no display information for a pixel, the background color specified in the main file header is displayed.

Graphic/Symbol Segments

Symbol segments can contain Computer Graphics Metafile (CGM), bitmap, or object elements, while label segments contain graphical text elements. The CGM format allows direct control of all display elements contained in the graphic including color, size, and orientation of objects. CGM graphics can contain complex lines and polygons, as well as displayable text. Multiple annotations can be combined in a single CGM, so symbol segments with CGM graphics may actually contain multiple sets of graphical primitives.

NITF 2.1 files can contain graphic segments with CGM graphic and graphical text elements, while NITF 2.0 files contain two segment types for the same purpose: symbol segments and label segments. Both the NITF 2.0 symbol segment and the NITF 2.1 graphic segment can contain CGM graphics.

The NITF 2.1 graphic segment can only contain CGM graphics, but NITF 2.0 symbol segments can contain other graphical display elements as well. Symbol segments can contain bitmaps (color-mapped bitmaps to be displayed on the composite) or objects (graphics from a limited set of graphical primitives, including lines, arrows, circles, and rectangles).

For NITF 2.1, the bitmap and object symbol types as well as the label segment have been deprecated. Bitmaps are stored in image segments instead of symbols, and object symbols and labels have been removed in favor of the more general and powerful CGM.

Label Segments

Label segments, available only in NITF 2.0, contain displayable text intended to be drawn with the NITF display. In addition to this text, a label segment includes display instructions such as font, color, size, and a background color to display behind the text.

There are many required CGM elements to draw the data contained in a NITF 2.0 label segment. Element details are described in the MIL-STD-2301A specification.

Annotation Segments

NITF 2.0 symbol and label segments, as well as NITF 2.1/NSIF 1.0 graphics segments, are collectively referred to as annotation segments, as illustrated in the following diagram.

Image, text, and extension segments are available in every version of NITF, while label and symbol segments can occur only in NITF 2.0 datasets. Graphic segments occur only in NITF 2.1 datasets.

Because of the similarity between the symbol segments and label segments in NITF 2.0 files, and the graphic segments in NITF 2.1 files, ENVI combines these segments into a single conceptual type (annotation segments). Annotation segments can contain symbol, label, or graphic segments, and they might include text, ellipses, polylines, bitmaps, and other objects. Annotation segments do not exist in any NITF file, and they are not mentioned in the NITF specification documents. They are a simplification used to reduce the overall number of segment types.

Annotation segments and image segments both carry information intended to be displayed graphically, and both are referred to as displayable segments.

Annotation Objects

Because CGM graphics can display multiple graphical elements, each annotation segment must be able to store multiple displayable features, referred to as annotation objects. NITF 2.0 and 2.1 annotation segments can contain multiple CGM annotation objects. Each NITF 2.0 annotation segment can only contain one non-CGM label, bitmap, or object symbol annotation object. The type of object determines which fields will be filled in the annotation object.

Text Segments

Text segments consist of textual information that is not intended for graphical display. Examples are notes explaining target information, or US Message Text Format (USMTF) notes for other users.

Data Extension Segments

Data Extension Segments (DESes) contain data that cannot be stored in the other NITF segments. An example is the NCDRD Attitude Data DES, CSATTA. A list of unclassified, registered DESes is available at the JITC web site.

ENVI only supports NITF Commercial Dataset Requirements Document (NCDRD) DESes. You cannot edit, create, or delete NCDRD DESes through the Metadata Editor.

If a NITF file contains valid supported DESes, they are automatically saved in the output file. When opening a NITF image, ENVI will not read the DES user-defined subheader if the input data format does not mirror the format in the accompanying XML definition file. When writing a NITF file that contains a DES with no corresponding XML file, ENVI passes through this unknown DES in NITF 2.1 and NSIF 1.0 files only. ENVI does not support unknown DESes in NITF 2.0 files. Also see Data Extension Segments.

Display Levels

The NITF format supports multiple image, graphical, and displayable text elements. A NITF dataset can contain all displayable segments (image, graphic/symbol, and label), allowing for raw imagery plus ancillary information. Each displayable segment contains information controlling the location of the display element in the composite. Each segment also contains a display level that determines which elements should be displayed on top of others, obscuring the lower-level displayable elements from view without corrupting the hidden portion of those lower-level displayable elements.

Wavelength Information

Wavelength information can be stored in several different ways in a NITF image segment. The BANDSB TRE contains the most information, followed by the BANDSA TRE, and the band subcategory settings contain the least information. ENVI will attempt to read wavelength information from a NITF file from each of those locations, in order, until wavelength information is found. If no information is present in any of these locations, the file is opened without wavelength information.

References

For more detailed information about the NITF/NSIF format and its components, see the technical specifications on the Reference Library for NITFS Users web site.


Discussion

This study provides one of the first landscape-scale efforts to explore spatial patterns and landscape drivers of dynamic surface-water connections between depressional wetlands and streams in the PPR. These VC wetlands were found to connect to streams predominately through merging with and being subsumed by other wetland features. Both small (2–10) and large (>100) wetland clusters (or complexes of surficially connected or consolidated wetlands) were common across the study area. The consolidation of wetlands was particularly common around lake features, many of which occur in open, flat basins in which excess water can result in 100% to almost 600% increases in surface-water extent (Vanderhoof and Alexander 2015) (Fig. 6). Initial rises in lake levels may merge wetlands with lakes, but wetlands may still retain wetland vegetation and function. However, as lake levels continue to rise, merged wetlands are completely subsumed by lakes and no longer function as independent depressional wetlands (Mortsch 1998). Features were observed to expand and contract in response to variable wetness conditions, connecting and disconnecting lakes, streams and wetlands. Previous work in the PPR documented variability in wetland-to-wetland and wetland-to-stream connectivity as surface water merges in low relief areas and/or wetlands fill and spill (Leibowitz and Vining 2003; Kahara et al. 2009; Shaw et al. 2013; Vanderhoof et al. 2016), and sought to predict connectivity based on storage capacity and spill point elevation (Huang et al. 2011b), temporal changes in surface-water extent (Rover et al. 2011), and wetland vegetation and water chemistry (Cook and Hauer 2007). This study sought to move from the prediction of connections for individual wetlands to explaining variability in the abundance of such surface-water connections on a landscape scale.

The probability of hydrologic connectivity has been most commonly linked to the proximity or distance between depressional wetlands and streams (Tiner 2003; Kahara et al. 2009; Lang et al. 2012). Yet this study found that substantial variation in the mean Euclidean and flowpath distance to stream for VC and NCO wetlands between ecoregions makes it extremely problematic to identify VC wetlands based on distance alone. For example, within 400 m of a stream on the Des Moines Lobe, 78% of the VC wetlands were connected, while the Drift Plains had only 52% of the VC wetlands connected at that same distance. Consequently, while mean distance to stream emerged as an important variable in explaining the abundance of SI and NCO variables, it was not ranked as important in explaining the abundance of VC wetlands. Instead, for VC wetlands, wetland arrangement (wetland to wetland distance), as well as the temporal dynamics of surface-water expansion, also need to be considered. Additionally, in landscapes with little relief, flowpath distance from a fixed spill point to a fixed stream entry point may be less relevant. Surface flows connecting wetlands to streams in this area may not follow a single, theoretical flowpath, but instead are likely to expand and spread across the flat surface as excess water accumulates in a catchment.

The variables considered in the models represent several different factors in determining landscape-scale connectivity including (1) wetland abundance, (2) wetland arrangement (distance variables), (3) the availability of surface water connections (stream and lake abundance, surface water extent), and (4) potential influences on water accumulation and flow (topography and land use variables). However, across the PPR, variability within and between these variables is intrinsically tied to variability in landscape age (since last glacial retreat) and corresponding drainage development across the region (Ahnert 1996). The last maximum glacial extent (the Wisconsin glacier) diverged around the Lowlands ecoregion, leaving the older landscape (>20,000 BP) with a well-developed drainage network (Clayton and Moran 1982). In contrast, the Wisconsin glacier retreated from the Missouri Coteau and Drift Plains ecoregions by 11,300 BP, meaning the drainage system is still developing in these ecoregions. In ecoregions with low drainage development, surface water is being stored in glacially formed depressions (Winter and Rosenberry 1998; Stokes et al. 2007), resulting in an inverse relationship between stream density and surface-water extent (Table 10). The drainage network in the PPR is also increasingly modified with the expansion of ditch networks and tile drainage in association with agricultural activities (McCauley et al. 2015). Ditches, pipes and field tiles can increase connectivity between waterbody features; however, both filling wetlands with soil and lowering the water table through increased water withdrawal can decrease expected surface-water connectivity (DeLaney 1995; Blann et al. 2009; McCauley et al. 2015). Our finding regarding the importance of predicted anthropogenic drainage may be related to the relation between land use and wetland connectivity and wetland loss (Miller et al. 2009; Van Meter and Basu 2015). These potential interrelations merit further study.

It is critical to note that the aim of this analysis was not to document all surface-water connections, recognizing limitations of our input datasets, but instead, to characterize spatial patterns for a subset of wetlands that merge with a stream over a wide range of wetness conditions and a relatively large study area. A complete analysis of wetland-to-stream connectivity would also need to consider narrow and temporary (e.g., in response to rain events and peak snow melt conditions) surface connections, groundwater connections, as well as chemical and biological connections (U.S. EPA 2015). This analysis allowed us to identify regionally relevant parameters that can provide a preliminary means to explain variability in the abundance of wetlands that affect streamflow and are subject to regulatory programs. Patterns in VC wetland abundance, for example, demonstrate that wetland abundance and arrangement in combination with expanding surface-water extent provides important opportunities for wetlands to merge with streams, a finding consistent with related literature. Limitations of this study are potential bias due to unmeasured variables and the glacial history of the landscape, which may complicate efforts to apply these variables to different ecoregions.

Further, patterns in the mechanism of connection show that in addition to SI wetlands, depressional wetlands and open waters can play critical roles in moving surface water across the landscape. These findings are particularly relevant to floodplains, permafrost landscapes and formerly glaciated landscapes that often exhibit low topographic gradients, low rates of infiltration, and low stream density. Runoff events in these landscapes rarely satisfy the threshold surface storage volume so that excess surface water (precipitation inputs exceeding soil infiltration and evapotranspiration) tends to accumulate instead of leaving the watershed as stream discharge (Hamilton et al. 2004; Yao et al. 2007; Aragón et al. 2011; Kuppel et al. 2015), leading to wetland consolidation and surface-water connections.


4. Challenges of measuring population exposure to SLR

This review found numerous challenges in the literature when measuring population exposure to SLR and related impacts. The estimates are based on gridded datasets that include DEMs, flooding and extreme sea-levels, and population distribution. Hinkel et al (2014), Lichter et al (2011), and Mondal and Tatem (2012) have shown that estimates of land and population exposure to SLR and coastal flooding vary significantly according to which datasets are employed. Final estimates depend on the input data and on decisions about key parameters such as time horizons, warming scenarios, and the ecological or socioeconomic processes and feedbacks (including adaptation measures) that are assumed. Four main challenges are discussed here based on our review and analysis.

First, estimates of populations exposed to SLR rely on elevation data to define zones of inundation or potential hazard parameters (Small and Nicholls 2003, Ericson et al 2006, Mcgranahan et al 2007, Lichter et al 2011). Global DEM datasets include GLOBE, which combines six gridded DEMs and five cartographic sources; the US Geological Survey GTOPO30, which combines eight raster and vector sources of topographic information; and the Shuttle Radar Topography Mission (SRTM) elevation data, with a vertical resolution of 1 m and spatial resolution of approximately 90 m at the equator (Mcgranahan et al 2007, Lichter et al 2011, Brown et al 2018). Any DEM has vertical and horizontal uncertainties (Wolff et al 2016). For example, while there are enhanced versions of SRTM data (see Mondal and Tatem 2012, Kulp and Strauss 2019), SRTM datasets have uncertainties in urban and forested areas where radar technologies capture infrastructure or tree elevation as opposed to ground elevation (Dasgupta et al 2011, Marzeion and Levermann 2014). Global mean error in SRTM's 1–20 m elevation band has been found to be 1.9 m (and 3.7 m in the US) (Kulp and Strauss 2019). Choice of DEM also has significant effects on estimates. Hinkel et al (2014) found the estimated number of people flooded according to the GLOBE elevation model to be double that calculated using the SRTM elevation model, and Kulp and Strauss (2019) found that using CoastalDEM instead of SRTM resulted in estimates of population exposure to extreme coastal water level that were three or more times higher. Improvements in elevation datasets are required to enable accurate estimates of land area exposure (Gesch 2009).

Second, estimating coastal floodplains and potential coastal flooding requires datasets on extreme sea levels. A significant limitation of flood analysis at all scales is limited availability of accurate datasets (Gesch 2009, Mondal and Tatem 2012, Neumann et al 2015). In their analysis of coastal flood exposure, Muis et al (2017) found that correcting vertical data of sea-level extremes and land elevation for two sea-level datasets—DINAS-COAST Extreme Sea Levels (DCESL) dataset and the GTSR dataset—resulted in an increase of 16% and 20% respectively in flood exposed land, and 39% and 60% respectively for exposed populations. Moreover, there are other drivers of flooding including storm intensity, and climate change affects frequencies, magnitudes, and tracks of storms thereby yielding low confidence in how storm surges and extreme sea levels may alter over time (Marzeion and Levermann 2014, Brown et al 2018, Knutson et al 2019). Hazard parameters must be set within models, but it is not always clear which hazard parameters to select, or whether to select extremes or means.

Third, estimates of population exposure to SLR require population distribution datasets. Key datasets are the Gridded Population of the World (GPW), the Global Rural Urban Mapping Project (GRUMP), and the LandScan Global Population database, all of which are developed from census data. Census data are available by census accounting units, with uncertainty in the spatial distribution of populations within each unit. GPW was the first global and widely available dataset that transformed census data to a grid; it emphasises input data rather than modelling distributions (Nicholls et al 2008a). GRUMP combines population data with census units, allocating people into urban or rural areas to coincide with UN estimates and using an urban extent assessment derived mostly from the night-time lights dataset of the US National Oceanic and Atmospheric Administration (NOAA) (Mcgranahan et al 2007, CIESIN, IFPRI, World Bank and CIAT 2011, Mondal and Tatem 2012). LandScan disaggregates census data within administrative boundaries based on weightings derived from land cover data, proximity to roads, slope, and populated areas (Bhaduri et al 2007, Mondal and Tatem 2012). As always, all these datasets have limitations. Spatially detailed census data are often not available for low-income countries; some census data are over 10 years old; informal settlements and undocumented people might not be accounted for in census data; and datasets that use night-time lights as a proxy for population can miss smaller coastal settlements with limited development and where electricity supply is intermittent or unavailable (Dugoua et al 2017). Different datasets produce differing population distributions. An analysis of variation in estimates of populations in LECZ as derived from LandScan and GRUMP found that eight of the top ten locations with the largest differences in estimates were small low-lying island countries or territories, including for example Tuvalu (Mondal and Tatem 2012). Consequently, the limited spatial resolution of census data means there is uncertainty as to the location of populations relative to SLR and its related hazards (Small and Nicholls 2003, Foley 2018).
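At its simplest, the exposure calculation that these studies perform combines the two gridded inputs discussed above. The sketch below sums a population grid over cells whose DEM elevation falls at or below a chosen threshold; filenames are placeholders and the two rasters are assumed to be co-registered (same grid, extent, and CRS):

```python
import rasterio

THRESHOLD_M = 10  # e.g. a 10 m low-elevation coastal zone

with rasterio.open("dem.tif") as dem_src, rasterio.open("population.tif") as pop_src:
    dem = dem_src.read(1).astype("float64")
    pop = pop_src.read(1).astype("float64")
    if pop_src.nodata is not None:
        pop[pop == pop_src.nodata] = 0.0    # treat nodata as zero population

exposed = pop[(dem >= 0) & (dem <= THRESHOLD_M)].sum()
print(f"Population in the 0-{THRESHOLD_M} m elevation band: {exposed:,.0f}")
```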

Finally, many studies set specified levels of future GMSLR based on different emission and warming pathways over the coming decades, centuries, and millennia (e.g. Pfeffer et al 2008, Brown et al 2016, Clark et al 2016, Mengel et al 2016). Debates regarding GMSLR estimates and forecasts relate to spatial variations, temporal uncertainties, rates of ice mass loss especially from Greenland and Antarctica, ocean dynamics, emission scenarios, and changes in gravity associated with water mass redistribution, leading to significant regional variations from the global mean (Clark et al 2016, Jevrejeva et al 2016, Mengel et al 2016, Geisler and Currens 2017). Subsidence and isostatic uplift further affect local sea level projections (Hinkel et al 2014, Erkens et al 2015, Brown and Nicholls 2015). There are temporal uncertainties in forecasting GMSLR associated with projected rates (e.g. Bindoff et al 2007, Solomon et al 2007), natural multi-decadal oscillations (Sérazin et al 2016), and the pace of ice mass loss from the Greenland and Antarctica ice sheets (Tol et al 2006, Hansen 2007, Pfeffer et al 2008, Nicholls et al 2011, Jevrejeva et al 2016). Thus, future GMSLR is uncertain, something that many studies address by using higher and lower bounds of GMSLR in their analyses (IPCC 2019, c.f. Marzeion and Levermann 2014).

In summary, reliability of the estimates of both current and future population exposure to GMSLR and related hazards depend on the reliability of input datasets, with precision not always reflecting accuracy. Global quantitative estimates rely on global datasets, yet there are widely acknowledged challenges in estimating land elevation, extreme sea levels, population distribution, and GMSLR scenarios. These problems are amplified for studies seeking to estimate population exposure to GMSLR in the future. For example, the uncertainties in current population distribution estimates mean that future estimates also have large uncertainties.


The Shp2Vec Tool

The Shp2Vec (shape file to vector data) tool is in the SDK\Environment SDK folder. This tool takes vector shape data and creates the BGL files used by Prepar3D. The vector shape data is in standard ESRI shape file format. This format includes three file types: .shp (shape data), .shx (shape index data), and .dbf (a database file containing attributes of the shape data). The ESRI shapefile is a public domain format for the interchange of GIS (geographical information system) data. The wide availability of tools which support the shapefile format and the fact that the format is publicly documented made it the logical choice as the input to the Shp2Vec tool.

The Shp2Vec tool is command line driven, and takes the following parameters:

The path points to the folder containing all the shape data. Use a single period to specify the current folder. All shape files in the specified folder, but not any sub-folder, are examined by the tool if the filename contains the specified string. The first three characters of a file can be anything, but all the remaining characters, except the extension, must be identical to the string. As a convention, the first three characters are set to FLX for airport boundaries, HPX for hydro polygons, and so on (see the table below for a full list), and then the common string is set to the quad numbers. The quad numbers can be located in the Base File Information grid (for example, Hong Kong SAR is in the quad 78 24). The Shp2Vec tool then simply looks for files with the quad string present in the name. For example, if you look at the filenames in the SDK\Environment SDK\Vector Examples\Example1\SourceData folder you will see that they all have 7824 as the common string. This means that entering the following line will process all the shape files:

Of course, other file naming conventions could be used. The default behavior of the tool is to provide replacement data for the quad cell. The flag -ADDTOCELLS is optional, and simply indicates the new data should be merged with existing data, and not replace it.

To perform a task with the Shp2Vec tool, such as replacing vector data, using surfaces to exclude autogen or land classifications, or flattening an area to use as an airport boundary, follow this general process:

  1. Use a shape file editor to create the vector data, or surface shapes, that are the basis of the change. A shape file editor is not provided as part of the SDK. There should be one flavor of data per shapefile. Most vector types only require 2D co-ordinates, but ensure that vector data for AirportBounds and WaterPolys is 3D (that is, includes elevation data).
  2. Create a single folder with the shapefiles in it, named according to a convention but fitting the format described above. Each shapefile consists of a .dbf, a .shx, and a .shp file.
  3. Add to the folder one of the proprietary XML files supplied with the SDK for each flavor of data. These XML files provide information to the tool. It is very important to point out that they should not be edited; the GUIDs and values in them are hard-coded in Prepar3D, and there is no value in altering the content of these files. However, it is important to rename them within the folder to match the naming of the shape files. So, for example, the XML file FLX7824.XML is named to match the airport boundary data of Hong Kong SAR airport contained in the FLX7824 shapefile.
  4. The .dbf file can be edited in a shapefile editor or dbf editor. Usually these edits take the form of replacing GUIDs to match the task you wish to complete. Notice that the columns of the .dbf file exactly match the data definitions in the XML file.
  5. Run the Shp2Vec tool, giving your folder as input.

The table below links to typical XML files, which are all used in Shp2Vec Example 1, except the exclusions vector data, which is used in Shp2Vec Example 2.

The .dbf file contains two columns, UUID and GUID. Enter the following GUIDs in the GUID column to achieve flattening, Autogen exclusion, or land class exclusion. Airport boundaries are one example of the use of Flatten + MaskClassMap + ExcludeAutogen.
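
As a hedged illustration of the .dbf edit described in step 4 above, the sketch below uses the open-source pyshp library to rewrite the GUID column of a shapefile's attribute table. The filename FLX7824 and the placeholder GUID string are assumptions for illustration only; the real GUID values must come from the tables referenced in this documentation, and the exact GUID text format stored in the .dbf should be checked against an existing example file.

```python
# Minimal sketch (assumptions: pyshp is installed, a shapefile set named
# FLX7824 exists, and its .dbf has a GUID column; the GUID below is a
# placeholder null GUID, not a real flatten/exclusion GUID).
import shapefile  # pyshp

PLACEHOLDER_GUID = "{00000000-0000-0000-0000-000000000000}"

reader = shapefile.Reader("FLX7824")
field_names = [f[0] for f in reader.fields[1:]]        # skip DeletionFlag
guid_idx = field_names.index("GUID")

writer = shapefile.Writer("FLX7824_edited", shapeType=reader.shapeType)
writer.fields = reader.fields[1:]                      # copy field definitions

for shape_rec in reader.iterShapeRecords():
    record = list(shape_rec.record)
    record[guid_idx] = PLACEHOLDER_GUID                # replace the GUID value
    writer.record(*record)
    writer.shape(shape_rec.shape)                      # copy geometry unchanged

writer.close()
```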

The .dbf file for exclusion polygons has a GUID column, and there are three options for what to enter here:

1. Exclude all vector data by using a null GUID (all zeros).

2. Exclude general classes of features that have an attribute block (for example, all roads or all water polygons) by using one of the GUIDs from the Vector Attributes table. Note that some of the entries affect vector autogen associated with the excluded feature.

3. Exclude specific types of shapes (for example, only one lane gravel roads or only golf course polygons) by using one of the GUIDs listed in Vector Shape Properties GUIDs.

Notes
  • The Shp2Vec tool does not process vector data that crosses 180 degrees longitude, or the Poles.
  • The clip levels are the defaults used by Prepar3D data. Clip level 11 is QMID level 11. Setting higher clip levels will create more detailed data, but at the expense of size and performance. Note the use of clip level 15 for freeways.

Vector Attributes

The following table gives the GUIDs that apply for each type of vector data. Refer to the example XML files on how they are used.

F = traffic moves from reference node (one direction, cars drive away from vertex 0 towards vertex N).

T = traffic moves towards reference node.

Notes
  • The unique IDs that are marked as "does not go into the simulation" are used to troubleshoot issues with the data creation process.
  • The texture GUIDs (noted in the Attribute Block column) reference into the Scenery Configuration File.

Shp2Vec Sample 1: Hong Kong SAR Area

The Vector Examples/Example1/SourceData folder includes all the files for all the types of vector data present around Hong Kong SAR. To run the sample, type the following in the command line after navigating to the SDK\Terrain folder.

The BGL file will be written to the same folder as the data; an example of the output is in the Vector Examples/Example1/Output folder. The command-line output will give some statistics when the tool runs correctly; for example, the output from running the tool with the example data above is shown below:

Shp2Vec Sample 2: Excluding and Replacing around Hong Kong SAR (VHHH).

The Vector Examples/Example2/SourceData folder includes a sample excluding and then replacing AirportBounds geometry around Hong Kong SAR. To run the sample, type the following in the command line after navigating to the SDK\Terrain folder.

Note the importance of the -ADDTOCELLS flag to add this data to the existing vector data. In the TmfViewer tool, the new airport boundary is shown in the following image.


Use GIS software to query spatial data.

Spatial data updates are accessed, read, interpreted and edited to ensure they are in an acceptable format to meet functional requirements.

Entities and attributes are used to display spatial information that will assist in the delivery of spatial information services.

Entity and attribute queries of spatial data are used to generate summary results.

Results from queries are used to present spatial data graphically according to organisational guidelines.

Entity and attribute queries are applied when using univariate statistics to explore the dataset.
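
As a software-neutral illustration of combining an attribute query with univariate statistics, the sketch below uses the open-source GeoPandas library; the layer and field names (parcels.shp, landuse, area_ha) are hypothetical.

```python
# Minimal sketch: attribute query feeding univariate statistics with GeoPandas.
# 'parcels.shp', 'landuse' and 'area_ha' are hypothetical names.
import geopandas as gpd

parcels = gpd.read_file("parcels.shp")

# Attribute query: keep only residential entities.
residential = parcels[parcels["landuse"] == "residential"]

# Univariate statistics (count, mean, std, min, quartiles, max) on one attribute.
print(residential["area_ha"].describe())
```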

Routine spatial data problems or irregularities are solved in the course of the activity or via consultation with relevant personnel.

Keyboard and computer hardware equipment are used to meet functional requirements on speed and accuracy and according to OHS requirements.

Solve problems using GIS software.

Existing spatial and aspatial data is adjusted to integrate with new data to meet documentation and reporting requirements and to add to personal learning and organisational intelligence.

Geospatial techniques in appropriate software are used to combine spatial data layers to solve problems, highlight selected data features and improve the visual aspect and understanding of the project.

Spatial overlay techniques are used to solve problems and generate results pertaining to the spatial project as specified by relevant personnel.
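
A minimal, software-neutral sketch of a spatial overlay is shown below, intersecting a flood-extent layer with parcels using GeoPandas; the layer names are hypothetical and both inputs are assumed to share the same CRS.

```python
# Minimal sketch: polygon-on-polygon overlay (intersection) with GeoPandas.
# 'parcels.shp' and 'flood_extent.shp' are hypothetical inputs in the same CRS.
import geopandas as gpd

parcels = gpd.read_file("parcels.shp")
flood = gpd.read_file("flood_extent.shp")

# Overlay: parcels clipped to the flood extent, attributes from both layers kept.
flooded_parcels = gpd.overlay(parcels, flood, how="intersection")
flooded_parcels.to_file("flooded_parcels.shp")
```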

Cartographic integrity is tested and validated to solve accuracy and quality problems.

Produce reports based on basic spatial analysis.

Maps or plans are integrated into project reports.

Results, summary statistics and graphs from a mapping application are incorporated into a project.

Legal and ethical requirements are adhered to according to organisational guidelines.

Spatial dataset to be archived is manipulated where necessary to ensure completeness.

Metadata is created according to accepted industry standards.

New and existing spatial data is stored and archival details are recorded according to organisational guidelines.



GIS Methods

For GIS analysis of the trails and hikes in Rocky Mountain National Park, the software package ArcGIS from ESRI was utilized. This allowed for convenient spatial analysis, map generation, and information storage for this entire endeavor.

Data Processing

After all of the needed data was identified and collected, data processing could begin. The beginning layers included a Rocky Mountain National Park (RMNP) boundary shapefile, bounding quads shapefiles, RMNP trails and trailhead shapefiles, lakes and ponds shapefiles, streams and rivers shapefiles, a roads shapefile, a cities shapefile, a counties shapefile, elk and bighorn sheep winter range shapefiles, and a digital elevation model (DEM) grid. The first step in data processing was to define the study area by including all bounding quads that contain some part of Rocky Mountain National Park. Additionally, a trail segment was digitized representing the well-known Keyhole route to the summit of Long's Peak.

Trail Digitization

Then, a common projection was chosen for the project. A projection is the conversion of a spherical object to a flat surface (Theobald, 2007). The UTM North American Datum 1983 projection with the GCS 1983 North American coordinate system was chosen because the UTM (Universal Transverse Mercator) projection is broken into a grid of 60 zones, each 6° wide with 0.5° of overlap and stretching from 80°N to 80°S, which allows for a maximum error of 1 per 2,500 for each zone (Theobald, 2007). This projection is best used for small areas, such as Rocky Mountain National Park, because it minimizes distortion of shape and angles while distorting area (Monmonier and de Blij, 1996). This is known as conformality. This project's zone of interest is 13N. Most of the data was already in this common projection, but each layer was checked to confirm. Any data that was not already in UTM North American Datum 1983 Zone 13N was converted by changing the projection in the properties table.
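
One scripted way to bring a layer into this common projection is sketched below with arcpy; the input and output names are hypothetical, and EPSG:26913 is assumed here as the NAD83 / UTM zone 13N definition.

```python
# Minimal sketch: reproject a layer to NAD83 / UTM zone 13N with arcpy.
# 'trails.shp' and the output name are hypothetical.
import arcpy

utm13n = arcpy.SpatialReference(26913)  # NAD83 / UTM zone 13N
arcpy.Project_management("trails.shp", "trails_utm13n.shp", utm13n)
```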

Next, a shapefile was created from XY data giving the coordinates for place names of mountain summits and waterfalls. A spatial reference had to be defined for the new shapefile so that it could be projected along with the rest of the data, and the projection was defined as UTM North American Datum 1983 Zone 13N, the common projection for the project. The data was then verified by overlaying the place names on a DRG quad to make certain the points lined up.
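
A hedged arcpy sketch of building point features from an XY table follows; the table name, field names, and output path are assumptions for illustration.

```python
# Minimal sketch: create points from XY coordinates and save them as a feature
# class, defining NAD83 / UTM zone 13N as the spatial reference.
# 'place_names.csv', 'X', 'Y' and the output name are hypothetical.
import arcpy

utm13n = arcpy.SpatialReference(26913)
arcpy.MakeXYEventLayer_management("place_names.csv", "X", "Y",
                                  "place_names_lyr", utm13n)
arcpy.CopyFeatures_management("place_names_lyr", "place_names_points.shp")
```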

Then, a file geodatabase structure was conceived and a working file system was defined. The file geodatabase system was chosen to ensure adequate memory capacity. File geodatabases have 1 TB of file space, which allows for large data storage, faster query performance, and the ability to operate across different operating systems (Theobald, 2007). File geodatabases have a hierarchical file organization starting with the feature dataset, which defines the spatial extent and projection of the geodatabase (Theobald, 2007). The feature dataset also contains the feature classes, which are points, lines, or polygons that share the same type of geometry (Theobald, 2007). Any raster data such as grids are stored outside feature datasets but still within the file geodatabase. There are also relationship classes that store relationships between features so behavior can be modeled; however, none were used in this project (Theobald, 2007). This project had two file geodatabases. The first stored all raw and processed data, including all of the hikes derived from combining individual trail segments through data analysis, and the base map. The second file geodatabase stored the updated trail dataset as well as the zonal statistics tables and summarized hike tables.
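
A minimal arcpy sketch of setting up such a structure is shown below; the folder path, geodatabase name, and feature dataset name are hypothetical.

```python
# Minimal sketch: create a file geodatabase and a feature dataset that fixes
# the projection for the feature classes stored inside it. Paths are hypothetical.
import arcpy

utm13n = arcpy.SpatialReference(26913)
arcpy.CreateFileGDB_management(r"C:\RMNP", "rmnp_data.gdb")
arcpy.CreateFeatureDataset_management(r"C:\RMNP\rmnp_data.gdb", "trails_fd", utm13n)
```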

Within the trail dataset, a field called shape_length contained a value that represented the flat-line length of the trail segment. In order to obtain a more realistic length for each segment, the elevation differences in the trail were accounted for. To achieve this, the surface length tool under 3D Analyst in ArcToolbox was used. This created an additional field in the trails data table called surface_length.
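
One scripted way to derive a surface length from a DEM is sketched below with 3D Analyst's Add Surface Information tool; note that this tool writes an SLength field rather than a field literally named surface_length, and the input names are hypothetical.

```python
# Minimal sketch: compute surface (3D) length for trail segments from a DEM
# using 3D Analyst's Add Surface Information tool. Inputs are hypothetical;
# the tool adds an 'SLength' field to the input feature class.
import arcpy

arcpy.CheckOutExtension("3D")
arcpy.AddSurfaceInformation_3d("trails_utm13n.shp", "rmnp_dem",
                               "SURFACE_LENGTH", "BILINEAR")
arcpy.CheckInExtension("3D")
```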

All of the trail vector data was converted to raster format at this time for future data analysis using the polyline to raster conversion tool in ArcToolbox. Before the conversion, however, three fields had to be added. The first field was called raster_ndx, which was populated with a value of one. This value was then used to assign a value of one to each of the raster cells during the raster conversion. The next field was called difficulty_index. This field was left null and would be used to store the trail difficulty ratings that would be calculated later. The last field was called hike_id, which was a join field for summary values that would also be calculated later.
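
A hedged arcpy sketch of this step follows, using the field names given above; the file names, field types, and cell size are assumptions.

```python
# Minimal sketch: add the three fields described above, set raster_ndx to 1,
# then convert the trail polylines to a raster. File names and the 10 m cell
# size are hypothetical.
import arcpy

trails = "trails_utm13n.shp"
arcpy.AddField_management(trails, "raster_ndx", "SHORT")
arcpy.AddField_management(trails, "difficulty_index", "DOUBLE")
arcpy.AddField_management(trails, "hike_id", "TEXT")

arcpy.CalculateField_management(trails, "raster_ndx", "1", "PYTHON_9.3")
arcpy.PolylineToRaster_conversion(trails, "raster_ndx", "trails_ras",
                                  "MAXIMUM_LENGTH", "NONE", 10)
```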

The data, including trails, roads, rivers and streams, lakes and ponds, cities and counties, and the place name data, could then be clipped to include only points, lines, or raster cells within the coordinate bounds of the RMNP boundary. This used several different tools from ArcToolbox, including the feature Clip and raster Clip tools.
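
The sketch below shows one vector clip and one raster clip with arcpy; the layer names and NoData value are hypothetical.

```python
# Minimal sketch: clip a vector layer and the DEM to the RMNP boundary.
# Layer names are hypothetical.
import arcpy

boundary = "rmnp_boundary.shp"

# Feature clip (Analysis toolbox).
arcpy.Clip_analysis("roads.shp", boundary, "roads_clip.shp")

# Raster clip (Data Management toolbox), using the boundary geometry itself.
arcpy.Clip_management("rmnp_dem", "#", "rmnp_dem_clip",
                      boundary, "-9999", "ClippingGeometry")
```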

At this point, data analysis could begin. First, a hillshade of the DEM was constructed using the surface analysis tools in ArcGIS, and the other map layers were layered with transparency on top of the hillshade to create a relief-looking map.
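
A minimal scripted equivalent, using Spatial Analyst's Hillshade function, is sketched below; the raster names are hypothetical, and the azimuth and altitude shown are the tool's usual defaults.

```python
# Minimal sketch: build a hillshade from the clipped DEM with Spatial Analyst.
# Raster names are hypothetical; 315/45 are typical default sun parameters.
import arcpy
from arcpy.sa import Hillshade

arcpy.CheckOutExtension("Spatial")
hs = Hillshade("rmnp_dem_clip", 315, 45)
hs.save("rmnp_hillshade")
arcpy.CheckInExtension("Spatial")
```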

Then, zonal statistics were performed on the trail segment rasters. Zonal statistics provides summary statistics for each zone in a zone layer and only integer raster data can be input as a zone layer (Theobald, 2007). The parameters of interest were maximum elevation, minimum elevation, and mean elevation.
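
A hedged arcpy sketch of producing those statistics as a table follows; the raster and table names are hypothetical.

```python
# Minimal sketch: zonal statistics (min, max, mean elevation) for the trail
# raster zones against the DEM. Names are hypothetical.
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")
ZonalStatisticsAsTable("trails_ras", "Value", "rmnp_dem_clip",
                       "zonal_elev_stats", "DATA", "MIN_MAX_MEAN")
arcpy.CheckInExtension("Spatial")
```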

To be able to join and relate tables there must be a common table field, most commonly a name or number ID (Theobald, 2007). In this case, a field called seg_oid was added to the attribute table for the trail raster data and was populated with the corresponding object ID from the trails dataset. The individual zonal statistics tables for each trail segment were then merged using the Merge tool in ArcToolbox. This table was then joined to the updated trails dataset, which then contained the original trail information, the three added fields (raster_ndx still with a value of one, the other two still null), and the zonal statistics for each trail segment.
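
A minimal arcpy sketch of the merge-and-join step follows; the table names and joined field list are assumptions (MIN, MAX, and MEAN are the fields a zonal statistics table would carry for the statistics above).

```python
# Minimal sketch: merge per-segment zonal statistics tables and join the
# result to the trails dataset on a shared ID field. Names are hypothetical.
import arcpy

arcpy.Merge_management(["zonal_seg_001", "zonal_seg_002"], "zonal_all")
arcpy.JoinField_management("trails_utm13n.shp", "seg_oid",
                           "zonal_all", "seg_oid",
                           ["MIN", "MAX", "MEAN"])
```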

Using the field calculator, the Trail Difficulty Rating (TDR) was derived using the following equation:

At this point, the difficulty_index field was populated with the results of the field calculation. This left the hike_id field still null.

The next step in the spatial analysis of the trails in RMNP was to use data queries to select trail segments based on different criteria to create theme hikes for potential patrons. To this point, all trail data were composed of a network of separate trail segments. To create meaningful information from the compiled trail data, the concept of a hike was formulated. A hike was defined as any trail segments that connected a trailhead to a specific destination. These hikes included hikes to lakes, hikes to waterfalls, hikes to mountain summits, hikes through wildlife habitat, hikes to historic sites, and family-oriented hikes. To identify destinations for hikes, locational selection queries were performed to select trail segments within: 10m of a lake or pond, 25m of a waterfall, 100m of a summit, 100m of a historic site or structure, 100m of bighorn sheep winter range, and trails with a centroid within a polygon defined by elk winter range. The winter ranges for the two wildlife species were used because the animals tend to group in the winter and migrate to lower elevations. These factors would lead to a better chance of seeing wildlife and not having to hike as far to see them.
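
One of these locational selections (waterfalls at 25 m) is sketched below with arcpy; the layer names and output are hypothetical.

```python
# Minimal sketch: select trail segments within a distance of hike destinations
# (here, 25 m of waterfalls) and export the selection. Names are hypothetical.
import arcpy

arcpy.MakeFeatureLayer_management("trails_utm13n.shp", "trails_lyr")
arcpy.SelectLayerByLocation_management("trails_lyr", "WITHIN_A_DISTANCE",
                                       "waterfalls.shp", "25 Meters",
                                       "NEW_SELECTION")
arcpy.CopyFeatures_management("trails_lyr", "waterfall_hike_segments.shp")
```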

From the results of each of the preceding queries, adjacent trail segments leading back to a trailhead were also selected and exported as one new shapefile. Finally, the hike_id field was populated with a name for these combined trail segments. The TDRs were then summarized on the hike_id field, creating a new table for each hike that contained hike_id, the summed difficulty_index, and trail length in miles. These summary tables were joined to each hike dataset on the hike_id field. Each joined table was then imported into the first file geodatabase.
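
A hedged arcpy sketch of the summarize-and-join step follows; the feature class, table, and length field names are assumptions, and the TDR formula itself is not reproduced here.

```python
# Minimal sketch: sum the difficulty index (and a length field) per hike_id,
# then join the summary back to the hike feature class. Names are hypothetical.
import arcpy

arcpy.Statistics_analysis("waterfall_hikes.shp", "hike_summary",
                          [["difficulty_index", "SUM"],
                           ["surface_length", "SUM"]],
                          "hike_id")
arcpy.JoinField_management("waterfall_hikes.shp", "hike_id",
                           "hike_summary", "hike_id")
```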

Finally, map creation could begin. First, a base map was generated that included the hillshade and base data: summits and waterfalls (derived from the place name data), roads, streams, lakes and ponds, the park boundary, and trails, as well as cities and counties.

A map was then created for each individual hike theme. These maps highlighted hike paths corresponding to the appropriate theme and were color-coded according to their TDR.

We believe that the maps created here will have utility among potential RMNP patrons. Their ability to plan more enjoyable excursions into RMNP will be enhanced through the use of our analysis. The structure of this website allows would-be RMNP hikers to match their skill level and destination interest to a hike customized to their needs. It is our hope that more people will, through the use of this website and the analysis within, get outside and enjoy all of the beauty and breathtaking wonder nature has to offer in our great system of national parks, especially Rocky Mountain National Park.

