Spatial Ecology Lab 6: Landscape Pattern Analysis


Damian Maddalena
Spring 2015

1 Introduction

This week in lab we will begin to explore basic landscape metrics. We will calculate the percent of total area for each land cover type in a land cover dataset. We will also learn to use loops in our code to iterate through a list of items for repeated code blocks.

1.1 Data

All data in this lab are projected in NC State Plane, meters. Google the projection name along with "proj4" to find the web page where I got the string below.

ncspm <- "+proj=lcc +lat_1=34.33333333333334 +lat_2=36.16666666666666 +lat_0=33.75 +lon_0=-79 +x_0=609601.22 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs"

2 Packages Used in this Lab

You will need the following new packages installed to complete this lab.

#load all libraries
> library(rgeos)
> library(maptools)
> library(raster)

3 Looping in R

One powerful feature of any computing language is the ability to iterate over several items to repeat a process that you would otherwise have to do manually. This is called "looping." The basic syntax for a loop in R looks like this:

for (var in seq){expr}

It is helpful to write it out on multiple lines to better see the structure:

for (var in seq){
  expr
}

For example, we can iterate over the elements in a vector and print them using the print() function.

#define a numeric vector that contains the numbers 1 through 10
numbers <- c(1:10)
#now loop through each element of the vector, printing it with the print() function. Notice we define a variable n here that will serve as the object name for each element in the list when its turn comes.
for (n in numbers){
  print(n)
}

Now let's do something more interesting. We'll add 10 to each number in the list and print the sum.

#define a numeric vector that contains the numbers 1 through 10
numbers <- c(1:10)
#loop through each element of the vector, adding 10 to it and printing the result
for (n in numbers){
  s <- n + 10
  print(s)
}

Finally, let's concatenate the elements of the list to a string variable we define outside the loop (because all elements will use the same string variable). We will use the paste() command to concatenate each item to a string variable we have defined. Look at the help files for the paste() command to see how it works (help(paste) or ?paste).

#define two string variables, one for the beginning and one for the end of the sentence to be printed. The number from our vector will go in the middle.
pt1 <- "I counted at least"
pt2 <- "chicken(s) crossing the road."
#define the list
numbers <- c(1:10)
#now loop through the numerical vector, concatenating pt1 and pt2 with each element to create a sentence. Notice the use of sep=" " here. Try using different characters for the sep parameter to see what the output looks like. Consult the help files for paste() for further clarification.
for (n in numbers){
  s <- paste(pt1, n, pt2, sep=" ")
  print(s)
}

How might you use the list.files() command that we used in previous labs with a loop?
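One possible pattern is sketched below. The scratch directory and file names here are made up for illustration; the point is that list.files() returns a character vector, which a for loop can walk exactly like our numeric vector.

```r
# make a scratch directory with two empty files to loop over
d <- file.path(tempdir(), "loop_demo")
dir.create(d, showWarnings = FALSE)
file.create(file.path(d, c("a.txt", "b.txt")))

# list.files() returns a character vector of file names...
files <- list.files(path = d, pattern = "\\.txt$", full.names = TRUE)

# ...which we can iterate over exactly like the numbers vector above
for (f in files){
  print(basename(f))
}
```

In a real script the body of the loop would read or process each file (for example with read.table()) instead of just printing its name.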

4 Procedures

4.1 Workspace Setup

Let's use our new looping skills to set up the output workspace! We will create several variables for locations on our machine, add those variables to a vector, then loop through the items in the vector to create the directories we need for this lab exercise.

#set up workspace locations
rootdir <- "/home/damian/test"
datadir <- file.path(rootdir,"data")
outdir <- file.path(rootdir,"output")
mapoutdir <- file.path(outdir,"maps")
tableoutdir <- file.path(outdir,"tables")
#create the workspace vector
workspacelist <- c(rootdir,datadir,outdir,mapoutdir,tableoutdir)
#now loop through the vector that contains the workspace location variables, creating each directory if it does not already exist
for (l in workspacelist){
  dir.create(l, showWarnings = FALSE)
}

4.2 Dissolving Polygons

We'll start with the river basin polygon shapefile. We should first check to see what projection the dataset is in. That information is in the .prj file associated with it; we can simply open that file in a text editor and view it.

#here we read the shapefile and define the projection in one step, using the variable we set in the projection section of this document
> huspm <- readShapePoly(file.path(datadir,"hu.shp"), proj4string=CRS(ncspm))
#let's look at the summary information for the file
> summary(huspm)

Spatial data frame objects contain several "slots," or categories of information within them. We print the names of the slots using the slotNames() command.

#print the names of the slots in the hu data frame
> slotNames(huspm)

Notice the data slot. This is the attribute table. We can pull it out as a data frame by accessing it with the @ symbol:

> hudf <- huspm@data
> hudf

Now look at the bbox slot. What does it contain? We now have the hydrological units loaded as a spatial data frame. Let's map them to see what they look like.

#plot the hydrological units to inspect them
> plot(huspm)

You are looking at the smallest river basins that are distributed as standard data. We are looking for something more general for this exercise, so we will dissolve the boundaries within the polygons we are interested in. Imagine using an eraser to remove the small basins inside the larger basins we are interested in. Let's look again at the attribute names for the hydrological unit shapefile.

#print the names on the attribute table for the hydrological unit polygon layer
> names(huspm@data)

We're interested in the Cape Fear River, so let's pull out those polygons to make the rest of our operations faster (less data, no operations on areas we're not interested in).

Subset the Cape Fear minor hydrological units:

#subset
> cfhuspm <- huspm[huspm$basin == "Cape Fear",]
#plot
> plot(cfhuspm)

Now dissolve based on the sub-basin name:

> cfsubspm <- unionSpatialPolygons(cfhuspm, IDs=cfhuspm$subbasin)
> plot(cfsubspm)

The dissolve method leaves us with the polygons we want, but the spatial data frame is missing a data table. You can see this for yourself if you check the slot names of the new layer.

> slotNames(cfsubspm)

We will need to build a data frame to tack onto our new dissolved layer. We will do this by getting the names of the basins out of the layer, creating a data frame from them, naming the rows properly so the information will match the spatial polygons, then appending the data table as a slot.

#get the names of the spatial polygons
> n <- names(cfsubspm)
#now turn that into a data frame
> ndf <- as.data.frame(n)
#name the rows of the data frame
> row.names(ndf) <- n
#finally, add the data frame to the polygons
> cfs <- SpatialPolygonsDataFrame(cfsubspm, data=ndf)
#check to see if the slot is there
> slotNames(cfs)
#now label the basins to check that everything turned out OK. The labels will be placed poorly, but this is not a production map.
> text(coordinates(cfs), labels=row.names(cfs))
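The "slots" idea we have been using is general S4 behavior in R, not something specific to the sp package. A minimal sketch with a made-up toy class (the class name and slot values are purely illustrative):

```r
# define a toy S4 class with two slots, mirroring how sp objects
# store their pieces in named compartments
setClass("Basin", slots = c(name = "character", area_km2 = "numeric"))

# create an instance; the area value here is a made-up number
b <- new("Basin", name = "Cape Fear", area_km2 = 100)

slotNames(b)  # same command we ran on the spatial data frame
b@name        # the @ symbol pulls a slot out, just like huspm@data
```

The @ accessor works the same way on our spatial objects, which is why huspm@data hands back the attribute table.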

4.3 Summarizing Raster Values within a Polygon

We will now summarize the NLCD categorical raster within an individual basin (a polygon). Let's start by selecting the Northeast Cape Fear from our sub-basin map.

#select the sub-basin
> necf <- cfs[cfs$n == "Northeast Cape Fear",]
#plot it to check the selection
> plot(necf)

Now let's load our NLCD raster data layer into R.

#load the raster
> nlcd <- raster(file.path(datadir,"ne_cf_nlcd1.tif"))
#plot the raster with no legend
> plot(nlcd, legend = FALSE)

Now let's look at the slot names for our raster layer.

> slotNames(nlcd)
#there is a data slot; let's see what is in that table
> nlcd@data

You will notice that within the @data slot there is a slot called attributes. That is the data frame. Let's pull it out as a data table.

> df <- as.data.frame(nlcd@data@attributes)
> df

This data table has several attributes. We are most interested in ID and LAND_COVER_CLASS. We will use those to calculate the percentage of each land cover type within our sub-basin. We will do that by adding a new field to the data table that is calculated as the percent of total cells for each cell type/category.

#add the new percent field and calculate its values. We will round to one decimal place.
> df$perc_tot <- round(df$COUNT/sum(df$COUNT)*100, 1)
#check the data frame to see if the new variable was added
> df
#just print the new attribute column
> df$perc_tot

Finally, let's write our new data table to an ASCII file.

#write the new data frame to the output workspace as an ASCII file. Notice the value we use for sep, as well as asking R to write column names but not row names to the file.
> write.table(df, file=file.path(tableoutdir,"nlcd_perc.txt"), sep=" ", col.names=TRUE, row.names=FALSE)

The last thing we will do is subset our raster using a smaller watershed. This is called extracting by mask, or clipping, depending on what package you are using or who you are talking to.
The process is simple: you are basically using a smaller polygon as a cookie cutter to extract a subset of the raster that matches the outline of the cookie cutter. (Clear as mud?)

#load the 10-digit HUCs into R
> huc10 <- readShapePoly(file.path(datadir,"ne_huc_10.shp"), proj4string=CRS(ncspm))

#plot this layer on top of your NLCD layer
> plot(huc10, add=TRUE)
#print the data table to see the names of the individual polygons
> huc10@data
#let's pull out the first HUC in the list, using the OBJECTID attribute
> h <- huc10[huc10$OBJECTID == "37",]
#plot to see if it worked
> plot(h)
#now extract the NLCD for this HUC
> r <- mask(nlcd, h)
#plot this raster to see if it worked
> plot(r)
#you will notice that the map might not be zoomed to the tight extent of the clipped raster; we will fix that by zooming to the extent of the HUC
#create an extent object from the HUC
> e <- extent(h)
#plot the raster now, using the "ext" parameter
> plot(r, ext=e)

4.3.1 Building an Attribute Table

The previous operations left our extracted raster without an attribute table, so we will have to rebuild it if we want to use the same method to calculate percent of total cover that we used above. Thankfully, the raster package offers a way to do this.

#look at the empty attribute table; you will see NULL or NA
> r@data@attributes
#first we scan the raster for all possible categorical values
> r <- ratify(r)
#now look again at the attribute table. You will see a list of all possible categorical values in the raster.
> r@data@attributes
#we will now pull out the table we just built so we can append to it. We use the levels() command here; it does the same thing as our previous calls of r@data@attributes. It calls the "levels" of the attribute table.
> rat <- levels(r)[[1]]
#now let's add the text descriptions from the NLCD raster to the attribute table we are building for our raster
> rat$LAND_COVER_CLASS <- df$LAND_COVER_CLASS
#now we can tack it back into the attribute slot
> levels(r) <- rat

This gets us an attribute table of levels, but it doesn't quite get us to where we need to be. We need cell counts! Luckily, the raster package has a function to do this!
We will run this command and use it to generate our data table. We won't add the cell counts to our attribute table, though, because we don't really need them there for this exercise. What we really need is a data frame to do the percent calculations in so that we can export them. Let's look at that code:

#use the freq() command and cast its output as a data frame
> f <- as.data.frame(freq(r))
#check your output

> f
#drop the NA row; we don't care about it. 16 is the row number here.
> g <- f[-16,]
#finally, let's tack on the text descriptors for each NLCD code
> g$LAND_COVER_CLASS <- df$LAND_COVER_CLASS

You now have a data frame with cell counts for the extracted raster. You also (partly) restored the attribute table in the raster object, and could continue along those lines with the frequency counts if you wanted to build that table out further for some reason. We didn't need to do that here, but you might in another application.

4.4 Putting it all Together

After going through each of the commands above, create an executable .R script file that performs each of the steps outlined in the pseudo-code below. Note: though the general steps are outlined below, I do not include everything you need to do. Consult the steps described above and your previous labs for clarification, or send an email to the class listserv.

You will notice that this week we have a header at the top of our file. Include this in your file. We will begin to include these headers in all of our files so that there is some documentation at the top of the code. In your file, I should only have to change ONE file path to run the script on my computer; i.e., all workspace locations should be based on one location variable and built by the code.

#########################
#file name:
#author:
#description:
#notes:
#########################

#set up your workspace using the loop structure from above. Set your rootdir as .../firstname_lastname_lab6
#remember, I should only have to change one line in order to make everything in your script run on my machine
#download and unzip the data to the appropriate location
#for each 10-digit HUC in the NE Cape Fear shapefile:
#create a map of the clipped NLCD data and the HUC boundary that has no legend and is tightly zoomed to the HUC extent. Include a title on your map that is the HUC code, and label your X and Y axes.
Put these maps in your output map directory. Name each file as follows: HUC_Code.png; i.e., each HUC should have an output file that is named according to its HUC code.

#output an NLCD table for the HUC that has a new, calculated perc_tot column

Put these files in your output table directory. Name each file as follows: HUC_Code.txt; i.e., each HUC should have an output file that is named according to its HUC code.

Your file should be executable at the command prompt using the source command. (See R help for details.)
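A rough sketch of the per-HUC loop structure follows, not a full solution. The HUC codes, directories, and stand-in table are all placeholders, and the actual selection, masking, and plotting steps are left as comments because they depend on the lab data.

```r
# placeholder output directories; in your script these come from rootdir
tableoutdir <- file.path(tempdir(), "tables")
mapoutdir   <- file.path(tempdir(), "maps")
for (d in c(tableoutdir, mapoutdir)) dir.create(d, showWarnings = FALSE)

# placeholder 10-digit HUC codes; in your script, read them from the shapefile
huc_codes <- c("0303000701", "0303000702")

for (huc in huc_codes){
  # ...select this HUC, mask the NLCD with it, rebuild the attribute
  # table, and compute perc_tot as shown in the sections above...
  df <- data.frame(ID = 1:2, perc_tot = c(40.0, 60.0))  # stand-in result

  # one table per HUC, named by its HUC code
  write.table(df, file = file.path(tableoutdir, paste0(huc, ".txt")),
              sep = " ", col.names = TRUE, row.names = FALSE)

  # one map per HUC would go to mapoutdir the same way, e.g.
  # png(file.path(mapoutdir, paste0(huc, ".png"))); plot(...); dev.off()
}
```

The key design point is that every output path is built from a directory variable plus the loop variable, so changing rootdir is the only edit needed to run the script on another machine.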

5 Submission

Submit your .R script file via Blackboard. Only submit the .R file.