Sunday, June 29, 2014

Week 7: Suitability Analysis

Final map product outputs
This week's assignment involved applying suitability analyses using several criteria and different methods.  Both vector and raster methods were applied for a straightforward 'binary-type' suitability analysis, then a weighted overlay analysis was performed.  Ultimately, two different weighting schemes were compared using the overlay method, and the final outputs were contrasted (see map product at right).

First, a raster grid displaying a landscape of suitability for each criterion was made.  Generally, places on low slopes, far from rivers, close to roads, on particular soils, and on particular land cover types were considered more suitable.  Then, the five criteria themselves were weighted.  The map compares the equally weighted criteria output and the unequally weighted criteria output.  In real applications, weight assignment is determined by accepted research, personal observation, or popular opinion within the decision-making community.  Clearly, in the map above, slope has a large influence on the suitability of particular areas.
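The weighted overlay step itself is just a weighted sum of criterion scores at each cell. Here is a minimal plain-Python sketch using a toy 2×2 grid; the grids, scores, and weights are hypothetical, not the actual lab data:

```python
# Toy weighted overlay: each criterion is a grid of suitability scores,
# and the overall score is the weighted sum of the criteria at each cell.
# All grids and weights below are hypothetical.

def weighted_overlay(criteria, weights):
    """Combine equally shaped score grids using one weight per criterion."""
    rows, cols = len(criteria[0]), len(criteria[0][0])
    return [[sum(w * grid[r][c] for grid, w in zip(criteria, weights))
             for c in range(cols)] for r in range(rows)]

slope_score = [[9, 3], [1, 7]]   # low slopes score high
river_score = [[5, 5], [9, 1]]   # far from rivers scores high

equal = weighted_overlay([slope_score, river_score], [0.5, 0.5])
slope_heavy = weighted_overlay([slope_score, river_score], [0.8, 0.2])

print(equal)        # [[7.0, 4.0], [5.0, 4.0]]
print(slope_heavy)
```

Changing the weight vector is all that distinguishes the two map panels; the heavier the slope weight, the more the output tracks the slope criterion.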

Friday, June 27, 2014

Module 6: Exploring & Manipulating Spatial Data

Python Script Output
This week's assignment was to dive deeper into Python scripting by manipulating lists, dictionaries, and tuples, and by using cursors.  Lists, dictionaries, and tuples are similar in that they store indexed data; however, there are nuances to each that allow for diverse usage.  Spatial data is often stored in some sort of indexed format (tables), which must be easy to reference and manipulate.  The assignment here involved generating a geodatabase and editing some fields of a feature class within it.

At right is a screenshot of my script output, which was used to select only county seats from a list of cities and then match each county seat with its population using a dictionary.  Below is a discussion of some issues I had during write-up and how they were overcome.

I had the most trouble with Step 5, which involved setting the search cursor to retrieve three fields while using an SQL query to select only County Seat features.  The issues occurred in three places in this step:  1) setting the workspace; 2) calling the correct feature class; and 3) determining the syntax of the Search Cursor.

The first issue was merely an unnoticed typo, but it caused a lot of troubleshooting because I thought other parts of this step were causing the error message.  Typos are small errors that can cause big headaches.  Once I realized this, it was quickly fixed and I moved on.

The second issue was a result of not knowing what file extension to use for the feature class cities in the new geodatabase.  I first used .shp, but this gave me an error saying that the file could not be located.  I then used Windows Explorer to determine if the extension had changed, and I found that it had changed to .spx.  I therefore tried calling the feature class using this extension, which again gave me the same error message.  I then thought that perhaps the basename extraction step (i.e. removal of .shp during the copying step) produced a file with no extension, so I tried simply using ‘cities’, and this worked.  I then moved on to issue three.


Issue three was the toughest because I wasn't sure whether to use brackets, parentheses, quotation marks, commas, semicolons, etc. when using the Search Cursor.  Essentially, the amount of information covered this week left me a little confused.  I realized, however, that I was over-thinking the problem.  I knew that the three fields needed to be within brackets, separated by commas, and within quotes.  Then I referred back to the exercise to determine how to use the SQL query, which was straightforward.
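Since arcpy only runs inside an ArcGIS install, the cursor-plus-query pattern can be illustrated with a plain-Python stand-in. The rows, field names, and values below are made up; the real script used arcpy's SearchCursor with a field list and an SQL where-clause rather than this loop:

```python
# Stand-in for a SearchCursor with an SQL query: each row is a
# (NAME, FEATURE, POP_2000) tuple, and we keep only County Seat features.
# All names and populations here are hypothetical.
rows = [
    ("Alpha", "County Seat", 12000),
    ("Beta",  "City",         8000),
    ("Gamma", "County Seat",  4500),
]

# Roughly equivalent to iterating
# SearchCursor(fc, ["NAME", "FEATURE", "POP_2000"], '"FEATURE" = \'County Seat\'')
seat_pop = {}
for name, feature, pop in rows:
    if feature == "County Seat":   # the where-clause filter
        seat_pop[name] = pop       # dictionary: county seat -> population

print(seat_pop)  # {'Alpha': 12000, 'Gamma': 4500}
```

The field list is a bracketed, comma-separated list of quoted names, exactly the structure I eventually settled on in the script.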

Monday, June 23, 2014

Week 6: Visibility Analysis

Finish Line Camera Coverage Map (symbology described in text)
This week's lab involved using visibility analyses on topographic data.  Specifically, we utilized the Line of Sight, Visibility, and Viewshed tools (among others) in the 3D Analyst toolbox.  We worked with DEMs to determine which regions were visible from both point and line features and compared the outputs of these tools.

We also used viewshed analysis to make informed decisions about surveillance camera placement around the finish line of the Boston Marathon.  We were restricted to a 90 degree field of view for our cameras, and there were several buildings which obstructed possible placement.  I first analyzed aerial imagery to determine building locations, as I wanted to place the cameras on top of a tall structure.  I then used a DEM which included building height to determine how high these cameras would be placed.  The viewshed tool produced a raster with cell values from 0–3, where the value represented the number of cameras that captured that particular cell.  Cells with a value of 3 were colored red, 2 yellow, and 1 green (zeros were omitted).  Above is my output using this symbology.  [The cameras are magenta triangles and the finish line a blue dot].  All three of the cameras had an unobstructed view of the finish line as well as much of the surrounding area.
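The overlap counting that produces those 0–3 cell values can be mimicked with small binary grids (1 = cell visible from that camera). The grids below are invented for illustration, not derived from the lab's DEM:

```python
# Each camera contributes a binary visibility grid; summing the grids gives,
# per cell, how many cameras can see it. Grids here are hypothetical.
cam1 = [[1, 1, 0],
        [0, 1, 0]]
cam2 = [[0, 1, 1],
        [0, 1, 0]]
cam3 = [[0, 1, 0],
        [1, 1, 0]]

coverage = [[a + b + c for a, b, c in zip(r1, r2, r3)]
            for r1, r2, r3 in zip(cam1, cam2, cam3)]

print(coverage)  # [[1, 3, 1], [1, 3, 0]]
```

Cells summing to 3 would be symbolized red, 2 yellow, 1 green, and 0 omitted, matching the map described above.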

Friday, June 20, 2014

Module 5: Geoprocessing Using Python

This week's assignment was to become familiar with geoprocessing in Python.  This was primarily accomplished by writing scripts that ran geoprocessing tools sequentially from the arcpy site package.  The following is a brief transcript of my notes on completing the assignment.
1. First, I looked in the help window for a description of the Add XY tool as well as the Dissolve tool, just to get an idea of how they worked and what they did.
2. Next, I opened the ArcMap Interactive Python window as well as the PythonWin program.  I first used the scripts in ArcMap so that I could visually see what was happening to the data, then copied the text to a new script file in PythonWin.
3. Because the script should be standalone (outside of ArcMap), the first step was to import the arcpy site package.  Then, I imported the environment class and set the overwrite option to ‘true’.
4. Then I used the three tools in sequence: Add XY, Buffer, and Dissolve.  I followed the syntax described in the help documentation as well as the easy-to-follow interactive Python help window in ArcMap.
5. After each tool I added the GetMessages function.  Further, I appended ‘+ “\n”’ after the first two tools so that the messages were easier to read.
6. I tested the tool in PythonWin and ArcMap to see if the desired result occurred; it did.
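The steps above can be sketched structurally. Because arcpy is only available inside ArcGIS, the stub functions below merely stand in for the real tool calls and their GetMessages() reports; the feature class name and buffer distance are hypothetical:

```python
# Structural sketch of the script: three "tools" run in sequence, each
# returning a message string (a stand-in for arcpy.GetMessages()).
# Function bodies and the "hospitals" name are hypothetical.

def add_xy(fc):
    return f"Executed Add XY on {fc}"

def buffer_tool(fc, dist):
    return f"Executed Buffer on {fc} at {dist}"

def dissolve(fc):
    return f"Executed Dissolve on {fc}"

messages = [add_xy("hospitals"),
            buffer_tool("hospitals", "1000 Meters"),
            dissolve("hospitals_buffer")]

# the '+ "\n"' trick from step 5: newline-separated messages read cleanly
print("\n".join(messages))
```

In the real script each stub would be the corresponding arcpy tool call, with arcpy.GetMessages() printed after each one.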

Monday, June 16, 2014

Week 5: Crime Hotspot Analysis


This week's lab involved creating several different crime hotspot maps and comparing their predictive power.  We created hotspots using Local Moran's I, Kernel Density, and Grid Overlay from 2007 burglary data.  Then we compared the number of 2008 burglaries falling within these 2007 hotspots.

I argue that the best crime hotspot prediction method in this scenario is Kernel Density.  I base this on the density of 2008 burglaries within the 2007 hotspots, which was highest for the Kernel Density method.  This is the metric I view as the best measure of predictive power because it takes total hotspot area into account, and thus would support efficient allocation of preventative resources.

It is clear that the Grid Overlay hotspot contains the most 2008 burglaries, so it is certainly a suitable predictive tool.  However, its total area is >65 km² and may be too large for police to patrol effectively.  With unlimited resources (i.e. police officers/vehicles), it would be feasible to use this model as the area to patrol.  In a limited-resource situation – which is most often the case – higher-priority areas must receive preferential resource allocation.  Thus, the Kernel Density hotspots are the better choice.
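The metric behind this argument, burglaries per unit of hotspot area, is simple to compute. The counts and areas below are placeholder values for illustration, not the lab's actual numbers:

```python
# Crime density = next-year burglaries inside the hotspot divided by the
# hotspot's area. All numbers below are hypothetical placeholders.
def crime_density(burglaries, area_km2):
    return burglaries / area_km2

grid_overlay = crime_density(300, 65.0)   # many hits, but a huge area
kernel       = crime_density(180, 12.0)   # fewer hits, much smaller area

print(round(grid_overlay, 2))  # 4.62
print(round(kernel, 2))        # 15.0
```

Even though the Grid Overlay hotspot captures more burglaries in absolute terms, the compact Kernel Density hotspot yields far more burglaries per square kilometer patrolled.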


The Local Moran’s I hotspots are large but contain the fewest 2008 burglaries.  The large area appears to be a result of extreme-density regions.  By visual assessment, areas between several clusters, which don’t look particularly dense themselves, appear to be influenced by adjacency (see screenshot below).  The red area toward the top sits between two hotspots.

Friday, June 13, 2014

Module 4: Python Fundamentals - Part 2

This week's assignment involved debugging an existing script as well as writing our own to produce a desired outcome.  The script given to us was supposed to produce a simple dice-rolling game between several participants, but it had several errors in need of correction.  To fix these errors, the debugging toolbar was used and error messages were interpreted.  Eventually the output was produced, as seen above.

We were then required to randomly generate a list of 20 integers between 0 and 10 and remove a specific 'unlucky' number.  I chose 6 as my “unlucky” number, which was removed.  To print the number of times that 6 appeared in my original list, I used the if-elif-else structure.  Using the count method to determine the number of 6’s in the list, I chose the following three conditional outputs:

If there were no sixes, it would print “This list contains no sixes.”  If there was one six, it would print “This list contains 1 six, which was removed.”  If there were multiple sixes, it would print “This list contains (the number of) sixes, which were removed.”
I used the while loop and the remove method to remove each six sequentially from the list until the list contained no sixes.  Then the new list with all sixes removed was printed.
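The steps above can be sketched as one short script (written here in current Python 3 syntax; the exact wording of the printed messages follows my description above):

```python
import random

# Generate 20 random integers from 0-10, report how many 6s appear,
# then remove every 6 with a while loop and the remove method.
numbers = [random.randint(0, 10) for _ in range(20)]
unlucky = 6

count = numbers.count(unlucky)
if count == 0:
    print("This list contains no sixes.")
elif count == 1:
    print("This list contains 1 six, which was removed.")
else:
    print("This list contains", count, "sixes, which were removed.")

while unlucky in numbers:      # loop until no sixes remain
    numbers.remove(unlucky)

print(numbers)                  # the cleaned list
```

The while loop runs once per six, since remove only deletes the first match it finds each time.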

Monday, June 9, 2014

Week 4: Damage Assessment

Damage assessment of structures on the eastern New Jersey
coast after Hurricane Sandy
This week's lab involved damage assessment of structures near the New Jersey coast after Hurricane (Superstorm) Sandy in fall 2012.  Damaged structures in this case were homes and businesses that were inundated or wind-ravaged.  We compared pre- and post-Sandy aerial imagery of our study area and visually determined the severity of damage (see image top-right; description below).  Then, we regressed the severity of structure damage on the distance from the coastline in intervals of 100 meters (e.g., 0-100m from coastline, 100-200m, etc.; see table below).

I first coarsely assessed the entire affected area using pre- and post-Sandy aerial imagery.  This visual assessment included areas outside the delineated study area that were highly affected, as well as areas that appeared almost entirely unaffected.  This was done to make my damage assessment more objective.  Based on this broad-scale approach, I determined that the study area was one in which structure damage was highly variable – some properties were destroyed completely, while some looked unharmed.  Further, a majority of destroyed structures were located in the easternmost region of the study area.

The general process of identifying the structural damage for each parcel was a left-to-right sweep of each block.  This allowed sufficient detail without taking hours to process a small area.  I primarily used the ‘slide’ effect tool to compare the pre- and post-Sandy aerial images of the study site.  If a structure was clearly moved from its original foundation or was leveled, I labeled it ‘destroyed’.  Some structures were absent altogether; those were labeled ‘destroyed’ as well.  If a structure had a major collapse, noticeable in aerial imagery as debris or a change in shape, it was labeled ‘major damage’.  ‘Minor damage’ was subjectively used to describe a home that was generally surrounded by other severely affected homes but was not obviously damaged in the imagery.  ‘Affected’ was given to any home that appeared to have debris in the yard, which I presumed to be material from the structure.  A structure was labeled ‘no damage’ if it appeared generally identical in shape to the pre-Sandy imagery and was surrounded by other homes that appeared unharmed.  This was based on the assumption that adjacent homes protected those upwind and uphill of the storm.


Count of structures within each distance category:

Structural Damage Category   0 – 100 m   101 – 200 m   201 – 300 m
No Damage                        0            1             7
Affected                         0            9            24
Minor Damage                     0           16             6
Major Damage                     0            8             4
Destroyed                       12            6             4
Total                           12           40            45
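The per-interval counts in the table come from binning each structure's distance to the coastline into 100 m classes. A sketch of that binning with made-up distances:

```python
# Bin structure-to-coastline distances into 100 m intervals and count
# structures per bin. The distances below are hypothetical.
distances_m = [40, 95, 150, 180, 210, 260, 299]

bins = {"0-100 m": 0, "101-200 m": 0, "201-300 m": 0}
for d in distances_m:
    if d <= 100:
        bins["0-100 m"] += 1
    elif d <= 200:
        bins["101-200 m"] += 1
    elif d <= 300:
        bins["201-300 m"] += 1

print(bins)  # {'0-100 m': 2, '101-200 m': 2, '201-300 m': 3}
```

With real data the distances would come from a GIS proximity tool rather than a hand-typed list, but the tallying logic is the same.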

Friday, June 6, 2014

Module 3: Python Fundamentals - Part 1

This week's assignment was to become familiar with the Python scripting language.  We used Python to perform a few basic functions and methods.  We were instructed to start with a string object of our full name and write a script that ultimately printed only our last name and the number of letters in our last name multiplied by three.  The output of my script is seen above.

This was a multi-step process in which we had to isolate each name in a list (e.g. first, middle, last), select only our last name, and then calculate the number of letters in it.  This was achieved as follows:

1. I used my full name “Philip Michael Coppola” as the original string and split it into a list, which I called “listName”, indexed as 0 (Philip), 1 (Michael), 2 (Coppola).
2. I then used the script:
>>> lastNameLen = len(listName[-1])
>>> print lastNameLen

This gave a result of ‘7’, the length of “Coppola”.  Because another person should be able to use this code to print their own last name, I had to use the [-1] index, which references the last item in a list; using [2] would only work if the other person entered exactly three names.
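Putting the whole assignment together (split the name, select the last item, print it and its length times three) gives a script like this, written in Python 3 syntax:

```python
# Split a full name, take the last item, and print it along with
# its letter count multiplied by three. Works for any number of names
# thanks to the [-1] index.
fullName = "Philip Michael Coppola"
listName = fullName.split()        # ['Philip', 'Michael', 'Coppola']

lastName = listName[-1]            # last item, regardless of list length
lastNameLen = len(lastName)

print(lastName)         # Coppola
print(lastNameLen * 3)  # 21
```

Swapping in any other full name changes nothing else in the script, which was the point of the [-1] index.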

Monday, June 2, 2014

Week 3: Flood Zone Analysis


This week's assignment was to perform coastal flood zone analyses.  Specifically, we created a map of the District of Honolulu under modeled 3 ft and 6 ft sea level rise (SLR) scenarios, then analyzed the social impacts of the flooded area using demographic data from the US Census Bureau.  The map above shows the area flooded by a 6 ft sea level rise and the general population density in Census Tracts.  It was created by first demarcating the area that would be flooded using the Less Than tool with the original raster DEM.  Then the depths were calculated by using the Minus tool between the new raster and the original DEM.  The high population density near the coastline is noticeable in this map.  Below is a table of demographic data extracted from Census blocks (a finer scale than Tracts).
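The two raster steps can be illustrated in plain Python on a tiny made-up elevation grid: the Less Than step flags cells below the SLR level, and the Minus step computes water depth over the flagged cells. The elevations below are hypothetical:

```python
# Plain-Python analogue of the Less Than and Minus raster steps:
# flooded = dem < slr, depth = slr - dem over flooded cells.
# The DEM below is a made-up 2x3 grid of elevations in feet.
dem = [[2.0, 5.0, 8.0],
       [4.5, 6.0, 1.0]]
slr = 6.0   # 6 ft sea level rise scenario

flooded = [[1 if z < slr else 0 for z in row] for row in dem]
depth = [[(slr - z) if z < slr else 0.0 for z in row] for row in dem]

print(flooded)  # [[1, 1, 0], [1, 0, 1]]
print(depth)    # [[4.0, 1.0, 0.0], [1.5, 0.0, 5.0]]
```

In ArcGIS the same two operations run cell-by-cell over the full DEM; only cells flagged by the Less Than step get a nonzero depth from the Minus step.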

Variable            Entire District   6 ft SLR Flooded   6 ft SLR Not Flooded
Total Population        953,207            60,005               893,202
% White                  20.85%            29.58%                20.26%
% Owner Occupied         56.77%            38.13%                58.02%
% 65 and Older           14.53%            17.04%                14.36%

The table shows demographic data for the areas flooded and not flooded by the simulated 6 ft sea level rise in Honolulu District, HI, alongside data for the entire district for comparison.  The data are based on the 2010 U.S. Census.  All affected homes are located in close proximity to the coast. Clearly, the populations affected by the 6ft SLR flood zone differ from the populations not affected by it.  Specifically, there is a greater proportion of persons 65 years and older and of persons who describe themselves as “white” in the 3ft and 6ft SLR flood zones compared to the non-flooded zones.  Conversely, there is a smaller proportion of owner-occupied housing.
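The proportions follow directly from the block counts. For example, the share of the district's population inside the 6 ft flood zone, using the Total Population row of the table:

```python
# Share of district population inside the 6 ft SLR flood zone,
# from the Total Population counts in the table above.
total_pop   = 953207
flooded_pop = 60005

share = flooded_pop / total_pop
print(f"{share:.1%}")  # 6.3%
```

The percentage rows in the table were computed the same way, dividing each subgroup's block-level count by the relevant zone total.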

These data can be examined in the context of social vulnerability.  While it is unclear whether there is any social or political marginalization in this area due to racial disparity, areas in the United States with a high proportion of persons who describe themselves as “non-white” may be more vulnerable to environmental catastrophes (e.g. Hurricane Katrina; Cutter & Emrich 2006 as cited in Shephard 2012).  While the proportions described here for “white” persons are low compared to the national average, this is to be expected on a Hawaiian island, to which “white” persons are secondary colonists.  The lower proportion of owner-occupied housing in the flooded zones may be a result of seasonal/vacation visits by the owners, or some sort of rental-tenant situation.  This matters for social vulnerability because renters may not know local evacuation routes or may not have access to personal vehicles in an evacuation.  Further, non-owner-occupied housing is disproportionately uninsured, which could cause issues during flooding situations.  Lastly, the higher proportion of individuals 65 and older is significant because they may require assistance in an evacuation scenario, increasing their vulnerability to harm.

It should be noted that the 6ft flood zone affects a more racially diverse group of people than the 3ft flood zone.  This is observed in the decrease in the proportion of “white” persons affected from the 3ft to the 6ft flood zone scenario (from 36.79% to 29.58%).  It could be speculated that the greatest density of “white” persons is near the Honolulu coast.