A Tale of Two Train Lines (please forgive this egregious title)

I. Project background, map images, conclusions

I grew up mostly in Westchester, and viewed Metro-North Railroad (MNR) as an escape route from the suburbs. I lived along the Harlem Line, which makes stops between Grand Central Terminal and the ambiguously-named Southeast. Fewer than 10% of trains each day also connect to a transfer at Southeast that runs further north, an additional 30 miles up to Wassaic in Dutchess County. My most-traveled path is from the town where my parents live to Grand Central, off peak. However, the more I’ve taken the train in recent years (particularly when I take a new combination of stops to reach my students via public transport, or when I ride at an unusual time), the more I observe that the Harlem Line train serves, obviously, many more purposes than just my own. I guess it’s what I already had words for from Kevin Lynch’s mental maps: each person’s map of the same geography will be different. 

Harlem Line Metro North Stops

This particular project was motivated by a phrase I had heard used a couple times in reference to this train line: “the nanny train.” This is a blunt shorthand for the observed phenomenon of women of color riding from stations in affluent, majority-white towns in Northern Westchester (where they work) to stations further south that generally serve communities of color in the Bronx and Southern Westchester (where they live). The question that motivated this map was “Is there actually a ‘nanny train,’ and can I visualize its existence?”

By and large, what I gleaned from scrutinizing the train schedule and counting up trips (not exhaustively, but carefully) is that Harlem Line trains make stops either south of White Plains and terminate at North White Plains (24 miles north of GCT), or begin making stops at White Plains and terminate at Southeast (53 miles north) or Wassaic (82 miles north). Out of 109 total trips per day to Grand Central (I did not include reverse trips in this map), 96 trips fell into one of these four patterns:

  1. Group 1: begin at North White Plains and make at least 5 stops (i.e. make local stops in the Bronx)
  2. Group 2: begin at Crestwood in Southern Westchester and make either 5 stops (peak-hour express) or 12 (all stops in the Bronx)
  3. Group 3: begin at Southeast, make all or most stops until either Chappaqua or White Plains, then run express through the Bronx
  4. Group 4: begin at Wassaic and run express before reaching Southern Westchester
Group 1: from North White Plains to GCT, making local stops in the Bronx
Group 2: select stops from Crestwood, express during peak hours (overlaid on Group 1)
Group 4: from Wassaic to GCT, express at or before White Plains
Group 3: from Southeast to GCT, express from White Plains (overlaid on Group 4, from Wassaic)

The remaining 13 trains of the day generally make very specific, peak-hour stops. Sometimes they stop at only 3 or 4 stations total, and seem to be oriented toward moving people quickly into the city from specific high-density areas along the whole line. I was surprised to see that there is actually no single train that makes every single stop — the closest is the 1:56am train from Grand Central to Southeast, which skips 2 stations in the Bronx (these areas are also served by MTA subway stations), and the 6 stops after Southeast (which are generally considered sort of an extension of the “regular” line). 

So, most trains make stops north of White Plains or south of it, but not both. Indeed, there are only two southbound trains to “bridge the gap” by making at least 4 stops in northern Westchester AND at least 4 stops in southern Westchester/the Bronx: the 8:14pm from Mt. Kisco, and the 11:21 from Wassaic, which makes many local stops and doesn’t arrive in Grand Central until 1:53am the next day. If there is such a thing as “the nanny train,” as the term seems to have been intended, it’s the 8:14 from Mt. Kisco. Otherwise, anyone commuting from Chappaqua to Woodlawn, for example, has to switch at White Plains from the “Northern Westchester Harlem Line” to the “Southern Westchester/Bronx Harlem Line.”
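The “bridge the gap” test can be sketched in code. This is a minimal sketch, not a script I ran against the real timetable: the station groupings and the trips below are hypothetical stand-ins.

```python
# Sketch: flag southbound trips that "bridge the gap" by making at least
# 4 stops north of White Plains AND at least 4 stops south of it.
# Station sets and trip stop-lists are illustrative, not the real schedule.

NORTHERN = {"Southeast", "Katonah", "Bedford Hills", "Mt. Kisco",
            "Chappaqua", "Pleasantville", "Hawthorne", "Valhalla"}
SOUTHERN = {"Scarsdale", "Crestwood", "Fleetwood", "Mt. Vernon West",
            "Wakefield", "Woodlawn", "Botanical Garden", "Fordham"}

def bridges_gap(stops, min_each=4):
    """True if a trip makes at least min_each stops in both halves of the line."""
    north = sum(1 for s in stops if s in NORTHERN)
    south = sum(1 for s in stops if s in SOUTHERN)
    return north >= min_each and south >= min_each

trips = {
    "8:14pm from Mt. Kisco": ["Mt. Kisco", "Chappaqua", "Pleasantville",
                              "Hawthorne", "White Plains", "Scarsdale",
                              "Crestwood", "Fleetwood", "Fordham"],
    "peak express": ["Chappaqua", "White Plains", "Harlem-125th St"],
}

for name, stops in trips.items():
    print(name, bridges_gap(stops))
```

Run over all 109 trips, a check like this would confirm (or refute) how rare the gap-bridging pattern really is.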

All groups, displaying “the two Harlem Lines” — note that GCT, Harlem 125th, White Plains, and North White Plains are stops in all 4 groups. I was unable to satisfyingly place these data points in QGIS so that all 4 were visualized at once, so they appear to be only part of groups 3 and 4.

In the end, I’m not very satisfied with my map. To say something meaningful about how train scheduling aligns or is at odds with the demographics of this train line would require a more nuanced visualization of race than just “percentage of white people per census tract,” which is what I have in the background now (see below). To a large degree, it only says what is already widely known: census tracts in northern Westchester generally have a higher percentage of white people than those in southern Westchester and the Bronx. On the train front, likewise, it’s already obvious that White Plains is a change-over station. This makes sense, since it’s about half the distance from GCT to Southeast and is the biggest municipality on the line outside of NYC. I guess I’m a little surprised at just how few trains stop in Northern AND Southern Westchester, but that’s about it in terms of breakthroughs on this map (and I got it mostly from the train schedule, rather than the map). 

I know it needs a legend! Groups 3&4, stopping mostly in census tracts with white populations of 70-100%.
Groups 1&2, stopping mostly in census tracts with white populations of 0-70%.

The process of making it was extremely enlightening, though. To have spent this many hours only to arrive at a lackluster conclusion and lackluster map is humbling, and helps me understand the pressure to produce results or give in to the temptation to say that our visualizations say what we desperately want them to say. I’m glad to be able to look at this project critically, without any need to make statements (or seek funding…) based on its conclusions. 

II. Method

I downloaded American Community Survey 2017 data from the American FactFinder website. I took data for New York, Bronx, Westchester, Putnam, and Dutchess counties, covering every county the Harlem Line MNR serves. I used a pre-packaged “Race” dataset that gives a raw-count breakdown of race in the generic [super limited] government categories: white, Black or African American, American Indian and Alaska Native, Asian, Native Hawaiian and Other Pacific Islander. The raw counts are per census tract.

To display population counts with space factored in, I made the raw counts into percentages. I did this only for the white population, so the map shows the percentages of white and non-white people, with no option for a more specific racial breakdown. This would absolutely be possible based on my dataset, but added too much complexity for me on this project. 
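That percentage step is simple enough to sketch; the column names below (geoid, total_pop, white_pop) are hypothetical stand-ins for the actual ACS field codes, which are more cryptic, and the counts are made up.

```python
# Sketch: turn ACS raw counts into a percent-white column per census tract.
# Field names and values are illustrative placeholders, not real ACS data.

tracts = [
    {"geoid": "36119000100", "total_pop": 4200, "white_pop": 3150},
    {"geoid": "36005051600", "total_pop": 6100, "white_pop": 610},
]

for t in tracts:
    # Guard against empty tracts before dividing.
    t["pct_white"] = (round(100 * t["white_pop"] / t["total_pop"], 1)
                      if t["total_pop"] else None)

print(tracts[0]["pct_white"])  # 75.0
```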

I also downloaded a TIGER shapefile package for all census tracts in New York State. I joined this geographical file to my race data file from American FactFinder using the Join function in QGIS, which links two spreadsheets via a common column containing the same unique value in each. This part of the process gave me the most trouble: QGIS consistently read the same 11-digit number, the GEOID for each census tract, as a string of text in one file and an integer in the other. This seems to be a fairly common problem, based on the information available on Stack Exchange and other forums. However, despite numerous attempts to troubleshoot, I wasn’t able to fix it using any of the suggested methods. I eventually gave up on fixing the problem in QGIS and used Excel’s Text-to-Columns feature to modify my dataset and create a different common, unique value. This was easily read as a string in QGIS, and I was able to join my geography file to my data file.
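The same type mismatch can also be fixed outside QGIS by normalizing both GEOID columns before the join. A small sketch of that idea (the sample values are illustrative, and this is an alternative to the Excel workaround, not what I actually did):

```python
# Sketch: coerce GEOIDs to 11-character strings so both files share a type.
# An integer-typed GEOID silently drops any leading zero; zfill restores it.

def normalize_geoid(value):
    """Coerce an int- or string-typed GEOID to a zero-padded 11-char string."""
    return str(value).strip().zfill(11)

# One file read the GEOID as an integer, the other as text:
from_attribute_table = 36119000100   # integer
from_csv = "36119000100"             # string

assert normalize_geoid(from_attribute_table) == normalize_geoid(from_csv)

# A GEOID with a leading zero shows why integer typing is dangerous:
print(normalize_geoid(1001020100))  # "01001020100"
```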

My favorite part of the data-creation process was using latlong.net to record the point coordinates of all 36 stations on the Harlem Line of MNR. I literally just followed the train line up a digital map and clicked on each station to get its coordinates, then put these into a third spreadsheet. After spending so much time troubleshooting data types in QGIS (and with the problem still unresolved at this point), I took great pleasure in such a straightforward task that also allowed me to explore a bird’s-eye map I am very familiar with from a lived-experience standpoint. I eventually loaded this file into QGIS and was delighted to see every station appear on the map. 

Then came the data-creation that felt least scientific and most subject to my own bias and lived experience of this question. I spent many minutes examining the Harlem Line train schedule, trying alternately to pull patterns out and to just allow myself to absorb the schedule without consciously looking for patterns. Once I had counted up and figured out some parameters that seemed reasonable (very much capta, not data), I made each of these groups a layer on my map. 

I added labels, fussed endlessly with all the colors and was never satisfied, read about color theory and looked up pre-made ColorBrewer packages, still hated my map and finally called it a day and wrote this blog post. Then I went back and fussed some more after dinner, adding hydrology shapefiles from the state of New York to make my coastline cover the dangling edges of census tracts, and color matching the new water to the underlying knock-off ESRI basemap. And now I’m grudgingly saying goodbye (for now??) to this project at 3 o’clock in the morning so that I can go to sleep and not wake up to it.

Mapping Assignment

The map blog post refers to a project created by a four-member team in Digital Humanities Methods and Practices last semester. The project reached a satisfying level, but I have kept building the mapping section myself until recently.

After I completed my course in Digital Humanities Methods and Practices last spring, I had the chance to work on a project that still has great potential to expand and to inform anyone interested in immigration, media history, or European ancestry, and of course every scholar of urban immigration and media history.

Immigrant Newspapers was a project with multiple tasks; each team member took on a different role in order to contribute to the completion of its initial scope. Our goal was to publish and distribute a digital collection of historical NYC immigrant newspapers by collecting them from hundreds of communities and various ethnic publications and displaying them in a public forum. The time span of those collections was 1860-1890, mostly because of the increasing diversity of immigrants from southern and eastern Europe who arrived then, but also due to the limited time we had to complete the project during our semester.

Because of the big responsibilities everybody had to take on for that project (coding, data aggregating and cleaning, digitizing hard-copy data sources, designing, UX researching, logo designing, social media accounts, and many others), we didn’t have the opportunity to finalize our initial goals, though we managed to take the project to a good level. One of my responsibilities was to geocode a map so that users could navigate and easily find the geographic coordinates of those newspapers in the city of New York.

I started building an interactive map with the Artmap software. I was inspired by the Tate Gallery in London, which geocodes locations in a similar way. Additionally, the design was very appealing, and the layout seemed to fit perfectly with the concept of our project.

As I was trying to integrate my JavaScript code into the WordPress platform, I faced some compatibility problems. I also noticed that the map background was limited to certain longitude and latitude ranges and couldn’t fit our demands, as the team and our instructor preferred to implement a historic atlas of the city. I ran into additional technical difficulties when trying to change the marker pins to other icons depending on the country of origin. So, in the end, I decided not to use the Artmap tool.

The next step was to find another tool more suitable for our project, so I decided to use Leaflet, an open-source JavaScript library for mobile-friendly interactive maps. Mapbox was the tool I used to upload the code written with the Leaflet plugin, carrying the implementation of the historical map to the level it has now reached.

The map used to present our data was a georeferenced illustration from an 1893 New York City atlas. The grid covers almost the entire city, and it is good enough to georeference every newspaper in our database. Users can interact with the publications within the context of the New York City region, and when they click on a pin marker they can see valuable information about that publication.

However, using a historical illustration scanned from the New York City atlas introduced additional difficulties. Unfortunately, the user could see the edges of the page, which restricted them from seeing beyond the page boundaries. This kind of historical map also prevented me from setting a very high zoomed-out level, which led to the same conflict. Publications located outside those boundaries would have to be noted in a tooltip as being off the map, which did not look professional for such a project; it also ran against basic user experience principles, and as a UX designer I should have offered a better solution.

Since the mapping project had reached a high level and could offer efficiency, interaction, and value to anyone interested, I was persuaded that it could be improved further. So I started modifying the mapping code again to include more components and useful tools, such as additional newspaper ethnicities.

What I did this week was to find a higher-resolution historical map and implement it as a background. Along the way I realized that the problem remained the same. The new pin markers I added near the edge of the historical map (e.g., a Greek publication pin near Nassau County, Long Island) revealed that even this map wasn’t good enough to georeference, as the user could still see the edges of the page. So my goal became finding another way to create it, using filters that could give the map a historical feel.

Having completed the Datavis Methods class during the summer, I wanted to see if the problem could be resolved using my Tableau skills. I tried to create the map on the Tableau platform. The results were satisfying: the annotations and the pin markers were fantastic, but I still couldn’t find a way to incorporate a historical map. This is still in process.

The interesting thing is that I managed to change the pin markers, using icons I found in the shutterstock.com image library. Using different color codes to signify each language, I clustered the pins so that users can recognize the newspapers they are interested in and, at the same time, hide any language they don’t want displayed on the map using the filter checkboxes.

By clicking on any pin marker, the user can find a very detailed tooltip with a sample of the newspaper, the information we aggregated (location, language, etc.), and a link to the main profile of each publication.

Within the last two weeks, I enriched the map with more newspapers in other languages (Greek, Russian, Czech, Yiddish, Turkish), based on the database we have already completed. Although we gathered our newspaper titles from Chronicling America, the NYPL microfilm library, and NYHS (a project that lasted many weeks), time limitations meant we were unable to prepare all those collections, upload them, and map them at the same time.

My future goal is to geocode all the newspaper collections and to expand their time span, as the work so far was narrowed down to newspapers founded between 1860 and 1890.

Additionally, tools like a search bar could help users navigate better. As the collection gets bigger and bigger, information accumulates and the papers are scattered over the map area, so an advanced search tool would let users seek, locate, define, and filter information (text and images) in the platform’s searchable index.

If I were to write about the importance of developing such a project, I would highlight that it offers a perspective on the new immigrant history of New York through the lens of news media. As New York City is home to hundreds of communities and ethnic publications, this map could surface great findings in each community’s respective language. Moreover, this mapping project is a reminder of how immigrants have been an integral part of US history, and especially of New York City history, whether they arrived in 1830 or 2015.

Mapping Irish America 1919-1920

The links to my map and slide presentation made in ArcGIS Online.

I, like Amanda, had a goal of finding a mapping platform to use for projects at my job. As the only librarian/archivist on staff, I do all the project design (and ensuing work) myself, so I keep an eye out for any free or low-cost program with easily accessible tutorials. One project I thought would work well for this assignment was chronicling the history of early 20th century Irish America’s involvement in Irish independence through a collection in my archive, the Friends of Irish Freedom (F.O.I.F.). And although it is not completely free, I decided to use ArcGIS Online.

Last year, my archive had the exciting opportunity to work on a project about Eamon de Valera’s 1919-1920 trip to the United States. The documentary relied heavily on the F.O.I.F. Collection; the Friends of Irish Freedom was an Irish American organization formed shortly before the Easter Rising whose main goal was to assist any movement that would bring about the national independence of Ireland. De Valera’s tour seemed like a success, with national attention helping him collect $5,000,000 through a bond drive. What actually happened was a clash of politics and personalities that led to the dissolution of the F.O.I.F., a deep division in Irish America, and a decade of court battles for control of the funds raised.  

From my research for the documentary, I had some questions that I wanted to begin to answer during the project: 

  1. Where exactly did de Valera visit during his 1919-1920 trip?
  2. Did he go to predominantly Irish American states? Is there a correlation between his stops and the Irish American population?
  3. Were Friends of Irish Freedom Branches near each of his stops? Did branches open after his visit?
  4. Where/when was the $5,000,000 collected?

I knew that I was not going to be able to answer all of my questions in the time frame, but I would at least start with his visits and the branches. I initially thought that inputting the data would be the easiest part of this assignment. I had de Valera’s itinerary from the collection, and was going to use ArcGIS Online to create the map. I was also going to add links to photos and newspaper articles that I found online or in the collection. I soon realized that the itinerary I had was not accurate (whether on purpose or changed after he came to New York), so I had to search the UCD finding aid for de Valera’s personal papers for dates and then the corresponding newspapers to authenticate them. And to make the search just a little harder, the national newspapers did not fully cover de Valera’s trip; it was mostly covered by local papers (a problem organizers mentioned in both collections). Also, some of the dates that I found in local newspapers or F.O.I.F. circulars were not mentioned in the UCD finding aid. I decided to include all dates with newspaper sources. Because of time limitations, the mapped tour runs only through November 1919 rather than December 1920. Just another reason to always double-check your sources!

Once I had a good block of data, I created a comma separated value (csv) file from a spreadsheet of the 32 stops. In the file, I included date, city, state, description and image link. Not all stops have all of the fields filled in, but it is something I could go back and add. The csv file was added to my basemap as a layer, and I changed the symbols to green points with orange numbered labels in order of his tour stops. I then configured the pop ups to show the rest of the data from the file. The first stop at the Waldorf Astoria has some text as well as a link to an image of de Valera and Irish American leaders on the hotel’s roof. 
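The stops file described above can be sketched like this. The rows are placeholders (only the Waldorf Astoria stop comes from the post itself), and the image URL is a hypothetical stand-in:

```python
# Sketch: build a tour-stops CSV with the five fields named in the post
# (date, city, state, description, image link). Rows are illustrative.
import csv
import io

FIELDS = ["date", "city", "state", "description", "image_link"]

stops = [
    {"date": "1919-06-23", "city": "New York", "state": "NY",
     "description": "First stop: the Waldorf Astoria",
     "image_link": "https://example.org/waldorf-roof.jpg"},  # placeholder URL
    {"date": "1919-07-18", "city": "Boston", "state": "MA",
     "description": "", "image_link": ""},  # fields may be left blank
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(stops)
print(buf.getvalue().splitlines()[0])  # date,city,state,description,image_link
```

A file shaped like this can be added directly to an ArcGIS Online basemap as a layer, with the remaining fields surfaced in the pop-ups.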

The next layer I created was of F.O.I.F. Branches as of September 1920. This information was taken from multiple branch lists in the collection, and put into another spreadsheet later downloaded as a csv file. I used the ‘Counts and Amounts’ option for the data, and the circles in the states increase in size in correlation to the number of branches. I did not put in exact addresses, since there were about 700 of them nationally. I also did not include when the branches were formed since there were so many spread out over several documents. I think in the future I would try to shade the entire state, with the color deepening in relation to the amount of branches, instead of using circles.
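The per-state tally behind a ‘Counts and Amounts’ layer is a simple aggregation; here is a minimal sketch, with invented placeholder rows rather than the real branch lists from the collection:

```python
# Sketch: tally F.O.I.F. branches per state so each state's circle (or
# shading) can scale with its branch count. Rows are invented placeholders.
from collections import Counter

branches = [
    {"name": "Branch A", "state": "NY"},
    {"name": "Branch B", "state": "NY"},
    {"name": "Branch C", "state": "MA"},
    {"name": "Branch D", "state": "NY"},
]

per_state = Counter(b["state"] for b in branches)
print(per_state.most_common(1))  # [('NY', 3)]
```

The same per-state counts would also drive the shaded-state (choropleth) alternative mentioned above.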

Overview of map.
From the first stop on the tour. The image URL links to the rooftop photo of de Valera and leading Irish Americans.

I didn’t want to create a StoryMap since I did not have all of the information I wanted to include, but I did create a slide presentation on ArcGIS Online. Each slide has a title, and I zoomed in and captured an area to show on the slide. The pop ups will appear if you click on the symbol, which is something I had to apply before saving. I also wanted the legend visible on the slides, but you have to click on the legend on the upper right hand corner for it to open up. The presentation is not as nice as StoryMaps, but it did work for the first stage of my project. It is a little clunky and it takes a few seconds for the map to refocus when you move to the next slide.

I really did enjoy using ArcGIS Online, and I think I would use it in the future. I know that it is limited, and if I had used the desktop version or QGIS, I could have added more features to my map before uploading it to ArcGIS Online, as mentioned in the “Finding the Right Tools for Mapping” article. What really helped me were the tutorials and videos I found online explaining how to add layers and special features. For someone who is not technically inclined, they were much needed. I think that if I had had all of my data (full itinerary, branch information, and population breakdown in 1920), then maybe I would have used the desktop versions first. This is definitely a project I will continue working on, and hopefully I will have a wonderful StoryMap to share in the future. 

Along for the Ride: Mapping “Warwick Woodlands”

For our mapping project I created a digital story map and a static map for the first chapter of Warwick Woodlands by Frank Forester. I first created the story map, with multiple maps within it, and then decided to make a single map containing all the locations, with the addition of a legend.

The story map feature was something I wanted to utilize for this project because there are many template options and ways to showcase the data. I have used ArcGIS Online to plot points, lines, and areas for research before, but had never used the platform to house a standalone site.

I still feel like the site needs major edits; it doesn’t seem complete. I initially wanted to geolocate three to five chapters, but after closer analysis of the first chapter I decided to focus my attention on the initial text and how best to visualize it. Something I feel this project is missing is black-and-white photography and paintings of the landscape, people, and places written about in the text from that time period. I have hyperlinked a period map that covers a majority of the trip taken in the first chapter, but I have not yet been able to find the other imagery I was hoping for. One image in particular I was thinking of was a Jasper Cropsey painting, one of his Greenwood Lake landscapes; for the slideshow effect I wanted a few images to look over near the end of the chapter.

Regarding design, I used a historic map of New York City from 1840 as a base map for the earlier stops along the way, with a gray background to highlight the historic map; the other base maps were US Topography for the map of Hoboken to Warwick and Newspaper for the map of Warwick. For marks on the map I used simple imagery: circles, stars, and a dotted line. The circle points describe known locales, and the star points mark possible locales. The dotted line informs the viewer of the approximate path; a known path, or a known portion of one, would be drawn as a straight line. The coloration comes from my choosing the lowest color on the list in the edit function. There are eight color groups for the symbology I was using, so I was thinking about using a different color for each day to designate a different chapter of the book.

Something I am finding difficult after publishing the story map is the clickability of the points. In a way it’s the most important aspect of the page: being able to click and read about a particular point. 
For most of the points, I have added descriptions from the text with the page number it derives from in the novel.

For the static map, I first edited one of the maps in ArcGIS Online to encompass all the points created for the first chapter, along with the line connecting them, and then opened it in ArcMap to edit. After other little tweaks, I added a title and a legend with map labels, a north arrow, a scale, a ratio, and a signature.

Mapping: A Novel Idea & A Simple Approach

Because my experience working with data is very specific and very limited, the idea of mapping is still rather foreign to me. I approached this assignment primarily from a learning/experimenting perspective, rather than setting out to create a complete and polished final product. I was struggling to find the information I would need to map the locations in some of my more recent favorite literary works, so I decided to focus on an old favorite: the Outlander series by Diana Gabaldon. It is a series of novels (and now also a television series) initially set in 1945 and 1743/44 Scotland: classic historical fiction with a significant love story and a surprising amount of time travel. Because Outlander has been popular for nearly 30 years, there is a fair amount of analysis and fan work already online. In fact, someone has already created an elaborate map using Google Earth. I discovered this after I was deep into my own work, when it was too late to brainstorm a new idea, so I plowed ahead with their already-created data and imported it into Tableau. Although I successfully imported the geo data and it appeared on a map, the learning curve was a bit steep for me to figure out how to assign different colors to different locations (for example, five chapters in one location as one color). In other words: I didn’t know how to do what I wanted. I think the data would need to be amended to have more columns than just the chapter/location description and the latitude/longitude data point. That was too time consuming for this particular project at this time. So I sent myself back to the drawing board, despite my hesitancy about starting over.

My first attempt at Tableau, using the Outlander book location data.
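One way to get per-location colors in Tableau is to add a categorical column that the Color shelf can use, which is the kind of amendment I had in mind. A minimal sketch of that data prep; the chapter/location rows below are hypothetical, not the real Outlander data set:

```python
# Sketch: split a combined "Chapter - Location" field into two columns so a
# tool like Tableau can color points by location. Rows are illustrative.
import csv
import io

raw = """chapter_location,lat,lon
Ch. 1 - Inverness,57.4778,-4.2247
Ch. 2 - Inverness,57.4778,-4.2247
Ch. 4 - Castle Leoch,57.3,-4.45
"""

rows = list(csv.DictReader(io.StringIO(raw)))
for r in rows:
    # Split "Ch. N - Place" into separate categorical fields.
    chapter, _, location = r["chapter_location"].partition(" - ")
    r["chapter"], r["location"] = chapter.strip(), location.strip()

print(sorted({r["location"] for r in rows}))  # ['Castle Leoch', 'Inverness']
```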

I also considered recreating the walking tour by German Traces, NYC/Goethe Institute, but I was eager to create something that does not already exist and would fit into the time frame I had, without already knowing how to use these platforms. I felt like a bit of a fish out of water for this assignment and still struggle with the data work that has to happen before you can even get to the mapping. Do you use someone else’s data set (I know this is common, and there are tons of open-source ones) and manipulate it to fit your needs, or do you create something of your own, perhaps very simple given the amount of work that goes into creating a data set, and have it be original?

Because I decided to approach this as a learning opportunity, I opted for simple but original. I simply lacked the time necessary to create my own locations data set for a novel, or modify the Outlander one. However, since moving to NYC at the age of 18, I have lived in 12 different apartments in four of the five boroughs. I thought it would be fun to map this.

I used an online tool to get the longitude/latitude points for each apartment I have lived in, except my current one, for privacy. I used Excel to create a sheet with a simple description of 1 through 11 and columns delineating longitude and latitude. I imported this as my data source into Tableau and, voila. Although this seems really simple, it was a good learning process for me. I would love to go to a workshop and learn more about using Tableau and some of the other mapping software, as I’d love to be able to visualize more interesting data (as I’ve seen some of my classmates do for this assignment!). Creating my own “data set” and experimenting with importing it was helpful for me. I am a hands-on learner, and while it would have been ideal to be walked through a first go, this was also fun for me. I am eager to delve more into this, especially after our visualization class last week.

Now the big struggle: I used Tableau Desktop and so I do not have a link to share like I thought I would, so here is a screen shot. I was going to share my Tableau Workbook file, but WordPress wouldn’t allow that due to security reasons (so it said).

Marie’s Homes – map

Holding Fast: Mapping American Indigenous Sovereignty

My Hybrid Tableau/QGIS Project

My Process
While exploring Yarimar Bonilla and Max Hantel’s “Visualizing Sovereignty,” I was struck by the power of untethering the Caribbean islands from the too-familiar lands and waters that otherwise dwarfed or laid cultural claim to them by virtue of a colonial past. I was also struck by the “Invasion of America” video referenced therein, depicting the loss of native North American lands as Europeans arrived, colonized, and expanded. I’d seen the “Invasion of America” before, but I didn’t realize until now how much that visualization reinforces the Manifest Destiny mindset, almost confirming Andrew Jackson’s belief that Indigenous people “must necessarily yield to the force of circumstances and ere long disappear.”[1] That video, as helpful as it is in depicting colonial greed, also focuses the story on indigenous loss rather than indigenous resilience.

So, for this project, I wanted to mimic Bonilla and Hantel’s process to map the sovereignty of Native American nations in hopes of challenging the popular defeatist tale.

I started in Tableau, familiar to me after this summer’s Intro to Data Visualization intensive. I discovered a shapefile from the US Census Bureau demarcating the 2017 “Current American Indian/Alaska Native/Native Hawaiian Areas.” I had never worked with shapefiles, but found this one fairly intuitive to map in the program. I distinguished each officially recognized “area” (as the Bureau calls it) by color and added the indigenous nation’s name to the tooltip to make each area visually distinct. As does nearly every step of a mapping exercise, this alone yielded some insights. Oklahoma is nearly all designated land. The Navajo nation has a land allotment larger than some US states. Two of the largest land parcels in Alaska belong to tribes I hadn’t heard of: the Knik and the Chickaloon.

This first view also presented two significant problems, predicted by our readings from both Monmonier and Guiliano and Heitman. First, Tableau’s map projection is grossly distorted, with Greenland appearing larger than the contiguous states instead of roughly one-fifth their size. Second, the limits of the data set—collected by and in service of the US government—cut out the indigenous people of Canada and Mexico, whose connections with the represented people are severed. What a visual reminder of a political and historical truth!

Census Bureau Areas 2017

Screenshot of the Census Bureau’s mapped shapefile, with tooltip visible.

I did find a shapefile of Canadian aboriginal lands also from 2017, but couldn’t find a way to merge the geometries in one map. Mapping those Canadian reserves separately, I noted immediately how easy it is for political entities to be generous with lands they don’t value. (Of course, the map’s polar distortion may be enlarging that seeming, albeit self-serving, largesse.)

Canadian Aboriginal Reserves

Screenshot of the Canadian government’s shapefile mapped.

I returned to the US visualization to see whether similar land prioritization was at work, changing the base map to a satellite rendering.

Census Bureau areas on a satellite map

Screenshot of the Census Bureau’s shapefile on a satellite map.

Again, the new view offered some insights. The effect of the Indian Removal Act of 1830 is clear, as the wooded lands east of the Mississippi seem (from this height) nearly native-free. Reservations are carved out of less desirable spots and pushed toward the interior as, in addition to the westward push from the east, states began to be formed from the West Coast inward after the Gold Rush.

Next, eager to mirror Visualizing Sovereignty in turning the power tables, I removed the base map altogether. De Gaulle’s “specks of dust” quote sprang to mind, as I saw, in full view, this:

Census areas without a base map

Screenshot of the Census Bureau’s shapefile mapped, with the base map washed out.

Just this one act changed the scene for me entirely. Suddenly, Hawaii came into the picture, calling to mind its colonization in the name of strategic desirability. The whole scene reminded me of what Bonilla and Hantel (borrowing from Rodriquez) called “a nonsovereign archipelago, where patterns of constrained and challenged sovereignty can be said to repeat themselves.” I longed for the inclusion of the Canadian lands to flesh out the archipelago, though the missing data points to one such constraint and challenge.

Revealing just a surface level of the shifting sands of sovereignty, this data set includes ten distinct “classes” of recognized lands, so I included those in the tooltips and added an interactive component to let users isolate each class, foregrounding spaces that were connected by the US government’s classification of them. For example, choosing the D9 class (which the Census defines as a “statistical American Indian area defined for a state-recognized tribe that does not have a reservation or off-reservation trust land, specifically a state-designated tribal statistical area”) reduces the archipelago to a small southeastern corner—strongholds resistant, perhaps, to Jackson’s plans, or more probably communities that went underground until the mid-20th century, when the Civil Rights Movement empowered indigenous communities and gave birth to Native American studies.

Census Bureau's D9 class areas

The D9 class of recognized indigenous “areas.”

This divide-and-conquer, 10-class variety of sovereignty was underscored by the significant contrast in tone in the definitions of tribal sovereignty between the National Congress of American Indians (NCAI) and the US Bureau of Indian Affairs (BIA). The NCAI contextualizes and defines sovereignty with active, empowering language: “Currently, 573 sovereign tribal nations…have a formal nation-to-nation relationship with the US government. … Sovereignty is a legal word for an ordinary concept—the authority to self-govern. Hundreds of treaties, along with the Supreme Court, the President, and Congress, have repeatedly affirmed that tribal nations retain their inherent powers of self-government.”

In sharp contrast, the BIA contextualizes and defines sovereignty with passive, anemic language, explaining that, historically, indigenous tribes’ “strength in numbers, the control they exerted over the natural resources within and between their territories, and the European practice of establishing relations with countries other than themselves and the recognition of tribal property rights led to tribes being seen by exploring foreign powers as sovereign nations, who treatied with them accordingly. However, as the foreign powers’ presence expanded and with the establishment and growth of the United States, tribal populations dropped dramatically and tribal sovereignty gradually eroded. While tribal sovereignty is limited today by the United States under treaties, acts of Congress, Executive Orders, federal administrative agreements and court decisions, what remains is nevertheless protected and maintained by the federally recognized tribes against further encroachment by other sovereigns, such as the states. Tribal sovereignty ensures that any decisions about the tribes with regard to their property and citizens are made with their participation and consent.” “Participation and consent” are a far cry from “the authority to self-govern,” and even though the NCAI boasts of the Constitutional language assuring that tribes are politically on par with states, it makes no mention of the lack of representation in Congress or other such evident inequalities.

Shocked by the juxtaposition of these interpretations of sovereignty (and in a slightly less academically rigorous side jaunt), I pulled population data from Wikipedia into an Excel spreadsheet, which I joined to my Tableau data. Using the World Atlas to compare the population density of these reservations to that of the least densely populated states, I created an interactive view showing which reservations are more densely populated than those states. Not surprisingly, many beat Alaska. But other surprises emerged, such as the Omaha Reservation being more densely populated than South Dakota, its neighbor to the north.
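The comparison itself is simple arithmetic (population divided by land area). A quick Python sketch of the check, using rough, illustrative figures rather than the actual Wikipedia and World Atlas numbers:

```python
# Population density = population / land area. All figures below are rough
# illustrations, not the actual Wikipedia / World Atlas numbers I used.
def density(population, area_sq_mi):
    return population / area_sq_mi

states = {
    "Alaska": density(740_000, 665_000),       # ~1.1 people/sq mi
    "South Dakota": density(885_000, 77_000),  # ~11.5 people/sq mi
}

omaha = density(5_000, 310)  # Omaha Reservation, illustrative figures

# Which of the least densely populated states does the reservation beat?
denser_than = [name for name, d in states.items() if omaha > d]
print(f"Omaha Reservation: {omaha:.1f} people/sq mi; denser than {denser_than}")
```

With these placeholder numbers, the reservation comes out denser than both states, which is the kind of result that surprised me in the real data.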

Area by population density

Screenshot of comparative population density.

I next wanted to recreate, in some way, the equalizing effect of the Visualizing Sovereignty project’s decision to same-size all of the Caribbean islands. But, with 573 federally recognized tribes, that seemed too ambitious for this assignment. So, I turned to video to record an exploration in zooming, giving some spots greater consideration than others, and starting in an oft-neglected place.

With Hawaii now foregrounded, the distortion of Tableau closer to the North Pole seemed too significant to neglect, so I learned a little QGIS in order to utilize its more size-righted mapping. Playing around with the new program, I found a powerful tool for foregrounding identity: labels. Merely including them turned the nonsovereign archipelago into a menacing swarm of equalized names. At all the same font size, they seemed like the Immortals of Xerxes’ Persian army, ever replenishing (as demonstrated in the linked, very rough video), regardless of how far away or close up I zoomed. They took over my RAM, slowing the display down with each change in scale, asserting themselves in greater detail the closer to the land I got and at their own pace. This view seemed to better represent the truth that contradicts Jackson’s prediction: the Indigenous have resisted and persisted despite all attempts to eradicate them. Further, this view points to the potential of collective action—a potential that may be best developed through DH, which can cut across geographic space.

QGIS view

A screenshot of the labels in QGIS

This project has raised for me a host of questions. What about the nearly 300 unrecognized tribes not included by the Census Bureau? What might be revealed if data from Canada and Central America were included, restoring indigenous domains to their geographic boundaries rather than their political ones? What happens if we introduce the element of time, as Visualizing Sovereignty did, to this visualization? (And, thinking of the Sioux Winter Coat, what if that temporal consideration were made through an indigenous representation of time keeping?) What happens if we look at this data as capta, including in it profiles of those reduced to geographic entities and statistics or map views that begin from each region (for example, the Hopi are surrounded by Navajo—a very different experience from the Quinault)? And how might digital humanities provide a platform or opportunities for sharing culture and resources across these spaces and histories?


[1] For the fully appalling context, read the paragraph twelfth from the bottom of his address to Congress on December 3, 1833: https://millercenter.org/the-presidency/presidential-speeches/december-3-1833-fifth-annual-message-congress





Praxis Mapping Assignment

My goal when approaching this assignment was to find a mapping platform that I could apply to projects at work. I work at a small history archive in Long Island City that focuses on New York City political history and Queens local history. I’ve seen archives that have geotagged photos to show what a specific building or street looked like at a different point in history, and others that have developed “walking tours” where users can follow a predetermined path to see historic sites and have relevant photos or material displayed when they get there. While reviewing the options in “Finding the Right Tools for Mapping,” I wanted to choose something that was free and accessible for someone with limited technical skills (ahem, me). I also wanted something with at least some interactivity instead of a static map. I first skipped over the section on ArcGIS Desktop because it’s listed as proprietary and not very beginner-friendly; however, one of the listed strengths is ESRI’s Story Maps, which I thought would create a neat linear display, great for building a historic walking tour from archival materials.

Since we only had two weeks to put together a map, I didn’t have time to do the research necessary for an actual walking tour using my archive’s materials, so I created a map based on various places relevant to my life, i.e., where I attended school, a semester abroad, my honeymoon, etc. At first, I followed the link directly to the ArcGIS Story Maps page, quickly found the classic Story Maps builder, and found it to be more accessible. I created a free account and made a map with nine data points.

Original Story Map

I plotted the points but quickly realized that the map was more static than I would have liked and didn’t offer easy navigation between the data points. It did provide more information once you clicked on one of the data points, but I felt this option would work better embedded in a webpage or online exhibit. I looked up a few tutorials and found Story Map Tour. By this time, I had latched on to the “walking tour” idea and was looking specifically for a map that could move through the data points in a more linear fashion. Story Map Tour seemed tailored to that design.

This is the map I created: http://arcg.is/1DD9m8

Creating the map: the interface for creating a story map is very user-friendly and offers a lot of options for getting your data points on the map. Images and video can be attached to the data points and can be imported via Flickr, YouTube, or a CSV file. I didn’t have enough data to attempt a CSV import, but I have reservations about the level of detail needed to capture the information and plot it on the map automatically. I also wasn’t thrilled about having to use proprietary sites to import media content, but I used some Creative Commons images to add a visual element. When importing via Flickr, I had to plot the points manually, which became very time-consuming. Points can also be added using a URL to media plus latitude/longitude coordinates; however, that too can only be done one point at a time and could become time-consuming.

Customizing the map: there are a few features that let you customize from predetermined choices. The data pointers come in four different colors, there are 24 options for the base map, there are 3 layout options, and a title and header section can be customized to include a custom logo or a link to a specific web page. While this may feel limiting to someone with more advanced mapping/GIS skills, it worked for my needs. I was also impressed with how close the view would zoom in on the map, which would make manually plotting points much easier. After I plotted my nine points, I went through and gave each data point a title and a short description. For the ninth point, I filled the description box with lorem ipsum text to get a sense of how much content could be included.

Overall, I was trying to experiment and test the features of Story Map Tour – with the idea of an archival-based walking tour in the back of my mind – and I feel comfortable that I would be able to put something together. My next step would be to attempt importing a larger data set from a CSV file in order to really test the limits. However, for smaller, more localized projects, I think Story Maps is a perfectly adequate tool for beginners with limited skills and a limited budget.

10/2: Mapping Praxis Assignment

When starting my project, I knew that I wanted to explore topics within archaeology. One dream of mine had always been mapping out places that had sites, monuments, or artifacts in common. For me, this was going to be a way to try to find connections between cultures or other data points that stand out. The map I imagined spanned time as well as geographic location, a kind of combination of a timeline and a global map. During my undergraduate career, however, I quickly realized that it is difficult to accumulate that sort of data without knowing the content intimately or being shown the resources, because much of the data is either not published yet or down so many scholarly rabbit holes that it is difficult to track down. I encountered this again when trying to design my mapping project.

When searching for open data sets, I was also looking for something that intrigued me. After some time, I came across a list of archaeoastronomy sites, organized by country, on a Wikipedia page. Archaeoastronomy is the study of archaeological sites that may have been used to study astronomy. The idea then popped into my head not only to see the sites marked on a map, but to see a sort of time lapse (either during the time period of the site, if possible, or the present day) of the stars from the point of view of whichever site you click on. Although I felt the sky-gazing part of the project would not be feasible with the programs we were starting out with, I was still curious to see where these sites were located and whether there was any sort of pattern. I only settled on Wikipedia because of the topic and the fact that much of the data I was finding was not something I could map.

When starting, I decided to create an Excel sheet with the columns listed as follows: Country, Site, Location, and Coordinates (later separated into Latitude and Longitude). To get the coordinates for each site, I searched for the sites on Google Maps. Of course, this means I was relying on the accuracy of Google Maps, but since there was no other way to get the coordinates without physically being on site, it worked.
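For anyone repeating this step, the split from one Coordinates column into Latitude and Longitude can be scripted rather than done by hand. A minimal Python sketch, assuming the spreadsheet rows carry a combined "lat, long" string (the example site rows are mine, with approximate coordinates; sites with no published location stay blank):

```python
def split_coordinates(rows):
    """Split a combined 'Coordinates' field ("lat, long") into separate
    Latitude and Longitude fields; unlocated sites stay blank."""
    out = []
    for row in rows:
        coord = row.get("Coordinates", "").strip()
        if coord:
            # split on the first comma only, then trim whitespace
            lat, lon = (part.strip() for part in coord.split(",", 1))
        else:
            lat, lon = "", ""  # e.g. the Puyang tomb, whose location is unpublished
        out.append({**row, "Latitude": lat, "Longitude": lon})
    return out

# Example rows (coordinates approximate, for illustration):
sites = [
    {"Country": "England", "Site": "Stonehenge", "Coordinates": "51.1789, -1.8262"},
    {"Country": "China", "Site": "Puyang tomb", "Coordinates": ""},
]
for row in split_coordinates(sites):
    print(row["Site"], row["Latitude"], row["Longitude"])
```

The same rows, written back out as a CSV, would then import directly into a mapping program that expects separate latitude and longitude columns.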

This process was interesting for many reasons. First, I noticed that some sites’ exact locations were not published, such as the Puyang tomb in China. When searching for this site, both on Google Maps and through a search engine, I could find no mention that clearly stated the location, not even an area it was near besides Puyang. Thus, this coordinate was left blank. For some other locations like this, I just used the coordinates of the nearest town. What stood out to me was that I did not have this trouble with any of the Western countries; many of the Western locations were either better known or better mapped than those in non-Western locations. This, however, also led me to think that maybe the exact locations of some of these sites were kept secret for a reason. When I was on a dig (one that had not been published yet), we were always told not to tell any of the locals that we were digging at an archaeological site, to keep away potential looters.

In addition, there were entries on the Wikipedia page such as the Nuraghi in Italy. A Nuraghe is a type of archaeological structure found all over Italy. Since it appears many times in many different locations, it was not something I could use as a single data point. The other entries like this were the temples on Malta and the Funnel-Beaker culture that appeared in Finland and ended up spreading throughout the Mediterranean.

At the opposite extreme, the listing for India was so extensive that many names were left off the list and the reader was referred to a book discussing archaeoastronomy at sites in India.

Going through the names one by one on Google Maps, although time-consuming, was also fun for me, because I got to see sites I had never seen or heard of. It was so interesting that it even made me want to visit some of these sites one day.

When it came time to choose a program, I checked out a few from the article we read, but I eventually settled on QGIS. I am not sure what I was expecting (having never used a mapping program before), but this was not it. There was nothing to show me how to get started or where to go for even the simplest of things. So, I went to Google.

The first thing I knew I wanted to do was have a world map background to be able to map my coordinates. Eventually I came across a YouTube video that showed me how to set up three different types of map backgrounds, a regular map, a terrain map, and a satellite map. I decided that I would just use the regular and terrain maps.

Once I had the map background, I needed to figure out how to set up the coordinates. I found a tutorial page on importing spreadsheets. This is when I had to separate my Coordinates column into Latitude and Longitude so that the information would import properly. Once imported, the items showed only as dots marking the coordinate spots. After some searching and exploring the program, I figured out how to add site labels to the dots. Once the labels were up, I noticed that some letters with accents and from other alphabets had not transferred over well from my spreadsheet. I took some time to see whether I could edit the labels within the program itself, or whether editing the labels in my CSV file would let me overwrite the previously imported information. I was not able to figure it out or find an answer online, so I just edited my CSV file and imported a new layer, deleting the previous one.
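The garbled accented characters are very likely an encoding mismatch: Excel on Windows often saves CSVs in a legacy encoding such as cp1252, while QGIS defaults to reading UTF-8 (its delimited-text import dialog also lets you pick the encoding directly). One fix is to re-save the file as UTF-8 before importing. A small Python sketch, where the file paths and the cp1252 source encoding are my assumptions:

```python
def reencode_csv(src_path, dst_path, src_encoding="cp1252"):
    """Re-save a CSV as UTF-8 so accented site names survive the import
    into QGIS. The source encoding is a guess; Excel on Windows commonly
    writes cp1252."""
    # read with the assumed legacy encoding...
    with open(src_path, encoding=src_encoding) as src:
        text = src.read()
    # ...and write back out as UTF-8
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(text)

# Hypothetical usage:
# reencode_csv("archaeoastronomy_sites.csv", "archaeoastronomy_sites_utf8.csv")
```

If the labels come out garbled in a different way, the source encoding guess is wrong and another candidate (latin-1, for instance) is worth trying.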

The next thing I tried was to see whether the labels could show only when you hover over or click a dot. I found a few pages describing how to set up the map for the web, where you could use HTML and have the labels activated on hover. The problem, however, was that plugins had to be installed to do this, and the plugins I was told to search for never showed up in QGIS for me. I am not sure whether the plugins I was shown were out of date or had been renamed. Whatever the case, I was not able to get the pop-up labels working.

If I had more time and a place to look up the different things that can be done in QGIS (up-to-date documentation?), I think I would enjoy using it. I am not sure, though, whether QGIS would be able to support my original idea of showing a time lapse of the sky (though if it could be done as a video, maybe that could work?). I have attached a PDF of the map I made below.

Reconciling with the Archive

I’ve been looking forward to this week’s readings, since the intersection of DH and archives is what I’m most interested in. However, in an effort to be totally transparent: I found myself reflexively becoming defensive when reading through Daut’s article the first time – there’s a history of archivists struggling to be recognized as professionals in their own right – and had to reread with a conscious effort to keep an open mind, in case my own bias was keeping me in an old pattern of thinking.

In terms of access, I think Daut framed her discussion of decolonizing archives and repatriating Haitian documents in a way that exemplified discussions archivists are having. In most disciplines there is a pushback against the white/straight/male version of history commonly reflected in archival holdings, and there has been a real effort in recent years to include materials that more accurately reflect a realistic historical record. I’m also glad she included the Revue de la Société Haïtienne d’Histoire, de Géographie et de Géologie in her discussion about digitization. It echoes the same sentiments that were expressed in “Difficult Heritage…” from last week’s readings: just because documents can be digitized and made universally available doesn’t mean that, ethically, they should be.

I couldn’t overcome my bias during Daut’s discussion in the “Content” section, where she advocates avoiding the “citizen historian” or crowdsourcing model with regard to digital scholarship and working with the materials. She says, “Without a doubt, neither trained archivists nor traditional historians can be replaced in digital historical scholarship.” However, she goes on to discuss the contributions of “historian archivists,” which itself diminishes the expertise and training of professional archivists. I think there is a clear difference between being trained to recognize and describe meta/data from documents and being a subject expert (historian) on the content, but both are needed in order to fully engage with the data presented. This is a discussion that comes up from time to time in the archives profession and something I wanted to mention, but I do not want to devote too much space in this post to it.

Daut’s discussion on curation and context is a mixed bag for me, and I believe it’s because the term “archive” means something different to me. When Daut mentions that “Digital archiving projects…teach the reader/user the significance and importance of a defined set of documents…,” that seems more like a digital project than an archive. By having a creator limit the documents that are used, it might restrict information that could potentially contribute to scholarship. The large amount of material available in an archive (hopefully) means that no matter what question a researcher is trying to answer, they have the resources to do so. That being said, I think deeper evaluation of archival sources can contribute meaningfully to scholarship. In the case of Digital Aponte, a space was created for the absence of archival material. I thought the Digital Aponte project was a great way to carve out space for a gap in the archival record and to compile secondhand accounts in an effort to recreate some of what was lost. I particularly liked the interdisciplinary nature of the website, with sections devoted to genealogy and mapping, all while allowing annotations to encourage collaboration across multiple disciplines. Trying to center and create an environment that resembled Aponte’s Havana also adds necessary contextualization. I’m excited to hear Ada Ferrer’s description of the project during class.