VR Experience at a Columbia Colloquium

I attended Columbia’s Emerging Technologies Colloquium a few Saturdays ago and immersed myself in the world of virtual reality for the first time. Even though it was a little over a month ago now, the experience still resonates in my mind.

After listening to a talk on the state of AR/VR by Steve K. Feiner of Columbia University’s Computer Graphics and User Interfaces Lab, the colloquium broke for lunch and attendees were invited to check out some of the virtual reality simulators that were set up around the room – I couldn’t wait to dive in!

I had never engulfed myself in VR before, so I was very excited to experience it in an environment meant specifically for its exploration and play. There were eight devices set up around the room, and the first VR experience I ‘dove’ into was an underwater one. From the computer screen, where you can see what the user is experiencing, it looked like the smoothest gameplay of all the VR sets, and I was looking forward to interacting with it. As soon as I went in, it was very immersive. I was at a single point a few meters below the water in what seemed like the ocean – what ocean it was, though, I don’t have the slightest idea. There were different colors of coral and species of fish around me, and I felt as if I could even touch them. Thinking I could swim around, I tried at first to move to different spots with the controls, but was unsuccessful. The visuals were beautiful, though. Something I was able to do with the controls was slow down the scene around me, which was impactful: it made me notice smaller details, like the sun’s rays on the shell of a sea turtle that swam by me. The title of this specific underwater experience was “Sea Turtle,” and thinking back on this moment reminds me of The Matrix, when Neo notices the woman in the red dress – it looked and felt so real. Another title I saw someone in was “Jelly Fish,” in which a group of jellyfish passes by.

While I was immersed, a conversation came up about historical pedagogical uses for VR, such as being a witness to the Gettysburg Address or some other significant place at a specific time. I was very much enjoying my VR underwater experience, but I couldn’t help wanting to engage in the conversation as well. Being plugged in, though, I didn’t feel like I could converse with them about the topic, but when I came out, I discussed my interest in virtual tourism for pedagogical purposes. The handler of the machine was interested in this idea as well and told us that Google Earth is a good platform to play in, but while trying to switch over to Google Earth he discovered he wasn’t patched into an internet connection and was not able to show us at the time.

Another virtual reality simulator I plugged into was a foreign language teaching tool. Once you had the gear on over your eyes and ears, you were in a classroom: a student at a desk, with a teacher at the front of the room. The teacher informs you about the language you are about to learn and then begins. The tutorial I was in was for Hebrew, but it malfunctioned and was not able to pick up my voice. The handler asked if it was working, and I had to tell him it wasn’t; he, even more disappointed than I was, took off my headset and told me he had to shut it down for the rest of the day.

I am very intrigued by the direction VR can go in terms of pedagogical use; I look forward to watching this growth closely and hope to play in it more often as well.

Intro to Photoshop Workshop

GC ITP Skills Labs: Intro to Photoshop Workshop
Monday, October 21st, 2019

Last week I went to the “Intro to Photoshop Workshop” as part of the GC ITP Skills Labs taught by Jessica Brodsky.

As some background, I have never really worked with Photoshop, but I have worked with other photo editing programs such as Lightroom. I had always been interested in using Photoshop to edit photos or create new graphics.

The first thing that we learned about was the difference between vector and raster graphics, which I had heard of before but was never quite sure about. Vector graphics (such as logos) are made using mathematical formulas, so an image retains its figure as it is enlarged. Raster graphics (such as digital photos) are made up of pixels (units of color information), so an image becomes pixelated as it is enlarged.

Next, we went over color theory, which showed how colors mix in print as opposed to digital media. Then we discussed hue (color) as opposed to saturation (how vivid a color appears). Finally, we went over brightness (how much objects appear to be reflecting light) and contrast (the differences in brightness and color that make an image distinguishable). Brodsky also mentioned how working with different layers in Photoshop can preserve the original image, so that any errors or mistakes can easily be mended or reverted.

We then went over the “Rule of Thirds” composition principle: key parts of an image usually become more “dynamic and interesting” when they align with the intersecting points of a 3-by-3 grid. I had known of the “Rule of Thirds” previously but did not quite understand how it worked, so I am glad we were able to go over it.

After learning these items and adjusting an image on our own, we went over the different ways of saving our adjusted image: saving the file as a Photoshop document (ending in .psd), which saves each layer so that everything can still be kept separate, or saving the file as a digital image file (ending in .jpg), which compresses all of the layers together into one image. We also learned how to save a digital image file so that it is of print-quality resolution, which means having at least 300 dpi (dots per inch) / ppi (pixels per inch).
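As a quick worked example (my own numbers, not from the workshop): the pixel dimensions you need are just the print dimensions in inches multiplied by the resolution, so a standard 4 × 6 inch print at 300 ppi calls for an image of at least 1200 × 1800 pixels (4 × 300 by 6 × 300). Anything smaller would have to be enlarged, sacrificing sharpness.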

Although I had not used Photoshop previously, I had enough prior knowledge to quickly grasp the topics covered, so a beginner class may not have been the best fit for me. However, for those who are looking to learn the basics and do not know where to start, I think this workshop was perfect. It was enough information to get a handle on things without being overwhelming.

Introduction to the Command Line Workshop

At the end of last week’s class I mentioned that I wanted to do a text analysis project analyzing a large collection of syllabi. Zack asked if I knew how to use the command line, since it could be a good approach. 

I didn’t have any experience, but as luck would have it I was able to attend the ITP Skills Lab “Introduction to the Command Line” on Monday, 10/28/2019. The course was attended by a diverse mix of students: from doctoral programs and master’s programs; those getting the ITP certificate and those who are not; and humanities, education, and science students. It was interesting to hear how very different researchers intended to use the command line.


These are my key takeaways from the workshop, which I think will be helpful to those who did not attend.

1. The command line is a text interface for our computers, as opposed to a Graphical User Interface, or GUI, which is what we usually interact with — icons rather than text. It is a program that takes in commands, which the computer’s operating system then runs. I like to think of the command line as the “back end” of the computer, while the GUI is the “front end.”

2. We worked through four exercises using a set of files that we downloaded to our desktops (attached here for others to practice with) to understand how to move around directories, or folders; create new directories; edit existing files; rename files; create new files using Nano, a text editor; move files to different directories; and check the directory to view updates (a sketch of such a session appears after this list).

3. We also learned how to do some text analysis, including how to find counts (word, line, and character) and how to use wildcard characters; these commands appear at the end of the sketch below. I have some experience with SQL, and with that background this was the easiest section for me to understand. In addition, some of the special characters, like pipes (|), are used in cataloging systems, so I had experience with those as well.
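To make those takeaways concrete, here is a minimal sketch of the kind of session we worked through (the directory and file names are my own placeholders, not the exact ones from the workshop):

    pwd                                  # print which directory you are currently in
    cd Desktop/shell-practice            # move into the downloaded practice folder
    ls                                   # list the files it contains
    mkdir notes                          # create a new directory called notes
    nano notes/ideas.txt                 # create and edit a new file in the Nano text editor
    mv notes/ideas.txt notes/draft.txt   # rename the file
    ls notes                             # check the directory to view the updates
    wc haiku.txt                         # line, word, and character counts for one file
    grep "not" haiku.txt                 # print every line of the file that contains "not"
    wc -l *.txt | sort -n                # the * wildcard grabs every .txt file; the pipe (|) sends the line counts to sort

The first block of commands maps onto takeaway 2, and the last three lines onto takeaway 3.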

Rather than list each of the commands and arguments that we learned in this post, I am attaching the resources that we were provided:

1. The workshop materials, with the exercises and steps taken to learn introductory command line.

2. Additional resources and cheatsheets that the instructor provided, including command line and wildcard guidelines among others.

The workshop was led by Ph.D. student Kathryn Mercier. This was her first workshop, and teaching the command line for the first time to novices is very hard! I know that I will eventually teach in some capacity, and it is always helpful to see what works and what doesn’t. For example, using colored post-its to gauge whether students are “getting it” is really helpful, but she didn’t always remind us to use the tool. Additionally, while her workshop material was really good and easy to follow, she often missed steps when trying to move away from her computer, and I ended up confused. I know this will change with experience, and once I realized that I could follow the website rather than relying entirely on her instruction, I was able to find a rhythm between reading, listening, and performing the tasks.

I really hope that there are more workshops and opportunities to spend more time working with the command line. I think it’s fun!

What I learned from the ITP Skills Lab Workshop

Yesterday I attended the ITP Skills Workshop, which took place in computer lab room 6418. The workshop was led by Ph.D. student Kathryn Mercier. The goal of the workshop was to give general computer users a more in-depth understanding of how their operating system interacts with commands in a shell, as opposed to a graphical user interface. Most users interact with their operating systems through the GUI (graphical user interface), which is the outermost layer. Users may give commands through the interface by clicking, dragging, and scrolling with their mouse, by pressing a combination of keys on their keyboard, by typing in a search bar, and now by speaking directly to their OS. Having gone through the workshop, however, I now have a better understanding of how to use my computer’s command line interface to accomplish the same goals. The command line interface is essentially a backdoor for telling your computer what to do without interacting directly with the objects you want to work with. You don’t have to click on a file to delete it. You don’t have to open a document to find out how many words it has.
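For example (the file names here are hypothetical):

    rm old-notes.txt    # delete a file without clicking on it
    wc -w essay.txt     # count the words in a document without opening it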

Over the course of two hours, we worked through a four-part exercise with a set of files we downloaded. We initially opened Terminal, which is the command line interface for macOS, and we located ourselves in the directory using the command [pwd]. [pwd] is a command that works on Unix-like operating systems, including macOS and Linux (and in Git Bash on Windows). It unfortunately does not work in the standard Windows command prompt, so I had to complete my work on a borrowed Mac laptop. We then used commands such as [cd directory-name], [cd ..], and [cd ~], all commands that help users change directory or get to the home directory. Once we are in the desired directory, we can use the command [ls] to list the files. We did a lot of work locating files using those commands, alternating with methods such as writing the file path directly: [cd Desktop/Directory/filename].

Once we understood how to move around the directories, we created new directories with commands such as [mkdir directory-name] and edited existing files. We used the text editor Nano to write a file, then repeatedly used [ls] to list the updated directory and check that the file had been created and saved. We used the command [mv] to rename files and to move files from one directory to another, and [cp] to copy files.
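For instance, renaming, moving, and copying look like this (the file and directory names are just illustrative):

    mv draft.txt final.txt           # rename draft.txt to final.txt
    mv final.txt notes/              # move the file into the notes directory
    cp notes/final.txt backup.txt    # make a copy of it named backup.txt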

We did a bit of text analysis using the [cat] and [grep] commands and arguments such as [-w] to match whole words, [-n] to show line numbers, [-i] to ignore case, and [*] as a wildcard ([-l], used with the word-count command below, counts lines). We ended up writing lines such as

[grep -wn “The” haiku.txt]. This command returns all lines containing the word “The”, along with their line numbers.

Or [ls p*.pdb], which lists all .pdb files whose names start with the letter p.

We can do even more analysis by getting the word count for each file using the command [wc], which also returns the number of lines and the number of characters, laid out much like a dataframe. We can save that output as a separate file, and we can arrange our numerical data in order using the [sort] command ([sort -n] sorts numerically, least to greatest; adding [-r] reverses it to greatest to least). Just like with a dataframe, we can print the head or the tail using the [head] and [tail] commands.
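Putting those pieces together, the analysis looked something like this (the .pdb files were part of the practice set; the output file name is my own):

    wc *.pdb                     # lines, words, and characters for each .pdb file
    wc -l *.pdb > lengths.txt    # save just the line counts to a separate file
    sort -n lengths.txt          # sort the counts numerically, least to greatest
    sort -rn lengths.txt         # add -r to reverse the order, greatest to least
    head -n 3 lengths.txt        # print the first three lines
    tail -n 1 lengths.txt        # print the last line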

Overall, this workshop was a great resource for me. Although I had learned similar concepts when I completed the Introduction to GitHub assignment in DataCamp, I felt a lot more comfortable going through this exercise; perhaps the prior knowledge gave me a boost. In any case, the in-person instruction was helpful, and I will be using my command line shell a lot more moving forward.

This is an example of what I did in my Windows command shell as a demo.

Infrastructure in the news

Our readings for yesterday illuminated these news items for me:
https://www.huffpost.com/entry/alexandria-ocasio-cortez-mark-zuckerberg-political-ads_n_5db0afa6e4b0d5b789454272 , esp. from 1:30, and the response, “Congresswoman, I would say that we’re not the one assessing…” (resonating esp. with Posner’s “See No Evil”).

and https://www.technologyreview.com/s/614487/meet-americas-newest-military-giant-amazon/?utm_campaign=site_visitor.paid.acquisition&utm_source=facebook&utm_medium=tr_social&utm_content=keywee_paywall_retarget&kwp_0=1416678&fbclid=IwAR0Q6y3B6ZG100sBJS5QytU4SLBi-D09P3psgeWTa3WjWCQsXaqeUbmJ-6s
Last August, I was among a group of people arrested with an organization called JFREJ, at an event calling attention to Amazon’s role, with Palantir, in supplying surveillance tech to ICE. At a follow-up meeting, the question arose of whether to boycott Amazon. Reading this article led me to think about the likely demographic overlap between Americans who boycott Amazon (on policy grounds like these, rather than the consumption ones we discussed in class yesterday) and those who subscribe to the Washington Post. Bezos and Trump seem so similar and aligned in many ways, even as they perpetuate their public images as political and personal rivals, and even as Bezos’ newspaper tries to profess and promote an ethics of care. The invisibility of tech makes it hard to figure out infrastructures, as I think was implied in Star’s article, and makes it harder to identify, and align with, the good guys.

Omeka Workshop

On Tuesday I went to a Digital Initiatives workshop on using Omeka.net. I’ve only ever interacted with Omeka as a user and didn’t have any experience with the platform itself. The presentation slides are on the Digital Initiatives website.

Omeka is a content management system (CMS) and publishing platform that is used by many archives, historical societies, and libraries to build digital exhibits and small collections of objects. Omeka focuses more on metadata than WordPress does, so it’s a good option for collecting larger amounts of data while still being able to organize and present themes and narratives. The presenter went over considerations that should be thought through before choosing a CMS for a project, such as:

  • metadata standards – what metadata standards do you want to use when importing your digital objects? Omeka defaults to Dublin Core, but this can be customized
  • file formats – what file formats will be included? There are standards for file formats that you should think about, and this will also help you manage your storage, e.g. a TIFF will require more storage than a JPG
  • information architecture – how do you envision the accessibility and discoverability of your project?
  • rights and permissions – do you have the rights or permission to use all the objects that will be used in your project?
  • sustainability – do you have the time to manage the project and update it when there are new versions of files available, or to check compatibility with new media?

The next part of the workshop was going through our test site to look at the different ways Omeka can be customized (we didn’t use Omeka S because of the cost, and the extra features weren’t relevant for an introductory workshop), adding individual items to the site, and creating a collection. Collections are made up of items that are specifically curated to express a theme or narrative. I really liked a comparison the presenter made: the items held in Omeka are the archive, while the collections are like pulling items out of the archive for a museum display.

Overall, the workshop was pretty easy to follow and I found Omeka to be quite accessible. I’ve used more complex CMSs tailored specifically for archives, so a lot of the interface looked familiar. There was mention of an advanced Omeka workshop in the spring that will focus on creating exhibits and a bit on sustainability. Exhibits were the part of Omeka I was particularly interested in, so I’m looking forward to that.

Haiti’s Historical Erasure: A Reflection

(I wanted to contribute my thoughts on Wednesday’s class since I missed the discussion.)

“Haiti at the Digital Crossroads” is a richly layered examination of the modern challenges of archival work in the digital humanities. The author, Marlene Daut, places 19th-century Haitian historical narratives at the center of her argument and uses the summoning of Papa Legba, the gatekeeper of the archives, as an overture to one of the most traditional epistemological frameworks for Haitian scholars: Vodou.

The text does not go deeply into the revolutionary history or the emblematic ‘image problem’ Haiti faces, but it resonates in significant ways. For many people outside of Haiti, this piece is their introduction to figures such as Toussaint Louverture, Jean-Jacques Dessalines, and Henri Christophe as more than honorable mentions in a discussion about archives and history. For the better part of two centuries, the Haitian Revolution has been a footnote in 19th-century discourse, only ever brought up to trace the modern political instability in Haiti in a direct and continuous line of violence back to the revolution of 1804, or to pontificate about the ‘lack of progress’ that has been achieved since. Daut’s text is conscious of those facts and still carefully avoids over-explaining the importance of the revolution and its cascading effects for black self-determination. However, the context is clear. The Haitian Revolution has never ceased to be a question mark to the powers that be, never mind the short-lived men who accomplished it. So why would these men, or the revolution they waged, be highlighted in any history books?

Vodou As an Epistemological Framework

The use of Vodou as an epistemological framework, one that creates alternative paths between the world of the living and that of the dead, is a useful approach for archival work that seeks to understand a history often preserved not in text but in the memory of the dead we now wish to study. Vodou as a religious philosophy is irreconcilable with the western religious traditions that inform western epistemologies. Unlike Christians, who devote their earthly existence to the eventuality of eternal life, vodouissants have a sacred relationship with death and spend their entire lives preparing for this important transition by honoring a relationship with their departed ancestors through ritual practice. Accessing an archive through Vodou means understanding that the dead are themselves a source of knowledge. One must acquire a profound understanding of how the dead communicate with the living and how the living can call out to the dead, not just by looking at archives but through other phenomenological pathways, such as the summoning of the Lwa Papa Legba.

Erasure and Inaccessibility in The Archives

In the context of a republic born out of a colonial history of slavery and, to a large degree, controlled by the interests of American imperialism since the 19th century, there are significant challenges with the archives, the foremost being erasure and inaccessibility.

Haitians, much like American descendants of slaves, live with the trauma of ritual erasure, not just in the archives of text and artifacts but in commemorative and historical spaces. The positive promotion of slaveholders in our public commemorative spaces, intentionally divorced from the memory of slavery, is an act of historical erasure, and a moment of ritual erasure for the descendants of slaves every time they are forced to endure the denial of their history in their own public spaces. I once had such a moment myself when I visited historical places in France for the first time. I remember walking through the Hall of Mirrors at Versailles and experiencing a moment of ritual erasure. Seeing the gluttonous display of wealth made me sick to my stomach, understanding that when Louis XIV through Louis XVI built this palace and its grounds, it was on the backs of slaves in St. Domingue working the sugar cane plantations and dying by the hundreds doing so. The erasure of my ancestors was in plain sight, yet no other tourist around me seemed to have a clue about the ugly history that yielded these gaudy, jewel-encrusted halls. It is much like what Daut reveals about France’s intentional erasure of Haiti from its history in the rejection of Nemours’ Histoire Militaire de la Guerre d’Indépendance de Saint-Domingue, when “…the French government did not think these materials actually pertained to France.”

For digital humanists to address erasure in historical narratives, they must rethink how they approach the archives and be willing to find pathways outside of them. Daut points out that one of the prongs of the erasure problem is the fact that the Haitian people have not been in charge of their narrative, and that the sources that have traditionally spoken for them have often come from non-Haitian spaces. Digital humanists must look at the archives differently, center Haitian narratives from Haitian spaces, and invest in the work of Haitian scholars. For example, the Revue de la Société Haïtienne d’Histoire, de Géographie et de Géologie is a Haitian journal that has been regularly published since 1925, yet it is rarely used as an authoritative source outside of Haiti. The designation of what is and isn’t an authoritative source is an important aspect of how Haiti’s erasure persists in western epistemologies. Many times in the text, scholars point out that Haiti doesn’t have a complete history written by Haitian historians, implying that a written history is more authoritative than one uniquely preserved through Vodou and other traditional epistemologies, and falsely leading to the conclusion that Haiti has a poor record of its history.

It is understandable that, for the purposes of archival work, access to material history such as texts and artifacts is important for the construction of the historical narrative of any country. The lack of access to Haiti’s material history is an archival problem that Haitian humanists must work together to solve, in the spirit of Jacques Roumain’s work. In Haiti there is an idea of collaborative togetherness called konbit that we love to preach but rarely practice, and it is the responsibility of Haitian scholars to actualize this idea in the work of rehabilitating Haiti’s historical narrative.

Toussaint Louverture, Haiti’s founding father, who died in captivity at Fort-de-Joux, France, said this as he was captured, and I think it is apt to repeat here in the context of Haiti’s “bad press,” as Daut puts it.

 « En me renversant, ils n’ont abattu que le tronc de l’arbre de la liberté des noirs. Il repoussera par ses racines parce qu’elles sont profondes et nombreuses. » Toussaint Louverture

Translation…

“In overthrowing me, you have done no more than cut down the trunk of the tree of black liberty. It will spring back from the roots, for they are numerous and deep.” Toussaint Louverture

Williamsburg in Monochrome: A Photographic Map

Via CartoDB — Williamsburg in Monochrome: A Photographic Map

I. The premise for my mapping project draws its inspiration from our running class dialogue about the complicated ways in which we must all negotiate our subjectivity when leveraging digital software and tools to design and build maps. So, in approaching the praxis mapping assignment, I found that my core aim was to wrestle with the interplay between subject/object by integrating cartographic methods with monochromatic photography, in turn juxtaposing the overhead vantage point of traditional cartography with the first-person standpoint and embeddedness of photography. Intending to embrace a more reflexive and intimate approach to mapping, in other words, I wanted to challenge traditional cartography by considering the means by which a series of photographs could invite viewers to craft their own personal visual narrative about a geographic space. While I feel as though my project involves more aesthetic ways of knowing than the propositional or analytic epistemologies of traditional mapping methods, I still want to recognize the fact that this approach nevertheless depends on those same cartographic conventions; otherwise I wouldn’t have a prearranged map of Williamsburg on which to plot each of my black and white photos. I also want to note that while I cannot wholly unlearn the biases or inclinations of my eye as an amateur photographer, I did make a self-conscious effort to capture a wide array of material, ranging from traffic lights and church spires, to convenience stores and ripped fliers, to flagpoles and local graffiti. Taking the urban landscape of Williamsburg as my priority, and with late-stage gentrification as a key thematic focus in my daily routes, I ultimately elected not to embed any photographs of people, if only because I did not want to exploit their likeness for the sake of my project or its associated narratives. (Admittedly, I am also not too skilled at portrait photography.)

II. With the vast majority of us having combed through our fair share of Google Street View, I think I can speak for most people when I say that these photos are decidedly austere in their ~25,500TB attempt at rendering an “immersive geography” of the world, which is to say, the industrial world. To some extent, given the parameters of Google’s on-the-ground approach to cartography, this spare style is understandable and even somewhat expected, but I nonetheless feel as though it is important to note that the Lovecraftian deity of Google did not shoot these photographs wholesale; rather, these interactive VR panoramas are only possible due to a process called image stitching, in which computer software quilts together an overlapping array of adjacent photographic images. Part of the inspiration for my map plays on this Big Data concept of global mastery (i.e., capturing and showcasing a three-dimensional rendering of our local “street view” experience of the world) by photographing and mapping Williamsburg through the subjective vision of a single digital camera.

III. To offer a visual overview of my photographic map of Williamsburg, built with CartoDB, I’ve embedded three annotated screenshots below in order to demonstrate the visual identity of my map as it stands now.

1. So far, I have plotted a total of 19 locations (and counting), each of which is represented by a black dot. I am hoping to expand the circumference of these plotted locations in order to better represent the eclectic, yet gentrified, array of urban terrain that Williamsburg has to offer.
2. Hovering over one of the plotted locations reveals basic information about the title of my photograph, as well as its latitude and longitude (a sample row of the underlying data appears after this list).
3. Clicking one of the plotted locations will then reveal its associated black & white photograph, accompanied by a title.
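For anyone curious about what feeds the map: each plotted location ultimately comes down to a row of tabular data uploaded to CartoDB, along the lines of the following (the column names and values are my own illustrative sketch, not the actual dataset):

    title,latitude,longitude
    "Ripped flier, Bedford Ave",40.7171,-73.9565
    "Church spire at dusk",40.7139,-73.9529

CartoDB reads the latitude and longitude columns to place each black dot, and the title column supplies the hover and click-through text.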

IV. I see this project as a continual work-in-progress, one without an end in sight. In my mind, it is iterative, open to any and all collaborative efforts, given over to a running state of flux and revision. As we seem to recognize as a class, maps are never subject to completion, because the mere concept of “completion” in cartography is a representative fiction, or, better yet, an ever-persistent fantasy of corporate, colonial, and/or national hegemony. As far as my efforts go, in other words, I see this map as a draft in its beginning, ever in process and never quite complete.

Little Syria, New York

I used this praxis as an exploratory step in what I hope will become a larger project, and potentially my thesis/capstone work. Recently I had the opportunity to walk the area of downtown Manhattan that was known during the late 19th and early 20th centuries, roughly 1880-1940, as Little Syria, with GC historian and Washington Street Advocacy Group president Todd Fine and music historian Ian Nagoski. I have not lived very long in New York City, so going in, my knowledge of the history of this area and the people who had lived there was rudimentary at best. That said, I wanted to learn more about this group of immigrants from the Eastern Mediterranean and their role in the history of New York City, especially as the perception of immigrants from this part of the world remains so highly contentious. I have a background in Islamic/Middle East Studies and Arabic language and have been looking for a bridge between my current study of the digital humanities and my previous work on the Middle East. I think this project may just be that bridge.

“I believe that you have inherited from your forefathers an ancient dream, a song, a prophecy, which you can proudly lay as a gift of gratitude upon the lap of America.”

– Khalil Gibran, I Believe in You (to the Americans of Syrian origin)
Ottoman map of Greater Syria circa 1803

The walking experience of Little Syria was an incredible dive into the physical history of the area, which ran along Washington Street from just south of the 9/11 Memorial to Battery Park, but it was also an auditory exploration of recordings created by its residents, provided in the form of a playlist by Ian Nagoski. The name Little Syria can be a little misleading, as it refers to the region of Greater Syria, which in the late 19th/early 20th century included parts of Iraq, Israel, Jordan, Lebanon, Palestine, and Syria; the name was given to the area because that region was the origin point of the majority of the population who lived there. Most of the buildings in Little Syria were demolished when the Brooklyn-Battery Tunnel was built in the 1940s, with just a few buildings remaining, including the St. George Chapel (the white building on the right in the picture below), which was designated a New York City landmark in 2009 and is now home to the St. George Tavern.

St. George Tavern, Little Syria, New York

This walking experience got me thinking about the past and how I might explore the intersections of the history of Little Syria, the history of New York City, the experience of the immigrant’s “American Dream”, and our relationship with immigrants, all within a hauntological (“always-already absent present”) framework (simple, right?!), and about whether I could use some sort of map to do it. I knew that I would not have time to build anything even close to what I envision for a final project, but as with any project, you have to start somewhere.

I have experience mapping with many of the applications that we read about in “Finding the Right Tools for Mapping,” and I was not sure which would be best for this project, but first I needed some data. I must admit that I fell down many of the same “rabbit holes” that I have fallen down in the past, including spending far too much time looking for data resources, learning there were none, and then having to find and build my own datasets, which (I knew from past experience) requires a ridiculous amount of time, though I do seem to always underestimate just how long it takes.

I began by playing around a bit in Mapbox with a very small dataset that I built of the locations of Syrian periodicals in New York, based on information from The Syrian American Directory Almanac (1930). It turned out to be nothing particularly exciting, so I decided to build something in Storymaps, which was not particularly exciting either.

In the end, I am still not sure what direction this project is going but it was an interesting exploration with mapping applications.