For my Electronic Rituals and Fortune Telling class final project, I wanted to create a ritual/moment that would help me connect on a daily basis with things that go beyond my daily routine. Since I wasn’t sure what exactly I wanted to connect with, or how, I started to research ritualistic practices within my family and my religious background for inspiration. In this process, I came across the daily Jewish morning blessing: Birchot Hashachar.
Although it is practiced mainly by Orthodox Jews and not commonly used by everyone who follows Judaism, Birchot Hashachar contains an excerpt that makes me feel very uncomfortable as a Jewish woman: in the Orthodox daily prayer service, men thank ‘God who has not made me a woman‘, whereas the parallel blessing for women thanks ‘God for making me according to his will‘.
Blessed be he who has not made me a woman
בָּרוּךְ אַתָּה יְיָ אֶלֹהֵֽינוּ מֶֽלֶךְ הָעוֹלָם, שֶׁלֹּא עָשַֽׂנִי אִשָּׁה
It is not new that, like the other major religions of the Western world, Judaism fosters gender hierarchy and traditional gender ideologies. The Jewish tradition defines separate spheres for men and women, with men occupying the public sphere and women limited to the private sphere. Accordingly, women are exempted from many of the religious rituals that could undermine their devotion to domestic responsibilities. Still, I was surprised by such a straightforward gender-discriminatory statement being recited daily by so many men.
In response to that, I created for I am a woman, a Chrome Extension that gives me a daily reminder of the existence of this discriminatory blessing along with a small dose of inspiration from people who defied/defy this status quo.
Every day at 10 AM, this Chrome Extension pops up a window in my browser showing information about a different extraordinary person who helped/helps define, establish, and achieve political, economic, personal, and social equality between genders.
The popup randomly chooses one inspirational person to display, selected from a JSON file where I manually added the data. This was the popup I received today.
This was the first Chrome Extension I created. At first, the idea for this project was to create a webpage. But as I started to research and user test, people often asked me, “when would I visit this page??”. I couldn’t come up with a good answer, and that’s when developing it as something that pops up on your screen at a specific time seemed more appropriate.
In the background.js file of the Chrome Extension, I added code that checks the time and displays the popup with the content once it’s 10 o’clock.
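A minimal sketch of that timing logic, assuming a chrome.alarms-based approach (my actual background.js may check the time differently; the popup file name is a placeholder):

```javascript
// Milliseconds from `now` until the next 10:00 AM, local time.
function msUntilTen(now) {
  const next = new Date(now);
  next.setHours(10, 0, 0, 0);
  if (next <= now) next.setDate(next.getDate() + 1); // already past 10 AM today
  return next - now;
}

// In the extension, schedule a repeating daily alarm and open the popup.
if (typeof chrome !== 'undefined' && chrome.alarms) {
  chrome.alarms.create('dailyReminder', {
    when: Date.now() + msUntilTen(new Date()),
    periodInMinutes: 24 * 60, // repeat every day
  });
  chrome.alarms.onAlarm.addListener(() => {
    chrome.windows.create({ url: 'popup.html', type: 'popup', width: 400, height: 500 });
  });
}
```

Using chrome.alarms instead of polling the clock keeps the extension from waking up every minute.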
You can check the code on my github.
When I moved to New York, besides becoming a student again, a newbie in town, a foreigner and so on… I became a “Latina”. And the truth is that I’ve never thought of myself as a Latina. I am Brazilian. My first language is Portuguese. I do not have any Hispanic origin in my family. But yes, I was born in Latin America.
Accordingly, while it is uncomfortable to fill out census reports and forms here in the United States, it is still interesting to understand how my identity as a human being is redefined upon moving to a foreign country.
My goal is to play with the idea of how culture, bias, and identity are created and defined in American society and somehow transform that into a map. One reference that I really like is Alfredo Jaar’s A Logo for America, a piece that doesn’t explicitly talk about those concepts but carries them in its statement.
I’m not sure how I would develop that but I’ll look into more references and update it soon.
For this week’s readings we went over the article What Would Feminist Data Visualization Look Like? and the chapter on Representation and the Necessity of Interpretation from Laura Kurgan. Both readings invite us to rethink the way data is shown in maps: it is presented for a reason and a purpose, so there will always be a bias, and therefore a relationship of power, involved.
The first article touches on how feminist standpoint theory would approach data visualization, noting that all knowledge is socially situated and that the perspectives of oppressed groups, including women and minorities, are systematically excluded from “general” knowledge. It then suggests interesting approaches creators should consider when trying to develop “unbiased” maps according to feminist data viz, such as developing new ways to represent uncertainty, inventing new ways to reference the material economy behind the data, and creating ways to make dissent possible, so we can find ways back to the material that originated the visualization.
The second reading starts by breaking down the perception that satellite image analysis is somehow neutral and can be taken at face value as a statement. It cites the use of satellite images to justify the invasion of Iraq to prove this point. Accordingly, it states that there is no such thing as raw data and suggests that working with data is a kind of para-empiricism.
I believe that both readings prove their points. I really liked the suggestions in the feminist article about new ways to present data that would clarify the choices and make the “bias” explicit. What if we visually problematized the provenance of the data? The interests behind the data? The stakeholders in the data? I believe that it is part of the visualization experience to highlight some aspects over others, according to the map’s functionality. Thus it is impossible not to create somewhat biased maps. Accordingly, as creators, it is our responsibility to be aware of those choices and keep them explicit to the public.
So for this assignment we were asked to play with our own .csv files to create a data visualization on a map using Mapbox. Hadar and I decided to compare two datasets: the Gini index of each country and its deaths by firearms.
The Gini index, or Gini coefficient, is a measure of statistical dispersion intended to represent the income or wealth distribution of a nation’s residents, and is the most commonly used measurement of inequality. Deaths by firearms is the number of deaths caused by the use of firearms in a year in a specific country. By collecting these data I aimed to analyze how income inequality relates to the rate of violence.
The blue circles represent the Gini index (the bigger the Gini index, the greater the social inequality), and the red circles represent the deaths by firearms.
Looking at this map, it is not clear whether there is a connection between the two datasets. I believe we have to map the values better and show the actual numbers when the user hovers over a circle. Also, some circles are so big that they cover other countries, which makes them hard to identify.
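One way to map the values better is to scale each circle’s area, rather than its radius, to the data value, so large values don’t swallow neighboring countries. A quick sketch (the scaling constants and function name are just examples, not the project’s actual code):

```javascript
// Scale circle *area* to the data value: area ∝ value, so radius ∝ sqrt(value).
// This keeps big values from visually dominating small ones.
function circleRadius(value, maxValue, maxRadiusPx) {
  return Math.sqrt(value / maxValue) * maxRadiusPx;
}

// e.g. Gini indices roughly range from 25 to 65; cap radii at 30 px.
const radius = circleRadius(40, 65, 30);
```

The resulting radius could then be fed into the Mapbox circle layer’s paint properties.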
Besides, the datasets we used to collect this data are not very reliable. First, they didn’t include all countries, and second, the data does not all come from the same year.
Still, even though this map needs iteration, this was a very interesting experiment.
As Social Chairs, Dan Oved and I created a simple website for the ITProm, along with its banner. There, ITP students can find the main information about the event and its contests.
For the site, we used Hugo, a platform that I loved right away! It’s a really easy way to create a webpage. The website is now live at www.itprom.space.
You can check the code in my GitHub.
As mentioned in the last post, for my Magic Windows final I continued developing a project that recreates Washington Square Park in 1968, telling the story of two characters and visually placing archival images in space through AR while their monologues play.
The monologues and their respective visual scenes are created using geolocation, and the final scene, when the two characters meet, is triggered by image targeting, since the meeting point is set under the park’s arch.
Therefore, I worked with Mapbox to set the geolocation in the park and with ARKit 1.5 for the image targeting with the app.
I used Mapbox and set two points, one on each far side of the park. For each side I placed a different GameObject to SetActive, and I created a script that checks the device location and triggers the GameObject related to the point the user is closest to. After some weeks of iterating, meeting with Mapbox, and some work, it worked great!
(As I move closer to the right side of the park, the red square is triggered; if I move to the left side, the green ball is triggered.)
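The heart of that script is just a nearest-point check. Here is the logic sketched in JavaScript for readability (the actual project uses a Unity C# script with Mapbox’s location provider, and the coordinates below are hypothetical):

```javascript
// Squared "distance" between two lat/lon points using an equirectangular
// approximation -- plenty accurate at the scale of a park.
function distSq(a, b) {
  const dLat = a.lat - b.lat;
  const dLon = (a.lon - b.lon) * Math.cos((a.lat * Math.PI) / 180);
  return dLat * dLat + dLon * dLon;
}

// Return the anchor point the user is closest to.
function nearestAnchor(user, anchors) {
  return anchors.reduce((best, a) => (distSq(user, a) < distSq(user, best) ? a : best));
}

// Hypothetical anchors for the two sides of the park:
const anchors = [
  { name: 'character1', lat: 40.7308, lon: -73.9990 }, // west side
  { name: 'character2', lat: 40.7305, lon: -73.9954 }, // east side
];
// In Unity, SetActive(true) is then called on the matching GameObject.
```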
Here follows the *messy* script I’m using:
I also added the image targeting using the new ARKit 1.5, which was very exciting since it works great!!! Check below.
For now I’ve been working mostly with placeholders, so as the device recognizes the arch, a blue capsule appears.
At that point, Mapbox and ARKit 1.5 were living in different projects, so the next technical step was to have the three scenes (character 1, character 2, and the encounter) inside the same project.
This took me a bit longer than expected, but after closely analyzing the files and replacing the older version of ARKit with 1.5 inside the Mapbox example, it finally happened.
The next step, then, is to add the actual images to the scene in space in a – nice – UX way.
So that’s what I’ve been playing with lately, because it’s actually kind of hard.
One idea was to set the images active once planes were detected. But as you can see below, this was not the best approach. The images appear too close, and since you don’t have much control over how they show up, I don’t believe it would work in this project’s scenario.
So I decided to add GameObjects set as colliders, placed randomly in space. For now they have materials, so I can see them and play with their arrangement.
In the following video (made during the snowstorm), I changed the locations so I could test it on ITP’s floor; all those features (geolocation, image targeting, colliding objects) are working.
Below follows the script I made to generate the random colliders and add the images in array sequence.
The “colliders” way of spawning images in the park worked well, but it got harder once the colliders were transparent. Sometimes they would cluster in a specific region, and that was not creating the experience I wanted.
So I developed another script that spawns the images according to their distance from the camera, and it worked well. It was a bit of a hassle having to customize all the images to be at the right height and size, but it’s working well.
Also, for better user testing, I added a “demo UX” with the introduction audio.
Below you can check the demo:
For this week’s assignment we had to create an electronically generated -omancy: a divination method that could forecast the future based on an object, a user interaction, or a randomly selected event. So I created a tool that, from your uploaded ancestors’ old black-and-white pictures, shows you a specific prediction based on the data collected from the image. I called it ANCESTOR PHOTOMANCY.
Divination based on astrology assumes that the stars interfered with the moment you were born, creating specific traits that shape who you are and your destiny. Accordingly, I started to think about tangible aspects of nature and history that definitely changed your life and are responsible for our very own existence – and, consequently, our future.
It is indeed crazy, even though very obvious, to stop and think that we are here, being who we are, because some people in the past lived the way they did – people we never got to know, whose stories and personalities we know very little about, and, of course, if you go far enough back, people we can’t even name, trace, or know the origins of.
So, I asked my mother in Brazil to scan and send me some pictures of my old relatives.
The aesthetics of those images are so beautiful: the posed way people appear, the clothes, the colors of the printing. It’s funny to think there is a bit of each of those near-strangers inside me, and somehow a bit of magic too. So, for this divination ritual I decided to play with aspects of the images, connect them to a Tarot reading API, and see what that could tell me about my future.
– Of course, this exercise has a playful approach, so I am not really trying to forecast my future, but to play with the concept and explore electronic divination experiments.
The easiest way to analyze images is through brightness, and with B&W images that’s very easy to do. I found an algorithm, inspired by a project made by a colleague, that gives me a number from 0 to 100 quantifying the brightness of an image. Once I have this number, I send it to the tarot reading API, which assigns a corresponding Tarot card and chooses a random “fortune_telling” string from the 3 options on that specific card.
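A rough sketch of that pipeline (the function names and the score-to-card mapping are my own assumptions; the colleague-inspired algorithm and the actual API call live in my code):

```javascript
// Average perceived brightness of an image, scored 0-100, from raw RGBA
// pixel data (e.g. the array returned by canvas getImageData().data).
function brightnessScore(pixels) {
  let sum = 0;
  for (let i = 0; i < pixels.length; i += 4) {
    // standard luminance weights for the R, G, B channels
    sum += 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
  }
  const pixelCount = pixels.length / 4;
  return Math.round((sum / pixelCount / 255) * 100);
}

// Map the 0-100 score onto one of the 78 tarot cards; the API then returns
// that card, and a random "fortune_telling" string is picked from it.
function cardIndex(score) {
  return Math.min(77, Math.floor((score / 100) * 78));
}
```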
Here you can check the code:
For now I uploaded the images directly into the folder in the code. A *must* for the next iteration would be adding a user input in the browser with the CTA “upload your ancestors’ B&W image here” so everyone can actually use it.
Below you can check what my ancestors said about my future in this experiment.
Angelo and Sara think that I should reconsider my decisions –
Francisco disagrees, and has a more positive view –
Pedro, Sara and Mauricio are telling me to watch out —
Brazil’s democracy is a broken one: corruption, undeclared lobbies, and a big misrepresentation of society. Accordingly, fundamental issues such as violence, poverty, and lack of education are rarely properly addressed. In this context, and with the emerging global phenomena of ‘societal polarization’ and fake news, Brazilians feel lost about which media sources to trust and, ultimately, about whom to vote for in the upcoming presidential elections this November.
Candidates Dashboard is the first prototype of a project that aims to organize and compare information about the current presidential pre-candidates (soon to be official candidates) so users can easily access and compare news from the main media sources. Further on, the idea is also to play with sentiment analysis and social media hashtags/trending topics.
For this first prototype, I worked mostly with the Google News API, fetching data for the Brazilian presidential pre-candidates. For now, the website is able to organize the information about each candidate by source and display it in vertical columns.
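The column-building step can be sketched like this (I’m assuming a newsapi.org-style response shape here; the query and API key are placeholders, and the function names are my own):

```javascript
// Group fetched articles into one column per media source.
function groupBySource(articles) {
  const columns = {};
  for (const article of articles) {
    const source = (article.source && article.source.name) || 'Unknown';
    (columns[source] = columns[source] || []).push(article);
  }
  return columns;
}

// Hypothetical fetch for one pre-candidate:
async function fetchCandidateNews(candidate) {
  const url =
    'https://newsapi.org/v2/everything?q=' +
    encodeURIComponent(candidate) +
    '&language=pt&apiKey=YOUR_KEY';
  const res = await fetch(url);
  const data = await res.json();
  return groupBySource(data.articles || []);
}
```

Each key of the returned object then becomes one vertical column on the page.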
Below you can check a previous version, which had some bugs and displayed the images repeatedly. I reorganized the code to get the outcome we have now, and I still need to add the other info, such as the image, URL, and description.
This is an ongoing project trying to mix content and data viz.
As next challenges, I want to develop a mock-up/user flow to work further with CSS+HTML and figure out how this would be displayed in a mobile version.
You can check the js code below in this page or download it from my GitHub.
For my Magic Windows final I want to keep developing a project I have been working on since the beginning of the semester as a fellow at NYC Media Lab and A+E Networks, creating an AR experience.
Our project focuses on 1968, a unique moment in the life of downtown Manhattan and a year that is said to have changed the world.
By creating a site-specific cinematic AR time-travel experience, we are able to transport users back to ’68 Washington Square Park, where they encounter those who waved protest signs in the same place, and often on behalf of the same values.
The idea is to tell the story of two characters, recreating their steps from 50 years ago and reaching a climax when they meet in the center of the park. Each character represents a different side of the park and shows their own perspective. When the users meet, around the arch and the fountain, a new scene is created upon this encounter.
The assets are being collected from archival footage and will be 2D still photos or animated GIFs.
Main interactions consist of working with: