Everyday Windows

Sexism, harassment, abuse… all of these have historically been regarded as personal issues, excluding them from public discussion and diverting attention from their status as systemic, sociopolitical problems. We want to show what happens behind the doors (or windows, in this case): what women, from our experience, go through, and how society as a whole contributes to perpetuating these issues.

This VR experience about experiences was created in three.js and rendered with the help of the WebVR API. It is hosted on a node.js server connected via WebSockets to an Arduino MKR1000.

This project was developed by Nicolas Pena-Escarpentier and me as our final for Intro to Physical Computing and Intro to Computational Media.

Here you can check the Github repository and the link (we will work on making a full web-only version soon). Below, I'll explain a bit about our process and the development of the project.

The Sketch

three.js

In order to have all the windows in a single sketch, we created an individual scene for each one and just change the index of the one to be rendered. As most of the rooms share the same components (floor, cylindrical wall, and images of the cutouts and windows), all the scenes are created within a for loop that adds their common elements, and a dedicated function adds the assets specific to each scene. The images are simply textures on planes, with transparency taken from the PNG files. Check the code below:

function createEnvironment(){
  for (let i = 0; i < 6; i++) {
    scenes[i].background = new THREE.Color( 0x555555 );
    createLights(i);
    createFloor(i);
    createRoom(i);
  }
  scene0();
  scene1();
  scene2();
  scene3();
  scene4();
  scene5();
}

function createLights(ind){
  let p_light = new THREE.PointLight(col[ind], 1.5, 1000, 2);
  p_light.position.set(0, 10, 0);
  scenes[ind].add( p_light );
}

function createFloor(ind){
  let floorGeo = new THREE.CylinderGeometry(roomSize*4, roomSize*4, 1, 24);
  let floorMat = new THREE.MeshLambertMaterial({
    color: 0x666666,
    emissive: 0x101010,
  });
  let planeF = new THREE.Mesh(floorGeo, floorMat);
  planeF.position.set(0, -roomSize/4, 0);
  scenes[ind].add(planeF);
}

function createRoom(ind){
  // planes w/ images
  let plGeo = new THREE.PlaneGeometry(roomSize, roomSize, 10, 10);

  // images
  let windowMat = new THREE.MeshBasicMaterial({
    map: loader.load("media/" + ind + "/window.png"),
    side: THREE.DoubleSide,
    transparent: true,
  });
  let personMat = new THREE.MeshBasicMaterial({
    map: loader.load("media/" + ind + "/main.gif"),
    side: THREE.DoubleSide,
    transparent: true,
  });
  for (let i = 0; i < 4; i++) {
    let windowPlane = new THREE.Mesh(plGeo, windowMat);
    let personPlane = new THREE.Mesh(plGeo, personMat);
    let rad = 10;
    let posX = rad * Math.sin(i*Math.PI/2);
    let posZ = rad * Math.cos(i*Math.PI/2);
    personPlane.position.set(posX*6, roomSize/4, posZ*6);
    personPlane.rotation.y = Math.PI/2 * Math.sin(i*Math.PI/2);
    scenes[ind].add(personPlane);
    windowPlane.position.set(posX*8, roomSize*.3, posZ*8);
    windowPlane.rotation.y = Math.PI/2 * Math.sin(i*Math.PI/2);
    scenes[ind].add(windowPlane);
  }

  // room walls
  let wallGeo = new THREE.CylinderGeometry(roomSize*5, roomSize*5, 250, 24, 20, true);
  let wallMat = new THREE.MeshLambertMaterial({
    color: 0xd0d0d0,
    side: THREE.DoubleSide,
  });
  let wall = new THREE.Mesh(wallGeo, wallMat);
  wall.position.set(0, 230, 0);
  scenes[ind].add(wall);
}

And this is how they look:

WebVR

Getting the sketch to display in VR was tricky. The implementation of WebVR has kept evolving, and a lot of the available information has changed drastically. We'd also like to thank Or Fleisher for helping us get started with WebVR.

We start by telling the renderer to enable VR, loading the VREffect package to create a separate render for each eye, and the VRControls package to incorporate the accelerometer rotations for correct camera control. It is also useful to install the WebVR API Emulation Chrome extension in order to test the sketch with the new controls.

renderer.vr.enabled = true;

effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

controls = new THREE.VRControls( camera );
controls.standing = true;
camera.position.y = controls.userHeight;
controls.update();

Then, we need to check whether there is an available VR display using the function navigator.getVRDisplays(). In this case, we default to the first (and most likely only) VR display. With this display, we can also use the WebVR library tool to automatically create the button that switches the page into VR.

// sets up the VR stage + button
function setupVRStage(){
  // get available displays
  navigator.getVRDisplays().then( function(displays){
    if(displays.length > 0) {
      vrDisplay = displays[0];
      // setup button
      vrButton = WEBVR.getButton( vrDisplay, renderer.domElement );
      document.getElementById('vr_button').appendChild( vrButton );
    } else {
      console.log("NO VR DISPLAYS PRESENT");
    }
    update();
  });
}

Now, the animation function is a tricky one, because it changes the rendering pipeline. Usually the browser is the one that requests a new animation frame when it is ready to display one, but in this case the VR display has to ask for it. Also, as we are using two different renderers (the normal one and the VREffect), we need to discriminate between the two states, which can be done with the vrDisplay.isPresenting parameter.

function animate(timestamp) {
  let delta = Math.min(timestamp - lastRenderTime, 500);
  lastRenderTime = timestamp;

  if(vrDisplay.isPresenting){ // VR rendering
    controls.update();
    effect.render(scenes[current], camera);
    vrDisplay.requestAnimationFrame(animate);
  } else { // browser rendering
    controls.update();
    renderer.render(scenes[current], camera);
    window.requestAnimationFrame(animate);
  }
}

It is also worth noting that we have to add the WebVR Polyfill package for everything to work outside Google Chrome (remember, this is a browser-based implementation!).
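As a rough sketch, following the setup shown in the webvr-polyfill README (the exact API has changed across versions), instantiating the polyfill before any WebVR call is all that is needed:

// assumes webvr-polyfill.js is included on the page before this runs;
// on browsers without native WebVR support, this makes
// navigator.getVRDisplays() return an emulated (Cardboard-style) display
var polyfill = new WebVRPolyfill();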

Node.js Server

To learn the basics of node and npm and how to set up a server, Daniel Shiffman's Twitter Bot Tutorial and this lynda.com course are an amazing start.

Thanks to these tutorials, setting up the server was easy, but the WebSocket implementation was rather difficult. We started with socket.io, but it adds extra layers that interfered with the Arduino connection. Thankfully, Tom Igoe referred me to his book Making Things Talk, where he implements this connection using the ws library on the server side. So, following one of his examples (all of them are on Github), we got it working perfectly.

// websocket setup
var WebSocket = require('ws').Server;

var wss = new WebSocket({ server: http });

wss.on('connection', function(ws_client){
  console.log("user connected");

  ws_client.on('message', function(msg){
    // check if the values are valid/useful
    // (note: NaN never equals anything, so use isNaN() to test it)
    var intComing = parseInt(msg);
    if(!isNaN(intComing) && intComing>=0 && intComing<=5){
      _scene = intComing;
      broadcast(_scene);
      console.log("change scene broadcast: " + _scene);
    }
  });
});

function broadcast(msg){
  wss.clients.forEach(function each(client) {
    client.send(msg);
  });
}
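For reference, here is a minimal sketch of what the browser side can look like (an illustration under our setup, not code from the repo): it connects back to the same host and switches the rendered scene whenever the server broadcasts an index.

// minimal browser-side socket (illustrative)
var socket = new WebSocket('ws://' + window.location.host);

socket.onmessage = function(event){
  var index = parseInt(event.data);
  // same validity check as the server: an integer between 0 and 5
  if(!isNaN(index) && index >= 0 && index <= 5){
    current = index; // `current` selects which scene is rendered
  }
};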

Another thing worth noting is that, to keep the application running on the DigitalOcean server, we used the [forever](https://www.npmjs.com/package/forever) package.

Arduino

For this project, we used an Arduino MKR1000, because we needed a way to communicate wirelessly with the phone (via a server, in this case) without resorting to a computer. In the beginning we tried using a Bluetooth module, but as the project is web-based, browser security measures do not allow easy access to Bluetooth (or other hardware components). It also turned out to be way harder than we initially thought it would be, while the WiFi communication was much easier.

Internet connection

Getting the Arduino to connect to the internet is pretty straightforward. Following this tutorial was all we needed.

The connection with the server was harder. After extensive web searches, we asked Tom Igoe, who recommended his book Making Things Talk, where he dedicates a whole chapter to this. So, following the book example and the ArduinoHttpClient library example, we got everything set up.

// headers for the MKR1000 WiFi connection + WebSocket client
#include <SPI.h>
#include <WiFi101.h>
#include <ArduinoHttpClient.h>

WiFiClient wifiClient;
WebSocketClient webSocket = WebSocketClient(wifiClient, server, port);

void connectToServer() {
  Serial.println("attempting to connect to server");
  webSocket.begin();

  Serial.println(webSocket.connected());
  if (!webSocket.connected()) {
    Serial.println("failed to connect to server");
  } else {
    Serial.println("connected to server");
  }
}

void sendWindow(int num){
  // check wifi connection
  if(WiFi.status() != WL_CONNECTED){
    connectWiFi();
  }
  // check server connection
  while(!webSocket.connected()){
    connectToServer();
  }
  // send the message!
  webSocket.beginMessage(TYPE_TEXT); // message type: text
  webSocket.print(num);              // send the value
  webSocket.endMessage();            // close message
  Serial.print("window ");
  Serial.print(num);
  Serial.println(" sent");
}

Interface components

In the beginning, we tried using a capacitive touch sensor (MPR121) and covered the borders of the windows with capacitive fabric to make it work. The code was easily done by following the Adafruit MPR121 tutorial plus a quick code fix. Sadly, user testing made us realize this was not the best choice: due to poor instructions, people would often try to touch the window itself rather than the border. So we opted for the not-as-fun, more-conventional approach and got LED momentary pushbuttons.

In order to light up the rooms with colors that match the lights in the sketch, we planned to use RGB LEDs, but they posed a bunch of difficulties: they need a lot of pins (3 per LED * 6 windows = 18 pins!!!), or a lot of math if we were to hard-code the colors (to use only one pin per window). Using NeoPixels was a much better idea, and amazingly simple to code! With the Adafruit NeoPixel library, it is as easy as giving it the pin number, how many pixels there are, and the RGB values for each. Et voilà! That way, everything stays in the code, in case we want to change anything.

void windowLight(int num){
  // window variables
  int r = window_colors[num][0];
  int g = window_colors[num][1];
  int b = window_colors[num][2];
  // window pixels
  for(int i=0; i<3; i++){
    int index = window_pixels[num][i];
    pixels.setPixelColor(index, pixels.Color(r, g, b));
    pixels.show();
  }
}

Resources

Here is a list of (previously unreferenced) web resources from which we took some code or help to implement everything:

Serial communication with Arduino + P5.js

So far, I have been developing projects for Computational Media and Physical Computing separately. What I created in P5.js was developed in and stayed in the browser, on the computer screen. Likewise, the inputs and outputs of my Arduino projects, such as the LEDs, sensors, and buttons on the breadboard, interacted physically, outside any screen. But what if I want to make a physical button that triggers something in my P5.js sketch? How do I make it work?

For two devices to communicate, whether they are desktop computers, microcontrollers, or any other form of computer, we need a method of communication and an agreed-upon language. Serial communication is one of the most common forms of communication between computers, so this is the method I will be using to make a potentiometer and a button interact with the following sketch.

Check the code here.

As you can see, this sketch is very similar to the one in my previous Computational Media post; indeed, I used its code as a base. I replaced the images with a PNG cutout taken from a photo of my eye. I also took a photo of my eye closed and added an interaction so that when the mouse is clicked, the crazy random eyes can blink at you ;).
You can also check this version, where I replaced the images with PNGs of my mouth, so when the mouse is clicked the sketch sends you a kiss :*.

Check the code here.

Anyway, as you noticed, I played enough with the p5.js sketch, so now it is time to integrate it with my Arduino code. First, I decided to use a potentiometer, instead of the slider, to rotate the image. Second, I would replace the mouse click for the blinking with a physical button.

As a total beginner in serial communication, I followed this tutorial and watched the videos by Professors Tom Igoe and Jeff Feddersen.

With that in mind, I created my circuit.

I added the code to my Arduino, keeping in mind that the way the data was sent determined how my p5.js serial code would read it, understand it, and then be able to make it interact properly with my sketch. Since I was using two inputs (the potentiometer and the button), I needed my Arduino code to send the data in a way that p5.js would understand as two different variables. I did this by separating the two numbers with a "," (comma). You can check my Arduino code below.

Then, also following the tutorial linked before, I downloaded the p5.serialcontrol app and added the required information to establish serial communication between my Arduino circuit and my code.
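To make the comma protocol concrete, here is a minimal sketch of the p5.js side, following the pattern from those tutorials (the port name and variable names are illustrative, not my exact code):

let serial;               // p5.SerialPort instance from p5.serialport
let potValue = 0;         // rotation value from the potentiometer
let buttonPressed = false;

function setup(){
  createCanvas(600, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411'); // your port name here
  serial.on('data', gotData);
}

function gotData(){
  let line = serial.readLine();        // e.g. "512,1"
  if(!line) return;
  let values = split(trim(line), ','); // the comma splits the two inputs
  if(values.length === 2){
    potValue = Number(values[0]);
    buttonPressed = Number(values[1]) === 1;
  }
}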

And it worked!!!

Check the Serial P5.js code here.


Trump + KimJong P5.js Collage

Playing with P5.js has been a bit hard for me, since most P5.js sketches draw geometrical shapes. You can create beautiful and interesting forms and animations with those; still, it is hard for me to have fun with that type of graphic representation. That's why I got super excited when we started adding images to our work. Physically, I enjoy making collages, so why not bring them, in a coded, optimized version, to my P5.js drawings in the browser?

What I like most about collages is taking these ready-made, mainstream images of things in society I would like to change, and playing with them: the ideal of the female body image, consumerism, politics… or, in this case, the international agenda. Collage is this little voodoo of mine. And I love that collages in p5.js invite the user to play too, with mouseovers, clicks, reloading the page, and so on. I loved making these funny sketches, taking simple .png files of Trump's and Kim Jong's faces found through a simple Google search, and enabling users to interact with their faces.

Check the code here.

To create this sketch, I played with loops, objects, and arrays. Notice that every time you reload the browser, the way the faces are displayed changes, creating a different drawing.
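Here is a minimal sketch of that approach (the file names are hypothetical): the random placements are chosen once in setup(), which is why every reload gives a new composition.

let faces = [];      // the cutout images
let placements = []; // random layout, picked once per page load

function preload(){
  faces.push(loadImage('trump.png'));
  faces.push(loadImage('kimjong.png'));
}

function setup(){
  createCanvas(600, 400);
  for(let i = 0; i < 20; i++){
    placements.push({
      img: random(faces),
      x: random(width),
      y: random(height),
      size: random(30, 120)
    });
  }
}

function draw(){
  background(240);
  for(let p of placements){
    image(p.img, p.x, p.y, p.size, p.size);
  }
}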

I also developed two other, simpler sketches with this approach.

  1. Create little worms from Trump's and Kim Jong's faces by clicking on the canvas. Check the code here.
  2. Just watch Trump's and Kim Jong's faces appear randomly on the canvas. Check the code here.

These p5.js collage sketches were developed after watching the Objects and Images video from my teacher Daniel Shiffman's YouTube channel.

Hope you have fun,

Ilana.


Animations with P5.js

After getting started creating simple static drawings with P5, as you can see in my first Intro to Computational Media post, it is time to play with simple animations.

I added some fun interactions to my "Night in the City" sketch, as you can see below:

In “Night in the City Animated” the main interactive/animated ideas are:

1. Every time you move the mouse from left to right, you change the background color from 0 to 255 (black to white), giving the impression that night turns into morning.

2. Every time you reload the drawing (or the page; just refresh it and you will see), the stars appear in different positions, in an ever-changing sky with random star positions and sizes.

3. Also, as you can see at the top of the antenna on my Empire State Building, there is an on/off red light. The circle becomes red for 1000 milliseconds and then goes off for the next 1000, repeating in a loop (see the sketch below).
You can see the code in this link.
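Here is a minimal sketch of those three ideas (positions and sizes are illustrative, not my actual code):

let stars = [];

function setup(){
  createCanvas(400, 400);
  // star positions and sizes are chosen once per load,
  // so every refresh gives a different sky
  for(let i = 0; i < 50; i++){
    stars.push({ x: random(width), y: random(height/2), s: random(1, 4) });
  }
}

function draw(){
  // mouseX (0..width) maps to background 0..255: night to morning
  background(map(mouseX, 0, width, 0, 255));

  noStroke();
  fill(255, 255, 200);
  for(let st of stars){
    ellipse(st.x, st.y, st.s, st.s);
  }

  // antenna light: red for 1000 ms, off for the next 1000 ms
  if(millis() % 2000 < 1000){
    fill(255, 0, 0);
  } else {
    fill(60);
  }
  ellipse(width/2, 80, 10, 10);
}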

Getting started with P5.js

At ITP, one highly recommended class to take in the first semester is ICM (Introduction to Computational Media). ICM is a beginner-level coding class that is very effective at "breaking the ice" with programming and helping you get started. Personally, I have an OK amount of experience with JavaScript and web design, though I don't use them for art sketching. I am a creative human, deeply passionate about art. Still, whether physically or digitally, I have never quite found an artistic way to express myself. So, I am excited about the possibility of exploring programming with this approach.

Furthermore, I really believe in the power of programming and its use to enable human beings to be informed and to inform, to have the power to create. Accordingly, my goal is also to be in an environment where I can see other people learning, maybe people who are programming for the first time. In Brazil, where I am from, the idea of code as a tool for everyone is not developed yet; most people see it as something meant only for computer engineers and computer science experts. Maybe with what I learn in this course I can help change this dynamic in my country. Who knows!?

So, what is this class about, after all?

It's about playing to create sketches with code, using P5.js (a JavaScript library). We can create simple drawings, as in this first week's assignment, and later more complex images using loops, animations, interactions, 3D images… You can check the P5.js art Tumblr, where there are a lot of awesome examples. Also, if you have any interest in getting started, my teacher, Dan Shiffman, has an AWESOME YouTube channel, with lots of tutorials for beginners that are great (and fun) material.

For my first assignment, I had to create a simple sketch using basic shapes and calling basic p5.js functions.

And TA-DA! Here it is:


How did I do that?

The semiotics

Ok, first I looked at a bunch of references to come up with what I was going to create. I knew I was going to use mostly geometrical forms, so I checked a couple of geometric works by artists I like. Also, I had just arrived in NY, so based on my surroundings, and mainly on my excitement about living in the city, I decided to do a code sketch of it.

Usually, when I go to Tisch (where my school, ITP, is), I get off the subway at the Broadway-Lafayette station and walk 3-4 minutes on Broadway. Between leaving the station through different exits and getting lost in the middle of fast walkers, more than twice I have already headed in the opposite direction from Tisch's building, losing some minutes in the process. So, as a reference to make sure I am going the right way, I use the Empire State Building: it is north, so if I walk toward it I will surely arrive at ITP. Having done that almost every day for the past week, this image apparently got stuck in my head.


The coding

So, even though it looks like a kid's paintbrush sketch, it was not created in a paint program, and not by a kid. It was coded! As you can see below.

You can also check the code and add your changes and colors, and use it as you prefer. You're more than welcome to play with it 🙂

I read about the functions here to create the shapes and add the colors.
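For a flavor of what those functions look like, here is a tiny example in the same spirit (not my actual sketch):

function setup(){
  createCanvas(600, 400);
  background(20, 24, 60);   // night sky
  noStroke();
  fill(80);                 // a building
  rect(250, 150, 100, 250);
  stroke(255);
  line(300, 150, 300, 124); // the antenna
  noStroke();
  fill(255, 0, 0);          // the red light on top
  ellipse(300, 120, 8, 8);
}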

My challenges were getting used to P5.js coordinates and not being able to "inspect" each element, as you can when using HTML/CSS.

Have fun,

Ilana