Everyday Windows

Sexism, harassment, abuse… They have all historically been regarded as personal issues, excluding them from public discussion and diverting attention from their status as systemic sociopolitical problems. We want to show what happens behind the doors (or windows, in this case), what women (from our experience) go through, and how society as a whole contributes to the spread of these issues.

This VR experience about experiences was created in three.js and rendered with the help of the WebVR API. It runs on a node.js server connected via WebSockets to an Arduino MKR1000.

This project was developed by Nicolas Pena-Escarpentier and me as our final for Intro to Physical Computing and Intro to Computational Media.

Here you can check the Github repository and the link (we will work on making a full web-only version soon). Below, I'll explain a bit about our process and the development of the project.

The Sketch

three.js

To have all the windows in one single sketch, we created an individual scene for each one and simply change the index of the one to be rendered. As most of the rooms share the same components (floor, cylindrical wall, and images of the cutouts and windows), all the scenes are created within a for loop with their common elements, plus a dedicated function for the assets specific to each scene. The images are simply textures applied to planes, with transparency taken from the PNG files. Check the code below:

function createEnvironment(){
  for (let i = 0; i < 6; i++) {
    scenes[i].background = new THREE.Color( 0x555555 );
    createLights(i);
    createFloor(i);
    createRoom(i);
  }
  scene0();
  scene1();
  scene2();
  scene3();
  scene4();
  scene5();
}

function createLights(ind){
  let p_light = new THREE.PointLight(col[ind], 1.5, 1000, 2);
  p_light.position.set(0, 10, 0);
  scenes[ind].add( p_light );
}

function createFloor(ind){
  let floorGeo = new THREE.CylinderGeometry(roomSize*4, roomSize*4, 1, 24);
  let floorMat = new THREE.MeshLambertMaterial({
    color: 0x666666,
    emissive: 0x101010,
  });
  let planeF = new THREE.Mesh(floorGeo, floorMat);
  planeF.position.set(0, -roomSize/4, 0);
  scenes[ind].add(planeF);
}

function createRoom(ind){
  // planes w/ images
  let plGeo = new THREE.PlaneGeometry(roomSize, roomSize, 10, 10);

  // images
  let windowMat = new THREE.MeshBasicMaterial({
    map: loader.load("media/" + ind + "/window.png"),
    side: THREE.DoubleSide,
    transparent: true,
  });
  let personMat = new THREE.MeshBasicMaterial({
    map: loader.load("media/" + ind + "/main.gif"),
    side: THREE.DoubleSide,
    transparent: true,
  });
  for (let i = 0; i < 4; i++) {
    let windowPlane = new THREE.Mesh(plGeo, windowMat);
    let personPlane = new THREE.Mesh(plGeo, personMat);
    let rad = 10;
    let posX = rad * Math.sin(i*Math.PI/2);
    let posZ = rad * Math.cos(i*Math.PI/2);
    personPlane.position.set(posX*6, roomSize/4, posZ*6);
    personPlane.rotation.y = Math.PI/2 * Math.sin(i*Math.PI/2);
    scenes[ind].add(personPlane);
    windowPlane.position.set(posX*8, roomSize*.3, posZ*8);
    windowPlane.rotation.y = Math.PI/2 * Math.sin(i*Math.PI/2);
    scenes[ind].add(windowPlane);
  }

  // room walls
  let wallGeo = new THREE.CylinderGeometry(roomSize*5, roomSize*5, 250, 24, 20, true);
  let wallMat = new THREE.MeshLambertMaterial({
    color: 0xd0d0d0,
    side: THREE.DoubleSide,
  });
  let wall = new THREE.Mesh(wallGeo, wallMat);
  wall.position.set(0, 230, 0);
  scenes[ind].add(wall);
}

And this is how they look:

WebVR

Getting the sketch to display in VR was tricky. The WebVR implementation has been evolving, and a lot of the available information has changed drastically. We'd also like to thank Or Fleisher for helping us get started with WebVR.

We start by telling the renderer to enable VR, then load the VREffect package to create a separate render for each eye, as well as the VRControls package to incorporate the accelerometer rotation for correct camera control. It is also useful to install the WebVR API Emulation Chrome extension in order to test the sketch with the new controls.

renderer.vr.enabled = true;

effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

controls = new THREE.VRControls( camera );
controls.standing = true;
camera.position.y = controls.userHeight;
controls.update();

Then we need to check whether there is an available VR display by using navigator.getVRDisplays(). In this case, we default to the first (and most likely only) VR display. With this display, we can also use the WebVR helper to automatically create the button that launches the VR view.

// sets up the VR stage + button
function setupVRStage(){
  // get available displays
  navigator.getVRDisplays().then( function(displays){
    if(displays.length > 0) {
      vrDisplay = displays[0];
      // setup button
      vrButton = WEBVR.getButton( vrDisplay, renderer.domElement );
      document.getElementById('vr_button').appendChild( vrButton );
    } else {
      console.log("NO VR DISPLAYS PRESENT");
    }
    update();
  });
}

Now, the animation function is a tricky one, because it changes the rendering pipeline. Usually the browser is the one that requests a new animation frame when it is ready to display one, but in this case the VR display is the one that has to ask for it. Also, as we're using two different renderers (the normal one and the VREffect), we need to distinguish between the two states, which can be done with the vrDisplay.isPresenting property.

function animate(timestamp) {
  let delta = Math.min(timestamp - lastRenderTime, 500);
  lastRenderTime = timestamp;

  if(vrDisplay.isPresenting){ // VR rendering
    controls.update();
    effect.render(scenes[current], camera);
    vrDisplay.requestAnimationFrame(animate);
  } else { // browser rendering
    controls.update();
    renderer.render(scenes[current], camera);
    window.requestAnimationFrame(animate);
  }
}

It is also worth noting that we have to add the WebVR Polyfill package for everything to work outside Google Chrome (remember, this is a browser-based implementation!).
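As a rough sketch of what that looks like (assuming the webvr-polyfill script has already been loaded on the page; the exact setup depends on the polyfill version, so treat this as an illustration rather than our exact code):

// Only instantiate the polyfill when the browser lacks native WebVR support.
// Newer webvr-polyfill versions expose a WebVRPolyfill constructor;
// older ones self-initialize as soon as the script loads.
if (navigator.getVRDisplays === undefined && window.WebVRPolyfill) {
  var polyfill = new WebVRPolyfill();
}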

Node.js Server

To learn the basics of node, npm, and how to set up a server, Daniel Shiffman's Twitter Bot Tutorial and this lynda.com course are an amazing start.

Thanks to these tutorials, setting up the server was easy, but the WebSocket implementation was rather difficult. We started with socket.io, but it implements extra things that interfered with the Arduino connection. Thankfully, Tom Igoe referred me to his book Making Things Talk, where he successfully implements this connection using the ws library on the server side. So, following one of his examples (all of them are on Github), we got it working perfectly.

// websocket setup
var WebSocket = require('ws').Server;

// attach the WebSocket server to the existing http server
wss = new WebSocket({ server: http });

wss.on('connection', function(ws_client){
  console.log("user connected");

  ws_client.on('message', function(msg){
    // check if the values are valid/useful
    var intComing = parseInt(msg);
    if(!isNaN(intComing) && intComing>=0 && intComing<=5){
      _scene = parseInt(msg);
      broadcast(_scene);
      console.log("change scene broadcast: " + _scene);
    }
  });
});

function broadcast(msg){
  wss.clients.forEach(function each(client) {
    client.send(msg);
  });
}
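For reference, the browser side of this connection only takes a few lines: the sketch opens a WebSocket back to the same server and swaps the scene index whenever a broadcast arrives. This is a simplified sketch rather than our exact code; the socket URL is an assumption, and current is the scene index used by animate() above.

// connect back to the node server and update the rendered scene index
// whenever the Arduino triggers a broadcast (URL assumed to be the host serving the sketch)
let socket = new WebSocket('ws://' + window.location.host);

socket.onmessage = function(event){
  let index = parseInt(event.data);
  // same validity check as the server: only accept scene indices 0-5
  if (!isNaN(index) && index >= 0 && index <= 5) {
    current = index; // animate() renders scenes[current]
  }
};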

Another thing worth noting is that, to keep the application running on the DigitalOcean server, we used the [forever](https://www.npmjs.com/package/forever) package.

Arduino

For this project, we used an Arduino MKR1000, because we needed a way to wirelessly communicate with the phone (via a server, in this case) without resorting to a computer. In the beginning, we tried using a Bluetooth module, but as the project was web-based, the security measures in browsers do not let them access Bluetooth (or other hardware components) easily. It also turned out to be much harder than we initially thought, while the WiFi communication was much easier.

Internet connection

Getting the Arduino to connect to the internet is pretty straightforward. Following this tutorial was all we needed.

The connection with the server was harder. After extensive web searches, we asked Tom Igoe, who recommended his book Making Things Talk, which dedicates a whole chapter to this. So, following the book example and the ArduinoHttpClient library example, we got everything set up.

// The exact header names were lost in formatting; these are the standard ones
// for the MKR1000 WiFi connection and the ArduinoHttpClient WebSocket client.
#include <SPI.h>
#include <WiFi101.h>
#include <ArduinoHttpClient.h>

WiFiClient wifiClient;
// 'server' and 'port' are defined elsewhere in the sketch
WebSocketClient webSocket = WebSocketClient(wifiClient, server, port);

void connectToServer() {
  Serial.println("attempting to connect to server");
  webSocket.begin();

  Serial.println(webSocket.connected());
  if (!webSocket.connected()) {
    Serial.println("failed to connect to server");
  } else {
    Serial.println("connected to server");
  }
}

void sendWindow(int num){
  // check wifi connection
  if(WiFi.status() != WL_CONNECTED){
    connectWiFi();
  }
  // check server connection
  while(!webSocket.connected()){
    connectToServer();
  }
  // send the message!
  webSocket.beginMessage(TYPE_TEXT); // message type: text
  webSocket.print(num);              // send the value
  webSocket.endMessage();            // close message
  Serial.print("window ");
  Serial.print(num);
  Serial.println(" sent");
}

Interface components

In the beginning, we tried using a capacitive touch sensor (MPR121) and covered the borders of the windows with capacitive fabric to make it work. The code was easily done by following the Adafruit MPR121 tutorial plus a quick code fix. Sadly, user testing led us to realize that this was not the best choice: due to poor instructions, people would often try to touch the window itself rather than the border. So we opted for the not-as-fun, more conventional approach and got LED momentary pushbuttons.

In order to light up the rooms with colors that match the lights in the sketch, we planned to use RGB LEDs, but they posed a bunch of difficulties. They need a lot of pins (3 for each LED * 6 windows = 18 pins!), or a lot of math if we were to hard-code them to use only one pin per window. Using NeoPixels instead was a much better idea, and amazingly simple to code! With the Adafruit NeoPixel library, it's as easy as giving it the pin number, how many pixels there are, and the RGB values for each. Et voilà! That way, everything stays in the code, in case we want to change anything.

void windowLight(int num){
  // window variables
  int r = window_colors[num][0];
  int g = window_colors[num][1];
  int b = window_colors[num][2];
  // window pixels
  for(int i=0; i<3; i++){
    int index = window_pixels[num][i];
    pixels.setPixelColor(index, pixels.Color(r, g, b));
    pixels.show();
  }
}

Resources

Here is a list of (previously unreferenced) web resources from which we took some code or help to implement everything:

PhotoJukebox: mixing photography, music & physical computing.

If you have looked at some of my other projects on my blog or portfolio, you have probably noticed that I like playing with photography. Whether it's the political way we perceive images or the personal way pictures are attached to memories, as a designer, maker, producer, or whatever the name is for what I do, I find it fun to create with it.

That said, my boyfriend's birthday was coming up, and I wanted to give him a special gift. Not only because he was turning 28, but because he got a job in the city and made it happen to move to New York so we could be together, arriving only a couple of days before his birthday. My boyfriend is a musician, so since the beginning of our relationship he has frequently sent me recordings of himself playing songs that somehow relate to what we are living in that moment.

Meanwhile, I partnered with my colleague and friend Jenna, who is an awesome designer (check her work here), to make our Physical Computing midterm project. Inspired by this context, I came up with the Photo Jukebox idea. She loved it and we decided to make it happen.

You can check below how we did it! Further on in this post, I'll explain how.

Inspiration

My main inspiration came from a project that a colleague at ITP did. Amitabh is a genius when it comes to Physical Computing and Arduino (check his work here), and he once built a jukebox that played specific songs when you placed a related acrylic sheet on top of his machine. The acrylic sheets were really fun and were based on images of the bands that would then be played, as you can see below.

So his project triggered an idea (thanks Amitabh!!!!): what if we could put personal pictures there? Today, the way we interact with personal photographs is mainly digital, posting on our social media. At the same time, the few that are printed and take a physical form stay in barely touched photo albums or in beautiful but not at all interactive frames. How fun would it be to have a machine like Amitabh's, but with a different approach and design, that could give the user a unique way to interact with pictures, listen to music, and trigger the good memories and feelings attached to those photographed moments?

And so I started sketching. I wanted the design to have a vintage look and to be made of wood, giving it a kind of Victrola feel.

The idea was to use the back of each photograph to close certain circuits, acting like pressed buttons. The pressed button would then trigger a specific song related to that photo. I knew we could easily do that by using copper tape.

Testing the concept

Thus, we started by getting the materials to create and test the circuit and the code.

To do this first step, we needed a major item that allows the Arduino to actually play music and read an SD card without having to use a computer: an MP3 shield.

As we were really excited to get started with the project, Jenna and I ran to Tinkersphere and purchased the geetech.com MP3 shield without previously testing its library, which turned out to be an immediately regrettable decision. There was very little documentation, the provided link to the datasheet was broken, and the library (incredibly) didn't work.


*screams*

Luckily, Aaron (our miracle-worker of a resident) was able to help us hack the Adafruit mp3 shield library to work with our questionable Tinkersphere purchase. Unfortunately, that only opened the floodgates of pain and suffering, as there was still a lot of crazy mp3 shield logic to deal with (delays, booleans for each song, the concept of interrupts and how they apply to serial communication…). Eventually, many office hours (thanks Yuli and Chino) and even more if statements got the job done. However, we weren't able to figure out how to get combinations of switch states to allow for more songs.


preview of the madness

We tested the code and circuit with regular push buttons, and it worked!


So we threw together a rough prototype with the copper tape buttons to test the actual technical concept.

And it also played the songs as expected!

Creating the Enclosure

With the circuit working, it was time to work on the enclosure. We bought a utensil tray from The Container Store (shout out to this video), and laser cut an interface, first with cardboard:

Then with acrylic:

Building the circuit

This was a really challenging part. How do you attach and solder the buttons, battery, potentiometer, and on/off switch in a stable way so it could become a durable gift?

Until then, we had mainly been prototyping, playing with soldering but not actually worrying about making everything fit or survive a simple shake. We started working by ourselves and, after tons of hours, came up with a working circuit. Still, we had a lot of issues and were panicking over its instability.

This is when you book office hours, and Aaron (the miraculously talented Physical Computing resident) came to save us again.

He explained that, for final prototypes, you should ALWAYS work with multi-stranded wire. It is more flexible, fits better into enclosures, and won't break as easily once soldered. He also showed us some awesome German plugs (the orange/transparent things you see in the pic) that can join multiple wires, so we could connect all our grounds and our 5V lines very easily.

Check below what our life looked like before and after Aaron's help.

And our baby was born!

Jenna and I were both very proud of what we accomplished working together in a week and a half.

The gift was amazing and my boyfriend loved it.

It will definitely be part of our living room, sitting on the coffee table right next to the sofa. It will be perfect for when we feel nostalgic and want to go through our special moments. Also, when we invite guests, I believe they will feel curious about it and may play with our Photo Jukebox. With that, they will surely learn more about our story together and feel, through photography and music, a bit of our love.


Singing Hula Hoop #1 Prototype

In my Physical Computing class, we were starting to work with analog input data. Therefore, we were assigned to create a musical instrument that played with these different inputs and generated sound.

I am really excited about the idea of integrating dance with physical computing, as dance has always been a great way for me to express myself and let go. I think that, further on, it opens possibilities to develop art performances and, especially, to create toys and learning tools for kids. But for now, the main idea was to prototype and have fun trying to make a hula hoop more interactive, transforming it into a musical instrument as proposed in the assignment.

Trial #1 -> Fail

Having in mind that my goal was to collect different inputs from the hooping (spinning) movement, and trying to avoid attaching too many things to the hoop, my first idea was to work with infrared light. It would work in the following way: I would attach a battery and a series of infrared LEDs to the hoop, while the Arduino, with an infrared light sensor and a speaker, would be placed on a table and connected to a power source. Every time the distance between the infrared LEDs on the hoop and the infrared sensor on the table varied, a specific note would play. This was a very practical and nice idea in theory, but it didn't work in real life. In the following video you can check out why.

As you can see in this breadboard prototype, I was simulating the hoop movement to try to understand the variables I would be able to get from this sensor. You probably noticed that the opposite of what I expected happened: instead of increasing as I brought the LED light closer, the sensor readings decreased to a constant 0 (zero). With this input I was not going to be able to generate a proper sound and integrate it as I wanted. Besides, I needed to bring the infrared LEDs super close to my Arduino, and that wasn't practical for my idea.

Trial #2 -> Success

So I had to change my approach. I decided to work with an accelerometer. As I didn't have time to learn how to send sensor data to the Arduino over Bluetooth, I had to embrace a very prototype-ish version of my idea and place the speaker and the Arduino on the hula hoop itself. My solution to minimize that was to get a smaller Arduino, so I ended up getting an Arduino MKR1000. I also purchased an Adafruit BNO055 accelerometer sensor.

To be able to work with the Adafruit BNO055 accelerometer, I had to download its library and documentation. I only used the basic functions that let me get the X, Y, and Z orientation values. I chose to proceed with the X value for now, as it senses the 360-degree movement.

So I had the sensor working (check!) and I identified the variables (check!). Now it was time to use the pitches.h file to map the different input values to notes (through if/else statements). You can check the code below:

// The exact header names were lost in formatting; these are the standard
// includes from the Adafruit BNO055 library examples.
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>
#include <utility/imumaths.h>
#include "pitches.h"

Adafruit_BNO055 bno = Adafruit_BNO055(55);

int analogInput;

void setup(void)
{
  Serial.begin(9600);
  Serial.println("Orientation Sensor Test"); Serial.println("");

  /* Initialise the sensor */
  if (!bno.begin())
  {
    /* There was a problem detecting the BNO055 ... check your connections */
    Serial.print("Ooops, no BNO055 detected ... Check your wiring or I2C ADDR!");
    while (1);
  }

  delay(1000);

  bno.setExtCrystalUse(true);
}

void loop(void) {
  /* Get a new sensor event */
  sensors_event_t event;
  bno.getEvent(&event);

  /* Display the floating point data */
  Serial.print("X: ");
  Serial.print(event.orientation.x, 4);
  Serial.print("\tY: ");
  Serial.print(event.orientation.y, 4);
  Serial.print("\tZ: ");
  Serial.print(event.orientation.z, 4);
  Serial.println("");

  delay(100);

  int frequency = map(event.orientation.x, 0, 360, 100, 880); // computed but not used yet

  // map ranges of the X orientation (in degrees) to notes
  if (event.orientation.x > 2 && event.orientation.x <= 5) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 5 && event.orientation.x <= 10) { tone(8, NOTE_F6); }
  else if (event.orientation.x > 10 && event.orientation.x <= 20) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 20 && event.orientation.x <= 30) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 30 && event.orientation.x <= 40) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 40 && event.orientation.x <= 50) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 50 && event.orientation.x <= 60) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 60 && event.orientation.x <= 70) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 70 && event.orientation.x <= 80) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 80 && event.orientation.x <= 90) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 90 && event.orientation.x <= 100) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 100 && event.orientation.x <= 110) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 110 && event.orientation.x <= 120) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 120 && event.orientation.x <= 130) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 130 && event.orientation.x <= 140) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 140 && event.orientation.x <= 150) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 150 && event.orientation.x <= 160) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 160 && event.orientation.x <= 170) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 170 && event.orientation.x <= 180) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 180 && event.orientation.x <= 190) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 190 && event.orientation.x <= 200) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 200 && event.orientation.x <= 210) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 210 && event.orientation.x <= 220) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 220 && event.orientation.x <= 230) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 230 && event.orientation.x <= 240) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 240 && event.orientation.x <= 250) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 250 && event.orientation.x <= 260) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 260 && event.orientation.x <= 270) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 270 && event.orientation.x <= 280) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 280 && event.orientation.x <= 290) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 290 && event.orientation.x <= 300) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 300 && event.orientation.x <= 310) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 310 && event.orientation.x <= 320) { tone(8, NOTE_G3); }
  else if (event.orientation.x > 320 && event.orientation.x <= 330) { tone(8, NOTE_A3); }
  else if (event.orientation.x > 330 && event.orientation.x <= 340) { tone(8, NOTE_B3); }
  else if (event.orientation.x > 340 && event.orientation.x <= 345) { tone(8, NOTE_A5); }
  else if (event.orientation.x > 345 && event.orientation.x <= 350) { tone(8, NOTE_C4); }
  else if (event.orientation.x > 350) { tone(8, NOTE_G3); }
}


With that, the only remaining challenge was to attach the battery, Arduino, accelerometer, and speaker to the hula hoop in a way people could still hoop with it. I figured it would be better to balance the weight by placing the 9V battery and the on/off button on one side and attaching the rest to the other side of the hula hoop. As it was a really prototype-ish version, I attached the Arduino, accelerometer, and speaker with paper and soldered everything together. I also ran the wires along the hoop, as you can see in the images below.

* As you can see in the last image, I added a resistor near the battery to avoid a short circuit.

So, once turned on, my hula hoop could now emit sounds with motion.

It is still not the best sound. The next step would definitely be adding Bluetooth integration from the accelerometer to the Arduino. With that, I could place everything inside the hoop and make the design much better. I also have to work a lot on the code to enable more control over the tones while the user is hooping. It was a fun project for a first trial, but in order to be able to create performances I need to take it to the next level.


Lights, Arduino and Physical Computing

Here follow some images and a recap of my first experiment with physical computing and Arduino.

This past week, we learned how to use digital and analog inputs and outputs, implemented with Arduino code.
So I did a simple motion-input experiment that works like those automatic, motion-activated lights we have in stairwells.

For that, besides LEDs, wires, resistors, my Arduino, and my computer, I also needed a motion sensor.
So I plugged it into my breadboard and built the following circuit:

Then I added code telling the Arduino how I wanted the lights (outputs) to behave when motion was detected (input).

Thus, my circuit executed what I programmed it to do:


What is Interaction?

Earlier this week I purchased a new computer. My "old" MacBook Air started to show a fault in the connection between its trackpad and the core, processing part of the machine. I would try to open a website and the arrow would not click on the right folder. When editing an image in Photoshop, it would pick any tool but the one I needed to use. I was even afraid to open the browser: imagine what an uncontrolled arrow logged into your Facebook account can do to your social life. It drove me crazy.

I tried rebooting the OS, different USB mice, Bluetooth mice, reinstalling… until I finally took it to a Tech Bar. They did a proper diagnosis of the issue, and the results were not good. The cost of fixing it (around $900) wouldn't pay off at all, so I had to say my goodbyes and spend some considerable extra bucks on a functional new model.

Even though this first paragraph tells the sad story of the last days of my 2014 MacBook, it's a good example for defining effective interaction. Basically, interaction happens when you move your mouse and the computer is able to listen to this movement, process it, and respond to you, making the arrow on the screen move in the direction you intended.

In this dialog there are two actors, person and computer, and they have a continuous conversation: the person speaks (input), the machine thinks (process) and speaks back (output), and so on. All steps are essential to its success. In the case of my computer, it was not processing the information right. Accordingly, the mouse didn't respond to my commands, and this failure compromised its capacity for further interaction.

Still, proper interaction can happen across different types of technology, with or without a mouse, with or without a machine. Interaction is something intrinsically human and essential to our survival. We can also have a conversation by speaking with other humans, by exchanging mail, by petting an animal, by opening the fridge and seeing its internal light turn on, and in many other ways.

In a fun experiment made in 2016 by Kamptipopen, an award-winning architecture firm from Japan, employees could use colorful pipes to communicate in the office. Still, the interaction that humans have with a computer is more complex and functional compared to the interaction with the fridge, the playful and colorful pipes in Kamptipopen's office, or most other machines. So it is not a boolean quality: it has levels. And it is this high level of interactivity that makes computers play such important roles in our daily lives.

“People claim that the computer’s true essence lies in its ability to crunch numbers, or handle mountains of information. While these are desirable features, they don’t lie at the core of what makes the computer so important to our civilization. Remember, we had plenty of number-crunching and data-cubbyholing computers in the 1960s and 1970s, but we don’t talk about “the computer revolution” until the 1980s. The revolutionary new element was interactivity.” – Chris Crawford

The interaction between humans and machine devices has grown exponentially in the past years. We all experienced the shift from VHS tapes to Netflix, from fixed home phone lines to our smartphones, from mailing to Whatsapp, from buying our books in a bookstore to having them in one click on our Kindles… and the list could go on and on. It has grown to the point where it is clear that we already behave as cyborgs; we just have to accept it.

Emerging technologies aim to optimize this further.

I agree with the idea put forward in Bret Victor's A Brief Rant On The Future Of Interaction Design, written back in 2011, in which he reminds us of all the other senses, such as the ability to feel, that are still "invisible" to today's gadgets and networking tools. Still, for something to be effectively interactive, a conversation must exist. It must be intuitive enough that it is not perceived as an emerging technology or prototype, but as something that improves our daily communication process.