Magic Windows Final Project Documentation

As mentioned in my last post, for my Magic Windows final I continued developing a project that recreates Washington Square Park in 1968: it tells the story of two characters by visually placing archival images in space through AR while their monologues play.

The monologues and their respective visual scenes are triggered using geolocation, and the final scene, when the two characters meet, is triggered by image targeting – the meeting point is set under the park’s arch.

Therefore, I worked with Mapbox for the geolocation in the park and with ARKit 1.5 for the image targeting in the app.

I used Mapbox and set two points, one at each end of the park. For each point I assigned a different GameObject to SetActive, and wrote a script that checks the device’s location and activates the GameObject tied to the point the user is closest to. After some weeks of iterating, meeting with Mapbox, and some work, it worked great!

(as I move closer to the right side of the park, the red square is triggered; if I move to the left side, the green ball is triggered)

Here is the *messy* script I’m using:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Mapbox.Unity.Location;

public class locationRelative : MonoBehaviour {

    public DeviceLocationProvider locationProvider;

    // One GameObject per point in the park; only the closest one stays active.
    public GameObject[] objects;
    public Vector2[] locations; // latitude/longitude of each point

    // Update is called once per frame
    void Update () {
        // Current device position as a latitude/longitude Vector2.
        var latLon = locationProvider.CurrentLocation.LatitudeLongitude;
        Vector2 myLocation = new Vector2 ((float)latLon.x, (float)latLon.y);

        // Find the index of the closest point.
        float minDistance = float.MaxValue;
        int minIndex = -1;
        for (int i = 0; i < locations.Length; i++) {
            float distance = Vector2.Distance (myLocation, locations [i]);
            if (distance < minDistance) {
                minDistance = distance;
                minIndex = i;
            }
        }

        // Activate only the GameObject tied to the closest point.
        for (int i = 0; i < objects.Length; i++) {
            objects [i].SetActive (i == minIndex);
        }
    }
}
I also added the image targeting using the new ARKit 1.5, which was very exciting since it works great! Check below.

For now I’ve been working mostly with placeholders, so when the device recognizes the arch, a blue capsule appears.


At that point Mapbox and ARKit 1.5 were living in different projects, so the next technical step was to bring the three scenes (character 1, character 2, and the encounter) into the same project.

This took a bit longer than expected, but after closely analyzing the files and replacing the older version of ARKit with 1.5 inside the Mapbox example, it finally worked.

The next step was to add the actual images to the scene in space in a – nice – UX way.

So that’s what I’ve been playing with lately, because it’s actually kind of hard.

One idea was to activate the images once planes were detected. But as you can see below, this was not ideal: the images appear too close, and since you don’t have much control over how they show up, I don’t believe it would work in this project’s scenario.

So I decided to scatter GameObjects set as trigger colliders randomly in space. For now they still have materials, so I can see them and play with their arrangement.

In the following video (made during the snowstorm) I changed the locations so I could test everything on ITP’s floor – all those features (geolocation, image targeting, objects colliding) are working.

Below is the script I made to generate the random colliders and add the images in array sequence.

using UnityEngine;
using System.Collections;

public class CamSpawn : MonoBehaviour {

    public GameObject[] imageScene;   // archival image prefabs, shown in sequence
    public GameObject sceneNYU;       // parent for everything spawned
    public GameObject colliderObject; // trigger-collider prefab

    private int imageCount = 0;
    public int imageMax = 10;

    private Camera cam;

    int collideLoop = 10; // batches of colliders scattered at start
    int loopNum = 2;      // colliders per batch

    int spawnCount = 0;
    int spawnLimit = 2000;
    float delayCount = 0.5f;

    private bool TapSpawn = true;

    void Start () {
        cam = GetComponent<Camera> ();

        // Scatter the trigger colliders around the scene.
        for (int i = 0; i < collideLoop; i++) {
            Spawn ();
        }
    }

    void minSpawnCount () {
        spawnCount -= 1;
    }

    void spawnBool () {
        TapSpawn = true;
    }

    // Places loopNum colliders at random positions ahead of the user.
    void Spawn () {
        for (int b = 0; b < loopNum; b++) {
            Vector3 position = new Vector3 (Random.Range (-10.0f, 10.0f), 0, Random.Range (10.0f, 50.0f));
            GameObject obj = (GameObject)Instantiate (colliderObject, position, Quaternion.identity);
            obj.transform.parent = sceneNYU.transform;
        }
    }

    // When the camera enters a collider, spawn the next image in front of it.
    void OnTriggerEnter (Collider other) {
        if (spawnCount <= spawnLimit && TapSpawn) {
            float DistanceToCamera = 3.0f;
            Vector3 position = cam.transform.position + cam.transform.forward * DistanceToCamera;

            GameObject obj = (GameObject)Instantiate (imageScene [imageCount], position, Quaternion.identity);

            // This object is a child of sceneNYU.
            obj.transform.parent = sceneNYU.transform;

            spawnCount += 1;

            // Destroy the image after 60-70 seconds and free up a spawn slot.
            float tempRate = Random.Range (60.0f, 70.0f);
            Destroy (obj, tempRate);
            Invoke ("minSpawnCount", tempRate);

            // Brief cooldown before the next spawn can happen.
            TapSpawn = false;
            Invoke ("spawnBool", delayCount);

            // Cycle through the image array.
            imageCount = (imageCount + 1) % imageMax;
        }
    }
}


The “colliders” way of spawning images in the park worked well, but it got harder once the colliders were made transparent. Sometimes they would cluster in a specific region, which was not creating the experience I wanted.

So I developed another script that spawns the images according to their distance from the camera, and it worked well. It was a bit of a hassle having to customize every image to the right height and size, but it’s working well.
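That distance-based script isn’t shown here, so below is a minimal sketch of how that logic could work. This is a reconstruction, not the project’s actual code: the class name `DistanceSpawn`, the field names, and the 5-meter threshold are all placeholders.

```csharp
using UnityEngine;

// Sketch only: spawns the next image once the camera (the user) has
// walked far enough from the last spawn point. Attach to the AR camera.
public class DistanceSpawn : MonoBehaviour {

    public GameObject[] imageScene; // archival image prefabs, in story order
    public Transform sceneParent;   // parent for spawned images
    public float spawnEvery = 5.0f; // meters walked between images (placeholder)

    private int imageIndex = 0;
    private Vector3 lastSpawnPosition;

    void Start () {
        lastSpawnPosition = transform.position;
    }

    void Update () {
        if (imageIndex >= imageScene.Length) {
            return;
        }

        // Once the user has moved far enough, place the next image 3 m ahead.
        if (Vector3.Distance (transform.position, lastSpawnPosition) >= spawnEvery) {
            Vector3 position = transform.position + transform.forward * 3.0f;
            GameObject obj = Instantiate (imageScene [imageIndex], position, Quaternion.identity);
            obj.transform.parent = sceneParent;

            lastSpawnPosition = transform.position;
            imageIndex += 1;
        }
    }
}
```

Attached to the AR camera, `transform.position` tracks the device as it moves through the park, so the images unroll along the walk instead of clustering.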

Also, for better user testing, I added a “demo UX” with the introduction audio.

Below you can check the demo:

AR Tarot Reader

In this post I will go through the research, ideation, and first prototype of an AR Tarot Reader app that augments and gives life to these ancient cards, helping users connect with their symbolism, stories, and, ultimately, themselves.

Tarot and Emotional Intelligence

Tarot is a divination ritual that originated in the mid-15th century to help people connect through symbols in order to understand the present and forecast the future. Those symbols are pulled from a history of human myths and archetypes and are part of what Carl Jung called the collective unconscious. It is an interesting tool in which, through tangible objects such as cards, participants can project their feelings and thoughts. By seeing the parallels between the stories represented in the cards and their own experiences, people can take a moment to meditate on their own environment and behavior – an action that can influence future outcomes. Therefore, Tarot is a powerful tool for self-knowledge.

With the rise of religious bricolage, people are increasingly partaking in the Tarot ritual both by visiting formal card readers and by purchasing their own cards, becoming readers themselves. Accordingly, several workshops, events, books, blogs, YouTube channels and Mobile Apps on the subject have been developed in the past years.


Tarot Apps

As a digital creator, I have been taking a close look at current Tarot mobile apps. Most follow one of two main approaches: an online “card taker” (the cards are randomly/digitally picked for you) or a guide for learning the meaning of a physically drawn card – and sometimes they combine both. Still, I believe that neither actually works effectively for the purpose of helping users identify with the cards’ stories and meditate.

I think Tarot is a tactile experience – holding the deck in your hands, shuffling the cards, turning them over one by one… if you make it 100% digital, you take a great part of that magical Tarot moment away. On the other hand, playing digitally makes it really easy to access information and symbolic meanings. Unfortunately, when an app is only a guide, you have to manually look up the specific card’s name and meaning, and when you finally find it, you discover that the information provided feels incomplete, as it is usually displayed in short bursts (a paragraph or two) of content.

Below are examples from the top two apps that I selected. I really LOVE the design of the first one, and the UX/UI is great.

Golden Thread Tarot


Tarot and Augmented Reality

With those pros and cons in mind, I started to think about creating an app that would use Augmented Reality to bring the Tarot cards to life, as a tool to promote emotional intelligence. And I was not the only one – there is already a developed version, as you can see:

Again, even though I see a lot of potential in the approach and the AR integration works great, it simply lacks content. To me it works more as an Augmented Reality proof of concept than as an actual tool that people would use to play Tarot. Also, it is very game-y. Accordingly, I see an opportunity to create a more interactive, content-based experience.


My idea

As a way to explore UX in AR and to tell the stories of the cards, my idea is to personify each Tarot card. Based on the idea in the article/prototype experience AniThings: Animism and Heterogeneous Multiplicity, as the user places the desired card under their phone’s camera, the card would show some information and serve as a virtual button that, when pressed, would tell its symbology, story, and meaning.

By “coming to life” I mean it would start talking, as I would work mainly with sound. Visually, I would display text and maybe some sparkles or some other kind of smooth visual feedback, to let users know that by pressing the card they are activating the sound. I believe that working with too many visual animations or actual visual characters would take away some of the meditative quality that I aim to incorporate into the experience.

Press play to check the prototype:


Further on, maybe even with AI, people would be able to have an actual conversation with each card, like a “Tarot Alexa”.

Touch the Mona Lisa: virtual buttons & object augmentation ideas

This week with Vuforia I explored creating an AR button.

By touching the Mona Lisa you are able to rotate the cube! How awesome haha – but it does give me some ideas and helps me practice my C# scripting.
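The trick relies on Vuforia’s virtual-button events. A minimal sketch of the kind of script involved could look like this – the class name, the `cube` field, and the rotation speed are placeholders, not the exact project code, but `IVirtualButtonEventHandler` is the actual interface the Vuforia Unity SDK of this era exposes:

```csharp
using UnityEngine;
using Vuforia;

// Sketch only: rotates a target object while the virtual button
// on the image target (the Mona Lisa) is being "touched".
public class VirtualButtonRotate : MonoBehaviour, IVirtualButtonEventHandler {

    public GameObject cube;              // object to rotate (placeholder name)
    public float degreesPerSecond = 90f; // rotation speed while pressed

    private bool pressed = false;

    void Start () {
        // Register for press/release events on every virtual button
        // defined under this image target.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour> ()) {
            vb.RegisterEventHandler (this);
        }
    }

    public void OnButtonPressed (VirtualButtonBehaviour vb) {
        pressed = true;
    }

    public void OnButtonReleased (VirtualButtonBehaviour vb) {
        pressed = false;
    }

    void Update () {
        if (pressed) {
            cube.transform.Rotate (Vector3.up, degreesPerSecond * Time.deltaTime);
        }
    }
}
```

The virtual button itself is defined as a child of the image target in the Vuforia configuration; the script just listens for its press/release events and drives the rotation.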


The readings for class this next week comprised two articles: AniThings: Animism and Heterogeneous Multiplicity and Developing Augmented Objects: A Process Perspective.

Both explore ways to add digital qualities to physical objects, but while the first article creates new things and gives them human characteristics, the second incorporates electronic functionality into existing everyday objects.

As homework for class we were supposed to think of an object we would augment. There are plenty of objects I can imagine augmenting in my house and in my everyday life. Maybe my purse or my pockets, reminding me of the checklist of things I can’t forget to put inside; maybe my closet, to help me choose tomorrow’s outfit; maybe my notebook, to show me personalized, easy-to-read news; or my necklace, to help me send a “poke” to my loved ones; my fridge, my mirror… and so on. But one of the articles offered a framework for ideating and prototyping augmented objects.

So let’s try using it:


So let’s define my augmented object:

  • Problem definition: when I play the guitar, it is an awful experience to have to check lyrics and chords through apps such as Ultimate Guitar on the computer or the phone. The phone keeps falling off my lap, the lettering is usually small, and it would be nice to have a playlist to play along to.
  • Definition of the AO usage context: when I play the guitar – usually on the sofa, in the living room. Hands are not available.
  • Requirements definition: show lyrics and chords in a visible, interactive way along with the music; command by voice.
  • Selection of the object to be augmented: not sure. I don’t think it should be the guitar… maybe an object placed on the coffee table in the living room that would project the lyrics somewhere?

I’m still not sure what the right object to augment for this problem would be. I’ll keep ideating and will probably discuss it in class. Also, is Augmented Reality the way to go to augment this object? I don’t think so – in the AO usage context my hands are not available… so at least not on the phone.

Let’s see what I will come up with!


Starts with coffee: an AR experience

In our second assignment for Magic Windows we were asked to play around and create a storytelling experience using Vuforia.

That was not the first time I used Unity + Vuforia. In a previous post made for my Animations class last semester, I followed a basic tutorial and added a dancing monster to a Starbucks logo. So this time I went all the way, focusing on the concept and the storytelling aspects of the experience.

This is my morning coffee mug:

I really like it, and it has a lot of sentimental value attached to it. My mother collects mugs from the places she travels to, and even though I do not have the same habit, it seems I do have an attachment to mugs from places where I have lived.

I lived in Tel Aviv from Feb 2015 to Sep 2016 and I have great memories from it. So every time I have coffee in the morning – which means every day, every morning – I kind of stare at it and play in my head with the tiny Bauhaus city illustrated on the mug.

Accordingly, I decided to keep playing and use it to tell a story in AR:


Starts with Coffee

Because in the cycle of our daily routines, everything starts with coffee.


In this experience, the user is invited to interact with the mug, unveiling videos and GIFs that tell the story of a common weekday in New Yorkers’ lives. The story is mainly a collage of resources I found on the web. Together, in this composition, they tell the story of an everyday weekday: starting with the alarm, brushing our teeth, breakfast; then commute > work > commute; and all the possibilities the evening holds, such as a party, dinner, sex, movies, and, of course, going back to sleep.

As the mug is circular, it’s an ongoing experience, like our lives, right!? I thought it was interesting to recreate the “scenes” – morning/day/evening – based on the three images displayed on the mug, which I used as targets when creating the Vuforia trackers.

I look forward to exploring more ways of storytelling and augmenting objects with AR. Having started to master the setups and feeling more comfortable with Unity, I’m also looking forward to scripting more in C#. With that, beyond the user input of triggering augmented environments when locating targets, I will be able to establish more complex relations and therefore add much more user interactivity.


Projection Mapping: spatial augmented reality experience

For our first assignment for the Magic Windows and Mixed Up Realities class, we had to find an element of our home or space that we’d like to augment (it could be a wall, a poster, or a painting) and give it life using data, user interaction, or any other means.

As soon as we got the homework I already knew what “screen” I wanted to play with: my bedroom mirror. I find mirrors such a playful screen, one that hasn’t been used much yet for digital projects. Our TVs are already connected, and there are already some digital portraits (even though they usually are not very interactive and look very tacky), but still, not much interaction with mirrors.

At the same time, on a “physical” level, many companies, restaurants, and people have played with mirrors, leaving mainly motivational messages – cute, loving, or friendship-themed ones – using simply lipstick or post-its. Inspired by those, I had the idea of playing with inspirational quotes to help enhance my mood when I wake up.

So I created the following app using WebSockets: once a specific mood is pressed on my phone, the projection connected to the same web browser changes. I called it “Mood Mirror”. You can check the GitHub repository link here.

The idea is to select your current mood on the website from your phone while still in bed. With a projector connected to the same URL on a computer, when you get up to check yourself in the mirror, you see a corresponding positive message facing you to start the new day.

Here is the mobile website interface:

As you select the emoji corresponding to your mood, you will see a quote to inspire you reflected in the mirror.

The result was really prototype-ish, but it was definitely fun to play with. I tried to use MadMapper, the program suggested by our teacher Rui, to project from an angle so the image would land on the mirror itself instead of reflecting onto the user, but I didn’t get a good result with that. Anyway, I also find it interesting to project something onto the mirror and play with how it reflects at the user.

With so many people taking mirror selfies, it could be easy to build on that to create a unique user experience. I wonder what would happen if we projected quotes and let people select their moods via an interaction placed in the elevator mirrors of corporate buildings – maybe we could create a fun moment of distraction before people enter the office – or simply use it at home.

Oh, and here is my .gif or <animation> that represents me: