As mentioned in the last post, for my final for Magic Windows I continued developing a project that recreates Washington Square Park in 1968, telling the story of two characters and visually placing archival images in space through AR while their monologues play.

The monologues and their respective visual scenes are triggered using geolocation, and the final scene, where the two characters meet, is triggered by image targeting, since the meeting point is set under the park's arch.

To build this, I worked with Mapbox for the geolocation in the park and with ARKit 1.5 for the image targeting in the app.

I used Mapbox and set two points, one at each extreme side of the park. For each side I placed a different GameObject to SetActive, so I wrote a script that checks the device location and activates the GameObject tied to the point the user is closest to. And after some weeks of iterating, a meeting with Mapbox, and some work, it worked great!

(As I move closer to the right side of the park, the red square is triggered; if I move to the left side, the green ball is triggered.)

Here is the *messy* script I'm using:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Mapbox.Unity.Location;

public class locationRelative : MonoBehaviour {

    public DeviceLocationProvider locationProvider;

    // objects[i] is the GameObject to activate when locations[i] is the closest point.
    public GameObject[] objects;
    public Vector2[] locations;

    // Update is called once per frame
    void Update () {
        // Current device position as latitude/longitude.
        Vector2 myLocation = new Vector2 (
            (float)locationProvider.CurrentLocation.LatitudeLongitude.x,
            (float)locationProvider.CurrentLocation.LatitudeLongitude.y);

        // Find the point the user is closest to.
        float minDistance = float.MaxValue;
        int minIndex = -1;
        for (int i = 0; i < locations.Length; i++) {
            float distance = Vector2.Distance (myLocation, locations [i]);
            if (distance < minDistance) {
                minDistance = distance;
                minIndex = i;
            }
        }

        // Activate only the GameObject tied to the nearest point.
        for (int i = 0; i < objects.Length; i++) {
            objects [i].SetActive (i == minIndex);
        }
    }
}


I also added image targeting using the new ARKit 1.5, which was very exciting since it works great! Check below.

For now I've been working mostly with placeholders, so when the device recognizes the arch, a blue capsule appears.
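For context, the Unity ARKit Plugin exposes image-anchor events roughly like this. This is a minimal sketch, not the project's actual code: `archCapsule` is a placeholder field, and the exact event and helper names depend on the plugin version.

```csharp
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit Plugin namespace

public class ArchImageTrigger : MonoBehaviour {

    // Placeholder: the object to show when the arch is recognized.
    public GameObject archCapsule;

    void Start () {
        archCapsule.SetActive (false);
        UnityARSessionNativeInterface.ARImageAnchorAddedEvent += OnImageAnchorAdded;
    }

    void OnImageAnchorAdded (ARImageAnchor anchor) {
        // Place the capsule at the detected reference image's position.
        archCapsule.transform.position = UnityARMatrixOps.GetPosition (anchor.transform);
        archCapsule.SetActive (true);
    }

    void OnDestroy () {
        UnityARSessionNativeInterface.ARImageAnchorAddedEvent -= OnImageAnchorAdded;
    }
}
```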


At that point Mapbox and ARKit 1.5 were living in different projects, so the next technical step was to have the three scenes (character 1, character 2, and the encounter) inside the same project.

This took me a bit longer than expected, but after closely analyzing the files and replacing the older version of ARKit with 1.5 inside the Mapbox example, it finally happened.

The next step, then, was to add the actual images to the scene in space in a nice UX way.

So that's what I've been playing with lately, because it's actually kind of hard.

One idea was to set the images active once planes were detected. But as you can see below, this was not the best: the images appear too close, and since you don't have much control over how they show up, I don't believe it would work for this project.
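With the Unity ARKit Plugin, that experiment looked roughly like the sketch below. This is an illustration, not the project's actual code: `archivalImages` is a placeholder, and event names depend on the plugin version.

```csharp
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit Plugin namespace

public class ShowOnPlane : MonoBehaviour {

    // Placeholder: the image objects to reveal once a plane is found.
    public GameObject[] archivalImages;

    void Start () {
        UnityARSessionNativeInterface.ARAnchorAddedEvent += OnPlaneFound;
    }

    void OnPlaneFound (ARPlaneAnchor anchor) {
        // As soon as ARKit detects a plane, show the images.
        // In practice this gives little control over where they appear.
        foreach (GameObject img in archivalImages) {
            img.SetActive (true);
        }
    }

    void OnDestroy () {
        UnityARSessionNativeInterface.ARAnchorAddedEvent -= OnPlaneFound;
    }
}
```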

So I decided to add GameObjects set as trigger colliders randomly in space. For now they have materials so I can see them and play with their arrangement.

In the following video (made during the snowstorm), I changed the locations so I could test it on ITP's floor. All those features (geolocation, image targeting, objects colliding) are working.

Below is the script I made to generate the random colliders and add the images in array sequence.

using UnityEngine;
using System.Collections;

public class CamSpawn : MonoBehaviour {

    // Archival image objects to cycle through, in sequence.
    public GameObject[] imageScene;

    public GameObject sceneNYU;

    public GameObject colliderObject;

    private int imageCount = 0;
    public int imageMax = 10;

    private Camera cam;

    int collideLoop = 10;   // how many times to scatter colliders on Start
    int loopNum = 2;        // colliders spawned per Spawn() call

    int spawnCount = 0;
    int spawnLimit = 2000;
    float delayCount = 0.5f;

    private bool TapSpawn = true;

    void Start () {
        cam = GetComponent<Camera> ();

        // Scatter the invisible trigger colliders around the scene.
        for (int hey = 0; hey < collideLoop; hey++) {
            Spawn ();
        }
    }

    void minSpawnCount () {
        spawnCount -= 1;
    }

    void spawnBool () {
        TapSpawn = true;
    }

    void Spawn () {
        for (int b = 0; b < loopNum; b++) {
            // Random position spread out in front of the starting point.
            Vector3 position = new Vector3 (Random.Range (-10.0f, 10.0f), 0, Random.Range (10.0f, 50.0f));
            GameObject obj = (GameObject)Instantiate (colliderObject, position, Quaternion.identity);
            obj.transform.parent = sceneNYU.transform;
        }
    }

    void OnTriggerEnter (Collider other) {
        if (spawnCount <= spawnLimit && TapSpawn) {

            // Place the next image a few meters in front of the camera.
            float distanceToCamera = 3.0f;
            Vector3 position = cam.transform.position + cam.transform.forward * distanceToCamera;

            GameObject obj = (GameObject)Instantiate (imageScene [imageCount], position, Quaternion.identity);

            // This object is a child of scene NYU.
            obj.transform.parent = sceneNYU.transform;

            spawnCount += 1;

            // Destroy the image after 60-70 seconds and free up a spawn slot.
            float tempRate = Random.Range (60.0f, 70.0f);
            Destroy (obj, tempRate);
            Invoke ("minSpawnCount", tempRate);

            // Brief cooldown before the next image can spawn.
            TapSpawn = false;
            Invoke ("spawnBool", delayCount);

            // Cycle through the image array.
            imageCount += 1;
            if (imageCount == imageMax) {
                imageCount = 0;
            }
        }
    }
}



The "colliders" way of spawning images in the park worked well, but it got harder once the colliders were transparent. Sometimes they would cluster in a specific region, and that was not creating the experience I wanted.

So I developed another script that spawns the images according to the distance from the camera, and that worked much better. It was a bit of a hassle having to customize all the images to be at the right height and size, but it's working well.
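That second script isn't shown here, but the core of the distance-from-camera idea can be sketched like this. This is a minimal sketch under my own assumptions: the field names, the 3-meter offset, and the 5-meter trigger distance are placeholders, not the project's actual values.

```csharp
using UnityEngine;

public class DistancePlacer : MonoBehaviour {

    public GameObject[] imageScene;      // archival images, in story order (placeholder)
    public Transform cam;                // the AR camera (placeholder)
    public float spawnDistance = 5.0f;   // meters walked between images (assumed)

    private int imageCount = 0;
    private Vector3 lastSpawnPosition;

    void Start () {
        lastSpawnPosition = cam.position;
    }

    void Update () {
        if (imageCount >= imageScene.Length) return;

        // Each time the camera has moved far enough, place the next image ahead of it.
        if (Vector3.Distance (cam.position, lastSpawnPosition) > spawnDistance) {
            Vector3 position = cam.position + cam.forward * 3.0f;
            Instantiate (imageScene [imageCount], position, Quaternion.identity);
            lastSpawnPosition = cam.position;
            imageCount += 1;
        }
    }
}
```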

Also, for better user testing, I added a “demo UX” with the introduction audio.

Below you can check the demo:
