Getting Started with Unity

For the last tool in our Animations class, we were assigned to work with Unity. I had been working on creating 3D environments in three.js (a super interesting JavaScript library for creating 3D sketches directly in the browser) for my Intro to Computational Media final, and was excited to dive in. In my experience, three.js is a fun library for prototyping, but it lacks a more “finished” touch and the ability to develop more complex interactions, like the ones I see people creating with Unity or Unreal. So it was finally time to get started 🙂

Roll-a-Ball Tutorial

So, first things first: as Gabe, our animation teacher, recommended, I started by doing the Roll-a-Ball tutorial.

The Roll-a-Ball tutorial walks you step by step through making a simple 3D game, where you control the ball's movements using your keyboard arrows. It was super straightforward and a nice way to start getting the hang of how Unity works. You can try to create it yourself by clicking here.

Below you can watch a video of me playing it. I am really bad at it, btw.

Creating a Character and Animating it in Unity

So, for a second step, we had to create a character to animate. I experimented with creating mine in Fuse, an Adobe program that lets you create 3D characters to animate later, but as they all looked a bit weird, I decided to go all the way weird with a monster I found in Mixamo (another Adobe program that lets you animate your characters or get pre-made characters from its library).

I also downloaded my monster along with a dancing animation.

So, by following the Make a Character Controller in Unity tutorial, made by my Animations teacher, I was able to make my monster dance and move, just like the ball in the Roll-a-Ball tutorial, but now with a bit more style and charisma <3

In the following video you can watch Bob, my bellydancing monster, moving.

Creating my first AR using Unity+Vuforia

As the last step, I wanted to make Bob (my monster) dance on other surfaces. So I decided to use the Starbucks logo on a pastry bag as a target and set Bob to appear as an augmented reality animation using Vuforia.

So I added it to my Target Database and followed the Getting Started with Augmented Reality in Unity tutorial, also made by Gabe.

And this was the result:

I think it's really cute and charming how you can see Bob's ass when he turns. He is definitely a great dancer.

As you can see, I am just getting started, and I am excited to explore Unity further and create new animations soon.

 

Everyday Windows

Sexism, harassment, abuse... They have all historically been regarded as personal issues, relegating them out of public discussion and diverting attention from their status as systemic sociopolitical problems. We want to show what happens behind the doors (or windows, in this case): what women (from our experience) go through, and how society as a whole contributes to the spread of these issues.

This VR experience about experiences was created in three.js and rendered with the help of the WebVR API. It's mounted on a node.js server connected via WebSockets to an Arduino MKR1000.

This project was developed by Nicolas Pena-Escarpentier and me, and it was our final for Intro to Physical Computing and Intro to Computational Media.

Here you can check the GitHub repository and the link (we will work on making a full web-only version soon). Below I'll explain a bit about our process and the development of the project.

The Sketch

three.js

In order to have all the windows in the same sketch, we created an individual scene for each one, only changing the index of the one being rendered. As most of the rooms share the same components (floor, cylindrical wall, and images of the cutouts and windows), all the scenes are created within a for loop with their common elements, plus a specific function for the assets unique to each scene. The images are simply textures on top of planes, with transparency from the PNG files. Check the code below:

function createEnvironment(){
  for (let i = 0; i < 6; i++) {
    scenes[i].background = new THREE.Color( 0x555555 );
    createLights(i);
    createFloor(i);
    createRoom(i);
  }
  scene0();
  scene1();
  scene2();
  scene3();
  scene4();
  scene5();
}

function createLights(ind){
  let p_light = new THREE.PointLight(col[ind], 1.5, 1000, 2);
  p_light.position.set(0, 10, 0);
  scenes[ind].add( p_light );
}

function createFloor(ind){
  let floorGeo = new THREE.CylinderGeometry(roomSize*4, roomSize*4, 1, 24);
  let floorMat = new THREE.MeshLambertMaterial({
    color: 0x666666,
    emissive: 0x101010,
  });
  let planeF = new THREE.Mesh(floorGeo, floorMat);
  planeF.position.set(0, -roomSize/4, 0);
  scenes[ind].add(planeF);
}

function createRoom(ind){
  // planes w/ images
  let plGeo = new THREE.PlaneGeometry(roomSize, roomSize, 10, 10);

  // images
  let windowMat = new THREE.MeshBasicMaterial({
    map: loader.load("media/" + ind + "/window.png"),
    side: THREE.DoubleSide,
    transparent: true,
  });
  let personMat = new THREE.MeshBasicMaterial({
    map: loader.load("media/" + ind + "/main.gif"),
    side: THREE.DoubleSide,
    transparent: true,
  });
  for (let i = 0; i < 4; i++) {
    let windowPlane = new THREE.Mesh(plGeo, windowMat);
    let personPlane = new THREE.Mesh(plGeo, personMat);
    let rad = 10;
    let posX = rad * Math.sin(i*Math.PI/2);
    let posZ = rad * Math.cos(i*Math.PI/2);
    personPlane.position.set(posX*6, roomSize/4, posZ*6);
    personPlane.rotation.y = Math.PI/2 * Math.sin(i*Math.PI/2);
    scenes[ind].add(personPlane);
    windowPlane.position.set(posX*8, roomSize*.3, posZ*8);
    windowPlane.rotation.y = Math.PI/2 * Math.sin(i*Math.PI/2);
    scenes[ind].add(windowPlane);
  }

  // room walls
  let wallGeo = new THREE.CylinderGeometry(roomSize*5, roomSize*5, 250, 24, 20, true);
  let wallMat = new THREE.MeshLambertMaterial({
    color: 0xd0d0d0,
    side: THREE.DoubleSide,
  });
  let wall = new THREE.Mesh(wallGeo, wallMat);
  wall.position.set(0, 230, 0);
  scenes[ind].add(wall);
}

And this is how they look:

WebVR

Getting the sketch to display in VR was tricky: the WebVR implementation has been evolving, and a lot of the information out there has changed drastically. We'd also like to thank Or Fleisher for helping us get started with WebVR.

We start by telling the renderer to enable VR, then load the VREffect package to create a separate render for each eye, as well as the VRControls package to incorporate the accelerometer rotations for correct camera control. It is also useful to install the WebVR API Emulation Chrome extension in order to test the sketch with the new controls.

renderer.vr.enabled = true;

effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

controls = new THREE.VRControls( camera );
controls.standing = true;
camera.position.y = controls.userHeight;
controls.update();

Then, we need to find out if there's an available VR display using navigator.getVRDisplays(). In this case, we default to the first (and most likely only) VR display. With this display, we can also use the WebVR library tool to automatically create the button that enters VR.

// sets up the VR stage + button
function setupVRStage(){
  // get available displays
  navigator.getVRDisplays().then( function(displays){
    if(displays.length > 0) {
      vrDisplay = displays[0];
      // setup button
      vrButton = WEBVR.getButton( vrDisplay, renderer.domElement );
      document.getElementById('vr_button').appendChild( vrButton );
    } else {
      console.log("NO VR DISPLAYS PRESENT");
    }
    update();
  });
}

Now, the animation function is a tricky one, because it changes the rendering pipeline. Usually the browser is the one that requests a new animation frame when it is ready to display one, but in this case the VR display has to ask for it. Also, as we're using two different renderers (the normal one and the VREffect), we need to discriminate between the two states, which can be done with the vrDisplay.isPresenting parameter.

function animate(timestamp) {
  let delta = Math.min(timestamp - lastRenderTime, 500);
  lastRenderTime = timestamp;

  if(vrDisplay.isPresenting){ // VR rendering
    controls.update();
    effect.render(scenes[current], camera);
    vrDisplay.requestAnimationFrame(animate);
  } else { // browser rendering
    controls.update();
    renderer.render(scenes[current], camera);
    window.requestAnimationFrame(animate);
  }
}

It is also worth noting that we had to add the WebVR Polyfill package for everything to work outside Google Chrome (remember, this is a browser-based implementation!).
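
Hooking the polyfill up is just a matter of instantiating it before any WebVR calls are made. Here is a minimal sketch, assuming the webvr-polyfill script is already loaded on the page:

// instantiate before any WebVR calls; on browsers without native WebVR
// support, this patches navigator.getVRDisplays() and the related APIs
var polyfill = new WebVRPolyfill();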

Node.js Server

To learn the basics of node, npm, and how to mount a server, Daniel Shiffman's Twitter Bot Tutorial and this lynda.com course are an amazing start.

Thanks to these tutorials, mounting the server was easy, but the WebSocket implementation was rather difficult. We started with socket.io, but it implements extra things that interfered with the Arduino connection. Thankfully, Tom Igoe referred me to his book Making Things Talk, where he successfully implements this connection using the ws library on the server side. So, following one of his examples (all of them are on GitHub), we got it working perfectly.

// websocket setup
var WebSocket = require('ws').Server;

wss = new WebSocket({ server: http });

wss.on('connection', function(ws_client){
  console.log("user connected");

  ws_client.on('message', function(msg){
    // check if the values are valid/useful
    // (note: "intComing != NaN" is always true in JS, so we use isNaN)
    var intComing = parseInt(msg);
    if(!isNaN(intComing) && intComing >= 0 && intComing <= 5){
      _scene = intComing;
      broadcast(_scene);
      console.log("change scene broadcast: " + _scene);
    }
  });
});

function broadcast(msg){
  wss.clients.forEach(function each(client) {
    client.send(msg);
  });
}
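
For reference, the browser side of this connection is only a few lines. Below is a minimal sketch of it (not code from the post), assuming the page connects back to the same server and that current is the scene index used in animate():

// browser side: connect back to the server and switch scenes on broadcast
var socket = new WebSocket('ws://' + window.location.host);

socket.onmessage = function(event){
  var ind = parseInt(event.data);
  // same validity check as on the server side
  if(!isNaN(ind) && ind >= 0 && ind <= 5){
    current = ind;
  }
};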

Another thing worth noting: to keep the application running on the DigitalOcean server, we used the [forever](https://www.npmjs.com/package/forever) package.

Arduino

For this project, we used an Arduino MKR1000, because we needed a way to communicate wirelessly with the phone (via a server, in this case) without resorting to a computer. In the beginning we tried using a Bluetooth module, but as the project is web-based, browsers' security measures do not let them access Bluetooth (or other hardware components) easily. It was also way harder than we initially thought it would be, and the WiFi communication was much easier.

Internet connection

Getting the Arduino to connect to the internet is pretty straightforward; following this tutorial was all we needed.

The connection with the server was harder. After extensive web searches, we asked Tom Igoe, who recommended his book Making Things Talk, where he dedicates a whole chapter to this. So, following the book's example and the ArduinoHttpClient library example, we got everything set up.

// the original includes were stripped by the blog formatting; for the
// MKR1000 + ArduinoHttpClient setup described above, they would be roughly:
#include <SPI.h>
#include <WiFi101.h>
#include <ArduinoHttpClient.h>

// server and port are defined with the rest of our globals
WiFiClient wifiClient;
WebSocketClient webSocket = WebSocketClient(wifiClient, server, port);

void connectToServer() {
  Serial.println("attempting to connect to server");
  webSocket.begin();

  Serial.println(webSocket.connected());
  if (!webSocket.connected()) {
    Serial.println("failed to connect to server");
  } else {
    Serial.println("connected to server");
  }
}

void sendWindow(int num){
  // check wifi connection (connectWiFi() is defined elsewhere in the sketch)
  if(WiFi.status() != WL_CONNECTED){
    connectWiFi();
  }
  // check server connection
  while(!webSocket.connected()){
    connectToServer();
  }
  // send the message!
  webSocket.beginMessage(TYPE_TEXT); // message type: text
  webSocket.print(num); // send the value
  webSocket.endMessage(); // close message
  Serial.print("window ");
  Serial.print(num);
  Serial.println(" sent");
}

Interface components

In the beginning, we tried using a capacitive touch sensor (MPR121) and covered the borders of the windows with capacitive fabric to make it work. The code was easily done by following the Adafruit MPR121 tutorial, plus a quick code fix. Sadly, user testing led us to realize this was not the best choice: people would often try to touch the window itself rather than the border, due to poor instructions. So we opted for the not-as-fun, more-conventional approach and got LED momentary pushbuttons.

In order to light up the rooms with colors that match the lights in the sketch, we planned to use RGB LEDs, but they posed a bunch of difficulties: they need a lot of pins (3 for each LED * 6 windows = 18 pins!!!), or a lot of mathematics if we were to hard-code them to use only one pin per window. Using NeoPixels instead was a much better idea, and amazingly simple to code! With the Adafruit NeoPixel library, it's as easy as giving it the pin number, how many pixels there are, and the RGB values for each. Et voilà! That way, everything stays in the code, in case we want to change anything.

// the pixels object comes from the Adafruit NeoPixel library, declared
// elsewhere in our sketch, roughly as:
// Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PIXEL_PIN, NEO_GRB + NEO_KHZ800);
void windowLight(int num){
  // window color
  int r = window_colors[num][0];
  int g = window_colors[num][1];
  int b = window_colors[num][2];
  // light the three pixels behind this window
  for(int i = 0; i < 3; i++){
    int index = window_pixels[num][i];
    pixels.setPixelColor(index, pixels.Color(r, g, b));
    pixels.show();
  }
}

Resources

Here is a list of (previously unreferenced) web resources from which we took some code or help to implement everything:

PhotoJukebox: mixing photography, music & physical computing.

If you've looked at some of my other projects on my blog or portfolio, you've probably noticed that I like playing with photography, whether in the political way we perceive images or in the personal way pictures are attached to memories. As a designer, maker, producer, or whatever the name is for what I do, I find it fun to create with it.

That said, my boyfriend's birthday was coming up, and I wanted to give him a special gift. Not only because he was turning 28, but because he got a job in the city and made it happen to move to New York so we could be together, and he was arriving only a couple of days before his birthday. My boyfriend is a musician, so since the beginning of our relationship he has frequently sent me recordings of himself playing songs that somehow relate to what we are living in that moment.

Meanwhile, I partnered with my colleague and friend Jenna, who is an awesome designer (check her work here), for our Physical Computing midterm project. Inspired by this context, I came up with the Photo Jukebox idea. She loved it and we decided to make it happen.

You can check below that we did it! Further on in this post, I'll explain how.

Inspiration

My main inspiration came from a project by a colleague at ITP. Amitabh is a genius when it comes to Physical Computing and Arduino (check his work here), and he once built a jukebox that played specific songs when you placed a related acrylic sheet on top of the machine. The acrylic sheets were really fun and were based on images of the bands that would then be played, as you can see below.

So his project triggered an idea (thanks Amitabh!!!!): what if we could put personal pictures there? Today, the way we interact with personal photographs is mainly digital, posting on our social media. At the same time, the few that are printed and take a physical form stay in barely touched photo albums, or in beautiful but not-at-all-interactive frames. How fun would it be to have a machine like Amitabh's, but with a different approach and design, that could give the user a unique way to interact with pictures, listen to music, and trigger the good memories and feelings attached to those photographed moments?

And so I started sketching. I wanted the design to have a vintage look and to be made with wood, giving it kind of a Victrola feel.

The idea was to use the back of each photograph to close certain circuits, acting like pressed buttons. Consequently, whichever button was pressed would trigger the specific song related to that picture. I knew we could easily do that using copper tape.

Testing the concept

Thus, we got started by gathering the materials to create and test the circuit and the code.

For this first step, we needed one major item that allows the Arduino to actually play music and read an SD card without a computer: an MP3 shield.

As we were really excited to get started with the project, Jenna and I ran to Tinkersphere and purchased the geetech.com MP3 shield without first testing its library, which turned out to be an immediately regrettable decision. There was very little documentation, the provided link to the datasheet was broken, and the library, incredibly, didn't work.


*screams*

Luckily, Aaron (our miracle-worker of a resident) was able to help us hack the Adafruit MP3 shield library to work with our questionable Tinkersphere purchase. Unfortunately, that only opened the floodgates of pain and suffering, as there was still a lot of crazy MP3 shield logic to deal with (delays, booleans for each song, the concept of interrupts and how they apply to serial communication...). Eventually, many office hours (thanks Yuli and Chino) and even more if statements got the job done. However, we weren't able to figure out how to get combinations of switch states to allow for more songs.


preview of the madness

We tested the code + circuit with regular pushbuttons, and it worked!

 

So we threw together a rough prototype with the copper tape buttons to test the actual technical concept.

And it also played the songs as expected!

Creating the Enclosure

With the circuit working, it was time to work on the enclosure. We bought a utensil tray from The Container Store (shout out to this video), and laser cut an interface, first with cardboard:

Then with acrylic:

Building the circuit

This was a really challenging part. How do you attach and solder the buttons, battery, potentiometer, and on/off switch in a truly stable way, so it can become a durable gift?

Until then, we had mainly been prototyping: playing with soldering, but not actually worrying about making everything fit or keeping it from being interrupted or ruined by a simple shake. We started working by ourselves and, after tons of hours, came up with a working circuit. Still, we had a lot of issues and were panicking over its instability.

This is when you book office hours, and Aaron (the miracle Physical Computing resident) came to save us again.

He explained that, for final prototypes, you should ALWAYS work with multi-stranded wire: it is more flexible, fits better into enclosures, and won't break as easily once soldered. He also showed us some awesome German plugs (the orange/transparent things you see in the pic) that can join multiple wires, so we could connect all our grounds and all our 5V lines very easily.

Check below what our life looked like before and after Aaron's help.

And our baby was born!

Jenna and I were both very proud of what we accomplished working together in a week and a half.

The gift was amazing and my boyfriend loved it.

It will definitely be part of our living room, sitting on the coffee table right next to the sofa. It will be perfect for when we feel nostalgic and want to go through our special moments. Also, when we invite guests over, I believe they will be curious about it and may play with our Photo Jukebox. With that, they will surely learn more about our story together and feel, through photography and music, a bit of our love.

Period Paper Signal

In this week's (and final!!) project for Intro to Fab, we were assigned to use a motor. We were supposed to make a creation that moved using servos, DC motors, steppers, or the other examples shown in class.

When I heard about this assignment, I was reminded of a project I had heard about at the beginning of the semester called Paper Signals. Paper Signals is a Google Voice experiment that explores how physical things can be controlled with voice. The creators designed a few examples of paper controllers that track things like weather, Bitcoin, rocket launches, and more.

For the experiment, they fabricate the small signals with paper and use servos to create the movement once the data APIs connected to Google Voice trigger it. As I really liked the approach of this experiment, and I am really excited about connecting online digital data to offline physical computing objects, I thought it was a great prototype to try, so I purchased the materials and got started.

The Paper Signals page is very straightforward: it gives you a very nice tutorial on how to download and connect the data, and provides PDF templates for the paper fabrication. So my first step was to recreate an original Paper Signal as an example.

So I recreated the countdown.

For that, I printed the Paper Signals PDF templates and cut them out.

I also assembled the Adafruit Feather with the micro servo and put the code and its libraries together. There was a bug in the Arduino code, so at first it wouldn't compile, as you can check below.

But with some help from Mathura (thank you!!!) we fixed it. It was just a matter of declaring the required variable, which had probably been deleted by mistake by some user on GitHub. Below you can check the added void function.

And it Compiled!

And then I was ready to put it together!

 

For the second step, I aimed to create my own Paper Signal. So I thought: Paper Signals are supposed to inform you, in a cute way, about relevant data. So what very relevant data would I like to be warned or informed about on a daily basis?

If you are a human being and you have a uterus, you will have (or had, or probably have) a couple of days in the month when you get your period. For most of the fortunate people who have ovaries, this time of the month comes along with pain, discomfort, and hormonal changes: a lot of variables that negatively affect your mood. And if you live with someone, that person also has to live with your mood during those days.

So I created my Paper Period Signal!

A paper signal that is connected to my Google Calendar and, when I set my period days (which I usually set for 2 days earlier, because of PMS), is able to inform everyone who sees it!

For now, I have put together the physical part (customized from the original rocket Paper Signal template), and I am working on getting the Google Calendar API to feed the following part of the code, replacing the rocket launch date, as you can check below.
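
To give an idea of what that will look like, here is a hypothetical helper (not the Paper Signals code itself) that asks the public Google Calendar API v3 for my next event matching "period" and logs its start date, which would then stand in for the rocket launch date. CALENDAR_ID and API_KEY are placeholders:

// hypothetical helper: fetch the next "period" event from the
// Google Calendar API v3 (CALENDAR_ID and API_KEY are placeholders)
var CALENDAR_ID = 'your_calendar_id';
var API_KEY = 'your_api_key';

var url = 'https://www.googleapis.com/calendar/v3/calendars/' + CALENDAR_ID +
    '/events?key=' + API_KEY +
    '&q=period&singleEvents=true&orderBy=startTime' +
    '&timeMin=' + new Date().toISOString();

fetch(url)
  .then(function(res){ return res.json(); })
  .then(function(data){
    if (data.items && data.items.length > 0) {
      // all-day events use start.date; timed events use start.dateTime
      var start = data.items[0].start.date || data.items[0].start.dateTime;
      console.log('next period starts: ' + start);
    }
  });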

 

Experimenting with different fabrication materials

For this week's Intro to Fab assignment, we were supposed to use different materials and combine them into something. I had already done something similar for my repeatability assignment, in which I built tiny succulent vases combining cement with wood (which you can see in the image above). So this time I was excited to try working with metal.

Due to time constraints (finals), I was not able to go to Metalliferous, the metal and metal-tool supply house recommended in class. Accordingly, if I still wanted to work with metal, I would have to find another way to get my material. So I started thinking about everyday metal objects we use in our lives. And what is the first thing that pops into your mind? Cutlery.

So I started researching different ways to use cutlery as a source material, and I was impressed by how many different things people create from this "ready-made": lots of jewelry such as rings, earrings, and even necklaces; picture frames; animal sculptures; and so on. In this search, I found a really cute set of forks bent to look like human hands, and I thought that would be a fun challenge for this week's assignment.

I purchased 4 forks at Home Depot; as I was aiming to make at least one, I believed it was better to play it safe and get some extras. Then I went back to ITP's shop to get started. I tried bending them with pliers, which worked okay: I broke some forks in the process, but still got two nicely bent. Still, I needed to bend them even further, as my idea was to make the fork look like it was making the peace sign.

So the shop staff recommended that I use the clamps and my body weight to bend the fork while it was held by the tool. That helped me bend it further. I also had an extra fork "tooth" from a fork I had broken earlier when first trying with the pliers (the secret to bending these forks is to do it gently and slowly, otherwise they break), so I decided to glue it onto the fork, adding a "fifth finger" so it would look more like a human hand. I used superglue for that.

Because of the bending marks and a bit of glue visible on the metal, I sanded the fork and painted it gold.

While it was drying, I started to think about the holder for the fork. I found a good chunk of leftover wood in the shop and cut it into a small piece. As I wanted to place the fork inside the wood, Ben (our Intro to Fab teacher) recommended that I cut the wood holder in half, measure the fork, sand the space with the drill sanding machine until the fork fit, and then glue the two pieces of wood back together. And that's what I did.

Before that, though, I realized it was better to cut the fork down, so I cut it with the metal saw to fit better into the wood holder. I also painted the wood holder black, and that's how I made my fork-peace&love-metal&wood sculpture.

 

Music Player Enclosure

My assignment for Intro to Fab this past week was to create a concept for a project enclosure.

The enclosure should have a function and buttons for manipulating it. As I am really focused on my finals for PComp and ICM, I knew I wouldn't have much time for this project. Besides, one of the main things I am working on for my final is fabricating the cardboard dollhouse that will be its set, so I have already been spending a lot of time at the laser cutter.

Accordingly, to get away from the laser cutter and still make a simple yet useful enclosure, I got a simple plastic box at Tinkersphere.

I particularly liked this box size because you can fit a considerable number of important things in it (a breadboard, an Arduino Uno, a battery, a speaker) and still have space left. For the controls, I also got an on/off switch (you can never have too many of those) and used two potentiometers from a previous project. I used a ruler to mark where I wanted to place the holes.

And now it was time to use the drill press (is that the name of the machine?), which I was using for the first time. How exciting it was to get away from the laser cutter for a bit and work manually. I was really happy with the result and surprised by the simplicity of the process. I was done in about an hour. Wow.

So now I could add the buttons and use my Music Player Enclosure as I wish!

 

A really simple project, but I discovered important new tools and resources 🙂

 

ITP Students' Finals Laser Cut Souvenir

We are all going crazy over our final projects for Intro to Physical Computing and Computational Media. At least I am. Coming up with ideas, thinking about their execution, starting to develop them... giving up and deciding the first idea is dumb or really hard to accomplish in the given time. Thinking about a second idea... going back to the first one... wait, did I hear a third idea? It is indeed a challenging process.

Therefore, for the laser cut assignment I decided to create little keychains to give as a fun gift to the class or, if I managed to produce more, even to distribute to ITP students.

I had a spare thick (6mm) acrylic sheet that I bought at Canal Plastics for my PComp midterm. Since I had spent a considerable amount last week on my succulent vases (which I glued yesterday; I will update last week's post with the final results later today), I decided to create the keychains from it.

I was curious to see the issue Ben, our teacher, mentioned about creating multiple tiny things on the laser cutter. The laser cutter (the 75W one, which I used because of the thickness of the material) has an x range of 0-32 and a y range of 0-20. The closer to 0, the stronger the laser. Therefore, if you use an 18x12 sheet (like mine) to create the keychains, chances are the ones located in the first 5x5 area will be fine, engraved and cut, while the others will remain unfinished and have to go through a second/third/fourth/... laser cut pass.

So I created my files in Illustrator, using 0.1px black strokes for what I wanted to engrave and 0.01px red strokes for what I wanted to cut. I tested it on cardboard first. I wanted to test two sizes and ask people which size they would prefer for a keychain.

Everyone liked the small one better.

Then I tested on a scrap sheet of the same material.

And now it was time for the official run. My first aim was to cut my whole acrylic sheet into the tiny keychains. So, besides setting the cutting lines for the keychains themselves, I also set the file to cut across the whole acrylic sheet after every two vertical rows of keychains. I decided to do that to avoid the failing laser cut issue that Ben mentioned.

Throughout the laser cutting, I had to repeat my process several times: I pressed "go" to engrave about 4 times, and would have pressed it more if it weren't for the time. I booked 2 hours and was still running out.

To my surprise, the keychains placed at the bottom of my two vertical rows were not ready by the time I had to leave the machine. Only the keychains in the top 8 inches of each row were actually well engraved and cut. I imagined that doing two rows at a time would be enough, but in the end it wasn't.

Still, I got about 11 okay keychains (I booked an emergency 30 minutes of laser cutter time right before class, so maybe I'll have 17 by the time class starts and be able to give one to everyone). And, mainly, a couple of lessons to keep in mind: laser cutting can take a lot of time, so always book waaay more time than you think you'll need; and, especially if you are producing many tiny objects, keep in mind the x and y power factor and plan around it.

To finish, I used a Sharpie and a marker to fill my engraving with black. I bought a lot of keychain holders from Amazon; you can get 100 units for $5. It didn't turn out exactly how I expected but, well, embrace your process 😉

 

Repeatability Succulent Vases

For the second week's assignment in my Intro to Fabrication class, we had to create 5 identical objects. The aim was to learn the process of creating replicas.

As I have just moved to a new apartment and am really looking forward to filling my place with plants, I thought right away of making vases. Also, since my last project only used paper (besides the circuit to make the lantern), I wanted to challenge myself and work with different materials, especially materials I had never worked with before.

I love these "Pinterest" concrete vases, and it seemed interesting to work with concrete. I also wanted to test my skills with the miter saw and wood, since it looks a bit scary but at the same time cool to play with. On top of that, working with copper tape last week really got me into the idea of bringing something copper-colored into my house. There is something about this industrial style that is very visually appealing.

So the first idea was to create a square concrete vase with a square wooden base. The wooden base I would spray paint (something I had also never done before) with a copper-colored spray. The result should look similar to the picture above.

Step 1 – Getting the materials

There are a considerable number of YouTube tutorials and blog posts on how to make these vases. This one was my main reference.

When I googled where to buy concrete, Home Depot was the first answer. So on Friday I headed to the store, where I purchased my plants (I was lucky to find tiny ones, so I could make the small square vases I wanted), a long piece of wood wide enough to fit the tiny vases with spare space for the concrete borders, a pre-mixed bucket of refined concrete, copper-colored all-materials spray paint, and Gorilla Glue to attach the wooden bottoms to the concrete at the end.

Step 2 – Making the mold & adding concrete

I set aside my Sunday afternoon for this project. The video tutorial mentioned that you need 24h before taking the piece out of the mold, and 48h for it to dry completely. So I thought that by doing it on Sunday I would be safe.

*laughing*

For this assignment we were not allowed to use the laser cutter. Therefore, to make the mold, I first designed it in Photoshop, printed it on regular A4 paper, and cut it out by hand. I chose a cardboard from ITP's shop cardboard shelf and, as I was really worried about making a stable mold (in the tutorial video, one of the mistakes the YouTuber mentions is that she first used a very light cardboard and it collapsed), I made the questionable choice of picking a thick one. Later you'll understand why.

I put the A4 paper mold on top of it and cut 5 cardboard molds with a knife. I assembled the little squares and, still afraid of a collapse, made a really secure enclosure with the glue gun and tape. This may also have been a questionable move.

After that, I put vaseline on the insides of the molds. I also kept the plants' little plastic pots to use as the holes where my succulents would go. I added the concrete and everything came together. Now I just had to let it dry and hope for the best.

Step 3 – Getting it out of the mold (after 72h and still not dry) & making the wooden base

Since Sunday, I had been checking every day on my five concrete baby vases to see if they were ready to come out into the real world. This was when I noticed something was wrong with my mold: I had made it so stable and enclosed that there was barely any space for air to get in, so it wasn't drying. On Wednesday afternoon I decided to take the molds off and let the vases dry without them, even if that meant embracing a lot of imperfections, since parts of the concrete stuck to the cardboard and were really hard to get off.

 

Meanwhile, I made the wooden bases. It was incredibly fun and satisfying to work with the miter saw. At least something was working out!

 

So this is how my not yet dry vases were looking so far:

 

I sanded the wooden squares to prep them for paint and spray painted them with my new favorite color.

Step 4 – Waiting for it to dry and hoping for the best

My plants are still homeless, but I'm sure the concrete WILL dry at some point, and they will soon be sitting in cute, trendy, industrial-style copper vases.

 

[to be continued…]