for I am a woman: revisiting the Orthodox Jewish morning blessings

For my Electronic Rituals and Fortune Telling class final project I wanted to create a ritual/moment that helped me connect on a daily basis with things that go beyond my daily routine. Since I wasn’t sure what specifically to connect with or how to create this connection, I started to research ritualistic practices within my family and my religious background for inspiration. In this process, I came across the daily Jewish morning blessings: Birchot Hashachar.

Although it is practiced mainly by Orthodox Jews and not commonly used by everyone who follows Judaism, Birchot Hashachar contains an excerpt that makes me feel very uncomfortable as a Jewish woman: in the Orthodox daily prayer service, men thank ‘God who has not made me a woman‘, whereas the parallel blessing for women thanks ‘God for making me according to his will‘.

Blessed be he who has not made me a woman

בָּרוּךְ אַתָּה יְיָ אֶלֹהֵֽינוּ מֶֽלֶךְ הָעוֹלָם, שֶׁלֹּא עָשַֽׂנִי אִשָּׁה

It is not new that, like the other major religions of the Western world, Judaism fosters gender hierarchy and traditional gender ideologies. The Jewish tradition defines separate spheres for men and women, with men occupying the public sphere and women limited to the private one. Accordingly, women are exempted from many of the religious rituals that could undermine their devotion to domestic responsibilities. Still, I was surprised by such a straightforward gender-discriminatory statement being recited daily by such a large number of men.

In response to that, I created for I am a woman: a Chrome extension that gives me a daily reminder of the existence of this discriminatory blessing, along with a small dose of inspiration from people who defied/defy this status quo.

Every day at 10 AM this Chrome extension pops up a window in my browser showing information about a different extraordinary person who helped/helps define, establish, and achieve political, economic, personal, and social equality of genders.

The pop-up randomly chooses one inspirational person to display, selected from a JSON file where I manually added the data. This was the pop-up that I received today.

This was the first Chrome extension I created. At first, the idea for this project was to create a webpage. But as I started to research and user test, people often asked me “when would I visit this page??”. I couldn’t come up with a good answer, and that’s when developing it as something that pops up on your screen at a specific time seemed more appropriate.

In the background.js file of the Chrome extension, I added this code that checks the time and displays the pop-up with the content once it’s 10 o’clock.

var popup = chrome.extension.getURL('foriamawoman/index.html');
var win;
var alarmTriggered = false;

function buttonClicked(){
  console.log("button clicked");

  // check the clock every 10 seconds
  setInterval(checkTime, 10000);

  // open the popup once right away
  win =, '', 'width=750, height=360');
}

function checkTime(){
  var now = new Date();
  var hour = now.getHours();
  if (hour == 10){
    if (!alarmTriggered){
      win =, '', 'width=750, height=360');
      alarmTriggered = true;
    }
  } else if (hour > 10){
    if (alarmTriggered){
      // reset so the popup can open again tomorrow
      alarmTriggered = false;
    }
  }
}

You can check the code on my GitHub.
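The trigger-and-reset logic in checkTime can also be factored into a pure helper, which is easier to reason about and test; this is a sketch of the idea, not the extension's actual code:

```javascript
// Given the current hour and whether today's alarm already fired, decide
// whether to open the popup and compute the next "fired" state.
function alarmStep(hour, alarmTriggered) {
  if (hour === 10 && !alarmTriggered) {
    return { openPopup: true, alarmTriggered: true };
  }
  if (hour > 10 && alarmTriggered) {
    // past 10 o'clock: reset so the popup can open again tomorrow
    return { openPopup: false, alarmTriggered: false };
  }
  return { openPopup: false, alarmTriggered: alarmTriggered };
}
```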




Impossible Maps Final Idea

When I moved to New York, besides becoming a student again, a newbie in town, a foreigner and so on… I became a “Latina”. And the truth is that I’ve never thought of myself as a Latina. I am Brazilian. My first language is Portuguese. I do not have any Hispanic origins in my family. But yes, I was born in Latin America.

Accordingly, at the same time that it is uncomfortable to fill out census reports and forms here in the United States, it is interesting to understand how my identity as a human being is redefined once I moved to a foreign country.

My goal is to play with that idea of how culture, bias and identity are created and defined in American society and somehow transform that into a map. One reference that I really like is Alfredo Jaar’s A Logo for America, a piece that doesn’t explicitly talk about those concepts but carries them in its statement.

I’m not sure how I would develop that but I’ll look into more references and update it soon.






Feminist mapping and biased satellites

For this week’s readings we went over the article What Would Feminist Data Visualization Look Like? and the chapter on Representation and the Necessity of Interpretation from Laura Kurgan. Both readings invite us to rethink the way data is shown in maps: it is presented for a reason and a purpose, so there will always be a bias, and therefore a relationship of power, involved.

The first article touches on how feminist standpoint theory would approach data visualization, mentioning that all knowledge is socially situated and that the perspectives of oppressed groups, including women and minorities, are systematically excluded from “general” knowledge. It then suggests interesting approaches creators should consider when trying to develop “unbiased” maps according to feminist data viz, such as developing new ways to represent uncertainty, inventing new ways to reference the material economy behind the data, and creating ways to make dissent possible so we can trace our way back to the material that originated the visualization.

The second reading starts by breaking down the perception that satellite image analysis is somehow neutral and can be deliberately taken as a statement of fact. It mentions the use of satellite images to justify the invasion of Iraq, proving this point. Accordingly, it states that there is no such thing as raw data and suggests that working with data is a para-empiricism.

I believe that both readings prove their points. I really liked the suggestions in the feminist article about new ways to present data that would clarify the choices and make the “bias” explicit. What if we visually problematized the provenance of the data? The interests behind the data? The stakeholders in the data? I believe it is part of the visualization experience to highlight some aspects over others, according to the map’s functionality. Thus it is impossible not to create somewhat biased maps. Accordingly, as creators, it is our responsibility to be aware of those choices and keep them explicit to the public.

How does the income gap affect violence rates?

So for this assignment we were asked to play with our own .csv files to create a data visualization on a map using Mapbox. Thus, Hadar and I decided to compare two datasets: the Gini index of each country and its deaths by firearms.

The Gini index, or Gini coefficient, is a measure of statistical dispersion intended to represent the income or wealth distribution of a nation’s residents, and is the most commonly used measurement of inequality. Deaths by firearms is the number of deaths caused by the use of firearms during a year in that specific country. By collecting these data I aimed to analyze how income inequality relates to violence rates.
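For reference, the Gini coefficient can be computed from a list of incomes with the mean-absolute-difference formula; this is a sketch of the math itself, not the methodology behind the dataset we used:

```javascript
// Gini coefficient of a list of incomes: 0 means perfect equality,
// values near 1 mean extreme inequality.
function gini(values) {
  var n = values.length;
  var mean = values.reduce(function (a, b) { return a + b; }, 0) / n;
  var diffSum = 0;
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      diffSum += Math.abs(values[i] - values[j]);
    }
  }
  // mean absolute difference between all pairs, normalized by twice the mean
  return diffSum / (2 * n * n * mean);
}
```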

You can check the map here.

The blue circles represent the Gini index (the bigger the Gini index, the more social inequality), and the red circles represent the deaths by firearms.

From this map it doesn’t become clear whether there is a connection between these two datasets. I believe we have to map the values better and add the actual numbers from the datasets when the user hovers over one of the circles. Also, some circles are too big and cover some of the countries, which makes them hard to identify.
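One way to “map the values better” is to size circles by area rather than radius, since the eye reads area; here is a sketch with hypothetical parameter names:

```javascript
// Radius proportional to the square root of the value, so the circle's
// area is proportional to the data it represents.
function circleRadius(value, maxValue, maxRadiusPx) {
  return Math.sqrt(value / maxValue) * maxRadiusPx;
}
```

A country with a quarter of the maximum value then gets half the maximum radius instead of a quarter, which keeps the biggest circles from swallowing their neighbors.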

Besides, the datasets we used to collect this data are not very reliable. First, they don’t include all countries, and second, the data does not come from the same year.

Still, even though this map needs iteration, this was a very interesting experiment.




Magic Windows Final Project Documentation

As mentioned in the last post, for my Magic Windows final I continued developing a project that recreates Washington Square Park in 1968 by creating a story of two characters and visually placing archival images in space through AR while their monologues are played.

The monologues and their respective visual scenes are created using geolocation, and the final scene, when the two characters meet, is triggered by image targeting, since the meeting point is set to be under the park’s arch.

Therefore, I worked with Mapbox to set the geolocation in the park and with ARKit 1.5 for the image targeting with the app.

I used Mapbox and set two points, one at each extreme side of the park. I placed two different GameObjects to SetActive, one for each side, and created a script that checks the device location and activates the GameObject related to the point the user is closest to. And after some weeks of iterating, meeting with Mapbox and some work, it worked great!

(as I move closer to the right side of the park the red square is triggered; if I move to the left side, the green ball is triggered)

Here follows the *messy* script I’m using:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Mapbox.Unity.Location;

public class locationRelative : MonoBehaviour {

    public DeviceLocationProvider locationProvider;
    public Vector2 parkLocation;
    public float boxBounds;

    // one GameObject per preset location, in matching order
    public GameObject[] objects;
    public Vector2[] locations;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {

        // the device's current latitude/longitude
        Vector2 myLocation = new Vector2 (
            (float)locationProvider.CurrentLocation.LatitudeLongitude.x,
            (float)locationProvider.CurrentLocation.LatitudeLongitude.y);

        // find which of the preset locations is closest to the device
        float minDistance = float.MaxValue;
        int minIndex = -1;
        for (int i = 0; i < locations.Length; i++) {
            float distance = Vector2.Distance (myLocation, locations [i]);
            if (distance < minDistance) {
                minDistance = distance;
                minIndex = i;
            }
        }

        // activate only the GameObject tied to the closest location
        for (int i = 0; i < objects.Length; i++) {
            objects [i].SetActive (i == minIndex);
        }

        // earlier bounding-box version, kept for reference:
        // if (locationProvider.CurrentLocation.LatitudeLongitude.x < parkLocation.x + boxBounds &&
        //     locationProvider.CurrentLocation.LatitudeLongitude.x > parkLocation.x - boxBounds){
        //   if (locationProvider.CurrentLocation.LatitudeLongitude.y < parkLocation.y + boxBounds &&
        //       locationProvider.CurrentLocation.LatitudeLongitude.y > parkLocation.y - boxBounds){
        //     sphere1.gameObject.SetActive(true);
        //     // ... spheres 2-7 activated the same way
        //   }
        // }
    }
}


I also added the image targeting using the new ARKit 1.5, which was very exciting since it works great!!! Check below.

For now I’ve been working mostly with placeholders, so when the device recognizes the arch, a blue capsule appears.


At that point Mapbox and ARKit 1.5 were living in different projects, so the next technical step was to have the 3 scenes (character 1, character 2 and the encounter) inside the same project.

This took me a bit longer than expected, but after closely analyzing the files and replacing the older version of ARKit with 1.5 inside the Mapbox example, it finally happened.

The next step then is to add the actual images to the scene in space in a – nice – UX way.

So that’s what I’ve been playing with lately, because it’s kind of hard actually.

One idea was to set the images active once planes were detected. But as you can see below this was not the best: the images appear too close, and since you don’t have much control over how they show up, I don’t believe it would work in this project’s scenario.

So I decided to add GameObjects set as colliders randomly in space. For now they still have materials so I can see them and play with their arrangement.

In the following video (made during the snowstorm) I changed the locations to be able to test it on ITP’s floor; all those features (geolocation, image targeting, objects colliding) are working.

Below follows the script I made to generate the random colliders and add the images in an array sequence.

using UnityEngine;
using System.Collections;

public class CamSpawn : MonoBehaviour {

    public GameObject[] imageScene;
    public GameObject sceneNYU;
    public GameObject colliderObject;

    private int imageCount = 0;
    public int imageMax = 10;

    private Camera cam;

    int collideLoop = 10;
    int loopNum = 2;

    int spawnCount = 0;
    int spawnLimit = 2000;
    float delayCount = 0.5f;

    private bool TapSpawn = true;

    void Start() {
        cam = GetComponent<Camera> ();

        // scatter the collider objects at startup
        for (int hey = 0; hey < collideLoop; hey++) {
            Spawn ();
        }
    }

    void Update() {
        // ///Touch (earlier touch-based spawning, kept for reference)
        // int touchNum = Input.touchCount;
        // Touch[] myTouches = Input.touches;
        // for (int i = 0; i < Input.touchCount; i++) {
        //   if (Input.touchCount == 1) {
        //   }
        // }
    }

    void minSpawnCount() {
        spawnCount -= 1;
    }

    void spawnBool() {
        TapSpawn = true;
    }

    // place loopNum collider objects at random positions in front of the scene
    void Spawn() {
        for (int b = 0; b < loopNum; b++) {
            Vector3 position = new Vector3 (Random.Range (-10.0f, 10.0f), 0, Random.Range (10.0f, 50.0f));
            GameObject obj = (GameObject)Instantiate (colliderObject, position, Quaternion.identity);
            obj.transform.parent = sceneNYU.transform;
        }
    }

    // when the camera hits a collider, spawn the next image a few meters ahead
    void OnTriggerEnter (Collider other) {
        if (spawnCount <= spawnLimit && TapSpawn == true) {

            float DistanceToCamera = 3.0f;
            Vector3 position = cam.transform.forward * DistanceToCamera + cam.transform.position;

            GameObject obj = (GameObject)Instantiate (imageScene[imageCount], position, Quaternion.identity);

            //this object is the child of scene nyu
            obj.transform.parent = sceneNYU.transform;

            spawnCount += 1;

            // destroy the image after 60-70 seconds and free its spawn slot
            float tempRate = Random.Range (60.0f, 70.0f);
            GameObject.Destroy (obj, tempRate);
            Invoke ("minSpawnCount", tempRate);

            // brief cooldown before the next spawn
            TapSpawn = false;
            Invoke ("spawnBool", delayCount);

            // cycle through the image array
            imageCount += 1;
            if (imageCount == imageMax) {
                imageCount = 0;
            }
        }
    }
}


The “colliders” way of spawning images in the park worked well, but it got harder once the colliders were transparent. Sometimes they would cluster in a specific region, and that was not creating the experience I wanted.

So I developed another script that spawns the images according to their distance from the camera, and it worked well. It was a bit of a hassle having to customize all the images to the right height and size, but it’s working well.
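The placement rule boils down to one line of vector math (camera position plus the camera's forward vector scaled by a distance); sketched here in plain JavaScript, though the project itself does this in Unity/C#:

```javascript
// position = camPos + camForward * distance, with vectors as [x, y, z]
function spawnPosition(camPos, camForward, distance) {
  return [
    camPos[0] + camForward[0] * distance,
    camPos[1] + camForward[1] * distance,
    camPos[2] + camForward[2] * distance
  ];
}
```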

Also, for better user testing, I added a “demo UX” with the introduction audio.

Below you can check the demo:

Ancestor-Photomancy: what do your ancestors’ pictures tell about your future?

For this week’s assignment we had to create an electronically generated -omancy: a divination method that could forecast the future based on an object, a user interaction or a randomly selected event. Thus, I created a tool where, by uploading your ancestors’ old B&W pictures, you can see a specific prediction related to the data collected from that image. I called it ANCESTOR PHOTOMANCY.

A divination based on astrology assumes that the stars interfered with the moment you were born, creating specific traits that shape who you are and your destiny. Accordingly, I started to think about tangible aspects of nature and history that definitely change your life and are responsible for our very own existence – and consequently our future.

I feel it is indeed crazy, even though very obvious, to stop and think that we are here, being who we are, because some people in the past lived the way they did – people we didn’t get to know and have very little knowledge about, and, of course, if you go way back, people whose names, traces and origins we can’t even find.

So, I asked my mother in Brazil to scan and send me some pictures of my old relatives.

The aesthetics of those images are so beautiful: the posed way people appear, the clothes, the colors of the printing. It’s funny to think there is a bit of each one of those near strangers inside me, and somehow magic too. So, for this divination ritual I decided to play with aspects of the images and connect that to a Tarot reading API to see what it could tell me about my future.

—- of course, this exercise has a playful approach, so I am not really trying to forecast my future, but to play with the concept and explore electronic divination experiments.

The easiest way to analyze images is through brightness, and with B&W images that’s a very easy thing to do. So I used an algorithm, inspired by a project made by a colleague, that gives me a number qualifying the brightness of the image. Once I have this number, I send it to the tarot reading API, which assigns a corresponding Tarot card and chooses a random “fortune_telling” string from the 3 options on that specific card.
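In the code, the average brightness (0-255) ends up linearly rescaled to the tarot set's 0-25 rank range; that step on its own is just a linear map, sketched here standalone with a hypothetical function name:

```javascript
// Map an average brightness value (0-255) to a tarot card rank (0-25).
function brightnessToRank(brightness) {
  return Math.round(brightness / 255 * 25);
}
```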

Here you can check the code:

//not so serious ancestor-picromancy engine to give you a glimpse of what the future holds
// using these tarot explanations

let myImage;
let pix;
let rank; // king: rank 25, queen: rank 24, knight: rank 23, page: rank 22;
let brightness;
let fortune_array = [];
let myFont, title, data;

function preload() {
  myFont = loadFont('assets/Kristi-Regular.ttf');
  myImage = loadImage("pics/ancestor.jpg");
  title = loadImage("etch.png");
  data = loadJSON("");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  translate(windowWidth / 2, windowHeight / 2);
  image(title, 0, 0);
  text("ASK YOUR ANCESTORS TO KNOW", windowWidth/3.25, windowHeight/3.6 - myImage.height*0.1/2 - 45);
  text("WHAT THE FUTURE HOLDS", windowWidth/3.5, windowHeight/3 - myImage.height*0.1/2 - 45);
  translate(windowWidth / 2, windowHeight / 2.1);
  image(myImage, 0, 0);

  // get average brightness of image and match it to a card rank in the tarot set
  getImageLightness("pics/ancestor.jpg", function(brightness, rank) {
    console.log("rank: " + rank); //can somehow not access "rank" as a global variable ...???
    // ranks above 21 are the court cards: page, knight, queen, king
    if (rank > 21) {
      let extra_ranks = ['page', 'knight', 'queen', 'king'];
      find_ranks(extra_ranks[rank - 22]);
    }
    // pick one of the card's "fortune_telling" strings at random
    let fortunes = data.tarot_interpretations[rank].fortune_telling[round(random(data.tarot_interpretations[rank].fortune_telling.length - 1), 0)];
    text(fortunes + ".", windowWidth/2, windowHeight/1.2);
  });
}

// function taken from
// converts each color to gray scale and returns average of all pixels
// brightness: 0 (darkest) and 255 (brightest)
function getImageLightness(imageSrc, callback) {
  let img = document.createElement("img");
  img.src = imageSrc; = "none";

  let colorSum = 0;

  img.onload = function() {
    // create canvas and draw the image onto it
    let canvas = document.createElement("canvas");
    canvas.width = this.width;
    canvas.height = this.height;

    let ctx = canvas.getContext("2d");
    ctx.drawImage(this, 0, 0);

    let imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    let data =;
    let r, g, b, avg;

    for (let x = 0, len = data.length; x < len; x += 4) { // noprotect
      r = data[x];
      g = data[x+1];
      b = data[x+2];

      avg = Math.floor((r + g + b) / 3);
      colorSum += avg;
    }

    brightness = Math.floor(colorSum / (this.width * this.height));
    // map & round brightness to the 0 - 25 rank range of the Tarot cards
    brightness = round(, 255, 0, 25), 0);
    rank = brightness;
    callback(brightness, rank);
  };
}


// map 0 - 255 average brightness values to 0 - 25 Tarot card ranks
// (taken from = function (in_min, in_max, out_min, out_max) {
  return (this - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
};

// round values
// (taken from
function round(value, decimals) {
  return Number(Math.round(value + 'e' + decimals) + 'e-' + decimals);
}

// append all entries into the fortune_array for matching ranks
// (not taken from anywhere ;)
function find_ranks(key) {
  fortune_array = [];
  for (let i = 0; i < data.tarot_interpretations.length; i++) {
    if (data.tarot_interpretations[i].rank == key) {
      console.log('found matching rank in array ' + i);
      fortune_array.push(i);
    }
  }
  console.log('found matching ranks in arrays ' + fortune_array);
  // pick one of the matching cards at random
  rank = fortune_array[round(random(fortune_array.length - 1), 0)];
  console.log('selected rank in array ' + rank);
}

// go fullscreen and resize if necessary
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

For now I uploaded the images directly into the folder in the code. A *must* for the next iteration would be to add a user input in the browser with the CTA “upload your ancestors’ B&W image here” so everyone can actually use it.

Below you can check what my ancestors said about my future in this experiment.

Angelo and Sara think that I should reconsider my decisions –

Francisco disagrees, and has a more positive view –

Pedro, Sara and Mauricio are telling me to watch out —

2018 Brazil Presidential Elections: Candidates Dashboard

Brazil’s democracy is a broken one: corruption, undeclared lobbies and a big misrepresentation of society. Accordingly, fundamental issues such as violence, poverty and lack of education are rarely properly addressed. In this context, and with the emerging global phenomena of social polarization and fake news, Brazilians feel lost about which media sources to trust and, ultimately, whom to vote for in the upcoming presidential elections this November.

Candidates Dashboard is the first prototype of a project that aims to organize and compare information on the current presidential pre-candidates (soon to be official candidates) so users can easily access and compare news from the main media sources. Further on, the idea is also to play with sentiment analysis and social media hashtags/trending topics.

Thus, for this first prototype, I worked mostly with the Google News API, fetching the data for the Brazilian presidential pre-candidates. For now, the website is able to organize the information on each candidate by source and display it in vertical columns.
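The organize-by-source step can be isolated as a pure function; here is a sketch assuming each article object carries a hypothetical `source` string field:

```javascript
// Group a flat list of articles into an object keyed by source name.
function groupBySource(articles) {
  var bySource = {};
  for (var i = 0; i < articles.length; i++) {
    var key = articles[i].source;
    if (!(key in bySource)) {
      bySource[key] = [];
    }
    bySource[key].push(articles[i]);
  }
  return bySource;
}
```

Each key of the returned object then becomes one vertical column on the page.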

Below you can check a previous version, which had some bugs and displayed the images repeatedly. I reorganized the code to get the outcome we have now, and still need to add the other info such as image, URL, and description.

This is an ongoing project trying to mix content and data viz.

As next challenges I want to develop a mock-up/user flow to work further with CSS+HTML and figure out how this would be displayed in a mobile version.

You can check the JS code below on this page or download it from my GitHub.

//store json file
var info = null;
//store keys from the json file
var keys = null;

//preload() runs first, once
function preload() {
  //load json file
  info = loadJSON("./candidatos.json");
}

//setup() runs once, after preload()
function setup() {
  //retrieve keys from json
  keys = Object.keys(info);

  //retrieve menu and display elements
  var menu = document.getElementById("selectMenu");
  var text = document.getElementById("currentQuery");
  var display = document.getElementById("newsDisplay");

  //iterate through keys and build the dropdown menu
  for (var i = 0; i < keys.length; i++) {
    var option = document.createElement("option");
    //make the text be the key
    option.text = keys[i];
    menu.add(option);
  }
}

function putArticlesOnPage(articlesAndSources){
  // articlesAndSources is an array of {name, articles} packets,
  // one per source, e.g. [globo, cartacapital, ...]
  for (let i = 0; i < articlesAndSources.length; i++) {
    let sourceName = articlesAndSources[i].name;
    let articles = articlesAndSources[i].articles;
    if (articles.length > 1) {
      let sourceContainer = document.createElement('div');
      sourceContainer.className = "sourceContainer source_" + i;

      let headline = document.createElement('h1');
      headline.innerHTML = sourceName + " (" + articles.length + " articles)";
      sourceContainer.append(headline);

      let articleContainer = document.createElement('div');
      articleContainer.className = "articleContainer";
      sourceContainer.append(articleContainer);

      for (let j = 0; j < articles.length; j++) {
        let articleTitle = articles[j].newTitle;
        let titleElement = document.createElement("h4");
        titleElement.innerHTML = j + ": " + articleTitle;
        articleContainer.append(titleElement);
      }

      $('.news-container').append(sourceContainer);
    }
  }
}


// previous version, kept for reference:
// with this array we can loop over the object like this:
// for(let i = 0; i < availableSources.length; i++){
//   let source = availableSources[i];
//   console.log("SOURCE:", source);
//   let articleArrayOfThisSource = articlesBySource[source];
//   console.log(articleArrayOfThisSource);
//   let sourceContainer = document.createElement('div');
//   sourceContainer.className = "sourceContainer source_"+i;
//   let headline = document.createElement('h1');
//   headline.innerHTML = source + " (" + articleArrayOfThisSource.length + " articles)";
//   sourceContainer.append(headline);
//   let articleContainer = document.createElement('div');
//   articleContainer.className = "articleContainer";
//   sourceContainer.append(articleContainer);
//   for(let j = 0; j < articleArrayOfThisSource.length; j++){
//     let articleTitle = articleArrayOfThisSource[j].newTitle;
//     let titleElement = document.createElement("h4");
//     titleElement.innerHTML = j + ": " + articleTitle;
//     articleContainer.append(titleElement);
//   }
//   $('.news-container').append(sourceContainer);
//
//   var htmlString = '<div class="newSource">';
//   htmlString += '<h1 class="titleSource">' + source + '</h1>';
//   for(let j = 0; j < articleArrayOfThisSource.length; j++){
//     var htmlString2 = '<a href="' + articleArrayOfThisSource[j].newURL + '">';
//     htmlString2 += '<div class="articleDiv">';
//     htmlString2 += '<div class="articleImg">';
//     htmlString2 += '<img class="Img" src="' + articleArrayOfThisSource[j].newImgURL + '" />';
//     htmlString2 += '</div>';
//     htmlString2 += '<div class="articleText">';
//     htmlString2 += '<h1 class="articleTitle">' + articleArrayOfThisSource[j].newTitle + '</h1>';
//     htmlString2 += '<p class="articleDesc">' + articleArrayOfThisSource[j].newDesc + '</p>';
//     htmlString2 += '</div>';
//     htmlString2 += '</div></a>';
//   }
//   console.log("SOURCE IS", source);
//   console.log("NUMBER OF ARTICLE IS", articleArrayOfThisSource.length);
//   console.log("ARTICLEs", articleArrayOfThisSource);
// }

// function triggered when an item is selected on the menu
function selectChange(selector) {
  var searchTerm = info[selector];
  console.log("searching", searchTerm);

  getNewNews(searchTerm);

  // let's give it 5 seconds before we use the data;
  // here we can do whatever we want with the data in the object articlesBySource
  setTimeout(function() {
    let availableSources = Object.keys(articlesBySource);

    let articlesANDsourcesArray = [];

    for (let i = 0; i < availableSources.length; i++) {
      // each packet contains the name of a source and all associated articles
      let packet = {};
      let source = availableSources[i];
      let articleArrayOfThisSource = articlesBySource[source]; = source;
      packet.articles = articleArrayOfThisSource;
      articlesANDsourcesArray.push(packet);
    }

    // sort the sources by number of articles, descending
    function compare(b, a) {
      if (a.articles.length < b.articles.length) return -1;
      if (a.articles.length > b.articles.length) return 1;
      return 0;
    }
    articlesANDsourcesArray.sort(compare);

    // console.log("put articles on page!");
    putArticlesOnPage(articlesANDsourcesArray);

    // articlesBySource looks like:
    // {
    //   globo: [
    //     article,
    //     article
    //   ],
    //   ...
    // }
  }, 5000);
}


function requestNews(searchTerm, page, callback){
  var newsAPIURL = '' + searchTerm + '&sortBy=publishedAt&page=' + page + '&apiKey=';
  var newsAPIKey = "3a8163d38cf846d28099503687290b56";
  var newsAPIReqURL = newsAPIURL + newsAPIKey;

  $.ajax({
    url: newsAPIReqURL,
    type: 'GET',
    dataType: 'json',
    error: function(err){
      console.log(err);
    },
    success: function(data){
      callback(data);
    }
  });
}

let articlesBySource = {};

function processData(articles){
  for (let i = 0; i < articles.length; i++) {
    // let's package a little news object:
    let newsObject = {
      newTitle: articles[i].title,
      newDesc: articles[i].description,
      newURL: articles[i].url,
      newImgURL: articles[i].urlToImage,
      newSource: articles[i]
    };

    if (!(newsObject.newSource in articlesBySource)) {
      // if the source does not yet have an entry in articlesBySource,
      // create an entry and add the newsObject as the first item:
      articlesBySource[newsObject.newSource] = [newsObject];
    } else {
      // if there is already an entry for this source, check for duplicate
      // titles before pushing the newsObject onto the array
      let addThisArticle = true;
      for (let j = 0; j < articlesBySource[newsObject.newSource].length; j++) {
        if (articlesBySource[newsObject.newSource][j].newTitle == newsObject.newTitle) {
          // we don't want to add the same article twice
          addThisArticle = false;
        }
      }
      if (addThisArticle) {
        articlesBySource[newsObject.newSource].push(newsObject);
      }
    }
  }
}


let maxResultsWeEverWant = 500;

function getNewNews(searchTerm){
  // get the first batch of results:
  requestNews(searchTerm, 1, function(data){
    // process this batch:
    processData(data.articles);
    let numTotalResults = data.totalResults;
    let numResultsWeWant = Math.min(maxResultsWeEverWant, numTotalResults);
    let numResultsLeftToDo = numResultsWeWant - data.articles.length;
    let numRequestsLeftToDo = Math.floor(numResultsLeftToDo / 20);
    // make the remaining requests, one per page:
    for (let i = 0; i < numRequestsLeftToDo; i++) {
      requestNews(searchTerm, i + 2, function(data){
        // process this batch:
        processData(data.articles);
      });
    }
  });
}

// getting data from Google news API
function getNews(searchTerm, page){
  // console.log("Getting Data");

  var newsAPIURL = '' + searchTerm + '&sortBy=publishedAt&page=' + page + '&apiKey=';
  var newsAPIKey = "3a8163d38cf846d28099503687290b56";
  var newsAPIReqURL = newsAPIURL + newsAPIKey;

  $.ajax({
    url: newsAPIReqURL,
    type: 'GET',
    dataType: 'json',
    error: function(err){
      console.log(err);
    },
    success: function(data){
      console.log("Got the data");

      // getting the data I want from every article
      for (var i = 0; i < data.articles.length; i++) {
        var newTitle = data.articles[i].title;
        var newDesc = data.articles[i].description;
        var newURL = data.articles[i].url;
        var newImgURL = data.articles[i].urlToImage;
        var newSource = data.articles[i];
        makeNewsHTML(newTitle, newDesc, newURL, newImgURL, newSource);
      }
    }
  });
}

// display data in html
function makeNewsHTML(newTitle, newDesc, newURL, newImgURL, newSource) {
  var htmlString = '<div class="new">';
  htmlString += '<img src="' + newImgURL + '" />';
  htmlString += '<h1>' + newTitle + '</h1>';
  htmlString += '<a href="' + newURL + '">' + newSource + '</a>';
  htmlString += '<p>' + newDesc + '</p>';
  htmlString += '</div>';
  $('.news-container').append(htmlString);
}

//draw() runs in a loop, after setup()
function draw() {
}