[Data] Observing data with R

I’ve been observing data via Excel and MySQL databases, which isn’t ideal, so R was a nice breath of fresh air. The dataset I’m most familiar with is my NHL Draft database, so I pushed it into R to see what I could do with it.

The first thing I did was test a null hypothesis with a randomization test. I previously wrote a piece about the importance of handedness in hockey, and I wanted to see where my result landed on that randomized distribution. After running the R code and producing a histogram, I found that my results were far from random, so I rejected the null hypothesis. Yay. This was an ideal setup for an A-B test, but I had to go a step further for non-boolean data.
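I'm not pasting my R session here, but the shape of a randomization test is simple enough to sketch in a few lines. (Python below for illustration; the groups and numbers are made up, not my actual handedness data.)

```python
import random

def permutation_test(group_a, group_b, trials=10000, seed=42):
    """Shuffle the combined data repeatedly and count how often a random
    split produces a difference in means at least as large as the observed
    one. A small p-value suggests the observed difference is unlikely
    under the null hypothesis."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    combined = group_a + group_b
    extreme = 0
    for _ in range(trials):
        rng.shuffle(combined)
        a, b = combined[:len(group_a)], combined[len(group_a):]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Made-up example: two groups with clearly different means
p = permutation_test([10, 11, 12, 13, 14], [20, 21, 22, 23, 24])
print(p)  # a very small p-value: reject the null
```

The histogram I made in R is essentially the distribution of those shuffled differences, with my observed value marked on it.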

So right now I have data for a) when a player was selected and b) how that player has performed. I have a regression line going through that data — a logarithmic function — and it works decently. But the next step is to smooth these lines using local polynomials.
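For the curious, "local polynomial" smoothing is less exotic than it sounds. Here's a bare-bones sketch (Python, unweighted, degree 1; real loess/lowess adds distance weighting and robustness passes, so treat this as a toy version):

```python
def local_linear_smooth(xs, ys, bandwidth):
    """For each point, fit a straight line (a degree-1 local polynomial)
    to the neighbors within `bandwidth` on the x-axis, then evaluate that
    line at the point. A stripped-down cousin of loess smoothing."""
    smoothed = []
    for x0 in xs:
        # gather the neighbors inside the window around x0
        pts = [(x, y) for x, y in zip(xs, ys) if abs(x - x0) <= bandwidth]
        n = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        denom = n * sxx - sx * sx
        if denom == 0:  # all neighbors share one x: fall back to the mean
            smoothed.append(sy / n)
        else:
            slope = (n * sxy - sx * sy) / denom
            intercept = (sy - slope * sx) / n
            smoothed.append(slope * x0 + intercept)
    return smoothed

# A straight line comes back (essentially) unchanged
print(local_linear_smooth([0, 1, 2, 3, 4], [0, 2, 4, 6, 8], 2))
```

The bandwidth controls the trade-off: a tiny window chases noise, a huge window flattens everything toward one global line.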

These strategies are incredibly valuable; I think Mark is right that, at some point, they'll prove their worth. I had a few moments where I thought, “Hey, that’s something I’ve been working on for a while — and this strategy would be perfect!” I had other moments where I thought a technique would come in handy in the future, though right now I have no application for it.

That said, the most interesting part of this course was the historical background of probability. I’ve often approached data with the mindset that there’s a ‘right’ way and a ‘wrong’ way to do things — and not a spectrum from ‘worse’ to ‘better’. And I appreciate how we thought about data as real-world indicators, not reality itself. It strangely made data more real, which I often find tough when looking at a whole bunch of numbers in a table.

Moving forward, I plan to get more adept in R. I understand the concepts we talked about. I don’t know how to implement them — yet. The last class was incredibly helpful in showing me how to take data, run it through something like a Python script, push it into R — and then adjust the data further to plot it in a workable way. I understand that, in itself, could take several weeks. But I think it’s a worthwhile lesson plan to help us get data into a tool like R. At ITP, I’ve learned most by pursuing my own curiosities and then asking for (or already having been taught) tools to work through those questions. I think this class gave me ideas about the tools that exist, which was awesome. But I would’ve liked to walk through an example of, say, fitting local polynomials to a given set of datapoints. If this were a full-semester class, perhaps the contents of the course could be spread out and some of these examples could be worked through.

Processing Android with Ushahidi

For my project, Nestless, I used Processing Android to create a front-end interface to interact with Ushahidi. Now, there are several reasons why this is advantageous:

1. Processing Android allows for unique interfaces that are entirely customizable, which can be crucial when designing a UI that best fits certain projects. For example, in my app, I have a bird that flies to a certain spot on a custom map.

2. Processing Android provides opportunities for rich visualization; web applications do not.

3. This allows full control over the user experience. This can be crucial for software that asks users to contribute time to map something.

Of course, this is limited to Android phones. In addition, it can take a bit more time to create. But if you can get over that, this is a great opportunity to marry the rich visual potential of Processing with the robust and easy-to-use backend of Ushahidi.

Before you go on, you may want to look over my post on Nestless, just to get an idea of what I’ve used this for. (It’s a super simple implementation, with more to come.)

So here are some simple instructions on how to do it:

1. Create a Ushahidi or Crowdmap site: (Crowdmap is simply the hosted version.) Ushahidi basically serves as a mapping service. You send information to Ushahidi, and it will map it for you — so you don’t have to deal with databases, build a website or figure out how to parse your data and display it in a useful way.

Anyway, I recommend creating this page first because it helps you understand what kind of information you want to collect from your app. Basically, Ushahidi needs a handful of things from your Android app:

> Report title
> Report description
> Category
> Date/time
> Location: latitude and longitude
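In code terms, that's just a small bundle of fields per report — something like this (Python for illustration; the values are made up, and the exact key names your deployment expects may differ):

```python
# A single Ushahidi-bound report, sketched as a plain dictionary.
report = {
    "title": "Person needs blankets",          # report title
    "description": "Near the park entrance",   # report description
    "category": 1,                             # category id from your Ushahidi settings
    "datetime": "12/01/2011 08:30 pm",         # date/time of the sighting
    "latitude": 40.7291,                       # location: latitude...
    "longitude": -73.9965,                     # ...and longitude
}

# Quick sanity check before submitting: no field left empty.
assert all(value not in ("", None) for value in report.values())
print(sorted(report))
```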

2. Create your Processing for Android app: I wrote up a quick post on how to install Processing for Android.

3. How to create text boxes: Processing for Android doesn’t make it easy to create text boxes for users. However, a nice library to use here is APWidgets. After you download the library and put it in your library folder, type the following at the top of your code:

import apwidgets.*;
import android.text.InputType;
import android.view.inputmethod.*;

In addition, on your menubar, go to “Sketch” > “Add file…” and navigate to your Library folder. From there, find “apwidgets” > “library” > “apwidgets.jar” — and click open.

From there, you need to create the object for the container as well as the text fields. So I did something like this (the apwidgets site has some nice examples):

APWidgetContainer container;
APEditText nameField, descriptionField;

And in the setup, make sure to initiate it:

container = new APWidgetContainer( this );

nameField = new APEditText( 10, 100, width-20, 70 ); //sets location of text field
nameField.setInputType(InputType.TYPE_CLASS_TEXT); //set the input type to text
nameField.setImeOptions(EditorInfo.IME_ACTION_DONE); //enables done button
nameField.setCloseImeOnDone(true); //close the IME when done is pressed

descriptionField = new APEditText( 10, 180, width-20, 150 ); //positioned below the name field so the two don't overlap

There are lots of things you can customize. Just mess around with the examples, and you’ll figure it out.

Now, you may notice that you don’t actually want the text boxes on the screen at all times. The way to deal with that is by adding the “nameField” when you want it and removing it when you’re done with it. Here’s how to do that:

if (I_want_to_add == true) {
  container.addWidget( nameField ); //puts the text field on screen
} else {
  container.removeWidget( nameField ); //takes it off when you're done
}

Lastly, if you want to set text into your fields, use this function:

nameField.setText("Here's some text");

4. Other data input: You can use basic buttons to help users input data, as well. (The mouseReleased() function is a great way around using the touch functions, although the Processing for Android website gives you some other great options for dealing with the touch screen.)

Just remember to store all your data in variables.

5. Location data: Here are some examples to help you get the GPS data from your phone. It’s easy; trust me.

6. Do you have all your data? So make sure that by the time you get to a “submit” button, all the data you want to send is stored in variables. Good? Yeah, of course you are.

7. Create PHP file: Now, it’s best if Processing talks to a PHP script, which then talks to Ushahidi. So here’s how that works.

First, create an empty PHP file and put it on your server. Let’s assume you uploaded your php file at: http://rainbowchang.com/phpfile.php

8. Sending the variables to PHP: Now, go back to Processing and we can start building a string to load to your php script. By that, I mean we compile all your variables, combine them into one long url and load it. Here’s how that works:

String thing = "http://rainbowchang.com/phpfile.php?name=" + name + "&description=" + description + "&longitude=" + currentLongitude + "&latitude=" + currentLatitude;

So when do you run this code? Well hopefully you’ll have something like a “Submit” button. So you can say:

if (submitted == true) {
  String thing = "http://rainbowchang.com/phpfile.php?name=" + name + "&description=" + description + "&longitude=" + currentLongitude + "&latitude=" + currentLatitude;
  loadStrings(thing); //runs the php script, passing your variables along
}

The app might freeze for a moment while it sends, so beware.

Before we go on, a bit about urls. Look at the String “thing”. Everything after the ‘?’ is a list of “parameters.” And in this case, these are basically passing variables to the php script. So if the “name” variable we’re passing through is “Alvin” and the description is “coolperson”, then it would be “http://rainbowchang.com/phpfile.php?name=Alvin&description=coolperson”. Notice how the two variables in the url are divided with the ampersand (‘&’).
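One caveat I should flag: the raw string concatenation above breaks if a name or description contains spaces, ampersands or other special characters — those have to be percent-encoded. Here's what proper encoding looks like (Python's standard library shown purely for illustration; on the Processing side you'd need an equivalent, e.g. Java's URLEncoder):

```python
from urllib.parse import urlencode

# Hypothetical report values; note the space and '&' in the description.
params = {
    "name": "Alvin",
    "description": "cool person & friend",
    "latitude": 40.7291,
    "longitude": -73.9965,
}

url = "http://rainbowchang.com/phpfile.php?" + urlencode(params)
print(url)
# Spaces become '+' and the embedded '&' becomes '%26', so the
# parameter boundaries stay unambiguous.
```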

The next part of this is the “loadStrings(thing)” which basically runs that php script on your server, while passing your variables to it.

But wait! It won’t work yet. Your php script is empty because I just made you create a blank file. Let’s fix that.

9. Editing PHP file: Go to your php file on your server and edit it like this:

<title>PHP Page</title>

<?php

$name = $_GET["name"];
$latitude = $_GET["latitude"];
$longitude = $_GET["longitude"];
$description = $_GET["description"];
$incidentdate = date("m/d/Y");
$incidenthour = date("g");
$incidentmin = date("i");
$incidentam = date("a");
$category = 0;

?>

(Note: All PHP code must go between the ‘<?php’ and ‘?>’ tags.)

Now, notice how the words inside the “$_GET[]” functions mirror the variables that we passed to it from Processing in step No. 8. This php script is basically grabbing the variables from the parameters you’ve passed through. So remember how we talked about this url: “http://rainbowchang.com/phpfile.php?name=Alvin&description=coolperson”?

Well now the PHP script is reading it like this:

$name = "Alvin";
$description = "coolperson";

(Also notice: PHP variables have a dollar sign in front of them. If you have trouble with PHP syntax, just Google it. It’s easy as pie.)
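If PHP still feels opaque, what $_GET is doing is just query-string parsing. Here's the same step in Python, purely for illustration (the real script stays in PHP):

```python
from urllib.parse import urlparse, parse_qs

url = "http://rainbowchang.com/phpfile.php?name=Alvin&description=coolperson"

query = urlparse(url).query   # "name=Alvin&description=coolperson"
params = parse_qs(query)      # {'name': ['Alvin'], 'description': ['coolperson']}

# parse_qs wraps every value in a list, since a parameter can repeat.
name = params["name"][0]
description = params["description"][0]
print(name, description)  # Alvin coolperson
```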

OK, so now you’ve passed through all your data. Next, you should decide how to determine which category each report belongs to. So you can say:

if ($name == "Alvin") {
    $category = 1;
}

You can set your categories on your Ushahidi settings page, which you can access via your dashboard.

Now you should be ready to use the Ushahidi API.

10. The Ushahidi API: This is how you’re sending your data from PHP to Ushahidi.

Take the code below and replace the URL. Given what we’ve set up previously, the variables should already be named correctly below. But if needed, edit the variable names that you’re sending to Ushahidi:

if ($name == "Alvin") {
    $category = 1;
}

$posturl = "https://yourUshahidiURL.com"; //YOUR USHAHIDI URL HERE
$Curl_Session = curl_init($posturl);
curl_setopt ($Curl_Session, CURLOPT_POST, 1);

curl_setopt ($Curl_Session, CURLOPT_POSTFIELDS, "task=report&incident_title=$name&incident_description=$description&incident_date=$incidentdate&incident_hour=$incidenthour&incident_minute=$incidentmin&incident_ampm=$incidentam&incident_category=$category&latitude=$latitude&longitude=$longitude&location_name=Unknown"); //You may need to edit this line with the correct variable names

curl_setopt ($Curl_Session, CURLOPT_FOLLOWLOCATION, 1);
curl_exec ($Curl_Session);
curl_close ($Curl_Session);

11. Entire PHP Script: It might look something like this, in total:


<?php

$name = $_GET["name"];
$latitude = $_GET["latitude"];
$longitude = $_GET["longitude"];
$description = $_GET["description"];
$incidentdate = date("m/d/Y");
$incidenthour = date("g");
$incidentmin = date("i");
$incidentam = date("a");
$category = 0;

$posturl = "https://yourUshahidiURL.com"; //YOUR USHAHIDI URL HERE
$Curl_Session = curl_init($posturl);
curl_setopt ($Curl_Session, CURLOPT_POST, 1);

curl_setopt ($Curl_Session, CURLOPT_POSTFIELDS, "task=report&incident_title=$name&incident_description=$description&incident_date=$incidentdate&incident_hour=$incidenthour&incident_minute=$incidentmin&incident_ampm=$incidentam&incident_category=$category&latitude=$latitude&longitude=$longitude&location_name=Unknown"); //You may need to edit this line with the correct variable names

curl_setopt ($Curl_Session, CURLOPT_FOLLOWLOCATION, 1);
curl_exec ($Curl_Session);
curl_close ($Curl_Session);

?>

12. Done! Now go to your Processing app, run through it and you should be able to send those variables to Ushahidi — and they should be mapped at your map URL. You may need to adjust some info on your Ushahidi settings page, but that’s not a huge deal.

Please feel free to ask any questions or point out better ways/errors here.

[Nestless] A mobile app to help the homeless

UPDATE: The project is in testing. Please visit NestlessNYC.com for more info.


UPDATE: A lot of people are stumbling onto this blog post and asking about the future of this app. Well here’s the update: I’m working hard to consult with people to design a working system. In addition, the technical side of things is coming along nicely. I hope to have a final product to present — along with some beta testing — by May 2012. If you have more questions or feedback, please e-mail me at alvinschang@gmail.com.


Nestless is a mobile app that allows people to help homeless people during inclement weather. There are two parts to this:

1) When passersby (who have Nestless on their smart phone) see a homeless person in need, they can “report” them using the app. Nestless uses the phone’s GPS to map them on a website.

2) People who are at home can see homeless people nearby who need help. They can bring clothes, blankets, warm meals or whatever else they may need.

This tries to bring efficiency, information and immediacy to the situation, essentially allowing local people to throw a lifeline to those in need.


When I first came to New York as an 18-year-old, I saw all the homeless people on the streets. People told me, at some point, I’d get used to them. But I vowed to never “get used to” them.

This also made me think a lot about why we “get used to” people in need. I don’t think it’s because we don’t care; I think it’s because we don’t know how to care. Other than giving a few bucks here and there — which is often discouraged, although I do it anyway — passersby can do very little to help a homeless person. Of course, if they really wanted to do something, they could offer them a bed in their apartment, though it’s a bit absurd; the threshold for that happening is extremely high. But lower that threshold, and there are definitely people willing to give old clothes or blankets or even a hot bowl of soup.

But that threshold is still high. Only a small percentage of people ever do such a thing. And quite frankly, that’s OK. But that doesn’t mean they don’t care. It just means we have to find other ways for them to help.

So that’s how the ideas of “Nestless” came about.

It’s a mobile application that allows people to help homeless people, and the threshold for contribution is very low. It only requires that you notice a situation, take out your phone and input some basic data. Easy, right? It can be easier than sending a text message.

Here are my first musings on the idea:

Initial idea: Everyone Counts

So the first iteration of this app was called “Everyone Counts.” The idea was that New York City needed help counting homeless people. It’s a key to solving the problem, which is why there are often volunteers who walk the streets to try to get some kind of estimate.

On a technical note: I used Processing Android, and had my share of trouble. But if you know Processing, it can be a solid option. To get the touch screen to work properly, I simply used the mouseReleased() function.

Here’s some initial work on how the first screens would work:

Network and data

The second iteration involved getting the mobile app to collect data and send it to a PHP file on a server. Dan Shiffman’s PHP/Processing tutorial really helped.

So once I collected the data from the app, I was able to shoot it over to the PHP script, which parsed it.

From there, I explored options of where to display the data. An SQL database was an option; I played around with somehow displaying it with Processing.js and some data visualization. But the problem was: The purpose was still convoluted, and there were still many problems with the “counting” aspect. When multiple people are reporting the same person, how do we get an accurate count? Does it require fancy math? What do these numbers mean, anyway?
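I never solved the counting problem, but here's one way the "fancy math" could start: treat reports that fall within a small radius of each other as the same person. A naive greedy sketch (Python; the radius, coordinates and field layout are all my own invention, not part of the app):

```python
import math

def approx_distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in meters; fine at city scale."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return 6371000 * math.hypot(dlat, dlon)

def estimate_count(reports, radius_m=50):
    """Greedy clustering: a report joins an existing cluster if it lies
    within radius_m of that cluster's first report; otherwise it starts
    a new cluster. Returns the number of clusters (people, roughly)."""
    clusters = []  # list of (lat, lon) anchor points
    for lat, lon in reports:
        if not any(approx_distance_m(lat, lon, a, b) <= radius_m
                   for a, b in clusters):
            clusters.append((lat, lon))
    return len(clusters)

# Three reports of (probably) the same person, plus one a few blocks away
reports = [(40.7290, -73.9960), (40.7291, -73.9961), (40.7290, -73.9959),
           (40.7350, -73.9900)]
print(estimate_count(reports))  # 2
```

Real deduplication would also need a time window (the same corner on different nights is different sightings), which is part of why the counting framing stayed murky.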

Here are some other things I thought about in the early process, including other potential devices:

User Interaction

The app used basic sliding pages to allow for your typical smart phone interaction. It was simple; it had big buttons; it made sense. However, it was boring. There was nothing that really caught the user’s eye from a visual standpoint, which is something Processing could take advantage of. However, I wanted to make sure it wasn’t too complex; after all, this is a simple program. Here are some initial notes on it:

Anyway, to spice up the interaction, I introduced Henry the Bird.

He flutters around the screen, following your choices and eventually flying to the location you’re reporting from. It serves as a visual metaphor that, when you report a homeless person, they are getting some kind of help. And with Henry came a re-naming: Nestless.

Mapping: Ushahidi

After exploring several options, I decided to hook this up to Ushahidi for my mapping. I installed it on my server without a hiccup but, because of time constraints, I decided to prototype with CrowdMap, which is the hosted version. Here’s the site (link):

I also thought about how to map on the mobile phone itself. It’s still a work in progress but, for now, I’m simply trying to map to a still image. Here is the initial thought process on that:

Breakthrough: Inclement weather help

At this point, it was still a counting program. However, as I began displaying data on Ushahidi, I realized the power of this program was not only crowd-sourced reporting; it was timeliness. It could essentially tell you when a person was on the street within seconds of someone reporting it.

Now, as I was having this thought, I was walking through some miserably cold rain, testing the program, reporting people as I walked past them and seeing how it could come into use. And I thought: It feels convoluted to report someone for population counting as they are freezing right here in front of me!

And that’s where I had my breakthrough.

Instead of counting, what if Nestless were used to report homeless people who are suffering in inclement weather with insufficient supplies? So if it’s raining, someone walking home from work could report a homeless person. And another person who lives nearby — and happens to check the Nestless Ushahidi map — could walk over and give the homeless person a spare umbrella, a hot bowl of soup or even some socks. (You wouldn’t believe how much some guys really want dry socks.)

It encourages a community of helpers. It only requires a few people to have a lower threshold to help. Everyone else can just report them. And when someone else shows up with a blanket and a hot bowl of soup, they can just say a little birdy told them they needed the help.

User stories

WITH THE APP: You’re new to New York. You see a homeless person who is freezing their butt off while sleeping in front of a small store; you don’t know how to help; you don’t have time to help. What do you do?

You download this app, and you basically let other people know that there is someone who needs help. It takes a few seconds. If you want, you can even ask them if there’s anything they need and report that.

WITH THE MAP: You’re the type of person who likes to go above and beyond to help out. You also have extra clothes and blankets.

So it’s snowing one day. And you check out the Nestless map to see if there is anyone who needs help nearby. There is! You pick out a blanket and some dry socks in a paper bag, walk a few blocks and hand it to the homeless person who has been freezing their butt off.

Pillars of support

It’s interesting to look at this from a “pillars of support” model because we’re not trying to take down any organization. At its core, Nestless is trying to a) lower the threshold for people who want to help and b) make donations more efficient and timely.

But it’s interesting to remember: We rely on governments or non-profits to help the homeless. Individual efforts can be quite difficult, mostly because of the lack of time, information and efficiency. This is falling victim to a monolithic model, where organizations and power structures determine our ability to help.

However, Nestless tries to win over the local people by saying: “Look, you can help right now, right here. The connection is immediate. The help can be immediate.”


Nestless would be an initial success if thousands of New Yorkers downloaded the app and, on rainy or snowy or cold days, there were a few hundred reports of people in need. From there, if even half of those requests could be filled by local neighbors, it would be a lifesaver for so many homeless people having potentially the worst night of their lives.

Current Progress

I have a working app, although there are still some bugs. Also there are a few other features I want to incorporate, but I’ll have to think them through a bit more. Anyway, without further ado, here are some screenshots:

Home screen and the three secondary buttons — Weather & Stats, About and Resources.

Home screen and the successive pages to submit a report.

Hanging a bed net [UNICEF]

OK, so hanging a bed net is a pain in the behind. Why? Let’s go through the steps.

First of all, it asks you to air out your net for a day. But in New York City, that’s pretty much impossible. I’m guessing that’s about the only advantage Northern Ugandans have in terms of installing this thing.

Secondly, I expected some kind of hanging hooks or screws or nails to come with this thing. However, there was nothing. It said to use string to hang it, but if you don’t have string at the moment, you can’t hang it.

Thirdly, it’s unclear how safe it is by just reading the package. Can I hang it up around my dog? Is it OK to touch it and then eat food with those same hands? Does it leave a residue on my stuff that is dangerous?

In any case, I went out to the hardware store to buy a few things to hang this from. I assumed I didn’t need anything that held too much weight because these bed nets are pretty light. So I got this rope:

The entire time at the hardware store, I wondered whether Ugandans would have access to this kind of material. I imagine butcher twine or a nylon rope would be more likely. But I bought this because its materials are extremely cheap, and it’s very sturdy.

I also bought this hook/nail set to hang the rope from:

You get four of these in a pack, and I imagine it wouldn’t be expensive at all to ship these in bulk. It also allows the end to be nailed over the rope, which is nice if you want to make sure the rope isn’t going to fall. However, this required a hammer — or at least something hard to bang it with. Is this a readily available tool? When they ship bed nets to the villages, can they just give them one hammer for everyone?

So I got to work on these hooks first, and I got them up like this:

I was feeling pretty good about myself because this seemed pretty easy at this point. However, the next 20 minutes were not fun. It was unclear just how low I needed to make the nets. Also, I had to be very strategic about where to hang the hooks. I think the instructions on the back of the package do a poor job of describing strategies for how to best install these nets. In addition, the hooks kept falling off; the nails don’t stick in the sheetrock. I imagine the walls in Ugandan villages aren’t much better, so I don’t know how the hell these people are hanging this up. Anyway, after wrestling with this for a while, I finally got it up!

Not bad, eh? The only problem is: I’m not sure if the tiny space at the bottom of the net is OK. Can a mosquito get in? How do mosquitos behave? Do they seek this out? Are there so many that we just need to shut all the holes? In addition, I know most nets aren’t this color. But seriously. This color needs to go. While it is fun, is this the best color for dissipating heat? Also it might be nice if the bed nets were less… there. If it were more transparent, I’d be more willing to put it up. However, maybe then it wouldn’t do as good a job of hiding dirt, and it would get dirtier quicker. Something to consider.

I also wanted to see if my dog was OK sitting inside the bed net, and what would happen if she wanted to get out. It turns out she can tear this thing down pretty easily; here’s a picture before she tore it down.

So takeaways:

  • Installation needs to get easier. Can tools be included? Can strategies for hanging the nets be printed on the bag?
  • There needs to be more information about these nets on the package.
  • Once it is up, how do we make it less intrusive? Could it possibly even be relaxing? Can other functions be built into the nets?
  • We’re essentially making a room with a net. What better ways are there to do this?

Mobile Infrared Fighter


I’ve been working with the Kinect’s 3D sensing technology and realizing the immense power it could have on surveillance. In fact, infrared in general is a touchy subject because, I think, it offers cameras a view of the world that humans cannot see. That’s perhaps why Boston residents balked at a heat-sensing infrared camera that would teach them about energy efficiency.

heat seeking infrared

I can very easily see this type of technology being incorporated into surveillance cameras on the subways or sidewalks. And, as this technology advances, we may be able to determine the approximate height, weight, race and facial structure of every person walking by. It could even track the way someone walks! Imagine catching a criminal by tracking their strut! It would be like an uber-version of facial recognition. This could surely be a great asset for law enforcement; tracking down fugitives might be easier than ever. But for the privacy-seeking person, this is a nightmare. (Sure we can say this kind of thing would never be allowed. But isn’t that what people said about surveillance cameras?)

Anyway, how do we combat this type of surveillance?

Well these cameras use infrared, which is basically electromagnetic radiation outside the visible spectrum. Visible light has wavelengths we can see, but infrared has a longer wavelength — similar to the wavelengths emitted by warm objects. If we can interfere with these waves, these cameras suffer interference. Now, most of these cameras have a filter that can prevent some interference. But it’s not perfect, and the Kinect shows us the basics of how some waves can interfere with the camera.
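To put rough numbers on "the wavelengths emitted by warm objects": Wien's displacement law says a blackbody's peak emission wavelength is roughly λ = b/T, with b ≈ 2898 µm·K. The constant is textbook physics; the little script is just my back-of-envelope illustration:

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micrometer-kelvins

def peak_wavelength_um(temp_kelvin):
    """Peak blackbody emission wavelength, in micrometers."""
    return WIEN_B_UM_K / temp_kelvin

# A person at skin temperature (~310 K) peaks around 9.3 um -- deep in
# the thermal infrared, far outside the visible band (~0.4-0.7 um).
print(round(peak_wavelength_um(310), 1))

# The sun (~5800 K) peaks near 0.5 um, right in the band our eyes see.
print(round(peak_wavelength_um(5800), 2))
```

(Worth noting: the Kinect itself uses near-infrared projection rather than passive heat sensing, but both live in the invisible-to-us part of the spectrum.)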

Here, we just have two infrared cameras interfering with each other. It’s not super strong interference, but that’s because these devices weren’t designed for that type of work. They’re basically acting as impromptu IR-jammers — cheap ones are available online, and there are military-grade IR-jammers, too.

Anyway, by developing this kind of technology, we could design a mobile device that emits interference against IR cameras. Perhaps it could even be built into the back of a cell phone — or be some type of peripheral that goes on the back.

If we looked at a Kinect depth image, it would look like this, sans this type of device (borrowed source image from Kotaku):

But using this type of infrared fighter, we could conceivably make ourselves — or at least our faces — anonymous:

This would essentially protect the face from detection, although it wouldn’t help against an RGB camera image. (A mask would do the trick! Or a beard…) But if the infrared blocker were strong enough to conceal the entire body, it would protect against this type of identification even better.

In Witness’ “Cameras Everywhere” report, it says:

It is alarming how little public discussion there is about visual privacy and anonymity. Everyone is discussing and designing for privacy of personal data, but almost no-one is considering the right to control one’s personal image or the right to be anonymous in a video-mediated world. The human rights community’s understanding of the importance of anonymity as an enabler of free expression must now develop a new dimension – the right to visual anonymity.

3D sensing can be a great thing, but when used to track people without their consent — which is certainly one possible direction — it can be an incredible infringement upon human rights. So while this concept isn’t perfect yet, the idea here is to empower the every-day person with a hand-held device.

SMS for prenatal care

I ended up making a pros and cons list for SMS-based services, because I often felt that a) it was being used when it didn’t need to be used and b) many of the services I envisioned had already been implemented. Many of the “cons” had to do with the major limits we have with SMS — character length and GPS info, mostly. However, there were two major “pros” that fascinated me.

The obvious one: A large majority of people in places like Uganda have access to SMS.

Secondly, once users send you a text, you have their information. So you can ping them whenever you want, as long as they don’t unsubscribe. For my idea, this is key.

I created an SMS service that helps pregnant women in Uganda. (An aside: I’m in Design for UNICEF, where we’re focusing on Uganda — so that’s been on my mind. Thus far, though, we are brainstorming ideas not related to SMS.) Prenatal care is vital in Uganda, but many women don’t see a doctor during pregnancy. But if they are subscribed to an SMS service, that service can remind them of the importance of seeing a doctor. It can also help them answer questions and find the closest doctor.

Here’s how the test service works:

1. INITIAL TEXT: When a woman finds out she is pregnant, she texts “PREGO” to 41411.

2. INFO GATHERING: She gets a response: “Congratulations! How many weeks have you been pregnant? Reply by typing ‘PREGO’ followed by the number of weeks. For example, ‘PREGO 28′”

3. DIRECTED RESPONSE: Depending on how far along she is, the service gives her a specific message. Right now, it’s fairly simple. But I think this can be built out pretty extensively.

3a. REAL-WORLD REFERRAL: Some of the messages direct her to a doctor, and the idea is that there is a service built in that tells her where the nearest one is.

4. REMINDERS: This part hasn’t been implemented but: Since we know how far along she approximately is, the service can send her a text message every week, reminding her to see a doctor or giving her hints on what she should be doing at that stage in her pregnancy.
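This step hasn't been built, but the scheduling arithmetic behind it is trivial — something like the sketch below (Python; a 40-week term is assumed, and the actual sending would go through whatever TextMarks or cron setup the service runs on):

```python
from datetime import date, timedelta

FULL_TERM_WEEKS = 40

def reminder_dates(weeks_pregnant, today):
    """One reminder per remaining week of the pregnancy,
    anchored to the day the woman first texted in."""
    remaining = FULL_TERM_WEEKS - weeks_pregnant
    return [today + timedelta(weeks=i) for i in range(1, remaining + 1)]

# A woman 28 weeks along who texts in on Dec 1 gets 12 weekly reminders.
dates = reminder_dates(28, date(2011, 12, 1))
print(len(dates))  # 12
```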

The idea is that this would be a free service — or at least highly subsidized — so the individual doesn’t have to pay SMS charges.

This builds off a UNICEF initiative that tries to get children birth certificates via SMS, which has been fairly successful. (link)

On a technical note: I used TextMarks (with a PHP script) to create the service. It was incredibly easy to create, which makes me pretty excited. Here’s the script below:



<?php

$from = $_REQUEST['from'];
$message = $_REQUEST['msg'];
$weeks = 10 - $message; //weeks until the week-10 check-up; only meaningful when the reply is a number

if ($message == "") 
	echo "Congratulations! How many weeks have you been pregnant? Reply by typing 'PREGO' followed by the number of weeks. For example, 'PREGO 28'";
else if ($message == "doctor")
	echo "Here are a few doctors: Mr. Doctor, 11 Doctor Drive; John Doc, 14 Medical Street";
else if ($message == "yes")
	echo "Congratulations! You should get your baby registered by texting TKTKTKT. This will help your child get medical care.";
else if ($message == "help")
	echo "COMING SOON...";
else if ($message >= 10 && $message <= 36 ) 
	echo "Have you seen a doctor? If not, this is very important. If you need help finding one, reply with 'PREGO DOCTOR'";
else if ($message < 10) 
	echo "In {$weeks} weeks, you should see a doctor. If you need help finding one, reply with 'PREGO DOCTOR.' For other tips, reply 'PREGO HELP'";
else if ($message > 36  && $message <= 40 ) 
	echo "You're going to give birth soon! If you need help finding the closest doctor, reply with 'PREGO DOCTOR'";
else if ($message > 40 ) 
	echo "Have you given birth yet? If yes, reply with 'PREGO YES'. If not, see a doctor; reply with 'PREGO DOCTOR' to find the nearest one.";
else echo "We don't recognize your answer. Try again.";

?>


This is a human-to-computer interaction that I think works for three reasons: a) It doesn’t require massive manpower, b) a database can keep track of the information a woman gives the program and c) it gives a teeny bit of anonymity, which might be crucial for women who have unwanted pregnancies.

Processing for Android: First delve

I’ve delved into Processing for Android, and I decided to post very basic documentation on how I got started. It took a few tumbles, but I’m up and running. There are instructions on the Processing for Android website, but I’ve boiled them down. I worked on a MacBook Pro, so I can only speak to that:

  1. Download Android SDK
  2. Download Processing. If you already have it, make sure you have the most recent version.
  3. Put your Android SDK folder in a secure location. Then navigate to that folder > tools and double-click on “android.” Terminal should run, along with a GUI settings window.
  4. In the window, look on the left side. There should be a tab for “Available Packages.” Click the pull-down menu for “Android Repository.” Then check the files for “Android SDK Platform-tools” and “SDK Platform Android 2.1, API 7.” Don’t worry about the revision number.
  5. Click pull-down window for Third-Party > Google. Check “Google APIs by Google Inc., Android API 7.” Don’t worry about revision number.
  6. If you had trouble finding these packages, try clicking the check box next to “Refresh” at the bottom of the page.
  7. Click “Install Selected.” Follow instructions and install.
  8. Now you should be able to open Processing and, in the top-right of the window, change from “Standard” to “Android.” Once you do that, it’ll ask you to find the SDK, so navigate to the SDK folder and select it.
  9.  You’re good to go! … but…

OK, so that’s all good and well. But I had a few hiccups.

When I try to run a sketch, it doesn’t work!

Wait. The Android Emulator needs time to open.

ALSO: If you’re debugging, I strongly recommend connecting your Android phone to your computer and debugging directly on there. It’s much quicker — plus you can see how the touch screen is responding.

But even before that, I recommend building basics of the app in regular Processing — and then moving over to the Android portion and putting in the mobile portions. It’ll save a ton of time.

But it still doesn’t work!

OK, then go back to your SDK folder and open the Android SDK settings window once more. Make sure you downloaded all the packages needed. Sometimes, it doesn’t download the checked packages so you may have to re-download.

The emulator opens, but I can’t run anything!

Try saving your sketch, switching back to “Standard” mode in Processing and switching back to “Android” mode. If that doesn’t work, try closing Processing and re-opening it.

Still no dice…

In Processing/Android mode, there is an Android menu on your taskbar. Click that > Android SDK & AVD Manager. In that window, you should see something that says “Virtual Devices.” Under that tab, delete the running emulator. Just do it.

Grrr. Not working.

Try restarting your computer. Try running sketch. No worky? Try deleting the Android SDK folder and re-downloading everything.

You can also do a Google search of your problem, but most of them will suggest one of the steps above. Once you’re off and running, you can run the sketch in the emulator — or you can run it on your Android phone, and it’s super easy. And awesome. It’ll install the program on there for you!

Mobile App for Social Activism

There are several types of mobile applications for social activism, but I was particularly interested in two of them. The first: Apps that demand little from the user, but provide money to causes via advertising. One of my favorites is Sproutster, which uses the Free Rice model. Users essentially play a game, during which they are exposed to advertising. Those ad dollars are what go to these causes. Interestingly, how well you do in the game theoretically affects how much is donated.

Now, I know these types of apps don’t necessarily lobby for a social change. But I’m more interested in the model of asking little from the user. Is this effective? Also, sometimes I wonder how these apps would do if they weren’t linked to social activism, but just marketed as regular apps. Would users play more? Would advertisers pay as much? (Image from iTunes store.)

The other type of app that intrigued me has to do with the accessibility of information. The Android app called Congress does an impressive job making Congressional information very easily accessible. In fact, recent bills and contact information are easier to find with this app than on a computer! (Image from Android Market.)

As an everyday person, it’s difficult to keep track of what bills are going through Congress. We often rely on the media to do that work — and they don’t always catch the bills that have big impacts on us. This app ensures that more people have eyes on the bills going through Congress, which allows for a more thorough watchdog filter. But perhaps more importantly, this app tries to keep our reps accountable by making voting records and contact information readily available. In a previous life, I’ve had to call 20 Congressmen in one day — and it’s hard! This app makes that process easier. More importantly, I think it reminds people that our reps are people we can talk to — and it shows people that Congress isn’t just some blob of incompetent power, but a collection of individuals who (are supposed to) represent us.

The Congress app is a prime example of how a re-organization of information can make it more powerful for activists. While all of this information is available online, this just makes it so easy that it helps many people overcome the laziness/time barrier of actually mining out this info.

Soliloquy [NOC]

A non-visual world that is created with wind and sounds. A collaboration with Alex Dodge.

Most of us know our world through visual means; our other senses are often overshadowed or taken for granted, yet they give us a fuller sense of the world around us. So in Soliloquy, we wanted to explore the way humans interact with non-visual feedback by depriving the user of any visual input. In addition, it’s a way to create a world for the user which exists entirely in their own mind — a world in which a person can fill in the gaps with their own preconceived visual imagery that correlates with these sounds and winds.


The installation is a circular rig with eight fans hanging off the side, at about head height. The user sits on a seat in the middle of these fans, then puts on headphones and a blindfold. After a quick calibration, she can hear sounds from the world and explore it by leaning in whichever direction she wishes to move. As she moves in a certain direction, she can feel the fans — i.e. wind — blowing in her face, as if she is flying through the world. The faster she moves, the stronger the wind. She can also hear wind rushing past her ears, as well as sounds placed in 3D space, which she can attempt to chase.

Image by Alex Dodge. His initial post is here.

The rig

The physical rig, designed by Alex, is a ring of eight fans. We are using four 200 mm PC fans because they run off of DC power, which allows us to manipulate the speed of the rotation. There are also four smaller fans to close the gaps between the four big fans. The ring of fans hangs off of a stand for audio speakers. Much credit to Alex for this incredible engineering feat. The rig is easily collapsible and can be stored in small spaces — much needed, since we had no staging space to work with.

The circuit

We were essentially talking from Processing out to Arduino, sending four values in and writing those values to the eight fans: six for the front and back, two for the left and right. Alex built power supplies out of old PC power units.

The circuit is wired through a TIP120 transistor, since we are switching a high-current load. We simply used the example from the Physical Computing lab on high-current loads.

Now, one of the troubles we ran into was writing values from Processing to Arduino. We were taught how to write from Arduino to Processing — and that’s simple, because Processing has functions that parse the data coming in from Arduino. (Here’s a lab demonstrating that.) However, writing from Processing to Arduino meant we had to parse the data on the Arduino side using arrays. With the help of Tom Igoe, we figured it out.

We wrote out from Processing with the following code:

void fans() {
  //We're running fans, depending on whether we're going right/left, forward/backward. 
  if (vel.y < 0) {
    fan1 = int(map(vel.y, 0, -20, 920, 1023)); //Straight ahead, pin 9
  } else {
    fan4 = int(map(vel.y, 0, 20, 900, 1023)); //Backward, pin 3
  }
  if (fan1 < 1000 && fan4 < 1000) {
    if (vel.x < 0) {
      fan3 = int(map(vel.x, 0, -20, 900, 1023)); //To the left, pin 6
    } else {
      fan2 = int(map(vel.x, 0, 20, 900, 1023)); //To the right, pin 5
    }
  }
  //We're getting the fans going at full blast for 5 seconds
  if (millis() < 5000) {
    port.write(1023); //FORWARD
    port.write(1023); //RIGHT
    port.write(1023); //LEFT
    port.write(1023); //BACKWARD
  } else {
    //We're writing to the fans here
    port.write(fan1); //FORWARD
    port.write(fan2); //RIGHT
    port.write(fan3); //LEFT
    port.write(fan4); //BACKWARD
  }
}

So visually, we’re writing out to Arduino like this:

R, 1023, G, 900, B, 900, Q, 900
R, 1000, G, 900, B, 800, Q, 800

Each letter tells Arduino which value is coming next, and we know to reset the buffer when we hit a newline, represented by “\n”. (The commas above are just for readability; on the wire, each frame is nine raw bytes.)

Now, to read values in Arduino, we had to store the incoming values in a buffer array. Here’s the code:

char buffer[9];
int counter = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) { 
    char thisByte = Serial.read();
    buffer[counter] = thisByte;
    counter++; //advance to the next buffer slot
    if (thisByte == '\n') {
      parseBuffer();
      counter = 0;
    }
  }
}

void parseBuffer() {
  if (buffer[0] == 'R') {
    analogWrite(9, buffer[1]);
  }
  if (buffer[2] == 'G') {
    analogWrite(6, buffer[3]);
  }
  if (buffer[4] == 'B') {
    analogWrite(5, buffer[5]);
  }
  if (buffer[6] == 'Q') {
    analogWrite(3, buffer[7]);
  }
}

First, Arduino reads from the Serial port and puts it in the thisByte variable. Then it puts that character into the buffer. So if we start with 0, it writes to buffer[0]. Then it increases the array key by one, so we’re then going to write to buffer[1] in the next loop.

It keeps doing that until it finds the newline character, which is “\n”. That’s when it knows to begin parsing the buffer, which now looks something like:

R [value] G [value] B [value] Q [value] \n

First, the program asks whether the first buffer key — buffer[0] — holds the character “R”. If it doesn’t, we know the frame is out of sync and our values will be off. But if it does, then we know the following number in buffer[1] is the first value we want to read. We do that three more times to get all the values. And once all the numbers are parsed, we reset the counter to 0 — that way the next frame begins writing at buffer[0].

In all, we are writing nine bytes from Processing to Arduino: for each fan, a character that says which port we want to write to — in this case, R, G, B and Q — followed by that port’s value, and lastly a newline character.
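To make the framing concrete, here’s a rough sketch of the same encode/parse logic in plain Java. This is an illustration, not code from the installation: the FanFrame class and its methods are hypothetical, and each value here occupies a single byte, so anything above 255 would need to be scaled or split in a real version.

```java
// Hypothetical sketch of the nine-byte serial frame: four (tag, value)
// pairs (R, G, B, Q) followed by a newline.
class FanFrame {
    static byte[] encode(int f1, int f2, int f3, int f4) {
        return new byte[] {
            'R', (byte) f1, 'G', (byte) f2,
            'B', (byte) f3, 'Q', (byte) f4, '\n'
        };
    }

    // Mirrors the Arduino-side parseBuffer(): verify each tag byte,
    // then read the value that follows it.
    static int[] decode(byte[] frame) {
        char[] tags = {'R', 'G', 'B', 'Q'};
        int[] values = new int[4];
        for (int i = 0; i < 4; i++) {
            if (frame[2 * i] != tags[i]) {
                throw new IllegalArgumentException("frame out of sync at byte " + (2 * i));
            }
            values[i] = frame[2 * i + 1] & 0xFF; // read the byte as unsigned
        }
        return values;
    }
}
```

Checking the tag before trusting the value is what keeps the two sides from drifting out of sync.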

The body tracking

In order to track the body, we used the Microsoft Kinect. Using the OSCeleton library, developed by Sensebloom, we are able to track the joints of a human being. (I recommend using Tohm Judson’s guide to installation.) It returns each joint as an array of x, y and z coordinates; so for the head, you get something like head = {100, 300, 900}, indicating where the head is.

So in order to track how a person’s body is moving, we found the angle between a horizontal line and the line across the shoulders. That determined left-to-right movement. For forward-to-back, we looked at the angle between a vertical line and the neck/torso line. Simple trigonometry did the trick here.
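As a sketch of that trigonometry (hypothetical helpers, not the installation’s exact code, which works from distance ratios): atan2 gives the angle of the shoulder line against the horizontal, and the same idea works for the forward lean using the depth (z) axis.

```java
// Illustrative lean-angle helpers; coordinates follow OSCeleton's
// convention of x/y in [0, 1] (y growing downward) and z as depth.
class LeanAngle {
    // Angle (degrees) between the horizontal and the left-to-right shoulder line
    static double sideLean(double lx, double ly, double rx, double ry) {
        return Math.toDegrees(Math.atan2(ry - ly, rx - lx));
    }

    // Angle (degrees) between the vertical and the torso-to-neck line,
    // using depth (z): negative when the neck leans toward the camera
    static double forwardLean(double torsoY, double torsoZ, double neckY, double neckZ) {
        return Math.toDegrees(Math.atan2(neckZ - torsoZ, torsoY - neckY));
    }
}
```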

Now, this is a great library — except for one problem: It requires calibration, which means someone has to hold their hands up in the air, bent at a 90-degree angle at the elbows. While it worked for some people in our rig, it just wouldn’t calibrate for taller people, and that was unacceptable. So in the coming days, we will work on tracking the user from atop the rig with color tracking and a normal camera. It’s a much simpler approach, but it seems to be the best solution. We may even resort to IR tracking, if it comes to that.

The visuals

Even though this isn’t a visual system, we had to create some type of visual indication for a few reasons. First off, debugging would be impossible without visual feedback. Secondly, we are visually-oriented people, and we know space as a mainly visual thing.

So I created a world that visualized the person, and the sounds around him. Here it is below:

The white dot is the person. The blue dots are the sounds. The average range of the sound is indicated by the translucent circle surrounding the dot.

Now, the model we are using for movement is flying. When you fly, you can’t stop on a dime; much like swimming, there’s momentum. So we used the location, velocity and acceleration model we learned in Dan Shiffman’s Vectors lesson to achieve this. When we got values in from the Kinect about how far the person is leaning, we fed that number into the acceleration variable. So a person speeds up gradually, not instantly.

Now, with fans, there’s already real-world physics there. We don’t need to program in acceleration. But we do need to program it in for the computer world. To marry those two worlds together, I set the friction quite high and the acceleration high, too. This way, you can accelerate quickly and decelerate quickly. It makes it a little more responsive.
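A minimal sketch of that model, with illustrative field names: each frame, the lean feeds in as acceleration, and a damping factor plays the role of friction so the flier coasts to a stop instead of stopping on a dime.

```java
// Location/velocity/acceleration with friction-like damping,
// in the spirit of the Shiffman vector model (names are illustrative)
class Mover {
    double x, y;          // location
    double vx, vy;        // velocity
    double damping = 0.9; // below 1.0: high "friction", quick deceleration

    void update(double ax, double ay) {
        vx = (vx + ax) * damping; // accelerate, then damp
        vy = (vy + ay) * damping;
        x += vx;                  // move by the damped velocity
        y += vy;
    }
}
```

With high acceleration and high damping, you speed up quickly and slow down quickly, which is what makes the rig feel responsive.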

Lastly, there’s the zooming and tracking functionality of the visuals. Now, OpenGL doesn’t play nicely with ToxicLibs, so I kind of hacked together a fake 3D. For the tracking, I used the translate() function in two dimensions and translated the whole visual world as the user moved. For the zooming, I faked 3D with the scale() function, scaling the entire world. This type of 3D wouldn’t work if we rotated anything… but we’re not rotating anything, so this was perfect!
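The pan-and-zoom trick reduces to one linear mapping: translate() then scale() sends a world point to trans + zoom * world. A sketch with made-up field names:

```java
// Fake "3D" camera: pan with a translation, zoom with a uniform scale.
// Works only because nothing rotates.
class FakeCamera {
    double transX = 440, transY = 460; // pan offsets
    double zoom = 0.1;                 // scale factor

    // Where a world-space point lands on screen
    double[] toScreen(double wx, double wy) {
        return new double[] { transX + zoom * wx, transY + zoom * wy };
    }
}
```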

The sounds

We used ToxicLibs’ audio library, which can be downloaded here. The documentation for that can be found here.

In order to set the location of the listener, we can use the SoundListener class and use the setPosition() function, which takes an x, y and z value. Now, we’re working in a 2D space so we always set the z-value to 0. In order to set the location of the sounds, we used the AudioSource class, which has a setPosition() function as well, and also takes x, y and z values.

This all seems easy — until you want to place these sounds in a space, and have them increase and decrease in volume, as well as have a doppler effect. There are a few steps to this:

1. I found that only mono wav files would work with this. Otherwise, the sounds did not have a location.

2. In order to shape the sounds’ attenuation, you have to use the function setReferenceDistance(). This function takes a number that determines how far away you can hear a sound, and how loudly; it basically determines the falloff.

3. That seems easy enough… except that falloff doesn’t always work. Remember: When working with a non-visual world, everything is relative. On a computer screen, the proportions are determined by the size of your screen and your resolution. But in a non-visual world, it can be infinite. So if the user can move faster, but the world is bigger, then that’s the same as the user moving slower and the world being smaller. Eeek.

So we had a problem with the falloff not working properly. In order to fix that, I made a little if-statement that makes the falloff more of an exponential function. Keep in mind that you[0] and you[1] are the x- and y-coordinates of the listener. The position[0] and position[1] are the x- and y-coordinates of the sound. Those values can be found by using the getPosition() function, which returns an array with three values of x, y and z:

you = listener.getPosition();
for (int i = 0; i < sound.length; i++) {
  position = sound[i].getPosition();
  if (dist(position[0], position[1], you[0], you[1]) != 0) {
    //Steepen the falloff as the listener gets farther from this sound
  } else {
    //The listener is on top of the sound: leave it at full volume
  }
}

That pretty much did it for the sounds. It was just a matter of playing with what types of sounds worked the best from there on out.
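The shape of that falloff can be sketched as a simple gain curve (an illustration of the idea, not the toxiclibs API): full volume inside the reference distance, then an inverse-square drop beyond it, which falls off much faster than a linear fade.

```java
// Illustrative distance-based gain: 1.0 inside the reference distance,
// then a quadratic ("exponential-feeling") drop beyond it
class Falloff {
    static double gain(double distance, double reference) {
        if (distance <= reference) {
            return 1.0; // close to the sound: full volume
        }
        double r = reference / distance;
        return r * r;   // inverse-square past the reference distance
    }
}
```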

Last thoughts

We’ll continue to work on this for the show, but I’ve learned a lot working on this project. First off, the power of relativity: We don’t often know this because we work with limited visual space, but when nothing is definite — and there are no limits — everything is relative. I think that’s a profound realization.

Technically speaking, I’ve learned a lot from Alex about these physical rigs. Also, I learned a bit from Tom Igoe about how to talk from Processing to Arduino and, of course, learned a massive amount from Dan Shiffman’s class about how to emulate nature in computer programming.

The project itself has a powerful appeal to it. Once you “plug in” to the world, you’re in a completely different universe. It has philosophical implications that are quite interesting. It brings to the forefront exactly how much humans construct their own version of reality with the senses we know, with certain weights put on each sense — visuals being the strongest. When you’re in Soliloquy, you are in a completely different world where you are disoriented from the world you are accustomed to.

Going forward, I’m going to program in the functionality that allows for blob tracking with a camera. I hope it will be more effective than using the Kinect.

The code for the entire program can be found below. I have not included the data files for sake of size. Also, the Coords and Skeleton classes are almost entirely from the OSCeleton example called Stickmanetic. Hopefully we can strip down the code some more once we begin using camera tracking:

/*
by Alvin Chang and Alex Dodge

This is the code for an installation in which a user sits in the middle of a ring of wind-creating fans
and leans his or her body in a certain direction to move through this world. We are using the Xbox Kinect
as the sensor to detect the angle of the user's shoulders, and the angle of the forward/backward lean.
We are using the OSCeleton library from Sensebloom, as well as wind sounds from the user ERH at freesound.org.
We are using Toxiclibs' sound library to create the non-visual sound space.

This project was created in Daniel Shiffman's course, "The Nature of Code" at NYU's Interactive Telecommunications Program.

For more information, e-mail alvinschang@gmail.com
*/

import oscP5.*;
import netP5.*;
import processing.serial.*;
import toxi.audio.*;
import toxi.geom.*;

OscP5 oscP5;
Shoulders shoulders;
Coords coords;
Skeleton s;
Sounds sound;

//Zoom and pan
int transX = 440;
int transY = 460;
float transZ = .1;

int ballSize = 30;
Hashtable<Integer, Skeleton> skels = new Hashtable<Integer, Skeleton>();

int fan1;
int fan2;
int fan3;
int fan4;
Serial port;

PVector loc;
PVector vel;
PVector acc ;

PVector X;
PVector Y;

boolean calibrated = false;

void setup() {
    shoulders = new Shoulders();
    coords = new Coords();
    sound = new Sounds();
    s = new Skeleton(1);
    loc = new PVector(width/2, height/2);
    vel = new PVector(0, 0);
    acc = new PVector(0, 0);
    println("Available serial ports:");
    println(Serial.list());
    port = new Serial(this, Serial.list()[1], 9600);
}

void draw() {
  //Drawing the body shape
  for (Skeleton s: skels.values()) {   
    s.run(); //draw skeleton
    ellipse(s.headCoords[0]*width, s.headCoords[1]*height + 30, ballSize*2.5, ballSize*2.5);
    ellipse(s.headCoords[0]*width, s.headCoords[1]*height + 23, ballSize*1.8, ballSize*.8);
    ellipse(s.headCoords[0]*width + ballSize*.3, s.headCoords[1]*height + 23, ballSize*.3, ballSize*.3);
    ellipse(s.headCoords[0]*width - ballSize*.3, s.headCoords[1]*height + 23, ballSize*.3, ballSize*.3);
  }

  float c = .7;
  PVector friction = vel.get(); 
  if (vel.x < 0.1 && vel.x > -0.1) {
    vel.x = 0; //snap tiny velocities to zero
  }
  if (vel.y < 0.1 && vel.y > -0.1) {
    vel.y = 0; 
  }
  //Calibrated? Pressing the spacebar sets this to true, and also calibrates
  if (keyPressed && key == ' ') {
    calibrated = true; 
  }
  //If it isn't calibrated, don't apply any forces
  if (calibrated == true) {
    X = new PVector(shoulders.angle*-12, 0);
    Y = new PVector(0, shoulders.angle2*1.5);
  } else {
    X = new PVector(0, 0);
    Y = new PVector(0, 0); 
  }
  text("Loc X: " + int(loc.x-450),10,30);
  text("Loc Y: " + int(loc.y-450),10,45);
  text("Speed-X: " + int(vel.x*3),10,70);
  text("Speed-Y: " + int(vel.y*3),10,85);
  text("World X: " + (transX-440),10,110);
  text("World Y: " + (transY-460),10,125);
  text("Zoom: " + int(transZ*1000) + "%",10,140);
  text("Calibrated (press SPACEBAR): " + calibrated,10,165);
  text("CONTROLS:", 10, 195);
  text("Press 'R' to re-place sounds",10,210);
  text("arrow keys move world",10,225);
  text("'a' zooms in, 'z' zooms out ",10,240);
  text("(when debugging, j/i/k/l moves listener)",10,255);
  //Forces: X is left-right movement. Y is forward/backward. Friction is, well, friction.
  friction.mult(-c);
  applyForce(X);
  applyForce(Y);
  applyForce(friction);
  update();
  //Shoulders calculates the shoulder angles for the force
  shoulders.run();
  //Initiates the Serial stuff
  fans();
  //This allows us to move around the screen with the arrow keys
  zoom();
  //We're running sound in here because the sounds class draws the location of the listener
  sound.run();
}

//We're adding up all the values for movement
void update() {
  vel.add(acc); //velocity changes by acceleration...
  loc.add(vel); //...and location changes by velocity
  acc.mult(0);  //clear the acceleration each frame
}

//We're applying a force here
void applyForce(PVector f) {
  acc.add(f);
}
//Our serial data
void fans() {
  //We're running fans, depending on whether we're going right/left, forward/backward. 
  if (vel.y < 0) {
    fan1 = int(map(vel.y, 0, -20, 920, 1023)); //Straight ahead, pin 9
  } else {
    fan4 = int(map(vel.y, 0, 20, 900, 1023)); //Backward, pin 3
  }
  if (fan1 < 1000 && fan4 < 1000) {
    if (vel.x < 0) {
      fan3 = int(map(vel.x, 0, -20, 900, 1023)); //To the left, pin 6
    } else {
      fan2 = int(map(vel.x, 0, 20, 900, 1023)); //To the right, pin 5
    }
  }
  //We're getting the fans going at full blast for 5 seconds
  if (millis() < 5000) {
    port.write(1023); //FORWARD
    port.write(1023); //RIGHT
    port.write(1023); //LEFT
    port.write(1023); //BACKWARD
  } else {
    ///We're writing to the fans here
    port.write(fan1); //FORWARD
    port.write(fan2); //RIGHT
    port.write(fan3); //LEFT
    port.write(fan4); //BACKWARD
  }
}

void zoom() {
  if (keyPressed && keyCode == UP) {
    transY += 4;
  }
  if (keyPressed && keyCode == DOWN) {
    transY -= 4;
  }
  if (keyPressed && keyCode == LEFT) {
    transX += 4;
  }
  if (keyPressed && keyCode == RIGHT) {
    transX -= 4;
  }
  if (keyPressed && key == 'a') {
    transZ += 0.0006;
  }
  if (keyPressed && key == 'z') {
    transZ -= 0.0006;
  }
}

class Coords {
  float ballsize = 20;

  Coords() {
    oscP5 = new OscP5(this, "", 7110);
  }

  void run() {
    for (Skeleton s: skels.values()) {
      s.run(); //draw each tracked skeleton
    }
  }
}

/* incoming osc message are forwarded to the oscEvent method. */
// Here you can easily see the format of the OSC messages sent. For each user, the joints are named with 
// the joint named followed by user ID (head0, neck0 .... r_foot0; head1, neck1.....)
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/joint") && msg.checkTypetag("sifff")) {
    // We have received joint coordinates, let's find out which skeleton/joint and save the values 😉
    Integer id = msg.get(1).intValue();
    Skeleton s = skels.get(id);
    if (s == null) {
      s = new Skeleton(id);
      skels.put(id, s);
    if (msg.get(0).stringValue().equals("head")) {
      s.headCoords[0] = msg.get(2).floatValue();
      s.headCoords[1] = msg.get(3).floatValue();
      s.headCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("neck")) {
      s.neckCoords[0] = msg.get(2).floatValue();
      s.neckCoords[1] = msg.get(3).floatValue();
      s.neckCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_collar")) {
      s.rCollarCoords[0] = msg.get(2).floatValue();
      s.rCollarCoords[1] = msg.get(3).floatValue();
      s.rCollarCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_shoulder")) {
      s.rShoulderCoords[0] = msg.get(2).floatValue();
      s.rShoulderCoords[1] = msg.get(3).floatValue();
      s.rShoulderCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_elbow")) {
      s.rElbowCoords[0] = msg.get(2).floatValue();
      s.rElbowCoords[1] = msg.get(3).floatValue();
      s.rElbowCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_wrist")) {
      s.rWristCoords[0] = msg.get(2).floatValue();
      s.rWristCoords[1] = msg.get(3).floatValue();
      s.rWristCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_hand")) {
      s.rHandCoords[0] = msg.get(2).floatValue();
      s.rHandCoords[1] = msg.get(3).floatValue();
      s.rHandCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_finger")) {
      s.rFingerCoords[0] = msg.get(2).floatValue();
      s.rFingerCoords[1] = msg.get(3).floatValue();
      s.rFingerCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_collar")) { //left collar
      s.lCollarCoords[0] = msg.get(2).floatValue();
      s.lCollarCoords[1] = msg.get(3).floatValue();
      s.lCollarCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_shoulder")) {
      s.lShoulderCoords[0] = msg.get(2).floatValue();
      s.lShoulderCoords[1] = msg.get(3).floatValue();
      s.lShoulderCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_elbow")) {
      s.lElbowCoords[0] = msg.get(2).floatValue();
      s.lElbowCoords[1] = msg.get(3).floatValue();
      s.lElbowCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_wrist")) {
      s.lWristCoords[0] = msg.get(2).floatValue();
      s.lWristCoords[1] = msg.get(3).floatValue();
      s.lWristCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_hand")) {
      s.lHandCoords[0] = msg.get(2).floatValue();
      s.lHandCoords[1] = msg.get(3).floatValue();
      s.lHandCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_finger")) {
      s.lFingerCoords[0] = msg.get(2).floatValue();
      s.lFingerCoords[1] = msg.get(3).floatValue();
      s.lFingerCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("torso")) {
      s.torsoCoords[0] = msg.get(2).floatValue();
      s.torsoCoords[1] = msg.get(3).floatValue();
      s.torsoCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_hip")) {
      s.rHipCoords[0] = msg.get(2).floatValue();
      s.rHipCoords[1] = msg.get(3).floatValue();
      s.rHipCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_knee")) {
      s.rKneeCoords[0] = msg.get(2).floatValue();
      s.rKneeCoords[1] = msg.get(3).floatValue();
      s.rKneeCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_ankle")) {
      s.rAnkleCoords[0] = msg.get(2).floatValue();
      s.rAnkleCoords[1] = msg.get(3).floatValue();
      s.rAnkleCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("r_foot")) {
      s.rFootCoords[0] = msg.get(2).floatValue();
      s.rFootCoords[1] = msg.get(3).floatValue();
      s.rFootCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_hip")) {
      s.lHipCoords[0] = msg.get(2).floatValue();
      s.lHipCoords[1] = msg.get(3).floatValue();
      s.lHipCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_knee")) {
      s.lKneeCoords[0] = msg.get(2).floatValue();
      s.lKneeCoords[1] = msg.get(3).floatValue();
      s.lKneeCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_ankle")) {
      s.lAnkleCoords[0] = msg.get(2).floatValue();
      s.lAnkleCoords[1] = msg.get(3).floatValue();
      s.lAnkleCoords[2] = msg.get(4).floatValue();
    else if (msg.get(0).stringValue().equals("l_foot")) {
      s.lFootCoords[0] = msg.get(2).floatValue();
      s.lFootCoords[1] = msg.get(3).floatValue();
      s.lFootCoords[2] = msg.get(4).floatValue();
  else if (msg.checkAddrPattern("/new_user") && msg.checkTypetag("i")) {
    // A new user is in front of the kinect... Tell him to do the calibration pose!
//    println("New user with ID = " + msg.get(0).intValue());
  else if(msg.checkAddrPattern("/new_skel") && msg.checkTypetag("i")) {
    //New skeleton calibrated! Lets create it!
    Integer id = msg.get(0).intValue();
    Skeleton s = new Skeleton(id);
    skels.put(id, s);
  else if(msg.checkAddrPattern("/lost_user") && msg.checkTypetag("i")) {
    //Lost user/skeleton
    Integer id = msg.get(0).intValue();
//    println("Lost user " + id);


class Shoulders {
  float angle; 
  float angle2;
  float initial1 = 0;
  float initial2 = 0;

  Shoulders() {

  void pan() {
    //Using trigonometry to calculate the angle between a straight line and the alignment of the shoulders. Basically, we're doing cos = adjacent/hypotenuse. And we're using the shoulder coordinates to do it.
    //Also notice that we're taking the inital value, which is the calibrate value set by this same calculation at mousePressed. This calibrates to 0.
    for (Skeleton s: skels.values()) {
      angle = initial1 - ((cos(dist(s.headCoords[0], s.headCoords[1], s.headCoords[0], s.rShoulderCoords[1])/dist(s.headCoords[0], s.headCoords[1], s.rShoulderCoords[0], s.rShoulderCoords[1]))));

    text("Side angle: " + int((initial1-angle)*1000), 10, 15);

  void zoom() {
    //Calculates the angle between a vertical line and the line between the torso and the neck.
    for (Skeleton s: skels.values()) {
      if (s.neckCoords[2] < s.torsoCoords[2]) {
        angle2 = initial2 - (1*(tan(dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.torsoCoords[1])/dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.neckCoords[1]))));
      else { 
        angle2 = initial2 - (-2*(tan(dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.torsoCoords[1])/dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.neckCoords[1]))));

    text("Forward angle: " + int(1000*(initial2-angle2)), 100, 15);

    if (keyPressed && key == ' ') {
      for (Skeleton s: skels.values()) {
        initial1 = (cos(dist(s.headCoords[0], s.headCoords[1], s.headCoords[0], s.rShoulderCoords[1])/dist(s.headCoords[0], s.headCoords[1], s.rShoulderCoords[0], s.rShoulderCoords[1])));
        if (s.neckCoords[2] < s.torsoCoords[2]) {
          initial2 = 1*(tan(dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.torsoCoords[1])/dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.neckCoords[1])));
        else { 
          initial2 = -2*(tan(dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.torsoCoords[1])/dist(s.torsoCoords[2], s.torsoCoords[1], s.neckCoords[2], s.neckCoords[1])));

  void run() {

class Skeleton {
  // We just use this class as a structure to store the joint coordinates sent by OSC.
  // The format is {x, y, z}, where x and y are in the [0.0, 1.0] interval, 
  // and z is in the [0.0, 7.0] interval.
  float headCoords[] = new float[3];
  float neckCoords[] = new float[3];
  float rCollarCoords[] = new float[3];
  float rShoulderCoords[] = new float[3];
  float rElbowCoords[] = new float[3];
  float rWristCoords[] = new float[3];
  float rHandCoords[] = new float[3];
  float rFingerCoords[] = new float[3];
  float lCollarCoords[] = new float[3];
  float lShoulderCoords[] = new float[3];
  float lElbowCoords[] = new float[3];
  float lWristCoords[] = new float[3];
  float lHandCoords[] = new float[3];
  float lFingerCoords[] = new float[3];
  float torsoCoords[] = new float[3];
  float rHipCoords[] = new float[3];
  float rKneeCoords[] = new float[3];
  float rAnkleCoords[] = new float[3];
  float rFootCoords[] = new float[3];
  float lHipCoords[] = new float[3];
  float lKneeCoords[] = new float[3];
  float lAnkleCoords[] = new float[3];
  float lFootCoords[] = new float[3];
  float[] allCoords[] = {headCoords, neckCoords, rCollarCoords, rShoulderCoords, rElbowCoords, rWristCoords,
                       rHandCoords, rFingerCoords, lCollarCoords, lShoulderCoords, lElbowCoords, lWristCoords,
                       lHandCoords, lFingerCoords, torsoCoords, rHipCoords, rKneeCoords, rAnkleCoords,
                       rFootCoords, lHipCoords, lKneeCoords, lAnkleCoords, lFootCoords};
  int id; //here we store the skeleton's ID as assigned by OpenNI and sent through OSC.

  Skeleton(int id) {
    this.id = id;
  }

  void drawBone(float joint1[], float joint2[]) {
    //A joint at (-1, -1) isn't currently tracked, so skip the bone.
    if ((joint1[0] == -1 && joint1[1] == -1) || (joint2[0] == -1 && joint2[1] == -1))
      return;
    float dx = (joint2[0] - joint1[0]) * width;
    float dy = (joint2[1] - joint1[1]) * height;
    float steps = 4 * sqrt(pow(dx, 2) + pow(dy, 2)) / ballSize;
    float step_x = dx / steps / width;
    float step_y = dy / steps / height;
    for (int i = 0; i <= steps; i++) {
      ellipse((joint1[0] + (i * step_x)) * width,
              (joint1[1] + (i * step_y)) * height,
              ballSize, ballSize);
    }
  }

  void run() {
    //Head to neck
    drawBone(headCoords, neckCoords);
    //Center upper body
    drawBone(rShoulderCoords, neckCoords);
    drawBone(lShoulderCoords, neckCoords);
    drawBone(neckCoords, torsoCoords);
    //Right upper body
    drawBone(rShoulderCoords, rElbowCoords);
    drawBone(rElbowCoords, rHandCoords);
    //Left upper body
    drawBone(lShoulderCoords, lElbowCoords);
    drawBone(lElbowCoords, lHandCoords);
    //drawBone(rShoulderCoords, rHipCoords);
    //drawBone(lShoulderCoords, lHipCoords);
    drawBone(rHipCoords, torsoCoords);
    drawBone(lHipCoords, torsoCoords);
    //drawBone(lHipCoords, rHipCoords);
    //Right leg
    //drawBone(rHipCoords, rKneeCoords);
    //drawBone(rKneeCoords, rFootCoords);
    //drawBone(rFootCoords, lHipCoords);
    //Left leg
    //drawBone(lHipCoords, lKneeCoords);
    //drawBone(lKneeCoords, lFootCoords);
    //drawBone(lFootCoords, rHipCoords);
  }
}

class Sounds {
  JOALUtil audioSys;
  AudioSource[] sound = new AudioSource[20];
  AudioSource backnoise;
  SoundListener listener;
  float position[];
  float you[];
  boolean useFalloff = true;
  float gain;
  float offsetx;
  float offsety;

  Sounds() {
    audioSys = JOALUtil.getInstance();

    //People scene
    //Sparrow and water
    //Panflute and gong
    //Bell and flute
    //Frog and waterflow
    //Jay and wind
    //birds follow
    //baby and zip
    //Keyboard and crunch
    //steps and clong
    for (int i = 0; i < sound.length; i++) {
      //Load and position each of the paired sound sources listed above.
    }

    //Background noise
    backnoise = audioSys.generateSourceFromFile(dataPath("backnoise.wav"));
  }

  void run() {
    you = listener.getPosition();
    for (int i = 0; i < sound.length; i++) {
      position = sound[i].getPosition();
      if (dist(position[0], position[1], you[0], you[1]) != 0) {
      } else {
        you = listener.getPosition();
      }
    }
  }


  void reset() {
    if (keyPressed && key == 'r') {
      for (int i = 0; i < sound.length; i++) {
        //Reset each sound source.
      }
    }
  }

  void debug() {
    //Move the listener around with i/j/k/l for testing.
    if (keyPressed && key == 'l') {
      acc.x += 1;
    }
    if (keyPressed && key == 'j') {
      acc.x -= 1;
    }
    if (keyPressed && key == 'k') {
      acc.y += 1;
    }
    if (keyPressed && key == 'i') {
      acc.y -= 1;
    }
  }

  void backnoise() {
    gain = map(abs(vel.x + vel.y), 0, 40, .005, 1);
    if (vel.x > 1) {
      offsetx = 1;
    } else if (vel.x < -1) {  //mirror the positive threshold so small velocities leave no offset
      offsetx = -1;
    } else {
      offsetx = 0;
    }
    if (vel.y > 1) {
      offsety = 3;
    } else if (vel.y < -1) {
      offsety = -3;
    } else {
      offsety = 0;
    }
  }

  public void stop() {
  }
}


Smiling Statues

Smiling Statues was a project in which I made 91 smiling statuettes and distributed them around New York City (and elsewhere). Each Smiler had a note attached, asking the finder to tell me how they found it and to let me know the Smiler was safe. I asked people to submit a photo and their story via e-mail or on the website, www.smilingstatues.com.

From a storytelling perspective — and an academic one — I was trying to give people gifts first, in hopes of getting a story from them. In most instances, storytellers ask for a story, and then give a “payment” in the form of that story being presented in a beautiful way. In addition, when we ask for stories, it’s often very direct; we’re asking questions. But I wanted these statues to create narratives on their own, and I wanted people to relate their stories within that narrative. I thought this object and this event of finding a statue would give people an entryway into telling an interesting story — and a chance to enjoy themselves.

That said, I had a more extensive reflection on the actual Smiling Statues website, which I’ve pasted below.


In the first few days of statue distribution, I stumbled upon a homeless guy on 4th Street and Avenue A. He asked me what I was carrying, and I immediately thought: “There’s no way he has computer access or a camera.” So I told him it was nothing, gave him a dollar and walked away.

I went into this project wanting to bring joy to people, but it quickly became about getting a return on investment. I wanted as many statues as possible to return stories on this website. I would leave a dozen statues in various locations, and I’d be disappointed to see just one story come back from the day’s work. It made me wonder: Was it worth it? All the work in creating these little guys? It was hard not knowing where my statues ended up.

I don’t think it was bad to want more stories from people. But it made me forget the original intent of the project: to make people smile. On the last day of distribution, Easter Sunday, I was walking on 42nd Street, near Grand Central, and a homeless woman asked for change. I gave her a statue, and she reluctantly grabbed it. I guess it was my attempt at redeeming myself, but I knew it wasn’t what she wanted or needed. Smiling Statues can’t feed anyone.

So even then, I wondered: Was it worth it? Not only were these Smilers practically useless, but even in the context of my project, it wasn’t a huge success. I made 91 statues, and I only knew where 14 of them were — 15, counting the one I gave to that homeless woman. That meant I lost 76 of the carefully molded, colorfully painted and precisely smiled statues.

After the homeless woman examined the Smiler, she looked up at me. And as I walked away, she said, “Oh! Thank you!” And she smiled.

Smiling Statues are just a little bit of clay and paint molded in a specific way. They don’t do anything and they are worth, in essence, pennies. But to think that something so small — so insignificant — can cheer someone up, or make them forget for just a second that they don’t have a place to sleep at night… it’s like alchemy: spinning smiles out of virtually nothing. But the most beautiful part is that we don’t need Smiling Statues to do that. These little guys are just the perfect excuse.

Other Observations

Gender: When people referred to the Smilers, almost everyone used masculine pronouns. It’s very interesting that the assumption was that these statues were males. Most of them had no distinctive features, but maybe it’s because none of them had long hair? Or maybe it was the color that determined it? Or maybe I’m overanalyzing?

Globetrotters: I was tracking the web stats on the site, and I’m convinced these Smilers made it to multiple states. As far as I know, they made it to New York, New Jersey, Massachusetts, Connecticut, Rhode Island and Washington. In addition, there were visitors to the site from Hawaii, California, Nevada, Colorado, Kansas, Minnesota, North Carolina, Indiana, Florida, Illinois, Virginia, Pennsylvania and Maine. Also, there were visitors from Canada, Mexico, the United Arab Emirates and England.

Authority of stores: I dropped off about 25 of them in stores. And none of them returned. Now, when I dropped them off in stores, I felt very uncomfortable — almost as if I was stealing something. So my assumption is that when people saw them in stores, they were hesitant to pick up and take anything without paying. Because, in stores, we generally don’t take things for free.

Eye aperture: Only one of the Smilers left on the streets returned. My guess is that it’s because, when we’re outside, our eyes are focused on things far away, so we don’t notice small objects, no matter how colorful they are. In addition, I think Smilers left on the street could’ve been mistaken for junk, because New York City isn’t all that clean.

Sit down, relax: I had the most success at coffeeshops and fast food places. These are locations in which there isn’t a waiter or waitress clearing tables, and people sit down and relax for a long period of time. So this gives them time to open up the note and read it.

The people it attracts: A friend mentioned that, if I had put down swanky-looking envelopes instead of colorful statues, I would’ve baited a whole different crowd of people. I think a lot of young kids and artistic-type people picked these up, and the statues elicited a certain playful — or cathartic — response. But if the bait itself had been less cheerful and whimsical, the response probably would’ve been different.

Smiling Statues II: I think I may do this again in the summer, just for fun. I think people need a little random joy in their lives, and Smilers offer that. In addition, I think the making of Smilers can be a community thing. It’s not so hard, or expensive, to gather people to mold and paint these guys. And it’s a lot of fun to distribute them in random places.