ilteris kaplan blog

Programming from A to Z final proposal

April 05, 2006

I have been jumping from subject to subject too much lately, and taking in too much input has created a bottleneck in my head. Here is what I talked about in class for my proposal:

quote: “data is everywhere. information?”

scenario 1: The user walks into the room and sees a screen full of data from various web sources that looks meaningless to him. Once he is in range of the sensing camera, he sees himself inside the screen as a silhouette covered with this flow of information. As he moves, the data becomes something more “meaningful” to him. I am looking for ways to get certain inputs from the user and/or ways to create this “meaningful” information.

scenario 2: The information on the screen is related to the physical characteristics of the user. The sensing mechanism calculates height, body shape, the colors of the user’s clothes, brands, and spits out information accordingly. Advertising? Is this really what the user wants? What would our reaction be to a strange machine inspecting us and our privacy, while every day we are being inspected by the eyes of everyone around us?

scenario 3: The information sources could be constrained and classified (bloody news, happy news), with certain mappings. The red shirt you wear brings up the Wikipedia entry about red shirts, and so on: getting cues from the audience and spitting out information accordingly. What kind of information: historical, geographical, biological, horoscope?

idea: We are surrounded by lots of data that is meaningless to us. Is there a way to make this data more meaningful for us? Do we want this data to be more meaningful? What kinds of data are more attractive to us than others? Is there any way to reveal certain patterns in user behavior that give cues about the information they are looking for? Could these patterns improve the quality of the information sources over time?

That was last week; I still feel something is missing in my project that I’d like to make users aware of. Here are the steps I am going to take in this project:

1 - Get a screenshot of the user’s image through a camera and try to extract cues related to the user. The colors of what he/she wears and his/her height look the most appropriate, since I am stressing that this information should not be gathered through his/her intentions. Why? By getting this information without the user’s notice, I am suggesting the information is already there whether we want it to be or not. Unfortunately, this still gives me very little, very abstract information.

2 - Get the average color value of the image. Go to Flickr and mine random images that have similar average color values, then get their tags and display them on the screen. This is a little bit tricky and is still not the final thought. What I am trying to question is: could the information be related to its owner while capturing only this little bit of data, data put out without his/her intention? The problem I will likely run into is that the users may not be able to connect what they see on the screen with themselves, and the work will remain only conceptual. So I need a strong element that connects those two. I might address this in Processing by adding something that follows the user’s path.
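As a first stab at the matching idea in step 2, a nearest-average-color comparison could look like the sketch below. This is plain Java rather than Processing, and the candidate colors are made-up placeholders standing in for whatever average colors the mined Flickr images would have:

```java
// Rough sketch: rank candidate images by the Euclidean distance between their
// average RGB color and the user's average color. The candidate values below
// are made-up placeholders, not real Flickr data.
public class ColorMatch {

    // straight-line distance between two RGB colors (each channel 0-255)
    static double distance(int[] a, int[] b) {
        int dr = a[0] - b[0], dg = a[1] - b[1], db = a[2] - b[2];
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // index of the candidate whose average color is closest to the user's
    static int nearest(int[] user, int[][] candidates) {
        int best = 0;
        for (int i = 1; i < candidates.length; i++) {
            if (distance(user, candidates[i]) < distance(user, candidates[best])) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] user = {180, 40, 40};        // e.g. a mostly red shirt
        int[][] candidates = {
            {200, 50, 30},                 // reddish
            {30, 30, 200},                 // bluish
            {120, 120, 120},               // gray
        };
        System.out.println("closest candidate: " + nearest(user, candidates));
    }
}
```

Plain Euclidean distance in RGB is crude; measuring distance in HSB space instead might match perceived color better.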

3 - Starting with random images on Flickr doesn’t seem like a good idea to me at this point. At least there has to be some connection: why Flickr, and why start with random A as opposed to random B? Those questions are still waiting to be answered. I have also come across a project called Open Mind Common Sense. The page seems to be down, but there is an article on KurzweilAI.net by Push Singh. This is really parallel to what I am trying to achieve: I am after text, images, etc. that are already out there, related to us. Check out this first paragraph from the article:

Why is it that our computers have no grasp of ordinary life? Wouldn’t it be great if your search engine knew enough about life so that it could conclude that when you typed in “a gift for my brother”, it knew that because he had just moved into his first apartment that he could probably use some new furniture? Or if your cell phone knew enough about emergencies that, even though you had silenced it in the movie theater, it could know to ring if your mother were to call from the hospital? Or if your personal digital assistant knew enough about people that it could know to cancel a hiking trip with a friend who had just broken a leg?

4 - Wouldn’t it be great if we could search text by its affective emotions and color-code the text accordingly?
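To make step 4 concrete, here is a toy sketch of the idea, again in plain Java: a tiny hand-made word-to-emotion lexicon that assigns each word a color. The lexicon entries and the color choices are my own placeholder assumptions, not an existing affect library:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of "search text by affective emotion and color-code it":
// look each word up in a hand-made lexicon and map it to a display color.
public class AffectiveColors {

    static final Map<String, String> EMOTION_COLOR = new HashMap<>();
    static {
        // hypothetical mappings: emotionally loaded word -> display color
        EMOTION_COLOR.put("happy", "yellow");
        EMOTION_COLOR.put("bloody", "red");
        EMOTION_COLOR.put("calm", "blue");
    }

    // returns the color for a word, or "gray" for words not in the lexicon
    static String colorFor(String word) {
        return EMOTION_COLOR.getOrDefault(word.toLowerCase(), "gray");
    }

    public static void main(String[] args) {
        String[] text = "happy news and bloody news".split(" ");
        for (String w : text) {
            System.out.println(w + " -> " + colorFor(w));
        }
    }
}
```

A real version would need a much larger lexicon (or something like the Open Mind Common Sense data) instead of three hand-picked words.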

Technically, I started with baby steps, and right now I can get the average RGB / HSB values of an image without any problems. Here is my code:

import java.awt.Color;  // for Color.RGBtoHSB

PImage b;

void setup() {
  size(200, 150);
  b = loadImage("deneme.jpg");
  b.loadPixels();

  int rsum = 0;
  int gsum = 0;
  int bsum = 0;

  // pull the 8-bit red, green, and blue channels out of each packed pixel
  for (int i = 0; i < b.pixels.length; i++) {
    int redk   = (b.pixels[i] & 0xFF0000) >> 16;
    int greenk = (b.pixels[i] & 0x00FF00) >> 8;
    int bluek  =  b.pixels[i] & 0x0000FF;
    rsum += redk;
    gsum += greenk;
    bsum += bluek;
  }

  int ravg = rsum / b.pixels.length;
  int gavg = gsum / b.pixels.length;
  int bavg = bsum / b.pixels.length;
  println(ravg);
  println(gavg);
  println(bavg);

  // RGBtoHSB expects channel values in 0-255, so pass the averages, not the sums
  float[] hsb = Color.RGBtoHSB(ravg, gavg, bavg, null);
  // println(hsb[0]);

  image(b, 0, 0);
}

So for my next step, I should figure out how to mine images on Flickr and get their tags. I still need to think about how I can reveal sensible information and connect it with the users.


Written by Ilteris Kaplan who still lives and works in New York.