Choicelessness

Lost in Translation

Posted in art, programming, projects, technology by johnsnavely on May 21, 2009


MonaTweeta II, originally uploaded by Quasimondo.

Skitch delicioused me this project, which I think is pretty cool. Basically, the challenge was to compress an image into a 140-character tweet. The image's description explains the process in more detail:

Preliminary result of a little competition with the goal to write an image encoder/decoder that allows to send an image in a tweet. The image on the left is what I currently manage to send in 140 characters via twitter.

This is the tweet for the image:
圑嘌婂搒孵怤實恄幖戰怴搝愩娻屗奊唀唭嚟帧啜徠山峔巰喜圂嗊埯廇嗕患嚵幇墥彫壛嶂壋悟声喿墰廚埽崙嫖嘵奰恛嬂啷婕媸姴嚥娐嗪嫤圣峈嬻尤囮愰啴屽嶍屽嶰寂喿 嶐唥帑尸庠啞彐啯廂喪帄嗆怠嗙开唅恰唦慼啥憛幮悐喆悠喚忐嗳惐唔戠啹媊婼捐啸抃岖嗅怲幀嗈拀唹坭嵄彠喺悠單囏庰抂唋岰媮岬夣宐彋媀恦啼彐壔姩宔嬀

I am using Chinese characters here since in UTF-8 encoding they allow me to send 210 bytes of data in 140 chars. In theory I could use the whole character code range from 0x0000-0xffff, but there are several control chars among them which probably could not be sent properly. With some tweaking and testing it would be possible to use at least 1 or 2 more bits, which would allow me to sneak 17 or 35 more bytes into a tweet, but the whole encoding would be way more nasty and the tweets would contain chars that have no font representation.
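Quasimondo hasn't published the encoder, but the character trick by itself is easy to sketch. Here's a minimal, hypothetical version in Python, assuming 12 bits of payload per character mapped into the CJK block starting at U+4E00 (so 140 chars x 12 bits = 210 bytes); the real encoder's packing may well differ:

```python
# Sketch: pack arbitrary bytes into CJK characters, 12 bits per character.
# Assumes a payload of at most 210 bytes (140 chars * 12 bits); the real
# encoder's mapping and bit budget are not published and may differ.

BASE = 0x4E00  # start of the CJK Unified Ideographs block (more than 4096 code points wide)

def encode(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 12)  # pad to a multiple of 12 bits
    return "".join(chr(BASE + int(bits[i:i + 12], 2)) for i in range(0, len(bits), 12))

def decode(text: str) -> bytes:
    bits = "".join(f"{ord(c) - BASE:012b}" for c in text)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

tweet = encode(b"hello, mona lisa")
assert decode(tweet).startswith(b"hello, mona lisa")
print(len(tweet), tweet)
```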

Besides this char hack there are a few other tricks at work in the encoding. I will reveal them over time. For now I just mention the difficulties involved here:

A typical RGB color needs 24 bits, which is 3 bytes. This means if you just stored raw colors you could send 70 colors. Unfortunately you couldn't send anything else. At least that would allow you to send a 7×10 pixel matrix.

The worst way to store one full x/y coordinate would be 2 times 4 bytes, which is 26 coordinates in one tweet. That's 8 triangles. Obviously you have to make some concessions on precision here. 2 bytes per number maybe? That gives you 52 points or 17 triangles. Unfortunately those come without color info.
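To make those budget numbers concrete, the arithmetic (again assuming the 210-byte payload from the character packing above) works out like this:

```python
# Back-of-the-envelope byte budget for a 210-byte payload (not the actual encoder).
payload_bits = 210 * 8

rgb_colors      = payload_bits // 24          # 70 raw 24-bit colors and nothing else
raw_points      = payload_bits // (2 * 32)    # 26 points at 2 * 4 bytes each
raw_triangles   = raw_points // 3             # 8 triangles, no color
short_points    = payload_bits // (2 * 16)    # 52 points at 2 bytes per number
short_triangles = short_points // 3           # 17 triangles, still no color

print(rgb_colors, raw_points, raw_triangles, short_points, short_triangles)
# -> 70 26 8 52 17
```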

What I like about this project, other than the fact that you can send an image (albeit a pretty lo-res one) via twitter, is the unintentional text that's generated from the compression. In this case the compression has to stay in the realm of text and therefore is still "readable". In the comments for the image, one fan of this project has translated the Chinese characters that encode the Mona Lisa:

The whip is war
that easily comes
framing a wild mountain.

Hello, you in the closet,
singing–posing carved peaks
of sound understanding.

Upon a kitchen altar
visit a prostitute–
an ugly woman saint–
who decoys.

Particularly
lonesome mountain valley,
your treasury: a dumb corpse and
funeral car, idle choke open.

Reclassification:
exactly what you would call nervous.
Well, do not suggest recalcitrance
those who donated sad.

The smell of a rugged frame
strikes cement block once.

Where you?
Cape. Cylinder. Cry.

It’s nice to see digital art that has multiple readings which are dependent on the medium itself. We still use the words “images” and “text” when we’re talking about the digital analogs of real world media.

But maybe they are qualitatively different?

Ceci n’est pas une pipe

Posted in personal, programming, projects, technology by johnsnavely on April 15, 2009

As part of my hiring package at Microsoft, I got a very modest stock award. Those of you who know me also know that I have never owned any stocks in my life and also usually don’t have any savings.

However, these are tough economic times and I’d at least like to keep track of how the stock I have (it’s not much and it’s all in one company) is doing. I’d also like to keep track of it with a daily reminder, a daily notice that fits in with my other daily activities. For me this means twitter.

What I want is something pretty simple: a twitter account that I can follow that will update me on Microsoft's stock price daily. Now there are a number of twitter stockbots out there. Generally, however, you have to ask them for a stock quote (which defeats the whole push model of twitter to begin with). After searching for 5 minutes on the internet and not finding a solution, I decided to build my own.

I took an rss feed from QuoteRSS.com and then used TwitterFeed.com to tweet it to a new Twitter account. I think it’s all working, and it literally took about 10 minutes from start to finish. The only annoying part was having to create a new twitter account; this seems really dumb.
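If you'd rather script the fetch half than chain web services together, it's only a few lines. Here's a rough sketch; the feed URL is a made-up stand-in for the real QuoteRSS one, and the actual tweeting is still left to something like TwitterFeed:

```python
# Sketch: grab the latest entry from a stock-quote RSS feed and hand it to
# whatever posts the tweet (TwitterFeed did that part for me).
import feedparser  # pip install feedparser

FEED_URL = "http://example.com/quote.rss?symbol=MSFT"  # hypothetical, not the real QuoteRSS URL

def latest_quote(url: str) -> str:
    feed = feedparser.parse(url)
    if not feed.entries:
        raise RuntimeError("feed returned no entries")
    return feed.entries[0].title  # exact format depends on the feed

if __name__ == "__main__":
    print(latest_quote(FEED_URL))
```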

In the same way that I can build and manage my RSS feeds, I'd really, really like to be able to create virtual twitter accounts. Twitter isn't just about looking at other news sources or information outside of myself. Twitter should be able to deliver stuff that I can curate.

I need a Yahoo Pipes for Twitter.

Populations

Posted in architecture, programming by johnsnavely on April 9, 2009

As promised, here are a few snapshots of the work done in the two-day Rhinoscripting Workshop in Portland. (Scripts are on the way.)

[Images: Raha Talebi]

[Images: Robert Petter and Darin Harding]

[Images: Peter Burns]

More pictures here.

Modus Ponens

Posted in programming, work by johnsnavely on March 1, 2009

Last Friday, the video that I’d worked on with my team (the Envisioning Team in Office Labs) had its first public showing.

Stephen Elop, President of the Microsoft Business Division, showed the video at a conference at the Wharton School of Business. His speech, and the video, can be viewed here:

http://www.microsoft.com/presspass/presskits/Officesoftwareplusservices/vision.mspx

Our team also built the pan-and-zoom software for his presentation. Soon there will be a hi-res public version of the video. Eventually, I expect a version of the presentation software to be available too. In the meantime, Office Labs has built pan-and-zoom plug-ins for PowerPoint and OneNote, which you can download for free. (At some point, I'd like to post some images of the designs and rough renders I made for the hardware props in the video.)

Like many things, there’s a lag time between when things happen inside Microsoft and when they’re released to the public. Of course, the issues of productization and IP are complicated and some betas are too ugly to release into the wild. There’s also the intense criticism MS projects receive from the public. Reading the comments on the last Envisioning video on YouTube (which nobody should do) is intimidating. I guess it’s no worse than an architecture crit.

Microsoft, however, is getting better and better at opening its doors. The latency on the video was only a couple months.

There were a couple of projects that Stephen Elop mentioned in his speech that are public, but haven’t been made into products. I thought I’d call out two of them, both from Microsoft Research. Even though these videos are kind of old and the technology simple, there are some really smart ideas in there: the interactive applications/implications are (imho) pretty exciting.

Here's NanoTouch (I played Unreal Tournament on this device and it was sweet):

And SecondLight:

These projects represent a very small part of the many, many projects at Microsoft. Part of what I do is try and find connections between things, ask how our daily life might be affected by certain technological shifts, and listen to how people are already creating their own ways of working. It’s a fallible process, certainly, but there’s a lot of value in the questions themselves.

Addendum: Looks like NYTimes covered the same projects at Techfest! (thanks, vrex)

Recycling Part II

Posted in architecture, personal, programming, projects by johnsnavely on February 22, 2009

As I mentioned a post ago, I was throwing old work up onto flickr. One project was a bird house, the other was a set of stairs and a copper wall.

It was the first architectural project I ever worked on.


I had just learned Rhinoscript and that was my major contribution to the project. Ann Pendleton-Julian and Alex Tsamis brought me in. They wanted to know if I could turn an involute into a staircase.

So I scripted one.

[Images: stair4, stair2]

It was an exterior stair for the rear entrance, the one closest to the garage.


The stair was tucked into the innards of the building but still exterior thanks to a copper wall that wrapped deeply inward.

[Image: copper wall layout]

Ann thought it would be boring if all the copper tiles were the same, so I wrote another script that fit a set of tiles of varying sizes onto the surface. The script used large tiles in areas of little curvature and smaller tiles where the curve tightened.
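I haven't found the original Rhinoscript yet, but the gist of it, re-sketched in Python with made-up curvature thresholds and tile sizes, was roughly this:

```python
# Sketch of the tiling idea (not the original Rhinoscript): walk along the surface
# and pick a tile size from the local curvature -- big tiles where it's flat,
# small tiles where the curve tightens. All the numbers here are invented.

TILE_SIZES = [(0.05, 4.0), (0.15, 2.0), (0.40, 1.0)]  # (max curvature, tile width)
MIN_TILE = 0.5

def tile_width(curvature: float) -> float:
    """Return the largest tile that tolerates the local curvature."""
    for max_k, width in TILE_SIZES:
        if abs(curvature) <= max_k:
            return width
    return MIN_TILE

def layout_row(curvature_at, row_length: float):
    """Lay tiles end to end along one row; curvature_at(u) samples the surface."""
    tiles, u = [], 0.0
    while u < row_length:
        w = tile_width(curvature_at(u))
        tiles.append((u, min(w, row_length - u)))
        u += w
    return tiles

# Toy example: curvature ramps up along the row, so the tiles shrink toward the end.
print(layout_row(lambda u: 0.02 * u, 20.0))
```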

[Image: copper tiles]

If I can find the scripts I used I’ll post them as well.

Really Simple

Posted in culture, programming, technology, web, work by johnsnavely on February 17, 2009

A few days ago, I gave a little presentation at work on how I use my RSS feeds. Most of it was stuff I’ve learned from T-Bone. Today, T shot me a tweet asking if I had written any of it down. Which I had not. So I thought I’d try.

First, most of this is going to be old hat for people who read this blog. Most of you probably have better solutions than me or are using software that I'm still late to the party for. Anyway, here goes:

Several years ago, I was completely ignorant of RSS and feed readers. I had sites I was interested in: blogs, news sites, social networking sites (Friendster!), and so on. Many of them I would check daily for new updates. Then T introduced me to Google Reader, an RSS feed reader. With a feed reader (there are many out there; I use feedly these days), I read updates to websites as if they were emails in an inbox. This means I can check all those sites in one place and only when there's new stuff! (Switching my browsing from pull to push.)

But RSS isn't just the content of blogs. All manner of things come in the RSS flavor. For example, any search on Craigslist (and eBay too, although it's harder to find) can be saved out as a feed. This is how I found my apartment here in Seattle. I went to Craigslist, searched for Fremont / apartments / 1+ bedrooms / price range / dogs, and stored the resulting RSS in my reader. Whenever a new listing appeared, it showed up directly in my reader; I didn't have to check Craigslist. I stored several searches in different neighborhoods and called them as soon as listings came up. I got the house I'm renting now because I "was the first to call".

I still use this technique for shopping. I'll create some search feeds on eBay and Amazon for stuff I want at a price I want; if there's a hit, I see it in my feed reader. Easy! And you can do it with jobs, services, and *ahem* dates, if you're into that.

Another type of site whose feeds I keep track of is the social networking site. I'm not a huge fan of facebook, but I do like the updates, so I grab the updates as a feed. For my closer friends, I track their twitter updates, flickr photos, delicious links, locations (with dopplr), etc. Delicious is a nice site because, like Craigslist, every page has a feed. You can follow tags, people, people networks, or combinations of those. Using a feed reader, I can finally unify the content that all these disparate social networks are supposed to connect me to anyway. I can also know if someone sends me a link in delicious or comments on my photos... those are feeds too. Now I don't actually have to go to the site to know what's happening; all that information comes to me.

But what if you want something different from what a given feed can offer? Say you like Slashdot, but the feed has a ton of posts that you're never going to read. It would be great if you could filter them. Luckily you can use something like Yahoo Pipes or MS Popfly. These web services take RSS (and things that aren't RSS but can be converted) and let you use a graphical programming language to manipulate a data stream into an RSS feed that you can be happy with. (T has a great little tutorial on Pipes.)
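(If Pipes or Popfly feel like overkill, the same kind of filter is a few lines of script. A rough sketch; the keywords are only an example, and the Slashdot feed URL may have moved since this was written:)

```python
# Sketch: filter a busy feed down to the posts you actually care about.
import feedparser  # pip install feedparser

KEYWORDS = ("linux", "security", "nasa")  # example filter terms

def interesting(url: str, keywords=KEYWORDS):
    feed = feedparser.parse(url)
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(k in text for k in keywords):
            yield entry.title, entry.link

for title, link in interesting("http://rss.slashdot.org/Slashdot/slashdot"):
    print(title, link)
```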

The last thing I do with feeds is package up my own. I use Swurl, which isn't great, but it does the trick. Now I've got a feed with my blog posts, tweets, delicious links, netflix queue, etc.: basically everything I'm spamming out onto the interwebs. I put that feed back into my reader. Now when I want to search for something I've forgotten or should know, I search my reader instead of a search engine or Delicious. My reader has all of my stuff and all the stuff of the people and places that I care about.

These days I’m trying to do some of the same sort of things– taking streams and modifying them– with Twitter’s version of a reader: TweetDeck. The flow is surprisingly similar in places. We’ll see how it turns out…

Problem, Set, Match

Posted in architecture, hobbies, personal, programming, projects by johnsnavely on January 11, 2009

I’m working on a little project that has to do with architecture, but I’m stuck. (Because I’m stupid at math and never took that darn linear algebra course in college.)

So I'm appealing to all my readers who, in fact, are smart. Here's a math problem for you.

Let's say I have a ruler and a camera (which I do). I can take a picture like this:

Fig 1 A

I’ve marked each inch with a little “x”. If we were to look just at the x’s the picture would look like this:

Fig 1 B

Although the points are rotated slightly, since the ruler is parallel to the base of the camera's view frustum (ignoring lens distortion), the points are equidistant. If I wanted to figure out what kind of line this was just from these dots, I could easily fit the points to a curve (in this case a straight line) and rotate them straight, to make it look like this:

Fig 2
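For the record, the easy case really is that easy. A quick sketch with numpy, using made-up pixel coordinates for the x marks:

```python
# Sketch of the easy case: fit a straight line to the marked points, then rotate them flat.
import numpy as np

pts = np.array([[10, 12], [30, 22], [50, 32], [70, 42]], dtype=float)  # made-up x marks

slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)   # least-squares straight line
theta = np.arctan(slope)                                  # angle of the fitted line
rot = np.array([[np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])         # rotation by -theta
flat = pts @ rot.T

print(np.round(flat[:, 1], 3))            # the y values are now (nearly) constant
print(np.round(np.diff(flat[:, 0]), 3))   # and the spacing along x stays equal
```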

But what about this case? Where the ruler is in perspective…

Fig 3 A

The dots would look like this:

Fig 3 B

Here the dots are not equidistant. However, I know that this is a straight line and since the distances shrink proportionately each time, I can again redisplay this curve on a plane (again, a straight line) like this (Fig 2):

Fig 2

Now comes the hard part! What if I’ve got a ruler that looks like this?:

Fig 4 A

Dots like this:

Fig 4 B

Can I flatten those to this?:

Fig 5

So the question is: given a set of points like Fig 4B, and assuming the points are equidistant and the curve lies in a single plane (i.e. isn't three-dimensional), can you transform the points onto a "flat" plane where the distance between the points is actually the same?

Some issues:

1.) Since the ruler’s curved, the points are no longer exactly equidistant, but we should assume that they are. (I think, in fact, that the problem is unsolvable if we don’t.)

2.) Without more than one view or camera information, the curve could be reflected. So there are actually two solutions for every input set of points.

3.) This isn’t a 3d to 2d transformation per se, really just a 2d to 2d transformation.
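For what it's worth, here is one way I can imagine attacking it, though I make no claim that it's right: treat the unknown map from the image plane to the ruler's plane as a 2D homography (issue 3 says it's plane-to-plane) and solve for the homography that makes the spacing come out equal. With only a handful of marks this is badly under-constrained, it bakes in the assumption from issue 1, and it does nothing about the reflection in issue 2, so read it as a sketch of the problem rather than a solution:

```python
# Sketch, not a solution: solve for a 2D homography H that maps the image points
# to points with equal spacing (the target spacing is fixed at 1 to pin down scale).
# Assumes the marks are exactly equidistant (issue 1); the reflection ambiguity
# (issue 2) is not resolved -- least_squares just lands on one of the two answers.
import numpy as np
from scipy.optimize import least_squares

def apply_h(params, pts):
    H = np.append(params, 1.0).reshape(3, 3)          # 8 free parameters, H[2,2] fixed at 1
    homog = np.c_[pts, np.ones(len(pts))] @ H.T
    return homog[:, :2] / homog[:, 2:3]

def residuals(params, pts):
    flat = apply_h(params, pts)
    gaps = np.linalg.norm(np.diff(flat, axis=0), axis=1)
    return gaps - 1.0                                   # every gap should be one "inch"

def flatten(image_pts):
    x0 = np.array([1, 0, 0, 0, 1, 0, 0, 0], float)      # start from the identity map
    fit = least_squares(residuals, x0, args=(image_pts,))
    return apply_h(fit.x, image_pts)

# Toy input: roughly what the x marks of Fig 4B might look like, in pixels.
marks = np.array([[12, 88], [25, 77], [40, 68], [56, 62], [72, 59], [88, 59],
                  [103, 62], [117, 67], [130, 74], [141, 83]], float)
print(np.round(flatten(marks), 2))
```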

Auto Raves

Posted in books, culture, programming, technology by johnsnavely on December 13, 2008
Wikipedia: Cover of Gibson's Neuromancer

In a recent conversation, Johnny Lee mentioned (and I think he was referring to Desney Tan's work) that sensors could be all over the place and still not give us all the information we need. For example, we could have a camera in our car, but it might be hard for computer vision to recognize an "accident" ahead. But if we placed sensors on someone's body, we might be able to record their heart rate or adrenaline as they "sensed" the accident ahead. We could sense a lot of information through the body.

The first thing I thought of in the back of my mind was Amazon's Mechanical Turk. Billed as "Artificial Artificial Intelligence", Mechanical Turk basically provides an interface and a market for dividing up tasks that computers find hard into small Human Intelligence Tasks for people to do. For example, if you come up with a question that fits a given statement, you'll get $0.03. Spammers have used this service to bypass the "test" that checks whether you're a real person on blogs and forums.

But one big problem with Turk is that the HITs generally require conscious effort, which is slow and time-consuming for both Workers and Requesters... I think you see where I'm going here.

People hooked up to sensors could “automatically” be turked, selling data gathered subconsciously (thanks for the link T). For example, I want to do a quick test of a new ad campaign. I flash the ad up to some Workers. They just look at it for a second. I gather bio-feedback. And we’re done! They get some pennies in their accounts. I get results instantaneously.

The whole thing feels sorta Neuromancer-like, which means it’s probably going to happen.

Retro

Posted in art, programming, projects by johnsnavely on September 12, 2008

My former teacher and mentor, Meejin Yoon, is up for tenure review. Since we can’t vote on the matter, cross your fingers or something for her!

Earlier this week she asked me for some images and scripts of old work I'd done in her studio many years ago. Sadly, I've had a few hard drive crashes since then, so all I could dig up was a buggy script and no images.

So last night I fixed the script (hacked) and made a few new images. The script is a pretty poor implementation of an L-System generator in Rhinoscript. But I’ll post the full script to the scripting blog this evening. Maybe someone will find it useful.
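In the meantime, the core of any L-System generator is tiny. Here's a sketch in Python, using the textbook "fractal plant" rules and angle rather than whatever the studio script actually used:

```python
# Sketch of an L-system expander plus a turtle interpretation of the result.
# The axiom, rules, and angle are the classic "fractal plant", not the studio script.
import math

RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
ANGLE = math.radians(25)

def expand(axiom: str, depth: int) -> str:
    s = axiom
    for _ in range(depth):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def to_segments(commands: str, step: float = 1.0):
    """Turn the expanded string into line segments: F draws, +/- turn, [ ] push/pop."""
    x, y, heading = 0.0, 0.0, math.pi / 2
    stack, segments = [], []
    for ch in commands:
        if ch == "F":
            nx, ny = x + step * math.cos(heading), y + step * math.sin(heading)
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "+":
            heading += ANGLE
        elif ch == "-":
            heading -= ANGLE
        elif ch == "[":
            stack.append((x, y, heading))
        elif ch == "]":
            x, y, heading = stack.pop()
    return segments

print(len(to_segments(expand("X", 4))))  # number of branch segments at depth 4
```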

This has been a Rhino week. On Wednesday, I met with David Rutten, who's just as brilliant as you might expect from his work. He's an architect turned programmer (ahem), but he's one of those cases where he's actually a very, very good programmer. He showed me a beta version of Grasshopper, formerly called the Explicit History plug-in, which is a visual programming language (using boxes and arrows, much like Yahoo Pipes) for parametrics in Rhino. (Tim, you should check it out; it reminds me of some of the work you've done with a Pipes-like interface.)

My first project might be to remake these L-Systems in Grasshopper.

Anyway, here are the pretty pictures. I hope they help Meejin get what she deserves– sweet sweet tenure. (The full set of pics is here.)

Comestibles

Posted in art, books, fashion, programming by johnsnavely on March 23, 2008

First, an addendum: The Van Cliburn YouTube competition is for 35 and older only. Double poo! My praise has been redacted.

As a reward that I had been planning for a very long time, K and I had dinner last night at TW Food. It was restaurant week in Boston, but their special menu was so abbreviated that it seemed a travesty not to hit the seven-course Winter Grand Tasting. We tapped that. It was the best meal I've had in a looong time. Three hours of delicious food and great wines. Awesome.

In other news, Chessgames.com has updated their viewer. Now, while watching a game (like a movie), you can pause and play out different lines of the game yourself, in the position of Byrne or (sigh) Bobby Fischer. I wish more media experiences incorporated the idea of the “choose your own adventure”.

A long time ago, when I worked at AMNH, we recorded a fly-thru of the known universe with cue points that allowed a user to get off the "spaceship" and look around. (Not the most beautiful thing I've ever made, but an interesting experiment.) While this type of thing may be considered a "non-linear" narrative, the novelty of the experience is actually how the user constructs a very linear pathway through the interactive. In fact, the user wants linearity as much as possible in order to organize and understand what they are seeing.

These days we're brainstorming a project whose main conceptual twist is audio and video with two separate narratives. The video is an idealized world; the audio, the purgatory of a mundane life. The problem with fashioning such a "non-linear" (or duo-linear) narrative is that a viewer automatically tries to reconcile the two stories into a single understandable story. For example, showing a radio alarm clock but playing a ringing bell makes the viewer think that there's another alarm clock off-screen, not that the audio might be telling another story.

Even as it's foiling my plans, there's something fascinating about this desire for single, linear, understandable narratives. According to K, Ricoeur has a crapload to say about Time and Narrative. I guess I have some reading to do. In the meantime, here's a lovely quote that aptly describes the lowbrow books I am actually reading.

Then, too, narration includes prophecy in its province to the extent that prophecy is narrative in its fashion.
Paul Ricoeur

Continuing my quest to have some sort of aesthetic position on “the future”, I’ve been reading a fair amount of science fiction. Of course, the reading list includes the Hugo-Nebula award winners, but also some pulpy losers from the 60s and 70s. The fonts, graphics, and yep, even the writing are nuts.

A Hugo award winner, for instance Dune (one of my favorite books– and Lynch films– of all time), has one of the traits of a timeless work of art, namely that it is timeless. Reading it today, it is as fresh and unusual as when I read it 15 years ago.

Sci-fi pulp, on the other hand, is of its time, and has many recognizable idiosyncrasies of the culture, time, and place in which it was written. Perhaps this just creates some sort of hot tranny mess, but maybe, when we’re looking to sample styles, the obviousness of these expressions is an asset. (Prada’s gorgeous new look isn’t about the subtle 70’s.)