Choicelessness

Messy is Neat

Posted in technology, work by johnsnavely on November 18, 2011

I've added a few images from the Productivity Vision Video that I mentioned in my last post to my portfolio.

The videos have a complex set of criteria. On the one hand, marketing folks often use them to give customers a sense of what we think the future might hold. On the other, we're trying to inspire product groups within the company to take a fresh look at things. For the interfaces, we try to have a level of detail that is unusually well thought out for a video, but that still reads like a sketch to those who work in interface design. Since we are not a product group, we have a delicate dance to do; we're not showing the next version of the product and we can't give away company secrets, but we still need to build something that's actually relevant. It's a lot to pack into 5-6 minutes.

I thought I’d give a little behind-the-scenes the thinking process that goes into a few seconds of the movie…

In the home scene, Shannon (the little girl) exits from her math homework and goes to her bake sale scrapbook. The sequence takes about 5 seconds (4:50-4:55 on YouTube). We thought of Shannon's device as her digital notebook, a notebook that she took everywhere. There's a lot of precedent for an "OS" that uses the metaphor of books: most notably Microsoft's Courier device (in fact we studied some of the early design concepts for it), but also the Amazon book store, Apple's bookshelf, and OneNote for Office.

Most of these interfaces try to take the activity of reading and collecting books and organize it. Bookshelves can be beautiful and are great for displaying books, but when we’re actually working with open books, things look a little different. They might look like this:

Or this:

What would an interface look like that let me “spread out” all my books? What would the books be? How would I navigate it?

For all of Shannon’s digital content, we started to place everything we might traditionally think of as inside “apps” or “windows” as part of books instead. Social networking could be done in her yearbook, while instant messaging and SMS looks like a comic book (see the lower left). Search results are gathered into a book that she could save or dynamically filter. At the beginning of this scene, Shannon learns math from an bear-in-a-book who knows when she should take a break and work on a different project.

To contain all these books we thought a pan and zoom canvas would be ideal for the infinite space, much like the interface Blaise talks about in his TED talk on Deep Zoom & Photosynth. She could organize things spatially and build her own relationships between books– a story about Amelia Earhart sits next to a diagram of how a beetle flies, sits next to a page on how to fold an origami bird. (see the lower right).

As books are layered on, and there are quite a few of them, you might forget where you've placed something. Touch and hold brings up a contextual menu. And touch and hold plus speech can issue a command, like "Find my bake sale stuff," to search for that book.
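Under the hood, that voice command is really just a search over the books' metadata. Here's a minimal sketch of what the matching might look like; the book titles, tags, and the "find my …" phrasing are all invented for illustration, since the prototype's actual model isn't something I can share:

```python
# Hypothetical sketch of the touch-and-hold + speech interaction.
# Strip the command down to its content words, then match them
# against each book's title and tags.

def find_books(command, books):
    """Match a spoken command like 'Find my bake sale stuff'
    against each book's title and tags."""
    words = {w for w in command.lower().split()
             if w not in {"find", "my", "stuff"}}
    hits = []
    for book in books:
        haystack = " ".join([book["title"].lower(),
                             *(t.lower() for t in book["tags"])])
        # A book matches if any remaining command word appears in its metadata.
        if any(w in haystack for w in words):
            hits.append(book["title"])
    return hits

books = [
    {"title": "Bake Sale Scrapbook", "tags": ["bake sale", "school"]},
    {"title": "Math Workbook", "tags": ["homework", "math"]},
]
print(find_books("Find my bake sale stuff", books))  # → ['Bake Sale Scrapbook']
```

The interesting design question isn't the string matching, of course; it's that speech plus touch scopes the search to "somewhere on my canvas," so a sloppy query can still land on the right book.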

There’s more little software secrets hidden in this scene and throughout the video. I wish we had more time to do a video on one of them at a time… I’d love to explore the interaction model we built for the phones, for example.

New Work

Posted in work by johnsnavely on November 5, 2011

Last week I finished up a big project: the latest Productivity Vision Video for Microsoft. It was released on YouTube and within a week generated nearly 2 million views. I was the Creative Director of the project, but it was a big team effort. I worked with awesome folks like Mason Nicoll, Hiroshi Endo and Ethan Keller and many more.

Here’s the video. It’s sparked some intense debates both positive and negative about the role of technology in our lives and that discussion is exciting to see.

Now that the project is finally finished, I've taken the opportunity to use my free weekends to update my portfolio. The video isn't in there yet, but I'll be posting some screenshots there and hopefully a few here. There's a lot of detail in the software and interface design that I'd like to share and get people's thoughts on…

Envisioning Lab

Posted in architecture, work by johnsnavely on July 15, 2010

A project I’ve worked on, The Envisioning Lab, opened a few months ago. We finally brought in a professional photographer (Paul Warchol) to take some pictures.

Here’s a couple:

More here.

Unbearable Liteness of Being

Posted in technology, work by johnsnavely on December 30, 2009

I was sitting in a brainstorm with some colleagues from Office Labs a couple of months ago. We were all talking about the various social networking softwarez we use. At one point, Jeremy, who had been writing while everyone else was chatting, described the two lists that he had written down. He'd made a list of all the web apps he signed up for in one column and then, in the other, the lighter, simpler version of each service that he actually used. Instead of twitter, destroyTwitter; instead of facebook, facebook lite; instead of Ta-Da list, he uses the iphone app; etc, etc, etc.

I thought he was onto something here… Although I'm not sure exactly where the "ah-ha" moment is, the idea of finding the simplest little app for assisting a task rang very true. Not just in how I think many people work that way on the web, but also in how the apps on the iphone have infected all these moments in our day. (There's an app for that.)

Underlying this trend there’s this idea that our lives are divisible, with each parcel of this division nicely paired up with a piece of technology. What I find lacking in this scenario of these apps-as-endpoints is that we end up without enough glue to hold them together. I’ve written a little bit about RSS (and Yahoo Pipes) as a glue that might allow you to string together endpoints with some amount of intention, but it’s not good enough. And with a mobile phone you get a set of destinations that don’t really connect well to one another.

What I want are applications that step into the flow of my life and talk to each other. For example, if I wanted to grab a beer with some friends tonight I could: use the Yelp app to find a bar (first suggesting ones that my friends already like), have my calendar event automatically notify my friends of the location, and, when I'm ready to call a taxi, not have to type in my location. Not that complicated, really. But difficult to do when all apps are in their own silos.
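The beer scenario above is really just a pipeline where each step hands its context to the next. A toy sketch, with every function, friend, and bar name invented for illustration:

```python
# Hypothetical "apps that talk to each other" flow. Each step passes
# context (friends, venue, location) along instead of making me
# re-enter it in a separate silo.

FRIEND_FAVORITES = {"alice": ["The Elysian"],
                    "bob": ["The Elysian", "Brouwer's"]}

def suggest_bar(friends):
    # Prefer the bar that the most friends already like.
    tally = {}
    for f in friends:
        for bar in FRIEND_FAVORITES.get(f, []):
            tally[bar] = tally.get(bar, 0) + 1
    return max(tally, key=tally.get)

def create_event(venue, friends):
    # The calendar event carries the venue, so notifying friends is automatic.
    return {"where": venue, "notified": list(friends)}

def call_taxi(event):
    # The taxi request reuses the event's location; no retyping.
    return f"Taxi to {event['where']}"

event = create_event(suggest_bar(["alice", "bob"]), ["alice", "bob"])
print(call_taxi(event))  # → Taxi to The Elysian
```

The glue here is trivial; the hard part is that the real Yelp, calendar, and taxi apps don't expose their state to each other at all.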

Not to be regressive, but I need some sort of aggregator, glue, dashboard, whatever you want to call it. Maybe we should care about our OS again?

Wave.wav

Posted in technology, web, work by johnsnavely on November 20, 2009

I wanted to write a blog post on Google Wave from within Wave and then publish it straight from wave neatly to my blog (as promised in the tutorial video). But this didn't happen. So instead I just cut and pasted what I sloppily wrote from wave here. Boo! On the wave were a number of coworkers who are interested in social media. But I've cut out their comments because there weren't many and to protect their privacy.

Here’s my very random thoughts on Google Wave after only using for short period of time

First let’s get the ugly out of the way.

The UI is pretty awful… not just in terms of looks, but pieces of it actually don't function correctly. (Scrollbars, I'm looking at you.) This is pre-pre-alpha stuff, needing frequent browser refreshes after many crashes. That said, it's getting better every week. And features show up daily. For example, a few days ago in-line commenting appeared. It crashes wave, but it's there. Note to Google: I want an "undo" please.

Now here’s what’s nice about Wave.

I was invited to Wave by my bud T (of course). My first conversation on Wave was just asking him how to use it. The first little bit of awesome in this interaction was watching someone type. When you communicate "synchronously," you move faster. There's no waiting and there's no lag, which means I stop trying to multitask outside of wave. Seriously, I want my IM program to have a mode where I can talk this way.

After a short chat with T, we started talking about old projects, so he created another wave and started writing a document that listed out our various potential projects with a short description of each. In places he called upon me to fill in information, which I did by editing his post. Meanwhile, we had several concurrent conversations about whether other ideas qualified as projects. We also thought about inviting another participant whom neither of us knew personally, but who we thought might have some expertise. So I started another Wave, added T and the twitter bot, then we tweeted the friend and asked them for their wave ID. After a short dialog on twitter (which should have been conducted through wave, but couldn't be because the twitter bot lacks some basic features), we brought our internet friend into the wave and kept talking. All of these things were happening simultaneously and we were all working in a very fluid, multichannel way. Wave as a sort of lightweight wiki that enables chatting and document editing in one place works really well.

I really like the idea of the bots in Wave. They have a lot of potential. I see them as ways to aggregate all of my conversations into Wave. For example, all the convos happening on Twitter, Facebook, gReader comments, blogs, and of course Wave itself should be seamlessly accessible inside and outside wave. This is why, even though it works like crap, the feature I'm most excited about is the Wave to Blog bot. This bot can take a Wave and turn it into a blog post with comments, and vice versa; any comments on the blog appear in the wave. The absence of commenting/conversation was one of my major problems with gReader; potentially, Wave could fix this issue. Right now, however, a lot of the bots are hollow shells of usefulness.

The other potential of Wave as an aggregator is the ability to unify content across social networks. I'd really like to move some of the stuff that I check in Reader daily (because the network I've cultivated there is producing content I can use) over to wave, in order to talk about it with people outside of that network. Delicious is the example I'm thinking of…

Of course, since I work for Microsoft I was wondering what our company would make that's similar to this. In some ways it is actually similar to Outlook in the way it uses panes and attempts some continuity between messaging and phone (Google Voice is going to be supported, right?). But if I tried to imagine the above scenario with T and chl in Outlook… it just wouldn't happen. Nor would a lot of the communication that I do in outlook translate over to Wave very well.

However, the closest piece of Microsoft software that I could imagine using for a similar collaboration is OneNote. In some ways, Wave is like OneNote without the tabbed navigation (which I don't like very much anyway), but with a paned communication UI on top of it. (As an aside, one of the bots for Wave is a whiteboard app, adding to the OneNote smell.) I wonder what a version of OneNote created specifically for the web might look like. Courier, maybe?

Props

Posted in architecture, technology, work by johnsnavely on April 10, 2009

I uploaded a few of the rough renders I did for the props I designed for the Productivity Vision Video. I modeled them all in Rhino, rendered them in Max, and then had them milled out of acrylic at a local shop out here.

This is the clear desk monitor that one of the office workers uses:

deskmodel1

deskmodel7

A keyboard/slate device:

keyboard2

More here.

There’s a pen and the cell phone (which has some detail) on a harddrive that just crashed. Once I can recover it, I’ll post those too.

I also loaded a hi-res version of the video to YouTube.

Modus Ponens

Posted in programming, work by johnsnavely on March 1, 2009

Last Friday, the video that I’d worked on with my team (the Envisioning Team in Office Labs) had its first public showing.

Stephen Elop, President of the Microsoft Business Division, showed the video at a conference at the Wharton School of Business. His speech, and the video, can be viewed here:

http://www.microsoft.com/presspass/presskits/Officesoftwareplusservices/vision.mspx

Our team also built the pan and zoom software for his presentation. Soon there will be a hi-res public version of the video. Eventually, I expect a version of the presentation software will be available too. In the meantime, Office Labs has built pan and zoom plug-ins for PowerPoint and OneNote, which you can download for free. (At some point, I'd like to post some images of the designs and rough renders I made for the hardware props in the video.)

Like many things, there’s a lag time between when things happen inside Microsoft and when they’re released to the public. Of course, the issues of productization and IP are complicated and some betas are too ugly to release into the wild. There’s also the intense criticism MS projects receive from the public. Reading the comments on the last Envisioning video on YouTube (which nobody should do) is intimidating. I guess it’s no worse than an architecture crit.

Microsoft, however, is getting better and better at opening its doors. The latency on the video was only a couple months.

There were a couple of projects that Stephen Elop mentioned in his speech that are public, but haven’t been made into products. I thought I’d call out two of them, both from Microsoft Research. Even though these videos are kind of old and the technology simple, there are some really smart ideas in there: the interactive applications/implications are (imho) pretty exciting.

Here’s NanoTouch (I played Unreal Tournament on this device and it was sweet.) :

And Secondlight:

These projects represent a very small part of the many, many projects at Microsoft. Part of what I do is try and find connections between things, ask how our daily life might be affected by certain technological shifts, and listen to how people are already creating their own ways of working. It’s a fallible process, certainly, but there’s a lot of value in the questions themselves.

Addendum: Looks like NYTimes covered the same projects at Techfest! (thanks, vrex)

Really Simple

Posted in culture, programming, technology, web, work by johnsnavely on February 17, 2009

A few days ago, I gave a little presentation at work on how I use my RSS feeds. Most of it was stuff I’ve learned from T-Bone. Today, T shot me a tweet asking if I had written any of it down. Which I had not. So I thought I’d try.

First, most of this is going to be old hat for people who read this blog. Most of you probably have better solutions than me or are using software that I'm still late to the party for. Anyway, here goes:

Several years ago, I was completely ignorant of RSS and readers. I had sites– blogs, news sites, social networking sites (Friendster!), etc.– that I was interested in. Many I would check daily for new updates. Then T introduced me to Google Reader, an RSS feed reader. With a feed reader (there are many out there– I use feedly these days), I read updates to websites as if they were emails in an inbox. This means I can check all those sites in one place, and only when there's new stuff! (Switching my browsing from a pull model to a push model, essentially.)

But RSS isn’t just the content of blogs. All manner of things come in the RSS flavor. For example, any search in Craigslist (and Ebay too, although it’s harder to find) can be saved out as a feed. This is how I found my apartment here in Seattle. I went to Craigslist, searched for Fremont / apartments  / 1+ Bedrooms / price range / dogs and stored the resulting RSS in my reader. Whenever a new listing appeared, it showed up directly in my reader, I didn’t have to check Craiglist. I stored several searches in different neighborhoods and called them as soon as listings came up. I got the house I’m renting now, because I “was the first to call”.

I still use this technique for shopping. I'll create some search feeds on Ebay and Amazon for stuff I want at a price I want; if there's a hit, I see it in my feed reader. Easy! And you can do it with jobs, services, and *ahem* dates, if you're into that.

Another type of site whose feeds I keep track of is the social networking site. I'm not a huge fan of facebook, but I do like the updates. So I grab the updates as a feed. For my closer friends, I track their twitter updates, flickr photos, delicious links, locations (with dopplr), etc, etc. Delicious is a nice site because, like Craigslist, every page has a feed. You can follow tags, people, people's networks– combinations of those. Using a feed reader, I can finally unify the content that all these disparate social networks are supposed to connect me to anyway. I can also know if someone sends me a link on delicious or comments on my photos… those are feeds too. Now I don't actually have to go to the site to know what's happening; all that information comes to me.

But what if you want something different than what a given feed can offer? Say you like Slashdot, but the feed has a ton of posts that you’re never going to read. It would be great if you could filter them. Luckily you can use something like Yahoo Pipes or MS Popfly. These web services take RSS (and things that aren’t RSS, but can be converted) and let you use a graphical programming language to manipulate a data stream into an RSS feed that you can be happy with. (T has a great little tutorial on Pipes.)
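The graphical boxes in Pipes or Popfly are doing something you could write in a few lines: take a stream of entries, keep the ones that match keywords you care about, and emit the rest of the feed unchanged. A sketch with invented entries (a real pipe would pull them from the Slashdot feed):

```python
# Pipes-style keyword filter over a feed's entries.

def filter_feed(entries, keywords):
    """Keep entries whose title mentions any keyword (case-insensitive)."""
    keywords = [k.lower() for k in keywords]
    return [e for e in entries
            if any(k in e["title"].lower() for k in keywords)]

entries = [
    {"title": "New Linux kernel released"},
    {"title": "Celebrity gadget gossip"},
    {"title": "RSS readers compared"},
]
print([e["title"] for e in filter_feed(entries, ["linux", "rss"])])
# → ['New Linux kernel released', 'RSS readers compared']
```

The appeal of Pipes is that it wires steps like this together visually and then republishes the result as a new RSS URL, so the filtered feed drops straight into your reader like any other.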

The last thing I do with feeds is package up my own. I use Swurl, which isn't great, but it does the trick. Now I've got a feed with my blog posts, tweets, delicious links, netflix queue, etc– basically everything I'm spamming out onto the interwebs. I put that feed back into my reader. Now when I want to search for something I've forgotten or should know, I search my reader instead of a search engine or Delicious. My reader has all of my stuff, and all the stuff of the people and places that I care about.
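What Swurl does for me is, roughly, a merge: take several personal feeds and splice them into one stream, newest first. The item data here is invented for illustration:

```python
# Merge several personal feeds into one lifestream, newest first.
from datetime import date

blog   = [{"when": date(2009, 2, 10), "what": "blog: Really Simple"}]
tweets = [{"when": date(2009, 2, 15), "what": "tweet: pipes tutorial?"}]
links  = [{"when": date(2009, 2, 12), "what": "delicious: yahoo pipes"}]

def lifestream(*feeds):
    merged = [item for feed in feeds for item in feed]
    return sorted(merged, key=lambda i: i["when"], reverse=True)

for item in lifestream(blog, tweets, links):
    print(item["what"])
# newest first: the tweet, then the delicious link, then the blog post
```

Once that merged feed exists as a single URL, it becomes just another subscription, which is why feeding it back into my own reader works at all.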

These days I’m trying to do some of the same sort of things– taking streams and modifying them– with Twitter’s version of a reader: TweetDeck. The flow is surprisingly similar in places. We’ll see how it turns out…

Zugzwang

Posted in architecture, culture, work by johnsnavely on December 19, 2008

DSC02784

DSC02599

I’m back from a 3 day business trip to the Herman Miller Headquarters in Michigan. It was really fun. We talked with them about their design process, saw some interesting work they’re doing at the architectural scale, and toured their factory, designed by Cradle to Cradle’s  infamous William McDonough.

I also got a chance to meet and chat with Chuck Hoberman about parametric design. I did not know he’d moved to buildings.

All in all, it was a fantastic visit and hopefully soon I can talk more about what will come out of it.

More pictures from the visit are here. (I’m a pretty poor photographer and I use flickr for storage more than gallery…apologies for the blurry pics!)

The 4 R’s: Search

Posted in technology, work by johnsnavely on December 7, 2008

Total Recall

So T delicioused me this article. It is thought-provoking. And since delicious doesn't allow for multiple comments or conversations, I'll just have to discuss it here. This continues an earlier post series.

We’ll try to concentrate searching and remembering, two tasks that we do all the time, offline and away from computers. But first I want to clarify a few things brought up in the article.

So here’s Philipp Keller’s primary polemic:

One common task while browsing the web is making sure you will be able to recall a valuable information you are just looking at. This article aims to prove that social bookmarking as in delicious, simpy, magnolia et al. is the wrong tool for that task.

I’m in “total” agreement. If you’re using delicious as your main tool for recall, you’re probably using the wrong tool. That said, is Keller seriously using delicious to remember stuff? He thinks delicious is the right tool for “Sharing Links” and Using “bookmarks to get things done”, but poor for remembering. Are those first two things really less important than recall?

Until I started using twitter, and the “share” option in my feed reader, my delicious feed was basically my micro-blog. I delicious stuff all the time with no clear intention of returning to it. Shit, I’ll delicious a link that I know someone else might be interested in just to have a conversation with them about it, even if I’ve never actually read anything on that link. (I may not have read Moby Dick, but I sure as hell delicioused it.)

Delicious is "social bookmarking" or, alternatively, "url lifecasting," and as I've mentioned before its real power is in conversation and narrative. Delicious needs to buttress up these areas quickly or I'm going to export all my links somewhere else and stream from there. Two features in particular drive me nutty:

1.) Why can’t I respond to someone’s description of a link? To have a conversation around a link that someone sent to me, I have to send it back with another “for:” attached.

2.) Why doesn’t a delicious post have a unique url? (This is such a pain in the butthole.) Then at least I could generate a twitter feed of all of my delicious posts or something like Feynman’s Turtles, delicious all the way down.

Granted, delicious also has poor tagging and search mechanisms. But even if these features somehow appeared in the next version of delicious, I wouldn't think it's suddenly a memory aid. Why? Because I don't want to "remember" by URL. Social bookmarking is great, but it's not a perfect tool for "memory augmentation". All it remembers are URLs and tags: two fairly abstract methods of notation and organization. These cover only a small part of the things and methods by which I remember.

When you’re asked to search for the answer to a question, the question itself might fall into a range of categories. One extreme of that range is that the question is completely random, like trivia… like “who wrote ‘Fermata‘”, for example? Offline you’d probably go to a dictionary or encyclopedia. Online you’d probably go to your search engine of choice. This isn’t the same as remembering because you never knew the answer in the first place.

The other extreme is that you already know you’ve seen the answer, you just can’t remember where or how. The answer is buried somewhere in your collected detritus of bookmarks, files, emails, pdfs, or whatever. I would like to search my own stuff.

The category right in the middle is where I don't know the answer, but I'm pretty sure my friends do, so I'd like to search through their stuff. Of course, these all exist on a range, so sometimes I might want to search a particular friend and their social network, etc. The closest thing I have to this is when feedly (which I like, but which is still a seriously buggy work in progress) shows me feed reader matches alongside any google search that I do. I wish the delicious plugin did the same. But again, a search engine doesn't look at my "local" files, emails, or documents, and when I need to remember something, it seems silly that I also have to remember the format of the content as well.

A tough recall scenario goes something like this: I need an answer that was the result of a conversation I had over the phone, that continued on twitter, that spanned a couple blog posts, that was mentioned obliquely in the description of a delicious url someone sent to me, and that I annotated onto a pdf doc. These pieces are remembered piecemeal, of course, which means I repeat a search for each bucket.

Keller thinks that one reason for the problem is that delicious tears links from the context of the original page. This is 50% true. Contextual search/recall is what we need, but the context isn't just the page. The context is the series of thoughts and conversations that led me to the content, and also where I think it fits into what I already know. Why isn't all my data organized according to conversations or topics that I'm interested in? If it were, then I could grab things with even "less" context. For example, Internet Explorer's Web Slices let me grab just a piece of a page. Sadly, MS tied these awkwardly to IE bookmarks. (I hope they get it right soon… those need to be feeds. Great idea, we hardly knew ye.) I would like to be able to grab slices of conversations, slices of videos, a piece of a song, or a section of a diagram. These snippets could be mashed up into other thoughts that might have little to do with their origin.

So what’s missing here? Well, I need a personal database or personal file system. (Live Mesh is an interesting start to the very basics of a personal file system. So far, I like it.) Then I can pipe all the pieces of my life stream (delicious, flickr, blogs, reader stats etc) as well as all of my emails and documents and conversations into a place where things I “know” are at hand and accessible. Then I can finally mine my own data. Which will allow for me to organize what I’ve seen but haven’t learned; an infinite stream of procrastination, ty CS18 & Professor Donald. Some of those promises I’ll make good on; others I never will. So having thousands of delicious links with no tags is fine. Not reading them is awesome. (Right now, I’ll usually only return to the most recent few hundred to find things anyway.) At the very least this personal data/file system lets me view my content in flows which match my life: a twitter comment sparked a delicious link sparked a blog post which was a conversation that I wrote a research paper about.

Anyway, there’s a lot more here to talk about. I can’t help but think of this recent obituary in the NY Times.