The New York Times would like to join you in the living room

In a corner of the research and development lab at The New York Times Co., they’ve prototyped a living room of the future. It’s not as whizbang awesome as you might hope — a lamp glows red or green depending on how the markets are doing — but it does feel like a reasonable conception of Living Room 2.0. Their major bet: as Internet-enabled televisions become more common, people will increasingly choose to consume web material on those huge, high-definition screens.

That wouldn’t, on its face, be an advantageous development for the Times, which produces the vast majority of its content in longform text you’d never consider reading on TV. But as Alexis Lloyd, a creative technologist in the R&D group, explains in today’s video, it may be possible to shift gears in the living room and emphasize the newspaper’s multimedia content. She demonstrates the concept with “Choking on Growth,” a major series on environmental damage in China from 2007.

This is the third in our weeklong series of videos from the R&D group, and it may be the one that’s easiest to imagine coming to pass. Laptop and desktop computers are already commonplace in the living room, Boxee is a huge hit, and Apple keeps plugging away at converging TV and the Internet. (On Oxygen’s The Bad Girls Club, the cast members check their email on a television in the living room. QED.)

Still, reimagining The New York Times in HDTV is a challenging leap. (You might recall the Times Co. made an unsuccessful foray into television with the Discovery Channel earlier this decade.) The newspaper produces a ton of multimedia content — certainly more than its competitors — but a satisfactory living-room experience would require video on a scale the Times isn’t yet producing. That’s why they call it the future.

You’ll see more of the R&D group’s living room in tomorrow’s video (yesterday’s was also shot in there). After the jump, you’ll find a mock-up by design integration editor Nick Bilton, which adds a projector but is otherwise pretty faithful to the actual room. And below that, there’s a transcript of today’s video.

Alexis Lloyd: The main problem we see with content from The New York Times in the living room is that our primary form of storytelling is still long-form text, which works really well on paper, still works well on the web — but once you’re sitting ten feet from a television in your living room, that pretty much breaks down. But we do produce all this great multimedia content. It’s just usually pushed off to the side a little bit. So in this demo we are asking the question: Can we flip that paradigm around and use the media that works really well in the living room — the video and the images — and make that the spine of the story, but still pull in some of the text and pull in some degree of interactivity that you might want when you’re in the living room?

So I’ll show you this. In this case, I’m using a standard mouse to navigate, but we’re also looking at a lot of devices like these air mice that would let me sit on my sofa and navigate from there, as well as doing custom remote controls and interfaces like the kind that Mike showed you in CustomTimes.

So this is just a one-minute video that I’m going to start playing, and as the video plays there are these panels that appear that I can open up to show you contextual information about what’s being discussed in the video. So in this case, I can get background information about the turtle that’s being mentioned. It’s text, but it’s short, it’s big, and furthermore, it’s optional. So I can just open it up, read it, and then I’m back in the video. So it doesn’t take me out of that central experience of sitting back and being told a story, which is my primary kind of mode when I’m in the living room.

And we can do this with all kinds of content. In that case, it was a piece of text related to what they’re talking about in the video. In this case, there’s a woman who is being interviewed. She’s written a book about mammals in China. I can open this up to read an excerpt from that book. Furthermore, it knows it’s a book, so there’s an e-commerce component integrated into this. And I can just choose from this interface to buy the book. It goes into my Amazon one-click shopping process, and I’m back in the video. So I’ve done all this, but I haven’t been taken out of that basic experience.

And this is really pointing to the idea of creating more granular levels of metadata about content. So we have metadata about our videos as a whole, but now we can begin to say, at this particular point in time in the video, we have a related map or at this particular point in time, they’re talking about this lake. And we have a slide show about that. So I’m going to open that up.
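
(That idea of granular, time-anchored metadata is easy to picture as a data structure. Here is a rough sketch of what it might look like — the type and function names are invented for illustration, not the Times’ actual schema.)

```typescript
// Sketch of time-anchored annotation metadata; names are hypothetical.

type PanelKind = "text" | "map" | "slideshow" | "book-excerpt";

interface VideoAnnotation {
  startSeconds: number;  // playhead position where the panel becomes relevant
  endSeconds: number;    // position where it retires from the screen
  kind: PanelKind;
  title: string;         // short label for the optional panel
  body?: string;         // short, large-type text suited to ten-foot reading
  assetUrl?: string;     // a related map, slideshow, or purchase link to pull in
}

// Which panels should be offered at the current playhead position?
function panelsAt(
  annotations: VideoAnnotation[],
  playheadSeconds: number
): VideoAnnotation[] {
  return annotations.filter(
    (a) => playheadSeconds >= a.startSeconds && playheadSeconds <= a.endSeconds
  );
}
```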

And then you can see our photojournalism really has a place on the big screen because the photos are stunning at this size. And furthermore, the photos themselves have this more granular level of metadata where there are these hot spots that I can use to get deeper information about objects or people in the video — or in the photo, rather. So I can find out all about this toxic algae that’s growing on this lake as a result of chemical plants dumping on it.
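
(The hot spots imply the same kind of metadata one level down, attached to individual photos rather than to points in the video. A minimal sketch, again with invented names.)

```typescript
// Hypothetical per-photo hotspot metadata; names are illustrative only.

interface PhotoHotspot {
  x: number;       // horizontal position, normalized 0–1 within the photo
  y: number;       // vertical position, normalized 0–1
  label: string;   // e.g. "Toxic algae on the lake"
  detail: string;  // deeper information shown when the hotspot is selected
}

interface AnnotatedPhoto {
  imageUrl: string;
  caption: string;
  hotspots: PhotoHotspot[];
}
```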

And at the end of the video, we’ve also integrated some social functionality, so I can choose to share this video with other friends, and it pulls in the people I most frequently share with. I can say, I want to share this with Michael, and then it will go into any number of his social feeds and into our lifestream app, which Ted will show you in a moment. There’s also some contextual advertising in here, so you might be inspired to give to Unicef’s clean water campaign after watching that video.

And furthermore, that’s just a one-minute video piece, but this series was a yearlong series. There is a huge collection of multimedia content that was created for it. So we started asking the question: can we use the metadata that we’re already creating for our content to give readers and users different lenses into this large collection of media that might otherwise be overwhelming to them?

So in this case I have four different views that I can go into that are dynamically created from the metadata associated with it. So there’s an editors’ choice where I can say I just want to know what the New York Times editors think are the highlights of this collection or this package. But I can take that same content and I can sort it geographically, or I can sort it over time and see a timeline. Or I can sort it thematically and start to see relationships between different themes in the collection of media.
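
(Those four views don’t require four separate packages: if each item carries facet metadata, every view is just a different filter or grouping over the same list. A rough sketch, with hypothetical field names.)

```typescript
// Hypothetical facet metadata on each item in the package.
interface PackageItem {
  id: string;
  title: string;
  publishedAt: string;     // ISO date, drives the timeline view
  place?: string;          // place name, drives the geographic view
  themes: string[];        // drives the thematic view
  editorsChoice: boolean;  // drives the editors' choice view
}

// The editors' choice view is a filter over the same list...
function editorsChoice(items: PackageItem[]): PackageItem[] {
  return items.filter((item) => item.editorsChoice);
}

// ...and the thematic view is a grouping of it.
function groupByTheme(items: PackageItem[]): Map<string, PackageItem[]> {
  const groups = new Map<string, PackageItem[]>();
  for (const item of items) {
    for (const theme of item.themes) {
      const bucket = groups.get(theme) ?? [];
      bucket.push(item);
      groups.set(theme, bucket);
    }
  }
  return groups;
}
```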

So those are some of the different ideas we’re looking at around how our content could be produced and packaged and repurposed for exploration in the living room.

  • http://www.twitter.com/jpratt jonathan pratt

    CustomTimes is educational while it evokes curiosity from the viewer/interactive user. This would redefine television.

  • http://toughloveforxerox.blogspot MichaelJ

    If it were implemented in K-12 classrooms, it could drive the reinvention of education.

    They already have the large screens and broadband, and everyone is there to learn. If you put it together with customized print and wikis, it could be a game changer.

  • Mike Tompkins

    I have been waiting for this. Please do not stop.

    On the active/passive spectrum of interaction, I find myself at times of less energy desiring to be led through a narrative by a voice rather than by reading. I enjoy hearing the narrator’s voice leading me when my eyes are too tired to read, but I still want to follow a story. However, I am also leery of being misled, so even in this lower energy state, I appreciate an opportunity to drill down into “granular data” (good phrase) to investigate supporting or tangential information. This particular activity needs to be accessible in my lower energy state, though, so the technological bridges of the air mouse and other navigational devices will be crucial to my satisfaction with the entire experience.
