
Interview with James George and Jonathan Minard

Sandra Gaudenzi interviews James George and Jonathan Minard, the creators of “Clouds,” one of the first projects to experiment with the Oculus Rift in a documentary context.

by Sandra Gaudenzi

As Sundance opens its doors and presents 13 Virtual Reality (VR) experiences under its New Frontier umbrella, we turn to a project that has been among the first to experiment with the Oculus Rift in a documentary context, and that clearly shows the advantages and disadvantages of using a technology that is not yet fully market-ready: CLOUDS, by media artist James George and filmmaker Jonathan Minard.

Clouds is a documentary that explores the beauty of code and digital art by interviewing hackers and artists through a Microsoft Kinect sensor attached to a digital SLR. The footage is then rendered through DepthKit, an open source editing suite developed by George and Minard with the help of experimental photographer Alexander Porter, and the result is a set of beautiful, pixelated 3D sculptural images – an art piece in itself.
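
The capture principle behind this setup can be sketched in a few lines of code: each pixel of the depth sensor’s image is back-projected into 3D space using the camera’s intrinsics and colored from the registered photograph, producing the point clouds that give Clouds its look. The sketch below is purely illustrative, assuming a generic pinhole camera model; it is not DepthKit’s actual code, and the structures and parameter names are hypothetical.

```cpp
// Illustrative only: a minimal back-projection from a depth image to a
// colored 3D point cloud, the basic principle behind pairing a depth
// sensor with a photographic camera. This is not DepthKit's actual code;
// the intrinsics and buffer layout below are assumptions.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Point3D { float x, y, z; std::uint8_t r, g, b; };

struct Intrinsics { float fx, fy, cx, cy; };  // focal lengths and principal point, in pixels

std::vector<Point3D> depthToPointCloud(const std::vector<std::uint16_t>& depthMm,  // depth in millimetres
                                        const std::vector<std::uint8_t>& rgb,      // 3 bytes per pixel, registered to depth
                                        int width, int height,
                                        const Intrinsics& K) {
    std::vector<Point3D> cloud;
    cloud.reserve(static_cast<std::size_t>(width) * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const std::size_t i = static_cast<std::size_t>(v) * width + u;
            const std::uint16_t d = depthMm[i];
            if (d == 0) continue;                   // no depth reading at this pixel
            const float z = d * 0.001f;             // millimetres to metres
            const float x = (u - K.cx) * z / K.fx;  // pinhole back-projection
            const float y = (v - K.cy) * z / K.fy;
            cloud.push_back({x, y, z, rgb[i * 3], rgb[i * 3 + 1], rgb[i * 3 + 2]});
        }
    }
    return cloud;
}
```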

When George and Minard started a Kickstarter campaign for their project back in 2012, they announced that CLOUDS “presents a generative portrait of the digital arts community in a videogame-like environment.” They were looking for ways to create a generative documentary – one that composes itself in real time, as in a video game – where the user could interact with the code itself. The web community believed in their dream and pledged more than they asked for ($34,000 against a $25,000 goal).

The problem, however, was finding a platform that would allow them to develop the project in such an interactive way… Then came the Oculus Rift.

The version of Clouds that was shown at IDFA this year was an Oculus Rift installation and it looked amazing, but how can such a documentary be accessible to a wider public? Do Rift documentaries have a market yet?

It is with this question in mind that I started a conversation with James George and Jonathan Minard.

 

SG: I am confused about how many versions of Clouds exist. You mentioned you would release an app, but there only seems to be an Oculus Rift installation at the moment…

JG: Yes, it is the same piece of software fundamentally.

JM: So the flexibility of the software lets us bring it to different interfaces and platforms. I think people think of the Rift as an entirely new paradigm, but it’s quite possible to bring most 3D or video-game-like content into it. We just use it as a different display, so the difference is in the interaction and how people engage with it.

SG: Will there also be a linear film out of it?

JM: I don’t think the current version will become a linear film, and what we learned is – we made this original experiment that was like a 16-minute film with a 5-minute Vimeo release, and that was a linear process. The limitations of doing it that way inspired us to make what Clouds is now. When we then tried to create linear edits, even just for documentation purposes, it felt very much like revisiting the reasons why we made it an interactive thing to begin with. The format, the form, the way it’s edited algorithmically works well when you know as a viewer that you are intentionally interacting with the system; as soon as it becomes a passive experience where you’re watching a video, you have a different set of expectations. It doesn’t really feel like you’ve fulfilled the idea of what it’s trying to be.

We just released the IO walkthrough, and for me this is a really nice thing: a linear 40-minute experience of Clouds, but set in the knowledge that you’re watching the documentation of a performance, versus the primary thing itself.

SG: Why do you call it a generative documentary? When I experienced it I had the feeling that I was browsing a 3D environment where I could choose which interview to watch next, but it felt more like a branching narrative than a generative experience.

JM: I hear what you mean. When you make a selection of a question in the beginning, there’s an algorithm that we call the Story Engine that does sequence out a path that you’re following along. So unless you make a choice to divert from that, it will continue. But I think that when we watch any kind of content that’s meaningfully assembled, we have an assumption of an inevitability, or that it’s fixed in some way.

I think I have found this as well, experiencing branching narratives and other video projects, that I always assume that the path I have taken is the only one. In fact there is an element of randomness, like, even if you choose the same question again and again, it would be slightly different. You can only really apprehend that by going back and trying again and playing with it in some ways. The system is designed to edit as an editor would. I think it can feel linear, even though it’s not.

JG: Also, the generative nature of it in a lot of ways had more to do with the production process than with its final form.

Because of the way that Clouds was made, we were essentially assembling metadata at the level of the clip, so that was all edited, but the larger web structure that emerged was not something that was coded. We never put those things in sequence; rather, we would prune and pick and link and create small connections in the larger web that you end up exploring. It was never something where we laid tracks by hand; it was more that we created a system in which that would emerge. Every small link is intentional, so we know, from one particular clip, which clips could possibly come next and when to introduce questions – but that’s systematic; that decision-making was systematic.
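
To make the idea of a hand-linked, tag-based web of clips a little more concrete, here is a minimal hypothetical sketch in C++ of how such a traversal might work. It is not the actual CLOUDS Story Engine; the data structures, names, and selection rule are assumptions made for illustration only.

```cpp
// A hypothetical sketch (not the actual CLOUDS Story Engine) of traversing a
// tag-linked web of clips: every clip carries topic tags and a set of
// hand-authored links, and the next clip is drawn at random from the
// outgoing links that share at least one topic with the current clip.
#include <random>
#include <string>
#include <unordered_set>
#include <vector>

struct Clip {
    std::string id;
    std::unordered_set<std::string> tags;  // topics the clip touches on
    std::vector<int> links;                // indices of clips allowed to follow it
};

// Returns the index of the next clip to play, or -1 if no linked clip shares a tag.
int nextClip(const std::vector<Clip>& clips, int current, std::mt19937& rng) {
    std::vector<int> candidates;
    for (int idx : clips[current].links) {
        for (const std::string& tag : clips[idx].tags) {
            if (clips[current].tags.count(tag)) {
                candidates.push_back(idx);
                break;  // one shared topic is enough to qualify
            }
        }
    }
    if (candidates.empty()) return -1;
    std::uniform_int_distribution<int> pick(0, static_cast<int>(candidates.size()) - 1);
    return candidates[pick(rng)];  // the element of randomness the interview mentions
}
```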

So it’s essentially a tag class that you’re now getting.

JM: Yeah, so every clip is a node. They are little points in a galaxy. As you traverse the map you are encountering one clip after another, but it is a web-like structure. It’s generative on another level too: we are visualizing the ideas that people are talking about, and since these are coders and designers and that’s their native language, we actually bring their work into the documentary as code. So when you’re seeing one of these visualizations, that is a separate file of C++ code that’s being brought up, and you’re interacting with it.

Those systems are generative. They have elements of randomness, so we’re able to make presets, but there are also decisions that are left up to the system; they are created through your interaction with them. You might see a flocking algorithm with these different little artificial cells moving around, that kind of thing, or polygonal forms that are being generated.

SG: So those are generated on the fly?

JM: Yes. That is literally generative art, right? That’s one reason the Oculus is a nice interface for expressing this, because there is no other way to display this type of thing for that file. If you have an understanding of the medium when you look at this work, you know it’s not a video, not a rendering on a screen, because you have volition in where you choose to look.
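
As an illustration of the “presets plus randomness” idea described above, here is a small hypothetical sketch in C++ in which an authored preset fixes the overall character of a generated polygonal form while a random generator fills in the details on every run. It is not code from CLOUDS; the names and parameters are invented for the example.

```cpp
// Hypothetical sketch of "preset plus randomness": an authored preset fixes the
// character of a generated polygonal form, and a seeded random generator decides
// the details, so every run produces a slightly different shape. Not code from CLOUDS.
#include <cmath>
#include <random>
#include <vector>

struct Preset { int sides; float radius; float jitter; };  // authored, hand-tuned parameters

struct Vertex { float x, y; };

std::vector<Vertex> generatePolygon(const Preset& preset, std::mt19937& rng) {
    std::uniform_real_distribution<float> noise(-preset.jitter, preset.jitter);
    std::vector<Vertex> shape;
    shape.reserve(preset.sides);
    const float step = 2.0f * 3.14159265f / preset.sides;
    for (int i = 0; i < preset.sides; ++i) {
        const float r = preset.radius + noise(rng);  // detail left up to the system
        shape.push_back({r * std::cos(i * step), r * std::sin(i * step)});
    }
    return shape;  // a different form on every run, within the preset's character
}
```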

SG: If I were able to read code, could I have access to it?

JG: Yes, and there’s a separate story there; the Git repository has a year of collaboration that’s documented through people committing their progress. It’s only understandable with a certain technical expertise, but it’s a different way of looking at Clouds. It’s an amorphous thing, hence the name.

JM: It’s like open source software: it’s almost like people coming together and creating a quilt, like a knitted quilt, or building some kind of larger structure together that carries a bit of their creative hand but is also reflective of the community, and the film was made in that same way. We have this repository, and a lot of the artists in the film contribute to it, so when you look at that record of the software’s creation, you can see different people’s contributions coming together.

I think that’s another reason that the cloud, or the network, is an apt metaphor. It feels like this thing we’ve woven together.

SG: Looking at your Kickstarter campaign, I had the impression that you did not really know at the time what sort of interactivity you would use in the project.

JM: By the time we launched the Kickstarter, we knew it was going to be an interactive piece. There was some doubt at a certain point, like mid-way through production we were thinking, “Okay, this could be a linear film that has an interactive component.”

JG: Definitely, by the time the Kickstarter was launched we had determined to make it interactive, but we didn’t know how to do that.

SG: So did the Kickstarter campaign help you to shape the project, rather than just give you the money you needed?

JM: Yes, it was definitely about more than the money, because of the intention of the project and how tightly knit and collaborative this community is. People wanted to be involved because they believed in the dream of the project and were uniquely positioned to understand its ambitions. They saw that the way the project had come about was based on the values they hold as members of a community of collaborative software developers.

We had a lot of people joining…

SG: But are we speaking about a lot of people? What sort of numbers?

JM: Probably about 25 people.

JG: Yes, but the core team ended up being smaller. We’d have programmers who were kind of lead developers doing more of the work, and then there was a portion of these modules and systems that were commissioned. Where we knew we needed to cover a certain range of topics, we approached curators; we had a kind of hierarchy of those topics and started developing ideas for these visuals, then would find people, match them, develop ideas with them and collaborate with them. But we were essentially commissioning those works.

SG: You started Clouds two years ago. This year you are obviously going around the world, showing the Rift version. What’s next? Will you try to commercialize it?

JG: Yes, I mean the end goal is always distribution, just getting it to as many people as we can. It makes sense to use the web to distribute it. We have a form of the documentary, which can work on Mac or PC. The problem is that there is no app store for virtual reality content right now. So all of the possible outlets have felt a little bit awkward or mismatched.

JM: The technical capabilities of the channels, and the audiences they engender, are quite diverse, and this work falls into an intersection between them. The kind of software we wrote for Clouds is, on a technical level, very akin to a video game: it’s graphics, it runs on your computer, it’s 3D, it’s immersive, and there are channels for distributing that, Steam being one of them.

SG: Exactly, Steam, but is that the target audience that you’re looking for?

JM: It’s a cultural question; it’s a demographic question. It would be great to capture new interest through people in that area, but how many people will download Steam just to get Clouds?

JG: I feel the same way about the distribution: just as there wasn’t a clear beginning to the project, there won’t be a clear end. In the same way there’s not one way of experiencing it, there won’t be one distribution platform. We’ve talked about a multiplicity of ways of breaking apart the distribution, everything from a very clean download on Steam to a BitTorrent model where you can archive the whole thing, for more of an educational purpose.

I think we’re reaching a stage where we want to close some of these ends and create endpoints so people can find it, and then we’ll have a sense of closure with the project, because we’ll be able to answer the question… when someone comes to us asking, “Hey, how can I see the project?”, based on who they are and their interest level, there’s a good avenue for them to discover it at whatever level.

SG: Some final words?

JM: Well… a wonderful outcome of this has been participating in a conversation around the future of cinema and VR and where that’s going. I think there is a benefit to being ahead of the curve in some ways and getting to define the language, try something out, and excite people about possibilities. Clouds has always had a hypothetical angle. You could call it a kind of speculative film. It’s a film whose outcome, in its ideal form, we can’t really attain with our current technologies. It’s about exciting the imagination about what’s to come.

So one end result of Clouds is that a lot of software and processes have come out of it, like DepthKit, that other people are using. I think we have to look at the impact of the project in terms of other works that it inspires and the direct products that use that code.

SG: Thank you for your time James and Jonathan, and we’ll stay tuned to see how Clouds evolves!

 
