Contributory Audio AR: Practice and Technology


Description: Artist and technologist Halsey Burgund talks about his work with audio augmented reality and answers questions from students. He includes a video discussing one of his installations at the deCordova Sculpture Park and Museum in Lincoln, MA.

Speaker: Halsey Burgund

 

[CREAK]

[WHOOSH]

[TAPPING]

 

AUDIENCE: So I should talk with him directly?

HALSEY BURGUND: Yeah, so thank you. I was extremely excited when I learned that this class was happening at all, just because I've been doing things that I would consider audio AR for 12 years now, probably. I'm very old, so don't ask me any particulars.

And it's so exciting to be at a time now where things are becoming, I don't know if I'd call it mainstream yet, but at least getting to the point where people are talking about it and teaching it. And so I was excited to learn from what you guys are going through and learning in this class and to contribute to the extent that I can. So I don't know what just happened there, but I'm going to press Play again. Am I back?

I'm a sound artist and technologist. And I've been doing work with what I would call audio AR for quite a while, as I was saying. I work a lot with spoken human voices. And the contributory part of my work is that I generally do work that feeds off of responses and commentary that I get from people who are experiencing the work.

So I ask people questions. I record their answers, or they record themselves. And then those sort of feed back into these evolving works that are often location-based and change over time and often have a musical underpinning, as well.

So I'm going to talk about a whole bunch of examples of my work and then talk a little bit about the technology that I've developed to support some of that work, which is what I would call a contributory audio AR framework platform called Roundware. And then I'm hoping there'll be plenty of time to ask questions. Because that's probably going to be the most useful and interesting part, perhaps.

So feel free to jump in any time if you have questions. But there will be time at the end, either way. So let me just leap in. I don't know what order things are going to happen here, but I'm going to go for it, whatever it is.

OK, so what is Roundware? I just mentioned a little bit about what Roundware is, but essentially, Roundware is a framework, or a platform, depending on your preference, that allows you to overlay any kind of physical landscape with a layer of audio. And it's actually multiple layers of audio that can interact with each other. And I have this quick example here of the basic elements of what a Roundware installation covers.

So let's say you wanted to do something on the Mall in DC. And this is the mall. And the first thing Roundware allows you to do is to create regions of what I call ambient music. Well, I don't call it-- lots of people call things ambient music, but it's sort of a continuous layer of ambient sound.

And those, right now, they're just circles that are drawn here. You could have any shapes you want. And basically, the way to think about it is each of these circles has a different underlying ambient-- it doesn't have to be ambient but, often, in my case, it's ambient-- soundtrack. And as you walk from one to the next, one fades out, the other one fades in, et cetera. Here, you've got all three of them playing.

So this is a way to compose a piece of music or an audio experience of some sort that really is very highly dependent on how somebody traverses an actual terrain. So this is quite simplified in the sense that there are only circles here right now. Roundware started out, you could only do circles. I've since moved on to polygons, which is very exciting. So you can draw just a block, or you can just draw right around here on a map and basically assign any audio to that.
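The crossfading just described, one ambient layer fading out as another fades in based on the listener's distance from each region's center, can be sketched roughly like this. The function and parameter names (and the linear fade shape) are illustrative assumptions, not Roundware's actual API:

```python
import math

def gain_for_region(listener, center, radius, fade=20.0):
    """Linear crossfade: full volume within (radius - fade) meters of the
    region's center, silent beyond the radius, fading in between.
    Names and the fade width are illustrative, not Roundware's API."""
    d = math.dist(listener, center)
    if d <= radius - fade:
        return 1.0
    if d >= radius:
        return 0.0
    return (radius - d) / fade

# Two overlapping circular regions: walking from "strings" toward "bells",
# one layer fades out while the other fades in.
regions = {"strings": ((0.0, 0.0), 100.0), "bells": ((150.0, 0.0), 100.0)}
for x in (0.0, 60.0, 90.0, 150.0):
    mix = {name: gain_for_region((x, 0.0), c, r) for name, (c, r) in regions.items()}
    print(x, mix)
```

With the polygonal regions mentioned above, the same idea applies, except the distance would be measured to the polygon's boundary rather than to a circle's center.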

And in addition to the underlying, continuous layer of audio, there are two types of what I call momentary audio that can be interspersed within that. They're usually smaller bits, and they are usually pieces of audio that play once. Like, you get into the area, and then it's triggered. It will play once, and then it'll be done, as opposed to a looping, ambient situation.

And those can be either curated by the person creating the project-- me or whomever is using Roundware-- or they can be added, using the same app, by the people who are experiencing the project. So these red ones indicate that there's another type of content, a dynamic type of content, which accumulates over time once a particular piece is installed.
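The play-once behavior of these momentary assets, triggered on first entry into range and then silent thereafter, unlike the looping ambient layer, might be sketched like this. The class and field names are assumptions for illustration, not Roundware's actual data model:

```python
import math

class MomentaryAsset:
    """A geotagged audio snippet that plays exactly once, the first time
    the listener enters its trigger radius (illustrative sketch, not
    Roundware's actual data model)."""

    def __init__(self, center, radius_m):
        self.center = center
        self.radius = radius_m
        self.played = False

    def update(self, listener):
        """Call on each position update; returns True exactly once,
        when the listener first comes into range."""
        if not self.played and math.dist(listener, self.center) <= self.radius:
            self.played = True
            return True
        return False
```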

So if I installed this in DC on January 1, and the app was released, people could wander around listening to everything-- the ambient layer and then these yellow dots-- and then they could add their own on top of that, responding to various questions that are asked within the app. Depending on what the piece is about, I can ask any question I want-- look at the clouds and tell me what you think, or tell me what this building reminds you of, or describe the person who just walked past you. There are lots of explorations that can be done. But the point is that these red ones are indicative of a co-creation element to what I do, a contributory element, and a way in which I lose control over a lot of what the piece ends up being, which is both exciting and totally nerve-racking, as I'm sure you can imagine.

But it's a crucial part of my work and of what Roundware allows. Because, again, Roundware was developed, obviously, to scratch my own itch that I had to create the pieces.

So I will keep on going. I'm going to kind of crank through some of the history of where Roundware came from just relatively quickly, although I'm going to show one video, which I think will be useful. So Roundware was actually named because the first project I did with it was called Round.

So this was a project for a museum outside of New York called The Aldrich Contemporary Art Museum. And they asked me to come in and do something that would be slightly resembling an audio tour but not be an audio tour. Because they really didn't like the general approach of audio tours to sort of tell you what you're supposed to think about the artwork, et cetera, et cetera.

And I decided that what I would do was basically create a sort of soundtrack for the museum so when people are wandering around, they could hear that, and then infuse it with comments that people made about the artworks in the museum that would be heard when you were standing in front of that artwork. So you could be standing in front of a painting, and you could hear the curator's remarks about the painting, but then you could also hear a six year old's comments about the painting, as well, and the idea of sort of democratizing and making art into something that I feel it should always be, which is that anybody can appreciate it.

Anybody can like or dislike anything, and you don't need to have a PhD in art history or some kind of critical discourse to have that sort of appreciation. So that's where Roundware came from.

The title, Round, the reason I named it that, which is somewhat useful, perhaps, to understand, is, first of all, King Arthur's Round Table, where everybody sits around a round table, and nobody's opinion is more important than others, and then, second of all, the notion of a musical round. Do you guys know what a musical round is, like "Row, Row, Row, Your Boat"? And everybody's singing together but kind of in an offset way, which I think is somewhat what Roundware enables you to create.

So I'm going to hope that this is actually going to play a video. This was a piece that I did at the deCordova Sculpture Park and Museum out in Lincoln, outside of Boston. And it was one of the first pieces using Roundware, and I think it will give you a little bit of an idea of what Roundware can actually do and what my work is like.

[VIDEO PLAYBACK]

[MUSIC PLAYING]

 

HALSEY BURGUND: The Scapes piece is, I think of it as a musical composition that is a spatially related musical composition.

And the volumes of those parts fade up and down, depending on how close you are to where the center of that particular musical part is.

- OK, let's walk. The music is changing now.

Yeah, we get these bells more towards-- the entrance of the museum is more of a sort of melodic part of our slow melody. All the music is based on some of the topographic characteristics of the park itself.

- I know that--

- Oh, there's some voices.

- There's some voices.

- Hey, there goes a chipmunk.

- --somebody--

- Right where I'm standing, some guys saw a chipmunk.

- There's a chipmunk.

- There goes the chipmunk again.

- Which could have been a chipmunk from two weeks ago.

- Yes, it could have, absolutely, which I think is kind of fun to think about where that chipmunk is right now.

- [LAUGHS]

- --is getting what I would call ominous, a little darker. And as I'm standing here, a huge, dark cloud is rolling in over the garden, over the trees. The light's getting darker. And I don't know how the application [INAUDIBLE].

- I'm standing inside this glass sculpture. And the acoustics are very fun.

- Oh, wow. So that guy went and did this. They stuck their head into this open glass dome that's part of the structure. And these are the acoustics he was talking about. And now, I'm doing it, too.

- You might not have done that otherwise.

- I'm pretty sure I wouldn't have.

- [LAUGHS]

 

- In a week, they are recruiting into the Israeli army. I'm going to be a soldier. I'm very scared, pretty excited. Being here is pretty peaceful, I guess, compared to what I'm about to go through. Good luck to me.

- Oh, wow. Is this rain?

- That's rain.

- Someone must have been here when it was raining.

- And they recorded the rain and left it here for us.

- It's not raining, though, right now.

- No.

- That's pretty cool. So if you want to make your own recording.

- OK, I'm looking at this sculpture. It looks like a piece of like HVAC duct work from a building, but it kind of twists in a way that reminds me of a lower intestine. And we hit stop?

- And hit stop, and then we can listen to it back.

- OK, I'm looking at this sculpture. It looks like a piece of--

- There I am.

- --like HVAC duct work.

- There are at least three elements going on simultaneously. There is the music. There was the rain from god knows when that was recorded, not on a day like today. And then, there's me.

- From just a few minutes ago.

- It looks like a very peaceful garden. The thing I think would help it would be a few-- would be a few of one of those type of fish that are catfish.

[END PLAYBACK]

HALSEY BURGUND: So that should give you a little bit of an idea. So that piece started out with just the music and none of the comments. And then, people made the comments over time, which, again, that's the other sort of nerve-racking part of this is when you start and the music is written to have comments on top of it, so it's not that interesting until commentary comes in.

But you hope for the best and hope that you get kids who like catfish. I don't know. "It's a type of fish that are catfish" is very interesting phrasing. But super cute, though. So that gives you the basic idea, I think, hopefully, with the retro iPhone and everything.

So I'll go through-- actually, I'm going to skip some of these things real quick here. That's not, that's not, that's not-- OK, another audio AR piece, Sound Sky. This was a piece I did in Christchurch, New Zealand. A number of years ago, they had a huge earthquake in Christchurch, which pretty much destroyed a huge portion of the downtown.

So the sort of hope with this piece was to create a piece in the downtown area and let people who are residents go wander through and, in part, reminisce about what used to exist in certain places that were, unfortunately, destroyed but then also look to the future and sort of talk about, well, this was this, but now we have an opportunity to rebuild, and this could become this, and the downtown could transform into this new sort of thing. So the idea of having a location-based piece that allowed people to make these comments, to actually create the comments when they're in the emotional space of being there, was extremely important. Because I think that really puts you in the right place to make heartfelt, earnest, authentic comments.

And then, also, obviously, when other people come by to listen to those comments, they exist in the place where the person made them. So there's this connection to the specific landscape, to that specific location, on both ends of that experience, which I think is one of the ways that audio AR can be really, really compelling-- certainly not the only but one of the great ways that you can navigate through a space, listening to something, and not having your phone in front of you the whole time, and get that added layer of content.

So another project, Tributaries, this was a project in Newcastle in the UK, which was done for the 100th anniversary of World War I-- the end of-- well, the entire-- a bunch of the museums over there were doing a bunch of exhibitions on World War I. And they wanted to try to bring back some voices from back then and bring back some of the experiences that people who were alive then went through.

Obviously, there's not many recordings from back then, so we had to reproduce some of that. But they did have a lot of text, a lot of diaries, a lot of other-- there were weather reports and all that kind of stuff. So there were ways that we could-- we got a present-day weatherperson to voice the weather reports from a hundred years ago and distribute those throughout the city and kind of create this connection between this historical moment and the present day.

And I think, again, I think audio, and audio AR, in particular, I think can be very effective at compressing time, if you will, or connecting different points in time, whether it's yesterday and today-- I experienced something in this location now, and somebody will have a new experience in this location tomorrow, and there's sort of a constancy of location-- but a breadth of time, I think, can be quite interesting. And whether it's a day, a week, a month, or a hundred years, I think there's a lot of ways of jumping into that and doing some exploring. So Tributaries, that was probably the biggest aspect of that project that was new and different for me.

So now, I'll get a little into behind-the-scenes stuff-- not too much. So as I mentioned before, Roundware is a software platform. It is open source. I'm not going to get really, hardly at all, technical. So I'm sure you guys are-- but feel free to ask more technical questions if you have them.

But Roundware, essentially, is a client-server system. The server sits in the cloud. All the audio is stored up there, the database, all that kind of stuff. And then there are various clients that can talk to that server via the Roundware API. This is a somewhat old diagram, so forgive some of the stuff that's a little outdated.

But essentially, just what I said-- database, some software on the server that used to do the audio mixing. The audio mixing is now moved to the clients. But the basic point is that there's a database of audio that is all geotagged. And the clients access that data with the knowledge, of course, of where the client is at any given time, how the client's moving, all of that information.

And that enables the generation of a dynamic stream of audio that is unique for any individual. Any client will have a unique stream of audio, whether it's coming from the client or being generated locally. And it all depends on how they're walking around, how fast they're going, in some cases, what direction they're looking. I know you guys are fully aware of all the Bose stuff, the frames and whatnot.

So that's a sort of high-level architecture. This is way more than I want to actually go through. As I mentioned, the mixing is now done on the client side. So just the component parts of the audio stream are downloaded. And then, the client does the mixing, which allows for lower-latency interactions-- when you look in a different direction, you don't want to have to wait for buffering before your audio actually changes accordingly. So client-side mixing makes a lot more sense and is something that we've moved to.
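At its core, the client-side mixing described here is a weighted sum of the downloaded component parts, with the weights recomputed locally on every sensor update. A minimal sketch (the names are assumptions, and a real mixer would operate on buffers from an audio engine rather than Python lists):

```python
def mix_frame(parts, gains):
    """Mix per-part audio frames (lists of float samples in [-1, 1]) into
    one output frame by weighted sum. Because this runs on the client,
    a gain change (say, from turning your head) takes effect on the very
    next frame, with no server round trip or stream rebuffering.
    Illustrative sketch, not Roundware's actual mixer."""
    n = len(next(iter(parts.values())))
    out = [0.0] * n
    for name, frame in parts.items():
        g = gains.get(name, 0.0)  # parts with no gain entry stay silent
        for i, sample in enumerate(frame):
            out[i] += g * sample
    # Hard-clip to the valid sample range.
    return [max(-1.0, min(1.0, s)) for s in out]
```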

The iOS framework has moved to that, as has the web app framework. Android is a little behind-- budgetary problems. You know how it is, open-source projects. When I get an enthusiastic Roundware developer to participate, then that will catch up, I think.

But yeah, so frameworks for iOS, you can put it into any iOS app you want. And it basically just implements the Roundware API, allows for the generation of these streams and the communication with the server.

This is a diagram of the playlist. So essentially, at any given time, there's a dynamic playlist that Roundware generates, which determines what of those assets will play back for the person. And the assets are, if you remember the diagram before, the small dots.

You could be in a spot where there are a hundred assets. You don't want to hear them all at the same time, obviously. You want to have some kind of capacity for filtering and whatnot. So here's just a high-level view of the filtering that's possible. You have all the assets on the top and then, for a particular project, then you can filter them by tags.

Basically, it's very simple metadata kind of tags that are grouped by categories. Which question are you responding to? Are you an adult or a child, if you care about that? What's your favorite color? I mean, it could be anything.

And then, there are location-based filters based on where you're located, based on whether you heard the asset recently. And if it's been blocked for some reason, then there's ordering and prioritization, all that kind of stuff.

So again, I don't want to go into huge detail because we don't have a ton of time. But it's important, obviously, to be able to filter content to be able to create the kind of experiences that you want. And some of this content filtering is automatic, and others can be exposed to the user based on what you want.

So in some cases, I might want people to be able to say, I want to hear only comments that museum visitors have made versus curatorial comments. In other cases, I might want to enforce that those are mixed with each other to make some kind of point, for example.
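The filtering chain described above, tags, then location, then recency and blocklist, then ordering, can be sketched as a simple pipeline. The asset schema and the prioritization rule here are illustrative assumptions, not Roundware's actual implementation:

```python
import math

def build_playlist(assets, position, listen_radius, wanted_tags,
                   recently_heard, blocked):
    """Filter a project's geotagged assets down to a playlist.
    Each asset is a dict with 'id', 'tags', 'loc', and 'created' keys
    (an illustrative schema, not Roundware's real one)."""
    playlist = []
    for a in assets:
        if wanted_tags and not wanted_tags & set(a["tags"]):
            continue  # tag filter: which question, adult/child, etc.
        if math.dist(position, a["loc"]) > listen_radius:
            continue  # location filter: out of listening range
        if a["id"] in recently_heard or a["id"] in blocked:
            continue  # recency / block filter
        playlist.append(a)
    # Prioritization stand-in: oldest contributions first.
    return sorted(playlist, key=lambda a: a["created"])
```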

So here's a few other types. These are kind of different listening modes, if you will, that Roundware supports. The traditional mode is when you're within the range of a particular asset, it becomes part of the playlist. So right here, these three are in range. They're either circles, or you can have a polygon for assets, as well. That's pretty straightforward. Whatever you're close to, you can hear.

The global listen just makes everything available, no matter where you are. That can be very effective at times for certain types of projects. Or at certain times within the experience of a project, you could switch to global mode and then switch back, if you wanted.

Range listening allows for you to create some different approaches of listening to stuff that's at different distances from you. So there's a minimum and a maximum distance. I've done some experimentation with-- lift your phone up and it lets you listen to stuff that's farther away from you. And you point it downwards, it brings the stuff closer.

So again, that's a way that you can determine what content you're listening to. And that's useful if you create projects that don't have content everywhere, and you want people to still be able to listen. They can just expand their range.

And then directional listening. Again, this is what Bose Frames do but also the IMU on your phone does. So if you take your phone when you're listening to the app and you just move it around, you can give yourself a certain arc and listen to the content. In this case, these three guys would be available.

You can combine these in different ways. You can use the distance-- minimum and maximum distance-- with the range. You can combine these sort of approaches in different ways.
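The listening modes above, and the way they combine with distance limits, might be expressed as a single audibility check. The mode names and parameters are illustrative assumptions, not Roundware's API:

```python
import math

def audible(asset_loc, listener, mode, *, radius=50.0,
            min_d=0.0, max_d=float("inf"),
            heading_deg=0.0, arc_deg=360.0):
    """Is an asset audible under a given listening mode?
      'nearby'      within the asset's trigger radius
      'global'      everything, everywhere
      'range'       between a min and max distance from the listener
      'directional' within an arc around the look direction
    Mode names and defaults are illustrative, not Roundware's API."""
    d = math.dist(listener, asset_loc)
    if mode == "global":
        return True
    if mode == "nearby":
        return d <= radius
    if mode == "range":
        return min_d <= d <= max_d
    if mode == "directional":
        bearing = math.degrees(math.atan2(asset_loc[1] - listener[1],
                                          asset_loc[0] - listener[0]))
        off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        # Directional listening combines with the distance limits.
        return min_d <= d <= max_d and off_axis <= arc_deg / 2.0
    raise ValueError(f"unknown mode: {mode}")
```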

So I'm not going to demo the web admin because I don't think that's very exciting. But I'm happy to answer questions about that. And we're coming up on 4:30 here, so I thought it would probably be better for you guys to have me stop blabbing a lot and perhaps dive into what is most interesting to you guys-- hopefully, something. And we can go from there.

Is that cool? Does that work for you guys? I can certainly talk about more projects, but I'd love to hear what questions you guys have. And I can certainly demonstrate stuff, as well if you want. Questions? Comments? There's got to be something. There's got to be.

AUDIENCE: One of our first game ideas that we haven't ended up pursuing was actually Gwen's idea, but was this murder mystery, where you have to go to different locations to get different clues, to hear different clues. The idea was you'd go back in time and hear something relevant to whatever case you're trying to solve and that sort of thing.

HALSEY BURGUND: Yeah, that's super cool.

AUDIENCE: That sounds like it would be an application of--

HALSEY BURGUND: Totally. Yeah, no, I love that idea. I think the time traveling thing, as I was mentioning before, I think that can be really helpful. You can create soundscape. You could have a horse and buggy driving by-- whatever. You could have period-type sounds going if you wanted to really make somebody feel like they've gone back in time.

AUDIENCE: We could personalize that for each user so each user could be doing a different story or something.

HALSEY BURGUND: Yes. That would probably be a set of tags. You would assign different tags to different stories. You could have story A, story B, story C, and people would choose, or could be assigned, one of those three. And then, that would segment the content based on what they wanted.

But yes, going to a particular location, gathering more information or something like that for the murder mystery is definitely something that Roundware could do. It's actually funny. Actually, that Scapes piece, the Scapes piece that I showed the video of, people started making up their own games with it. And one person did kind of what you're saying.

It wasn't a murder mystery, by any stretch, but they went to a random place and said, go to the corner of the parking lot to receive further instructions or something like that. And then, they went over there, and then they proceeded to talk about the further instruction. And it was kind of a game within the piece, which I didn't have anything to do with that. It was somebody taking their own initiative to figure that out, which I thought was great.

A whole bunch of kids talked about zombies and stuff. They were like, I'm hiding beneath this sculpture. The zombies are coming. And then this story of zombies actually continued over months, with different people. All the zombie fans kind of came together.

And there was this one area of the sculpture park that kind of turned into zombie land. And then, at the end, you'll be very glad to know, the helicopters came and landed, and everybody was saved. So nobody died. But obviously, the zombies are--

AUDIENCE: They're never leaving.

HALSEY BURGUND: I don't know what they are. I'm not going to make a comment. I'm not going to come down on one side of that argument. But it's just one of those things where, when you open up your work to this kind of potential interaction with it, you sometimes get really, really neat takes on it. I mean, I certainly didn't think to create some kind of zombie scenario. But it's wonderful.

And it adds-- to me, it adds to the excitement of creating these pieces. My work is very, I guess, maybe it's a fear of ever having something be done. But there's something about knowing that things can be sent out into the world and kind of take on their own life, if you will, much like a zombie-- no--

AUDIENCE: [LAUGHS]

HALSEY BURGUND: --and get this level of creativity or level of input that I couldn't do on my own. I really get tons of inspiration from people who come in and do stuff. So yeah, I think that idea is certainly one that would be well-suited for Roundware.

I have dreams of Roundware becoming a SaaS at some point. It is not yet. It requires a little more technical involvement to set up the server and then get the client talking to the server. All the code is there. It's just a matter of going through those steps.

There's a demo Roundware app that can be used for testing stuff, which is pretty handy. If you just want to go out and make recordings in certain locations and then see how it works, that can be done. The iOS version is a good place to do that. But I think that idea would definitely be-- Roundware could certainly help at least get the basics going on that.

AUDIENCE: Question.

HALSEY BURGUND: Yes?

AUDIENCE: You showed a little bit of the Scapes user interface for the end user, where they can record and pretty much just drop a sound that has some radius. But then, you also showed all this other capacity of direction and minimum and maximum radius. How much of that do you see being useful to expose to the typical participant in the project, and how much of it is just too complex, I mean, off-putting?

HALSEY BURGUND: Yeah, to date, I've exposed very little of it. And that's a really good question. It is something I think a lot about because I do want it to remain simple for the end user. That's why there's two buttons. It's Listen or Speak. And obviously, within Speak, there's a few steps that you need to do-- what question do you want to respond to and whatnot. But it's fairly straightforward in that regard. And exposing more does cause some problems sometimes.

So I've experimented a little bit with adding additional filtering capacity but having it semi-hidden-- not really hidden, but it's behind a button. You hit a button. Then you open up, oh, here are some filters I can do. And the filters have a default state, which is basically listen to everything. And then you can specify from there and become a super-user of sorts, if you want.

AUDIENCE: So this is a user choosing their listening experience based on eliminating things that they don't want to hear.

HALSEY BURGUND: In this case, yes, yes. I like to start with the broad set. If you start with nothing, then people are going to get confused, so kind of starting broad and then narrowing down. As far as the other listening modes and everything goes, that's more of a set up the project that way, one way or the other.

And if you're seeding the project with a certain amount of content-- like the Scapes piece was seeded with basically no content. I allowed people to come in and do whatever they wanted. The Tributaries piece, I created a lot of content myself, and I put it out there-- the weather reports and diary entries and machinist's shop logs, stuff like that that was historical material. I could create that, and then I could place that where I wanted, and I could create the shapes. And I could do anything like that.

Actually, the scenario in which I think exposing more to an end user that would be most appropriate would be something that I'm working on now, which is enabling the app to be a different type of creation tool. So the person creating the experience would be able to go and say Create a Recording and then walk around a closed polygon that they want that recording to exist in, for example, and do other manipulations-- be able to move stuff around, to be able to adjust. So you're really in the actual physical location that the piece is going to be experienced when you're creating it.

Because I think that's a far cry from sitting at your computer at a Google map and drawing something. Like, I think this is where it is, but I'm not really sure, and I don't really know if this is going to work. So you do that. I always find myself doing that and then going out and listening and then going back in. And this back and forth can be very frustrating.

So I think the exposure of more in-depth tools-- I don't know if you want to call them editing tools or just experience-modifying tools-- makes more sense for the creator's side of things. Of course, end users are creators, to a certain extent, because they have the option of adding their own content.

But it's a good question. Because I think it's a constant balance. And maybe when people get more familiar with the overall idea of audio AR, then we'll be able to push a little farther in that direction.

But right now, my experience has been that people, when explaining it to them, it was wonderful when Pokémon GO came out for me. Because even though this really isn't Pokémon GO, there is an element of, hey, it's like Pokémon GO, except instead of finding Pokémon, you're going around, and you're finding little pieces of audio. And those are playing.

AUDIENCE: And you can put some down.

HALSEY BURGUND: And you can add-- right. Pokémon, you can't spawn your own Pokémon, really, at this point. But that was a nice thing to give me a context within which to talk about this stuff, despite the obvious differences. But perhaps, when AR generally and maybe audio AR gets to the point where everybody understands that it exists and there are possibilities on a base level, then we'll be able to push things farther.

But yeah, I think there's a ton of stuff with the directional. And that's why the Bose IMU-enabled headphones or frames-- and I know that there are other manufacturers doing stuff, as well. And I think that's going to be really exciting, for sure. Or AirPods, I think they've got IMUs or some semi-IMU. The new AirPod Pros have something. They certainly have an accelerometer in them. So I don't know.

AUDIENCE: Well, yeah, that's right. Because you can tap them.

HALSEY BURGUND: Yeah, because you could tap them. And I don't know what kind of APIs are going to open up on that. But it seems like people are thinking about that stuff-- wearables, hearables. I don't know if I'm too into that phrase, but it seems to be what people are using somewhat now.

But yeah, for me, the more hardware that companies put out that I can experiment with, the better. It's great because I certainly can't build hardware. I tried strapping an IMU to my-- it was a major failure. But I'm not really a hardware guy. So more questions, more comments, more things you would like me to show? I can go to--

AUDIENCE: Just a quick question. What is the precision with which you delineate where the sound will play? How exact is it?

HALSEY BURGUND: Well, Roundware is what I would call location-agnostic-- it's agnostic to where it gets its location information from. So you'll note I showed you mainly outdoor projects. Because we all know the problems with GPS indoors. We also know that GPS is the only ubiquitous, free location technology that exists right now. So we have GPS error bars right now, which, inside a building, are way too big to make it useful.

I have done a lot of experimentation with other interior systems, whether they be beacon-based or magnetic field-based or visual positioning systems. But at the end of the day, at this point, Roundware just takes two coordinates. Roundware doesn't have a z-axis at this point.

But adding that isn't a huge leap. But I haven't bothered to add it. There's no point until I can get something that actually gives me useful z data to actually base something off. But it would be super cool if-- you get UWB systems, which are like--

AUDIENCE: What's that?

HALSEY BURGUND: Ultra wideband. You can get millimeter-level. And then you could raise your phone up, and you could stand up and sit down and get whole new levels of ambient sound. I mean, it could be totally amazing. But unfortunately, Roundware depends on the systems that are built by large companies and governments that I don't have any control over.
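The triggering Halsey describes -- a recording plays when the listener's GPS fix falls within some radius of where it was left -- can be sketched in a few lines. This is an illustrative sketch only, not Roundware's actual code; the function names and the 25-meter default radius are made up for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_range(listener, recording, radius_m=25.0):
    """True if the listener is inside the recording's trigger circle.
    radius_m should comfortably exceed typical outdoor GPS error."""
    return haversine_m(listener[0], listener[1],
                       recording[0], recording[1]) <= radius_m
```

Because Roundware only takes two coordinates, this 2D check is the whole positioning story; a z-axis would just add a third term once indoor systems like UWB can supply useful height data.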

AUDIENCE: But your experience with implementing this museum park walk, or art exhibit-- did it feel like it connected spatially and conceptually? It felt very connected in the demo of, oh, he must have been talking about this particular thing. But was that the experience when you implemented it, as well, that, yeah, people really can see what object they're talking about? Or does it sort of pop up in the wrong place sometimes?

HALSEY BURGUND: It would be unfair of me to say that it was always fine because it certainly wasn't. Sometimes GPS gets wacky. And, of course, the error bars get multiplied because you make a recording, and there's an error bar associated with that. And then, you're listening to the recording, there's another error bar associated. Sometimes those can destructively interact, which is great, other times not.

So certainly, there are times when weird stuff happens. I think, generally, I was pretty aware of what the error bars were when I was creating the experience. So I could set things up to a certain extent understanding that and guiding the experience such that it would work relatively well with that.
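The compounding Halsey mentions -- one error bar when the recording is made, another when it's played back -- has a simple quantitative form if you assume the two GPS errors are independent: they combine in quadrature rather than simply adding. A small sketch (my framing, not Halsey's math):

```python
import math

def combined_error(e_record, e_listen):
    """Combined positional uncertainty, in meters, when a recording's
    GPS fix was off by ~e_record and the listener's fix is off by
    ~e_listen, assuming the two errors are independent."""
    return math.sqrt(e_record ** 2 + e_listen ** 2)

# e.g. two ~10 m fixes give ~14 m of combined uncertainty, which is
# why a trigger radius of 20-30 m works better outdoors than 10 m.
```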

But I will also say that, as an artist, I have it a little bit easier. Because people are a little more willing, with aesthetic experiences, a little more willing to be forgiving about stuff like that. So if you're talking about something that is maybe over there, you'd be like, oh, well, the artist wanted me to walk over there. I mean, if you're in a generous kind of interpretive mode, that can work.

I will say that, a lot of times, little GPS hiccups can actually be really interesting. Because if you're walking by and somebody's like-- this happened to me in the Scapes project, just to use that as an example-- I was walking by, and somebody was like, I'm standing inside this ring of trees, and I'm looking up towards the sky. And it reminds me of my grandfather's house up in Maine, where I had this similar situation.

I was like, I'm not in A ring of trees right now. I'm just walking out in this field. So it made me look around. And I was like, oh, 30 feet that direction was, in fact, something that looked like a ring of trees. And it got me to go into it.

So I think there is something really nice about sort of this audio AR generally being able to kind of pull you in different directions or guide you or encourage or whatever words you want but to get you to explore a little more by giving you information that is pertinent to a spot when you're not quite at that spot yet, for example. And I know that might seem like an excuse for GPS errors.

AUDIENCE: Right. Yeah, I can see that.

HALSEY BURGUND: But sometimes, they can be nice. Of course, sometimes they can be inappropriate and bad. But you've got to open yourself up to the goods and the bads.

But yeah, I'm just super excited for the day when interior positioning is accurate enough to really do something where you can place sound objects, walk around them, and have them feel like they've been spatialized and really get a much more true-to-reality kind of experience. But you guys probably know more about how close we are to that than I do.

But from where I sit, exciting stuff is happening. But it's still-- I mean, the big guys are all working on it, right? Spatial computing, everybody wants their spatial engine. And Amazon is doing their thing, and Niantic, of course, is well into it. Magic Leap has got the whatever-- I don't know all the-- the Magicverse. Is that what they call it?

AUDIENCE: I never know what Magic Leap [INAUDIBLE].

HALSEY BURGUND: (LAUGHING) Neither do I. Neither do I. But it seems like they're spending a lot of money, and they got good people doing stuff.

AUDIENCE: They certainly are spending a lot of money.

HALSEY BURGUND: Yes. They got Neal Stephenson doing stuff. So they got some content possibilities. So yeah. I mean, it's super exciting. But we're definitely dependent on the systems that are out of my control, out of most people's control. But I think if you work within those constraints, you can still create some pretty cool stuff.

AUDIENCE: Other questions?

HALSEY BURGUND: Other questions? Yes?

AUDIENCE: In my project, I wanted to make something that people could potentially contribute poetry to a location. So I think this might be really helpful with that.

HALSEY BURGUND: Yeah, yeah. That's very cool. Yeah, I think poetry can be wonderful here-- obviously, a lot of the language can be very physically rooted in a place. And it would be nice to actually be in that place, too. Yeah, I did a piece. It's no longer available because of the inexorable progress of new iOS versions and whatnot.

But I did a piece in Harvard Yard that, as the source material, it used a bunch of audio that the Woodberry Poetry Room at Harvard had. They've had a poetry series for, like, a hundred years of poets coming in and reading their works. So they had recordings of Robert Frost and William Carlos Williams and Ezra Pound-- really old, amazing recordings of these poets reading their work. And then they have more contemporary poets, as well.

And I took those, and I kind of sliced them up line by line and then distributed them around in different places. So I had all the first lines of the poem in a certain area. So if you went to this area, you could hear first line after first line after first line. Then you could go somewhere else and hear all the last lines.

And it made it very abstract. And the poets probably would be very upset with me if they found out about this. But it was a remixing of that, a spatial remixing, which was pretty fun to do.

But I love your idea. So were you thinking of having people read their own poetry or something along those lines? So write a poem that related to a certain location and then go to that location and kind of recite it sort of thing? Or I could imagine if you want to share any more thoughts.

AUDIENCE: I was thinking about while they're in the location, creating a poem there.

HALSEY BURGUND: Oh, almost like improv or something like on the-- oh, yeah, that could be really great. So yeah, the question would be-- I mean, in a Roundware context, that would be open the app, press Speak, make up a poem right now. And then you just press record, and you would record it and then upload it to the cloud.

And then, whenever somebody else came by, they could hear that. And then you'd have the ability, of course, as an administrator in that case to listen and move the recording or make the area bigger or those sorts of things if you were to want to do that. But that could be a wonderful way of-- it would be a nice way to traverse a city, right? You kind of wander through a city, and you hear residents' improvised poetry at crosswalks or wherever. That's a cool idea.
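The loop Halsey just walked through -- contributor records at a spot, the recording goes to the cloud, later passers-by hear it, and an admin can move a recording or widen its area -- can be modeled as a toy in-memory store. This is a hypothetical sketch of the flow, not Roundware's actual API; all names and the 30-meter default are invented for illustration.

```python
import math

class RecordingStore:
    """Toy model of a contributory audio layer: contribute, listen, curate."""

    def __init__(self):
        self.recordings = []  # each: {"lat", "lon", "radius_m", "audio"}

    def contribute(self, lat, lon, audio, radius_m=30.0):
        """A participant drops a recording at their current coordinates."""
        rec = {"lat": lat, "lon": lon, "radius_m": radius_m, "audio": audio}
        self.recordings.append(rec)
        return rec

    def nearby(self, lat, lon):
        """Recordings whose trigger circle contains the listener
        (flat-earth approximation, fine at city scale)."""
        def dist_m(r):
            dy = (r["lat"] - lat) * 111_320  # meters per degree latitude
            dx = (r["lon"] - lon) * 111_320 * math.cos(math.radians(lat))
            return math.hypot(dx, dy)
        return [r for r in self.recordings if dist_m(r) <= r["radius_m"]]

    def admin_move(self, rec, lat, lon, radius_m=None):
        """Curation: relocate a recording or resize its audible area."""
        rec["lat"], rec["lon"] = lat, lon
        if radius_m is not None:
            rec["radius_m"] = radius_m
```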

AUDIENCE: And you've been working a lot on prompts, right?

AUDIENCE: Yeah.

HALSEY BURGUND: Oh, cool.

AUDIENCE: Things to sort of get people who may not necessarily appeal to [INAUDIBLE].

HALSEY BURGUND: That's really smart to think about that. Because that's really hard. I mean, poetry is particularly hard. But even for me, to say anything is sometimes hard. So I always think about what questions I ask. And I usually have some question that's so broad that anybody could do it.

Like just tell a story about your day. I don't know. Look at the ground and tell me what it reminds you of. I'm sorry, I'm making up bad examples right now, but something that's very broad. I mean, literally, it could just be, share something. Say something. And then, as I go down the list, I get into more details.

Because I find that some people freak out when they have a super-broad question. They're like, what do you mean something? What does that mean? What is say something? They are much happier being given instructions like, look at a tree near you and describe it.

Then they're like, OK, I can do that. There's a tree. Record. The tree is quite tall, and it's got lots of branches. And the leaves right now are green, but they're starting to turn yellow. Some people are much more comfortable with that. So in a poetry context, that could be really interesting. What are some prompts that you've come up with thus far? Because that's a challenge.

AUDIENCE: Yeah. Right now, I've just been thinking about sound. So it wasn't necessarily a poem about sound but ask, what sounds can you hear right now? What can you really [INAUDIBLE]? For one test, I asked them to write a poem about a different subject based on the sounds they heard.

HALSEY BURGUND: You mean like a poem about sound that isn't about what that sound is, actually? Like what it reminds you? Is that what you're saying? Sorry, I'm--

AUDIENCE: Kind of. The theme is not necessarily the sound itself.

HALSEY BURGUND: Right. So what it reminds you of or what it maybe points towards or something like that, yeah. Yeah, no, that's great to use sound. Poetry is such a-- in my mind, poetry is meant to be read aloud. All the rhythms and everything really come out when it gets into the audio from just the written page. I mean, the written page can be really interesting, too. But it's hard. It's hard to just make up a poem. And it must rhyme!

AUDIENCE: I think that one's not so strict on your [INAUDIBLE] but a little bit more free verse.

HALSEY BURGUND: Yes, that's good. That's good. Yes, free form is-- I'm always impressed with freestyle rap battles and stuff like that. I mean, obviously, Eminem is the-- the 8 Mile thing, it's just like, oh, my gosh. How do they come up with this stuff? And I know a lot of it is-- I know there are strategies and techniques that make it more doable. But my brain doesn't work that way. I can't just-- although it sounds like it right now, that I can just talk forever. But I'm not always able to.

AUDIENCE: If you write the audio SDKs, you probably could.

HALSEY BURGUND: Yeah. Right. Oh, that reminds me, I should also mention that I have started a new website called audioar.org.

AUDIENCE: Yeah, we got your web post for--

HALSEY BURGUND: Oh, great, great, great. So the site is in its infancy right now. I've started it with a couple colleagues, a couple journalists, and Fran, who came here a few months ago-- Fran Panetta.

AUDIENCE: A month ago.

HALSEY BURGUND: A month ago? To talk to you guys. British. You probably remember because of her very perfect British accent. So she has been doing audio AR stuff, obviously, as you know, for a while, too. So we started this site just to try to collect different practitioners in the area.

And it felt like something was starting-- companies, obviously. Bose and Amazon and Apple and others are getting interested. And we thought it would be a nice place to let practitioners and technologists and academics and other people share their experiences, things that work, things that didn't work, and let people know when projects are happening and whatnot.

So we're ramping up on that. And Philip has agreed to be interviewed for that, at some point, whenever I get myself in gear. I promise I will, and I'm looking forward to that.

AUDIENCE: No rush.

HALSEY BURGUND: Yeah, I know. You're like, please forget. [LAUGHS] No. But yeah, any projects you guys do, we'd love to think about getting stuff up there. Because I think this class is evidence of the fact that it is becoming a real thing, which is really exciting. So I'm super excited to hear or to experience what you guys choose to do with the class. Is a project sort of the deliverable for the class?

AUDIENCE: Right. So there will be a presentation. We need to discuss that before you leave, when it's going to happen. Because there are possible changes there. But there will be a presentation at the end of the class. Francesca will hopefully attend. Some people from Bose will attend. If you can come, that would be great.

HALSEY BURGUND: Oh, cool. Yeah, yeah, keep me posted.

AUDIENCE: Otherwise, we'll spend the next two weeks thinking through how to disseminate our insights from the class. So there will be something online, I dare say. But we will try to decide that together, exactly what form it will take. So there will be something. We will leave a trace.

HALSEY BURGUND: Well, to the extent possible, we'd love to amplify that message on audioar.org if it's possible. But we can obviously talk about that. But yeah. So if you guys have any ideas about things to include or practitioners or whatever, please feel free to reach out. Because we are super open and want to learn from each other.

AUDIENCE: So we're running out of time, but I am very curious about one aspect of the system. Is it dynamic in any way, such that people who make a contribution could actually change parameters or aspects of the experience for other people beyond just the sound trace they leave? So an example would be-- if there's a lot of people leaving-- I don't know why I decided to call them sound traces-- but if a lot of people leave sound traces in the same location, can the system detect that and decide to play sort of a more multi-layered ambient soundtrack in that area or something, some way of being aware of what's being inputted?

HALSEY BURGUND: That's a really interesting area to think about. The simple answer right now is no, in the sense that Roundware doesn't-- I mean, in Roundware, you can choose how many simultaneous voices can play. And therefore, if you say three simultaneous voices can play if three are available, then, in that sense, a location where people add more will be a location where, potentially, three will-- if there's a hundred in the spot, you'll probably have three playing altogether at any given time.

That's not the music layer or whatever. That's just the-- whereas if you have a spot that only has one available, even though three could potentially play at the same time, only one will ever actually play because there's only one available. So that's a very, very rudimentary aspect of what you were talking about.

And that's the only-- at least if I'm understanding you correctly-- that's the only thing that Roundware does currently. But I love this idea. I hadn't thought about this. I love this idea of more activity, the heat map heats up and something changes with--

AUDIENCE: Yeah, we are always trying to think about is there a way I can program this as a participant? Because we're at MIT. So there might be a way, even if you don't know it, injecting code some weird way by--

HALSEY BURGUND: Yeah, who knows?

AUDIENCE: --overloading the database or something. But it's just I think there is some potential there somewhere. It could be very advanced, like the words you say could actually be understood by the system, and it could change things. Then, yeah, you're halfway to The Matrix.

HALSEY BURGUND: I don't know. I've been thinking about it. The first step to that, obviously, is getting some kind of accurate translation or transcription service automated. And then you could get to the point where you could search for-- only play me commentary that uses the word "love."

And then you could walk around and get this whole experience with just-- and then you do sentiment analysis. And then it's like only give me the comments that feel love, even though they don't say love, but they feel it. So I think those are super-exciting areas to look into.
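Once transcripts exist, the keyword filtering Halsey imagines is straightforward; sentiment analysis would be a further step (a classifier scoring each transcript) and is not shown here. A hypothetical sketch, assuming some speech-to-text service has already produced `(audio_id, transcript)` pairs:

```python
def filter_by_keyword(assets, word):
    """Return the ids of recordings whose transcript uses `word`.
    assets: list of (audio_id, transcript) pairs from a
    speech-to-text pass; matching is case-insensitive, whole-word."""
    w = word.lower()
    return [a for a, text in assets if w in text.lower().split()]
```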

AUDIENCE: Yeah, I just want to know what the zombie people would do with something like that, right, where they can affect the system itself.

HALSEY BURGUND: Or the zombies themselves.

AUDIENCE: Yeah.

HALSEY BURGUND: Yeah, this is very, very important. But I love that idea. And yeah. So it's open source. You go for it. [LAUGHS]

AUDIENCE: We're out of time. This was really helpful. Actually, we have one team, who are out today because of medical emergencies, that is basically doing a kind of Pokémon GO with music.

HALSEY BURGUND: Oh, cool.

AUDIENCE: So we'll definitely have them look at your system. And they, I would guess, will want to use it.

HALSEY BURGUND: Well, feel free to reach out. I can, again, depending on the level of technical skill and time involved, or time available, I can recommend different approaches to things. But all the code is there. The server is written in Python using Django. Obviously, iOS is Swift. And the web stuff is very nascent right now, but it's all JavaScript, sort of modern-day JavaScript stuff. So to the extent-- Yes?

AUDIENCE: [INAUDIBLE]. But is there a way to procedurally generate a location for a sound bite, or is it totally tied?

HALSEY BURGUND: Can you describe what scenario? Like, what input would you be taking to--

AUDIENCE: So this has to do with the Pokémon GO, like audio music project that Chuck was doing. Chuck and?

AUDIENCE: Terese?

AUDIENCE: Terese, yeah-- Chuck and Terese are doing. Anyway, so in Pokémon GO, you walk around, and Pokémon spawn. Can audio also spawn--

AUDIENCE: The probability--

AUDIENCE: --based on a probability?

HALSEY BURGUND: Yeah. That would be the kind of thing that would be a relatively small modification for a particular project. I haven't done that. The other thing that I haven't done that I'm very excited about is moving sounds. After they spawn, then they move. You assign a path and a time, whatever, and they loop around.

So then, you can just sit in one place, and you can let stuff flow over you. Or you can move around, and there's this sort of double interaction in that sense. So it's a single interaction, but you know what I mean-- double motion. So that kind of dynamism, I think, is very exciting. But yeah, I like the idea of spawning, sort of popping up, based on maybe your previous experience within the app or something like that.

But yeah, I think the framework is there that you can-- I've been just adding stuff as we go, semi-organic codebase in that sense, which is sometimes problematic. But it's fairly clean. So check it out. If you know Python, you can jump in and have a look, all right?
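The two ideas from this exchange -- probabilistic spawning, Pokémon-GO style, and a sound source looping along an assigned path over time -- can both be sketched briefly. Neither exists in Roundware per the discussion above; these are hypothetical helpers with invented names:

```python
import random

def maybe_spawn(p, rng=random):
    """Spawn a sound with probability p (0.0 to 1.0), Pokémon-GO style;
    p could depend on the listener's previous activity in the app."""
    return rng.random() < p

def position_on_loop(path, period_s, t):
    """Where a moving sound source is at time t, linearly interpolating
    along a closed path of (lat, lon) waypoints that repeats every
    period_s seconds."""
    n = len(path)
    frac = (t % period_s) / period_s * n  # position in "segments"
    i = int(frac) % n                     # current waypoint
    j = (i + 1) % n                       # next waypoint (wraps around)
    u = frac - int(frac)                  # fraction along the segment
    (la1, lo1), (la2, lo2) = path[i], path[j]
    return (la1 + (la2 - la1) * u, lo1 + (lo2 - lo1) * u)
```

With moving sources, the listener and the sounds are both in motion -- the "double motion" Halsey describes -- so you can sit still and let the piece flow over you, or walk through it.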

AUDIENCE: Cool.

AUDIENCE: Thank you. It's been--

HALSEY BURGUND: Thank you guys so much. Thank you.

[APPLAUSE]