The media landscape is rapidly changing and new technologies for visual journalism and storytelling are proliferating. Whether it is 360 video, Artificial Intelligence, Virtual Reality, or Augmented Reality, contemporary visual storytelling is in a period of experimentation as people prepare for the next iteration of the web, especially what is called Web 3.0 and the metaverse.
Our guide through this novel terrain is DJ Clark, a world-renowned multimedia journalist and researcher. Winner of the 2020 World Press Photo Interactive of the Year for his “Battleground PolyU” project, DJ Clark developed a new workflow for his team reporting on the Hong Kong protests that combined 360 video, digital mirrorless cameras, and smartphones.
In this event, DJ Clark shows and discusses how “Battleground PolyU” was produced, and then details the new technologies he thinks will rise to prominence in 21st-century visual journalism.
David Campbell: So DJ, it’s great to have you with us here to talk about new technologies for visual journalism in the metaverse, and I know that you’re going to explain the metaverse to us, among other things. I imagine that people have read your profile and so on. But I think that you’re uniquely placed to take us through this landscape, given your combination of being a photojournalist and multimedia journalist, producer, director of a multimedia unit at China Daily, and obviously a researcher as well. So thank you very much for your time and being with us. I’ll hand over to you for the presentation, and everyone in the audience, as always, if you’ve got questions, drop them in the Q&A box throughout the event. I’ll bring the questions in or we’ll address them at the end.
DJ Clark: Okay, thank you, David, for the introduction. I’m gonna go straight into sharing my screen. And as you said, you’ve probably done enough in terms of introducing me, so let me just play this if I can. You should be able to see it. Seems I’ve lost my play button. Where are we? Alright, here we go. Okay. So yeah, I’m going to be talking a little bit about new technologies for visual journalism in the metaverse, and I’m going to be talking from my perspective, given that this particular subject is something that, certainly the way that I read it and read around it, a lot of people have very different understandings of. I’m going to explain how we see it. I’m not suggesting for a minute that I’m a real expert in this, but it’s a challenge that I’ve been drawn to take on. So let me just go through what I’m going to talk about today. Firstly, how we see Web 3.0 in my company. As David has explained, I’m the multimedia director at China Daily Asia, but, perhaps more importantly, I work in a lead role with the Asia News Network, and I’ll talk about them a little more in developing these new strategies for the metaverse. I’m going to talk a little bit about artificial intelligence, AR and VR, which I’m sure you’re familiar with, and try to unpack them a little and show some of the work that we’ve been doing with those different technologies. And we’ll talk about the Hong Kong protests, because there’s this sort of unfortunate, well maybe fortunate, I’m not sure, coincidence, in that we kind of stumbled across this ability to create what we think is very engaging virtual reality content in a visual journalism context, not through planning for it, but through a series of coincidences, which I’m going to explain, that led us to this project, Battleground PolyU, that went on to win a lot of awards.
But also, more importantly, I think, it really opened our eyes to the potential for virtual reality as a journalistic tool. So I’ll be talking quite a lot about that, and then, from that, a little more on the opportunities I think there are for visual journalists in this space. We start with the Asia News Network, because a lot of what I’m going to be talking about is based on the work that I’m doing, as I said, as one of the lead strategists for how we as a network position ourselves. This is a network of 23 media organizations across Asia. We’ve had this partnership for the last 20 years, sharing firstly text, then photographs, then video. I was brought in to Hong Kong to head up the video sharing. And as I talk through this, we’re now looking at how we position ourselves for Web 3.0. I’m going to try and explain now the different ways that we see the web. And again, this is just our understanding of it, the way that we talk about it, though I understand that this is not something that is—
It is something that is contested, and people have different opinions of it, depending on where they come from and which angle they come from. And I don’t profess, as I said before, to be a real technology expert, but we understand enough about it to know that it’s something that’s coming our way, and we need to prepare for it. So let me just explain this. Web 1.0: this was the first iteration of the internet. And for us as media organizations, that was really about shoveling what we had in legacy media, that’s radio, television, or print media, onto a space on the internet where people could read it. But it was very, very similar to the previous format, in that it was very one-way; we were just putting up what we’d already done. Normally, after it had been aired or gone to print, it then had a presence online. When we moved into Web 2.0, this is where we started to move away from these ideas of legacy media and really look at the potential of a space where everything is possible. So looking at using our own platforms, but also other platforms like Facebook, Twitter, YouTube and all these other social media platforms, where effectively we look at a story and think, okay, what’s the best way to tell this story when everything is possible, given that in legacy media you were very much limited in television to just video, in radio just to audio, and in print to pictures and text. Here we can not only rethink the way that we tell stories to give the audience a better experience, but we can also involve them in the whole process, sharing and discussing, and also not waiting for the news to come out in the paper first, but actually thinking of it as digital first. So all of this happened around what we would call Web 2.0.
And then as we move into Web 3.0, we’re now looking, as I said before, at the possibilities of using augmented reality, virtual reality and artificial intelligence to create very different content, content that brings the audience from just being in front of the screen, looking at us and interacting with us as something separate from where we are, to actually jumping inside and being all around us, or being with us. And it’s very difficult to explain if you have not experienced it—I’ve got an Oculus headset here—if you’ve not experienced this with one of these sets, it’s hard to understand what this means. But effectively, what we’re saying is that with Web 3.0, we’re looking at the audience actually being almost inside the screen, or it being all around us, rather than something that’s in front of you.
Okay, I’m not sure I explained that very well. But we see this presence concept as being at a very early stage. And when I’m trying to explain this to our managers and the different editors around the Asia News Network, I like to compare these headsets to the very first mobile phones. They were clunky, they were big, they were awkward, they were embarrassing to bring out in front of other people. And we feel that these devices are very much like that. They are not particularly comfortable to wear, they feel awkward, and they look a bit stupid if you’re walking around with them on, but we see that as something that will change. We’re not sure on the timescale of it. But we can see these new concept-type glasses coming out which are much slimmer, much more fashionable, and you can see that people would wear them without feeling that it’s something very strange. Interestingly, just last week, Google had their yearly keynote address, and at the very end they teased these glasses. I’m going to show you the video of this. Although these are only augmented reality glasses, we’re seeing for the first time something that looks much more natural to wear than the big, chunky things we’ve been wearing before. Let me just play this; I think you should be able to get some of the audio coming from me.
DJ Clark: I’m limited on time, so I’m not gonna play the whole thing. But I think you get the idea: although these are not on the market yet, and as I said they were just teased at the very end of the keynote, we do see a marked difference from, as I said, something like this, which is big and chunky and embarrassing to wear, to something that looks much more natural to put on. So let me just go through the key elements of the metaverse. We have a VR interface, we become avatars, and we have digital ownership. And this is one of the key problems at the moment with the metaverse: there are multiple metaverses, and how you bring things that you own from one metaverse to another is complicated at the moment. It’s one thing that they’re working on. And I’m going to show you just a little short video before I get into talking more about PolyU. Again, if you’re not familiar with this technology, this is to give you a sense of what it looks like when you wear these goggles and the potential around it. This is just from one app, Spatial.
DJ Clark: Okay, again, just for the sake of time. This just gives you a sense of where we’re at now. In fact, we use Spatial and Facebook Horizon for meetings now with our editors at the Asia News Network; I’ll show you an example of that later. So what you’re seeing there is not something futuristic, it’s something that’s very much present at the moment. But I’m going to get into how that translates across to visual journalism as we go on. So, very quickly, the kind of work that we’ve been doing: we’ve been working with artificial intelligence, augmented reality and virtual reality—augmented reality not so much. I’m going to give a couple of examples of what we can do with these technologies. This is work we’ve done over the last five years or so, so none of this is new. But what we’re hoping is to bring this knowledge and these kinds of projects into the metaverse. In fact, some of it we may be able to port straight from the existing websites they sit on into metaverse experiences. So I’m going to need to change my share for this, to come out of this and jump over to my Google Chrome. Here we go. So you should be able to see now—this is a website that I’m asking David to share with you later, and we have some projects in there that I want to pick out. This first one is an artificial intelligence project. And it’s very simple: effectively, you ask the subject, who’s a transgender politician from Nepal, a question, and you sit—I press the microphone. Oops. So I need to allow it. Just wait one second.
Okay, let’s do it again. What is it like being transgender? So I’ll try again, what is it like being transgender in Nepal? Alright, I’m gonna have to start again, what are the challenges of being transgender in Nepal?
Artificial Intelligence Bhumika: In Nepalese society, many transgender people are facing many discrimination, statutes, violence, by society, by family—
DJ Clark: Um, stop it there, again, just to keep things moving quickly. But what this is, is about eight hours of interview with Bhumika. We’ve transcribed that whole thing and fed it into an artificial intelligence engine. What we’re able to do then is you just ask a question; it does a transcription of your question, then edits on the fly, looks through the whole transcript of eight hours of video, and gives you an answer. So effectively, you can sit down—and again, there’ll be a link to this, you can try it out later—and have a conversation with her in whichever way you want. The idea, again, was just a proof of concept. But if you take this into other formats of visual journalism, we’re thinking, you know, there are all sorts of people—maybe somebody who fought in a war, the Second World War or something; there are not many people still alive, and you want future generations to have a chance to sit down and talk to this person or ask them questions, and you’re able to do that through artificial intelligence. So again, this is one example of us trying to experiment with this technology and how we might use it in a journalistic project. Another that I will show you very quickly here is a story where we wanted to look at the UNESCO sites in Asia that are under threat. These are World Heritage Sites that they are worried won’t exist in years to come, and we created virtual reality—
VR Video Plays
DJ Clark: So you can pick any of these and it will go straight into a VR video. And again, it’s a format—
VR Video Plays
DJ Clark: I’m gonna close that down and I’m gonna go back to my share. So these are two examples. I think I have an AR example as well. Somewhere in here, go back to my keynote. Here we go and I’ll play that.
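[The question-answer workflow DJ Clark describes for the Bhumika project (transcribe the interview, transcribe the viewer’s question, search the transcript, play the matching clip) can be sketched roughly as below. The matching method and the toy transcript are illustrative assumptions only, not the project’s actual engine, which is not detailed in the talk.]

```python
# Hypothetical sketch of the retrieval step behind an "ask the interviewee"
# project: match a viewer's question against timestamped transcript segments
# and return the clip most likely to contain the answer. Uses a simple
# bag-of-words cosine match from the standard library as a stand-in.
import math
import re
from collections import Counter

STOP = set("what are the of being in a i and to from is it you did why do".split())

def tokens(text):
    """Lowercase word tokens with common stop words removed."""
    return [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOP]

def cosine(a, b):
    """Cosine similarity between two bags of words (Counters)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_clip(question, segments):
    """segments: list of (start_sec, end_sec, transcript_text).
    Return the segment whose transcript best matches the question."""
    q = Counter(tokens(question))
    return max(segments, key=lambda s: cosine(q, Counter(tokens(s[2]))))

# Invented toy transcript, standing in for the eight hours of interview:
segments = [
    (0, 40, "I grew up in a small village in the mountains of Nepal"),
    (41, 95, "Transgender people in Nepal face discrimination and violence from society and family"),
    (96, 130, "I decided to enter politics to change the laws"),
]
```

[A production system would swap the bag-of-words match for speech recognition plus a learned semantic index, but the shape of the pipeline, question in, matching clip out, is the one described above.]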
David Campbell: And these are projects that you’ve undertaken at China Daily, in the unit that you head up.
DJ Clark: Exactly. So these are projects we’ve done, experimenting with some of these new technologies as they came out. At the time we weren’t talking about the metaverse, but we were experimenting with what was possible. And everything that we’ve done up to this stage is portable into this next phase, what we’re calling Web 3.0. It’s through doing these experiments and working with our partners at the Asia News Network that we’ve been able to get ourselves in a better position to exploit the opportunities that are going to come up when they come up. At the moment, these projects are extremely expensive for media organizations to put together; some of them we’ve had funding for. But at the same time, we feel they’re worth experimenting with and investing in. And we know that our competitors are doing it across the world; we’re all trying these different ideas. But we’re hoping that we’re going to be able to use the knowledge that we get from these to bring it into this new space, as I said, where people can actually immerse themselves in the technology; it’s going to be all around them. Projects like this, at the moment, most people are engaging with just on a screen, like we’re doing now. But in future, they’ll be able to jump in. So in the first example, they’re going to be able to sit down in front of Bhumika and have a conversation with her. And in the second example, they’re going to be able to go to these places which may no longer exist, because for whatever reason they’re under threat, and experience them firsthand from anywhere in the world. And this was another example of AR, which I’m going to flip through very quickly.
So, augmented reality, where we’re placing information in front of what we see. I think most people have played with examples of this. This is one example, which isn’t ours, that I wanted to bring up: it looks at different planets and what they consist of. Okay, I’m going to keep moving, because I’m aware of time constraints. So let me talk about PolyU now, because I think Battleground PolyU is really where we feel visual journalism has got a part to play. A lot of what I’ve shown you up until now, and a lot of what we’ve done up until now, has been more around documentary. It’s been about trying to document ideas and things that are happening, using this new technology. And we quite often make the comparison to when photography was first invented: photography kind of released painting from that tradition, up until that point, of actually documenting what was in front of you. In the same way, we think that some of these new technologies, particularly virtual reality, are so lifelike, so great at preserving what’s in front of you, that they kind of release photography to become more artistic. We see it in a very different way; it doesn’t have to play that role of just documenting, because the documentary side is often done better in this format. So let me talk about PolyU. I’m sure most of you know there was a series of protests in Hong Kong in the second half of 2019, which I found myself in the middle of. These were regular protests that were happening two or three times a week, sometimes more. They typically started with a march that would later turn into conflict between police and protesters, with rocks and tear gas and all sorts of other things.
So, as a result of covering this over a period, we soon came up with a system. Now bear in mind that our job primarily was just to report what was happening. Our job was to be on the ground and visualize that: take photographs, take video, sometimes go live and show people exactly what was happening at that point. We had a desk in the China Daily office, but we were also serving our Asia News Network partners with this kind of live raw feed of visual information about what was happening. So early on, we figured out a system of having a DSLR, or a mirrorless camera in this case, on my shoulder there, with a long lens on it, which we used for long shots. And I’ve got an example here. So this would be taking high positions from a distance, using the long lens to pick out the action and stay clear of the tear gas and water cannons and all the other stuff that they were using. We also had phones with us, which we used for going live. The phone was by far the easiest way of doing that. We have good internet connections generally around Hong Kong, and with 5G phones, or I think these were 4G at that point in time, we were able to stream live from wherever we were, if anyone wanted a live connection. And then finally, you’ll see in my right hand there, I’m holding a 360 camera. And this was a real innovation for us: we found that when we were in close action, we could often miss what was happening when we were trying to shoot it with a normal camera, because it was happening all around us, and happening so fast. And, I’m not saying it wasn’t dangerous—there were some dangers involved, there were rubber bullets, tear gas, water cannons and all the other devices the police were using—but there wasn’t live fire, and therefore you could get fairly close to the action.
But you’d find that you’d miss it, because it would often happen so quickly, with a petrol bomb or something being thrown behind you. That was very difficult to catch when you were right in the middle of it. And we quickly discovered this with the small cameras: this one was a 4K camera, and we later updated to the GoPro Max, which is the one that I used for PolyU, a 5.6K camera, or 5.2K, I think. Excuse me. We found that if we just placed that camera up above, we were able to very quickly capture everything that was happening around us. And if there was some action, we could then go back and edit it from the sidelines. I think I’ve got an example here that I can show you of what that looked like, showing me filming.
David Campbell: Are these scenes that actually made it into the final production of Battleground PolyU?
DJ Clark: Let me turn the volume down. Can you repeat the question?
David Campbell: Are these scenes that made it into the final film of Battleground PolyU?
DJ Clark: This was just a regular protest; this is kind of building up to it by just showing you— the advantage of this was that, right after the event, as soon as the action had finished, I would go to the side, upload the 360 footage to my phone, and from that very quickly edit a scene in 16 by 9, not in 360 like I just showed you there, and send it down to the desk. So we had video, and I could also do audio and still grabs as well. So generally speaking, in terms of trying to get this rounded, multimedia approach, we didn’t have separate photographers and videographers. We soon worked out that, because things were happening all over the place, it was better that we all tried to do photography and video. If we spread ourselves wide, we were more likely to catch the action than if we were all together in one place, each trying to do singular things at the same time. So it was just a workflow that worked for us, and we worked it out fairly early on in the protests. As I said, we would quickly capture something that happened, we would go to a side street like this, and then from the phone I could download the images and the video from my DSLR; anything on the phone was already there, and from the GoPro it was a quick edit from the 360. I could very quickly send that to the desk and get back out there. So that’s really the workflow we were working with when we came to PolyU. And PolyU happened in November, November 19th I think it started, so quite late in the protests, and things had been escalating slowly. This was when students took over university campuses across Hong Kong, and this was a particularly important one.
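[The “360 to 16 by 9” reframing step DJ Clark describes, pulling a flat framed shot out of the spherical footage, can also be scripted on a desktop with ffmpeg’s `v360` filter. This is an assumed desktop equivalent, not his actual phone-based toolchain, and the filenames and angles below are invented:]

```python
# Sketch of extracting a flat (rectilinear) 16:9 view from an equirectangular
# 360 clip using ffmpeg's v360 filter. This builds the command; running it
# requires ffmpeg to be installed.
import subprocess

def reframe_cmd(src, dst, yaw=0.0, pitch=0.0, h_fov=100, v_fov=60):
    """Build an ffmpeg command that points a virtual 16:9 camera at the
    given yaw/pitch inside an equirectangular 360 source clip."""
    vf = (f"v360=input=equirect:output=flat:"
          f"yaw={yaw}:pitch={pitch}:h_fov={h_fov}:v_fov={v_fov},"
          "scale=1920:1080")
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:a", "copy", dst]

cmd = reframe_cmd("protest_360.mp4", "protest_16x9.mp4", yaw=90)
print(" ".join(cmd))
# To actually run the conversion:
# subprocess.run(cmd, check=True)
```

[Changing `yaw` between cuts is the scripted equivalent of the reframing he did in the field to chase the action around the sphere.]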
There had been a big conflict at Chinese University, which was across a major road. This was even more important because PolyU sat right across one of the three big crossings of Victoria Harbour, and it blocked that crossing, so it was causing major disruption to the city. So we figured that this was going to be the place. And again, just to keep things moving fast: I covered it for five days, sleeping in the university for one night, sleeping out on the street outside it for another night. And it was the first story that— most of the protests that we covered had no real beginning, middle and end. This was a story that unfolded over five days: you had the protesters firstly occupying the university, then this conflict with the police, when the police moved in and tried to flush them out but were not able to break the barricades. These pictures were taken during that period, which culminated with the police making a big charge on the Sunday night to try and get across the bridges into the university, and then this big firestorm, with the students trying to set the bridges alight. The police then gave up trying to get into the university—they thought it was too dangerous—and so they just put a cordon around it. And then the next day we saw thousands of protesters coming from outside the university—because the protesters were stuck inside—trying to break them out, trying to break the police cordon. We had a whole day on the Monday of this continuous battle with people from outside trying to get in. It was fortuitous that I was in the university during the early stages. My batteries and my disk ran out, and so I decided to come out on the Sunday night. And then on the Monday I was outside, which is when all the action happened on the outside. This is from that period there.
And then I managed to get back in on the last day, which is when the majority of the protesters, not all, either gave themselves up or managed to escape one way or another out of the university. So it was the first time I had a story that had a beginning, middle and end. And my thoughts immediately went to: well, this is a place where I can make a story, I can make a film, rather than just doing the news coverage I’d done at pretty well every protest leading up to this. But it was during the last day, in fact, when I was inside the university,
DJ Clark: I started playing back some of the footage that I had from the 360 camera. And again, it was fortuitous that I had picked up a GoPro Max, which was a new camera that had just come out. It had this thing called spatial sound—I hadn’t heard of ambisonic sound or 360 sound; these are words we hear all the time now, but this was something new to this camera—and I was playing it back with headphones, and I found, even just watching it on the phone, that it had this completely different level of engagement that I hadn’t experienced before, because the sound was coming from the right direction. It seemed like you were completely engaged in it. And I was watching this and thinking, wow. Having said that, when I shot it I hadn’t ever thought of putting a film together in 360. But afterwards we looked at what we’d got and decided to put the film together. This is a quick promo of that film.
360 Film Promo Plays
David Campbell: So just to say that I’ve put the link to the full film in the chat for everyone to see; it’s on YouTube.
DJ Clark: So you can watch it on YouTube, and if you watch it on a computer there is an experience there. But I think there are two things that people miss: firstly, the experience of having this all around you when you’re watching it as an immersive piece, and secondly, the addition of this ambisonic audio, which is really what the metaverse is all about. The biggest thing that differentiates what we call the metaverse from the virtual reality that came before it, I find, is this real spatial audio experience, which tricks the senses, in a way that regular audio doesn’t, into thinking that things are actually around you. And you know, they always say audio is 40% or 60% of film, depending on who you’re talking to. But in this case, it’s more about bringing you out of this space of feeling that you’re distant from whatever you’re watching, to really feeling that you’re in the middle of it. And I think this is the experiential thing that we’re getting with this particular film, for the first time. The reactions we got from people who watched it with headsets and headphones on were that they really feel that they’re there, in it. And for me, as a piece of documentary, to be able to experience that—I now look back at other historical events, like the D-Day landings or whatever, and think, wow, if I could actually experience them in the same way that I’m experiencing PolyU, that would be amazing: to go back through all these big historical events, and not just turn around and see what you’re looking at, but also feel the intensity of being there through the audio and everything that’s going on around you. And that’s really the sort of reaction that we’ve got.
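[The reason head-tracked ambisonic audio “tricks the senses” is that the player counter-rotates the recorded sound field as the viewer turns their head, so sources stay fixed in the world rather than in the headphones. A minimal sketch of that rotation for first-order B-format audio, with one sign convention assumed; real players and channel orderings may differ:]

```python
# Minimal sketch of the yaw rotation at the heart of head-tracked ambisonics.
# First-order B-format carries four channels: W (omni), X (front), Y (left),
# Z (up). When the listener turns by `yaw` radians, the player rotates X and Y
# against the turn before decoding to the headphones.
import math

def rotate_yaw(w, x, y, z, yaw):
    """Rotate one first-order ambisonic sample about the vertical axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    # W (omni) and Z (height) are unaffected by a rotation about the
    # vertical axis; only the horizontal X/Y pair rotates.
    return (w, x * c + y * s, -x * s + y * c, z)

# A sound directly in front of the camera (all energy on X): after the
# listener turns 90 degrees, its energy should sit entirely on the Y axis,
# i.e. it now comes from the side, exactly as it would in the real scene.
w, x, y, z = rotate_yaw(1.0, 1.0, 0.0, 0.0, math.pi / 2)
```

[Higher-order ambisonics and binaural decoding add more channels and filtering on top, but this counter-rotation is the piece that makes sounds feel anchored in the scene rather than glued to your head.]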
I’m sure, you know, as we go through time, people will get more used to this. But certainly in the moment, there’s still something that really grabs my attention. I mean, the other thing about the PolyU film—I was talking before about how this is such a better way of documenting than photography—is that you get to look at what you want to look at. It’s not the photographer anymore dictating: you need to look at this, or you need to look at that. You get to see your own story. And if you look at the film, there are lots of small stories going on within it, and you can follow them however you want. At the very beginning, for example, there’s the police chase, and you’re running through the streets with the policeman who’s chasing a protester. You can watch that whole scene from that angle, but you can also look the other way, see something else that’s happening, and watch that. And there are stories within that. So you effectively get to choose what you want to look at. And of course, there are lots of ethical questions that go along with that. But it does address something people raise about photographers—and I’ve been a photojournalist all my life—which is that I know I’m making decisions on telling people what to look at, and that becomes about what’s important to me, rather than what’s the most important part of the story. Let me see if I can move on. How am I doing for time? I don’t have a watch on me.
David Campbell: Plenty of time. I mean, it’s absolutely fascinating. So I want to spend—
DJ Clark: Okay, let me keep going. So, we learned a huge amount from that PolyU experience, and as I said, it was never intentional; we never even went to that protest to make this film. It’s something that just came about from the workflow we had developed earlier on, which we thought was the best way of covering a news event. And we weren’t using the 360 video as 360; we were only using it as 16×9, which we were editing and putting out in a traditional way. Having said that, we knew how to do it, because we had tried 360 video before. We had a YouTube channel that ran for a year, with a new video every week from around Asia, trying to experiment with 360. We’d done that project I showed you earlier, on Asia’s ailing heritage. So we had the experience of knowing how to do it, but we’d almost thrown it aside, saying this is never gonna work; and then we brought it back, as I said, with this ambisonic audio. And we see from that that there’s all sorts of potential in covering events. I mean, it took us, what, six weeks or so to edit that, so it wasn’t something that was coming out very quickly. And I’m interested to know if we’re going to see some of this from Ukraine. I did look to see if I could find stuff—maybe I’ve missed something—but I couldn’t really find very much, other than one photo project, which was very interesting. It may be that it’s going to come out later, because, as I said, it took us time to do ours. But actually, the camera does have live capabilities, and we should be able to stream it live, if we have connections that are fast enough, and people have connections that are fast enough to receive it as well.
Bear in mind that we’re not just looking at one video, we’re now looking at video all the way around, and if you try to push the quality up to HD, you do need fast connections to make that work, both up and down, for people to look at it. And at the moment, people also need to have this kind of technology. So, this is not working. I need to put it next to me. To make that work, we need these kinds of goggles and things for people to be able to consume it. And that’s been a huge barrier. People can watch it on the phone, and they can scroll around, but it’s not the kind of thing you’re going to watch on the subway when you’re going to work, or in lots of other environments. You really have got to sit down inside to do it. People also have problems with motion sickness and all sorts of other things with these devices. So there are lots of barriers that they’re going to have to overcome.
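[Editor’s note: DJ’s point about needing fast connections can be put in rough numbers. A viewer only ever sees a slice of the full sphere, so the streamed frame must carry far more pixels than the visible viewport. A back-of-envelope sketch, assuming a 90-degree horizontal field of view (the exact FOV varies by headset):]

```python
# Back-of-envelope: why 360 streaming at "HD in the viewport"
# needs far more bandwidth than a flat HD stream.

HD_WIDTH = 1920   # pixels we want across the visible viewport
FOV_DEG = 90      # assumed horizontal field of view of a headset

# The equirectangular frame covers 360 degrees, but the viewport
# only covers FOV_DEG, so the full frame must be wider by the
# ratio of the two to keep HD-like detail in view.
frame_width = HD_WIDTH * 360 // FOV_DEG
print(frame_width)  # 7680 -- an 8K-wide frame just for HD-like detail

# Bandwidth scales roughly with pixel count, so the stream carries
# on the order of (360/FOV)^2 = 16x the pixels of flat HD.
pixel_factor = (360 // FOV_DEG) ** 2
print(pixel_factor)  # 16
```

[The arithmetic is illustrative only; real players use tiled or viewport-adaptive streaming to cut this cost, but the order of magnitude explains the connection-speed barrier DJ describes.]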
David Campbell: So it changes the— just to interject, it changes the nature of your position as an audience member, because it’s very much an individualized experience: you as an individual need the goggles and the technology, and you need a series of individuals with those to get an audience of any size. So it’s changing the nature of the audience in that sense, I think.
DJ Clark: It is. And as I said, we tried it, like the New York Times and lots of other people tried doing these 360 experiences, and we all kind of gave up. I think we tried it too early. Basically, we were trying to create something when there wasn’t really an audience for it. And one of the things I think we really underestimated, as I said, is how much people consume in places where they don’t necessarily even have headphones on— let alone where they’re going to stand in a subway turning around in circles with their phone. So people are consuming media in places where they don’t really want that. And then, as I said, there are issues with motion sickness and all sorts of other things as well. These are all things that they are working on, and I think they’re getting much better at, and we will likely see this change. I want to move now to talk about what we’re doing right now, because I’ve talked about where we came from, I’ve talked about PolyU, and now I want to move to the next phase, which is really what we’re doing right now, because the PolyU film came out in 2019, and a lot has happened since then. I’m showing you this screen at the moment just to show you different avatars that I have. We’ve started having meetings using this new technology for doing 3D meetings, virtual meetings, rather than the Zoom that we’re doing right now. I’m hoping that maybe this kind of event we’re doing right now will be very different in some years to come. On the right, you’ll see me— this is my Spatial avatar, which is a much more realistic representation of myself. And on the left you have my Facebook Horizon Workrooms avatar, and as you can see, it’s great that I can go into work in a pair of shorts and a T-shirt and just put a suit on my avatar for a formal meeting, rather than have to wear what I’m actually wearing.
We are using these Oculus headsets at the moment. And again, there are issues with that, because everyone’s got to have the same headset for us to work with some of the different environments— not all of them, but some of them. But we’re hopefully looking forward to these changing very soon. The goggles you see just below— these are HTC Vive, and they’re actually on the market already. The gold ones and the ones below are just mock-ups of what these might look like in the future. It’s also important to know the metaverse has existed for a long time. People remember Second Life, or played Second Life, and now there’s Fortnite and Roblox and Minecraft and all the others that are coming in. The gaming world has been in this space for a long time, though not always in virtual reality— quite often in different ways. We’re also aware that the major tech companies— look at Facebook, Google, and Microsoft; these are three big tech companies who’ve come out very strongly to say that they’re investing very heavily in this and they believe this is very much the future of where the web is going. I’m not going to predict whether it is or isn’t. There’s a survey out from Gartner expecting 25% of people to be spending one hour daily in the metaverse by 2026. I was just reading today, actually, someone pointing out that tech companies often do get the future right when they predict it, but they get the timescale wrong. In the past, I think, Apple was predicting something like the iPad 20 years before it actually came out. They got it right; they knew what it was going to look like. But the timing was off. So we don’t know how quickly this is going to happen.
And I’m certainly not qualified to make that prediction. But we do know that it’s coming, and we do know that we need to make preparations. So what are we doing? Well, we started with our meetings. This here— if you take the audio off; someone just filmed this with a mobile phone, so it’s not a great representation— but this is me giving a presentation to the ANN editors using our Horizon—.
So kind of moving around and talking to people. We’ve started to do our meetings in this format. Before we actually get on to content: something we do a lot of in the media industry as a whole is events and conferences, now more important than ever as a revenue stream for us. And certainly at ANN, we’re looking at setting up some of these. We have started to participate in virtual events that have been done by other people— not just virtual events, but metaverse events— and we’re now looking to set up our own. This is the next phase after the meetings. And we’re looking at content sharing as well. So we’re starting to create content; we’re just starting our first major collaboration on a project that will give you a virtual tour of Asia, where you’ll be able to go from one city to another to another. We’re filming that at the moment— I think about seven or eight of the partners have signed up for this— and we’re working on it. And there will be more content that we’re creating. This is very much what the Asia News Network does: share content. So we have a number of partners who have started to produce content that we think we can put into virtual spaces. And then we hope in time that we’re going to build our own space within metaverses, and be able to have things like galleries and places where people can come. So where are we? Just going to move to the last slide here; I’ll come back to those in a minute. So this is one example of that, where we see people buying space in a metaverse. As you can see here, the South China Morning Post, which is one of our rival publications, has bought a huge chunk of real estate in this particular metaverse, in The Sandbox. So we see other media organizations starting to— though there’s nothing you can do in this at the moment.
But these are places where we expect that people will go, and we need to start building experiences within them. So as I said, the first idea is to build simple galleries where people can come and visit. I’m working on a big project in Hong Kong at the moment, documenting Hong Kong’s landscapes through big 180-degree panoramas that we’re doing with drones. And that’s the sort of project that works very well in the metaverse, because you can make them huge, and people can come in and really look and pan around these massive panoramas. That will be built into a gallery space, a sort of permanent exhibition that people will be able to go to. You can also sell them as NFTs in those spaces, and we’ll possibly see this for photography as well. It’s a great space— a very simplistic first rendition of creating spaces where people can move around— but most of it is all around experiences. So I’ve talked about documenting events, but you also want to give people experiences, which is different from just sitting and observing. So I think we’ll see 360 photography becoming an art. It’s something that photographers will start to gravitate to more as time goes on. It’s not just a case of going in and taking a snap because the camera captures everything. As we’re learning now, you still need really good light, you still need to position the camera in the right place, at the right distance from the objects. It’s a different type of photography, but it’s also within that tradition of capturing a moment that people can then experience— only rather than experiencing it by just looking at it, they place themselves in the middle of it, look around it, and get a real sense of what it’s like.
And that could be, you know, your local street store, but it could also be the middle of a conflict somewhere, or the middle of a natural disaster. And with a still picture, as always, you have that opportunity to really examine things, to look at that moment in time, capturing action as things happen, that split second that you’re the—
you know, that one moment— but rather than just look at it, people can now sit in the middle of it, look around, see everything that’s happening in that space, and choose what they feel is the most interesting part of it. And video as well, of course— like we saw with PolyU, that opportunity to really experience it, not just watch it: to feel that you’re in the middle of it and have that spatial awareness, with all the audio and the sound that makes you feel that you’re really there. I often talk about one experience that happened to me quite recently, in one of the meetings that we’ve had. A colleague who was sitting next to me in this virtual meeting was having problems trying to use the— you can use these things as drawing tools— and she was trying to draw and didn’t know how to do it. And I just reached across naturally to show her, completely forgetting that this lady was in Beijing and I was in Hong Kong. But because of this spatial sound— because you’re hearing her on the left-hand side, you’re hearing her voice, you’re familiar with her, you’re looking at her— you just naturally move over, because you forget. It completely fools the senses. And I think that’s really what is very different about the metaverse. It’s fooling the senses to create this suspension of disbelief, really feeling that you’re there in these places. And that’s something I think you have to experience; it’s very difficult to explain to people who haven’t experienced it with the glasses and things. Again, sorry David, I’ve got no idea what the time is, but—
David Campbell: We’re fine. We’re fine. We’ve got time— Just to remind people in the audience that if you have questions drop them—
DJ Clark: Are you on mute, David? Are you on mute?
David Campbell: Am I on mute? No.
DJ Clark: Oh, I’m on mute. Sorry. Carry on.
David Campbell: Yeah, I can hear you. Now just a reminder to people in the audience that if you’ve got questions, drop them into the Q&A box. We’ve got one that we will come to. Do you want to wrap up, DJ? I mean, that’s been really quite literally mind-blowing. I knew you were up to interesting things. And I also think it’s really interesting how much creativity is taking place in the Asia News Network, which is something that people in North America and Europe don’t hear a lot about, unfortunately. In some ways, I’m not aware of similar things in Europe or North America— there may well be— but yeah, a lot of creativity, a lot of experimentation. It’s incredible.
DJ Clark: Yeah, I mean, I think it is an interesting time. In terms of wrapping up, I think it’s also really important to say that I haven’t seen anyone really nailing this yet. I think every news organization is just trying to get their heads around it and trying to figure out how they’re going to deal with it. Interestingly, as I said, the CEO of the South China Morning Post has recently moved position to head up their— I don’t know what his official title is, but it’s basically taking over that metaverse space. And I remember in the very early days, when you saw people move from the newspaper to the website: at first it was seen as a demotion, and then suddenly it became a promotion. And seeing the CEO, who comes from a tech background, move across to this thing— everyone was very surprised. But he, or they, clearly see this as the future, and they want their best people in this space. So as I said, lots of movement in that respect, and lots of experimentation, but no one has really come out ahead. And I’m guessing we’ve still got some years before this really takes off in a way that people can experience. And at the moment it is very costly, because it’s difficult to get your money back from these kinds of projects, given the amount of time and research involved. We’ve just— well, we haven’t hired yet, we’re actually hiring; we’re advertising for our first metaverse position, someone who will be working specifically on this.
David Campbell: Well, Mark has got a question. And he is asking, how do you see these technologies affecting the abilities, and empowerment of communities telling their first person stories, rather than having stories told about them or without them?
DJ Clark: I mean, there’s always this sort of leap. We’ve seen this happen. As you know, David, I’ve worked a lot with communities— particularly with — in Bangladesh, and within the Asia Center for Journalism— who are behind in terms of the technology, and you see them catch up and then suddenly start producing amazing work. And then a new technology comes out which needs faster speeds and better, more expensive equipment, and suddenly they’re behind again. We’ve seen this happen time and time again, with photography in particular. So I think this is a great opportunity for people to tell their own stories. It’s very easy to access, and it’s just a different experience; it gives people a much closer understanding of what it’s really like to be in a place, or to talk to someone there. But as I said, there’s always going to be a bit of a lag between the people who know how to produce it— who’ll be sent around the world to make these amazing things— and the point where that technology and that understanding of how to produce it becomes easy enough for everyone to do.
David Campbell: Yeah. Just a reminder: if you want to ask a question, drop it in the Q&A box; we’ve got some time. One thing I wanted to ask, DJ, was about those red letter projects— I put a link to one of them in the chat, the one from Nepal. Do you have a sense amongst the Asia News Network of how the audience is responding to those? Are they popular? Are people engaged with them? Does the general audience spend time on them?
DJ Clark: I mean, the good news about those kinds of projects is that they are evergreen, so they do last a long time, and we tend to pick up audiences over time. We’ve kept the AI one going in particular, even though it’s quite expensive to keep up the subscriptions with the Amazon servers and the AI computing and so on. So it’s kind of difficult to say. We see them more as— I mean, they always win awards. I think all of those projects have won awards and done quite well, and that gives us some international exposure; it’s good for the brand, etc. So the management see value in doing them, but we never see massive audiences on them. Quite often those audiences ebb and flow with time, and you may suddenly see a big spike because somebody shared it three years on and a whole load of people have started watching it again. So the good news is that it’s not just something that is for now; they have a long life. And as I said, we’re hoping to port them into our new metaverse experiences, so those that currently just exist on the web we’re hoping to rebuild in our metaverse— or in the space that we have in the metaverse— when we launch that.
David Campbell: A question from Jennifer. She says that so much of visual journalism to date is about documenting the present, but this technology can turn us into documenting eternity— meaning, I guess, that it lasts for a very long time; past and present become forever available. Privacy is the first ethical issue that comes to mind when that happens. What are the main challenges— the main ethical challenges, I guess— you foresee with the technology?
DJ Clark: I mean, surprisingly, the biggest ethical issue we had with the PolyU footage, and generally around that project, was about encouraging journalists to be that close in a conflict situation, because the big issue with 360 cameras is that they only work really well if you’re very close to the action. And that means that I and other members of our team were taking much greater risks than we normally would. You’ll see in the PolyU film that I was missed by inches, twice, by petrol bombs— which make fantastic footage, because they explode right next to the camera—
David Campbell: But—
DJ Clark: —in two instances, but if I had been just a few inches in the wrong position, that could have been very dangerous. And we have had some pushback— quite rightly, I think— from some journalists who are saying you shouldn’t really be encouraging this, because you’re suggesting that in order to get this kind of footage, you have to be very close. So that’s been our biggest ethical challenge. There have been lots of other ethical questions that have popped up, but I think most of them answer themselves. With 360 video, I find it much less problematic ethically, because you are just representing what is happening around you, and people make choices themselves. Obviously, you still choose which parts you’re going to put in and which parts you’re going to take out, so there is some selection process. But in terms of choosing what to look at, the audience gets to choose for themselves, and the videographer, or visual journalist, whatever you want to call it, is making fewer decisions about what is important and what is not.
David Campbell: A technical question from Eric. He’s asking whether there’s anywhere he can get info on your workflow, particularly with 360 video. He says that Adobe Premiere, for example, has removed 360 editing. Is that correct?
DJ Clark: Not that I know of, no. I was editing 360 video in Final Cut yesterday, which was fine. The biggest issue is the ambisonic audio, the 360 audio, which GoPro put into that camera but never gave you a way of getting off it. So we basically have to use a coding method to extract the audio out and then edit it, which makes it quite complicated. But I’m happy to share some links on how to do that if anyone wants to know. If they want to message me, they can reach me through my website at djclark.com— there’s a contact form there.
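[Editor’s note: DJ doesn’t detail the “coding method” here, but extraction pipelines of this kind are typically scripted around a tool like ffmpeg. The sketch below only builds such a command; the stream index and channel layout are assumptions, not the team’s actual workflow— inspect your own file with ffprobe before relying on them:]

```python
# Sketch: build an ffmpeg command that splits the first audio stream
# of a GoPro .360 file into one mono WAV per channel for editing.
# Assumes the ambisonic stream is audio stream 0 with four channels
# (first-order ambisonics); check with `ffprobe input.360` first.

def ambisonic_extract_cmd(src, prefix, n_channels=4, layout="4.0"):
    """Return an ffmpeg argv list writing <prefix>_ch0.wav ... per channel."""
    outs = "".join(f"[c{i}]" for i in range(n_channels))
    # channelsplit fans one multichannel stream out into labelled mono pads
    split = f"[0:a:0]channelsplit=channel_layout={layout}{outs}"
    cmd = ["ffmpeg", "-i", src, "-filter_complex", split]
    for i in range(n_channels):
        cmd += ["-map", f"[c{i}]", f"{prefix}_ch{i}.wav"]
    return cmd

print(" ".join(ambisonic_extract_cmd("clip.360", "clip")))
```

[Once split, the mono tracks can be edited individually and recombined; the layout name would need adjusting for cameras that deliver six decoded tracks rather than four ambisonic ones.]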
David Campbell: That’s good. I’ll just type that into the chat: djclark.com. I’ve got a capital D there, but it’s a small d— doesn’t really matter. Do you want to make any predictions about which of all these technologies— virtual, augmented, AI— are going to be most prominent in the next few years? Or do you think it’s always going to be some sort of combination of these things together?
DJ Clark: Well, in terms of visual journalism, I think AI has a lot more practical implications. Take that example Google gave of being able to have real-time conversations with people who speak a different language. Living in Hong Kong, and particularly traveling all around Asia, I can see that would be an immediate buy for me, just to be able to understand what people say. So these have practical implications, and I think AI is probably going to be the first technology to really take off; it’s also the one that’s easiest to do. But for visual journalists, I think VR is the one we really need to look at— as a new genre of visual journalism that, I would say, is going to develop fairly fast, both in terms of still images and in terms of video for reporting news events. For news events and documentaries, I think it’s a very powerful tool. It’s not going to replace regular film and regular stills by any means, but I think it is going to be a standard that we see as just normal, probably within the next 10 years.
David Campbell: Yeah. A technical question from John Stanmeyer. Would binaural— if I pronounced that correctly; I don’t even know what that is— would binaural recording help with 360 audio?
DJ Clark: Yes.
David Campbell: Explain binaural recording.
DJ Clark: It’s basically— so, think of headphones. You’ve only got two speakers.
David Campbell: Yes.
DJ Clark: But how do you create sound in front of you and behind you, and right and left? As far as I understand— and I might be wrong, John— binaural is a way of creating that. I did buy some Sennheiser binaural recording headphones at the very beginning of the protests, because I wanted to do recordings with this binaural sound. But when I extract the recordings, they always come out as just stereo tracks, and I’ve never been able to understand that, whereas the GoPro Max will create six tracks of audio, which you then place in the right order: one front, one right, one left, one back, one up, one down. So you basically have these six places where you have audio, which creates the effect. I don’t know how they do it in binaural, but somehow they manage to create the same thing with only two tracks.
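[Editor’s note: to make the six-track idea concrete, here is a deliberately simplified sketch of how directional tracks can be folded down to two ears. Real binaural rendering uses HRTFs (head-related transfer functions), not simple panning; the track names and the constant-power pan law below are illustrative assumptions, not the GoPro or Sennheiser implementation:]

```python
import math

# Nominal horizontal directions (azimuth in degrees) for four of the
# six tracks DJ describes; up/down have no left-right bias here.
DIRECTIONS = {"front": 0, "right": 90, "left": -90, "back": 180}

def downmix_sample(tracks, yaw_deg=0.0):
    """Fold one sample per named track into a (left, right) stereo pair.

    Horizontal tracks are panned by the angle between their direction
    and the listener's yaw; up/down are split equally between ears.
    """
    left = right = 0.0
    for name, sample in tracks.items():
        if name in ("up", "down"):
            left += 0.5 * sample
            right += 0.5 * sample
            continue
        rel = math.radians(DIRECTIONS[name] - yaw_deg)
        pan = math.sin(rel)  # -1 = hard left, +1 = hard right
        # Constant-power pan law keeps perceived loudness steady.
        left += sample * math.sqrt((1 - pan) / 2)
        right += sample * math.sqrt((1 + pan) / 2)
    return left, right
```

[With the listener facing forward, a sound on the “left” track lands entirely in the left ear; turn the head 90 degrees toward the “right” track and that track becomes centered— which is exactly the head-tracking behavior that makes spatial audio in a headset feel like being there.]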
David Campbell: John says he uses Sound Man binaural microphones from Germany. It’s full 360 though not in six tracks.
DJ Clark: That’s right, exactly what I said. Thank you.
David Campbell: Yes. Good. And I’m learning something along the way. Well, I think that brings us to the end of our time. That’s been an absolutely fantastic introduction, DJ, thank you very much. I’ve learned a phenomenal amount, and I imagine everyone in the audience has too. What’s coming looks like a combination of exciting and slightly daunting— but mostly exciting.
DJ Clark: Yeah, thank you. It certainly keeps me excited, anyway. And we’ve got lots of projects in the making that I’ve not talked about yet, but you’ll see them develop over time.
David Campbell: Super. Well, thanks for your time. I know it’s very late there in Hong Kong— definitely time for you to have another beer after all that. But it’s been great that you could be with us, and we hope to have you back sometime soon to follow up on this.
DJ Clark: Cool. Thanks for listening. Yeah.
David Campbell: See you later.
DJ Clark: Bye
David Campbell: Bye