Site Overlay

1.2

Introduction

Welcome to ‘The Future of Text’ Journal.

Why read this?

This Journal serves as a monthly record of the activities of the Future Text Lab, in concert with the annual Future of Text Symposium and the annual ‘The Future of Text’ book series. We have published two volumes of ‘The Future of Text’† and this year we are starting with a new model where articles will first appear in this Journal over the year and then be collated into the third volume of the book. We expect this model to continue.
This Journal is distributed as a PDF which will open in any standard PDF viewer. If you choose to open it in our free ‘Reader’ PDF viewer for macOS (download†), you will get useful extra interactions, including these features enabled via embedded Visual-Meta:

  • the ability to fold the journal text into an outline of headings.
  • pop-up previews for citations/endnotes, showing their content in situ.
  • Find: with text selected, Find locates all occurrences of that text and collapses the document view to show only the matches, each displayed in context.
  • if the selected text has a Glossary entry, that entry will appear at the top of the screen.
  • inclusion of Visual-Meta; see http://visual-meta.info to learn more.

    Frode Alexander Hegland & Mark Anderson, Editors, with thanks to the Future Text Lab community: Adam Wern, Alan Laidlaw, Brandel Zachernuk, Fabien Benetou, Brendan Langen, Christopher Gutteridge, David De Roure, Dave Millard, Ismail Serageldin, Keith Martin, Mark Anderson, Peter Wasilko, Rafael Nepô and Vint Cerf.

    https://futuretextpublishing.com

    Jad Esber: Guest Presentation

    Transcript

     
    Video: https://youtu.be/i_dZmp59wGk?t=513 

    Jad Esber: Today I’ll be talking a little bit about both algorithmic and human curation. I’ll be using a lot of metaphors; as a poet, that’s how I tend to explain things. The presentation won’t take very long, and I hope to have a longer discussion.
    On today’s internet, algorithms have taken on the role of taste-making, but also the authoritative role of gatekeeping, through the anonymous spotlighting of specific content. Take the example of music: streaming services have given us access to infinite amounts of music. Around 40,000 songs are uploaded to Spotify every single day. Given the amount of music circulating on the internet, and how it’s increasing all the time, the need for compression of cultural data, and the ability to find the essence of things, becomes more focal than ever. And because automated systems have taken on that role of taste-making, they have a profound effect on the social and cultural value of music. So, they end up influencing people’s impressions and opinions of what kind of music is considered valuable or desirable.
    If you think of it from an artist’s perspective: despite platforms subverting the power of labels, who were our previous gatekeepers and taste-makers, and claiming to level the playing field, they’re creating new power structures, with algorithms and editorial teams controlling what playlists we listen to, to the point where artists are so obsessed with playlist placement that it’s dictating what music they create. So if you listen to the next few new songs that you hear on a streaming service, you might observe that they’ll start with a chorus, they’ll be really loud, they’ll be dynamic, and that’s because they’re optimising for the input signals of algorithms and for playlist placement. And this is even more pronounced on platforms like TikTok, which essentially strip away all forms of human curation. And I would hypothesise that, if Amy Winehouse released Back to Black today, it wouldn’t perform very well, because of its pacing and undynamic melody. It wouldn’t have pleased the algorithms. It wouldn’t have sold the over 40 million copies that it did.
    And another issue with algorithms is that they churn out standardised recommendations that flatten individual tastes, encourage conformity, and strip listeners of social interaction. We’re all essentially listening to the same songs.
    There are actually millions of songs on Spotify that have been played only partially, or never at all. And there’s a service, which is kind of tongue-in-cheek, called ‘Forgotify’, that exists to give those neglected songs another way to reach you. So if you are looking for a song that’s never been played, or hardly been played, you can go to Forgotify to listen to it. So, the answer isn’t that we should eliminate algorithms or machine curation. We really do need machines and programmatic algorithms to scale, but we also need humans to make it real. It’s not one or the other. If we rely solely on algorithms to understand the contextual knowledge around, let’s say, music, that’ll be impossible. Because, without human effort, popularity bias, which means only recommending popular stuff, and the cold-start problem are unavoidable in music recommendation, even with the very advanced hybrid collaborative-filtering models that Spotify employs. So pairing algorithmic discovery with human curation will remain the only option, with human curation allowing for the recalibration of recommendations through contextual reasoning and sensitivity, qualities that only humans really have. Today this has caused the formation of new power structures that place the careers of emerging artists, let’s say on Spotify, in the hands of a very small set of curators who live at the major streaming platforms.
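    [Editors’ note: the popularity bias and cold-start problem mentioned here can be sketched in a few lines of code. This is an illustration only, not Spotify’s actual system; the users, songs, and play histories below are invented. A recommender that scores songs solely by what overlapping listeners have already played can never surface a song with no plays at all:]

```python
from collections import Counter

# Toy play history: user -> set of songs listened to (all data invented).
plays = {
    "ana":  {"song_a", "song_b"},
    "ben":  {"song_a", "song_b", "song_c"},
    "cara": {"song_a", "song_c"},
}

catalogue = {"song_a", "song_b", "song_c", "song_d"}  # song_d: never played

def recommend(user, plays, catalogue, k=2):
    """Rank unheard songs by how often users with overlapping taste play them."""
    heard = plays[user]
    scores = Counter()
    for other, their_songs in plays.items():
        if other == user or not (heard & their_songs):
            continue  # no taste overlap: new users get no signal (cold start)
        for song in their_songs - heard:
            scores[song] += 1
    return [song for song, _ in scores.most_common(k)]

print(recommend("ana", plays, catalogue))  # ['song_c'] — song_d can never appear
```

    [Because `song_d` has zero listeners, no overlap-based score ever reaches it: exactly the long tail that Forgotify, tongue-in-cheek, exists to serve.]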
    Spotify actually has an editorial team of humans that adds context around algorithms and curates playlists. So they’re very powerful. But as a society, we continuously look to others, both to validate specific tastes and to inspire us with new ones. If I were to ask you how you discovered a new article or a new song, it’s likely that you heard of it from someone you trust.
    People have continuously looked to tastemakers for recommendations. But part of the problem is that curation still remains invisible labour. There aren’t really incentive structures that allow curators to truly thrive. A lot of blockchain advocates, people who believe in Web3, think there is an opportunity for that to change with this new tech. But beyond this, there is also a really big need for a design system that allows for human-centred discovery. A lot of people have tried, but nothing has really emerged.
    I wanted to use a metaphor and explore what bookshelves represent, as a potential example of an alternative design system for discovery: human-curated discovery. So, let’s imagine the last time you visited a bookstore. The last time I visited one, I might have gone in to search for a specific book. Perhaps it was to seek inspiration for another read, and I didn’t know what book I wanted to buy. Or maybe, like me, you went into the bookstore for the vibes, because the aesthetic is really cool, and being in that space signals something to people. This bookstore here is one I used to frequent in London. I loved just going to hang out there because it was awesome, and I wanted to be seen there. Similarly, when I visit someone’s house, I’m always on the lookout for what’s on their bookshelf, to see what they’re reading. That’s especially the case for someone I really admire or want to get to know better. By looking at their bookshelf, I get a sense of what they’re interested in, who they are. But it also allows for a certain level of connection with the individual who’s curating the books. They provide a level of context and trust that the things on their bookshelves are things that I might be interested in. I’d love, for example, to know what’s on Frode’s bookshelf right now. But there’s also something really intimate about browsing someone’s bookshelf, which is essentially a public display of what they’re consuming or looking to consume. So, if there’s a book you’ve read, or want to read, it immediately triggers common ground, a sense of connection with that individual. Perhaps it’s a conversation: I was browsing Frode’s bookshelf and came across a book that I was interested in, so perhaps I start a conversation around it. So, along with discovery, the act of going through someone’s bookshelf allows for that context, for connection. And then the borrowing of the book creates a new level of context.
I might borrow the book and kind of have the opportunity to read through it, live through it, and then go back and have another conversation with the person that I borrowed it from. And so recommending a book to a friend is one thing, but sharing a copy of that book, in which maybe you’ve annotated the text that stands out to you, or highlighted key parts of paragraphs, that’s an entirely new dimension of connection. What stood out to you versus what stood out to them. And it’s really important to remember that people connect with people at the end of the day and not just with content. Beyond the books on display, the range of authors matters. And even the effort to source the books matters. Perhaps it’s an early edition of a book. Or you had to wait in line for hours to get an autographed copy from that author.
    That level of effort, or the proof of work to kind of source that book, also signals how intense my fanship is, or how important this book is to me.
    And all that context is really important. What’s also really interesting is that the bookshelf is a record of who I was, and of who I want to be. And I really love this quote from Inga Chen: “What books people buy are stronger signals of what topics are important to people, or perhaps what topics are aspirationally important, important enough to buy a book that will take hours to read or that will sit on their shelf and signal something about them.” Compare that to a platform like Pinterest: Pinterest exists to curate not just what you’re interested in right now, but what’s aspirationally interesting to you. It’s the wedding dresses that you want to buy or the furniture that you want to purchase. So there’s this level of who you want to become, as well, that’s spoken to through that curation of books that lives on your bookshelf.
    I wanted to come back and connect this with where we’re at with the internet today, and this new realm of ownership and what people are calling social objects. If we take this metaphor of a bookshelf and apply it to any other space that houses cultural artefacts, the term people have been using for these cultural artefacts is social objects. Beyond books, we can think of the shirts we wear, the posters we put on our walls, the souvenirs we pick up; they’re all, essentially, social objects. They showcase what we care about and the communities we belong to. And, at their core, these social objects act as a shorthand to tell people who we are. They are like beacons that send out a signal for like-minded people to find us. If I’m wearing a band shirt, then other fans of that artist, that band, will perhaps want to connect with me. On the internet, these social objects take the form of URLs, JPEGs, articles, songs, videos, and there are platforms like Pinterest, or Goodreads, or Spotify, and countless others, that centre around some level of human-curated discovery, and community, around these social objects. But what’s really missing from our digital experience today is the aspect of ownership that’s rooted in the physicality of the books on your bookshelves. We might turn to digital platforms as sources of discovery and inspiration, but until now we haven’t really been able to attach our identities to the content we consume in the way that we do to physically owned goods. Part of that is the public histories that exist around the objects we own, and the context that isn’t really conveyed in the limited UIs that a lot of our devices allow. So, a lot of what’s happening today around blockchains is focused on how we can track provenance, or try to verify that someone was the first to something, and how we can, in a way, track a meme through its evolution.
And there are elements of context that are provided through that sort of tech, although limited.
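    [Editors’ note: the provenance-tracking idea mentioned here can be illustrated with a minimal hash-linked chain of ownership records. This is a deliberately simplified sketch; real blockchain systems add digital signatures, consensus, and distribution, and all the names below are invented. It shows only the core idea: each record commits to the one before it, so the history can be verified but not silently rewritten.]

```python
import hashlib
import json

def make_record(item, owner, prev_hash=""):
    """Create a provenance record whose hash covers item, owner, and the previous hash."""
    body = {"item": item, "owner": owner, "prev": prev_hash}
    record = dict(body)
    record["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return record

def verify(chain):
    """Check that each record links to the previous one and no hash has been altered."""
    prev = ""
    for r in chain:
        expected = hashlib.sha256(
            json.dumps({"item": r["item"], "owner": r["owner"], "prev": r["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

# A social object changing hands: each transfer extends the chain.
a = make_record("annotated-book.pdf", "alice")
b = make_record("annotated-book.pdf", "bob", a["hash"])
print(verify([a, b]))  # True

# Rewriting history breaks verification.
tampered = dict(b, owner="mallory")
print(verify([a, tampered]))  # False
```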
    There is discussion around ownership as well: who owns what, but also portability. The fact that I am able to take the things I own with me from one space to another means that I’m no longer leaving fragments of my identity siloed in these different spaces; there’s a sense of personhood. And so these questions of physical ownership are starting to enter the digital realm. We’re at an interesting time right now, where I think a lot of design systems will start to pop up that emulate what it feels like to walk into a bookstore, or to browse someone’s bookshelf. And so, I wanted to leave us with that open question, and that provocation, and transition to more of a discussion. That was everything that I had to present.
    So, I will pause there and pass it back to Frode, and perhaps we can just have a discussion from now on. Thank you for listening. 


    Dialogue

    Video: https://youtu.be/i_dZmp59wGk?t=1329 

    Frode Hegland: Thank you very much. That was interesting and provocative, very good for this group. I can see lots of heads wobbling, which means there’s a lot of thinking. But since I have the mic, I will ask the first question, and that is:
    Coming from academia, one thing I’m wondering about, and I’m also wondering what the academics in the room might think: references as bookshelf, or references as showing who you are, basically trying to cram things in there to show, not necessarily support for your argument, but support for your identity. Do you have any comments on that?
    Jad Esber: So, I think that’s a really interesting thought. When I was thinking of bookshelves, they do serve almost like references, because of the thoughts and the insights that you share. If you’re sitting in the living room sharing some thoughts, perhaps having a political conversation, and you point at a book on your shelf that you’ve read, that’s like, “Hey, this thought that I’m sharing, the reference is right there.” It does provide a baseline level of trust: this insight or thought has been memorialised in a book that someone chose to publish, and it lives on my bookshelf. There is some level of credibility that’s built by attaching your own thoughts to that credible source. So, yeah, there’s definitely a tie between references and citations, and the physical setting of having a conversation and pointing to a book that lives on your bookshelf. I think that’s an interesting connection, beyond books just existing as social objects that speak to your identity. That’s another extension as well. I think that’s really interesting.
    Frode Hegland: Thanks for that. Bob. But afterward, Fabien, if you could elaborate on your comment in the chat, that would be really great. Bob, please. 
    Video: https://youtu.be/i_dZmp59wGk?t=1460 
    Bob Horn: Well, the first thing that comes to mind is: 
    Have you looked at three-dimensional spaces on the internet? For example, Second Life, and what do you think about that? 
    Jad Esber: Yeah. I mean, part of what people are proposing for the future of the internet, as I’m sure you’ve discussed in past sessions, is the metaverse, right? Which is essentially this idea of co-presence, some level of physicality bridging the gap between being co-present in a physical space and in a digital space. Second Life was a very early example of some version of this. I haven’t spent too many iterations thinking about virtual spaces and whether they are apt at emulating the feeling of walking into a bookstore, or leafing through a bookshelf. But if you think about the sensory experience of browsing someone’s bookshelf, there are, obviously, parallels to the visual sensory experience: you can browse someone’s digital library. Perhaps there’s some level of the tactile, you can pick up books, but it’s not really the same. It’s missing a lot of the other sensory experiences, which provide a level of context. But virtual spaces certainly allow for a serendipitous discovery that a feed doesn’t. The feed dynamic isn’t necessarily the most serendipitous. It is, to a degree, but it’s also very crafted. And there isn’t really the level of play, in going around and looking at things, that you get with a bookshelf, or in a bookstore. Second Life does allow for that: the moving around, picking things up, and exploring that you do in the physical world. So, I think it’s definitely bridging the gap to an extent, but missing a lot of the sensory experiences that we have in the physical world. I think we haven’t quite thought about how to bridge that gap. I know there are projects that are trying to make our experience of digital worlds more sensory, but I’m not quite sure how close we’ll get. So, that’s my initial thought, but feel free to jump in, by the way; I’d welcome other opinions and perspectives as well.
    Bob Horn: We’ve been discussing this a little bit, partially at my initiative, and mostly at Frode’s urging us on. And I haven’t been in Second Life for, I don’t know, six, or seven, or eight years. But I have a friend who has, who’s there all the time, and says that there are people who have their personal libraries there. That there are university libraries. There are whole geographies of libraries, I’m told. So, it may be an interesting angle, at some point. And if you do look into it, I’d be interested, of course, in what you came up with.
    Jad Esber: Totally. Thank you for that pointer, yeah. There’s a multitude of projects right now that focus on extending Second Life, bringing in concepts around ownership, and physicality, and interoperability, so that the things you own in Second Life you can take with you, from that world, into others. Which does, in a way, bridge the gap between the physical world and the digital, because what you own doesn’t live within that siloed space, but is actually associated with you, and can be taken from one space to another. It’s very early in building that out, but that’s a big promise of Web3. There are a lot of hands, so I’ll pause there.
    Frode Hegland: Yeah, Fabien, if you could elaborate on what you were talking about, virtual bookshelf. 
    Video: https://youtu.be/i_dZmp59wGk?t=1737 
    Fabien Benetou: Yep. Well, actually, it will be easier if I share my screen. I don’t know if you can see. I have a wiki that I’ve been maintaining for 10-plus years. And on top, you can see the visualisation of the edits since I started, for this specific page. And these pages, as I was saying in the chat, are sadly out of date; it’s been 10 years, actually, just for this page. But I was listing the different books I’ve read, with the date, and what page I was on. And if I take a random book, I have my notes, the (indistinct), and then the list of books that are related, let’s say, to that book. I don’t have it in VR or in 3D yet, but from that point it definitely wouldn’t be too hard. And I was thinking, I personally have a kind of (indistinct) where they’re hidden, but I have some books there, and I have a white wall there, and I love both, whether I’m in someone else’s room or my own room. Usually, if I’m in my own room, I’m excited by the books I’ve read or the ones I haven’t read yet. So it brings a lot of excitement. But also, if I have a goal in mind, a task at hand, let’s say, a presentation on Thursday, a thing that I haven’t finished yet, then it pulls me to something else. Whereas if I have the white wall, it’s like a blank slate. And again, if I need to, I can pull some references on books and whatnot. So, I always have that tension. And what usually happens is, when I go into a physical bookstore, or library, or a friend’s place, serendipity is indeed that it’s not the book I came for, it’s the one next to it. Because I’m not able to make the link, and usually, if the curation has been done right, and arguably the algorithm, if it’s not actually computational, let’s say, if you use that annotation, or basically any other annotation system, in order to sort the books or their references, then there should be some connections that were not obvious in the first place.
So, to me, that’s the most, I’d say, exciting aspect of that. 
    Jad Esber: This is amazing, by the way, Fabien. It’s incredible that you’ve built this over a decade; that’s so cool. To extend on that thought, and to kind of “yes, and” it: I think what you’ve built is very utilitarian, but the existence of the bookshelf as an expression of identity is also interesting. So, beyond just organising the books, keeping them, storing them in a utilitarian way, their serving as signals of your identity is, I think, really interesting. A lot of platforms today cater to the utility. If you think about Pocket, or even Goodreads to an extent, there is potentially an identity angle to Goodreads, versus Tumblr, back in the day, or Myspace or (indistinct), which were much more identity-focused. So there is this distinction between the utilitarian, organising, keeping things, annotating, etc., for yourself, and this identity element of: by curating, I am expressing my identity. And I think that’s also really interesting.
    Frode Hegland: Brandel, you’re next. But just wanted to highlight today to the new people in the room including you, Jad. This community, at the moment, is really leaning towards AR and VR. But in a couple of years’ time, what can happen? And that also includes projections and all kinds of different things, so we really are thinking connected with the physical, but also virtual on top. Brandel, please. 
    Video: https://youtu.be/i_dZmp59wGk?t=1984 
    Brandel Zachernuk: So, I was really hooked when you said that you like to be seen in that London bookstore. It made me think about the fact that on Spotify, on YouTube, on Goodreads for the most part, we’re not seen at all, unless we’re on the specific, explicit page that is there for the purpose of representing us. So, YouTube does have a profile page, but nothing about the rest of our onward activity is actually represented within that context. If you compare that to being in the bookstore: you have your clothes on, you have your demeanour, and you can see the other participants. There’s a mutuality to being present in it, where you get to see that, rather than merely a like count maybe going up in real time. And so, I’m wondering what kind of projective representation you feel we need within the broader Web. Because even making a new curation page still silos that representation within an explicit place, and doesn’t give you the persistent reference that is your own physicality, your body, wandering around the various places that you want to be at and be seen at. Now, do you see that as something there’s a solve for? Or how do you think about that?
    Jad Esber: Yeah, I think Bob alluded to this to a degree with Second Life. And with that example, I think the promise of co-presence in the digital world is really interesting, and could potentially solve part of this. I also go to cafes not just because I like the coffee, but because I like the aesthetic, and the opportunity to rub shoulders with other clientele who might be interesting, because this cafe is frequented by this sort of folk. That doesn’t exist online as much. I mean, perhaps, if you go to a forum, and you frequent a specific subreddit, there is an element of, “Oh, I’ll meet these types of folks in this chat group, and perhaps I’ll be able to converse with them and be seen here.” But how long you spend there, how you show up there, beyond just what you write: that all matters. And how you’re browsing. There are a lot of elements that are really lost in current user interfaces. So, I think, yeah, Second Life-like spaces might solve for that, and allow us to present other parts of ourselves in these spaces, and measure time spent, and how we’re presenting, and what we’re bringing. But, yeah. I’m also fascinated by this idea of just existing in a space as a signal for who you are. And yeah, I also love that metaphor. And again, this is all stuff that I’m actively thinking about and would love additional insights on; if anyone has thoughts on this, please do share as well. This is, by no means, just a monologue from my direction.
    Frode Hegland: Oh, I think you’re going to get a lot of perspectives. And I will move on… We’re very lucky to have Dene here, who’s been working with electronic literature. I will let her speak for herself, but what they’re doing is just phenomenally important work.
    Video: https://youtu.be/i_dZmp59wGk?t=2231 
    Dene Grigar: Thank you. That’s a nice introduction. I am the managing director, one of the founders, and the curator of The NEXT. And The NEXT is a virtual museum, slash library, slash preservation space that contains, right now, 34 collections of about 3,000 works of born-digital art and expressive writing, what we generally call ‘electronic literature’. But I’ll unpack that word a little bit for you. And I think this corresponds a little bit to what you’re talking about, in that when I collect, when I curate work, I’m not picking particular works to go into The NEXT, I’m taking full collections. So, artists turn over their entire collections to us, and then that becomes part of The NEXT’s collections. So it’s been interesting watching what artists collect. It’s not just their own works, it’s the works of other artists. And the interesting historical, cultural aspect of it is to see, in particular time frames, what artists collected and whom they were collecting: before the advent of the browser, for example, Michael Joyce, Stuart Moulthrop, Voyager, stuff like that. Then the Web, the browser, and the net art period, and the rise of Flash; seeing that I have five copies of Firefly by Nina Larson, because people were collecting that work. Jason Nelson’s work; a lot of his games are very popular. So it’s been interesting to watch this kind of triangulation of what becomes popular, and then the search engine that we built pulls that up. It lets you see that, “Oh, there are five copies of this. There are three copies of that. Oh, there are seven versions of Michael Joyce’s afternoon, a story.” To see what’s been so important that there have even been updates, so that it stays alive over the course of 30 years.
One other thing I’ll mention, back to your earlier comment: I have a whole print book library in my house. Despite the fact that I was in a flood in 1975 and lost everything I owned, I rebuilt my library, and I have something like 5,000 volumes; I collect books. But it’s always interesting to me that I have guests at my house and they never look at my bookshelves. And the first thing I do when I go to someone’s house and see books is ask, “Oh, what are you reading? What do you collect?” So, looking at The NEXT and all those 3,000 works of art, and then my bookshelf, I realise that people really aren’t looking and thinking about what this means: the identity for the field, or my own personal taste, which is very diverse. So, I think there’s a lot to be said about people’s interest in this. And I think it’s that kind of intellectual laziness that drives people to just allow themselves to be swept away by algorithms, and not intervene on their own and take ownership over what they’re consuming. And I’ll leave it at that. Thank you.
    Jad Esber: Yeah, I love that. Thank you for sharing. And that’s a fascinating project as well; I’d love to dig in further. I think you bring up a really good point around shared interests being key to connecting the right type of folks, who are interested in exploring each other’s libraries. Not everyone who comes into my house is interested in the books I’m reading, because, perhaps, they’re from a different field, or they’re just not as curious about the same fields. But there is potentially a huge number of people who are. I mean, within this group, we’re all interested in similar things. And we found each other through the internet. So, there is this element of: what if the people walking into your library, Dene, are folks who share the same interests as you? Who would actively look and browse through your library, and are deeply interested in the topics that you’re interested in. So there is something to be said about how we can make sure that people who are interested in the same things are walking into each other’s spaces. Interest-based graphs exist on the Web: thinking about who is interested in what, and how we can go into each other’s spaces and browse, collect, curate, or create, is part of what many algorithms try to do, for better or for worse. But sometimes they leave us in echo chambers, right? We’re in one neighbourhood and can’t leave, and that’s part of the problem. But yeah, there is something to be said about that. And just to go back to the earlier comment that Dene made around the inspirations behind artists’ work: I would love to be able to explore what inspired my favourite artist’s music, what went into it, and go back and listen to that. And part of, again, Web3’s promise is this idea of provenance: seeing how things have evolved and how they’ve come to be. And crediting everyone in that lineage.
So, if I borrowed from Dene’s work, and I built on it, and that was part of what inspired me, then she should get some credit. And that idea of provenance, and lineage, and giving credit back, and building incentive systems that allow people to build works that will inspire others to continue to build on top of my work is a really interesting proposal for the future of the internet. And so, I just wanted to share that as well. 
    Frode Hegland: That’s great. Anything back from you, Dene, on that? Before we move to Mark? 
    Dene Grigar: Well, I think provenance is really important. And what I do in my own lab is to establish provenance. If you go to The NEXT and you look at the works, it’ll say where we got the work from, who gave it to us, the date they gave it to us, and if there’s some other story that goes with it. For example, I just received a donation from a woman whose daughter went to Brown University and studied under Coover, Robert Coover. And she gave me a copy of some of the early hypertext works, and one was Michael Joyce’s afternoon, a story, and it was signed. The little floppy disk was signed, on the label, by Michael, and she said, “I didn’t notice there was a signature. I don’t know why there’d be a signature on it.” And, of course, the answer, if you know anything about the history, is that Joyce and Coover were friends. There’s this whole line of this relationship, and Coover was the first to review Michael Joyce, and made him famous in The New York Times, in 1992. So, I told her that story, and she’s like, “Oh, my god. I didn’t know that.” So, it’s about having that story for future generations, to understand the relationships, and how ideas and taste evolve over time, and who were the movers and shakers behind some of that interest. Thank you.
    Bob Horn: Could I ask, just briefly, what the name of your site is or something? Because it went by so fast that I couldn’t even write it down. 
    Dene Grigar: https://the-next.eliterature.org/. Yeah, and I encourage you to look at it. 
    Frode Hegland: Dene, this is really grist for the mill of a lot of what we’re talking about here. Because, with Jad’s notions of identity sharing via the media we consume, and a lot of the visualisations we’re looking at in VR, one of the things we’ve talked about over the last few weeks is guided tours of work, where you could see the hands of the author or somebody pointing out things, whether it’s a mural, or a book, or whatever. And then, to be able to find a way to have the meta-information you just talked about enter the room. Maybe it could simply be recorded as you saying it, and tagged to be attached to these works. Many wonderful layers, I could go on forever. And I expect Mark will follow up. 
    Video: https://youtu.be/i_dZmp59wGk?t=2771 
    Mark Anderson: Hi. These are really reflections, more than anything else. Because one of the things that really brought me up short was this idea of books being a performative thing, which I still can’t get my head around. It’s not something I’ve encountered, and I don’t see it reflected in the world in which I live. So maybe there’s a generational drift in things. For instance, behind me you might guess, I suppose, that I’m a programmer. Actually, what that shows is me trying to understand how things work, and I need those books that close to my computer. My library is scattered across the house, mainly to distribute weight through a rather old, crumbly Victorian house. So, I have to be careful where we put the bookcases. I’m just really reflecting on how totally alien I find the notion of books as performance. I struggle to think of an instance; I never placed a book with the intention that it’ll be seen in that position by somebody else. And this is not a pushback, it’s just my reflection on what I’m hearing. I find it very interesting because it had never occurred to me. I never, ever thought of it in those terms. The other side of that is: are the books merely performative? Or is it the content that matters? One of the interesting things I’ve been trying to do in this group is to find ways just to share the list of the books that are on my shelf. Not because they are any reflection of myself, but literally because I actually have some books that are quite hard to find, and people might want to know that it was possible to find a copy. And whether they need to come and physically see it, or we could scan something, the point is, “No, I have these. This is a place you can find this book.” And it’s interesting that that’s actually really hard to do. Most systems don’t help because, I mean, the tragedy of recommender systems is they make us so inward-looking. 
So, instead of actually rewarding our curiosity, or making us look across our divides, they basically say, “Right. You lot are a bunch. You go stand over there.” Job done, and the recommender system moves on to categorising the next thing. So, if I try to read outside my normal purview, I’m constantly reflecting on the fact that the recommender system is one step behind, saying, “Oh, right. You’re now interested in…” No, I’m not. I’m trying to learn a bit about it. But certainly, this is not my area of interest in the sense that I now want to be amidst lots of people who like this. I’m interested in people who are interested by it, but I think those are two very different things. So, I don’t know the answers, but I just raise those, I suppose, as provocations. Because that’s something that, at the moment, our systems are really bad at: allowing us to share content other than as a sort of humblebrag. Or in your beautifully curated life on Pinterest, or whatever. Anyway, I’ll stop there. 
    Jad Esber: Yeah, thank you for sharing that. I think it does exist on a spectrum, the identity-expressive versus the utilitarian need that it solves. But the example of clothing might help frame it a little bit more. If we’re wearing a t-shirt, perhaps there’s a utilitarian need, but there is also a performative, or identity-expressive, need that it solves: the way we dress speaks to who we are as well. So the notion of a social object being identity expressive is what I was trying to convey. Think about magazines on a coffee table. Or think about the art books that live scattered around your living room, perhaps. That is trying to signal something about yourself. The magazines we read as well: if I’m reading Vogue, I’m trying to say something about who I am, and what I’m interested in reading. The Times, or The Guardian, or another newspaper is also very identity expressive. And taking it out on the train and making sure people see what I’m reading is also identity expressive. So, everything around what we consume, and what we wear, and what we identify with is a signal of who we are. That’s what I was trying to convey there. But I think you make a very good point. The books next to your computer are there because they’re within reach. You’re writing a paper about something and it’s right there. And so, there is a utilitarian need for the way you organise your bookshelf. The way you organise your bookshelf can be identity expressive or utilitarian. I’ll give you another example. On my bookshelf, I have a few books that are turned face forward, and a few that I don’t really want people to see, because I’m not really that proud of them. And I have a book that’s signed by the author, and I’ll make sure it’s really easy for people to open it and see the signature. And so, there is an identity-expressive element to the way I organise my bookshelves as well that’s not just utilitarian. 
So, that’s another point to illustrate that angle. 
    Mark Anderson: To pull us back to our sub-focus on AR and VR, it just occurred to me, with the reminder from what Dene was talking about, that people don’t look at the bookshelves. I’m thinking, yes, and I do miss, though it happens less frequently now, the evenings that end up with a dinner table just loaded with piles of books that have been retrieved from all over the house and are actually part of the conversation that’s going on. One thing it would be nice for some of our new tools to help us recreate, especially if we’re not meeting in the same physical space, is that element of recall of these artefacts, or at least some of the pertinent parts of the content within them. It would be really useful to have, because the fact that you bothered to walk up two flights of stairs to go and get some book off the top shelf, because that’s, in a sense, part of the conversation going on, I think is quite interesting, and something we’ve sort of lost. Anyway, I’ll let it carry on. 
    Frode Hegland: It’s interesting to hear what you say there, Mark, because in the calls we have, you’re the one who most often will say, “Look, the book arrived. Look, I have this copy now.” And then we all get really annoyed at you because we have to buy the same damn book. So, I think we’re talking about different ways of sharing, and to different audiences, not necessarily to dinner guests. But with your community, for this thing, you’re very happy to share. Which is interesting. I also have two points, to use my hand in the air here. One of them is, clothing came up as well. And some kind of study I read showed that we don’t buy clothing we like, we buy the kind of clothing we expect people like us to buy. So, even somebody who is really, “I don’t care about fashion”, is making a very strong fashion statement. They’re saying they don’t care. Which is anti-snobbery, maybe. I’m wondering how that enters into this. But also, when we talk about curation, it’s so fascinating how, in this discussion, music and books are almost interchangeable from this particular aspect. And what I found is, I don’t subscribe to Spotify, I never have, because I didn’t like the way the songs were mixed. But what I do really like, and I find amazing, is YouTube mixes. I pay for YouTube Premium so I don’t have the ads. That means I’ll have an hour, an hour and a half, maybe two-hour mixes by DJs who really represent my taste. Which is a fantastic new thing. We didn’t have that opportunity before. So that is a few people. And there, the YouTube algorithm tends to point me in the direction of something similar. But also, this is music for when I work. It’s not for finding new interesting jazz. When I play this music, when I’m out driving with my family, I hear how incredibly inane and boring it is. It is designed for backgrounding. So the question then becomes, maybe: do we want to have different shelves? Different bookshelves for different aspects of our lives? 
And then we’re moving back into the virtuality of it all. That was my hand up. Mark, is your hand up for a new point? Okay, Fabien? 
    Video: https://youtu.be/i_dZmp59wGk?t=3321 
    Fabien Benetou: Yeah, a couple of points. The first, the dearest to me, let’s say, is the provenance aspect. I’m really pissed off, or annoyed, when people don’t cite sources. I might have a normal conversation about a recipe or anything completely casual, it doesn’t have to be academic, and if that person didn’t invent it themselves, I’m annoyed if there is not some way for me to look back to where it came from. And I think, honestly, a lot of the energy we waste as a species comes from that. If you’re not aware of the source, of course, you can’t cite it. But if you learn it from somewhere, not doing that work, I think, is really detrimental. Because we don’t have to have the same thought twice if we don’t want to. And if we just have it again, it’s such a waste of resources. And especially since, I’m not a physician, and I don’t specialise in memory, but from what I understood, source memory is the type of memory where you recall, not the information, but where you got it from. And apparently, it’s one of the most demanding. So for example, you learn about, let’s say, a book, and you know somebody told you about that book. Recalling that is going to be much harder, but eventually, even if you don’t remember the book itself, if you remember the person who told you about it, you can find it again. So, basically, if, as a species, we have such a hard time providing sources and understanding where something comes from, I think it’s really terrible. It does piss me off, to be honest. And I don’t know if metadata, in general, is an answer, if having some properly formatted representation of it helps. I’m not going to remember the ISBN of a book off the top of my head in a conversation. But I’m wondering, let’s say, can blockchain solve that? Can Web3 solve it? Especially since you mentioned, let’s say, a chain of value. If you have the source or the reference of somewhere else whose work you’re using, it is fair to attribute it back. 
They were part of how you came to produce something new. So, I’m quite curious about where this is going to be. 
    Jad Esber: Yes, thank you for that question. There are a few points. First, I’m going to comment really quickly on this idea of provenance, and then I want to jump back to answer some of Frode’s comments as well. One thing that you highlighted, Fabien, is how hard it is for us to remember where we learned something or got something. And part of the problem is that so much of citing and sourcing is proactive and requires human effort. What if it were designed so that it was just built into the process? One of the projects I worked on at YouTube was a way for creators to take existing videos and build on them. So, remixing, essentially. In the process of creating content, I’d have to take a snippet and build on it. That is built into the creation process. The provenance, the citing, are very natural to how I’m creating content. TikTok is really good at this, too. And so I wonder if there are, again, design systems that allow us to build in provenance and make it really user-friendly and intuitive, to remove the friction around having to remember the source and cite it. We’re lazy creatures. We want that to be part of our flow. TikTok’s duets and stitching features are brilliant. They build provenance into the flow. So, that’s just one thought. In terms of how blockchains help: what is a blockchain, other than a public record of who owns what, and how things are being transacted? If we go back to TikTok stitching, or YouTube quoting a specific part of a video and building on it, if that chain of events was tracked and publicly accessible, and there was a way for me to pass value down that chain to everyone that contributed to this new creative work, that would be really cool. And that’s part of the promise. This idea of keeping track of how everything is moving, and being able to then distribute value in an automated way. So, that’s sort of addressing that point. 
And then, really quickly, on Frode’s earlier comments, and perhaps tying in with some of what we talked about with Mark around identity expression. I think this all comes back to the human need to be heard, and understood, and seen. There are phases in our life where we’re figuring out who we are, and we don’t really have our identities figured out yet. If you think about a lot of teenagers, they will have posters on their walls to express what they’re consuming or who they’re interested in. They are figuring out who they are. And part of figuring out who they are is talking about what they’re consuming; through what they’re consuming, they’re figuring out their identities. I grew up writing poetry on the internet because I was trying to express my experiences, and figure out who I was. So, what I’m trying to say is that there will be periods of our life where the need to be seen, heard, and understood, and to figure out and form our identities, is a bigger need. And so, the identity-expressive element of para-socially expressing or consuming plays a bigger part. And then, perhaps, when we’re more settled in our identity, and we’re not really looking to perform it, that becomes more of a background thing. Although it doesn’t completely disappear, because we are always looking to be heard, seen, and understood. That’s very human. So, I’ll pause there. I can keep going, but I’ll pause because I see there are a few other hands.  
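[Editors: the value-splitting idea described above, passing value back down a chain of remixes to everyone who contributed, can be sketched as a tiny data structure. This is a hypothetical illustration: the names and the 50/50 split rule are our assumptions, not any platform's actual mechanism.]

```python
# Hypothetical sketch of "passing value down the chain" of remixes.
# The creator names and the 50/50 split rule are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Work:
    creator: str
    parent: Optional["Work"] = None  # the work this one remixed, if any

def distribute(work: Work, amount: float, creator_share: float = 0.5) -> dict:
    """Split `amount` along the provenance chain: each creator keeps
    `creator_share` of what reaches them and passes the rest upstream."""
    payouts: dict[str, float] = {}
    remaining = amount
    node: Optional[Work] = work
    while node is not None:
        if node.parent is None:
            # The original creator keeps whatever value reaches them.
            payouts[node.creator] = payouts.get(node.creator, 0.0) + remaining
            break
        kept = remaining * creator_share
        payouts[node.creator] = payouts.get(node.creator, 0.0) + kept
        remaining -= kept
        node = node.parent
    return payouts

# A stitch of a stitch: original -> remix -> remix-of-remix
original = Work("dene")
remix = Work("jad", parent=original)
stitch = Work("katie", parent=remix)
print(distribute(stitch, 100.0))  # katie 50.0, jad 25.0, dene 25.0
```

Because the chain is walked rather than stored as a flat credit list, every ancestor is paid automatically, which is the "automated distribution of value" a public ownership record would make possible.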
    Frode Hegland: Yeah, I’ll give the torch to Dave Millard. But just on that identity, I have a four-and-a-half-year-old boy, Edgar, who is wonderful. And he currently likes sword fighting and the colour pink. He is very feminine, very masculine, very mixed up, as he should be. So, it’s interesting, from a parental, rather than just an old man’s, perspective, to think about the shaping of identity, and putting up posters and so on. It’s so easy to think about life from the point we are at in life, and you’re pointing to a teenage part, which none of us are in. So, I really appreciate that being brought into the conversation. Mr. Millard? 
    Video: https://youtu.be/i_dZmp59wGk?t=3770 
    David Millard: Yeah, thanks, Frode. Hi, everyone. Sorry, I joined a few minutes late, so I missed the introductions at the beginning. But, yeah. It’s a really interesting talk. One of the things we haven’t talked about is the opposite of performative expression, which is privacy. One of the things, a bit like Mark, I’ve learned about myself listening to everyone talking about this, is how deeply introverted I am, and how I really don’t want to let anybody know about me, thank you very much, unless I really want them to. This might be because I teach social network and media analytics to our computer scientists. One of the things I teach them about is inference, for example, profiling. I’m reminded of the very early Facebook studies done in the 2000s, about the predictive power of keywords. You’d express your interests through a series of keywords, and those researchers were able to achieve 90% accuracy on things like sexuality; and, this being an American study, Republican or Democratic preferences, African-American or Caucasian, these kinds of things. So I do wonder whether or not there’s a whole element to this, which is subversive, or exists in that commercial realm, that we ought to think about. I’m also struck by that last comment you mentioned, about people finding their identities. Because I’ve also been involved in some research looking at how kids use social media. And one of the interesting things about the way that children use social media, including some children that shouldn’t be using social media, because they’re pre-13 or whatever the cut-off age is, is that they don’t use it in a very sophisticated way. And we were trying to find out why that was, because we all have this impression of children as being naturally able. There’s the myth of the digital native and all that kind of stuff. And it’s precisely because of this identity construction. 
That was one of the things that came out in our research. So, kids won’t expose themselves to the network, because they’re worried about their self-presentation. They’re much more self-conscious than adults are. So they invest in dyadic relationships. Close friendships, direct messaging, rather than broadcasting identity. So I think there’s an opposite side to this. And it may well be that, for some people, this performative aspect is particularly important. But for other people, this performative aspect is actually quite frightening, or off-putting, or just not very natural. And I just thought I wanted to throw that into the mix. I thought it was an interesting counter observation. 
    Jad Esber: Absolutely. Thank you for sharing that. To reflect on my experience growing up writing online: I wrote poetry, not because I wanted other people to read it; it was actually very much for myself. And I did it anonymously. I wasn’t looking for any kind of building of credibility or anything like that. It was, for me, a form of healing. It was a form of just figuring out who I was. But if someone did read my poetry, and it did resonate with them, and they did connect with me, then I welcomed that. So, it wasn’t necessarily a performative thing. But it was a way for me to do something for myself that, if it connected with someone else, that was welcomed. To go back to the physical metaphor of a bookshelf: part of my bookshelf will have books that I’ll present, and have upfront, and want everyone to see. But I also have a box with trinkets that are out of sight and are just for me. And perhaps there are people who will come into my space, and I’ll show them what’s in that box, selectively. I’ll pull them out and walk them through the trinkets. And then, I’ll have some that are private, and are not for anyone else. So, I totally agree. If we think about digital spaces, if we were to emulate a bookshelf online, there would be elements, perhaps, that I would want to present to the world outwardly. There are elements that are for myself. There are elements that I want to present in a selective manner. And I think back to Frode’s point around bookshelves for various parts of my identity. I think that’s really important. There might be some that I will want to publicly present, and others that I won’t. If you think about how young people use social platforms, think about Instagram. Actually, Tumblr is a great example: the average user had four to five accounts. And that’s because they had accounts that they used for performative reasons, and they had accounts that they used for themselves. 
And had accounts for specific parts of their identity. And that’s because we’re solving different needs through this idea of para-socially curating and putting out there what we’re interested in. So, just riffing on your point. Not necessarily addressing it, but sort of adding colour to it. 
    David Millard: No, that’s great. Thank you. You’re right about the multiple accounts thing. I had a student, a few years ago, who was looking at privacy protection strategies. Basically, people don’t necessarily use the privacy preferences on their social media platforms, the “who can see my stuff” settings. They actually engage differently with those platforms. So they do, as you said, have different platforms, or different accounts, for different audiences. They use loads of fascinating stuff, things like social steganography, which is where they put in-jokes or hidden messages for certain crowds into their feeds, and those crowds will never miss them. There are all of these really subtle means that people use. I’m sure that all comes into play for this kind of stuff as well. 
    Jad Esber: Totally. I’ll add to that really quickly. I did a study of Twitter bios, and it’s really interesting to look at how, as you said, young folks will put very cryptic acronyms that indicate or signal their fanships. They’re looking for other folks who are interested in the same K-pop band, for example. And that acronym in the bio will be a signal to that audience. Like, come follow me, connect with me around this topic, just because the acronym is in there. A lot of queer folks will also have very subtle things in their bios, on their profiles, to indicate that, which only other queer folks will be aware of. So, again, it’s not something you necessarily want to be super public and performative about, but you want the right folk to see it and connect with you. So, yeah. Super interesting how folks have designed their own ways of using these things to solve for very specific needs. 
    Frode Hegland: Just before I let you go, Dave. Did you say steganography or did you say stenography? 
    David Millard: I think it’s steganography. It’s normally referred to as hiding data inside other data, but here in a social context. It’s exactly what Jad and I were just saying about using different hashtags, or references, or quotes that only certain groups would recognise, that kind of stuff, even if they’re quotes from Hamilton.  
    Frode Hegland: Brendan, I see you’re ready to pounce here. But just really briefly, one of the things I did for my PhD thesis was to study the history of citations and references. And they’re not that old. And they’re based around this, let’s call it, “anal” notion we have today that things should be in the correct box, in the correct order, and if they aren’t, they don’t belong in the correct academic discipline. Earlier this morning, Dave, Mark, and I were discussing how different disciplines have different ways of even deciding what kind of publication to have. It’s crazy stuff. But before we got into that: we have a profession, therefore, we need a code of how to do it. The way people cited each other earlier, of course, was exactly like this. The more obscure the better, because then you would really know that your readers understood the same space. So it’s interesting to see how that is sliding along, on a similar parallel line. Brendan, please. Unless Jad has something specific on that point. 
    Jad Esber: I was just sourcing a Twitter bio to show you guys. So, maybe, if I find one, I’ll walk through it and show you how various acronyms are indicating various things. And I was just trying to pull it from a paper that I wrote. But, yeah. Sorry, go ahead, Brendan.  
    Frode Hegland: Okay, yeah. When you’re ready, please put that in. Brendan? 
    Video: https://youtu.be/i_dZmp59wGk?t=4340 
    Brendan Langen: Cool. Jad, really neat to hear you talk through everything around identity as seen online. It’s a point of a lot of the research I’m doing as well, so there are interesting overlaps. First, I’ll make a comment, and then I have a question for you that’s a little off base from what we talked about. The bookshelf, as a representation, is extremely neat to think about when you have a human in the loop, because that’s really where contextual recommendations actually come to life. This idea of an algorithm saying that we’ve read 70% of the same books, and I have not read this one text that you have held really near and dear, might be helpful. But, in all honesty, that’s going to fall short of you being able to share detail on why this might be interesting to me. So, to pivot into a question: one of my favourite things that I read last year was something you did with, I forget the fella’s name, Scott, around a novel approach to reputation systems. I’m studying a little bit in this Web3 area, and the idea of splitting reputation and economic value is really neat. And I’d love to hear you talk a little bit more about ‘Koodos’ and how you’re integrating that, or what experiments you’re trying to run in order to bring curation and reputation into the fold. I guess: what kind of experiments are you working on with ‘Koodos’ around this reputational aspect?  
    Jad Esber: Yeah, absolutely. I’m happy to share more. But before I do that, I actually found an example of a Twitter bio, which I’ll really quickly share, and then I’m happy to answer that question, Brendan. This is from a thing I put together a while ago. If we look at the username here, ‘katie, exclamation mark, seven, for Dune’: the seven here is supposed to signal to all BTS fans, BTS being a K-pop band, that she is part of that group, that fan community. It’s just that simple seven next to her name. ‘For Dune’ is basically a way for her to indicate that she is a very big fan of Dune, the movie, and Timothée Chalamet, the actor. And pinned at the top of her Twitter account is this list of the bands or the communities that she stans, ‘stan’ meaning to be a big fan of. So she is, very cryptically, announcing the fan communities she’s a part of just in her name, but also very actively pinning the rest of the fan communities that she’s a member of. I just wanted to share that really quickly. So, to address your questions, Brendan: just for folks who aren’t aware of the piece, it’s basically a paper that I wrote about how to decouple reputation from financial gain in reputation systems where there might be a token. A lot of Web3 projects promise that community contributions will earn you money. And the response that myself and Scott Kominers wrote was: “Hey, it doesn’t actually make sense, for intrinsic motivational reasons, for contributions to earn you money. In fact, if you’re trying to build a reputation system, you should develop a system to gain reputation, that perhaps spins off some form of financial gain.” So, that’s the paper. And I can link it in the chat, as well, for folks who are interested. 
So, a lot of what I think about with ‘Koodos’, the company that I’m working on, is this idea of: how can people build these digital spaces that represent who they are, and how can that remain a safe space for identity expression, while perhaps even solving some of the utilitarian needs? But then, how can we also enable folks, or enable the system, to curate at large, sourcing from across these various spaces that people are building, to surface things that are interesting in ways that aren’t necessarily super algorithmic? And so, a lot of what we think about, and the experiments we run, are around how we can enable people to build reputation around what it is that they are curating in their spaces. Does Mark’s curation of books on his bookshelf give him some level of reputation in specific fields, that then allows us to point to him as a potential expert in that space? Those are a lot of the experiments that we’re interested in running, just at a very high level, without getting too into the weeds. But if you’re really interested in the weeds of all that, I’m happy to take that conversation as well, without boring everyone. 
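[Editors: the idea of earning topic reputation from what one curates can be sketched very simply. This is a minimal, hypothetical scoring rule of our own, assuming a curator gains reputation in a topic when other curators independently shelve the same item; it is not Koodos's actual system.]

```python
# Minimal sketch of curation-based reputation (an editorial assumption,
# not Koodos's mechanism): a curator's reputation in a topic is the
# number of *other* curators who independently shelved the same items.

from collections import defaultdict

def reputation(shelves: dict) -> dict:
    """shelves maps curator -> list of (topic, item) pairs they curated."""
    curators_per_item = defaultdict(set)  # (topic, item) -> set of curators
    for curator, entries in shelves.items():
        for topic, item in entries:
            curators_per_item[(topic, item)].add(curator)
    scores = defaultdict(lambda: defaultdict(int))
    for curator, entries in shelves.items():
        for topic, item in entries:
            # Count the other curators who agree this item belongs here.
            scores[curator][topic] += len(curators_per_item[(topic, item)]) - 1
    return {c: dict(t) for c, t in scores.items()}

# Names below are illustrative only.
shelves = {
    "mark": [("hypertext", "Afternoon"), ("hypertext", "Xanadu docs")],
    "dene": [("hypertext", "Afternoon")],
    "jad":  [("poetry", "Rumi")],
}
print(reputation(shelves))
```

A real system would weight by how early a curator shelved an item, or by who later saved it, but even this toy version shows how a bookshelf could point to potential experts in a field without a token attached.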
    Brendan Langen:  Yeah. I’ll reach out to you because I’m following the weeds there. 
    Jad Esber: Yeah, for sure. 
    Brendan Langen: Thanks for the high-level answer.
    Jad Esber: No worries, of course. 
    Frode Hegland: Jad, I just wanted to say, after Bob and Fabien now, I would really appreciate it if you go into sales mode, and really pitch what you’re working on. I think, if we honestly say it’s sales mode, it becomes a lot easier. We all have passions, there’s nothing wrong with being pushy in the right environment, and this is definitely the right environment. Bob? 
    Video: https://youtu.be/i_dZmp59wGk?t=4723
    Bob Horn: Well, I noticed that your slides are quite visual and that you just mentioned visual. I wonder if, in your poetry life, you’ve thought about broadsheets? And whether you would have broadsheets in the background of coming to a presentation like this, for example, so that you could turn around and point to one and say, “Oh, look at this.” 
    Jad Esber: I’m not sure if the question is if I… I’m sorry, what was the question specifically about? 
    Bob Horn: Well, I noticed you mentioned that you are a poet, and poets often, at least in times gone by, printed their poems on larger broadsheets that were visual. And I associated that with, maybe, in addition to bookshelves, you might have those on a wall, in some sort of way, and wondered if you’d thought about it, and would do it, and would show us. 
    Jad Esber: Yeah. So, the poetry that I used to write growing up was very visual, and it used metaphors of nature to express feelings and emotions. So, it’s visual in that sense. But I am, by no means, a visual artist, so it’s not visual in that sense. I haven’t explored pairing my poetry with visual complements. Although that sounds very interesting. Most of my poetry is visual in the language that I use, and the visuals that come up in people’s minds. I tend to really love metaphors. Although I realise that sometimes they can be confining, as well, because we’re so limited to just that metaphor. If I were to give you an example of one metaphor, or one word, that I really dislike in the Web3 world, it’s the ‘wallet’. I’m not sure how familiar you are with the metaphor of a wallet in Web3, but it’s very focused on coins and financial things, like what lives in your physical wallet. Whereas what a lot of wallets are today are containers for identity, and not just the financial things you hold. You might say, ‘Well, actually, if you look into my wallet, I have pictures of my kids and my dog.’ And so, there is some level of storing social objects that express my identity. I share that just to say that the words and the metaphors we use do end up constraining us, because a lot of the projects that are coming out of the space are so focused on the wallet metaphor. So, that was a very roundabout answer to say that I haven’t explored broadsheets, and I don’t have anything visual to share with my poetry right now. 
    Video: https://youtu.be/i_dZmp59wGk?t=4938
    Bob Horn: What is, just maybe, in a sentence or two, what is Web3? 
    Jad Esber: Okay, yeah. Sure. So, Web3, in a very short sense, is what comes after Web2, where Web2 is, sort of, the last phase of the internet, which relied on reading and writing content. So if you think about Web1 being read-only, and Web2 being read and write, where we can publish as well, Web3 is read-write-own. So, there is an element of ownership for what we produce on the internet. That’s, in short, what Web3 is. A lot of people associate Web3 with blockchains, because they are the technology that allows us to track ownership. So that’s what Web3 is, in a very brief explanation. Brendan, as someone who’s deep in this space, feel free to add to that, if I’ve missed anything. 
    Bob Horn: Thank you.  
    Brendan Langen: I guess the one piece that is interesting in the wallet metaphor is that the Web2 metaphor for identity sharing was a profile. I would love to hear your opinion comparing those two, and the limitations of what even a profile provides as a metaphor. Because there are holes in identity if you’re just a profile.
    Jad Esber: Totally, yeah. Again, what is a profile, right? It’s very two-dimensional. What was a profile before we had Facebook profiles? A profile, when you publish something, is a little bit of text about you, perhaps a profile picture. But what they’ve become is containers for the photos that we produce, and spaces for us to share our interests; we’re creating a bunch of stuff that’s part of that profile. And so, again, the limiting aspect of the term ‘profile’: a lot of what’s been developed today hinges on the fact that it’s tied to a username and a profile picture and a little bio. It’s very limiting. I think that’s another really good example. Using the term ‘wallet’ today is limiting us in a similar way to how profiles limited us in Web2, if we were to think about wallets as the new profile. That’s a really good point; I actually hadn’t made that connection, so thank you.
    Video: https://youtu.be/i_dZmp59wGk?t=5107 
    Frode Hegland: Fabien? 
    Fabien Benetou: Thank you. Honestly, I hope there’s going to be, let’s say, a bridge to the pitch. But to be a little bit provocative, honestly, when I hear Web3, I’m not very excited, because I’ve been burnt before. I checked bitcoin in 2010 or something like this, and Ethereum, and all that. And honestly, I love the promise of the Cypherpunk movement and the ideology behind it. To actually be decentralised, to challenge the financial system and its abuse, speaks to me. I get behind that. But then, when I see the concentration back behind most of the blockchains, I’m like, “Well, so much for the dream”, at least from my understanding of the finance behind all this. And yet, I have tension, because I want to get excited; like I said, the dream should still live. As I briefly mentioned in the chat earlier, with surveillance capitalism, there is a difference between doing something in public and doing something on Facebook; it’s not the same. First, because it’s not really in public, it’s on a private platform. But then, even if you do it publicly on Facebook, there is a system to extract value and transform that into money. And I’m very naive, I’m not an economist, but I think people should pay for stuff. It’s easy. I mean, it’s simple, at least. So, if I love your poetry, and I can find a way that can help you, then I pay for it. There is no need for an intermediary in between, especially if it comes at the cost of privacy and potentially democracy. So that’s my tension; I want to find a way. That’s why I also care about provenance, and how we have a chain of sources and can attribute people back down the line. Again, I love that. But when I hear Web3 I’m like, “Do we need this?” Or, for example, and I don’t like Visa or Mastercard, but I’m wondering if relying on a centralised payment system is still less bad than a Cypherpunk dream that’s been hijacked.
    Brendan Langen: Yeah, I mean, I share your exact perspective. I think Web3 has been tainted by the hyper-financialisation that we’ve seen. And that’s why, when Bob asked what Web3 is, it’s just what’s after Web2. I don’t necessarily tie it, from my perspective, to crypto. I think that is a means to that end but isn’t necessarily the only option. There are many other ways that people are exploring that serve some of the similar outcomes that we want to see. And so, I agree with you. I think right now the version of Web3 that we’re seeing is horrible; crypto art and the buying and selling of NFTs as stock units is definitely not the vision of the internet that we want. And I think it’s a very skeuomorphic early version of it that will fade away, and it’s starting to. But the vision that a lot of the more enduring projects in the space have, around provenance and ownership, does exist. There are projects that are thinking about things in that way. And so, we’re in the very early stages; people are looking for a quick buck, because there’s a lot of money to be made in the space, and that will all die out, and the enduring projects will last. And so, decoupling Web3 from blockchain, where Web3 is what is after Web2, and blockchain is one of the technologies that we can be building on top of, is how I look at it. And stripping away the hyper-financialisation and the skeuomorphic approaches that we’re seeing right now from all of that. And then, recognising also, that the term Web3 carries a lot of weight because it’s used in the space to describe a lot of these really silly projects and scams that we’re seeing today. So, I see why there is tension around the use of that term.
    Frode Hegland: In one of the discussions I had around the upcoming Future of Text work, I’m embarrassed right now, I can’t remember exactly who it was (Dave Crocker), but the point was made that version numbers aren’t very useful. This was in reference to Visual-Meta, but I think it relates to Web3. Because if the change is small you don’t really need a new version number, and if it’s big enough it’s obvious. So, this Web3, I think we all kind of agree here, is basically marketing.
    Jad Esber: It’s just a term, yeah. I think it’s just a term that people are using to describe the next iteration of the Web. And again, as I said, words have a lot of weight, and I’m sure everyone here agrees that words matter. So yeah, when I reference it, usually I’m pointing to this idea of read-write-own, with ‘own’ being the new entry in the Web. So, yeah.
    Bob Horn: I was wondering whether it was going to refer to the Semantic Web, which Tim Berners-Lee was promoting some years ago. Although, not with a number. But I thought maybe they’ve added a number three to it. But I’m waiting for the Semantic Web, as well. 
    Jad Esber: Totally. I think the Semantic Web has inspired a lot of people who are interested in Web3. So, I think there is a return to the origins of the internet, right? Ted Nelson’s thinking is also a big inspiration behind a lot of current thinking in this space. It’s very interesting to see us loop back almost to the original vision of the Web. Yeah, totally.
    Frode Hegland: Brandel?
    Video: https://youtu.be/i_dZmp59wGk?t=5497
    Brandel Zachernuk: You talked a little bit about algorithms, and the way that algorithms select, and painted them as ineffable or inaccessible. But the reality of algorithms is that they’re just the policy decisions of a given governing organisation. And based on the data they have, they can make different decisions. They can present and promote different algorithms. So ‘Forgotify’ is a take on upending the predominant deciding algorithm and giving somebody the ability, through some measure of the same data, to make a different set of decisions about what to be recommended. The idea that I didn’t get fully baked, that I was thinking about, is the way that a bookshelf is an algorithm itself, as well. It’s a set of decisions or policies about what to put on it. And you can have a bookshelf which is the result of explicit, concrete decisions like that. You can have a meta bookshelf, which is the set of decisions that put things on it. And I was just thinking about the way that there is this continuum between the unreachable algorithms that companies like YouTube and Spotify put out, and the kinds of algorithms, internally, that drive what it is that you will put on your bookshelf. I guess what I’m reaching for is some mechanism to bridge those and reconcile the two opposite ends of it. The thing is that YouTube isn’t going to expose that data. They’re not going to expose the hyperparameters that they make use of in order to do those things. Or do you think they could be forced to, in terms of algorithmic transparency versus personal curation? Do you see things that can be pushed on, in order to come up with a way in which those two things can be understood, not as completely distinct artefacts, but as opposite ends of a spectrum that people can reside within at any point?
    Jad Esber: Yeah. You touch on an interesting tension. I think there are two things. One is things being built to be composable, so people can build on top of them, and can audit them. The YouTube algorithm is one example of something that really needs to be audited, but also, if you open it up, it allows other people to take parts of it and build on top of it. I think that’d be really cool and interesting. But it’s obviously completely orthogonal to YouTube’s business model and building moats. So composability is one thing that would be really interesting. And auditing algorithms is something that’s much discussed in this space. But I think what you’re touching on, which is a little bit deeper, is this idea of algorithms not capturing emotions, not capturing the softer stuff. A lot of folks think and talk about an emotional topology for the Web. When we think about our bookshelf, there are memories, perhaps, that are associated with these books, and there are emotions and nostalgia, perhaps, captured in that display of things that we are organising. And that’s not really very easy to capture using an algorithm. It’s intrinsically human. Machines don’t have emotions, at least not yet. And so, I think that what humans present is context, and that’s emotional context, nuance, that isn’t captured by machine curation. That’s why, in the presentation, I talked a little bit about the pairing of the two. It’s important to scale things using programmatic algorithms, but humans make it real; they add that layer of emotion and context. And there is this parable that basically says that human curation will end up leading to a need for algorithmic curation, because the more you add and organise, the more there’s a need for a machine to then go in and help make sense of all the things that we’re organising. It’s an interesting pairing; what the right balance is remains an open question.
    Frode Hegland: Yeah, Fabien, please. But after that, Brendan, if you could elaborate on what you wrote in the chat regarding this, that would be really interesting. Fabien, please.
    Video: https://youtu.be/i_dZmp59wGk?t=5836
    Fabien Benetou: It’s to pitch something to potentially consider linking with your platform. It’s an identity management system targeting mostly VR, at least at first. And it is completely federated and open source. The thing is, it’s very minimalist. It just provides an identity: you have, let’s say, a 3D model, a name, and a list of friends. I think that’s it. But if you were to own things, and you were able to either share or display them across the different platforms, I think it could be quite interesting. Because, in the end, we discussed this quite a bit, so I’m going to go back over it, but there is also a social or showcasing aspect to creation that we want to exchange. Honestly, when I do something that I’m proud of, the first thing I want to do is to show someone. I’m going to see if my better half is around; she’s not going to get it, but still, I can’t stop myself, I want to show it. I have a friend, they’ll get it, hopefully. I want to show you all here, too. And so, I want to build, and I want to show it. And I imagine a lot of creation is like that: as soon as you find something beautiful, it’s, “No, I don’t want to keep it to myself. I want to share it with my people.” So, I’m wondering at which point that could also help this kind of identity platform or solution, because they are quite abstract, in the sense that they’re not specific, let’s say, to one platform; they sit on top of that. But then people think, “What for?” Okay, I can log in with, let’s say, Facebook or Apple. I know them. I trust them. So that’s it. I’m just going to click on that button. But, like the discussion we had here, my identity is also what I showcase around me, what defines me, and I want to not just share it to establish myself, but also to help others discover. So maybe it could be interesting to check how there could be a way to be more than an identity.
    Jad Esber: Totally. If you think about DJs, their profession is essentially to curate music and stitch things together. There are professions that centre around helping other people discover, and that becomes work, right? So I think helping other people discover can be considered something that gives you back status, or gives you back gratification in some form. Perhaps it just makes you happier. But it also could give you back money, in that it’s a profession. Art curators, DJs. So, there’s a spectrum as well. I think a lot of folks will recommend something because they like it. They will recommend it because it gives them some level of status. At the end of the spectrum, it becomes a job. Which I think is certainly an interesting proposition: what does it look like if internet curators are recognised as professionals? Could there be a world where people who are curating high-value stuff could be paid? And I think, Brendan alluded to this briefly, beyond just adding links, the synthesis, the commentary, is really valuable, especially with the overload that we have today. And so, I alluded to this idea of curation being invisible labour. What if it was recognised? And what if it became a form of paid work? I think that could also be very interesting as an extension to your thought around curating to help others.
    Fabien Benetou: So, sorry. I’ll just bounce back because it’s directly related, but I’m just going to throw it out there. If someone wants to tour through WebXR, see some of their favourite spaces, and give me a bit of money for doing it, I’m up for attempting that. I don’t know exactly how, but I think it could be quite interesting to have a tour together, and maybe put in our backpack whatever we like, or whomever we connect with. And again, across platforms, not just one.
    Jad Esber: Totally, yeah. There is precedent for that, in a way: galleries and museums are institutionalised spaces of curated works. We pay to enter them. Is there a way we can bring that down to the individual, right? A lot of the past versions of the Web have taken institutionalised things and made them user-generated. Is there a version of galleries or museums that is user-generated and owned? That’s an exploration that we’re interested in, as well, at ‘Koodos’. So, something we’re exploring.
    Frode Hegland: Fabien, I saw you put a link here to web.immers.space. That reminds me to mention to you guys that someone from ‘Immersed’, the company that makes the virtual screens in Oculus, will be doing a hosted meeting soon. It’s on a completely different tangent from what this is about, but I just wanted to mention it to you guys. Brendan, would you mind going further into what you were talking about?
    Video: https://youtu.be/i_dZmp59wGk?t=6211
    Brendan Langen: Sure. I think it’s minimal, but the act of curation... I suppose I should have qualified the type of research that I’m talking about. My background is in UX research. So, when you’re digging into any one of our experiences with a tool, and we run into a pain point, or we stop using it and leave the page, the data can tell us we were here when this happened. But it takes so much inference to figure out what it actually was that caused it. It could be that we just got a phone call, and it was not a spam call for once, and we’re thinking, “Oh, wow. I have to pick this up and talk to my mother.” Or it could be that this is so frustrating, and as I kept clicking, and clicking, I just got overwhelmed, and I didn’t want to deal with it anymore. And everything in between. And that’s really where the role of user research comes in. And that was the comparison to curation: we can only understand what feeling someone had when they heard that song that changed their life, or read a passage that triggered a thought that they then wrote an essay about, by diving into it further, and further. The human is needed in the loop at all times. Mark and I have talked a lot about this. It does not matter how your data comes back to you; regardless, you’re going to need to clean it. And you’re going to need to probe into it, and enrich it with a human actually asking questions.
    Jad Esber: Totally, yeah. That resonates very deeply. And I can share a little bit about ‘Koodos’, because I’ve alluded to it, but I will also share that it’s very early and very experimental. So that’s why there isn’t really that much to share. But I think it centres around that exact idea of: how can we bottle or memorialise the feeling that we have around discovering the thing that resonated? The experience, right now, centres on this idea of, “Hey, when I’m listening to this song, or I’m reading this article, or watching this video, and it resonates, what can I do with it to memorialise it, to keep it, and to create something based on it?” And so, right now, people create these cards that link out to content that they love from across the Web. And on those cards, they can add context or commentary. A lot of what people are adding tends to be emotional. The earliest experiment centred on people adding emojis, just emoji tags, to the content to summarise its vibe. And these cards are all time-stamped, so there’s also a way for you to see when someone came across something. And they’re all added to a library, or an archive, or a bedroom, or bookshelf, whatever you’re going to call it, that aggregates all the cards that you’ve created. So it becomes a way for you to explore what people are interested in, and what they’re saying and feeling about the things they come across that resonate. The last thing I’ll share, as well, is that these cards unlock experiences. So, if I created a card for Brendan’s paper, for example, I’ll get access to a collection where other people have created cards for Brendan’s work, and I can see all of what they commented and created, and who they are, and maybe go into their libraries and see what it is that they are creating cards for. So, that’s the current experience. And again, in the early stages.
Most of our users are quite young. That’s why I speak a lot about the identity-formative years, when you’re constructing your identity, being a really important phase in life. Our users are around that age. That’s what we’re doing and thinking about, and it provides some context for a lot of the perspectives that I share.
    Brendan Langen: I have to comment. I love the idea of prompting reflection. Especially at a stage where you are identity-forming. There’s nothing like cultivating your taste by actually talking about what you liked and disliked about something. And then, being able to evoke that in the frame of, how it made me feel in a moment, can build up a huge library of personal understanding. So, that’s rather neat. I need to check this out a little further.
    Jad Esber: Totally, yeah. We can chat further. I think the one big thought that has come out of the early experimentation is that people use it for mental health reasons. Prompting you to reflect, capturing emotion over time, and archiving what has resonated and what you felt over time is a really healthy thing to do. So that was an interesting outcome of the early product.
    Closing Comments
    Video: https://youtu.be/i_dZmp59wGk?t=6526
    Frode Hegland: There are so many opportunities, with multiple dimensions of where this knowledge can go. We also have, upcoming, Phil Gooch from Scholarcy, who will be doing a presentation. He doesn’t do anything with VR or AR. But what he does do, at scholarcy.com, is analyse academic documents. So they do all kinds of stuff that seems to be on more of the logical side, where it seems, Jad, you’re more on the emotional side. And I can imagine, specifically for this community, the insane amount of opportunities for human interactions in these environments. And then how we’re going to do the plumbing to make sure it is ownable. You said earlier, when defining Web3, that one of the terms is ownable. The work we’ve been doing with Visual-Meta is very much about the fact that we need to be able to own our own data. So, it was nice to hear that in that context. We’re winding down. It’s really nice to have two hours, so it’s not so rushed, and we can actually listen to each other. Are there any closing comments, questions, suggestions, or hip-hop improvisations?
    Fabien Benetou: I’m not going to do any hip-hop improvisation, not today at least. A quick comment, though: I wouldn’t use such a platform without actually owning it, meaning, for example, at least a way to export the data and have it in a meaningful form. I don’t pour my life into things, especially here with the emotional aspect, without some safety, literal safety, of being able to extract it, and ideally live, because I’m a programmer. If I can tinker with the data itself, that also makes it more exciting for me. So, I do hope there is some way to easily, conveniently do that, and hopefully there is then no need to consider leaving the platform. Tinkering, I think, is always worthwhile. No need to leave, but still being able to actually have your data and do whatever you want with it, I think, is pretty precious.
    Jad Esber: Yes, thank you. Thank you for sharing that, Fabien. And absolutely, that’s a very important consideration. The cards you create are tied to you, not to the space that you occupy or create on ‘Koodos’. That’s a really key part of the architecture. And I hear you on the privacy and safety aspect. Again, this is a complex human system, and so, when designing it, beyond the software you’re building, I think the social design is really important. And the question of what is in the box, that’s for yourself: the trinkets that you keep to yourself, versus the cards, the books, that you present to the rest of the folks that come into your space. I think that is an important design question. So, yeah. Thank you for sharing, Fabien.
    Fabien Benetou: A quick little thing that is a lot more open, let’s say. Unfortunately, I can’t remember the name, but three or four years ago, there was a VR experience done by Lucas something, maybe somebody will remember, where you had a dozen or two dozen clouds above your head, in a couple of scenes, and you could pull down a cloud in order to listen to someone else’s voice. And each virtual space was a prompt: when was the last time you cried? Yes, www.lucasrizzotto.com. His experience must be there in his portfolio; it is three or four years old. Maybe half a dozen different spaces, with different ambiance, different visuals and sounds. And every time a prompt, well, I don’t know, what’s the meaning of life, simple, easy questions. And then, if you want to talk, you can talk and share it back with the community. And if you don’t want to talk, you don’t have to. So, it’s not what you do, but I think there are some connections; some things could be inspiring. Also worth checking out.
    Jad Esber: I guess, on my part, I just want to say thank you for the conversation, and for being here for the two hours. It’s a long time to talk about this stuff. But I appreciate it. And yeah, I look forward to, hopefully, joining future sessions, as well. Sounds like a really interesting string of conversations. And it’s great to connect with you all virtually and to hear your questions and perspectives. Yeah, thank you.
    Frode Hegland: Yeah. It’s very nice to have you here. And the thing about the group is, okay, today, except for Dene, we’re all male and so on. But we do represent quite a wide variety of mentalities. And this is something we need to increase as much as we can. It is crucial. And also, I really appreciate you bringing, literally, a new dimension, dealing with emotions and identities, into the discussion. So, it’s going to be very interesting moving forward. I was not interested in VR or AR at all in December. And then, Brandel came into my life. And now it is all about... I’ve actually decided I can use the word metaverse, because Meta doesn’t own it, so I’ve decided to settle on it. But the point is, I feel we’re already living in the metaverse. We’re just not seeing it through as many rich means as we can. And I don’t want to go into the metaverse with only social and gaming. And today, thank you for highlighting that we need to have our identities managed in this environment, and taken with us. So, I’m very grateful. And I look forward to seeing those of you who can make it on Friday. And we’re going to be doing, as I said, presentations in this format every two weeks. And yes, anything else before I rush off and make some dinner for the family?
    Fabien Benetou: I have a quote for this. It’s on my desktop, actually: “When technology shifts reality, will we know the world has changed?” It’s from Ken Perlin, whom we mentioned last time. I’ll put it in the chat.
    Frode Hegland: Very nice indeed. Thanks for that. And I do hope to see you in our regular calls when you can. Please know, it’s very casual. If you can make one and not another, that’s totally fine. There’s no, “you are in or out,” this is not a mafia. Have a good week, everyone. 


    Chat Log

    16:15:34 From Fabien Benetou : Sadly outdated but my virtual bookshelf
    https://fabien.benetou.fr/Content/PersonalInformationStream#Finished
    16:15:47 From Frode Hegland : oooooh…..
    16:22:07 From Frode Hegland : References as bookshelves
    16:22:22 From Frode Hegland : Hey Dave!
    16:23:10 From Fabien Benetou : wanted to project my virtual bookshelf (as I have covers) on a wall at home but never bothered to do so, especially if I can keep it synced with the growing percentage coming from e-ink reading
    16:25:33 From Dene Grigar : ELO’s The Next collects and makes available collections artists and scholars have collected over their lifetimes.
    16:26:02 From Frode Hegland : Dene, this is interesting, would you like to talk about that?
    16:27:04 From Dene Grigar : Yes.
    16:27:08 From Dene Grigar : Thank you
    16:32:44 From Frode Hegland : Daunt?
    16:32:47 From Jad Esber : yes!
    16:34:23 From Fabien Benetou : oops actually I do have a visualisation of my virtual bookshelf
    https://fabien.benetou.fr/?n=Wiki.BookshelfVisualization?action=whiteboard
    in fact I did even project with a tactile video projector on a wall but I don’t think I have a video of that.
    16:34:41 From Frode Hegland : Sitting in a coffee shop and reading and wanting others to see what you are reading… Yes, identity projection. Same as with music I guess. DJ!
    16:35:50 From Frode Hegland : ELO: Electronic Literature Organisation https://eliterature.org
    16:35:51 From Fabien Benetou : Ironically enough a cafe does bring an environment conducive to thinking arguably thanks to the noise…
    16:36:12 From Frode Hegland : Indeed Fabien
    16:36:40 From Brandel Zachernuk : Etsy had a really joyful celebration of the entire audience on their main page, with everyone’s cursors existing at the same time for everyone to see
    16:37:07 From David Millard : More generally, this reminds me of social bookmarking (e.g.
    https://en.wikipedia.org/wiki/Delicious_(website)
    ) although this has fallen out of favour now.
    16:38:46 From Jad Esber : That’s fascinating!!
    16:39:41 From Frode Hegland : Since you asked Jad, my book shelf behind the computer
    16:40:13 From Dene Grigar To Frode Hegland(privately) : the-next.eliterature.org
    16:44:10 From Jad Esber : And that context is completely lost online—imagine that existing on the internet.
    16:44:53 From Dene Grigar : the-next.eliterature.org
    16:47:18 From Frode Hegland : But Mark, you love to hold up new books and share with us, which is super nice.
    16:50:04 From Dene Grigar : The whole notion of coffee table books is about showing guests your personal taste
    16:50:06 From Fabien Benetou : I think I’m a bit in between. I’m utilitarian in the sense that I give away books to friends but I keep my ~top20 in a glass bookshelf in the living room, for others to see and spark a conversation but also for me to periodically reflect back because they do in a way define either me or at least some truth of my worldview
    16:51:47 From Brandel Zachernuk : This makes me realise I am radically utilitarian – I don’t have magazines at all, and don’t have any displays outside of the books I have literally grabbed to show people
    16:52:20 From Fabien Benetou : Arguably showcasing is utilitarian too
    16:54:33 From Frode Hegland : YouTube DJ mixes. I have found a few DJ’s who are my human curators. Some well known and public, some anonymous only known by some made up name and no face is shown. Before we used to have sales people in music shops who could recommend. This is missing now of course. ALSO: The way music and books are (almost) interchangeable in this is very interesting. CLOTHING: We buy clothes which people like us buy
    16:55:00 From Frode Hegland : Brendan? Dave?
    16:56:56 From Dene Grigar : I need to get to my next meeting. I thank you for your excellent presentation. I will see you all at the next event.
    16:57:06 From Jad Esber : Thank you for attending Dene!
    16:57:07 From Dene Grigar : Thanks, Frode
    16:57:09 From Frode Hegland : Look forward Dene, great to see you today
    16:57:09 From Dene Grigar : Frode
    16:57:32 From Mark Anderson (Portsmouth, UK) : For ‘dinner guests’ read ‘guests’. I do notice that people have lost a habit of going to where a book may be (shop, library, or someone’s house) substituting instead a signal of interest by listing the books as of interest.
    16:58:40 From Frode Hegland : TikTok stitching?
    17:00:43 From Brandel Zachernuk : At base a blockchain isn’t a reflection of “who owns what”, but just “what happened” – which actions have been taken, which may represent ownership changes but also basic transformations with no transitive component
    17:01:31 From Brendan Langen : re: design systems embedding provenance, I’ve liked the recent additions from Chrome to add ‘Copy Link to Highlight’ in the right-click/context menu.

    Gavin Menichini: Product Presentation of Immersed

    Transcript

    Pre-Presentation

    Video: https://youtu.be/2Nc5COrVw24

    Gavin Menichini: My name is Gavin, I work here at Immersed. I’ve been here for about two years. And I, essentially, lead all of our revenue and business development operations here. I work closely with our founder and CEO on basically everything in the company. I was the seventh hire and considered, along with our founding team, as a mini co-founder. So I’m very familiar with the Immersed platform, and with the mission of what we’re building in the VR, AR space, as well as the metaverse and crypto, which we can talk on later. Yeah, today I’m going to be talking about Immersed, so I’ll have a quick presentation discussing what we’re doing. But I’d also love some feedback, an open, candid conversation. I know you have some questions you want to ask about this idea of working in the metaverse, going from 2D to VR, AR, and the implications it could have. Working in the VR space, we’re very familiar with the technology and the implications of what we think is coming. I work very directly with Meta, HTC, and Microsoft, and, in the future, Apple. So I’m pretty ingrained in the market, and in understanding what’s coming. So, yeah, I’ll give a quick presentation on Immersed, but maybe let’s first go around and do a quick round of introductions; that’d be helpful for me.
    Frode Hegland: Sure, okay. I’ll start. This group… We’ve had over 100 of these meetings now, every Monday and Friday. It started as an outgrowth of the annual symposium on The Future of Text; we’ve published two books, and we’ve now started a journal, which will be collated into a book at the end of the year. We are passionate about text. And currently we are really living in VR land. The guys will introduce themselves in a minute. But from my perspective, what I’m really strongly focused on is the lack of imagination around this. We’re focused on work in VR, and I’m really, to put it plainly, shit scared that in a year or two, with Apple and other advanced devices out there, people will think that it’s basically a meeting room, and a game, and that’s about it. So we’re trying to look at ways of making working, collectively and individually, in VR something else. And we’re doing it just as an open group of people, doing demos, and, yeah, figuring it out together. Okay, who’s next?
    Elliot Siegel: Well, I’m Elliot Siegel. And the repository that Frode mentioned was the National Library of Medicine. I was an executive with NLM for 35 years. I retired in 2010 and continued working with them as a consultant, and I still have involvement with them. So, I basically was interested in the kind of matchmaking arrangement between Frode, Vint Cerf, and my old organization. I must confess I feel very intimidated. I’m obviously several generations apart from you guys. And it’s quite impressive. And my son, who’s in his early 40s, he is an Oculus user, and he said, “Dad, you’ve got to try this out”. And I’m thinking, “What the hell am I going to do with it? I’m not a gamer.” And so I’m looking for… They want to give me an 80th birthday present, this might be it. But I first have to find out whether there’s a use for it for me, beyond playing games. I did experiment with Second Life, by the way. That was probably before you guys were born. And I did bring that into NLM. We did some work applications with that. So I do have an interest in VR, and so I’m here to learn. I’ve got a phone call coming in. At any time I may have to get off at that point. But I’ll listen for as long as I can. Thank you.
    Brandel Zachernuk: Welcome. It’s exciting to have somebody with deep domain knowledge and awareness of what kind of problems need to be solved in such an important and serious space as the NLM. My name is Brandel. I am the person who set the cat amongst the pigeons somewhat within The Future of Text book. I’m a creative technologist, working for big tech in Silicon Valley, but very much here in a personal capacity. Something that I’ve been very passionate about, trying to investigate and play with over the last 10 years or so is: what is the most pedestrian thing that you can do with virtual reality and emerging technology? And my version of that was word processing, was writing and reading, and thinking about the basic building blocks of that process of writing and reading that can fundamentally be changed by virtual reality. Realizing that if you don’t have a screen, you have the ability for information to mean what it means for your purposes, rather than for the technical limitations that apply as a consequence of a mouse or keyboard or things like that. I’m also deeply invested in understanding some of the emerging cognitive science and neurophysiological views about what the mind is and the way that we work best. So reading about, and learning about, what people call 4E Cognition: the embodied, embedded, enactive, and extended mind. And how that might pertain to what we should be doing with software and systems, as well as hardware, if necessary, to make it so that we can think properly, and express properly, and stuff. So, that’s why I’m here. And that’s what I’ve been playing with, and I’m looking forward to seeing what you’ve got.
    Frode Hegland: Brandel, I finally have a bio for you now. That was wonderful!
    Fabien Benetou: I’ll jump in because I’m here thanks to Brandel. And I think we have a somewhat similar profile. I’m a prototypist. I (indistinct) 15 years ago, in any kind of substrate, and transitioned to a wiki. And now I’m basically bringing that wiki to VR and AR. And I say this because there are a lot of people getting excited by VR, but few who think that ‘not games’, to paraphrase Elliot, is interesting, and I think it goes a lot further than this. This looks a bit like a city, and it’s basically a representation of part of my mind, through my notes, as a 3D space that can be navigated in VR. To be fair, I don’t know what I’m doing, literally, I really don’t know how to do this. So I’m just tinkering, building prototypes, sharing very candidly what I build with everyone, because I’m quite eager to hear the opinions, the criticism. And I’m genuinely convinced, as Brandel highlighted, that the neurological aspect of how we move in space can greatly benefit from the new medium. But, like I said, I’m very candid about it: I can tinker, I can code, but I don’t know how to do this. 
    Frode Hegland: You do know what you’re doing. But that’s another discussion right there. Alan?
    Alan Laidlaw: Yes, hello. I’m Alan. I work at Twilio, which is an SMS and, sort of, multi-channel company. And I work there because of my interest in the various forms of text and communication. Twilio is unique in that it’s probably the closest to a ubiquitous governance model in that space. Most of my job switches between technology and policy. The reason why I’m here, and how I think it dovetails nicely with VR, is, as Brandel said, embodied, enacted cognition. When is text the most natural way to communicate, versus other visual forms, or interactive forms? I think there’s a lot of potential there. At the same time, we’re in the middle of a shared experience where we can’t even seem to agree on the same definitions of words. Or we get excited about technology without looking at the base misunderstandings at the word level. So, that’s why I’m here.
    Frode Hegland: Great. So, Gavin, did you want to do a screen share? You’re welcome to do that, or talk, or however. But before you do that, just a quick check: everybody here knows what Immersed is, and you’ve at least tried it or something similar, right?
    Alan Laidlaw: I know what it is, but I’d love a general description for the recording.
    Gavin Menichini: Yep. More than happy to talk through it. And so it seems that each of you is a VR user to some extent, owning a Quest 2 or an HTC device or something. Am I understanding that correctly, or has anyone here never used VR before?
    Frode Hegland: Bob, have you used VR recently? Within the last few years? You haven’t? Right. But Bob has a long history of making information murals, and Brandel has been working on putting those into VR. So, that’s the perspective and wisdom of Bob, who’s not in VR yet, but he will be soon. So, awesome. And you’re on, Gavin.
    Gavin Menichini: Awesome, of course. Well, thanks, everyone, for the introductions. It’s an honour to be here and chat with each of you about Immersed. So, what I’d like to do with our time is give a high-level description of Immersed. I’d also like to show you a video to help encompass what the experience looks like; some of you have used Immersed and are very familiar, but for those of you who haven’t checked it out, I think the video our marketing team put together is very helpful at a high level. And then I can walk through a basic slide deck that I like to show companies, to showcase the value a little bit. It leans a bit sell-side, and I assure you this is not a sales pitch, but I think it should be helpful to showcase some of the value that we offer. 
    Frode Hegland: A little bit of an intro is nice. But consider that the experience is quite deep, in general, and also that this will be recorded. If you can do a kind of compressed intro, and then we go into questions and deeper, that would be really great.

    Presentation

    Video: https://youtu.be/2Nc5COrVw24?t=1353
    Gavin Menichini: Immersed is a virtual reality productivity product: we make virtual offices. Immersed breaks down into two categories, in my opinion: we have a solo use case, and we have a collaboration and meeting use case. The main feature that we have in Immersed is the ability to bring your computer screen, whether you have a Mac, a PC, or Linux, into virtual reality. So, whatever is on your computer screen is now brought into Immersed. And we’ve created our own proprietary technology to virtualize extensions of your screen. It’s very similar to having a laptop or computer at your desk and plugging in extra physical monitors for more screen real estate; we’ve now virtualized that technology. It’s proprietary to us, and we’re the only ones in the world who can do it. So now, instead of working on one screen, for example, I use a MacBook Pro for work, so instead of working on one MacBook Pro, with an Oculus Quest 2 or another compatible headset I can connect it to my computer, run the Immersed software on my computer and in my headset, bring my screen into virtual reality, and maximize it to the size of an iMac screen. I can shrink it, and then create up to five virtual monitors around me for a much more immersive work experience with your 2D screens. And you can also have your own customized avatar that looks like you, and you can beam into all these cool environments that we’ve created. Think of them as higher-fidelity, higher-quality video game atmospheres. But not like a game, more like a professional environment. We also have some fun gaming environments, space station offices, a space orbitarium, an auditorium. We have something called the alpine chalet, a really beautiful ski lodge. Really, the creativity is endless. 
And so, within all of our environments, you can work, and you can also meet and collaborate with people as avatars, instead of meeting here on Zoom, where we’re having a 2D, very disconnected experience. I’m sure each of you has probably heard the term Zoom fatigue, or video conference fatigue? That’s been very real, especially with the COVID pandemic. Fortunately, that’s hopefully going away, and we can have a little more in-office interaction. But we believe Immersed is the perfect solution for hybrid and remote working. It’s the best tech bridge for recreating that sense of connection with people. And that sense of connection has been very valuable for a lot of the organizations we’re working with, as has enhancing the collaboration experience through our monitor tech and our screen-sharing, screen-streaming technology. So, people use it for the value, and the value people get out of it is that they find themselves more productive when working in Immersed, because they now have more screen real estate, and all the environments we’ve created help promote cognitive focus. I hear from lots of customers and users who tell us that when they’re in Immersed, they feel hyper-focused, more productive, in a state of deep workflow, whatever term you want to use. People progress through their work faster and feel less distracted. And they generally feel more connected, because in VR it really feels like you have a sense of presence when you’re sitting across a table from another avatar that is your friend or colleague. And that really boosts employee and personal satisfaction and connection, for an overall more engaging, better collaborative experience when working remotely. Any questions about what I explained, or about what Immersed is?


    Dialogue

    Video: https://youtu.be/2Nc5COrVw24?t=1549 
    Fabien Benetou: Super lovely. When you say screen sharing, for example, here I’m using Linux. Is it compatible with Linux? Or is it just Windows or macOS? Is it web-based?
    Gavin Menichini: So, it is compatible with Linux. Right now, you can have virtual monitors through a special extension that we’ve created. We’re still working on developing the virtual display tech to the degree we have it for Mac and Windows. Statistically, Linux is only one or two percent of our user base, and so, as a business, we obviously have to optimize for most of our users, since we’re a venture-backed startup. But that’s coming in the future. And you can also share screens with Linux. So, with some of the extensions, you can have multiple Linux displays and share those screens as well within Immersed.
Video: https://youtu.be/2Nc5COrVw24?t=1594
    Alan Laidlaw: That’s great. Yeah, this is really impressive. This is a question that may be more of a theme to get into later, but I definitely see the philosophy of starting with where work is happening now and, like the way that you lay train tracks, bringing bits and pieces into VR so that you can get bodies in there. I’m curious: once that’s happened, or once you feel like you’ve got that sufficiently covered, is there a next step? What would you want the collaborative space in VR to look like that is unlike anything that we have in the real world? I’d love to know where you stand philosophically on that, as well as whatever the roadmap is.
    Gavin Menichini: Sure. If I’m understanding your question properly, it’s: how do we see the evolution of VR collaboration versus in-person collaboration? Whether we see there being an inherent benefit to VR collaboration, as we progress, versus in person?
    Alan Laidlaw: Yeah, there’s that part. And there’s also the question: is the main focus of the company to replicate and provide the affordances that we currently have, but in VR? Or is the main focus, now that we’ve ported things into a VR space, to explore what VR can do?
    Gavin Menichini: Okay. So, it’s a little bit of both. Mostly, we want to take what’s possible for in-person collaboration and bring it into VR, because we see a future of hybrid remote working. COVID, obviously, accelerated this dynamic. Renji, our founder, started the company in 2017, believing that hybrid remote work was going to become more and more possible as the internet and all things Web 2.0 became more prevalent, and as we got technology tools where you don’t have to drive into an office every single day to accomplish work and be productive. But we found the major challenges were that people aren’t as connected, and the collaboration experience isn’t the same as being in person. Those are huge challenges for companies, as is the resulting decrease in productivity. So, all these are major challenges to solve, and those are the challenges that Renji set out to fix by building Immersed. So when we think about the future, we see Immersed as the best tech bridge, or tool, for hybrid or remote working. You can maximize that sense of connection that you have in person, through customizable avatars whose fidelity and quality will increase over time, while we give you tech tools, like multiple monitors, to enhance the solo work experience. So people become more productive, which is the end goal: giving them more time back in the day. Corporations can continue to progress in their business goals, while balancing that with giving employees more time back in their day, to find that beautiful balance. So, we see it as a tech bridge. But, as a VR company, we’re also exploring the potential of VR: is there something that we haven’t tapped into yet that could be extremely valuable for all of our customers and users, to add more value to their lives and make their lives better? 
So, it’s less the latter; it’s more that we want to make the hybrid remote collaboration and work experience much fuller, with more value than exists today in the Zoom, Slack, Microsoft Teams paradigm. 
Video: https://youtu.be/2Nc5COrVw24?t=1796
    Brandel Zachernuk: Yeah, I’m curious. It sounds like, primarily, or entirely, what you’ve built is the connective tissue between the traditional 2D apps that people are using within their computer space and the multi-panels on which people are interacting with that content. Is that primarily through traditional input: mouse, keyboard, trackpad? Or is this something where they’re interacting with those 2D apps through some of the more spatial modalities on offer, hands or controllers? Do you use hands, or is it all entirely controller-based?
    Gavin Menichini: Yeah, great question. So, the answer is, our largest user base is on the Oculus Quest 2. It’s definitely the strongest headset, bang for your buck, on the market for now. There’s no question. Right now, you can control your VR dynamics with the controllers or with hand tracking. We actually suggest people use hand tracking, because it’s easier once you get used to it. One of the challenges we face right now is that there is an inherent learning curve for people learning how to interact with VR paradigms. And, as I’m on the revenue side, I have to demonstrate Immersed to a lot of different companies and organizations, so it can be challenging. I imagine it’s very similar (I was born in ’95, so I wasn’t around in those times) to demoing email to someone for the first time, on a computer, when they’ve never seen a computer. They totally understand the concept of email: no more paper memos, no more post-it notes; paper organization and file cabinets all exist in the computer, and they get it. But when I put a computer in front of them for the first time, they don’t know how to use it: the trackpad, the keyboard, the mouse. They don’t understand the UI/UX of the Oculus OS. They don’t understand how to use that, so it’s intimidating. So, that’s the challenge we come across. Does that answer your first question, Brandel?
    Brandel Zachernuk: Yeah, I’ve got some follow-ups, but I’ll cede the floor to Frode.
    Video: https://youtu.be/2Nc5COrVw24?t=1918
    Frode Hegland: Okay. I’m kind of on that point. So, I have been using Immersed for a bit. And the negative, to take that first, is that I think the onboarding really needs help. It’s nice when you get that person standing at your side and pointing things out, but then… So, the way it works is, the hand tracking is really good. That is what I use. I use my normal physical keyboard on my Mac, and then I have the monitor. But it’s, to me, a little too easy to go in and out of the mode where my hands change the position and size of the monitor. You’re supposed to do a special hand gesture to lock your hands out of doing that. And then there’s pinning. So, when you’re talking about these onboarding issues, that’s still a lot of work. And that’s not a complaint about your company; that’s a complaint across the board. The surprise is, it really is very pleasant. I mean, here, in this group, we talk about many kinds of interactions, but what I would like, in addition to making it more locked, is to make the pinning easier. I do find that, sometimes, the monitor doesn’t want to go exactly where I want. I’m a very visual person, kind of anal in that way, to use that language. I want it straight ahead of me, but very often it’s a little off. So, if I resize it this way, then it kind of follows. So, in other words, I’m so glad that you are working on these actual realities, this boots-on-the-ground thing, rather than just hypotheticals, because it shows how difficult it is. You get this little control thing on your wrist; if there were one that says “hyper control mode”, different levels. Anyway, just observation, and question, and point.
    Gavin Menichini: Yeah. I can assure you that we obsess over these things internally. Our developers are extremely passionate about what we’re building. We have a very strong XR team, and our founder is very proud of how hard it is to get into our company, and how many people we reject. So, we really are hiring the best talent in the world, and I’ve seen this first-hand, getting to work with them. We also have a very strong UI/UX team. But we’re really on the frontier: this has never been done before, and we are pioneering. What does it mean to have excellent UI/UX paradigms and user onboarding paradigms in virtual reality? One of the challenges we face is that it’s still early, and people are still trying to figure out even the foundations of good UI/UX. And we’re now introducing spatial computing: we’re going from 2D interfaces to 3D. What have we learned from good 2D UI/UX that translates to 3D and its paradigms? And people are now not just using a controller and mouse; they’re using hand tracking and spatial awareness. So, not only do we have to understand what good UI/UX paradigms are, but how do we code them well? And how do we build a good product around that, while having dependencies on Oculus, HTC, and Apple, where we’re dependent upon hardware technology to support our software? So we still live very much in the early days, where there’s a lot of tension as things are still being figured out. Which is why we’re a frontier tech, and why it takes time to build. But even so, with VR and AR, I think it’s just going to take longer, because there are so many more factors to consider that the people who pioneered 2D technology, Apple, Microsoft, etc., didn’t have to consider. And so, I think the problem we’re solving is, candidly, exponentially harder than the problem they had to solve. 
But we also get to stand on their shoulders, take some of the precedents they built for us, and apply them to VR, where it makes sense.
Video: https://youtu.be/2Nc5COrVw24?t=2130
    Brandel Zachernuk: So, in terms of those new modalities, in terms of the interaction paradigms that seem to make the most sense, it sounds like you’re not so much building software that people use as making software that people reach through to their other software with, at this point. Is that correct? You’re not making a word processor; you’re making the app that lets people see that word processor. Which is a big problem. I’m not minimizing it. My question is:
    Do you have observations, based on what people are using, about the way that they’re changing, for example, the size of their windows, or the kinds of ways that they’re interacting with it? Do you have observations about what customers are doing as a result of making the transition into effective productivity there? Or do you have any specific recommendations about things that they should avoid or reconsider, given the differences in, for example, pixel density, or the angular fidelity of hand tracking in 3D, in comparison to the fidelity of being able to move around a physical mouse and keyboard? Given that those things are so much more precise, but also much more limited in terms of the real estate they have the ability to cover. Do you have any observations about what people do? Or, even better, any recommendations that you make to clients about what they should be doing as a result of moving into the new medium?
    Gavin Menichini: Yeah, really good question. There are a few things; there’s a lot we could suggest. A lot of what we’re building is still very exploratory: what’s the best paradigm for these things? We’ve learned a lot, but we also understand there’s a lot more for us to build internally and explore. First and foremost, and hopefully this is obvious, but to address it: we definitely do not take a dystopian view of VR and AR. We don’t want people living in the headset. We don’t want people with it strapped to their face, with a feeding tube and water, etc. That’s not the future we want. We actually see VR and AR as a productivity enhancer, so people can spend less time working, because they’re getting more done in our product; we’ve created a product so good that it allows them to get more done at work, but also have more time to themselves. So, we suggest people take breaks; we don’t want you in a headset for eight hours straight, the same way no one would suggest you sit in front of your computer without standing, using the restroom, eating lunch, going on a walk, or taking a break. We take the same paradigm. Because you can get so focused in Immersed, we also encourage our users: “Yeah, get stuff done, but take a break”. As for observations, we’ve been surprised at how focused people have been. And the onboarding challenge is a big one, as Frode was mentioning. It’s one we think about often: how do we make the onboarding experience better? And we’ve made progress from where we were in the past. So, Frode, you’re seeing some of the first iterations of our onboarding experience; in the past, we didn’t have one. It’s something we actually pushed really hard for, because we saw a lot of challenges with users sticking around when we didn’t have one. 
And we’re now continuing to push: how do we make this easier? How do we explain things to people without making it too long, so that people get uninterested and leave? It’s a really hard problem to solve. But we’ve found that, as the onboarding experience gets easier, helping people get used to the paradigms of working in VR and AR and explaining how our technology works, we can get them to what we like to call the magic moment, where they see the potential of having their screens in VR, fully manipulable: you’re like a Jedi using the Force. You can push and pull your screens with hand tracking, pinch and expand, put them all around you. If I’m answering your question, Brandel, we’re still exploring a lot of paradigms. But we’ve found it surprising how focused people are getting, which is awesome and encouraging. We also find, which isn’t as surprising anymore, that companies, organizations, and teams are always amazed at how connected they feel to each other. So we always try to encourage people to work together. Even on our elite tier, which is just our middle tier (think of it as a pro solo user), you have the ability to collaborate with up to four people in a private room. We also have public spaces, where people can hang out, and it’s free to use. Think of it as a virtual coffee shop: you can hang out there and meet with people. You can’t share your screens, obviously, for security reasons, but you can meet new people and collaborate. And it’s been cool to see how we’ve formed our own community, where people can be connected with each other, hang out, and meet new people. So, hopefully, that answers a little bit of your question. There’s still a lot more we’re learning about the paradigms of working with 2D screens, what people prefer, and what best practice is.
Video: https://youtu.be/2Nc5COrVw24?t=2410
    Brandel Zachernuk: Yeah. One of the issues that I face when I think about where people can expect to be in VR productivity at this point is the fact that the Quest 1, Quest 2, and Vive all have a fixed focal distance, which is pretty distant: normally, the minimum accommodation distance is about 1.4 meters. That means anything at approximately arm’s length, which is where we have done the entirety of our productivity in the past, is actually getting into eye-strain territory. The only headset on the market that has any capacity for addressing that kind of range is actually the Magic Leap, which I don’t recommend anybody pursue, because it’s got a second focal plane at 35 centimetres. Do you know where people put those panels on Quest? On Vive? I don’t know if you’ve got folks on other headsets, and whether that makes any difference in terms of where they put them? Or, alternatively, do you recommend, or are you aware of, anybody making any modifications to deal with a closer focal distance? I’m really interested in whether people can actually work the way they want to, given the current limitations of the hardware at the moment.
    Gavin Menichini: Yeah. There are a few things in response to that. One: we’ve actually found, internally, even with the Quest 2, that although the screen distance and focal point are a challenge, people in our experience report less eye strain working in VR than working from their computer. We’re candidly still trying to figure out why that’s the case. I’m not sure about the distances and the optics games they’re playing in the Quest 2 and other headsets we use. But we’ve found that people are reporting less eye strain, based solely on customer reviews and feedback; we haven’t done any studies. I personally don’t know a lot about IPDs and the focal-length distances of the exact hardware of all the headsets on the market. All I’m doing is paying attention to what our customers and users are saying, and, surprisingly, they’re not getting that much eye strain. A lot of people have actually said they prefer working in VR to working from their computers, without even blue-light glasses, and they still get less eye strain. So, the science and technicalities of how that works, I’m not sure; it’s definitely out of my realm of expertise. But I can assure you that the hardware manufacturers (we have a close relationship with Meta and HTC) are constantly thinking about that problem too: if you’re strapping an HMD to your face, how do you ensure a good experience from an eye-health standpoint?
    Brandel Zachernuk: Do you know how much time people are clocking in it?   
    Gavin Menichini: On average, our first user session is right around an hour and 45 minutes to two hours. And we have power users who are spending six to eight hours a day inside Immersed, clocking that much time and getting value out of it. And it’s consistent. I’m not sure what our average session time is; I would say it’s probably around an hour or two. But we have people who use it for focus first, who want to do focus sessions in Immersed and will spend four or five hours in it, and our power users will spend six, seven, eight hours.
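An editorial aside: Brandel’s concern about focal distance can be made concrete with a quick back-of-the-envelope calculation in diopters (optical power, the reciprocal of distance in meters). Only the ~1.4 m accommodation distance comes from Brandel’s remarks; the 0.6 m arm’s-length panel distance and the ~0.5 D comfort threshold are illustrative assumptions, not figures from the discussion.

```python
# Vergence-accommodation mismatch sketch. A headset's optics fix the eye's
# focus (accommodation) at one distance, while stereo rendering can place a
# panel (vergence) at another; the mismatch is the difference in diopters.

def diopters(distance_m: float) -> float:
    """Optical power corresponding to a viewing distance in meters."""
    return 1.0 / distance_m

focal_plane_m = 1.4   # approximate fixed accommodation distance (Brandel's figure)
panel_m = 0.6         # assumed: a virtual monitor placed at arm's length

mismatch = diopters(panel_m) - diopters(focal_plane_m)
print(f"vergence-accommodation mismatch: {mismatch:.2f} D")  # ~0.95 D

# Mismatches much beyond roughly 0.5 D are commonly associated with visual
# discomfort, which is one reason placing virtual monitors farther away helps.
```

Pushing the same panel out to the 1.4 m focal plane brings the mismatch to zero, which is consistent with Frode’s habit of making the virtual monitor large and placing it far away.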
    Video: https://youtu.be/2Nc5COrVw24?t=2609
    Frode Hegland: I can address these few points. Because, first of all, it’s kind of nice: I don’t go on Immersed every week, but when I do, I get an email that says how many minutes I spent in Immersed, which is quite a useful statistic. So, I’m sure, obviously, you guys have more on that. When it comes to the eye strain, I tend to make the monitor quite large and put it far away, to address exactly what you’re talking about, Brandel. And I used to not like physical monitors being at that distance; it was a bit odd. But since I use a keyboard and trackpad, where I don’t have to search for a mouse, I don’t need to see my hands anyway, even though I can. I do think that works. But maybe, Gavin, you said you had a video to share a little bit of what it looks like?
    Gavin Menichini: Sure, yeah. I can pull that up real quick. It’s a quick marketing demo video, but it does a good job of showcasing the potential of what’s possible. I’m not sure if you’ll be able to hear the audio; it’s just fun background music, it’s not that important. The visuals are what matter. Let me go ahead and pull this up for us real quick.
    Frode Hegland: I think you can just mute the audio and then talk if you want to highlight something, I guess.
    Gavin Menichini: Okay. Actually, yeah. That’s probably a good idea. So, this is also on YouTube. So, if you guys are curious and want to see more content, just type in Immersed VR on YouTube. Our Immersed logo is pretty clear. Our content team and marketing team put out a lot of content, if you’re curious. We also have a video called “Work in VR, 11 tips for productivity”, where our head of content goes through some different pro tips, if you’re curious and want to dive into a more nuanced demo of how you do things, et cetera, to see more of the user experience. So, this is a good, helpful, high-level video. You can see you have full control of your monitor. You can make it ginormous, like a movie screen. We have video editors, day traders, finance teams, and, mostly, developers as our main customer base. As you can see here, the user is just sitting down at the coffee table, and the keyboard is tracked. We also have a brand new keyboard feature coming out, called keyboard passthrough, where we’ll leverage the cameras of your Oculus Quest to cut a hole in VR and see your real-life keyboard, which we’re very excited about. And here you can see just a brief collaboration session of two users working side by side. You can also incorporate your phone into VR, if you want to have your phone there. And then, here you’ll see what it looks like to have a meeting in one of our conference rooms. You can have multiple people in the room; we’ve had 30-plus people in an environment, so it can easily support that. It also depends on everyone’s network strength and quality, very similar to Zoom or a phone call, and that affects the quality of the meeting, its audio and screen-sharing input. But if everyone’s on a good network, that’s not an issue. And then, lastly, here you can see one of our users with five screens, working in a space station. And that’s about it. 
Any questions or things that stood out from that, specifically?
    Video: https://youtu.be/2Nc5COrVw24?t=2800
    Frode Hegland: Yeah. A question about the backgrounds. You have some nice environments that can be applied. I think we can also import any 360 images, is that right, currently? And if so, can we also load custom 3D environments in the future? Are you thinking about customization for that aspect of it?
    Gavin Menichini: Yes. So, we are thinking about it, and we do have plans for users to incorporate 3D environments. There are a few challenges with that, for a few obvious reasons, which I can touch on in a second. But we do support 360 environments, 360 photos, for users to incorporate. And we also have a very talented artist and developer team that are constantly making new environments. And we run user polls, and we figure out what our users want us to build and what they’d like to see. And as we continue to grow the company, right now we’re in the process of fundraising for a Series A, and once we do that, we’re hoping to go from 27-28 employees to at least 100 by the end of the year. The vast majority of them will be developers, to continue to enhance the quality of our product. And then, we will also support 3D imports of environments. But because the Quest 2 has some compute limitations, we have to make sure that each of our environments has specific poly counts and specific compute budgets, so that the Quest 2 won’t explode if someone tries to open that environment in Immersed, and so that your Immersed experience stays optimized, high quality, and doesn’t lag, et cetera. So right now, we’re thinking, one: how do we enable our users to build custom environments? And two: how do we make sure they meet our specific requirements for the Quest 2? But naturally, over time, headsets are getting stronger and computing power is getting better. It’s very similar to going from Nintendo 64 graphics to the Xbox Series X now: a ginormous jump in quality. Headset quality will follow the same path. So, we’ll have more robust environments, with more give and take in optimizing the environments our users give to us. So it is in our pipeline, but we’re pushing it further down the pipeline than we originally wanted, just due to some natural tech limitations. 
And also the fact that we are a venture-backed startup, and we have to be extremely careful about what we work on, and optimize for the highest impact. But we’re starting to get some traction in our Series A conversations, and hopefully we’ll have some more flexibility, financially, to continue pushing.
    Frode Hegland: Thank you. Alan?
    Video: https://youtu.be/2Nc5COrVw24?t=2943
    Alan Laidlaw: Yes. So, this is maybe a, kind of, Twilio-esque question about the design material of network strength, bandwidth, and compute, like you mentioned. I saw, in the demo, the virtual keyboard where, of course, the inputs would be connected to a network, versus a physical keyboard that you already have in front of you. If it were possible to use the physical keyboard and have those inputs go into the VR environment, or AR environment in this case, would that be preferred? Is that the plan? And if so, that opens up, I mean, this is such rich pioneer territory, as you mentioned, there are so many ways to handle this. Would there be a future where, if my hands are doing one thing, that’s an indication that I’m in my real-world environment, but if I do something else with my hand, that’s suggesting, you know, take my hand into VR, so I can manipulate something? I’m curious about any thoughts on, essentially, that design problem versus the hard physical constraints of bandwidth. Is it just easier, does it make a better experience, to stick with a virtual keyboard for that reason, so you don’t, at least, have a disconnect between the real world and VR? And I’m sure there are other ways to frame that question.
    Gavin Menichini: No, that’s fine. I can answer a few points, with a few follow-up questions to make sure I understand you correctly. For the keyboard, specifically, the current keyboard tracking system we have in place is not optimal. It was just the first step of what we wanted to build to help make the typing-in-VR problem easier, which is our biggest request. So we are now leveraging, I think, a way stronger feature, which is called keyboard pass-through. For those who don’t know, the Oculus Quest 2 has a pass-through feature, where you can see the real world around you through the camera system, which stitches the imagery together. We now have the ability to create a pass-through portal system, where you can cut out a hole in VR over your keyboard. So, whatever keyboard you have, whether it’s a Mac, Apple, whatever, or the funky keyboards that a lot of our developers really like to use for a few reasons, you can now see that keyboard and your real hands through a little cut-out in VR. And then, when it comes to inputs, what you mentioned about doing something with your hands being a real-life thing versus a VR thing: are you referring to having a mixed reality headset that can do AR and VR, where you want to be able to switch from the real world to VR with a hand motion?
    Alan Laidlaw: Yeah. A piece of my question. I can clarify. I am referring to mixed. But specifically where that applies is the cut-out window approach, is definitely a step in the right direction. But it seems that’s still based entirely on the Oculus understanding of what your fingertips are doing. Which will obviously have some misfires. And that would be an incredibly frustrating experience for someone who’s used to a keyboard always responding, hitting the keys that you’re supposed to be hitting. So, at some point, it might make more sense to say, “Okay, actually we’re going to cut out. We’re going to forget the window approach and have the real input from the real keyboard go into our system”. 
    Gavin Menichini: So, that’s what it is, Alan. Just to further clarify, we always want our users to use their real hands on the real keyboard. And you’re not using your virtual hands on a virtual keyboard. You’re now seeing, with pass-through, your real hands and your real keyboard, and you’re typing on your real keyboard.
    Frode Hegland: A really important point to make in this discussion is that, for a single user, there are two elements here: there is the 3D environment around you, and then you have your screen. But that is the normal Mac, Linux, or Windows screen. And you use your normal keyboard. So, I have actually used my own software. I’ve used Author to do some writing on a big, nice screen, so it is exactly the keyboard I’m used to.
    Alan Laidlaw: Right. So, how that applies to the mixed reality question is, if I’m using the real keyboard, have the real screen, but one of my screens is an iPad, a touch screen, that’s in VR, where I want to move some elements around, how do I then, transition from my hands in the real world to now I want my hand to be in VR?
    Gavin Menichini: So, you’re going to be in Immersed, as of now. You’re going to be in VR, and you’re going to have a small cut-out into the real world. So, right here is the real world, through a cut-out hole, and then, if you have your hands here, and you want to move your hands into here, the moment your hands leave the pass-through portal in VR, they turn into virtual hands. And, to further clarify, right now the virtual hands you have in hand tracking will still be overlaid on your hands in the pass-through window. We’re experimenting with taking that out, for further clarity of seeing your camera hands on your keyboard. But, yes. When you’re in Immersed, it’ll transition from your camera hands, real-life hands, to virtual hands. If you have an iPad and you want to swipe something, whatever, that’s seamless. But then, for mixed reality dynamics in the future, we’re not sure what that’s going to look like, because it’s not here yet. So, we need to experiment and figure out what that looks like.
    Frode Hegland: Fabien?
    Video: https://youtu.be/2Nc5COrVw24?t=3265
    Fabien Benetou: Yeah, thank you. It’s actually a continuation of your question, because you asked about the background environment using 360 images, and including your own model. It’s also a question that, you know, I was going to ask, and I guess Gavin could imagine, because I’m a developer. If it’s not enough, if somehow there are features that I want to develop, and they are very weird, and nobody else will care about them, and, as you say, as a start-up you can’t do everything, you need to set priorities. What can I do? Basically, is it open source? If not, is there an API? If there is an API, what has the community built so far?
    Gavin Menichini: Yeah, great question. So, as of now, we don’t have any APIs or open SDKs, or open source code, for users to use. We’ve had this feature request a lot, and our CEO is pondering what his approach should be in the future. So, we do want to do something around that. But, because we’re still so early stage, and we have so many things to focus on, it’s extremely important that we’re very careful with what we work on, and how focused and hard-working we are towards those things. As we continue to progress as a company, and as our revenue increases and we raise subsequent rounds of funding, that gives us the flexibility to explore these things. One of the biggest feature requests we’ve had is an Immersed SDK for our streaming monitor technology, so people can start to play with different variations of what we’re building. But I do know that Renji does not allow any free, open source coding work whatsoever, for a few reasons, legality-wise. I think we had a few experiences in the past where we experimented with that, and it backfired, to where developers were claiming they were owed, they deserved, equity or funding. It was a hot mess. So, we don’t allow anyone to work for us for free, or to give us any form of software in any regard, any work, period, to prevent any legal issues and any claims like that, which is kind of unfortunate. But he’s a stickler and definitely will not budge on that. In the future, hopefully, we’ll have an SDK, or some APIs that are opened up, or open source code, once we’re more successfully established, for people to experiment with and start making their own fun iterations on Immersed. 
    Video: https://youtu.be/2Nc5COrVw24?t=3396
    Brandel Zachernuk: I have a question about the windows. You mentioned that, when somebody has a pro subscription, they can be socially connected, but not share screens. I presume, in an enterprise circumstance, people can see each other’s windows. Have you observed any ways in which people have used their windows more discursively, in terms of having them as props, essentially, for communicating with each other, rather than primarily, or solely for working on their own? The fact that they can move these monitors, these windows around, does that change anything about the function of them within a workflow or a discussion context?
    Gavin Menichini: Yeah. So, to clarify the tiers and their functionality. We have a free tier, where you can connect your computer and traverse the gap, and you get one free virtual display. You cannot, on the free tier, ever share screens. In all of our public rooms, you can’t share screens, regardless of your license. The only place you can share screens is in a private collaboration room, which means you have to be on our elite tier, or a teams tier. On our elite tier, which is our mid, pro-solo tier, you can have up to three other people in the room with you, four total, and you can share screens with each other. And the default is that your screens are never shared. So, if you have four people in a room, and they each have three screens up, you cannot see anyone else’s screen until someone voluntarily shares a screen and confirms it. And then, it will be highlighted red, for security purposes. But if you’re in an environment where, Brandel, you wanted to share your screen, say we’re all sitting at a conference room table, and I have my screens, one, two, three, right here, and I share my middle screen, my screen is then going to pop up in your perspective, where you have control of my shared screen. You can make it larger, make it bigger, shrink it, et cetera. And we’re also going to be building different environment anchors where, say, for example, a normal conference room has a large TV on the wall; in virtual reality, you could take your screen and snap it to that place, and once it’s snapped into that little TV slot, that screen will be automatically shared, and everyone sees it at that perspective, rather than their own. And then, from a communication standpoint, we have teams who will meet together in different dedicated rooms, and then they’ll share screens and look at data together. 
There’s, I can’t remember quite the name, a software development scenario where, when something goes down, they have to come together very quickly. DevOps teams come together, they share screens looking at data to fix a downed server or something, and they can all see and analyse that data together. And we’re exploring the different features we can add to make that experience easier and more robust.
    Brandel Zachernuk: And so, yeah. My question is: are you aware of the ways in which people make use of that, in terms of being able to share and show more things? One of the things about desktop computing, even in the context where people are co-located, co-present in physical meatspace, is that you don’t actually have very good performability of computer monitors. It kind of sucks in Zoom. It kind of sucks in real life, as well. Do people show and share differently as a consequence of being in Immersed? Can you characterize anything about that?
    Gavin Menichini: Yes. So, the answer is yes. They have the ability to share more screens. In meatspace, in the real world, a funny term there, meatspace, but. You can only have one computer screen if you’re working on a laptop, and that’s frustrating. Unless you have a TV, you have to AirDrop, XYZ, whatever. But, in Immersed, you have up to five screens. And so, we have teams of four, and they’ll each share two or three screens at once, and they can have a whole arrangement of data, ten screens being shared, and they can rearrange those individually so it all pops up in front of them, in the order that they want, and they can all watch a huge shared screen of data. That is not possible in real life, because of the technology we provide to them. And then, there are different iterations of that experience where, maybe, it’s two or three screens, here or there. And so, because of the core tech that we have, where you can have multiple screens and then share each of those, that opens up the possibility for more data visualization, because you have more screen real estate. There’s this opportunity to collaborate more effectively than if you had one computer screen on Zoom, which, as you mentioned, is challenging, or even in real life, because in real life you could have a computer and two TVs, but in Immersed you could have eight screens being shared at once. 
    Brandel Zachernuk: And do you share control? Is it something where it’s only the person sharing it has the control, so other people would have read-only access? Or do you have the ability for people to be able to pass that control around? Send the user events such that everybody would be able to have shared control?
    Gavin Menichini: So, not right now, but we’re building that out. For the time being, we want everyone just to use the collaboration tools they are currently using. Use Google Docs. Use Miro. Use Slack. Whatever. So, the current collaboration documents you guys are using now, we just want you to use those applications in Immersed, because whatever you can run on your computer, you can run on your screen in Immersed. It is just your computer, in Immersed. So, we tell people to do that. But now they get the added benefit of deeper connection, actually sitting next to your employee, or your colleague, and now you can have multiple screens being shared. So, it’s like a supercharged productivity experience, collaboration experience. Any other questions? I have about four minutes left, so I want to make sure I can answer all the questions you guys have.
    Video: https://youtu.be/2Nc5COrVw24?t=3724
    Fabien Benetou: I’ll make it a one-minute question. I’ll just speak faster. If I understood correctly, the primitive is the screen. But is there anything else beyond the screen? Can you share 3D assets? Can content be pulled out of the screen? If not, can you take a capture of the screen, either as an image or a video? And is it the whole screen only, or part of the screen? And imagining you’ve done that, let’s say, captured part of the screen as a video of 30 seconds, can you make it permanent in the environment, so that it’s there if I come back with colleagues tomorrow? Because that’s the challenge we have here all the time: we have great discussions and then, what happens to the content?
    Gavin Menichini: So, it’s in our pipeline to incorporate other assets that will be able to be brought into Immersed, and then remain persistent in the rooms. We’ve created the technology for persistent rooms, meaning, whatever you leave in there is going to stay. Very similar to a conference room that you’ve dedicated to a project: you put post-it notes around the wall, and, obviously, come back to it the next day. Same concept in VR. And then, we also have plans to incorporate 3D assets, 3D CAD models, et cetera, into Immersed. But because you have your screens, and teams are figuring out how to collaborate on 2D screens, for the time being we’re saying: just continue to use your CAD modelling software on your 2D computer screen. In the future we’ll have that capability. We also don’t want to be a 3D modelling VR software, so we’re trying to find that balance, which is why it’s been de-prioritized. But it is coming, hopefully in 2022. And then, we have also explored having video files in the form of screens, or image files, or post-it notes. We’re also going to improve our whiteboard experience, which is just one of our first iterations. And so, there are a lot of improvements we’re going to be making in the future, in addition to different assets: photos, videos, 3D modelling software, et cetera. We’ve had that request multiple times and plan on building it in the future.
    Fabien Benetou: Oh, and super quick. It means you get in, you do the work, you get out, but you don’t have something like a trace of it as is right now?
    Gavin Menichini: As in persistence? As in you get in, you leave your screens there?
    Fabien Benetou: Or even something you can extract out of it. Frode was saying that, for example, he gets an email about the time he spent in a session, but is there something else? Again, because usually, maybe not a eureka moment, but you have some kind of realization in the space, thanks to the space and the tools. And how you can get that out is really a struggle.
    Gavin Menichini: I’m not sure, I’m sorry. I’m not sure I’m understanding your question correctly, but well, so it’s…
    Brandel Zachernuk: Maybe I can take a run at it. So, when people play VR games at a VR arcade, one of the things that people will often produce is a sizzle reel of moments in that action. There’s a replay recording, an artifact of the experience. Of that process.
    Gavin Menichini: Okay, yes. So, for the time being there is no functionality in Immersed for that. But Oculus gives you the ability to record what you’re watching in VR. And you can pull that out and take that experience with you, as well as take snapshots. And then, we have no plans on incorporating that functionality into Immersed because Oculus has it, and I think HTC does, and other hardware manufacturers will provide that recording experience for you to then take away with you.
    Frode Hegland: Thank you very much, Gavin, a very interesting, real-world perspective on a very specific issue. So, very grateful. We’ll stay in touch. Run to your next meeting. When this journal issue is out, I’ll send you an update.
    Gavin Menichini: Thank you, Frode. It was a pleasure getting to chat with each of you. God bless. Hope you guys have a great Friday, weekend, and we’ll stay connected.
    Frode Hegland: You too. Take care, bye. 
    Gavin Menichini: Thanks, y’all. 
    Brandel Zachernuk: I’m going to drop at some point, as well. My Fridays are missing the Near Future Laboratory chats because I’ve been joining the second hour of this. So, I want to make sure that I keep my hand in that community as well, because they’re very interesting people too.


    Further Discussion

    Video: https://youtu.be/2Nc5COrVw24?t=3987
    Frode Hegland: Oh, okay. That sounds interesting. Yeah, we can look at changing times and stuff. So, briefly on this, and then on the meeting that I had with Elliot earlier today. This is interesting to us, because they are thinking a lot less about VR than we are. But it is a real and commercial company, and obviously a lot of his words were very salesy. Which is fine. But it literally is a rectangle in the room. That’s it. So, in many ways, it’s really, phenomenally, useful. And I’m very glad they’re doing it. I’m glad we have a bit of a connection to them now. But the whole issue of taking something out of the screen and putting it somewhere else, it was partly using their system that made me realize that’s not possible. And that’s actually kind of a big deal. So that’s that. And in the meeting that Elliot and I had today, he mentioned who it was with, and I didn’t want to put too much into the record on that. But it was really interesting. The meeting was because of Visual-Meta. Elliot introduced us to these people. And Vint. Vint couldn’t be there today. We started a discussion. They have all kinds of issues with Visual-Meta. They love the idea, but then there are implementation issues, blah, blah, blah. But towards the end, when I started talking about the Metaverse thing, they had no idea about the problems that we have learned about. And they were really invigorated and stressed by it. So, I think what we’re doing here, in this community, is right on. I’m going to try now to rewrite some of the earlier stuff, to write a little piece over the weekend on academic documents in the Metaverse, to highlight the issues. And if you guys want to contribute some issues to that document, that would be great, or not, depending on how you feel. But I think they really understood it. What I said to them at the end is: if you have a physical piece of paper in a meeting, you can do whatever you want with it. 
But in the Metaverse, you can only do with the document whatever the room allows you to, which is mind-blowingly crazy. And they represent a lot of really big publishers within medicine. They are under the National Institutes of Health, as I understand it. I’m not sure if Elliot is still in the room. So, yeah. It is good that we are looking in the right areas. 
    Brandel Zachernuk: Yeah, that’s really constructive. For my part, one of the things that I’ve realized is that the hypertext people, the people who understand the value of things like structured writing, and relationship linking, and things like that, are far better positioned than many, possibly most, to understand some of the questions and issues that are intrinsic to the idea of a Metaverse. So, I linked a podcast to some folks, I think it’s called Into The Metaverse, and it was a conversation between a VP of Unreal and the principal programmer, whatever, architect of Unity, and Vladimir Vukićević, I don’t know if I’m garbling that name, who was the inventor of WebGL. Which is the foundation for all of the stuff that we do in virtual reality on the web, as well as just being very good for being able to do fancy graphics, as I do at work, and things like that. But their view of what goes into a Metaverse, what needs to be known about entities, relationships, descriptions, and things, was just incredibly naive. I’ll link the videos, but they see the idea of a browser as being intrinsic. And another person, who’s a 25-year veteran of Pixar, is the inventor of the Universal Scene Description format, USD, which, as you may know, Apple is interested in promoting as the format of choice for augmented reality, quick look files, things like that. And again, just incredible naivete in terms of what are important things to be able to describe with regard to relationships, and constraints, and linkages of the kind that hypertext has. It’s the bread and butter of understanding how to make a hypertext relevant, notionally and structurally, in a way that means that it’s (indistinct). So, yeah. 
It’s exciting, but it’s also distressing to see how much people who are really titans of the interactive graphics field don’t know what this medium is. So, that looks fun.
    Frode Hegland: Yeah, it’s scary and fun. But I think we’re very lucky to have Bob here, because I’ve been very much about the document and so on, and for Bob to say, “Well, actually, let’s use the wall as well”, it helps us think about going between spaces. And what I highlighted in the meeting earlier today was: what if I take one document from one repository, and, let’s say, it has all the meta, so I’ve put a little bit here, a little bit there, but then, I have another document, from a different repository over here, and I draw a connection between them? That connection now is a piece of information too. Where is it stored? Who owns it? And how do I interact with it in the future? These are things that have not even begun to be addressed, because, I think, all the companies doing the big stuff just want everything to go through their stuff.
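The cross-repository connection Frode describes can be made concrete as a data structure. This is only a minimal sketch: no such standard exists yet, and every field name, identifier, and value below is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CrossDocLink:
    """A link between two documents in different repositories.

    The link itself is a piece of information with its own provenance:
    who made it, and which service stores it (neither repository has to).
    All names here are illustrative, not from any existing spec.
    """
    source_doc: str      # identifier of the first document, e.g. a DOI or URI
    target_doc: str      # identifier of the second document
    source_anchor: str   # selector within the source (page, quote, span...)
    target_anchor: str   # selector within the target
    created_by: str      # who drew the connection
    stored_at: str       # which service holds the link record itself
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a link drawn between documents in two different repositories.
link = CrossDocLink(
    source_doc="repo-a/doc-123",
    target_doc="repo-b/doc-456",
    source_anchor="page=4",
    target_anchor="page=1",
    created_by="reader@example.org",
    stored_at="links.example.org",
)
```

The point of the sketch is the `stored_at` and `created_by` fields: once the link is first-class data, the questions of ownership and storage that the meeting raised become explicit attributes rather than something implicit in one vendor's platform.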
    Bob Horn: And what kind is it? That is the connection.
    Frode Hegland: Yeah, exactly. So, these are early, naive days, and we need to produce some interesting, worthwhile questions here. Fabien, I see your big yellow hand.
    Video: https://youtu.be/2Nc5COrVw24?t=4369
    Fabien Benetou: I’ll put the less yellow hand on the side. Earlier, when I said I don’t know what I’m doing, it wasn’t fake modesty, or trying to undermine my work, or that kind of thing. I actually mean it. I do a bunch of stuff, and some of the stuff I do, I hope, is interesting. I hope it is even new, and might lead to other things. But in practice, it’s not purely random, and there are some, let’s say, not heuristics, but some design principles, a philosophy behind it, an understanding of some, hopefully, core principles of neurology, or cognitive science, or just engineering. But in practice, I think we have to be humble enough about this being a new medium. Figuring it out is not trivial, it’s not easy. Part of it is intelligence and knowledge, but a lot of it is all that, plus luck, plus attempting things.
    Frode Hegland: Oh, I agree with you. And I see that in this group. The reason I said it was I just wanted him to have a clue of the level of who we are in the room. That’s all. I think our ignorance in this room is great. I saw this graphic when I started studying, I haven’t been able to find the source, but it showed that if you know this much about a subject, the circumference, which is the ignorance, is small. The more you know, the bigger the circumference is. And I found that to be such a graphic illustration of: the more you know, the more you know you don’t know. We need to go all over the place. But at least we’re beginning to see some of the questions. And I think that’s a real contribution of what we’re doing here. So, we just have to keep on going. Also, as you know, we now have two presenters a month, which means, for the next two or three months, I’ve only signed up one. Brandel, you are going to be doing something, hopefully, in two to three weeks, right?
    Brandel Zachernuk: Yeah. I’m still chipping away. Then I realized that there’s some reading I need to do, in order to make sure that I’m not mischaracterizing Descartes.
    Frode Hegland: Okay, that sounds like fun. Fabien, would you honour us, as well, with doing a hosted presentation over the next month or two or something?
    Fabien Benetou: Yeah, with pleasure.
    Frode Hegland: Fantastic! Our pathetic little journal is growing slightly less pathetic by the month.
    Fabien Benetou: I can give a teaser on… I don’t have a title yet, but let’s say: what a librarian would do if they were able to move walls around.
    Frode Hegland: That’s very interesting. The one we had on Monday, with Jad, was good. It was completely different from what we’re looking at, looking at identity. And for you to now talk about that aspect, kind of a spatial aspect, is very interesting.
    Bob Horn: I’m looking forward to whatever you write about this weekend, Frode. Because, for me, the summaries of our discussions, with some organization, not anywhere near perfect organization, I’m not asking for that, but some organization, some patterns, are what is important to me. And when I find really good bunches of those, then I can visualize them. So, I’m still looking for some sort of expression of the levels of where the problems are, as we see them now. In other words, what I heard today, with Immersed, was a set of problems at a certain level, to some degree. And then, a little bit on the organization of knowledge, but not a lot, though that’s what came up in our discussion afterwards and so forth. So, whenever there’s that kind of summary, I really appreciate whatever you do in that regard, because I know it’s the hardest work at this stage. So I’m trying to say something encouraging, I guess.
    Frode Hegland: Yeah, thank you, Bob. That’s very nice. I just put a link on this document that I wrote today. The next thing will be, as we discussed. But information has to be somewhere. It’s such an obvious thing, but it doesn’t seem to be acknowledged. Because in a virtual environment, we all know that you watch a Pixar animation, they’ve made every single pixel on the screen. There is no sky even. We know that. But when it becomes interactive, and we move things in and out. Oh, Brandel had a thing there.
    Brandel Zachernuk: One of the things that Guido Quaroni talks about, as well as a lot of other people, is the influences and contributions of Íñigo Quílez. Quílez makes Shadertoy, I don’t know if you’ve ever seen or heard of that. But it’s this raymarching-based fragment shader system for building procedural systems. And so, none of the moss in Brave, if you’ve seen that film, exists. Nobody modeled it. Nobody decided which pieces should go where. What they did was, Quílez has this amazing mind for a completely novel form of representation of data. It’s called the raymarched Signed Distance Field shader. And so it’s all procedural. And all people had to do was navigate through this implicit virtual space to find the pieces that they wanted to stitch into the films. And so, it never existed. It’s something that was conjured on a procedural basis, and then people navigated through it. So yes, things have to exist. But sometimes that’s not because people make it. Sometimes it’s because people make a latent space, and then they navigate it. And I think that the contrast between those two things is fascinating, in terms of what that means creative tools oblige us to be able to do. Anyway.
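    (Editorial aside: the sphere-tracing idea described here can be sketched in a few lines. The Python fragment below is an illustration of the general technique, not Quílez’s actual code: a signed distance function reports how far a point is from the nearest surface, and the raymarcher repeatedly steps forward by exactly that distance.)

```python
import math

def sdf_sphere(p, center, radius):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    # Sphere tracing: advance along the ray by the distance to the nearest surface.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # hit: distance travelled along the ray
        t += d
        if t > max_dist:
            break
    return None  # miss

# A ray starting at z = -5, pointing at a unit sphere at the origin.
hit = raymarch((0.0, 0.0, -5.0), (0.0, 0.0, 1.0),
               lambda p: sdf_sphere(p, (0.0, 0.0, 0.0), 1.0))
```

    Because each step is the distance to the nearest surface, the march can never overshoot: the scene is never modeled as a mesh, only described implicitly and navigated.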
    Frode Hegland: Oh, yeah. Absolutely. Like No Man’s Sky and lots of interesting software out there. But it’s still not in the world, so to speak. One thing I still really want, and I’m going to pressure you guys every time, no, it’s not to write your bio, but it is some mechanism where, as an example, our journal, I can put it in a thing so that you guys can put it in your thing. Because then we can really start having real stuff that is our stuff. So if you can keep that in the back of your mind. Even if you can just spec how it should work, I’ll try to find someone to do it, if it’s kind of rote work and not a big framework for you guys.
    Brandel Zachernuk: Yeah, I definitely intend to play more with actually representing text again. And somebody made a sort of invitation slash prompt blast challenge to get my text renderings to be better. Which means that I’ll need something to do it better on. And so, yeah. I think that would be a really interesting target goal.
    Frode Hegland: Awesome. Fabien, I see you have your hand, but on that same request to you guys, imagine we already have some web pages where you can click at the bottom, view in VR, when you’re in the environment. That’s nice. Imagine if we have documents like that, that’ll be amazing. And I don’t know what that would mean, yet. There are some thoughts, but it goes towards the earlier. Okay, yes. Fabien, please?
    Fabien Benetou: Yeah, I think we need to go a bit beyond imagining. Then we can have some sandbox, some prototypes of the documents. We have recorded, that’s how I started, the first time I joined, you mentioned Visual-Meta. And then, I put a PDF and some of the metadata in there. No matter how the outcome was going to exist, so I definitely think that’s one of the most interesting ways to do it. A quick word on writing. My personal fear about writing is, I don’t know if you know the concept, and the name of the person who coined it is on the tip of my tongue, but, yeah, “idea debt”. So the idea is that you have too many ideas, and at some point, if you don’t realize some of them, if you don’t build, implement, make them happen, whatever the form, it’s just crushing. And then, let’s say, if I start to write, or prepare for the presentation I mentioned just 30 minutes or 10 minutes ago, the excitement and the problem is that, for sure, summarizing it and stepping back is going to bring new ideas. Like, “Oh, now I need to implement. Now I need to test it”. There is validation in it. I’m not complaining or anything. Just showing a bit of my perspective, my fear of writing. And also because, in the past, at some point I did just write. I did not code anything. It felt good in a way. But then also, a lot of it was, I don’t want to say bullshit, but maybe not as interesting. So I’m just personally trying to find the right balance between summarizing, sharing, having a way that the content can be reused, regardless of the implementation, any implementation. Just sharing my perspective there.
    Frode Hegland: That is a very important perspective. And it is very important to share. And I think we’re all very different in this. And for this particular community, my job as, quote-unquote, editor, is to try to create an environment where we’re comfortable with different levels. Like Adam, he will not write. Fine. I steal from Twitter, put it in the journal, and he approves it. Hopefully. Well, so far he has. So, if you want to write, write. But also, I really share, so strongly, the mental thing you talked about. We can’t know what it’s like to have something until it exists. And we say, if an idea is important write it down, because writing it down, of course, helps clarify. But that’s only if it’s that kind of an idea. Implementing, in demos and code, is as important. I’ve been lucky enough to be involved with building our summer house in Norway, and doing a renovation here. And because it’s a physical environment, even doing it in SketchUp is not enough. I made many mistakes. Thankfully, there were experienced people who could help me see it in the real thing. Sometimes we had to put boards up in a room to see what it would feel like. So, yeah. Our imaginations are hugely constrained. So, it’s now 19 past. And Brandel was suggesting he had to go somewhere else. I think it’s okay, with a small group, if we finish at half-past, considering this will be transcribed, anyway. And so, let’s have a good weekend. Unless someone wants a further topic discussion, which I’m totally happy with also.
    Brandel Zachernuk: Yeah. I’m looking forward to chatting on Monday. And I will read through what you sent to the group that you discussed things with today. Connecting to people with problems that are more than graphical, and more than attendant to the Metaverse, I think is really fascinating. Providing they have the imagination to be able to see that what they are talking about is a “Docuverse”. It’s these sorts of connected concepts that Bob has written about. I’ve got the book but it’s on the coffee table. The pages after 244. The characterization of the actual information and decision spaces that you have. It’s got the person with the HMD, but then it’s situated in an organization where there are flows of decisions. And I think that recognizing that we can do work on that is fascinating.
    Bob Horn: I can send that to everybody, if you like.
    Frode Hegland: Oh, I have it. So, without naming names or exactly who I was speaking to today, since we’re still recording. The interesting thing is, of course, that, starting with Visual-Meta, this feeds into something some part of the organization desperately wants; they’ve been pushing for years. But there are resources, and organization, and communication, all those real-world issues. So then, a huge problem is, I come in as an outsider and I say, “Hey, here’s a solution. It’s really cheap and simple”. It’s kind of like I’m stealing their thunder, right? I am not doing that, I’m just trying to help them realize what they already want to do. And today, when they talked about different standards, I said, “Look. Honestly, what’s in Visual-Meta, I don’t care. If you could, please, put it in BibTeX, the basic stuff, but if you want to have some JSON in there, it’s not something I would like, but if you want to do it there’s nothing wrong with that”. So, to try to make these people feel that they are being enabled, rather than someone kind of moving them along, is emotionally, humanly difficult. And also, for them to feel that they’re doing something with Vint Cerf. All of that, hopefully, will help them feel a bit of excitement. But I also think that the incredibly hard issues with the Metaverse that we’re bringing up also unlock something in their imagination. Because, imagine if we, at the end of this year, have a demo, where we have a printed document, and then we pretend to do OCR, we don’t need to do it live, right? And then, we have it on the computer, very nice. And now, suddenly, we put on a headset. You all know where I’m going with this, right? We have that thing. But then, the crucial question you kept asking Gavin, and I’m glad you both asked it, Fabien and Brandel: what happens to the room when you leave it? What happens to the artifacts and the relationships if we solve some of that?
What an incredibly strong demo that would be. And also, was it a little bit of a wake-up call for you guys to see that this well-funded new company is still dealing with only rectangles?
    Brandel Zachernuk: No. I know from my own internal experience just how coarse the thinking is, even with better funding.
    Frode Hegland: Yeah. And the greatest thing about our group is, we have zero funding. And we have zero bosses. All we have is our honesty, community, and passion. Now, it’s a very different place to invent from. But look at all the great inventions. Vint was a graduate student, Tim Berners-Lee was trying to do something in a different lab. You know all the stories. Great innovations have to come from groups like this. I don’t know if we’re going to invent something. I don’t know. I don’t really care. But I really do care, desperately, that we contribute to the dialogue.
    Brandel Zachernuk: Yeah, I think that’s valuable. I think that the fact that we have your perspective on visual forms of important distilled information and thought is going to be really valuable. And one of the things I’d like to do, given that you said that so many people make use of Vision 2050, is to start with that as a sculpture, as a system to be able to jump into further detail. Do you have more on that one?
    Bob Horn: Well, I can take it apart. I can do whatever different things we want to do with it. For example, when we were clearing it with the team that created some of the thought that went into it, the backcast thought, I would send the long trail of the four decades of transportation to Boeing, to Volkswagen, and to Toyota. I didn’t send it to the rest of the people. So, I could take that out, I actually took that out and sent a PDF of that, only that, to them. And that’s one dimension. Another dimension is that five years later, I worked on another project that was similar, called POLFREE. Which is also on my website. And it narrowed the focus to Europe, to the European Union, rather than the whole world. But the structure is similar in many ways. So each one of those is extractable. Then also, I have a few… In the two or three years after working on the Vision 2050, I would give lectures of different kinds. And people would ask me, “Well, how are we doing on this or that requirement?” And so, I would try to pull up whatever data there was, two, or three, or four years later, and put that in my slides, so that material is available. So we could extract, you could demo, at least, “Here’s what we thought in 2010 and here’s what it looked like in 2014”. For one small chunk of the whole picture. So, yeah. And I have several, maybe six or eight at least, of those where I could find data easily and fast. So, there’s a bit of demo material there that one could portray a different kind of a landscape than the one that you pointed out just a minute ago.
    Brandel Zachernuk: Yeah. That would be really interesting to play with. I was just looking at some of the things. I think that the one thing that I had seen of the Vision 2050 was the fairly simple one, this node graph here; the “nine billion people live well and within the limits of the planet” one I hadn’t seen yet. The pathways toward a sustainable 2050 document that you linked here on your site has a ton more information. And, yeah. One of the things that I’m curious about, one of the things that I think I will do to play with it first, is actually get it into, not a program that I write, but a 3D modelling app, to tear it apart, and think about the way in which we might be able to create and distribute space for it. But first, do you have thoughts about what you would do if this was an entire room? It obviously needs to be a pretty big mural, but if it was an entire room, or an entire building, do you have a sense of the way in which it would differ?
    Bob Horn: Until you asked the question, and put it together with the pages from the old book, I hadn’t really thought of that. But from many of the places in Vision 2050 one would have pathways like this. This was originally a PERT chart, way back when, that I was visualizing, because I happened to have, early in my career, edited a book on PERT charts for DuPont. And so, that’s a really intriguing question. Extracting and laying it out, and then connecting those, and also flipping the big mural, the time-based mural in Vision 2050, making that flat, bringing different parts of it up, I think would be one of the first ways that one would try to explore that, because then, one could (indistinct) pathways, and alternatives, and then linkages. So, they’re different. Depending on one’s purpose, thinking purpose, one would do different things.
    Fabien Benetou: A brief note here. I believe, using Illustrator to make the visuals, I believe Illustrator can also save to SVG. And SVG can then be relatively easily extruded, to transform a 2D shape into a 3D shape. Honestly, doing that would probably be interesting but very basic, or very naive. It’s still, I think, a good step to extrude parts of the graph with different depths based on, I don’t know, colour, or meaning, or position, or something like this. So, I think it could be done. But, if you could export one of the posters in that format, in SVG, I think it would be fun to tinker with. But I think, at some point, you personally will have to consider, indeed, the question that Brandel asked: if you have a room, rather than a wall, beyond the automatic extraction or extrusion, how would you design it?
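    (Editorial aside: the extrusion step described here is simple to sketch. The Python fragment below assumes the SVG shape has already been flattened into a list of 2D points; SVG path parsing is omitted, and the function name is illustrative, not from any particular library.)

```python
def extrude_polygon(points_2d, depth):
    """Turn a flat polygon (e.g. one shape from an SVG export) into a 3D prism.

    Returns (vertices, faces): a bottom ring at z=0, a top ring at z=depth,
    two cap faces, and one quad face per edge.
    """
    n = len(points_2d)
    bottom = [(x, y, 0.0) for x, y in points_2d]
    top = [(x, y, depth) for x, y in points_2d]
    vertices = bottom + top
    faces = [list(range(n)), list(range(n, 2 * n))]  # bottom and top caps
    for i in range(n):
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])  # side quad joining the two rings
    return vertices, faces

# A unit square extruded to half a unit of depth.
verts, faces = extrude_polygon([(0, 0), (1, 0), (1, 1), (0, 1)], depth=0.5)
```

    Varying `depth` per shape, by colour or by meaning as suggested above, is then a one-line change at the call site.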
    Brandel Zachernuk: Yeah. It’s something that I think would be really useful as an exercise, if you want to go through one of those murals and with a sketchbook, just pencils. And at some point, you can go through with us to characterize what I think, like you said, different shapes, different jobs call for different shapes through that space. But one can move space around, which is exciting. Librarians can move their walls around.
    Bob Horn: I was going to say, to strike another chord, just as from the demonstration we saw earlier this morning: the big mural could be on one wall. There was a written report. There is a 60 or 80-page report that could be linked in various ways to it. And it exists. And then, also, in that report, there’s a simplification of the big mural. It reduces the 800 steps in the mural to about 40. And it’s a visual table look. So, already there are three views, three walls, and we’ve already imagined putting it flat on the floor and things popping up from it. All right, there we go. There’s a room for you.
    Brandel Zachernuk: Exciting, yeah. I think that’s a really good start. And from my perspective, something that I can and will play with is, starting from that JPEG of the PDF, I’ll peel pieces of that off and try to arrange them in space, thinking about some of the stuff that Fabien’s done with the Visual-Meta, virtual Visual-Meta. As well as what Adam succeeded in doing, in terms of pulling the dates off, because I think that there’s some really interesting duality of views, multiplicity of representations, that we can get into, as well as being able to leverage the idea of having vastly different scales. When you have a, at Apple we call it a type matrix, but just the texts and what’s a heading, what’s a subhead. But the thing is that, except in the most egregious cases, which we sometimes do at Apple, the biggest text is no more than about five times the smallest text. But in real space you can have a museum, and the letters on the museum wall or in a big room are this big. And then you have little blocks like that thing. And there’s no expectation for them to be mutually intelligible. There’s no way you can read this while you’re reading that. But because of the fact that we have the ability to navigate that space, we can make use of those incredibly disparate scales. And I think it’s incumbent on us to reimagine what we would do with those vastly different scales that we have available, as a result of being able to locomote through a virtual space.
    Bob Horn: Well, let me know if you need any of these things. I can provide, somehow. I guess you and I could figure out how to do a dropbox for Illustrator or any other thing that can be useful for you.
    Brandel Zachernuk: Yeah, thank you. I may ask for the Illustrator document. One of the things that I’ve been recently inspired by: there’s an incredible team at Apple that I’m trying to apply for, called Prototyping. And one of the neat things that they have done over the years is describe their prototyping process. And it mostly involves cutting JPEGs apart and throwing them into the roughest thing possible, in order to be able to answer the coarsest questions possible first. And so, I’m very much looking forward to doing something coarse-grained, with the expectation that we then have a better sense of what it is we would want to do with more high-fidelity resources. So, hopefully that will bear fruit, and nobody should be, hopefully, too distraught by misuse of the material. But I very much enjoy the idea of taking a fairly rough hand to these broad questions at first, and then making sure that refinement is based on actual resolution, in the sense of being resolved, rather than pixel density.
    Bob Horn: Yeah, well, okay. If you want JPEGs we can make JPEGs too.
    Frode Hegland: You said it almost as a throwaway thing there. Traverse. But one thing that I learned, Brandel, particularly with your first mural of Bob’s work, is that traversal, unless you’re physically walking, if you have a room-scale opportunity, is horrible. But being able to pull and push is wonderful. And I think that kind of insight that we’re learning by doing is something we really should try to record. So, I’m not trying to push you into an article. But if you have a few bullets that you want to put into Twitter, or send to me, or whatever, as in, this, in your experience, has caused stomach pain, this hasn’t. Because also, yesterday, I saw a… You know I come from a visual background, and have photography friends, and do videos, and all that stuff. Suddenly, a friend of mine, Keith, whom some of you have met, we were in Soho, where he put up an 8K 360 camera, and it was really fun. So, I got all excited, went home, looked up a few things, and then I found the stereo 180 cameras. And I finally found a way to view it on the Oculus. It was a bit clunky, but I did. It was an awful experience. There’s something about where you place your eye. When we saw the movie Avatar, it was really weird that the bit that is blurry would actually be sharp as well, but somewhere else. Those kinds of effects. So, with stereoscopic imagery, if it isn’t exactly right on both eyes and you’re looking at the exact spot, it’s horrible. So, these are the things we’re learning. And if we could put it into a more listy way, that would be great. Anyway, just since you mentioned it.
    Brandel Zachernuk: Yes. It’s fascinating. And that’s something that Mark Anderson also observed when he realized that, unfortunately, the Fresnel lenses that we make use of in current generation hardware mean that it’s not particularly amenable to looking with your eyes like that. You really have to be looking through the center of your headset in order to get the best view. You have this sense of the periphery. But it will tire anybody who tries to read stuff down there, because their eyes are going to start hurting.
    Frode Hegland: Yeah. I still have problems getting a real good sharp focus. Jiggle this, jiggle that. But, hey! Early days, right? So when it comes to what we’re talking about with Bob’s mural, and the levels, and the connections, and all of that good stuff, it seems to be an incredibly useful thing to experiment with exactly these issues. What does it actually mean to explode it, et cetera? So, yeah. Very good. 
    Fabien Benetou: Yeah. I imagine that has been shared before. But just in case: Mike Alger, who is, or at least who was, I’m not sure right now, a designer at Google working on UX, wrote up some design principles a couple of years ago. And not all of these were his, but he illustrated them quite nicely. So, I think it’s a good summary.
    Brandel Zachernuk: Yes, I agree. He’s still at Google; he was working on Earth and YouTube. Working on how to present media, and make sure that it works seamlessly, so that you’re not lying about what the media is, presenting a YouTube video in VR in a way that isn’t just a flat applied screen or whatever. But also, making sure that it’s something that you can interact with as seamlessly as possible. So, it’s nice work, and hopefully, if Google ramps its work back up into AR and VR, they can leverage his abilities. Because they’ve lost a lot of people who were doing really interesting things. I don’t know if you saw, Don McCurdy has now moved to The New York Times to work on 3D stuff there. And that’s very exciting for them. But a huge blow for Google not to have him.
    Frode Hegland: Just adding this to our little news thing. Right. Excellent. Yeah. Let’s reconvene on Monday. This is good. And, yeah. That’s all just wonderful. Have a good weekend.


    Chat Log

    16:46:14 From Fabien Benetou : my DIY keyboard passthrough in Hubs 😉
    https://twitter.com/utopiah/status/1250121506782355456
    using my webcam desktop
    16:48:25 From Frode Hegland : Cool Fabien
    16:50:49 From alanlaidlaw : that’s the right call. APIs are very dangerous in highly dynamic domains
    16:51:47 From Fabien Benetou : also recent demo on managing screens in Hubs
    https://twitter.com/utopiah/status/1493315471252283398 including capturing images to move them around while streaming content
    17:03:43 From Fabien Benetou : good point, the limits of the natural metaphor, unable to get the same affordances one does have with “just” paper
    17:04:07 From Frode Hegland : Carmack?
    17:04:16 From Frode Hegland : Oh that was Quake
    17:04:48 From Frode Hegland : Can you put the names here in chat as well please?
    17:05:16 From Fabien Benetou : Vladimir Vukićević iirc
    17:05:53 From Frode Hegland : Thanks
    17:06:40 From Brandel Zachernuk : This is Vukićević:
    https://cesium.com/open-metaverse-podcast/3d-on-the-web/
    17:07:17 From Brandel Zachernuk : And Pixar/Adobe, Guido Quaroni:
    https://cesium.com/open-metaverse-podcast/the-genesis-of-usd/
    17:11:09 From Frode Hegland : From today to the NIH:
    https://www.dropbox.com/s/9xyl6xgmaltojqn/metadata%20in%20crisis.pdf?dl=0
    17:11:25 From Frode Hegland : Next will be on academic documents in VR
    17:12:07 From Fabien Benetou : very basic but the documents used in
    https://twitter.com/utopiah/status/1243495288289050624 are academic papers
    17:13:19 From Frode Hegland : Fabien, make an article on that tweet?…
    17:13:30 From Fabien Benetou : length? deadline?
    17:13:34 From Frode Hegland : any
    17:13:44 From Frode Hegland : However, do not over work!
    17:13:54 From Frode Hegland : Simple but don’t waste time editing down
    17:14:07 From Fabien Benetou : sure, will do
    17:14:11 From Frode Hegland : Wonderful
    17:14:52 From Fabien Benetou : (off topic but I can recommend
    https://podcasts.apple.com/be/podcast/burnout-and-how-to-avoid-it/id1474245040?i=1000551538495
    on burn out)
    17:28:05 From Brandel Zachernuk :
    https://www.bobhorn.us/assets/sus-5uc-vision-2050-wbcsd-2010-(1).pdf
    17:28:17 From Brandel Zachernuk :
    https://www.bobhorn.us/assets/sus-6uc-pathwayswbcsd-final-2010.jpg
    17:39:10 From Fabien Benetou : https://www.mikealger.com/
    17:39:27 From Fabien Benetou : design principles for UX in XR, pretty popular

    Frode Hegland: Academic & Scientific Documents in the Metaverse

    Recall the world before it all became digital. You are in a meeting where you have a printout of a relevant document and a notepad. You underline relevant parts of the document, and you write notes and draw diagrams in your notepad. You are also given a stack of index cards so that you can all do some brainstorming, and those cards are pinned to a wall and moved around as you discuss them as a group. The facilitator even pins a few lines of string between related cards. You take a picture of this and, since you don’t need the document you printed out (the meeting went so well), you fold it into a paper airplane and fly it into the bin.
    Now picture yourself in a fully digital environment where you have the same document and notepad, and you use systems like Google Docs to collaborate, and even a projector or a big screen for the cards to be put up and moved around by the facilitator. This is pretty much the office life many of us live today. You can’t exactly fly the airplane to the bin; you have given up arbitrary interactions for those which are more useful in a work environment, such as the ability to instantly edit and share your information. Every environment you work in will of course have tradeoffs as to what you can do there.
    So let’s go to the near future, don our AR/VR headgear, and enter a meeting in the Metaverse with the same document and a notepad, in a richly interactive knowledge room. You will now be able to do magical things, as we can dream about today, and even build demos of:

  • You can spread the document out and have it float in the air where you want it to.
  • Any included diagrams can be pulled out and enlarged to fill a wall, where you can discuss and annotate them.
  • Any references from that document can be visualised as lines going into the distance and a tug on any line will bring the source into view.
  • You can throw your virtual index cards straight to a huge wall and you and the facilitator can both move the cards around, as well as save their positions and build sets of layouts.
  • Lines showing different kinds of connections can be made to appear between the cards.
  • If the cards have time information they can also be put on a timeline, if they have geographic information they can be put on a map, even a globe.
  • If there is related information in the document you brought, or in any relevant documents, they can be connected to this constellation of knowledge.
  • What you can do is only limited by our imagination and the tools provided. And it is also limited by the enabling infrastructures. What you cannot do is leave the room with this knowledge space intact. The actions you can perform on the knowledge elements in the room are entirely predicated on the ‘affordances’ the room gives you, to use a term from psychology which is also used in human-computer interaction. It is akin to taking a picture from one picture-editing program to another: even though it is the same picture, you cannot expect to be able to perform exactly the same functions, such as special photographic filters. The difference in the Metaverse will be that the entire environment is software, both the visual aspects of the environment and the interactions you will have, and that means it will be owned by someone. Meta owns everything you do in their Quest headsets when in their environments, such as Horizon Workrooms; you cannot perform operations which they have not made possible through programming the space they own. Apple and Google will try to own the knowledge spaces they provide as well.
    Consider just a few documents. Currently you cannot fully open a document into a VR space: you can either view your Mac or Windows computer screen, or you can have the document as sheets. But let’s skip ahead to when you can indeed open the document and its metadata is available to you.

    You open a document in the knowledge space and you:
  • Pull the table of contents to one side for easy overview.
  • Throw the glossary into another part of the room.
  • Throw all the sources of the document against a wall.
  • You manipulate the document with interactions even Tom Cruise would have been jealous of in Minority Report†.
  • You read this new document with the same interactions and decide to see the two documents side by side with similarities highlighted with translucent bands, Ted Nelson style.
    Then you have a meeting and you have to leave this knowledge room. Your next meeting is in a different type of room, developed by a different company, and the work you have just done is so relevant to your next meeting that you wish you could take it across, but you cannot. The data for how the information is displayed, and what interactions you can do, is determined by the room you are in, since that is the software which makes the interactions possible. What we need is to develop open standards for how data, in the form of documents but also all other forms of data, can be taken into these environments, and for how the resulting views, which is to say arrangements, of this information are stored and handled. How will they be stored, how will they be accessible, and who will own them? This will be for us to decide, together. Or we can let commerce fence us in.
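    As a sketch of what such an open standard might look like at its very simplest, consider the shape below. It is entirely hypothetical, not a proposed specification: cards carry their content and position, and connections are stored separately, so a different room could re-create the arrangement with its own affordances.

```python
import json

# Hypothetical, minimal interchange format for a knowledge-room layout.
layout = {
    "cards": [
        {"id": "c1", "text": "Visual-Meta", "position": [0.0, 1.5, -2.0]},
        {"id": "c2", "text": "Docuverse", "position": [1.0, 1.5, -2.0]},
    ],
    "connections": [
        {"from": "c1", "to": "c2", "kind": "related"},
    ],
}

def export_layout(layout):
    # Serialize the arrangement so it can survive leaving the room.
    return json.dumps(layout, indent=2)

def import_layout(text):
    # A different room, by a different vendor, reads the same arrangement back.
    return json.loads(text)

restored = import_layout(export_layout(layout))
```

    The point is not the particular keys, but that the arrangement lives in an open, room-independent form rather than in one vendor’s database.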

    Frode Hegland: Reusable ‘bits’ : Lists and Constellations

    Another think piece, where the only real point is for a user to give a bullet list a name in a document and have this be added to Visual-Meta on export, so that it becomes available externally, the same as headings, glossary, etc.

    Lines of thought

    There are many things I write which I would like to keep and reuse as and when I need them. One is definitions, which can be covered by glossary terms; another is references to books, articles, websites and social media posts, which can be covered by a library; and the other is lists, including links from lists and collaborations on lists. As I was working on the document setting out my initial thoughts for augmented environments† I added quite a few lists, outlining possibilities for what elements we have to work with. And then I hid them in the Appendix, so they are not available for the rest of the team to work on. OK, I call them lists, but they should not have to be linear, so let me use one of my new favourite terms: Constellations.
    So, let’s consider a basic unit: a list of data types we can potentially use in our augmented environment. I wrote one of those in the last article, so I’ll put it below for reference. I gave it a very casual heading, and that may be useful in the original document, but not beyond it.
    Let us consider that, in the world I am interested in, we can’t simply have things in an app and store them that way. We have to design robustness way past our own passing. Let’s also keep in mind that we use stable/frozen documents as a core component (Visual-Meta PDFs). There has been interesting work in this area, including Ted Nelson’s ‘transclusions’ and Apple’s ‘publish and subscribe’.
    If I now select the bullet points below and receive a menu option for this re-usability, what should it say, and how should I, or someone I share it with, retrieve it later? Should others be able to edit it? What happens if I update it and it’s already in a PDF?
    I suggest we have a version in the published PDF, a version in the user’s system (yes, the application’s database, for private use, same as the manuscript document itself) and one posted online, maybe to a WordPress site. When the document is exported to PDF, it is encoded in an endnote (or something else) that this list has also been posted as a unit to a WordPress page. The user-reader can then manually check whether it has been updated, and the reader software can do the same, maybe adding a visual indicator on the list in the document to suggest the user check. Changing the full display will likely be too messy in PDF.
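    A minimal sketch of this reader-side update check, assuming the list is available both as an object parsed from the PDF and as a copy fetched from the WordPress page (both object shapes are hypothetical, not an implemented format):

```javascript
// Sketch: compare a list embedded in a PDF with the latest copy posted online.
// The { items: [...] } shape is an illustrative assumption from the text above.
function listNeedsUpdateCheck(embeddedList, postedList) {
  // A different length, or any differing item, means the posted copy has changed
  if (embeddedList.items.length !== postedList.items.length) return true;
  return embeddedList.items.some((item, i) => item !== postedList.items[i]);
}
```

    The reader software could show the suggested visual indicator whenever this returns true.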
    So here we go, after messing around with it quite a few times: when the user does cmd-8 to make a bullet, the system asks for a name for that list (the user can keep the default name, which is selected, ‘List 1’, or type to overwrite it). The result is a title shown before the list, as illustrated below: indented, bold and with a colon. The user can cmd-click on the heading to change the name, and double-click to fold (hide all the bullets) and unfold. We may also experiment with this text being in the heading font to make it clearer to the user that it is not normal text.

    Available Data for AR:
    Book data

  • Journal data (though in a limited form apart from Mark’s work)
  • Financial data (stocks, currencies)
  • Weather information
  • Historical news
  • Twitter feeds
  • Our own dialogue in text (rough) and video and audio forms
  • Websites and web searches
  • Likely anything Siri can provide
  • Maths equations and formulas
  • Scientific data
  • Wikipedia
  • Wikipedia sidebars
  • Wikidata
  • Laws
  • Individual health data/performance
  • Instant messages (at least in our own world)
    When folded in Author (or any authoring software) this becomes the same, but the bullet is now solid and an ellipsis follows:

    • Available Data for AR…

    The title of this bullet list and all the bullets will be added to the Visual-Meta on export. This means it will be available to extract/view separately, just like all the other elements in Visual-Meta, such as headings, glossary terms and references. This then allows the user in augmented space to ‘pull out’ any of the lists they want; they can then pin them in 3D space or insert them into any document (cite them), where the list will be pasted along with its citation. This will allow the resulting document produced later to either show the original list, complete, or carry a citation stating ‘based on’ to show it has been modified. A benefit of this is that when documents with this Visual-Meta are in a known environment, we can have back-links even for lists, to see how they have changed over time and by whom.
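    As a sketch of how a named list could be serialised on export, here is a BibTeX-flavoured rendering; the @{list-start} tag and field names are illustrative assumptions, not part of the published Visual-Meta format:

```javascript
// Sketch: serialise a named bullet list into a BibTeX-like Visual-Meta block.
// Tag and field names below are hypothetical, for illustration only.
function listToVisualMeta(name, items) {
  const escaped = items.map(i => i.replace(/[{}]/g, '')); // strip braces that would break the block
  return [
    '@{list-start}',
    `name = {${name}},`,
    `items = {${escaped.join(' | ')}},`,
    '@{list-end}'
  ].join('\n');
}
```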
    When these pulled-out elements are in a space, they are ‘owned’ by the space in terms of recording their location and so on, but they are still tied to the original document in the sense that the pull-out has the full citation information of where it came from. I guess this means that it has to be divorced from the original document, since that document can be removed from the space at any time and this list should not go with it. Ideally I would like to think of this list as a data snippet with its own Visual-Meta.

    Visual-Meta Backlinks

    In the same way that we can have such backlinks for lists, we can have backlinks for citations as well, and they can even include references to exactly where the citations come from in the document, if the user cited a specific section, giving us the possibility of Ted Nelson lines.

    Questions for future items

    What about a larger corpus, such as the timeline I have edited and added to The Future of Text? It would be useful to have that available as a thread in a timeline on the history of text, which should be viewable alongside any other threads.

    So… Keep it in the document

    So what I thought is that maybe the data should continue to live in the published/shared document but be easy to access. Suppose I selected the list above, assigned it a heading/title of ‘AR Data’ and then published this document, with the list included in this document’s Visual-Meta. With this document on my ‘shelf’ I should be able to extract this information, the same as glossary terms, references and headings, and ‘drag it’ onto the space where I want to have it and use it.
    What if I could tag these lists as well as give them a name, much like we can for WordPress posts? I could then choose to see all the lists (a list is a type of data which is referable without having to explicitly assign it, since Author knows the bulleted items are a list) with the tag AR, see that I want the one called ‘AR Data’, and use it.
    Once this is available to view and work with, I have to decide how it is anchored for me: is it still anchored to the original document, or to the new one I am working on, or to the physical space, or to the selection of data, or what? In other words, if I move to a different location, is it stuck to me or to the document(s)? What happens when my workspace gets messy; how can I hide and reveal it?
    These are questions we will have to work through, but I think keeping the data in the document, while viewable anywhere, is a safer method than putting it in some sort of database.

    Conversation: Adam’s Experiment

    Adam Wern, Frode Hegland, Alan Laidlaw, Brandel Zachernuk

    Adam Wern: I’ve been playing with the Library idea for a while, and can also show PDF pages in 3D. But that is not very useful in itself. Much more than visualisation is needed to beat 2D, or analog.


    1. Wern, 2022.


    Adam Wern: Here is my first Active Reading test in 3D. You can select text and bring phrases out, floating and movable in 3D. Hovering over a snippet highlights where it came from in the text (the yellowish area in the screenshot):


    2. Wern, 2022.


    Adam Wern: Mostly eye candy. But with some imagination we can see that it would be really useful to read like this. There is something about the 3D that makes text “Rich” for me. I connect better.
    Especially with manipulation: Fiddling. Moving. Thinking with hands.
    But as you say, Frode, without storing the work it’s not much. The actual saving is where it is. And a system like this must really be able to work with both PDFs and HTML, two incredibly important digital formats (in terms of what is stored in them).
    Brandel has done some interesting translations of HTML text into 3D (preserving typographic styling).
    Frode Hegland: Yes! How shall we do that? We have a WordPress plug-in but should go further…
    Adam Wern: Two main approaches (not mutually exclusive): start hanging VM ‘directly’ onto text – like an author onto a paragraph – or carve out a section of the document (for example with a custom visual-meta tag) and put VM there. A main question is whether it should be formatted as BibTeX, JSON or markup when in HTML-land.
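    As a sketch of the ‘carve out a section’ approach, Visual-Meta could live in a dedicated block inside the HTML and be read back out; the script type used here is an assumption for illustration, not an agreed convention:

```javascript
// Sketch: extract Visual-Meta stored as JSON in a custom <script> block.
// The "application/visual-meta+json" type is hypothetical.
function extractVisualMeta(html) {
  const match = html.match(
    /<script type="application\/visual-meta\+json">([\s\S]*?)<\/script>/
  );
  return match ? JSON.parse(match[1]) : null; // null when the page carries no block
}
```

    A browser-side reader could call this on the page source; the same carve-out could equally hold BibTeX instead of JSON.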
    Meanwhile, here is FoT vol 1 in 3D. Testing larger amounts of text.


    3. Wern, 2022.


    Adam Wern: OK, it looks like we can fly through half a million characters on screen in 120 fps in the browser without a problem. Clickable characters. Good to know 🙂
    Alan Laidlaw: Hi all. Will be out of the woods soon. Adam, if you’re willing, I’d like to try to install your code on my machine in order to tinker.
    Integrating with real data is stage two, now? We’re still at the faker.js stage.
    Adam Wern: Yes, it’s pretty fake. Well, the data comes from a real PDF (earlier screenshot), and a real EPUB but it has no interface (just hardcoded file references).
    It would at least be interesting to support dragging a PDF/EPUB onto the web-app to open it, and to save the montage to a file when you are done (to be opened again).
    Frode, your export/save-to-WordPress option in Author – does it export HTML, or how does it work?
    Colour coding and seeing the lengths of articles feels nice, and it kind of forms the navigational equivalent of a ragged margin for better memorability (if the view stays constant).


    4. Wern, 2022.


    Adam Wern: The actual screenshots are just material for discussion. I’m very aware that showing lots of 3D text can look cool but may be useless for doing anything more meaningful. So feel free to shoot down bad things, everyone!
    One thing that I do like is the seamless transition between overview and detailed view of texts. Basically a ZUI
    Frode Hegland: Author posts to WordPress but it’s broken at the moment, please tell me what it should do, for our purposes and I’ll see what I can do. This is all very exciting!
    Brandel Zachernuk: I really liked what Mike Alger said about text, backing and contrast – and there was a good talk at Google IO a few years ago where they introduced ‘Distance-Independent Millimeters’ (‘DMM’) as a perceptual unit for describing sizes too: https://t.co/3VrvoN6v1L
    Adam Wern: It’s interesting – with text in 3D on a regular screen (like the above prototypes) it feels natural to ‘just pinch’ to get a nice zoom level for your particular eyesight and type of reading. It’s effectively moving your body (camera view) there, but in VR that movement feels like a more brutal thing, as it affects your sense of ‘gravity’ and peripheral vision, and has implications for nausea, etc. So something like DMMs, perhaps adjusted for your vision like your default browser font-size, will be much more important in VR than in ‘flat’ 3D.
    Brandel Zachernuk: Absolutely—there are implications from the stereo-parallax aspect of size as well as the perceptual degrees of arc. There are also aspects of technical implementation for display—small-and-close text will feel different because of the eye-strain caused trying to focus on it.
    But, small-and-close feels more different than you would expect to large-and-distant, especially in 6DoF.
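    For reference, a DMM converts to a physical size at a given viewing distance: 1 dmm subtends the same visual angle as 1 mm viewed from 1 m away, so size scales linearly with distance. A minimal sketch:

```javascript
// Distance-Independent Millimeters: 1 dmm = 1 mm at 1 m viewing distance,
// scaling linearly with distance so the subtended angle stays constant.
function dmmToMeters(dmm, viewingDistanceMeters) {
  return (dmm / 1000) * viewingDistanceMeters;
}
```

    So 24 dmm type rendered at 2 m away would be laid out 48 mm tall in world space.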
    Adam Wern: Contrast will also be interesting, especially for AR. Colourful text with poor contrast or eye-sore combinations (like my examples above 😉) is more useful for categorising & scanning than for reading. In XR we’ll have to use blur, font outlines and semi-transparent materials to do effective floating text, as the background can be anything from real life.
    Brandel Zachernuk: Yes I can definitely expect that real-world detail will get in the way pretty badly.
    It’s fantastic that Troika actually creates ‘Signed Distance Field’ type – it’s one thing I didn’t succeed in implementing in the rich format. I wonder if it would be possible to supply my dom-to-three stuff as encouragement to support more complex hierarchies of type.
    SDF retains crispness at small sizes without jagged aliasing effects, and gives surprisingly smooth contours from a relatively small source texture. Valve introduced it here:
    https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
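    The core of the technique is an alpha test softened around the 0.5 iso-contour of the stored distance field. A minimal sketch of that step, normally done per-fragment in a shader (the smoothing width here is an illustrative value):

```javascript
// Standard smoothstep, as in GLSL: cubic ease between edge0 and edge1.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Sketch of SDF edge recovery: the texture stores signed distance remapped to
// [0, 1]; coverage fades across the 0.5 contour for antialiased, crisp edges.
function sdfCoverage(sampledDistance, smoothing = 0.05) {
  return smoothstep(0.5 - smoothing, 0.5 + smoothing, sampledDistance);
}
```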
    Adam Wern: SDF is nice, but not perfect. Sharp edges are slightly 'tapered'. So it’s more round than ideal
    Fonts look friendlier 🙂
    But overall it feels nice, and scaled well both in quantity and zoom-level
    Brandel Zachernuk: Absolutely! I haven’t peeked under the hood yet – do you know if Troika-text does directionally-biased SDF atlases, or are they greyscale? The Valve paper indicates that by using multiple channels for directional biases you can dramatically increase the maximum detail for sharp contours.
    Adam Wern: DOM to Troika would be really useful. Basic rich text – bold, headlines, links – and we have ourselves a hypertext system that scales well; I counted 500K characters until it started complaining.
    Haven't looked under the hood in Troika yet.
    Brandel Zachernuk: ooh actually either spector.js or @thespite’s WebGL inspector tools might help (and to a lesser extent the canvas inspector in Safari).
    Adam Wern: Thanks! Will try them to figure it out. 500K was probably not a limit that can’t be worked around; something exceeded some GL “texture dimension”. Troika can also be used with shaders that curve text, or that automatically billboard things on the GPU.
    Brandel Zachernuk: Something I did and really like for timeline VR is ‘greeking’ – creating non-transparent quads per word in a document for display below the threshold of legibility, based on display size. It’s effectively a level-of-detail option to alternate between in order to balance the value of seeing layout against the cost of rendering all those glyphs. Adobe InDesign has done it in the past for 2D; it looks like present-day Illustrator doesn’t.
    Frode Hegland: And then there is this: TikTok: https://t.co/0giU6iRfAl

    Date Chooser Solar System. Vidovic, 2022.


    Adam Wern: That date-picker may be a joke, but the underlying idea of rotation for scrubbing time is very solid. And virtual controls have the advantage over regular rotational knobs that you can move outwards from the centre to get more fine-grained control. I’ve missed that in video scrubbing many times: scrubbing roughly first, and then going to a specific frame with precision. Regular sliders don’t cut it.
    Frode Hegland: Yes, a joke, but a thought provoking one.



    Adam Wern:
    On interfaces, I wonder if something looking like this would work for voice recognition, surfacing alternative interpretations for a phrase. To me, being misunderstood by voice recognition feels so irritating that I never use it. Fixing mistakes should be much easier.
    https://twitter.com/azlenelza/status/1331623011049500678


    Threads Interface. Elza, 2020.

    Conversation: Experiments with Bob Horn Mural

    Brandel Zachernuk, Frode Hegland, Adam Wern, Brendan Langen

    Frode Hegland: Yesterday, 11th of February, we had a regular Friday meeting where we were joined by Fabien Benetou and the semi-regular, now more regular, Bob Horn. Because of Bob’s work with murals, we spent some time going through the basics of what a mural could be in AR and VR, so Brandel built the following. The dialogue below is from our discussion on Twitter. The video is quite hard to watch because of the constant movement, which is a great example of the power of VR: for Brandel this was a completely smooth experience, and we really should experience it in VR ourselves. I have put up a link to the VR version on our blog, so that when you are in VR you can simply go to our page and easily access it. It is in the VR Resources category:
    https://futuretextlab.info/category/vr-resource/

    Chat log is on our blog, as usual: https://futuretextlab.info/2022/02/11/chat-11feb-2022/
    Video of full meeting: https://youtu.be/Oh8yDKtPXD8
    Transcript will be up in this category when done:
    https://futuretextlab.info/category/transcript/

    Brandel’s Mural


    Bob Horn Mural. Zachernuk, 2022.


    Brandel Zachernuk: I dropped a static export of the mural in here: https://t.co/jH26I9JFIY
    Video walkthrough : https://t.co/jH26I9JFIY with transcript:
    “The NIREX poster in WebXR right now, it’s just a series of 2048 by 2048 rectangles and the end as well. But it’s nice, you know, it’s big and we can kind of navigate around it. I have this. Navigation is non-linear, so that small movements are small, but big movements result in big translations and sort of it’s proportionate to the square of the magnitude of the original motion so that we have the ability to get from one side of it to another without losing that fine detail. But now I’m zeroing out the vertical translation for the most part. This is kind of navigable with my hand at that this height. But it’s interesting. It’s really cool to be able to have these views of it and to be able to appreciate it at the size at which it’s sort of intended to be viewed at. Yeah, I’m pretty interested in it, and if necessary, obviously this information here is giving it the limits of its readability based on this particular set of pages that I’ve exported. But it, if necessary or possible, you can increase the resolution of this double edit or more or make use of some kind of adaptive display. I’m not aware of a specific PDF or at this point that would be able to pull this in natively, but a little bit of working. That’s definitely possible. Yeah, I like it. I also like this nonlinear thing. This is something that I’ve kind of made use of quite a bit in in my own work is having something that always has some action. But given that we only have a certain arm reach range, being able to kind of pinch here and then throw this way back. It’s really useful. These are one meter by one meter squares on the ground, and I don’t have arms that long, but it means that we are able to relatively fluidly and effortlessly. And if? Get into these different kind of vantage points without having to have strict changes in modality. So, yeah. Hmm. I think it’s an interesting thing to play with, and I look forward to making use of more data for this kind of visualisation in the future.”
    oh, it only works with the left hand – I am left handed and also inconsiderate
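    The non-linear navigation Brandel describes – small movements staying small while large ones are amplified roughly with the square of the motion – can be sketched as follows (not his actual code; the gain factor is an assumption):

```javascript
// Sketch: scale a hand-motion delta by its own magnitude, so displacement grows
// with the square of the motion while the direction is preserved.
function nonlinearTranslate(delta, gain = 1) {
  const mag = Math.hypot(delta.x, delta.y, delta.z);
  return {
    x: delta.x * mag * gain,
    y: delta.y * mag * gain,
    z: delta.z * mag * gain,
  };
}
```

    A 10 cm motion moves the view 1 cm, while a 2 m sweep throws it 4 m, which matches the ability to cross the mural without losing fine detail.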
    Adam Wern: Nice – and really like the non-linear navigation!
    Could be coupled with gestural ‘modifiers’ so it’s turned on when needed (like sticking out a pinky).
    The whole idea of mural in VR is interesting. A perfect fit for the really big posters.
    And where the depth dimension fits in. Could imagine stepping through a region to get more information.
    Or some labels, like headings and dates, could stick out like tabs when the mural is viewed from a sharp angle. Or the floor could double as that timeline.
    An audio guide with moving hands for the actual mural would be nice. The listener can be further back while floating hands are expressive near the material, without a body covering the view. It can be sectioned like a museum guide, with numbers and indicated by floating markers.

    Adam Mural with Extracted Dates

    Adam Wern: Brandel, here are dates dynamically extracted from the PDF text through pdf.js (which also renders the texture via a canvas) and added as rotated text tabs. Imagine searching by voice and tabs popping out with results🔥.
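    The date extraction could be approximated with a simple pattern match over the pdf.js page text; this four-digit-year sketch is an illustration, not the actual code used:

```javascript
// Sketch: pull unique year-like strings (1000–2099) out of extracted page text,
// in order of first appearance, ready to be rendered as tabs.
function extractYears(pageText) {
  const matches = pageText.match(/\b(1[0-9]{3}|20[0-9]{2})\b/g) || [];
  return [...new Set(matches)]; // de-duplicate, preserving first-seen order
}
```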


    Extracted Dates. Wern, 2022.


    Brendan Langen: Yes! very good to see the data we can pull out from this.
    Brandel Zachernuk: Oooh excellent! I got recording voice in places in VR working last night, it’s at
    https://zachernuk.neocities.org/2022/audio-record/
    Adam Wern: Other ideas: folding out large font-size text (probably headings) so that you can see headings from far away.
    This could be added to his mural, and would be very interesting with voice search or Named Entity Recognition (NER).

    Fabien Benetou: First Visual-Meta in VR

    What I did with Visual-Meta was: from a PDF supporting Visual-Meta, extract the relevant information for the process, e.g. here the headings, and filter to keep only the first hierarchical level; then generate the content as images with some representation of their types (e.g. white for names and grey for headings); then inject them into a social VR environment. This in turn allowed these pieces of information to be manipulated, in VR with controllers or on desktop with the mouse, and also given a predefined layout (forcing the orientation to be on a single plane).


    Visual-Meta in VR. Benetou, 2022.

    The PDF and the resulting structure, as a JSON with a list of images, were stored on a publicly accessible server so that the social VR environment could store them locally and display them to all participants. Snapping, loading and saving multiple layouts, and finally pinning the whole set in order to be able to work on it during the next session are a set of ideas for social #VR.
    https://twitter.com/utopiah/status/1262788181184974854?s=20&t=irUMBExzsncqst8alYnI3A
    Code:
    https://gist.github.com/Utopiah/6fca8ef5ca087df8d88875360a27898e
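    The ‘keep only the first hierarchical level’ step of the pipeline described above might look like the following; the heading object shape (a level field) is an assumption for illustration, not Fabien’s actual data model:

```javascript
// Sketch: filter Visual-Meta headings down to the top level before rendering
// them as images for the social VR environment.
function topLevelHeadings(headings) {
  return headings.filter(h => h.level === 1).map(h => h.text);
}
```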

    Frode Hegland: In & Out

    One of the fundamental issues, which I think we got further along on today: if we look at a normal computer screen being viewed in VR, such as this, then it is important to note, I think, that in the VR room it is simply a picture. It is pretty much the same as a texture on the wall; there are no interactive elements in that room.
    https://youtu.be/1TLC2tdImZ0?t=250
     


    Immersed. Anon, 2022.

      
    So say I am reading a PDF (with or without Visual-Meta) on my laptop and I don my VR headset. I can keep reading that document on the screen in VR, but I cannot do anything with it outside the frame. This is important because I think we agree it would be useful to have, maybe, the table of contents in one place, the glossary terms floating in a graph, and all the references from the document available in a list or library to interact with.
    This is why I have been hammering away at this issue: when Adam and Brandel create a beautiful Bob Horn mural, what they build is useful, and I would like to bring in my own PDF to view like that, to work in that environment and test. The same goes for the beautiful work Fabien did with ACTUAL VISUAL-META!
    I think it will be crucial to be able to move a PDF (and other document types) into VR from the screen easily, and without bothering the developers. It looks like a simple drag and drop onto a web page set up for this can do the trick, and that this can be added to software like Author.
    Then the question becomes how we can interact with it – with real, glorious documents!
    Everything in VR could of course be in a database, but where would the database be and who would own it? If every ‘node’ in this space is a document which can be transferred, then we have flexibility. There should also be a document ‘for the room’, to store the room’s preferences and layouts, including what is in it.
    And then the question becomes what a document is and should be. I think I agree with Keith who said it’s simply a container we can carry with us. Documents don’t need to be large, they can be a unit of thought, or a unit of expression. If we think of them as being small then maybe we can think of them as being index cards but some of them ‘happen to’ contain much more information.
    Might it be a good idea for us to build an environment where we can put up thoughts as simply index cards and use them to discuss what we want to do, and to move the cards around and show/hide connections between them based on contents or relationships made during discussion?

    Frode Hegland: I want to have a button to open in VR

    1 Statement and 3 questions (plus one stuck in at the end):

    I want to have a button in Reader which I can click and it will show me the current VR environments I am using (in other words, something built by Adam, Brandel or Fabien) and I choose one and it sends the current PDF to that environment.

  • How would this button know how to send to your environment?
  • How much of this would you like to work on, and how much might we have to pay someone to do?
  • I would like to have this button on our (and in the future, any) website and in my software Reader. How much work do you think it would be for my programmer to integrate it?
  • To me, this seems like one of the very first infrastructure pieces we should build. If/when these environments support Visual-Meta we can start really playing with interactions, but this would allow any one of us to upload any PDF we actually want to read and work on in the environments being built. Can we do this in a reasonable period of time, and do you think it will be worth it?
    I must emphasise that I really think we need to work on having the user choose what data to interact with in VR (not always and for every project, of course), since going in to look at something the user is interested in studying is a very different interaction from going in to kick the tires, so to speak, and interact for the sake of interaction or testing. I hope we can also work on this.
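    One minimal way such a button could hand the current PDF to a chosen environment is via a URL parameter; the environment URL and the doc parameter name here are hypothetical, a sketch rather than an agreed interface:

```javascript
// Sketch: build the link an 'open in VR' button would follow, handing the
// current PDF's URL to a chosen WebXR environment as a query parameter.
function buildOpenInVRLink(environmentBase, pdfUrl) {
  const url = new URL(environmentBase);
  url.searchParams.set('doc', pdfUrl); // the receiving page fetches and lays out the PDF
  return url.toString();
}
```

    The environment page would then read the parameter on load and fetch the document itself, so no developer involvement is needed per document.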

    Fabien Benetou: Utopiah/visual-meta-append-remote.js

    Not very helpful for publication in a PDF, but it at least demonstrates a bit how part of the poster (or another sliced document) can be manipulated in social VR. It would be better if I hadn’t let it go through the wall, or if another avatar were present to better illustrate the social aspect, but at least it is somehow captured.
    Also, here is the code to save some metadata back, e.g. in-VR world position, into the Visual-Meta of an existing PDF on a remote server https://t.co/yYH9yuSkUs as I noticed the other one is in the PDF of the preview of the journal issue.
    It’s challenging to capture it all as it’s constantly changing, but I’m keenly aware of the value of it: having traces to discuss and build back on top of, thanks to that precious feedback, constructive criticism and suggestions to go beyond.

    ~~~~~ code sample ~~~~~


    const fs = require('fs');
    const bibtex = require('bibtex-parse');
    const { PdfData } = require('pdfdataextract');
    const { execSync } = require('child_process');
    const PDFDocument = require('pdfkit');
    const express = require('express');
    const cors = require('cors');

    const PORT = 3000;
    const app = express();
    app.use(cors());
    app.use('/data', express.static('/'));

    const doc = new PDFDocument();
    let original = '1.1.pdf';
    let newfile = '1.2.pdf';
    let startfile = '/tmp/startfile.pdf';
    let lastpage = '/tmp/lastpage.pdf';
    let stream = doc.pipe(fs.createWriteStream(lastpage));
    let dataBuffer = fs.readFileSync(original);

    /* client side usage:
     *
     * setup
     * const source = new EventSource('https://vmtest.benetou.fr/' + 'streaming');
     * source.onmessage = message => console.log(JSON.parse(message.data));
     *
     * query
     * fetch('https://vmtest.benetou.fr/request/test2').then(response => response.text()).then(data => console.log(data));
     */

    function addDataToPDFWithVM(newdata) {
      PdfData.extract(dataBuffer, {
        get: { // enable or disable data extraction (all are optional and enabled by default)
          pages: true,    // get number of pages
          text: true,     // get text of each page
          metadata: true, // get metadata
          info: true,     // get info (such as Author)
        },
      }).then((data) => {
        const lastPage = data.text[data.pages - 1];
        const bibRes = bibtex.entries(lastPage.replaceAll('¶', '')); // parsed Visual-Meta entries
        const newContent = lastPage.replace(
          '@{document-headings-end}',
          '@{fabien-test}' + newdata + '@{fabien-test-end}\n@{document-headings-end}'
        );
        doc
          // .font('fonts/PalatinoBold.ttf')
          .fontSize(6)
          .text(newContent, 10, 10);
        doc.end();
        // drop the old last page, then append the regenerated one with pdftk
        execSync('pdftk ' + original + ' cat 1-r2 output ' + startfile);
        stream.on('finish', function () {
          execSync('pdftk ' + startfile + ' ' + lastpage + ' cat output ' + newfile);
        });
        sseSend('/' + newfile);
      });
    }

    var connectedClients = [];
    function sseSend(data) {
      connectedClients.forEach(res => {
        console.log('notifying client'); // seems to be called very often (might try to send to closed clients?)
        res.write(`data: ${JSON.stringify({ status: data })}\n\n`);
      });
    }

    app.get('/streaming', (req, res) => {
      res.setHeader('Cache-Control', 'no-cache');
      res.setHeader('Content-Type', 'text/event-stream');
      // res.setHeader('Access-Control-Allow-Origin', '*');
      // already handled at the nginx level
      res.setHeader('Connection', 'keep-alive');
      res.setHeader('X-Accel-Buffering', 'no');
      res.flushHeaders(); // flush the headers to establish SSE with the client

      res.write(`data: ${JSON.stringify({ event: 'userconnect' })}\n\n`); // res.write() instead of res.send()
      connectedClients.push(res);

      // If the client closes the connection, stop sending events
      res.on('close', () => {
        console.log('client dropped me');
        res.end();
      });
    });

    app.get('/', (req, res) => {
      res.json('vm test');
    });

    app.get('/request/:id', (req, res) => {
      const { id } = req.params;
      console.log(id);

      res.json({ status: 'ok' });
      addDataToPDFWithVM(id);
    });

    app.listen(PORT);
    console.log('listening on port', PORT);

    ~~~~~ end code sample ~~~~~

    Fabien Benetou: Live tinkering with CloudXR & Flattening

    Fabien Benetou

    Is anybody extending <meta property="og:image" …> across new media?
    https://video.benetou.fr/videos/watch/ae04cf20-2b67-4bd0-b0df-350372fbd336

    Beyond the existing og:video og:audio og:image og:description properties, would it be useful to have og:360image og:180image for #VR content preview or even og:lodgltf ?
    https://twitter.com/utopiah/status/1495308388451950593

    This challenge and our discussion prompted me to want to dig a bit deeper into this
    https://twitter.com/utopiah/status/1495308388451950593… because truly collapsing or flattening is a challenge with all new media, so it isn’t actually new. Maybe a media theorist has a ready-made solution to that, a question of literal perspective. I have a hard time conceiving of it for interactive materials, but again, I am eager to see what past work has explored here.


    Experiment. Benetou, 2022.

    Adam Wern: Boundings

    I wonder how we should bound floating media collections. When we bring in or create several collections, we should be able to treat whole collections as singular objects. I am trying a transparent bubble here as a 3D boundary. It could also be boxes, cylinders, etc., or nothing at all, just highlighting the objects.
    When there is a substrate, like a scroll or a wall for text to be on, we have more of a natural object to move around. But even there, there may be a need to group many walls into something bigger.
    It also ties into how we select some objects out of a collection – objects that may be spread across all three dimensions: the 3D lasso, and the resulting selection.
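    The ‘transparent bubble’ could be computed as a simple bounding sphere over the collection, with the centroid as centre and the farthest object as radius. A sketch (not a minimal bounding sphere, but enough to treat a collection as one object):

```javascript
// Sketch: compute a bounding sphere for a set of 3D points, to be rendered as
// the transparent bubble around a floating media collection.
function boundingSphere(points) {
  const n = points.length;
  const c = points.reduce(
    (acc, p) => ({ x: acc.x + p.x / n, y: acc.y + p.y / n, z: acc.z + p.z / n }),
    { x: 0, y: 0, z: 0 }
  ); // centroid of the collection
  const r = Math.max(
    ...points.map(p => Math.hypot(p.x - c.x, p.y - c.y, p.z - c.z))
  ); // farthest object defines the radius
  return { center: c, radius: r };
}
```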


    Circle. Wern, 2022.

    Testing long texts inside a cylinder.
    Longer vertical texts, potentially in a greater montage, can be handled in many ways, for example:

  • you can go to a text passage, which may mean ‘flying’ up and down if there is a virtual ground level
  • you can use a virtual telescope, and read at a distance.
  • texts can come to you, as copies or by moving the originals, potentially with parts sticking down through a virtual floor or being folded.
  • texts can be ‘chopped’ up into pages placed horizontally, to avoid long texts altogether.

    So a major question is to what degree we should mimic whiteboards: wide, not-so-tall surfaces with minimal overlap, prioritising horizontal movement and preserving a ground level. Or should we just go flying, still with a sense of up & down, but with neither us nor our texts restricted by a floor at all?


    Cylinder. Wern, 2022.

    Conversation: USD (Universal Scene Description)

    Brandel Zachernuk, Adam Wern

    This is from the creator of the USD format, ex-Pixar, now-Adobe:
    https://youtu.be/FAY39CUEKpE
    “I keep thinking about – when I think about the metaverse, I keep imagining there is a metaverse browser, that is, you send links and the USD is the HTML of it that gives you the – it’s not a web page, it’s a web space now. And so then this browser, you know, it’s a browser that on desktop looks like a browser but in VR—you get in there and so that is more immersive but conceptually has the same model. And so there’s almost like a JavaScript or something that on top of it that gives you that execution and things that can happen based on triggering from events and things like that.”
    Guido Quaroni

    Oh, the full, tidied transcript is up here too:
    https://t.co/qVzD3PEIwu or
    https://cesium.com/open-metaverse-podcast/the-genesis-of-usd/
    https://en.wikipedia.org/wiki/Universal_Scene_Description for wikipedia definition.

    Adam Wern: I’m glad they seem very aware of the limits of declarative formats for things like animation, UI behavior, physics, etc. Looking at SVG, CSS, HTML, SwiftUI, and tons of other (mostly) declarative formats we always seem to need to bypass rigid default behaviours for doing anything ambitious. And the expressiveness of imperative code has proven unmatched.
    Ambitious interactive 3D would be more like an app – a world rather than a model. And JavaScript is the main switchboard for very dynamic things like web apps, while HTML becomes more of an empty shell. In that sense I would rather go to an index.js directly (or JavaScript baked into the USD), and skip the double declaration of HTML and JavaScript.
    On the other hand: an index.html can act as a loading screen, fallback page, manifest file, and metadata wrapper – functions which should be filled anyway.
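    The declarative/imperative split described above can be sketched in miniature: a declarative ‘scene’ (a stand-in for USD or HTML) is pure data with no behaviour of its own, while an imperative event layer (a stand-in for the JavaScript on top) supplies everything dynamic. All names here are illustrative, not part of any real USD API:

```javascript
// A declarative "scene": pure data, no behaviour (stand-in for USD/HTML).
const scene = {
  nodes: [
    { id: "door", kind: "mesh", open: false },
    { id: "lamp", kind: "light", on: true },
  ],
};

// An imperative event layer bolted on top (stand-in for the JavaScript).
const handlers = {};
function on(nodeId, event, fn) {
  (handlers[`${nodeId}:${event}`] ??= []).push(fn);
}
function trigger(nodeId, event) {
  for (const fn of handlers[`${nodeId}:${event}`] ?? []) {
    fn(scene.nodes.find((n) => n.id === nodeId));
  }
}

// Behaviour lives entirely in the imperative layer:
on("door", "activate", (node) => { node.open = !node.open; });
trigger("door", "activate"); // scene.nodes[0].open is now true
```

    Note that nothing in the declarative half can express the toggle; this is the rigidity Adam points to, and why ‘ambitious’ interaction keeps ending up in code.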

    Frode Hegland: Journal as a workflow to augment into VR

    What do you think about maybe using our Journal as a workflow to augment into VR? In the future potentially from Visual-Meta but initially by using the native document. 
    I am not at all saying we have to do this, but it strikes me as potentially useful since we then control the whole workflow. None of this data is proprietary to me, it’s all open anyway, and organised since Jacob made it pretty clean and we can document the format publicly should we move ahead.

    Real Data

    We would be in charge of the publishing and we have the .liquid document format (Author’s native document format). We could then have every new Journal issue in there and we would have real data to play with.

  • We would have access to organised citations, in order to build an ‘outside’ library with connections, maybe gathering access to the books via Google Books or something. 
  • We would have the Glossary. We could experiment with types/tags in the glossary, to easily show all people and so on.

  • We would also have the Map, though I’m not sure how we’d use it at this point because I don’t use it as the editor yet, but maybe we could figure out how to make it useful for a published work.
  • Images are available in native formats and sizes.
  • We could use the margins for something, no idea what.

  • Endnotes could have interesting ways of being accessed.
  • We would have a ton of transcribed text from our Guest host days. A specific issue in itself, as this text is long and flowing.

  • We could separate bullets out, plus the sentence or heading above them.

    Real Interactions

    We would likely get into interesting issues for how the initial view of the Journal issues should be in VR and how the user could change them, much like the argument about who should be in charge of the look of a digital book, but in 3D.
    Imagine if this could help kick off a move towards an OpenDoc style of working in VR? https://en.wikipedia.org/wiki/OpenDoc
    We could experiment with what would snap onto a timeline/mural from the contents and from other sources, such as Wikidata/Wikipedia/Google Books etc.
    I would suggest we focus on helping the user/reader to see relationships, rather than read long passages in VR. Ideally it would be easy to go in and out of VR and what ‘page’ the user is on should ideally be stored, which a .liquid on iCloud could do. 

    The .liquid Format

    The reason I am asking is that Brandel and I went through the ‘package contents’ of a .liquid document (which is what we have made for Author) and there is a lot of organised metadata there for relatively easy access, as shown below. 
    What do you think?

    Follow up email to the group

    Perhaps even more fundamentally, how about we focus our initial effort on how we can make decisions?
    Imagine a collaborative space (not necessarily co-present) with an index card sized version of what I proposed in the last email, let’s call it “Journal as a workflow to augment into VR”. I enter this into the space and you guys can read it; if you agree, someone can tag it as agreed, or if you disagree (much more interesting) then you can write your own comments somehow, in this space, and it will all be part of a big ‘decision tree’ or something, where we build the shape of our discussion.
    We need to decide on what problem to solve first. Ideally I think it will be building a space for discussion.

    Brandel Zachernuk: Test of .liquid in VR

    I took a stab at opening a .liquid file and here is the result: youtu.be/LKjuSOH27ec
    It only works in Chrome at present; there are issues relating to library imports and file imports that I need to look into on Safari, and I haven’t looked at Firefox yet. It’s available here:
    zachernuk.neocities.org/2022/author-parse/ Beware there is a strange bug happening at the moment – if the Chrome window is focused while the document is dragged, the sticks point in the wrong direction. I haven’t identified why. Given that this requires drag-and-drop from the file system, you’d either need to get your .liquid file onto your Quest or to have a desktop upload result in sending all the data across the network to recompose the content on Quest/etc. Nothing about the size of the data is concerning, though, so that could be just as good an option down the road.


    .liquid to VR. Zachernuk, 2022.
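    For anyone curious, the drag-and-drop entry point such a page needs might look like the sketch below: accept a dropped file, check the extension, and hand the raw bytes on. This is a generic illustration, not code from Brandel’s prototype, and `parseLiquid` is a hypothetical callback since the .liquid internals are Author’s own:

```javascript
// Minimal check for the dropped file's extension.
function isLiquidFile(name) {
  return name.toLowerCase().endsWith(".liquid");
}

// Drop handler: filter for .liquid packages and pass the raw bytes on.
// `parseLiquid` is a hypothetical callback supplied by the page.
function handleDrop(event, parseLiquid) {
  event.preventDefault(); // stop the browser from opening the file itself
  for (const file of event.dataTransfer.files) {
    if (!isLiquidFile(file.name)) continue;
    file.arrayBuffer().then((bytes) => parseLiquid(file.name, bytes));
  }
}

// In a page: window.addEventListener("drop", (e) => handleDrop(e, myParser));
// (plus a "dragover" listener calling preventDefault, or "drop" never fires)
```

    The need for a dragover listener is the sort of small friction that makes the desktop-upload-to-Quest route mentioned above attractive.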

    Omar Rizwan: Against ‘text’



    Figure 1. https://twitter.com/rsnous/status/1300565745147863040. Rizwan, 2022.


    I don’t know if text has a future, or even if it should have a future.
    I guess, fundamentally, I’m uncomfortable with the whole framing of ‘text’. I think that it comes with a lot of unhelpful baggage and connotations. When I start with ‘text’ as my basic concept, at some level, I’m starting with English prose, and alphabetic letters, and keyboards, and a rectangular screen or a piece of paper on a desk, and ‘plain text’ files†.
    Yes, you can say that 'text' also includes mathematical notation, or YouTube videos, or comics, or other writing systems, or any other media that humans have come up with, but I think that’s a sort of slippage. I think that if you articulate your goals in terms of text, you may pay lip service to all of those other forms, but you will always tend to treat them as exceptions and deviations from the norm. The picture in your mind will always start with the blank Word document or text file where you type some words in, and then you'll jam in some carve-outs to ‘embed’ everything else among the words†. Things other than words will always be second-class.
    My background is in computing, and in programming, and in trying to come up with new ways to interact with computers, and I think that computing has suffered very deeply from the centrality of text. Maybe that centrality was understandable, say, fifty years ago—computers were slow†, and text is relatively easy to store and process, after all. But today, our computers are more than capable of processing graphics and video and sound and other rich media, and I’m struck by how weak our tools still are when it comes to anything that isn’t text†.

    Figure 2. https://twitter.com/rsnous/status/1351319206692868097. Rizwan, 2022.


    I’m struck by the fact that if I write a paper with LaTeX, or make a Web page with Markdown, it’s trivial to add prose, and it’s a monstrous inconvenience to add a figure. The figures are the important part!† Text exerts this gravity, because it’s the container, it’s the norm. The text lives directly in the file you’re editing (and the figures live in separate ‘image files’ outside it). You’re constantly (subconsciously) pushed to explain things with text, because it’s so much easier at a micro-interaction level to edit text than to add or change a ‘figure’†.
    (I think that this constant low-level push to use text is a way in which computing is a regression from paper—on a computer, it’s so easy† to produce and edit text that it dominates other†, richer, potentially more appropriate media. On a piece of paper, if you want to draw something in the middle of your prose, you can just draw it. Imagine if making these were as easy as typing:)

    Figure 3. https://twitter.com/rsnous/status/1201359487661223936. Rizwan, 2022.


    Figure 4. https://twitter.com/Sonja_Drimmer/status/1368966157106114561. Rizwan, 2022.


    (On a piece of paper, drawing is no different from writing; it doesn’t represent a change of mode; you don’t have to build up the emotional energy to move off your keyboard and open a different file and a different application.)
    Even when I’m programming—there are so many things that deserve a graphical representation. I see it even when I have a bug or when I just want to know what’s going on with my program. It’s easy to log text, but it’s also so limited. What if I have a pile of data and I want a chart of it, not just summary statistics or random samples? What if I’m working in a domain (like designing a user interface, or drawing a map, or designing a building) that is inherently spatial and graphical? Yes, I can make a computer program that produces graphics, but it often feels† like ten times the effort† of producing text. Text is the default, and it’s a bad default.
    As you think about the future of media, I want to make the case that micro-interactions† will dominate over conceptual models and data structures. I think that how it feels is a lot more important than what the concepts are†. I think that people will gravitate toward interactions that feel† good and interactions that are immediately at hand.


    Figure 5. https://twitter.com/rsnous/status/1327901730235793411. Rizwan, 2022.


    That’s why I’m so concerned with whether I have to go into a separate file, and whether I have to switch from the keyboard to something else, and whether I can just call a print() function versus having to look up some graphics library, and with what things I have to go out and ‘embed’ into my document as opposed to entering in place. I believe that these little frictions and barriers are overwhelmingly important.
    I think that we live in a world that is dominated by systems that get the micro-interactions right. The iPhone, video games†, social media (scrolling† as a formative interaction†)…
    And I think that a lot of the power of ‘text’ on the computer is that it has some really great† interactions associated† with it (typing, selection, copy and paste, Unix tools, text editors, files…). Text has this manipulability and ‘open space’ nature†, a bit like the nature of files or of objects in the physical world. There are all these operations† you can do (and know how to do) to text. Part of this is built-up capital that already exists: the hardware capital that every computer has a keyboard, and the human capital that everyone knows how to use that keyboard. How can we get those kinds of interactions, that at-hand-ness, for other media?
    But that’s also why I don’t know if text has a future. What if the smartphone is the real personal computer in the end†? Then we have a future where the microphone and camera and multitouch surface, not text input, increasingly become the favored modes of interaction.


    Figure 6. https://twitter.com/rsnous/status/1351377818769231875. Rizwan, 2022.


    As much as anyone, I admire Douglas Engelbart, Ted Nelson, and all their colleagues and heirs. But I also think that there is a certain arrogance to saying that the task ahead is simply to complete and execute their vision, that any problems are just problems of implementation. What can we learn from how the computer has actually been adopted†? What can we learn from the actual interactions and applications that have appealed to people? What can we learn from the genuinely new media that have popped up on laptop screens and smartphones, that could not have existed before the Internet or the phone camera?


    Figure 7. https://twitter.com/rsnous/status/1073639143878492161. Rizwan, 2022.


    Text is a strangely (historically and culturally) specific bundle of technology to orient a vision of the future around. Text is important, but it’s gotten a lot of attention already. There’s something that’s always a little exclusionary about text. It excludes the complexity that can go into full-fledged speech and writing†. It excludes inline graphics and diagrams and notations that are often vital tools for understanding and problem-solving. I hope that the future of media will be broader than that.
    And – above all – to build that future of media, I believe that we'll have to find a set of interactions that really work, not just a set of concepts.

    Apurva Chitnis: The future of knowledge management on the internet – public Zettelkastens

    These last few weeks I've been building my own Zettelkasten†. It’s an intimidating German word, but the idea is simple: when you’re learning something, take many small notes and link these notes to one another to create a web of connected notes. This is more effective than taking notes in a long, linear form (as you might do in Apple Notes or Evernote) because you can see the relations between ideas, which helps with your understanding and retention.


    Zettelkasten. Clear, 2019.


    The core idea behind Zettelkästen is that knowledge is interrelated — ideas build off one another, so your notes — your understanding of knowledge — should be too. Wikipedia is structured in a similar way, using links between related pages, and in fact even your brain stores knowledge in a hierarchical manner†.
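    The web-of-notes idea can be sketched as a tiny graph of bidirectionally linked notes. This is a hypothetical toy model, not the data model of any particular Zettelkasten tool:

```javascript
// Notes are small; links are bidirectional, forming a web of ideas.
const notes = new Map();

function addNote(id, text) {
  notes.set(id, { id, text, links: new Set() });
}
function link(a, b) {
  notes.get(a).links.add(b);
  notes.get(b).links.add(a);
}
function related(id) {
  return [...notes.get(id).links].map((other) => notes.get(other).text);
}

addNote("zk1", "Knowledge is interrelated");
addNote("zk2", "Notes should mirror that structure");
link("zk1", "zk2");
related("zk1"); // → ["Notes should mirror that structure"]
```

    The point of the structure is that `related` works from any note, in any direction — unlike a long linear document, where an idea only ‘knows about’ whatever happens to sit next to it.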

    Limitations today
    But as powerful as they are, Zettelkästen as implemented today are limited in two ways: firstly, they are only used for knowledge-work†, and secondly, they only represent knowledge in your mind, and no one else's. These limitations are debilitating to the potential of Zettelkästen, and more broadly to how we communicate online.
    I believe that not only knowledge, but all sentiment and expression is interrelated. Further, my knowledge and sentiment are built off of other people’s knowledge and sentiment, i.e. they extend beyond myself.

    For example:

  • I think that “NFTs are the future” after listening to and understanding “@naval’s belief that NFTs are necessary technology for the metaverse” in “this podcast”
  • I love “A Case of You” by “James Blake”, and “this is my favourite live performance”
    Public Zettelkästen
    So what would happen if we removed these constraints? Imagine if we each built our own, individual Zettelkästen, representing our thoughts, opinions and experiences, made them public, and related our knowledge and sentiment to each other. What could we do with that? A few ideas:

  • We could look back in time and see how someone we admire learnt about a topic. In the first case above, we can understand why @naval believes what he does about NFTs and the metaverse. We can see what influenced him in the past and read those same sources. Further, we could then build on his ideas, and add our own ideas, for example “someone needs to build a platform for trading NFTs in the metaverse”. Others could build off of our ideas, and others could follow their journey as they learn about something new.
  • We can understand how an artist we admire created something. In the second case above, we can see when James Blake first listened to the original “A Case of You” by Joni Mitchell, what he thought and felt about it, and why he decided to perform a cover. We could use that understanding to explore Joni Mitchell’s back catalog, or be inspired to create our own content, for example by performing a cover. Followers of Joni Mitchell and James Blake could easily see our covers by following edges along the graph.
  • These are just a few ideas, but if we each made our Zettelkästen public and interrelated to one another, then there would be as many interaction patterns as there are people in the world. This would unlock new forms of consumption and creation that are not possible today.
    This knowledge and sentiment graph could be queried and accessed in a huge number of ways to answer a broad range of questions. You could effectively upload your brain to the internet, search through it (and those of others), and build on top of everyone’s ideas and experience. This is a new way of representing knowledge and expression that goes beyond the limitations of paper and Web 2.0: it allows us to work collaboratively, in ways that Twitter, Facebook and friends just aren’t able to offer today.

    Implementation
    What data-layer should be used for storing this data? A blockchain is one idea: the data would be open and accessible by anyone, effectively democratising all knowledge and sentiment. It would be free of any centralised authority – you could port your knowledge to whatever application you wanted to use, and developers could build whatever UIs make most sense for the task at hand. Finally, developers could create bots that support humans in linking and connecting relevant ideas to one another — a boon for usability, efficiency and discoverability.

    Challenges
    The biggest challenge with this idea, if we use the blockchain as the data-layer, is that the information a user would create is public and permanent. You may not want the world to know you believed something in the past (e.g. if you were a fan of X in your youth), but you cannot easily delete data on the blockchain†. You could, however, add a new note to explain that you no longer believe some idea — this would be particularly useful to any followers of yours, who now have additional context about why your opinion changed.
    Similarly, you'd be revealing all of a piece of knowledge or none of it; with a rudimentary implementation, you couldn't partially reveal a belief to just those you trust. Zero Knowledge Proofs might be a fruitful solution here.
    The second big challenge is how to present this data visually to end-users. Solving this particular challenge is outside the scope of this article, but it suffices to say that linear feeds (such as Twitter or Facebook) wouldn’t work well.
    If these barriers could be overcome, public Zettelkästen could not only be how we represent knowledge online, but also how we understand ourselves and each other in the future.

    Resources & News

    Items spied on the web or taken from our more casual discussions worth having a look at:

    A VR Wikipedia with Fabien Benetou

    Fabien Benetou

    “In this video/podcast, we join Fabien Benetou in his online, personal VR environment in Mozilla Hubs Cloud. We discuss the possibility of storing information visuospatially, such as Benetou’s VR Wiki, Memory/Mind Palaces, and the possibility of exploring a shared VR Wikipedia in the future. Particularly, we discuss how we can extend our minds through VR.”
    https://video.benetou.fr/videos/watch/b0955077-54bf-43b8-ae42-7280b9ae7a34

    Google VR Headset

    Alex Heath

    “Project Iris could see Google go up against Meta and Apple in the coming headset wars. The search giant has recently begun ramping up work on an AR headset, internally codenamed Project Iris, that it hopes to ship in 2024, according to two people familiar with the project who requested anonymity to speak without the company’s permission. Like forthcoming headsets from Meta and Apple, Google’s device uses outward-facing cameras to blend computer graphics with a video feed of the real world, creating a more immersive, mixed reality experience than existing AR glasses from the likes of Snap and Magic Leap. Early prototypes being developed at a facility in the San Francisco Bay Area resemble a pair of ski goggles and don’t require a tethered connection to an external power source.”
    https://www.theverge.com/2022/1/20/22892152/google-project-iris-ar-headset-2024

    The Design Guide to the Metaverse

    “We’re all trying to wrap our heads around the metaverse, the emerging network of virtual worlds focused on social connection, commerce, gaming, and much more. Designers are already expanding that network and will continue to do so. Will we end up creating a world that mirrors our own or something completely different? Will it be a place of freedom, equality, and self-expression, or one of corporate control and environmental and social degradation?”
    https://metropolismag.com/viewpoints/metaverse-design-guide/?fbclid=IwAR3zYYykW6Xi-rm7NjHFflSsLF46nbC8p-3f4I3cWNrk_xmpG7Pq4KJGHEY

    https://metropolismag.com/viewpoints/what-will-our-virtual-reality-be/

    This New Platform Can Give AR Apps a Memory Boost

    Julian Chokkattu

    “Perceptus can identify and continuously remember the objects in the physical world, grounding augmented reality with more real-world context.
    IMAGINE SPILLING A box full of Lego bricks over a table. Now—take a leap with me—don your imaginary augmented reality glasses. The camera in the AR glasses will immediately start cataloging all the different types of bricks in front of you, from different shapes to colors, offering up suggestions on models you can build with the pieces you have. But wait, someone is at the door. You go to check it and come back. Thankfully, your glasses don’t need to rescan all of those pieces. The AR knows they're sitting on the table where you left them.”
    https://www.wired.com/story/perceptus-augmented-reality-object-tracking/

    Dreamscapes & Artificial Architecture Imagined Interior Design In Digital Art

    gestalten

    “A journey through dreamlike landscapes, bizarre buildings, and whimsical interiors floating between reality and fantasy.
    Digital renderings have long served architects and interior designers to help visualize spaces before the building begins. But a new generation of digital artists are taking this craft a step further to create otherworldly scenes that can’t, and won’t, ever be built. This inspiring compilation of the most innovative projects in digital art covers the work of the artists and creatives at the forefront of this aesthetic. Discover Filip Hodas and his captivating pop culture dystopia artwork series, explore Massimo Colonna’s surrealist urban landscapes and dive into the abstract compositions of Ezequiel Pini, founder of Six N. Five studio.”
    https://gestalten.com/products/dreamscapes-artificial-architecture

    Medieval Photoshop

    Anna Dlabacová

    “Manipulating and enhancing images may seem something that is particular to the current digital age. The Kattendijke Chronicle, a late fifteenth-century manuscript from the Low Countries, contains fascinating examples of analogue image editing.”
    https://leidenmedievalistsblog.nl/articles/medieval-photoshop

    How Scholars Once Feared That the Book Index Would Destroy Reading

    Dennis Duncan

    “In literature, the novelist Will Self has declared that the serious novel is dead: we no longer have the patience for it. This is the Age of Distraction, and it is the search engine’s fault.
    A few years ago, an influential article in the Atlantic asked the question, “Is Google Making Us Stupid?” and answered, strongly, in the affirmative. But if we take the long view, this is nothing more than a recent outbreak of an old fever.
    The history of the index is full of such fears that nobody will read properly any more, that extract reading will take the place of lengthier engagements with books, that we will ask new questions, perform new types of scholarship, forget the old ways of close reading, become deplorably, incurably inattentive – and all because of that infernal tool, the book index.
    In the Restoration period, the pejorative index-raker was coined for writers who pad out their works with unnecessary quotations, while on the Continent Galileo grumbled at the armchair philosophers who, “in order to acquire a knowledge of natural effects, do not betake themselves to ships or crossbows or cannons, but retire into their studies and glance through an index or a table of contents to see whether Aristotle has said anything about them.”
    The book index: killing off experimental curiosity since the 17th century.”
    https://lithub.com/how-scholars-once-feared-that-the-book-index-would-destroy-reading/

    Designing Screen Interfaces for VR (Google I/O '17)

    Google Developers

    “When we think of VR, our minds naturally gravitate towards three-dimensional environments and interactions. But, there are times when it’s necessary to present content in a two-dimensional way. With VR, we have the opportunity to reevaluate the nature of screens, and how we view and interact with screen-based content. This talk will cover techniques the Daydream team uses to create legible, usable screen interfaces in VR. We will introduce new workflows, new units, and highlight interaction opportunities and pitfalls.”
    https://www.youtube.com/watch?v=ES9jArHRFHQ

    Design principles for UX in XR

    Mike Alger

    https://www.mikealger.com

    LiDAR Scanning My House into VR

    Kevan

    “Using the Lidar scanner on the iPhone 12 Pro, I put my House into Virtual Reality and 3D printed my car.” Something of interest if we want to take our real office into VR, or any other real environment.

    https://www.youtube.com/watch?v=iAF8AxBNWhs

    Notes on The Journal’s Structure

    [description is work in progress, to inform as to the general types of content in the journal]

    Future Text Lab website: https://futuretextlab.info

    Our twice weekly ‘office hours’ online meetings are recorded and available via YouTube at:
    www.youtube.com/watch?v=-AeDv-upjEo&list=PLYx4DnFWaXV9xoAIagq6cA5piwQEiCuU7

    The YouTube collection holds meetings from August 2020 onwards, currently over 110 discrete recordings. The full meeting transcripts (computer-transcribed) are available at:
    futuretextlab.info/category/transcript/
    and the meeting chat logs are at: futuretextlab.info/category/chat-log/.

    Some current tool/stylistic limitations

    Inline links

    In parts of the production process links that cross link breaks may not function. As a current defence against that, you may see some URLs placed on a new line: this has no implication beyond protecting URL link export.

    Code Samples

    A further tool limitation is not having a discrete mark-up method/font for code. So, explicit sections of code are preceded by a centred banner “~~~~~ code sample ~~~~~” and closed by a similar banner “~~~~~ end code sample ~~~~~”.

    Colophon

    Published February 2022. All articles are © Copyright of their respective authors. This collected work is © Copyright ‘Future Text Publishing’ and Frode Alexander Hegland. The PDF is made available at no cost and the printed book is available from ‘Future Text Publishing’ (futuretextpublishing.com) a trading name of ‘The Augmented Text Company LTD’, UK. This work is freely available digitally, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.
