
1.4 : Featuring Andreea Ion Cojocaru


Introduction

Welcome to ‘The Future of Text’ Journal. This Journal serves as a monthly record of the activities of the Future Text Lab, in concert with the annual Future of Text Symposium and the annual ‘The Future of Text’ book series. We have published two volumes of ‘The Future of Text’† and this year we are starting with a new model where articles will first appear in this Journal over the year and will be collated into the third volume of the book. We expect this to continue going forward.
This Journal is distributed as a PDF which will open in any standard PDF viewer. If you choose to open it in our free ‘Reader’ PDF viewer for macOS (download†), you will get useful extra interactions, including these features enabled via embedded Visual-Meta:
• the ability to fold the journal text into an outline of headings.
• pop-up previews for citations/endnotes showing their content in situ.
• a Find command which, with text selected, locates all occurrences of that text and collapses the document view to show only the matches, each displayed in context.
• if the selected text has a Glossary entry, that entry will appear at the top of the screen.
• inclusion of Visual-Meta. See http://visual-meta.info to learn more.

Notes on style:
• In talks and discussions, the speaker’s name is noted before the start of their spoken passage, thus: ‘ Speaker Name: … ’. Names of books, artworks, apps, games, etc. are in italics.
• Any other ad hoc bolding in the body text of articles should be treated as an editorial highlight.
• In some places URLs are deliberately placed on a separate line. This is to ensure PDF processing doesn’t break the URL embedding function.

Frode Alexander Hegland & Mark Anderson (Editors), with thanks to the Future Text Lab community: Adam Wern, Alan Laidlaw, Brandel Zachernuk, Fabien Benetou, Brendan Langen, Christopher Gutteridge, David De Roure, Dave Millard, Ismail Serageldin, Keith Martin, Mark Anderson, Peter Wasilko, Rafael Nepô and Vint Cerf.
https://futuretextpublishing.com

Andreea Ion Cojocaru Transcript : Guest Presentation
13 May 2022

Video: https://youtu.be/4YO-iCUHdog

Pre-Presentation

Frode Hegland: Okay. Glad you are here. Could I ask the people who are here today to do a 15-second introduction of who they are?

Karl Hebenstreit: I’m in Alexandria, Virginia, outside of DC. I work for the Federal Government, at the U.S. General Services Administration, on supporting people with disabilities.

Frode Hegland: And quite a big follower of Doug Engelbart, I might add. Me, you mostly know, and then Ted. It’s not going to be possible in 15 seconds, other than to say that most of what we use today is from your head, although we’re using a sliver of what it is. Not only are you someone I consider a close friend, because you’re an incredible human being, but I’m also beyond honoured that you’re here today. So any other words you want to say, Ted?

Ted Nelson: I coined a lot of words in common use, especially Hypertext, Hypermedia, Transclusion, Dildonics, and, well, I actually implemented the first Hypertext in 1968-69, the Brown University HES system.

Frode Hegland: Wow. Shortest summary of the biggest mind. Bob Horn is here.

Bob Horn: Well, I wrote a book about hypertext before the internet actually happened. I was at Stanford for the last 27 years until retiring. I make murals, [like] the one behind me.

Frode Hegland: Fantastic. Yeah, you do. Which is just a huge issue that we’re working on in our normal sessions. Even what seems to be a basic mural has myriad potential aspects, and I’m sure Andreea will maybe have thought around that kind of stuff. But anyway, we’ll see. Peter?

Peter Wasilko: Well, I’m an attorney and programmer with secondary research interests in university futures, technology, world’s fairs, and programming language design, a whole wide gamut of technologies.

Frode Hegland: Great. And a regular with the Future of Text community, which is wonderful. Claus?

Claus Atzenbeck: Hello, I’m Claus. I’m a professor of visual analytics at Hof University in Germany. I was the 2019 ACM Hypertext Conference general co-chair and I’m organising the annual Human Factors in Hypertext workshop.

Frode Hegland: Mark Anderson, formerly Mr. Anderson from the Matrix, now, Dr. Anderson.

Mark Anderson: No, no. I’m not a doctor, I just play one at school. The very main reason I’m involved in this is that I’m really interested in text, and more in the case of Hypertext. And Ted, if you’re watching, kick me, because we still need to do the eBooks of these, on which I did a lot of work a while ago and have completely forgotten. And, yeah. I’ve spent much of my working life, basically, as an information emergency plumber, which is a rather different take on data to the rest of the world. But I’m interested in text and moreover Hypertext.

Brandel Zachernuk: I am a creative technologist in augmented and virtual reality at Apple, and a representative to the World Wide Web Consortium on the Immersive Web on behalf of Apple as well. In my private time, I’ve been trying to bring the worlds of text, in particular hypertext, along through virtual reality, trying to make sure that the opportunities for information processing aren’t lost on people, and connecting expert communities, like the Future of Text, to some of the worlds of graphics that other people are familiar with, because they’re surprisingly disconnected at times. So it’s really exciting that the Future of Text has taken on this virtual reality opportunity.

Frode Hegland: Yeah, I just want to tell everyone it’s Brandel’s fault that my brain is 99% VR at the moment. And I think VR; he says AR and XR are all good. I’m just VR. I couldn’t think about this last year while I handed in my thesis, which I managed thanks to Dave Millard, who is next to introduce himself, and now Brandel has completely changed my brain. Thanks.

David Millard: Thanks, Frode. So I’m Dave Millard from the University of Southampton. I did my PhD in Hypertext systems back in the 1990s and I still do a lot of research in Hypertext, mixed reality, and interactive digital narrative. I’m the current steering chair of the ACM Hypertext Conference as well.

Frode Hegland: And you have really good energy. Thank you for your advisory sessions.

David Millard: Yes, my pleasure to work with you, Frode.

Frode Hegland: Oh. Monica, whom I met for the first time less than an hour ago.

Monica Rizzolli: So, hi everybody. I’m Monica. I am a tech designer and also a generative artist. My last work, Fragments of an Infinite Field, was about botany and generative art.

Frode Hegland: Yeah, wonderful. And we have someone entirely new, Lisa.

Lisa Eidloth: Hey, everybody. I’m part of Claus’s research group. A rather small part. I’m interested in everything concerning hypertext, VR, and AR, and Claus invited me to this meeting. So, hey. Thanks.

Frode Hegland: Very glad you’re here. And finally, and definitely not least, Ken Perlin.

Ken Perlin: I am flattered to be in such an august group and it’s a great honour. I do computer graphics, computer animation, and more recently virtual, augmented, and extended reality in a social context. And I am hoping to help make the thing that Ted started, and that Brandel and others are working on, so that eventually, ten years from now, we just put these things on and it’s just part of our everyday world and kids take it for granted as normal reality. So that, in a nutshell, is what my interests are.

Presentation

Andreea Ion Cojocaru: Hi everyone. It’s such an honour to be part of the group, and to present to this group. Because this group is very different from the usual audiences that I speak to, I took the presentation in a very new direction. It’s a bit of a risk, in that I’m going much deeper than I’ve ever gone before in public in showing people the insides of how my method works. So part of what you will hear will be the messiness of what is a very active and sometimes stressful process for us at Numena. But hopefully, yes, there will be time at the end for you to ask questions, and for me to have the chance to clarify the aspects that were maybe a bit too unclear. Okay, with that mentioned, I’m going to share my screen. All right. I just gave a title to this talk. This talk did not have a title until five minutes ago, and now it’s called An Architect Reads Cognitive Neuroscience and Decides to Start an Immersive Tech Company. And this is pretty much what the story will be today.
I’m an architect. I have a master’s degree in architecture. I’ve been in love with architecture and the idea of space-making for as long as I can remember. But there’s a bit of a twist in my background in that, when I was young, I was learning letters by typing with my dad on a keyboard in the 80s, and I have this childhood relationship with computers and coding. And I’ve always been very passionate about philosophy. So a while back I discovered cognitive neuroscience, and I began reading it from the perspective of an architect who can code and who is also an amateur philosopher. Reading it from this perspective (and I don’t know how many people read cognitive neuroscience with this kind of background) gave me all sorts of ideas.
When I discovered AR and VR, and specifically VR, I just found this opportunity to start pursuing some of the ideas that had been floating around my mind from reading cognitive neuroscience for a while. That is how this started. The company started about four years ago, and it’s been a crazy ride.
But I’m not going to start with what the company is doing.
I’m going to start at the deepest depth at which I’ve ever started a presentation. So I believe that, for us to be able to successfully discuss these concepts at the end, I need to be very clear about what my background assumptions are. Then, I also believe I need to be clear about how I think those assumptions work or can be implemented.

• What kind of theories and knowledge do I use to imagine a mechanism?
• Then, I’m going to go into how I’m using all of that to think of virtual space.
• And then, how we are using those ideas about virtual space to try to create AR and VR applications that begin to test some of those assumptions.

So, the position part of the presentation. What are my assumptions? I want to propose first what’s called ‘The Correspondence Theory of Truth’. This says that there is a reality out there, and its structure is homomorphic to our perceptions. What does this mean? It means that we don’t really know what’s out there, but we know that there is some correspondence between some sea of particles and radiation and whatever comes to our senses. In the history of human thought, this is a relatively new idea. And in everyday thinking and knowledge and culture, we still don’t really take it seriously: we still assume that we’re seeing a chair, and the chair is brown, and we look outside the window and we see flowers of a certain colour, and that that reality is out there, outside of ourselves. Even in a lot of the papers coming out of the scientific establishment, this proposition is not quite taken to heart: that there is actually a huge gap between whatever that reality is and ourselves. And here I want to add a note that, if you read work coming out of the computational branches of evolutionary theory, you will see that the correspondence theory of truth has refutations, fascinating mathematical refutations. So there are actually people out there who believe that there is no homomorphism between whatever reality with a capital R is out there and our perceptions, and that we might be completely imagining everything. But I will not go quite to that depth today.
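
[A note on the term: a homomorphism, in the mathematical sense being borrowed here, is a structure-preserving map. Writing W for the world and P for our percepts (symbols chosen for illustration; they are not from the talk), the claim is that there is some map h with

    h : W \to P, \qquad h(a \star b) = h(a) \circ h(b)

so that relations holding among things out there have counterparts among our perceptions, even though h may discard almost everything else about W.]
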
So there’s something out there but there’s a gap between that thing out there and ourselves, our perceptions.
In practical terms, I like to make sense of this through what’s called enaction theory. This was introduced by Varela and a few others, building on work from the 60s and 70s; I think the book called The Embodied Mind (Varela, Thompson, Rosch, 2017) was first published in 1991. Basically, this starts to deal with the fact that this mapping between who we are and how we perceive the world is really not tight at all. And it’s not just that it’s not tight, but that we’re continuously negotiating what this relationship is. The reason why embodied cognition, and the form called enactive cognition, is very important is that it triggered a dialogue across science and culture about escaping what’s called the Cartesian anxiety. For many centuries, especially European-centric thinking was based on this idea that there is the subject and the object, and they are two different things. That we have subjectivism, how things feel, and then there’s objectivism, the world out there. And there are still a lot of struggles going on in a lot of fields to escape this Cartesian anxiety. It even goes into interesting discussions these days of what consciousness and qualia are, and whether we have free will; this is also about free will and all of that. My particular stance is to embrace Varela’s enactive cognition and to say there is no strict separation between who and what we are and the environment. We are defined by the environment and the environment defines us, and our entire organism is about negotiating this relationship. I know this is still a bit unclear, so I will just try to go a bit further into this. Basically, the proposition is that environments are shaped into significance, and these are quotes from The Embodied Mind by Varela: “Shaped into significance and intelligence shifts from being the capacity to solve a problem to the capacity to enter into a shared world of significance.” Or, “Cognition consists in the enactment or bringing forth of a world by a viable history of structural coupling.” So we become structurally coupled with the environment, and our minds, our organism, and the environment are all adjusted through this structural coupling. One interesting example that he gives in the book is of bees and flowers. We don’t know if bees evolved the way they are because they were attracted to flowers that offer them nourishment, or the other way around, that flowers evolved beautiful colours because there were these creatures called bees that were attracted to them. Varela proposes that it is neither, and that most likely flowers and bees evolved together, to work together. There was a common evolution because, from the point of view of the bee, the flower is the environment, and from the point of view of the flower, the bee is the environment. So each is both environment and subject from a different perspective. And in that context, they evolved together through this structural coupling.
This also ties back, in terms of examples. To focus a little bit on examples now: in the Macy papers, from the first conferences on cybernetics in the 50s, they were very concerned with research on frogs, and I found that very interesting. Why were they so concerned with frogs? Because new research at the time showed that frogs cannot see large moving objects. Actually, they can technically see them, but their brain just does not process large objects. So a frog is very good at catching small moving things like mosquitoes, but a frog will get run over by a truck. And it’s not because the eyes of the frog cannot perceive the truck, it’s because the brain just doesn’t process the truck. Large moving objects are not part of the frog’s world. That was actually very interesting, and I think you can easily think of similarities, or start to have questions going through your mind, about what things out there, that are very much in the environment and very much exist, we might even see but just not perceive, because they’re just not part of how we deal with the world and how we interact with the world; they’re outside the structural coupling that we have formed with the environment. And although this has been proved when it comes to frogs and many other kinds of organisms, we still have a hard time imagining that, when we look out the window, there might be things out there which our cognitive system is just ignoring, perhaps seeing but just ignoring. I’ll bring up some examples later in this regard.
Another interesting thing is the ongoing research that’s coming out about how the human eye perceives information. It turns out that, according to the latest studies, only about 20% of the information that comes through the retina contributes to the image that we see, the image that the visual cortex forms. The other 80% is what’s called top-down. So there’s just other kinds of information happening in the organism that determine what we think we see out there, outside the window. Again, that number is now 80% and going up. And then, there’s so much more out there in research in this sense. There’s research that shows that if your hand is holding a cup of hot water, what you perceive from your other senses is different than when your hand is holding a cup of cold water. So just mind-blowing stuff, and it is just scratching the surface of this. Because we are still shaking off an intellectual culture of dualism, but also this idea that what we see is what’s really out there, many people still read about these things and catalogue them as illusions. And my work and my interests are about trying to understand what their limit is, and to what extent they are really illusions. And the more I work on this, and the more I read about this, the more I’m going down the rabbit hole of believing that they’re not just illusions, they’re probably correct. They’re probably what the situation actually is. But why? Why do we think these are illusions? Why don’t we perceive these variations? Or why is it so hard for us to even take these things into account? A lot has been written in what’s called experimental phenomenology about the Necker cube. That cube that, if you focus on it a little bit, kind of shifts. Sometimes it seems like you’re looking at it from the top down, and sometimes from the bottom up. And again, everyone catalogues that as an illusion. It is not an illusion. None of these things are illusions. But what’s happening is, in the words of Merleau-Ponty, a French philosopher very famous in the school of phenomenology, “The world is pregnant with meaning.” We are born into a social world that fixes our perception to match a certain story. Our society tells us a story, and this story is very catchy. It’s so catchy to the point where a lot of work and energy has to go into escaping that story. So our perceptions do not flip on us like the Necker cube, because we are social animals and we share a story about what the world is. And what is that story? How powerful is that story? Well, it is that 80%. It is that at least 80% that is influencing the way we process the information that comes from the retina, for example.
The other word that I like in this context, also from Merleau-Ponty, is thickness. He says the world is also thick with meaning. So it is very hard for us to cut through this thickness. And because most of the time we cannot, or it takes too much energy, we just buy into this idea that there is a fixed way to interpret information, and that is the shared reality that we all live in. And, of course, a huge component of this, which he also goes into in his work, is a bunch of norms that dictate not just what you should expect to see when you look outside the window, but also what’s the appropriate way of looking out the window, the appropriate way of behaving, the appropriate way of even thinking about these things, as in, cataloguing them as illusions that come with a certain baggage, and so on. Okay. So how can we go deeper into the mechanism that starts to unpack how we interact with our perceptions, how they’re fixed, and what they’re fixed by? Something that I found very striking when I was looking for the first studies and information on this topic is the work of Lakoff and Johnson. They wrote a very famous book called Metaphors We Live By (Lakoff & Johnson, 2008). They are cognitive scientists interested in, or working in, the field of linguistics. And you’re probably familiar with the work. Metaphors We Live By was about how language has words like up, down, backwards, downwards, that are used in an abstract sense. And their conclusion was that metaphors are neural phenomena. They recruit sensory-motor interfaces for use in abstract thought. And this was just mind-blowing to me as I read it. I had to read it several times, not because I didn’t understand what it meant the first time, but because it was just so unbelievable. They’re actually proposing that we take things that we learn by walking around in the environment, and then we use those structures to think. So in terms of a mechanism explaining thought and perception, I thought this was just absolutely mind-blowing. And there’s actually a whole body of research, by Lakoff and Johnson together and separately, and by other people, that is putting meat onto this theory. But again, because it’s so unbelievable, I feel like we’re still struggling to really incorporate this into our intellectual culture. Varela also talks about how we lay down a path in walking. A lot of people like this phrase, but many use it in a sense that’s not literal. Read in the context of Lakoff and Johnson, I think he might have actually meant it literally. As in, our thinking and our walking might not be different things.
Something that also points at a very interesting mechanism, one that deals with the muddiness of perception and thought, is an article that came out in 2016 about a very strange phrase called Homuncular Flexibility, the human ability to inhabit non-human avatars. And again, when this came out I had to read the title a few times because it was just so unbelievable. It basically states that this theory posits that the homunculus is capable of adapting to novel bodies, in particular bodies that have extra appendages, and that the recent advent of virtual reality technology, which can track physical human motions and display them on avatars, allows for the wholly new human experience of inhabiting distinctly non-human bodies. Ever since I read this, I have been running my own series of experiments in VR, and I have discovered, to my surprise, that it is actually extremely easy to, let’s say, adapt to non-human bodies, to feel like you’re truly embodying all sorts of things. I thought it would take much longer than it actually did. So, with technology like VR, these kinds of things are not even some super-theoretical thing that can be achieved in a high-tech lab in some university somewhere. It’s actually in the hands of teenagers right now, who are spending more and more hours a day on VR platforms like VR Chat. But I’m digressing a bit from the mechanism. So this is pointing again to a mechanism that is quite fascinating. Even things that we thought were fixed, like our identification with our body and our limbs, might really not be that fixed at all. And again, reading this against Lakoff and Johnson, where metaphors recruited through sensory-motor interfaces are used in abstract thought, all sorts of things crossed my mind, like, “Okay, so I’m inhabiting the octopus for a few hours. What kind of sensory-motor interface has that introduced into my brain, and how will my abstract thoughts be changed by the fact that I’ve just spent half a day as an octopus?” Now, Merleau-Ponty and the traditional phenomenology and enactive cognition that I started with have been talking about things like this since the beginning, and they all contain very precise examples of these mechanisms. For example, Merleau-Ponty has a famous story about how a man with a cane is actually using the cane as an extension of his body, because blind people who use canes report feeling the tip of the cane touching the sidewalk. They’re actually very precise in that description; if you read what they say, they describe feeling the graininess of the asphalt and the pavement. They really feel that they are there, at the tip of that cane. So these mechanisms have been known, but I feel like now they are starting to be taken, quote-unquote, a little bit more seriously, or their implications are starting to unfold much, much faster before us, because of technology like virtual reality.
And here is something that, for me, is also a mechanism, but it does not deal directly with perception, the movement of the body, and thoughts. It deals more with the sense of self. And I know that the sense of self is a very different topic than movement and environment, but it’s going to come up later, so I want to throw this in here. The last book that was published of Foucault’s writings is a series of lectures he gave called Technologies of the Self. He never finished those lectures; he passed away. But this is what he describes as where he saw his work going, and what he would like to do next. What does he mean by ‘technologies of the self’? He’s very interested in what he calls the ‘emergence of a subject’. He’s very interested in how people feel like they have a ‘self’ and an ‘I’, how they describe that self, and how that self changes. In this context, he’s looking a lot at people like Rousseau, and how Rousseau not only described the modern subject, but how his writings actually contributed to what Foucault calls ‘the creation of the modern subject’. And this is important in the context of us dealing with, or having on our hands, a piece of technology that allows people to spend half a day as an octopus. Foucault says that for a long time ordinary individuality, the everyday individuality of everybody, remained below the threshold of description, and then people like Rousseau come in and start to describe how it feels to be human, and how it feels to be a subject of the modern state of France, and so on. So, from now on, I will refer to this as subjectivity, in the sense of: how does it feel to be a human self, a human individual; what could contribute to creating that particular form of how it feels to be you; what could change how it feels to be you; and under what context does that change? And it’s very interesting to me that Foucault himself uses the word technology, although in his writing he’s not specifically looking at tech the way we think of technology right now. So, just a quick summary; we’re about halfway through.
But I want to summarise a bit of what I’ve been trying to, kind of, do so far:

• I’ve been trying to establish the fact that there is a gap between objective reality and our human world.
• And my work is about trying to understand this gap a little bit better.
• And the mechanism that, basically, connects us to the world, that does this structural coupling, in the words of Varela, is malleable.
• And we are just starting to scratch the surface of what that means.

But the establishment of this gap is the one thing that I want you to take away from the first part. I think I’m going to skip through this, but these are some of my favourite articles that I’ve been reading lately. They’re all about how the things that we see might not really be about what’s outside the window. They might be more about our own stories and our own cognitive processes. It’s that 80-plus percent that’s about something else. And yet, we’re talking about imagery; we’re talking about what we think we see.
This paper, in particular, maybe I’m just going to explain to you very quickly what this one is about, is about this fascinating thing called ‘binocular rivalry’. These terms are kind of interesting sometimes: ‘binocular rivalry’ or ‘homuncular flexibility’. I’m very happy when scientists get so creative with naming these things. So, what is binocular rivalry? Basically, they did this experiment where they got a person in a room and showed that person, at length, either a face or a house. Then they put some kind of glasses with a screen on that person, some kind of VR glasses, that flashed, for a fraction of a second, a house to one eye and a face to the other. And what they found was that the brain decided to, quote-unquote, show the person, or the person then reported that they saw, either the house or the face based on the one they had seen previously. So the processing mechanism was like: okay, I’m seeing a house, and I’m seeing a face. Which should I give access to consciousness? Which one would be more relevant for the story of this individual? And the one that was, quote-unquote, shown to consciousness was, of course, the one related to what the individual had been shown at length before these flashes of images.
So, in this gap that we have established between reality, human beings, and our perceptions and thoughts, where and what are the strings, and can tech pull them? I think we have already answered this with things like homuncular flexibility, showing that we can inhabit an octopus and almost anything non-humanoid in VR. But I haven’t seen any papers yet, maybe because this is just too crazy a proposition, that take the next step towards Lakoff and ask, “Okay. How does inhabiting that octopus then change the way you think? Change your thought process?” And, of course, there is no clear answer to that. The waters are very murky. The situation is incredibly complex.
But the fact remains that tech is starting to interfere with these things.
And it’s starting to get more and more powerful.
And we are starting to see cognitive processes being altered.
I believe we just don’t have a choice but to start daring, proposing things and forming hypotheses, and going into the murky waters of the complexity of this whole thing, as long as we want to work in tech. So how does this relate to virtual space? Because at the end of the day I’m an architect. And I’m reading these things, and what goes through my mind is the possibility of testing them by designing spaces.
But before I go into a tentative framework that I’m using now, I want to start with what I call ‘Observations from Field Work’. I spend a lot of time in VR. We develop a lot of VR applications in the office. I do a lot of events and talks in AltSpace and VR Chat. And I think it’s important, before we dive into the theory, to also take into account the stuff that I see out there that seems important. What is the bottom-up side of the work?
The one thing that I find fascinating is what I call the Control+Z effect. This is a series of behaviours that I started to notice in myself, and sometimes in other people as well, that has to do with things you learn in VR, or in another kind of environment, that then cross over to physical reality. They reflect an inability of the brain to make a call between: okay, what are the rules of the reality I’m in now, and what are my behaviour allowances here versus my behaviour allowances in that other kind of reality? And I’m calling this Control+Z because I first noticed it many years ago, before VR, but I’m seeing similar things coming out of VR. When I used to just do architecture every day, without this whole tech stuff (I’m still an architect), I used to build a lot of cardboard models. But the workflow for my architecture projects was actually many hours a day in a screen-based software product, where I would just model things with the mouse and the keyboard, and sometimes I would also have, in parallel, a cardboard model of the same thing running. So sometimes I would make decisions in the screen-based software, and sometimes in the cardboard model. And on several occasions, late at night, when I was tired, so my brain was struggling a little bit, while working on the cardboard model and making a mistake, my left hand would immediately make this twitching movement, and my fingers would position themselves in the Control+Z position on the keyboard while I was working on the cardboard model. And I would always be kind of surprised, then of course realize what had happened, catch myself in the act, and, a little bit shamefully, put my left hand down: okay, there is no Control+Z. What was happening was that my brain was deep into this screen-based computer software, where there is a ledger that records all the actions that you do in that environment, in time. You do Control+Z and you go back one step in that ledger. So my brain had gotten used to the idea that that environment, quote-unquote, and reality, can also go backward. And then, of course, in physical reality the arrow of time does not go backward. So that’s the first observation.
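
[The ‘ledger’ described here is, in software terms, an undo stack: every action is recorded in order, and Control+Z reverts the most recent one. A minimal sketch in TypeScript; all names are illustrative, not from any particular modelling product:

    interface Action {
      apply(): void;   // perform the edit
      revert(): void;  // undo the edit
    }

    class Ledger {
      private history: Action[] = [];

      do(action: Action): void {
        action.apply();
        this.history.push(action); // every action is recorded, in time
      }

      undo(): void {
        // Control+Z: step the environment back to its previous state
        this.history.pop()?.revert();
      }
    }

Physical reality keeps no such ledger and exposes no revert(), which is exactly the mismatch the twitching left hand reveals.]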
Then, I’m seeing a lot of emerging phenomena in virtual worlds. I’m seeing people discover new possibilities for being and for interacting; crazy things happening in VR Chat. If you’re not familiar with that platform, I highly recommend it. I think it’s by far the most advanced VR interaction you’ll see: worlds being developed, forms of community building, community life intermediated by this technology. All of that is happening in VR Chat. And they’re years and years ahead of any other kind of experience, or game, or anything else that I’ve been seeing. So I’m seeing signs that there are emerging social dynamics and mechanisms for negotiating meaning in these collective groups and interactions that are extremely interesting.
This is also a bit of a topic for another day, but I feel like it’s so important that I cannot not mention it. We’re slowly but surely not the only intelligent agents anymore. We interact with bots on Twitter every day, and sometimes we don’t even know that they’re bots. And people are experimenting with introducing all sorts of AI-driven agents into virtual worlds. We have Unreal and Unity putting out their extremely realistic-looking avatars that are AI-driven, and so on. So we’re not quite at the point where we go into VR Chat, my favourite platform, and can’t be sure whether the other person is human or not. But I think, well, I don’t know; if we’re not already there, we will be there pretty soon. So there’s a significant layer of complexity being added right now, on top of this already complex and messy situation, by the introduction of non-human cognitive systems.
All right, so what is the proposition for what virtual space is? This is how I think about it. A new environment is basically a system you’re trying to solve. It’s a little bit like a game. So this is the structural coupling of Varela. You go into a game, you go into a new building, you go to a new country to visit, you’ve just landed at the airport; the first thing you do is try to figure it out. You’re trying to understand where you are and which way to go. Are there any things that are strange? Your brain is turning fast to establish, as soon as possible, this structural coupling with the environment, which gives you control over the environment and understanding.
But I want to argue that, in that process, you’re not just dealing with this foreign environment; you’re actually also encountering the system that is you. You’re also dealing with, and discovering, your own cognitive processes that are engaging with the environment in attempting to couple. So, roughly put, designing the environment is designing the subject that interacts with it. How would an approach to space-making look if we just assumed, in the light of all of this talk about cognitive neuroscience, that the environment and the person are the same thing? That, somehow, they’re so tightly connected we cannot disconnect them. It’s like the bee and the flower.
If we were to pursue this kind of methodology, what would our tools be? Where would we even start? And I can only tell you how I’ve started doing it. I’m basically doing the best that I can to form hypotheses that have to do with knowledge that I’m taking from these papers, and knowledge that I’m taking from my own experiences and introspection.
One of the mechanisms that I’m very interested in now, and I will show you how we use it in one of our projects, is the fact that, unlike screen-based software or interfaces, which only or mostly address our visual cortex, VR throws in the ability to control or encourage behaviour that activates the motor cortex. And this is an absolute game-changer because, as a lot of these papers reveal, the organism’s attempt to integrate sometimes conflicting information coming from the motor cortex and the visual cortex is one of the most important paths we have for trying to understand more complex cognitive processes.
One way is to try to understand this relationship, and then to try to use VR to test things. So, what if the eye sees something, and then the body does that? What happens next? Can you always predict what the person will do? You can, if you only show them and make them do what they would see or do in physical reality. But the moment you depart from that, the moment they either see something else while doing something they would do in physical reality, or the other way around, very interesting things very quickly start to happen. Now, to what end? I think this is something that will have a different answer for every developer or every company. For me, this is primarily a methodology that I’m only able to pursue and explore using VR and AR. It is not something that’s possible for me with traditional forms of architecture. That’s primarily the reason why, as an architect, I am in AR and VR and not just in traditional architecture. To what end? For me, there are many answers, but one today is that I’m interested in new ways of thinking, and new forms of subjectivity. That’s why I introduced that slide earlier about Foucault and subjectivity. I’M INTERESTED IN NEW FORMS OF BEING HUMAN. And I think that can be pursued through this kind of methodology, but we’ll see how things go in AR and VR. I think new forms of subjectivity can also be pursued through traditional architecture, but there are many reasons why that is a little bit slow.
Okay. And now the last part of the presentation is the fun part. This is where later you can tell me, “Hey Andreea. The things you said, and the things you did, or just the way things turned out do not quite match.” But I would love to hear those kinds of questions.

All right. Implementation.

So this is an older project, but I think it’s very relevant in this context, so I decided to start with it. This is a, let’s call it, art project called Say It. Basically, I designed these different shapes; they’re in wax here because I was planning on pouring them in bronze. I never got to pour them in bronze and integrate these RFID tags into them. But basically, this is based on a story from Gulliver’s Travels. Gulliver goes to Lilliput, the country with the little people. And he runs into these Lilliputians who cannot speak in words; they speak with objects. They carry on their backs a big bag of objects, a sample of every object they need to communicate. So if they want to tell you something about spoons, they will go into their bag, pull out a spoon, and show it to you, and then you’re supposed to, quote-unquote, read that they mean to say spoon. So this intersection between language and objects, or objects as language, and the many complications that result when trying to use objects as language, because you don’t have syntax, was something I became very interested in. What is the syntax if you just have the objects? How does it arise? So, the idea with this project was to take two people and give them a bag of these objects, which are somewhere in between letters and objects, and to design ways in which this could maybe give some sort of feedback; but, above all, to observe how fast, or to what extent, or in what direction people start to use these to communicate. The people are not allowed to talk to each other, of course; they’re given something they’re meant to communicate to each other and they only have these objects. And then they’re given an hour to try to use these things to communicate, and basically, they have to negotiate meaning for these abstract shapes.

This is an AR game that we have developed for a museum. And here we used one of the approaches that I mentioned earlier. We hypothesised a certain reaction that would happen if we presented the visual cortex with information conflicting with what the motor cortex was reporting to the central nervous system. And it worked. We were able to trick people into believing that their body was floating upward, by about 20 meters. So we basically triggered a mild out-of-body experience. It is mild, something quite nice; it’s a game that happens outdoors, it’s triggered by GPS coordinates, and you’re basically exploring a story of the German [indistinct] in the south of Germany. It’s very integrated with a story. It’s a very mild thing. It’s not scary at all. But we were surprised ourselves that we were able to use some of these theories to make something like this that actually, quote-unquote, works.
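
[One way such a visual-motor conflict can be produced is to ramp the rendered viewpoint slowly upward while the standing body reports no motion. The sketch below shows that generic technique in TypeScript against the WebXR API; it assumes an already-running immersive session named session and WebXR type definitions, and it is an illustration only, not Numena’s actual implementation:

    declare const session: XRSession; // assumption: an active immersive-vr session

    async function startFloating() {
      const floor = await session.requestReferenceSpace("local-floor");
      let lift = 0; // metres of purely virtual ascent

      function onFrame(time: number, frame: XRFrame) {
        lift += 0.005; // raise the viewpoint a little each frame; keep it gentle
        // Shifting the reference-space origin downward makes the viewer
        // appear to rise, while their motor system reports standing still.
        const lifted = floor.getOffsetReferenceSpace(
          new XRRigidTransform({ x: 0, y: -lift, z: 0 })
        );
        const pose = frame.getViewerPose(lifted);
        // ...render the scene from `pose`...
        session.requestAnimationFrame(onFrame);
      }
      session.requestAnimationFrame(onFrame);
    }

The eyes report ascent; the vestibular and motor systems report standing still; the mismatch is what the paragraph above describes exploiting.]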

This is a three-dimensional menu. What you’re looking at here is, basically, a folder with files. From the technical knowledge that we have today, it’s something very basic; something a programming student will understand everything about in their first hour. But we wanted to see how we could take a folder with files and make it a three-dimensional experience. So we went very literal about it. We used what is called the metaphor approach to UI, UX, and interfaces, but with a bit of a twist. You are in an elevator where you can go up and down to infinity. And in each one of these TV slots, you can save one of the files that you produced in this application that we’re working on. You can save it in here, and you can then rearrange them, because we’re working on putting smart tags on them. So it’s kind of like creating a map, but then you can reorganise them so that they form a different kind of map. And, what’s even more interesting, we also tested another thing. You can go in, on this chair, pull a file out of the slot next to this strange TV screen, and throw it down into the abyss. It’s like a big VHS tape that you kick out of this chair, and you can look down and see it drop. We’re very interested in understanding how people react when they have to interact with abstract things, like files, as if they were physical objects they can throw. And this is part of a much more complex exploration that we’re pursuing. This is part of the same application.
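
[The underlying move is simple: the flat index of a folder becomes a set of spatial slots the user rides past, and deletion becomes a bodily act. A minimal sketch of that mapping in TypeScript; all names and dimensions are illustrative, not taken from the application:

    interface FileEntry { name: string; }

    const SLOTS_PER_FLOOR = 4;
    const FLOOR_HEIGHT = 3; // metres between elevator stops

    // Map the nth file in the folder to a slot position in the shaft.
    function slotPosition(index: number): { x: number; y: number; z: number } {
      const floor = Math.floor(index / SLOTS_PER_FLOOR);
      const slot = index % SLOTS_PER_FLOOR;
      return { x: slot * 1.5 - 2.25, y: floor * FLOOR_HEIGHT, z: -1.0 };
    }

    // "Deleting" is pulling a file from its slot and letting it fall:
    // the data operation is ordinary; only its embodiment changes.
    function throwIntoAbyss(files: FileEntry[], index: number): FileEntry[] {
      return files.filter((_, i) => i !== index);
    }

The data structure is unchanged from any file manager; what is being tested is how the body treats it once it has a position and a weight.]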

This is the kind of environment you can make and then save on the screen. And the one thing that I want to point out here is that you basically see the scene two times. You are on this roof, shown to you at one-to-one scale, and you also have a mini version of that roof. So you’re simultaneously perceiving, quote-unquote, this fake reality inside of your headset two times. And we’re experimenting with all sorts of interactions in here, because you also exist in here two times. You exist at your perceived one-to-one scale, and what we call the “mini-me” is also in here. So there’s a mini you in there that you can also interact with. And we’re seeing very interesting things happening because, of course, this environment, where everything is there twice and there’s a mini you that you can do things to, has a very different logic of the universe than what we are used to in physical reality.

This is a Borgesian Infinite Library based on a Penrose tile pattern. We made this kind of for fun, to explore the psychological limits of environments. This is actually a VR environment, but it’s a bit much, so when you go in, your mind starts to lose it a little bit. We just wanted to make an environment where we could observe at what point an environment becomes too much, and what exactly the psychological effects are that you start to experience in the first person when it does. And why is it too much? Is it the repetition? Is it the modularity? What exactly triggers those psychological effects?
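
[Penrose tilings are aperiodic: every neighbourhood looks familiar, yet the pattern never repeats globally, which may be part of what makes such a space disorienting. A standard way to generate one is golden-ratio subdivision of Robinson triangles (half-rhombi); the TypeScript sketch below shows that classic algorithm as an illustration, not Numena’s actual code:

    type Vec = { x: number; y: number };
    type Tri = { kind: "thin" | "thick"; a: Vec; b: Vec; c: Vec };

    const PHI = (1 + Math.sqrt(5)) / 2; // the golden ratio

    // Point a fraction t of the way from p towards q.
    const toward = (p: Vec, q: Vec, t: number): Vec => ({
      x: p.x + (q.x - p.x) * t,
      y: p.y + (q.y - p.y) * t,
    });

    // One round of P3 subdivision: each thin half-rhombus splits in two,
    // each thick half-rhombus splits in three, in golden-ratio proportions.
    function subdivide(tris: Tri[]): Tri[] {
      const out: Tri[] = [];
      for (const { kind, a, b, c } of tris) {
        if (kind === "thin") {
          const p = toward(a, b, 1 / PHI);
          out.push({ kind: "thin", a: c, b: p, c: b });
          out.push({ kind: "thick", a: p, b: c, c: a });
        } else {
          const q = toward(b, a, 1 / PHI);
          const r = toward(b, c, 1 / PHI);
          out.push({ kind: "thick", a: r, b: c, c: a });
          out.push({ kind: "thick", a: q, b: r, c: b });
          out.push({ kind: "thin", a: r, b: q, c: a });
        }
      }
      return out;
    }

Seed it with a wheel of ten thin triangles around the origin and call subdivide a handful of times; the triangle count grows roughly by the golden ratio squared each round, and the triangles pair back up into the familiar thin and thick rhombi.]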

And this is my last slide. This is a game that we’re working on, also highly experimental, where we’re putting a lot of these things that we’re thinking about, reading about, and exploring. We’re collecting all of this into what we call a VR testing environment called GravityX. And the motto for this is the first line from John, but with a bit of a change. It goes: “In the beginning there was space, and the space was with God, and the space was God.” So we basically replaced the word “Word” with “space” in the first line from John. All right, that was it. Thank you for bearing with me through this.

Q&A

Frode Hegland: It was an absolute pleasure. Very, very grateful. I mean, obviously, lots of questions and dialogue now, and amazing. My initial observations, kind of, to you and to the group. First of all, thank you. And secondly, I was asked a while ago, “Do I think the future is improving, or getting worse?” And my answer was, “It seems to be diverging. Getting much better and much worse.” You’re in Germany now, right? So we’re dealing with a full-on war in Europe. We’re dealing with horrible things in other parts of the world. And then, we have this. When I defended my viva to Claus and Nick about two weeks ago, they very rightly questioned some of my language use around mental capacities. And my defence to them was, “We just don’t know enough to use hard language.” So Claus, if you don’t mind taking the first half of this presentation mentally into my thesis, that would be great. What I’m trying to say with that is: if our species is to survive, we have to evolve. And we’re the only species known to have a chance to have a say in our own evolution. So I think that what you have shown today is foundationally important. It was just really beautiful. We have to take this very seriously. In our group here, we call ourselves the Future of Text Lab, but we have decided that what we mean by Text is almost anything. It used to be very narrow, but because of VR, we’re doing something else. And just two more comments before I open up the virtual floor here. One of them is: I believe that the most powerful thing human beings have is imagination. And imagination has an enemy: truth. A teacher, when I was in university many years ago, said, “Truth kills creativity. Because when something is something, it is something, and you’re not going to look at it in a different way.” We saw that with normal, traditional desktop computing; it basically became word processing, email, web, and a few other things. A lot of the early stuff isn’t there. When we today, in our community, try to make more powerful things, people say, “Huh. But that’s not a word processor.” Or, “Uh. That’s not that.” Because imagination has been killed by truth. It is something. A little thing that I read in New Scientist, I think two days ago: in our bodies we have this thing called fascia, which is a connective tissue that goes around all our organs. I’m mentioning it for two reasons. First of all: it is kind of like an internet for our body that’s not our central nervous system. But until 2019 it was just thrown away. If you’re doing a dissection, or if you’re cooking a beef dinner, you would just get rid of this stuff, because we didn’t have the ability to investigate it. And again, until 2019, nobody had looked at it. And now we’re realising that it has about as many nerve cells, roughly 250 million, as our skin. When you are looking at the way that our brain connects with the world, what I really liked about the way you do it is that you are clearly very intelligent, but you’re also very humble. Clearly we have evolved with our environment, but the implications of what that means are extremely hard for us humans to fathom, I think. So, I just wanted to thank you very, very much for having the guts to look at this most foundational thing of what it is to be human, and for us to together try to use virtual reality type things to examine how that may change.

Andreea Ion Cojocaru: Yes, thank you so much for saying that. Well, I think I have the guts to talk about these things because I’m an architect.

Frode Hegland: That’s a good point on many levels.

Bob Horn: I’m so excited by this presentation. It’s just so delightful. George Lakoff was a friend of mine and a colleague; I audited his course over in Berkeley. I wrote the obituary for Varela for the World Academy of Art and Science. The whole framework in which you’ve enmeshed us is wonderful, and it really excites me now to get into virtual reality. I’m among the older people here in this group and I’ve resisted. The Gulliver’s Travels metaphor was wonderful. I have a collection; one of the things I do is put words and images together. Visual spaces. As you can see behind me. Mostly I do it in two-dimensional murals that are 12 feet long and so forth. I actually work with international task forces on this. The one behind me is the one I did on the avian flu 15 years ago, on what could have been the worst pandemic. And so, anyway, in looking into just the Gulliver thing, the thing I want to get off my mind: I had forgotten all about this bag of stuff. I have a bag of objects which are arrows, which I use in these murals. A bag of 200 arrows. Different kinds of arrows, that have different kinds of meanings, that I would like to throw out there and give to you, and see what you do with them in virtual reality. So, anyway, I’m just filled with exciting possibilities after this. I don’t want to occupy any more time, but thank you very much. It was wonderful.

Brandel Zachernuk: Thank you. This is super exciting. And your comment on, sort of, homuncular flexibility, and, sort of, hinting at neuroplasticity, is something that I’ve definitely observed in my work. I was one of the people responsible for some of the launch titles for Leap Motion. One of the things that was really fascinating for me there was having the number of degrees of freedom that one has there, and being able to just turn those things into whatever you wanted. And after a while, the contortions that one’s hands were undertaking completely disappeared. The simplest of these was just tilting a hand, but then amplifying that three to four times. Most people didn’t realise that this angle wasn’t that angle. They completely thought that their hand was down, despite the fact that that would have been anatomically impossible. So I think that we have an enormous range of opportunities available to us once we have the ability to, kind of, recruit more of our stuff. One of the first things that I wanted to talk about, or ask you about, is: you were pretty disparaging of the term “illusion,” which I’m in agreement with. It reminds me a lot of Gerard [indistinct]’s frustration with people talking about cognitive bias, when the sort of embodied, situated cognition kind of things you’re talking about also prioritise cognition for a reason. So, have you come across, or what is your take on, cognitive bias and how it relates to this as well?

Andreea Ion Cojocaru: Well, most of the things I’ve encountered that were referred to as cognitive bias were biased with respect to some kind of main understanding of cognition, and we do not agree on what the main understanding of cognition is. So I don’t know from what point of view you think that particular thing is biased. So I don’t find those conversations, or the term itself, particularly useful from the perspective of my interests. Because I don’t think we have the common ground or understanding that would allow us to meaningfully talk about bias.

Ken Perlin: Everything you’re saying is absolutely wonderful and resonates very strongly. In support of this, I’m thinking that there’s this phenomenon where, when something becomes normal, we tend to forget that there was a time when it wasn’t normal. So everyone here has had the experience of an automobile being an extension of our body. And we all read books, an object that kind of didn’t exist at some point. Even the fact that we wear shoes now; there was a time when people didn’t wear shoes, and the whole world would have seemed very strange. And obviously phones and all these things. So it seems to me what you’re talking about is kind of the next phase of, or actually putting some rigour behind, a phenomenon that exists because we are the creatures of language, so, therefore, we live in this world where I say the word ‘elephant’ and you’ve got an elephant in your head. And that happened a hundred thousand years ago. We’re kind of catching up, in some sense, to understanding what we do as a species. And I agree with you completely about the more radical vestibular nature of, “I put on a VR headset, and now I start having these new kinds of novel mappings.” But, on the other hand, the language of cinema is something that might not have made any sense to someone before we all learned how to watch movies. That’s a completely crazy mapping if you were not used to it; the radical point of view changes from moment to moment, but yet doesn’t drive us crazy. So I feel like not only does what you’re saying make a tremendous amount of sense, but it’s also making sense of things that happened long before we even had computers. And that’s kind of what we do in a way; we just didn’t quite acknowledge it yet. And I wonder, what do you think about that?

Andreea Ion Cojocaru: I think we’re social creatures. So sharing a reality is how we survive. It’s the kind of organism that we are. So it’s important that we can share a reality, and the reality that we share cannot be the actual reality. It’s just not. So we share a story about that reality. And it takes society to change the story. Individual people cannot change the story at a level that’s profound or meaningful enough at all. There are these lonely people who sometimes can become important, and we call them innovators; when everything is good, we call them a pain in the ass. I think now is a particularly difficult time, in which we happen to need innovators. Things are not looking good at all in terms of where society is going and what we’re doing to the planet. So I think there’s a particular urgency to call on the people who can shake up the story. That’s also a bit of the reason why I introduced the talk about subjectivity. There are two reasons why I go into these things with VR. One is because I personally believe this is a path and a methodology that gives us the most ability to understand what the technology can do. But I also think the promise of a change in subjectivity, of a change in the collective story, of a change in how it feels to be human, is appealing to me, because we are at a point where we really need that right now and we can’t afford to wait. So there are two slightly different reasons why I chose to go down this path. And, yes, I think all of this has happened in the past. I think the collective story controls the narrative of everything. That’s why, for me, the moment VR reaches the mass market is actually very important. Right now, we’re still talking about this technology being at the fringes. We have what, half a million people? A million people in VR Chat? And I think the numbers are much lower in terms of concurrent users. But where are we taking things? If half of our teenagers start spending half a day as an octopus, how do we make sense of that, and how do we take this tech to a point where we… I think that if we continue to avoid a serious discussion on these mechanisms and on methodology for XR developers, we will fail to have a good grasp on this technology. It’s a hard conversation, because a lot of people, as I said, either believe that these things are illusions or do not think it is part of their discipline to go into this discussion. My position is: you just don’t have a choice. We just have to go down this path, or at least have a conversation and debate methodologies. Because otherwise we will be in a situation where, on one hand, the whole planet is going down the drain, and on the other hand, we have to put half of our teenagers in some mental institution because they spend their days as an octopus. I’m putting this extremely bluntly. I should mince my words, but sometimes I get this sense of urgency coming from these two directions. And the best I can do, with my ability to think through things, is to go as deep as I did today and to ask these difficult, unanswerable questions, to try to prevent, or contribute to the prevention of, these two big dangers that I’m seeing.

Ken Perlin: Thank you, yeah. A day will come when the people who get put into institutions are the ones who refuse to learn how to be an octopus.

[internet connection issue]

https://youtu.be/4YO-iCUHdog?t=4915

Mark Anderson: I’m responsible for the recording, which sadly only started basically at the end, just at the end of your answer to Brandel’s question. I love this. Interestingly enough, the bit about the homunculus: my understanding came at it from a completely different angle, because I came across it in V. S. Ramachandran’s book, Phantoms in the Brain, back in the late 90s, which was to do with people with neurological damage and how they were adapting their bodies. But, of course, it’s blindingly obvious to me that this would map across; why would it not? Because if you can wrap your mind around mapping your mind away from a limb you no longer have, putting a couple of extra octopus arms on isn’t such a big stretch. I just want to come back to a couple of things that it would be interesting to get your thoughts on a bit more. I was listening to your point about the Command+Z, and I was wondering, and it’s hard to phrase this in a way that doesn’t sound glass-half-empty, which isn’t where I come from: when we bring these things back, I suppose the answer is we don’t know whether we bring back good things or bad things, because, in a sense, we can train ourselves to do things we normally do for not particularly societally good reasons. We train people to do things very well, and then we have problems teaching them not to do that. So I’m wondering if there’s another interesting element in this as we explore it. On the one hand, potentially the gain, going back to my opening point about the neuroscience people at San Diego trying to mend broken bodies and things; but just being able to effectively work through a different set of control mechanisms is really interesting. So I don’t know if you have any thoughts on that. And the other thing I was interested in: when you mentioned the 80/20 thing, you were also saying, effectively, that we’re not using, or we don’t know how we’re using, 80% of our neurological inputs. Is it that we don’t know what it’s doing, or do we just think it’s not being used?

Andreea Ion Cojocaru: Yeah. Oh, I can clarify that. The first example of this that I looked at is actually Varela’s own research. He was studying vision. And he talks about this in The Embodied Mind in 1991. So the information is entering through the retina, the optical nerve. And the visual cortex is forming the image. So that’s what our consciousness perceives as what’s out the window. And Varela concluded from his own studies on vision that only 20% of the information that’s coming through the optical nerve is used by the visual cortex. And there’s very recent research, from a few months ago, that is reinforcing that for various parts of the brain. So 20% is, quote-unquote, actual. But the thing is, in the beginning Varela was not really believed, and there was a lot of pushback on that. They were like, “There’s no way this is true”. I recently listened to a podcast with a neuroscientist saying that amazing, completely shocking things are coming out of research right now, showing that 80% or more is what’s called top-down influences. And she sounded completely like, “Well, this is science, so we must believe it. But we still can’t really, or don’t really want to, believe it. And it looks like it could be more than 80%”. Her voice was shaking as she was saying that. And I was like, well, Varela said this 30 years ago. So there’s some degree of homomorphism between us and the environment, but again, if you listen to other people, there’s no homomorphism at all. It is that 20% or less; the rest we’re making up. We’re making it up. But it’s a collective making it up.

Peter Wasilko: I was wondering if you had any thoughts about the use of forced perspective and other optical illusions in real-world architecture in order to create a more immersive environment?

Andreea Ion Cojocaru: I think, in the physical world, we are experimenting with AR in creating illusions. I don’t know if that’s what you mean. So my example of the AR app where we create this out-of-body experience was a little bit like that. But for me, it’s very much connected with what we are trying to achieve. And for our work, it’s not immersion. I’m not very interested in immersion for its own sake. It’s like, what does that mean? Does it mean you really believe that you’re in VR? I don’t know if that’s so relevant to my interest. We create illusions, but only because we want to achieve a certain feeling, or emotion, or cognitive process, or trigger a certain thought process. So the illusion has to be connected to that. By itself, just being in an environment and thinking it’s another kind of environment, or having the illusion that it is bigger, or smaller, or just different, on its own, without being part of a larger strategy, is not something that we would typically pursue. I don’t know if this answers your question.

Peter Wasilko: Yeah, pretty much. I was thinking of trying to design environments to achieve certain emotional and cognitive effects. So I think we’re running in the same direction.

Claus Atzenbeck: Yeah. First of all, thanks for this talk. I have three quick questions, I guess. So you showed one project: it was this elevator, basically, which you can use to go to some TV screens. Can you say a little bit about the limitations we may face in a virtual 3D world? For example, imagine that I have some zooming factor implemented, such that the user could zoom in up to infinity, basically. This would change the perception of the room. So I would become smaller, and smaller, and smaller, and the space would just become bigger and bigger, so I could, actually, have different angles. Is this something the human could still work with? Or, for example, what about rooms which have contradictory dimensions? I imagine this Harry Potter tent, for example, which is larger inside than outside. Is this something a human can actually deal with? Could a human actually create a mental model of that, since it cannot happen in the real world? That was the first question.
The second one is a general question about visual space. VR, I mean, is all about visuals. This is just one channel, basically, that we look at. Did you think about, well, first of all, why you picked that and not other channels which would target other senses? What do you think about multi-modality, for example, using different senses? And also, what would be the potential, basically? When you mentioned this Control+Z thing, I thought about the muscle memory I have for typing a password, for example. When I actually look at the keyboard, it becomes harder for me to type in the password. And if I see a keyboard which has a slightly different layout, where possibly two keys are exchanged, like between the German and the U.S. American keyboard, it becomes almost impossible to type this password fast enough, because I’m kind of disturbed by the visuals. So wouldn’t it make sense to actually ignore the visuals for some projects, at least, and just think about the other senses, basically?
And the last question is more of a general nature. Do you think it’s really beneficial to try to mimic the real world within the computer? Like a 3D world which almost feels like being in the real world? Or do you think we should focus on more abstract information systems which may be more efficient, for example, than using an elevator going up and down?

Andreea Ion Cojocaru: Yeah, thank you for that. I think one and three are connected. One and three are about the elevator. The first question was: could it be too much for us to deal with these infinite spaces and this shrinking and expansion of our perception of the body, because it’s so drastically different? Up to a point, we can definitely do it. Just like the octopus. I do think we can do it. We will hit boundaries and borders, and I’m fascinated by that. So part of our more experimental work is to see where those boundaries are, and what that means. Because, yes, we have adapted for quite a while to physical Reality with a capital R, whatever cloud of particles and radiation that is, right? But if the people that do not believe in homomorphism are right, and mathematically so far they look like they’re right, we actually have no structural coupling with what is out there. We completely make up the collective reality. But again, I’m going into speculation. Since I’m not a scientist, I try not to speculate in public. And when I speak in public, I just focus on the papers and keep the speculation to my interpretation of the papers. Going in this direction would mean going into papers that are not commonly accepted as science. So that’s a big parenthesis. I believe, assuming we have homomorphic structural coupling with Reality with a capital R, that we will hit boundaries. I think VR can quickly put us in environments that we can’t deal with and that will feel uncomfortable. I’m interested in exploring that boundary. I don’t want to go beyond boundaries; I have no interest in making anyone feel uncomfortable. But I feel like we don’t really know what the boundary is. So we’re talking about what we think the boundary might be, without actually having a good understanding of where it is.
Then, the third question was related to the chair. So I would argue that that chair is like nothing you would ever experience in reality. We’re taking something that is a little bit familiar to you, which is a chair and a joystick that moves the chair up and down, but the experience and the situation are drastically different from anything you would do in reality. Because you cannot take a chair to infinity in reality. So, about what we were doing in that environment, people say skeuomorphic, and I’m like, “What is skeuomorphic about driving a chair to infinity?” What we were doing is, we had some variables, some things that were controlled. We couldn’t have variables everywhere. We couldn’t have variables on the infinite wall, and variables on the chair and what’s around you, because it would have been too much. So we made the chair and the control skeuomorphic, quote-unquote, so we could experiment with the other stuff. And the fascinating thing was that, basically, that environment is just a folder with files. But just by doing this, and it’s stupid, the whole thing with the infinite elevator and the infinite wall is, on a basic level, the dumbest thing, all of a sudden people started to get exactly the same ideas that you just got, like, “Oh. What if I go to infinity? What if I start to have the feeling that I’m shrinking or expanding?” And you do. You do start to feel like you’re shrinking and expanding and you’re losing your mind. People started to think, “Oh. I could have infinite scenes”. They started to ask us, “Is this the metaverse? Oh, my God! The possibilities of seeing all of my files in here”. And people got excited about something that they already have. They already have that in a folder. You could have, well, not infinite, but more files than you would ever want in a folder running on a PC. But their minds were not going there, and exploring, and feeling excitement about those possibilities. So it was interesting how, just by changing the format, by spatialising something you already have, it opened up this completely different perspective. So, yeah, we call that our most spatial menu yet, because that’s basically a menu. I think there is tremendous potential in this very simple, almost dumb, shift from screen-based 2D interfaces to 3D. It’s dumb, but for some reason no one is doing it. I posted this stupid elevator and some people were like, “Andreea, this is stupid. What the hell is this? Why are you doing skeuomorphism?” Because I’m known for these ideas, and known for hating skeuomorphism. And everyone saw my elevator as skeuomorphism, and I’m like, “No, no, no. That’s really not what we’re doing”. Every single VR application out there opens a 2D menu on your controller and you push buttons. And it has 2D information. So they’re still browsing files and information in VR on a little 2D screen. This elevator was our attempt to put out there a truly spatial file browser. And the extent to which it triggered this change in perspective over who you are, what these files represent, who you are in relationship to them, what the possibilities are, was really striking. We didn’t really expect that. We almost did it as a joke. We were like, “Why don’t we model this 60s Soviet-looking elevator, and then have an infinite wall, and see what happens”. The idea of the infinite wall also came from the fact that I have a few pet peeves:
One is homomorphic avatars, which I hate.
The other one is the infinite horizontal plane that all the VR applications have.
Why in the world do we have this infinite horizontal plane in VR?
So we wanted to make an infinite vertical plane in VR. Muscle memory, yes. The reason why we’re focusing on visuals is because that’s what we’ve been focusing on so far. But in the game that I mentioned, we have an entire part of the game which is called The Dark Level. And what we’re doing in the dark level is exactly what you said: we’re exploring sound and space. You don’t see anything. So, basically, the VR headset is just something to cover your eyes and to get sound into your ears. That’s something brand new that we’re embarking on, because I agree with you; everything that I talk about is not necessarily specific to visuals. It just happens that we’re only now starting to do space and sound, as opposed to space and visuals.
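[Editor’s note: a minimal sketch, in TypeScript, of the kind of spatial file browser described above, assuming a simple grid layout: files become objects on an infinite vertical wall, and the elevator’s height selects which rows face the rider. All names here (FileEntry, layoutOnWall, visibleRows) and the spacing constants are illustrative assumptions, not the actual application’s code.]

interface FileEntry { id: string; name: string; }
interface WallSlot { x: number; y: number; z: number; }

const COLUMNS = 8;       // slots per row across the wall
const SLOT_WIDTH = 1.5;  // metres between columns
const ROW_HEIGHT = 1.2;  // metres between rows; the wall grows upward without bound

// Map the nth file to a fixed slot on the vertical plane.
function layoutOnWall(files: FileEntry[]): Map<string, WallSlot> {
  const slots = new Map<string, WallSlot>();
  files.forEach((file, i) => {
    const col = i % COLUMNS;
    const row = Math.floor(i / COLUMNS);
    slots.set(file.id, {
      x: (col - (COLUMNS - 1) / 2) * SLOT_WIDTH, // centred horizontally
      y: row * ROW_HEIGHT,                       // unbounded vertical extent
      z: -2.0,                                   // fixed distance in front of the chair
    });
  });
  return slots;
}

// Which rows can the rider see from a given elevator height?
function visibleRows(elevatorY: number, viewSpan = 3): number[] {
  const centre = Math.round(elevatorY / ROW_HEIGHT);
  const rows: number[] = [];
  for (let r = Math.max(0, centre - viewSpan); r <= centre + viewSpan; r++) {
    rows.push(r);
  }
  return rows;
}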

Claus Atzenbeck: Just one more question on what you just said. Do you think this infinite virtual 3D environment is something that people like just because it’s something new, rather than something that solves a particular problem? Because I can imagine that we could have a plain zoomable user interface, like Jef Raskin built, in which you can zoom in and check your files on an infinite 2D canvas on the screen. So is it just because it’s something new, and people are happy to use it because it’s new? So it’s like a game? That’s gamification, basically?

Andreea Ion Cojocaru: There are two things we’re pursuing with that.
One is spatial memory as opposed to semantic memory. There are studies that show that spatial memory is more efficient than semantic memory. In other words, you’re more likely to remember where you put something than what you named it. So we’re interested in where people put things. And we don’t want people to put this object that is their file somewhere with the mouse. We want people to physically move their bodies to put that something there. So we’re taking the file, which is an abstract thing, we’re embodying it into an object in VR, and we’re making people literally take it with this forklift, because we’re just being stupid right now, with this forklift, and literally put it somewhere else. So that kind of testing of spatial versus semantic memory, I think, can only be done in this context. And I don’t know of any other project that’s doing it.
And the second thing is, yeah, just this pure idea of interacting with abstract entities as if they were embodied objects, and being able to apply physical movements of the body, moving the body through space, to interact with these abstract objects. So that’s kind of clashing Lakoff together with all of these other theories. It’s like, you’re learning how to manipulate abstract thoughts by borrowing mechanisms from how the body moves through space; in a perverted kind of way, VR allows us to smash the two together.
So we are smashing them together, and we are just observing how it happens. So, no. At a conceptual level, we would love for people to have fun, but it is these two things that we are interested in learning more about. We have not made it just so people think it’s cool to go up and down.
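[Editor’s note: a minimal sketch of the spatial-versus-semantic comparison described above: record where a participant physically places each file-object, then score recall either by reproducing the name (semantic) or by pointing near the remembered spot (spatial). The names (PlacementLog, spatialRecall) and the 0.5-metre tolerance are illustrative assumptions, not the studio’s actual study code.]

type Vec3 = { x: number; y: number; z: number };

const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

class PlacementLog {
  private placements = new Map<string, Vec3>(); // fileId -> where it was physically put

  record(fileId: string, position: Vec3): void {
    this.placements.set(fileId, position);
  }

  // Semantic recall: did the participant reproduce the file's name?
  semanticRecall(recalledName: string, actualName: string): boolean {
    return recalledName.trim().toLowerCase() === actualName.trim().toLowerCase();
  }

  // Spatial recall: is the pointed-at location close enough to the stored spot?
  spatialRecall(fileId: string, pointedAt: Vec3, toleranceMetres = 0.5): boolean {
    const stored = this.placements.get(fileId);
    return stored !== undefined && distance(stored, pointedAt) <= toleranceMetres;
  }
}

// Usage: log a placement during the session, test recall afterwards.
const log = new PlacementLog();
log.record("report.pdf", { x: 1.2, y: 2.4, z: -2.0 });
console.log(log.spatialRecall("report.pdf", { x: 1.0, y: 2.5, z: -2.0 })); // true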

Frode Hegland: I’m going to go all the way back to that 80% stuff. That, of course, in a very real sense doesn’t mean anything. I’m sitting outside now and there are trees, and birds, and everything. And we have to talk, of course, about affordances: what these things are to me, which is interesting. I can see that there’s grass over there. There’s no chance of, and no usefulness in, me knowing exactly how many blades of grass there are, exactly what angle they are at, exactly what colour level they are, etc. That is not useful information for me. So obviously, the 80% stuff is all about where in our system information gets filtered, and how it’s used. There are, of course, different levels of this, and the reason I wanted to discuss this point is: in the physical world, if there is a fox or something that may come snarling up at me, then a certain type of shadow has information that otherwise wouldn’t be information for me. And it’ll be very interesting to see, when we start designing our environments in virtual reality, how we can choose to say, more intelligently, “This stuff is meant to be here, because if it wasn’t here, you would wonder why it’s missing”. Like a wall. You don’t need a wall in VR, but without it the space would feel, literally, unbounded. And here’s another piece of information about this wall, which has actual meaning to you. So I’m wondering if you have any reflections on, let’s call them hyper-surrealist worlds, where you look out the window and you can choose to see the weather tomorrow. Some of it’s kind of real and fancy, some of it is just completely insane. But that thing where some information is meant to be there, otherwise you’d miss it, and other information has actual meaning. Thank you.

Andreea Ion Cojocaru: Yeah, thank you for this question. I’m going to say some things now that I allow myself to say in public because I am an architect and not a cognitive scientist, so I’m not going to risk my reputation. The reason why the 80% is meaningful to me is because it means the 80% can be changed. The 80% is the story. So, again, this is a kind of very out-there statement, but rather than changing the environment, designing super interesting environments, and putting people in there, I’m more interested in pursuing what these research studies are implying and seeing to what extent the story can change what you see. Because the “over 80%” is the story, so if we change the story, you will not see grass anymore. Just like the way the frog cannot see a truck. Again, I don’t mean this quite so literally, but on the other hand, I do. On the other hand, there is the study that shows that if you’re holding a glass of hot water, you hear different things than when you’re holding a glass of cold water. So the evidence is on the wall, but we are really scared of going into the implications of this. And the cognitive scientists do not want to risk their reputations. Some do, and talk about these things, but they’re not exactly considered mainstream. So it is there. I mean, the research is there.

Frode Hegland: Oh, yeah. And I think that’s phenomenally useful, but the other half of this is this issue: I had a friend who was obsessed with cars. He would know everything about them. So we’d be walking down the road and he would see, at night, a taillight from behind, at an angle, and he could tell me who designed the wheels of that car. So what he saw, what was information to him, was very different from what it is for me. And looking at my son, first time I’m bringing him up today, so I need a medal: if he has touched grass of a certain kind, for instance, then when he sees that grass, he doesn’t just see lines of green. We obviously feel something with it. So, along with what you’re talking about, I look forward to being able to put in visual information that can have rich meaning for us, but in entirely new ways, beyond those two literal examples. That’s all, and thank you very much for your answer.

Brandel Zachernuk: Yeah. So you mentioned a neuroscientist. Was that Lisa Feldman Barrett†? Because if not, then I’d love to know of another one. Yes? Okay, good. Yeah, she’s amazing in terms of her exposition of the way that priors are so important to what we’re perceiving. So I’m glad we’re on the same page there.

Andreea Ion Cojocaru: Yes. She was recently on the Mindscape podcast with Sean Carroll, yeah.

Brandel Zachernuk: So that, specifically, was on Mindscape? Okay, great. Thank you. And then, the next thing I wanted to talk about: I’m really glad to hear about your disinterest in, potentially even antipathy for, immersiveness for its own sake, because I share that. People who are regulars at this meeting know my hostility to the notion of story for its own sake as well. But you’ve also brought up being an octopus. So it strikes me that you would probably not consider being an octopus to be significant in and of itself, but for some kind of functional, practical benefit, some cognitive change that you would expect to occur. Have you played with being an octopus? And what kinds of things have you observed there? Are there any signs that you do different things there as a consequence?

Andreea Ion Cojocaru: Yeah. So I use their methods. Giuseppe Riva is a researcher from Italy who is using VR and these theories of embodiment to treat all sorts of mental conditions. And he has an onboarding protocol for helping people identify with an avatar. He’s using it with humanoid avatars. But I’ve used that onboarding protocol, again, on myself. These are not things I make public, or ever will, but on myself. You basically use the thing from the rubber hand illusion: you have someone tap your actual body, and then you program something that will tap your other body in roughly the same place. And then I did an experiment to see the extent to which I can embody other kinds of stuff. This tapping helps quite a lot to go into it fast. And I like to embody spaces.
And this sounds nuts, but let’s talk about it. I like to embody a room. I like to experiment with how big I can get. And again, this is completely crazy talk, but here we are, in 2022, with VR in the hands of teenagers. So, yeah. It happens. I mean, it’s real. How fast it happens and how profound that experience is will vary from person to person. It’s kind of like how some people have lucid dreams, and some people can trigger out-of-body experiences and some cannot. But the mechanism is there. And the technology is now there and costs 400 bucks. Why do I do it? I’m interested in observing how I change. I’m interested in observing myself, and most particularly how I perceive physical reality afterwards. So I’m trying to understand this transfer and see if I can have any kind of insight into it; then I can phrase it in a more methodological way and start to form hypotheses. There are changes happening in me. I’m not at a point yet where I can talk about them with enough clarity to communicate them to other people, but they exist.
And at the end of the day, I’m interested in what Foucault called ‘Technologies of the Self’. Because what I’m doing is making myself the subject of a technology of the self; I’m using VR. But you can use other things that are not technology in the modern sense, you can use books or other kinds of things, to push a change in yourself that is very new.
And I need to understand what I’m becoming. What’s the possible direction of that? Because we might potentially face this happening on a global scale soon, with very young people. And because scientists are so scared to talk publicly about this, so scared to throw things out there, and because the VR developers are so scared to really go into this, we are left in a bad place right now, where we know we struggle. And I mean, I get a lot of shit for talking about these things. There are a lot of people telling me on Twitter that I’m wrong, but I do think it’s necessary, so I do it.
I’m interested in how these things will change us, and what the potential in that is as well. I think it’s even harmful to try to avoid it. So those developers working hard not to trigger these things are harming everyone. The tech will do that anyway, so we might as well understand it and let it happen, or at least control how it happens. But we can’t if we don’t look at the mechanism. And I think that when these developers talk about what they do to avoid it, they are not talking about the mechanism. They’re not even trying. They’re not hypothesising any mechanism that triggers these things. Their fixes are kind of like band-aids, right? They see something happening, decide it’s one thing, and try to find local solutions for it. I don’t know, did that answer your question?

Brandel Zachernuk: Yes, absolutely. And your point about being a building, I think, is really thrilling. It reminds me of some stuff from Terry Pratchett, in Discworld; he was a remarkably neuroplastic kind of writer. But it also reminded me, when we were talking about the channels of information that we’re using to explore and mess with, that proprioception is completely distinct from vision. And to that end, the most exciting thing for me is virtual reality’s capacity to impact what it is that we mean to do with our bodies, and what kind of impact that has. So it’s very exciting to hear all of these things put together. Thank you.

Peter Wasilko: I was wondering if you’d ever read Michael Benedikt’s 1991 book, Cyberspace: First Steps (Benedikt, 1991)?

Andreea Ion Cojocaru: I did not, no. Should I?

Peter Wasilko: Yes, you should. It has very interesting presentations of abstract information spaces. One of the ideas was to have higher-dimensional space represented as multiple three-dimensional spaces that can unfold to reveal nested subspaces inside. Sort of like: you’re looking at three walls of a cube, then another sub-cube could open, based on a point selected within the first cube, representing another three dimensions of the abstract information object. It also introduced the idea that you could be representing a physical object in a space, but the space itself could represent a query into a higher-dimensional space. So a point in the space would represent a query over the three dimensions currently displayed in the one space, and that would then control what was being displayed in another, linked space. Just the most fascinating thing I’ve read in a long time. And I keep coming back to that book and encouraging everyone in our group to take a look at it. So I highly recommend it. And when you do get a chance to read it, I’d be extremely interested in your reaction to those chapters.
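[Editor’s note: a minimal sketch of the linked-spaces idea Peter describes, under the assumption of a six-dimensional dataset: a point chosen in one 3D “query” space fixes three of six coordinates, and a second, linked 3D space displays the slice that the point selects. The names and the toy dataset are illustrative, not Benedikt’s own formalism.]

type Point3 = [number, number, number];
type Point6 = [number, number, number, number, number, number];

// A tiny 6D point cloud standing in for some abstract information space.
const dataset: Point6[] = [
  [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
  [0.1, 0.2, 0.3, 0.9, 0.8, 0.7],
  [0.5, 0.5, 0.5, 0.1, 0.1, 0.1],
];

// Selecting a point in the query space keeps only the records whose first
// three coordinates fall near it; the linked space renders the remaining three.
function linkedSpaceContents(queryPoint: Point3, tolerance = 0.05): Point3[] {
  return dataset
    .filter((p) =>
      p.slice(0, 3).every((v, i) => Math.abs(v - queryPoint[i]) <= tolerance))
    .map((p) => [p[3], p[4], p[5]] as Point3);
}

// Example: what does the linked space show when we stand at (0.1, 0.2, 0.3)?
console.log(linkedSpaceContents([0.1, 0.2, 0.3])); // two of the three records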

Andreea Ion Cojocaru: I want to add something quickly. The thing that crosses my mind, which, again, is not something I would usually say in public, but, like, why not, because today’s discussion is already going interesting places. What crossed my mind as you described the book, which I will absolutely read, is this: I just said that I, sometimes, like to embody an entire room. We can’t understand these complex spaces and nested spaces and four-dimensional spaces and so on. But can we, if we are a room? What kind of perceptual possibilities and cognitive possibilities would that open up? Because, of course, if you truly believe that you are the room, your brain is in an altered state of consciousness, basically. Not in the spiritual sense in any way, but at the cognitive level. So again, this is kind of wild speculation. But that’s just the thought that crossed my mind.

Frode Hegland: Andreea, thank you incredibly much for today.

Andreea Ion Cojocaru: Thank you so much. Bye-bye.

Colophon

Published July 2022. All articles are © Copyright of their respective authors. This collected work is © Copyright ‘Future Text Publishing’ and Frode Alexander Hegland. The PDF is made available at no cost and the printed book is available from ‘Future Text Publishing’ (futuretextpublishing.com), a trading name of ‘The Augmented Text Company LTD’, UK. This work is freely available digitally, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.
