Hi everyone, my name is Matt, and I’m part of the design team on the Ailuros Project. I want to use this blog to give you all a little more information about how we arrived at this point. I’ll be talking about some of the early testing that was carried out, and maybe even some of the really cool bits that we do to make the project work.
I first became aware of the technology involved about two years ago. I’d seen an advert from a local, Government-funded research team, looking for volunteers to test ‘new, experimental technology with a wide range of applications.’ It was one of those ads that was vague enough to make me interested in knowing more, but I’ll be honest, it was the promise of receiving a full wage for taking part that really grabbed me. I was between jobs at the time, and anything that would bring in even another couple of days’ pay was welcome.
So, I applied for more information and it was all explained as using a portable brain scan system to record and, more importantly, decipher brain processes. As I understood it, it could be used for everything from accessing memories to aiding those with limited mobility. It was all really interesting. And, as much as it all felt very sci-fi, I was already aware of this sort of experiment being run on the medical side.
Once I signed all the consent forms and the non-disclosure agreements, I was given a full rundown of what to expect. The first few days were spent sat in a room, carrying out a couple of repetitive tasks. I was asked to concentrate on photos, describe parts of the images, just sit and think about the same parts. Then, after they’d recorded enough data on that, I was asked to pick cards containing stock phrases relating to some of the imagery and concentrate on the words while the research team tried to figure out what I was thinking about.
It was dull. Really dull. And believe me, while the Calibration Day is both quicker and more interesting than the five days I spent going through that process, it’s gonna be dull too. It’s necessary though. Even in those early days, the team knew what they had in front of them, and I think they knew where they wanted to go with it.
So, after they reviewed the data they had gathered, the research team picked a few of us for an additional task. I never did ask, but logically, we were probably picked because the team got their best results from us on the card task. This second stage was described to us as being similar to dream recording. It’s important to note the word similar there. You see, none of us were actually asleep during this. We were placed in a trance-like state by hypnosis and prompted to imagine we were dreaming. The key thing was that the prompt directed us to let the mundane take over, rather than the more fantastical side of the imagination.
The project, at this time, had been concentrating on common, real-world things when it came to data gathering. Animals, household items, walking, that sort of thing. So, those were the types of things most likely to be understood by the machine. That version of the machine recorded everything it could while we sat there ‘dreaming’, and then that data was put through another computer to build it into a video.
The video, when I saw it, amazed me. It was a little blurry, and it didn’t move entirely smoothly, but it did accurately represent what I had seen during the data gathering. I got permission to provide part of the official transcript of my interview here to give you an idea of what we were looking at. Have a read, you might find it interesting.
TED GRANT: My name is Ted Grant, and I’m going to be discussing your results with you today. Before we continue, I just need to ask you some quick questions. If you agree, state I do. If you do not agree, you can just say no. First, do you give consent for the recording, both audio and video, of our conversation?
MATT DOYLE: I do.
TED GRANT: And do you consent to a verbatim transcription of this conversation being created, forming part of the overall research file?
MATT DOYLE: I do.
TED GRANT: Finally, do you consent to the use of this information to further the project, including the use of anonymised excerpts as part of advertising to potential sponsors, decision-makers, and customers?
MATT DOYLE: I do.
TED GRANT: That’s great, thank you. So, you’ve had a chance to watch the video we put together. I understand you raised some concerns over the image quality. We’re working on that. In general terms though, can you describe the video for me?
MATT DOYLE: Yeah, I mean, it was really good. It was me, definitely. I was in a kitchen, and I went and grabbed a mug from the side, walked to the sink, and filled it with water from the tap. I looked in the mug, grunted, and put the mug in the sink. Then, I repeated it all over again, taking a different mug each time. So, not really very interesting, but still really interesting, you know?
TED GRANT: I know what you mean. It’s not perfect yet, but the software is doing a good job of recreating the videos. Can you confirm if the video was accurate for what you saw during the hypnosis session?
MATT DOYLE: Mostly. The only real difference is that it was in third person. When I was experiencing it, I saw it all in first person, like I was actually doing it. The video was more like someone had filmed me doing it.
TED GRANT: Okay, good. That’s actually quite easy to explain. You see, the scans recorded literally everything. Every process your brain ran. Now, when you were actually experiencing this, you saw it all as though you were actually doing it. Dreams would be the same because your brain knows that’s how you see the world when you’re awake. You know how you look though, correct? You’ve seen photos, or looked in a mirror?
MATT DOYLE: Yeah.
TED GRANT: Your brain knows that too. When you walk down the street, it doesn’t just process the act of walking. It’s also aware of the way your body is moving as you breathe, which way your head is facing, where your hands are, and so on. So, during this pseudo-dream, your brain is also processing where the rest of your body would be and what it would look like in the background. The scans pick that up, and when the software recreates the video, it uses all the data. So, you see things in third person. Was the kitchen in the video one you’ve visited in real life?
MATT DOYLE: Uhm, yes, but it was a long time ago. It was my parents’ kitchen in their old house, not the current one.
TED GRANT: Was the layout accurate?
MATT DOYLE: The way it was set out was, definitely. I can’t really remember the colour scheme or anything like that, but where things were was right. Actually, when I was moving the mugs back and forth, I never looked at the far wall, did I? The video showed it though. Is that all just memories leaking in?
TED GRANT: It’s not actually what you’d call leaking in this case. It’s not always on a conscious level, but we’re all actually a lot more aware of our surroundings than we realise. Especially with familiar places. I’m guessing you were a child when you lived in the house with this kitchen.
MATT DOYLE: That’s right.
TED GRANT: And now, you can’t remember the colour scheme. Something in your brain is certain that it’s that colour though. That’s the same thing. You paid attention, even if you didn’t know you were.
Okay, so the videos are better now. They aren’t blurry (thanks to the switch to CG characters and settings) and the team has continued to improve the way the software interprets the readings. The general principle is the same though. They always come out in third-person, for exactly the reason Ted stated here. We pay attention to a lot of stuff. It’s why we sometimes just get a feeling that something is good or bad. Part of our brains picked up on something. Usually something small. And just like how the software interprets what we’re thinking, our brains interpret the world around us.
The whole thing got me really interested in what was happening, and what the end goal was. It took a lot of pushing, and a lot of repeatedly harassing the staff, but eventually, they agreed to let me start helping out a little. Now, two years on, I’m working hard to make the project everything it can be. When this rolls out on a wider scale, you’ll probably hear people say all sorts of bad things about it. Some people panic over new things. Others like to find invasiveness wherever they look. Whatever the reason though, Government schemes tend to come with a bad rep built in. Take it from someone who has been here since the start of public testing though. Someone who wasn’t a career civil servant. What we’re trying to achieve with the Ailuros Project is not an invasion of privacy, and it’s not a means to exert mass control over the population. This is a good thing. And the tech is just really cool.