What does it mean that computers can peer a tiny distance into the future? I have the vaguest of vague senses that a few things I’ve seen recently are conceptually connected.
EXAMPLE #1
Apple announced its new headset Vision Pro the other day, and what’s neat is that they’re not framing it as “augmented reality” but as a spatial computing platform.
I’m into a vision of computing which is room-scale, embodied, and social (see this post about Map Rooms) so I’m into this.
What this means:
Things stay where you left them: I read in one review that you can decorate your room using the Vision Pro headset. Like, you can put virtual paintings on the real wall. If you go to your physical office, the paintings won’t be there, and when you return home, they’re where you left them. We take that for granted with physical things. Not so trivial with computers.
The OS is architected to make spatial design easy: here’s Apple’s developer video setting out their principles of spatial design - the operating system (called visionOS) is built to understand the context of the room and the objects, people, and light in it. Such as: virtual objects look and behave like physical things. Like they reflect ambient light, and they have a fixed location, and they respond when you interact with them with zero latency.
Ok so there’s a ton of wild technology required to make this work!
And, to highlight one particularly wild point, if you want virtual objects to feel real, then the computer has to PEER INTO OUR (subjective) FUTURE to get ready to react.
From ex-Apple engineer Sterling Crispin on Twitter:
One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user’s brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.
Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it.
(Thanks Ed Leon Klinger for picking up on this.)
Detecting the Bereitschaftspotential!
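Here’s roughly what that anticipatory-dilation trick might look like in code. A minimal sketch only, and not Apple’s actual implementation: the 120 Hz sample rate, the window sizes, and the 5% dilation threshold are all my inventions for illustration.

```python
import numpy as np

# Hypothetical sketch: detect an "anticipatory" pupil dilation that
# precedes a deliberate click. Assumes an eye tracker sampling pupil
# diameter at 120 Hz; all thresholds here are made up.

SAMPLE_RATE_HZ = 120
BASELINE_S = 1.0     # window used to establish the resting diameter
RECENT_S = 0.25      # window checked for an anticipatory rise
THRESHOLD = 0.05     # fractional dilation that counts as "anticipation"

def anticipates_click(pupil_mm: np.ndarray) -> bool:
    """Return True if the most recent samples show pupil dilation
    relative to the preceding baseline, suggesting the user is about
    to commit to a selection."""
    recent_n = int(RECENT_S * SAMPLE_RATE_HZ)
    baseline_n = int(BASELINE_S * SAMPLE_RATE_HZ)
    if len(pupil_mm) < baseline_n + recent_n:
        return False
    baseline = pupil_mm[-(baseline_n + recent_n):-recent_n].mean()
    recent = pupil_mm[-recent_n:].mean()
    return (recent - baseline) / baseline > THRESHOLD

# Fake a trace: steady ~3 mm pupil, dilating ~8% in the last 250 ms.
trace = np.concatenate([
    np.random.normal(3.0, 0.02, 150),
    np.random.normal(3.25, 0.02, 30),
])
print(anticipates_click(trace))  # True: pre-click dilation detected
```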
btw on that biofeedback point, Crispin also says this in their tweet:
Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.
BUT: let’s go back to talking about the future.
EXAMPLE #2
Unexpected waves are a problem in shipping.
Like, you know when you’re looking at the choppy sea in a harbour? And a big wave comes from nowhere and a random combination of ripples in a weird corner makes water leap and splash into the air?
That’s a problem if you’re trying to get cargo across a gangway. A gangway crossing takes roughly 30 seconds – and it’s catastrophic to get a disconnection halfway through.
Wouldn’t it be great… if you could see… into the future… of the ocean.
WELL.
Here’s WavePredictor by Next Ocean.
First they continuously scan the water around the ship with radar.
Then:
WavePredictor propagates the observed waves into the future resulting in a near future prediction of the waves arriving at the ship and the resulting ship motions.
It’s not just about avoiding freak big movements. It’s the reverse too:
Pick the right moment to hook onto the load on deck when motions are temporarily low.
Faster-than-realtime simulation of ocean waves to anticipate moments of stillness.
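For flavour, here’s the physics trick that makes this kind of prediction possible at all. A minimal sketch, and not Next Ocean’s actual algorithm: take one radar snapshot of the sea surface, split it into sinusoids with an FFT, then advance each component using the deep-water dispersion relation, because every wavelength travels at its own known speed.

```python
import numpy as np

# Sketch of wave propagation via linear wave theory. Each sinusoidal
# component of the observed surface obeys the deep-water dispersion
# relation omega^2 = g * k, so the whole field can be run forward in
# time faster than realtime. 1D, waves moving in +x, for simplicity.

g = 9.81                      # gravity, m/s^2
L = 2048.0                    # length of the observed strip of sea, m
N = 1024                      # surface samples along the strip
x = np.linspace(0, L, N, endpoint=False)

# Pretend this came from the radar: a few superposed swell components.
eta0 = (1.5 * np.sin(2 * np.pi * x / 120)
        + 0.8 * np.sin(2 * np.pi * x / 47 + 1.0))

def propagate(eta: np.ndarray, t: float) -> np.ndarray:
    """Advance the surface elevation eta forward by t seconds."""
    spectrum = np.fft.rfft(eta)
    k = 2 * np.pi * np.fft.rfftfreq(len(eta), d=L / len(eta))
    omega = np.sqrt(g * k)            # deep-water dispersion relation
    # Rotate each component's phase by omega * t.
    return np.fft.irfft(spectrum * np.exp(-1j * omega * t), n=len(eta))

# Predict the sea surface at the ship (say x = 0) 30 s ahead --
# long enough to cover a whole gangway crossing.
future = propagate(eta0, t=30.0)
print(f"predicted elevation at the ship in 30 s: {future[0]:+.2f} m")
```

Run that forward over the next few minutes and you can also scan for the window where predicted motion stays small: the moments of stillness above.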
EXAMPLE #3
So I use GitHub Copilot to write code now. It’s an AI that can autocomplete 20 lines of code at a time.
It’s hard to think of another tool that has become so popular so fast. GitHub is the main place where people store and share their code, outside big corps, and from their own stats back in February: 46% of all new code on GitHub is written using Copilot. (Oh and 75% of developers feel more fulfilled.)
It’s hard to put my finger on what it feels like, because it doesn’t feel like using autocomplete in my text messages.
It feels like flying. I skip forwards across realtime when writing with Copilot. Type two lines manually, receive the suggestion in spectral text, tab to accept, start typing again…
OR: it feels like reaching into the future and choosing what to bring back.
It’s perhaps more like the latter description. Because, when you use Copilot, you never simply accept the code it gives you.
You write a line or two, then like the Ghost of Christmas Future, Copilot shows you what might happen next – then you respond to that, changing your present action, or grabbing it and editing it.
So maybe a better way of conceptualising the Copilot interface is that I’m simulating possible futures with my prompt then choosing what to actualise.
(Which makes me realise that I’d like an interface to show me many possible futures simultaneously – writing code would feel like flying down branching time tunnels.)
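If I were mocking that interface up, it might look something like this. A sketch only: sample_completion is a stand-in for whatever completion model you can call, since Copilot doesn’t expose a public “give me n futures” API.

```python
import random
from typing import Callable

def branching_futures(
    prefix: str,
    sample_completion: Callable[[str], str],
    n_branches: int = 4,
) -> list[str]:
    """Sample several candidate continuations of the code written so
    far -- one spectral "possible future" per branch. With a nonzero
    sampling temperature, each call explores a different branch."""
    return [sample_completion(prefix) for _ in range(n_branches)]

# Toy stand-in for a real completion model, just so this runs.
def toy_model(prefix: str) -> str:
    return random.choice([
        "\n    return sorted(items)",
        "\n    return list(set(items))",
        "\n    raise NotImplementedError",
    ])

for i, future in enumerate(branching_futures("def dedupe(items):", toy_model)):
    print(f"--- future {i} ---" + future)
```

The author’s job then becomes exactly the one described above: look down the branching time tunnels and choose which future to actualise.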
Look: we cross a threshold when computers can do faster than realtime simulation.
I’ve tried to put my finger on this before (2020):
I can imagine a wearable device that continuously snapshots the world around you, runs the simulation in fast forward, and pre-emptively warns you about a mugging, or a bike going off course. Call it augmented apprehension.
(Or to fly in new edge-of-chaos ways by bumping off vortices using predictive fluid dynamics.)
And so I’m connecting these three examples because they feel like glimpses of a different type of computing.
Let’s say that an interactive operating system contains within it a “world model” that makes it possible for apps to incorporate the world into their user interface.
i.e.:
the personal computer OS has a model of what’s in the user’s working memory (the screen) and the user’s focus (the cursor) and therefore apps can be interactive
the mobile computer OS has a native model of the context of the user (their geographic location) and their communication networks, and therefore we got apps like Google Maps and Facebook
the spatial computing OS contains a model of the room, and so we’ll get augmented reality
And therefore:
the future computing OS contains a model of the future, and so all apps will be able to anticipate possible futures and pick over them, faster than realtime, and so… …?
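Pure speculation, but to make the idea concrete, here’s what such a primitive might look like if an OS ever shipped one. Every name here is invented; this is a thought experiment in code, not a real API.

```python
from dataclasses import dataclass, field

@dataclass
class Future:
    probability: float    # how likely the OS thinks this branch is
    horizon_s: float      # how far ahead the world model was run
    world_state: dict = field(default_factory=dict)

def anticipate(horizon_s: float, n_branches: int = 3) -> list[Future]:
    """Imaginary syscall: run the OS's world model forward, faster
    than realtime, and hand every app a few candidate futures.
    Stubbed with uniform guesses, since no OS ships this."""
    return [Future(probability=1 / n_branches, horizon_s=horizon_s)
            for _ in range(n_branches)]

# An app would pick over the branches, exactly as above: warn about
# the bike going off course, wait for the calm wave, pre-render the
# button the user's pupil says they're about to press.
for f in anticipate(horizon_s=30.0):
    print(f)
```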
What happens when this functionality is baked into the operating system for all apps to take as a fundamental building block?
I don’t even know. I can’t quite figure it out.
If you enjoyed this post, please consider sharing it by email or on social media. Here’s the link. Thanks, —Matt.