Some Days Soon
Vignettes of life with tech we could build today (but haven’t yet)
Sunday
You’re learning to paint. You’re not good at it yet — the shadow of this vase looks more like a hole in the table — but there’s something about the realness of the physical manipulation that you’re finding quite relaxing. An AI coach watches through your tablet camera, mostly silent. Every few minutes you’ll turn to look at it, and it will say something like “you’re overworking that edge — try loading more paint and doing one stroke” and bring up a twelve-second video of someone demonstrating.
Your phone buzzes with a suggestion. An old friend from university is in town — your system noticed from semi-public calendar data that she might be free for lunch. You haven’t talked in over a year. You look at your paint-covered hands and almost dismiss it — you were planning to do this all morning and eat lunch alone with a book. But actually... yeah. You say yes, and go back to your vase. Your personal assistant AI will find somewhere you’ll both like, handle the back-and-forth, and tell you when to leave.
On the subway, your phone offers you a chapter of the biography you’ve been reading, or a long essay about AI in the legal system that it says you’ll find “interesting but probably disagree with.” It must have read your face — on Friday evening it was serving you memes. You pick the essay.
It is interesting. The author argues that AI-assisted adjudication will be standard in contract disputes within two years. You’re not sure you buy it. You double-tap the byline and get a reliability summary: strong track record on resolved predictions, no pattern of exaggeration, but only fourteen months of public forecasting history. Credible but not yet proven.
You highlight the key claim and ask for discussion. There are just two responses — one making a good point about precedent in the EU, one giving a bit more detail on implementation costs. You can see there’s more behind a “show unvetted responses” toggle. You tap it. A comment catches your eye: “this ignores the insurance liability question entirely!” Seems like a real point? You swipe for context and... oh. The commenter is misrepresenting a ruling that actually went the other way. The sourcing is right there. This is why the filter exists.
Fine, what do the forecasting systems say? You pull up a prediction market summary. It’s notably wide on this one — not like when you checked arrival dates for autonomous taxis in your city last month (90% confidence interval: 16-23 months). The AI analysis says, more or less: “this is a political question with strong arguments on multiple sides, and it’s hard to predict the outcome of value disagreements.” You could pay for a deeper dive, but your stop is coming up.
Walking to the cafe, you dictate a few sentences about the essay — what was compelling, what felt undersupported. This feeds back into your recommendations and gets published as a micro-review that friends’ systems can pick up. You don’t think about this part much anymore; it’s like leaving a rating.
Lunch is good. You talk about a TV show you’re both watching, and about a mutual friend who just had a kid, and about how weird it is that politics got boring. She laughs and says she misses the drama sometimes. You say you don’t.
At home, you sit down to deal with some life stuff. Your assistant system has three things queued as “important, not urgent.” It suggests starting with your landlord situation, but you’d rather tackle the family vacation first.
Here’s the thing about your family: you love them, and planning a trip together makes you want to scream. Everyone wants different things — your dad wants somewhere walkable, your sister wants a pool for the kids, your aunt keeps pushing for a cruise that nobody else wants but nobody wants to say nobody wants. In previous years this has produced a group chat that could be studied by conflict researchers.
This year you’re trying AI-mediated planning. You dumped your preferences in a five-minute voice note last week. The system has now talked to everyone separately — crucially, everyone can be honest without performing for the group — and produced four options. You look them over, talking out loud: “OK, this Sardinia place is interesting... the cruise is still here, seriously?... oh wait, this Cornwall one actually handles the walkability thing well.”
Then you notice something. The system has your sister’s kids down as ages 2 and 5. They’re 4 and 7. Which means the activity recommendations for them are probably slightly off — and more importantly, it might be pulling from the wrong school holiday dates. You flag it with a snarky comment — given how good these systems are at synthesizing preferences across a group, it really feels backwards how unreliable they can be about basic facts that aren’t in their structured data. It takes a few seconds, and then the options reshuffle slightly with the corrected data.
You move on.
Landlord time. You’ve been wanting a deck off the back of the kitchen — the morning sun hits that spot perfectly and right now it’s just scrubby concrete. You’re not going to pay for the whole thing, and your landlord isn’t going to do it out of kindness. But there might be a deal: you contribute to costs, they get increased property value, and you get some assurance you can stay a couple more years to enjoy it.
You talk this through with your AI, which will negotiate with your landlord’s AI. You like this part — you can be strategically honest with your own system (“I want this a lot but don’t lead with that”) in a way that would be impossible in a face-to-face negotiation. The AIs will explore whether there’s a deal that works for both sides without either human having to do the awkward dance of pretending not to care.
Last task, and this one is kind of fun. You’ve been wondering about getting an electric bike for your commute, but there’s a bunch of things you’re worried about. So you’ve got a report to think about — what the best route looks like, how dangerous those roads are, expected hassle and maintenance costs (vs time savings and subway costs), risk of theft, etc. You’re feeling into it, so you look at the top options for bikes to buy (compiled from reviews, and taking into account your circumstances). It’s a bit hard to judge between two of the top three, so you let your system book a time next weekend to visit a local bike shop and try them out.
While you’re in the reviewing zone, you check for upgrades to your AI augmentation suite. You know you check more often than you really need to, but you enjoy the feeling of keeping up with the latest tech. Looks like there have been a few new model releases … most of them don’t seem relevant for you, but there’s a fast-and-cheap one that could be worth trying. Accuracy drops from 99.994% to 99.986% — fine for pretty much everything. But epistemic cooperativeness drops from 98% to 94%.
You pause on this. Epistemic cooperativeness is the metric you care about most, even though most people don’t pay much attention to it. It measures whether the system is actually trying to help you believe true things — as opposed to telling you what’s technically accurate but framed to support whatever you seem to already think, or hedging so much it’s useless, or being subtly overconfident in ways that are hard to catch … The difference between 98% and 94% doesn’t sound like much, but you’ve used a 94% system before and you could feel it — a slight slipperiness, like talking to someone who’s agreeing with you a little too readily. Maybe you’ll try the new model for low-stakes stuff. For anything that matters, no.
Walking the dog before dinner, your system suggests a podcast about how politics changed when track records became more transparent. You’re not too surprised — it was listening to your lunch conversation. You put it on. The hosts are funny and a little irreverent. Perfect.
Their basic argument: politics used to be about making yourself look good, and the other side look bad, in soundbites. But people don’t like being lied to! When voters could trivially see when they were being manipulated, catch mistakes at source, and check how often a politician’s claims held up, the incentive structure flipped, and straight talking was much more rewarded. What you find most interesting is that it wasn’t just that different people won — some of the same politicians just... started being more honest.
You think about this for a few blocks. It sort of feels like the technology forced honesty on people, but that’s not quite right — it’s just that it made honesty a better strategy than it used to be. The politicians who didn’t adapt started losing. Huh.
Monday
You work at your country’s Foreign Ministry, on the AI Accords (“humanity coming together to decide how to meet this moment”).
It’s less glamorous than it sounds. The Accords are an international process — kind of like climate negotiations, but for AI development. The big action is between the US and China, and your country is mid-sized, but you’re part of a coalition that helped pressure the superpowers to the table in the first place. That happened before you joined, but it makes the work feel real.
The process here is kind of like negotiating with your family or your landlord, only about a million times more complicated. You’re helping to coordinate the national submission to the “official” mediating AI system. Of course since this process was codified a few months back, there’s been a proliferation of backchannel mediation — between different groups of countries, big companies, small companies, religious groups, you name it.
This morning you’re going over some material on access rights. Who gets to use the most advanced AI systems, and for what? Everyone has opinions here, and it’s your job to run the process to make sure the PM’s office is well-informed on what those are — and not just the surface-level opinions, but the things people would think if they slowed down, talked to folks on the other side, and thought about it. After a pressure campaign, the government is committed to making its official submissions to the process a matter of public record, and to making the deliberative processes auditable without being fully public, so they really want to do a good job.
Something bothers you. A cluster of academic researchers are strongly advocating for maximal open access — which has some legitimacy as a position — but they’re not engaging with the counterarguments at all, and several keep citing a body of work that traces back to an analysis that was debunked eight months ago. You sigh, and draft instructions to go back to these groups with the specific counterarguments and ask for direct responses. AI systems will handle the actual deliberative interactions — you’re just steering. This probably won’t change your country’s submission. Your country’s submission probably won’t change the Accords. But it’s conceivable they might; and it’s your job to make sure the voices get a chance to be heard.
After lunch, you open an email from a colleague who’s criticizing your strategic modeling work — an analysis of how different Accords provisions might shift power dynamics and incentives between major players. You start composing a reply and the desktop buddy you configured interrupts with a small icon: a face with one eyebrow raised.
You stare at it. You were definitely writing in anger.
...fine.
You double-click the icon and start venting properly — not composing a reply, just talking. About how you’ve had this exact disagreement four times. About how you suspect your colleague doesn’t actually understand the modeling methodology but won’t admit it. About how it’s exhausting to keep re-explaining. It feels good to say this to something that won’t judge you or repeat it.
Then the system asks: “what do you think the right move is here?”
You sit with that. The modeling is important to get right. And when you’ve tried to discuss it directly, you’ve talked past each other — partly because the analysis is complex (and AI-driven) and it’s tricky to tell quite where the disagreement is, and partly because you’re both a bit proud. You send your colleague an invitation to a mediated disagreement session — a structured conversation with an AI facilitator designed for exactly this kind of loop. You’ve done these before. They’re not magic; sometimes they surface a real crux and sometimes they just clarify that you disagree about something fundamental and need to escalate. But the async version is obviously better to try than another round of increasingly terse emails; and if schedules work out it might be worth a synchronous session.
The key sticking point for the Accords still seems to be verification. Almost everyone agrees on two things: (1) AI is transformatively important for the economy, and (2) it would be reckless to push into territory where AI systems are broadly replacing human judgment — “changing what it means to be human” is the phrase that’s caught on — without serious international coordination. But nobody wants to slow down if their rivals won’t. And AI research is pretty easy to hide.
There’s a tentative plan that has been gathering a bit of momentum. The basic shape of it: a short-term moratorium on specific categories of frontier research, with rough compute auditing — nothing that would hold up long-term, but solid enough for a year or maybe two. During that window, there will be some big joint projects, with open-source research, pushing ahead to build highly-reliable, verifiably-trustworthy AI auditors. Once these come online, they can serve as something like arms-inspectors, without leaking any commercial or national secrets.
There are a lot of details to be nailed down there. Nobody is thrilled that the plan depends on building technology that doesn’t exist yet. But the scenario planning — which everyone is doing, with AI assistance — suggests it’s pretty likely to work.
One of the main uncertainties right now is how fast a deal might be struck. There’s a lot of pressure to get something agreed quickly, but one bloc of countries is quietly stalling, hoping to finish training a stronger negotiation-support system before terms are locked in.
You close your laptop and head out for the walk home. It’s getting dark earlier now. Your system suggests a few playlist options — no podcasts; it can tell you have enough to think about already. The top option looks great: something powerful and alive and a little melancholy, right for autumn and the feeling of pushing on something important that moves slowly and might not work.
But might.
Thanks to Lizka Vaintrob, Oly Sourbut, and Rose Hadshar, for comments and for collaboration on the design sketches on which this story is based; and to Claude for helping make the prose flow more gracefully.


