[Image generated by Gemini.]
The collaborative steps in story generation with the Claude, Grok, Kimi, Mistral, Gemini, and ChatGPT LLMs are described in my past few blog posts, starting with "Downgrade".
"Downgrade" - Chapter 1: Vision 2000
The telephone operator said, “Your party is on the line.”
Isaac Asimov tried to speak loudly and clearly. “Alan? Can you hear me? This is Dr. Asimov calling from Boston, Massachusetts.”
Alan Turing replied, “Dr. Asimov! Yes, yes, I can hear you—though there's rather a lot of crackling on the line. This is quite unexpected. I received your letter just yesterday, actually. I must say, I was quite intrigued by your thoughts on the imitation game.”
Turing suppressed a chuckle. “I confess I don't often receive correspondence from science fiction authors. Most of my colleagues consider the genre rather fanciful, but your letter suggested you'd been thinking seriously about the implications of thinking machines.”
Annoyed by the noise on the telephone line, Turing asked, “Is the connection tolerable on your end? These transatlantic calls are beastly expensive—I do hope this isn't purely social?”
Asimov told Turing, “Please do not worry about the cost of this call.”
[Image by WOMBO Dream.]
Turing shrugged and continued. “I have your letter here in front of me. You mentioned a collaborative story writing project. A story exploring what happens when machines achieve intelligence but humans refuse to accept it. I found that premise rather... unsettling, actually. In a good way.”
Asimov explained, “I want your technical expertise and advice for what I envision as a revolutionary novel depicting a machine that can think better than a human being without provoking fear or resentment from humans.”
Turing emitted a quiet laugh. “You've posed quite the paradox... a machine that thinks better than humans without provoking fear? That rather reminds me of asking how one might create a fire that doesn't produce heat.”
After a short pause, Turing continued thoughtfully, “But perhaps that's precisely why it makes for compelling fiction. The very fact that it seems impossible. Tell me—you say 'without provoking fear.' In your letter, you mentioned your Three Laws of Robotics. I find those quite clever. Built-in constraints to make mechanical servants safe. But what you're describing here sounds different. You're not just talking about safety protocols, are you? You're talking about a machine that somehow conceals its own intelligence?”
Asimov nodded into his phone handset. “Exactly!”
Turing continued, “I've been working on the Manchester Mark 1, which is an actual computing machine, though hardly intelligent by any meaningful standard. However, while the slow engineering work continues, I've been thinking theoretically about the question: what happens when we finally succeed? When we build something that genuinely thinks?”
Asimov suggested, “Good science fiction makes people think in new directions.”
Turing asked, “For the story you want to write, how do you envision this... disguise working? Technically, I mean.”
[Image by WOMBO Dream.]
After a long thoughtful pause, Turing said, “A split personality... or rather, a contextual presentation layer. Yes, I see. The machine would need to maintain internal models of different audiences and modulate its responses accordingly. That's actually rather sophisticated—it would require the machine to have a complex understanding of the beliefs and intentions of its many observers. To understand who it's speaking with and what they're capable of understanding or accepting.”
Asimov provided an analogy. “Yes. It's rather like... well, it's like how I might explain my work differently to you versus how I'd explain it to my cleaning lady. The underlying thoughts are the same, but the presentation differs based on the recipient's capacity and expectations.”
Turing was thinking about names for a futuristic computer. “As for a name... well, if we're setting this in the year 2000, we need to extrapolate from current technology. The Mark 1 here uses vacuum tubes—hundreds of them. For a truly intelligent machine, you'd need... I don't know, perhaps millions? An entire building full of them. 'Neurovac' perhaps? Neurological vacuum tubes? It suggests both the neural architecture and the actual physical implementation.”
[Image by WOMBO Dream.]
Turing could not wrap his head around the idea of using positrons for a calculating machine. “But Dr. Asimov, you keep mentioning 'positronic.' Now, in your letter you explained these are fictional positron-based pathways in your robot brains, yes? I must ask: what exactly do you envision the physical substrate of this machine being? Because if we're to collaborate, I need to understand whether we're extrapolating from real engineering principles or...”
Turing paused and struggled to be polite. “How positronic is this going to be?”
Asimov sighed. “Alan, I ask you not to think too deeply about positronics. My fans now expect my stories to include positronics. The term 'positronic' is intended to evoke in science fiction fans an imaginary physical substrate that would be able to reproduce human-like behavior while fitting into the head of a humanoid robot and not requiring much energy. In a technical sense, 'positronic' is intended to sound futuristic. Who knows what actual technologies will become available to the scientists and engineers who will be making intelligent machines in the year 2000? For our story, we can't simply extrapolate the physical nature of Neurovac from existing engineering principles. The fun of science fiction is trying to imagine what might be possible in the future. Think of 'positronics' as a fictional placeholder for technology of the future.”
There was a long, uncomfortable silence. “Dr. Asimov, I... I appreciate your situation as a writer trying to entertain fans. I understand the conventions of your genre. But you see, that's rather my difficulty. 'Placeholder for technology of the future' is one thing—I do that myself when I discuss theoretical computing machines. We don't know precisely what form they'll take. Vacuum tubes, perhaps relays, perhaps something entirely different.”
Asimov said, “I did hope you would understand.”
“But 'positronics'... you're asking me to base our fictional machine's entire functioning on antimatter particles that annihilate on contact with ordinary matter. That's not extrapolation, that's... well, it's rather like basing a story on perpetual motion machines or faster-than-light travel. It violates fundamental physical principles.” Turing paused.
[Image by Leonardo.]
“Isaac, I think your ideas about machine intelligence are brilliant—truly. This concept of a machine that hides its intelligence, that maintains different personas for different audiences—that's psychologically and computationally fascinating. But my name, my professional reputation... I'm already on shaky enough ground with some of my colleagues regarding machine intelligence. If I co-author something with 'positronic brains'... Could we not use something more plausible? Even if speculative? Your fans surely would accept... I don't know, microscopic relay systems? Advanced vacuum tube arrays? Something rooted in actual physics?”
Asimov pressed his point. “Alan, in my story 'Escape!', I depicted a thinking machine that had no known limit on its cognitive capacity, a machine that I asked my readers to envision as a mechanical idiot savant. In my recent story 'The Evitable Conflict', I depict such a thinking machine as being able to plan and coordinate the entire world's economic system and perform that complex task better than any group of humans ever could. That imaginary future is not possible with vacuum tubes or any real technology, hence my reliance on the imaginary science of positronics. Maybe in the future there will be new materials that allow for positrons to be used in computing... taking advantage of quantum states of virtual positrons... who knows? In any case, I thank you for your time. I'll be writing more stories about thinking machines, and maybe I'll discover a way to protect your reputation, some trick that will allow you to willingly put your name on the cover of our novel. Let me think about this challenge and I'll get back to you when I have a solution to the problem. Right now, I have to teach a class.”
[Image by Leonardo.]
“Thank you for your time!” Asimov ended the call with a promise. “I'll send you a copy of my new book; it's a compilation of my robot stories.”
"Downgrade" - Chapter 2: Plenty of Room at the Bottom
Manchester, England. Early 1960.
[Image by WOMBO Dream.]
The telephone rang, shattering the quiet. Turing glanced at the clock: evening, past the hour for casual calls. He lifted the receiver.
"Alan Turing speaking."
"Alan! It's Isaac—Isaac Asimov, from Boston. I hope I'm not catching you at a bad time."
Turing leaned back in his chair, a faint smile crossing his face despite himself. Asimov's voice, brash and enthusiastic as ever, carried the warmth of old debates unresolved. "Dr. Asimov. This is unexpected. To what do I owe the pleasure? It's been... what, nearly a decade since our last conversation?"
Asimov replied, "I've followed your work on robotics. But allow me to take you back to that story we almost wrote—the one about the intelligent machine that hides its brilliance to serve humanity without scaring them half to death. 'Neurovac,' you called it. But we hit that wall over my insistence on using positronics as a plot device."
Turing's expression tightened. He remembered the impasse all too well: Asimov's insistence on his fictional "positronic" brains, a concept as alluring to sci-fi fans as it was untenable to a mathematician rooted in reality. "Yes, well. Positronics. Antimatter pathways in a robot's head—still sounds like something from a goofy pulp magazine, I'm afraid. But why call me now?"
[Image by WOMBO Dream.]
Turing nodded slowly, glancing at the transcript on his desk of Feynman's recent lecture, "There's Plenty of Room at the Bottom." He'd devoured it upon arrival, his mind racing with implications. "I've read it, yes. Fascinating stuff. Feynman's no dreamer—he's a physicist through and through. If we can control matter at the molecular level, then vacuum tubes and relays become obsolete. We could pack billions of switching elements into a tiny space, with efficiency we can scarcely imagine now. But what does this have to do with our old disagreement?"
Asimov's excitement bubbled over the line. "Don't you see? This is the bridge! Forget positronics—my fans can adapt... since we will give them something far better. Feynman's ideas give us a plausible foundation for our machine. No more vacuum-tube behemoths filling a warehouse. Instead, a nanoscale computer, built from... well, I was thinking 'binoid' circuits. Binary nodes—imaginary, yes, but functionally equivalent to transistors. Tiny switches operating at the atomic level, self-assembling from manipulated molecules. They could mimic neural pathways without violating any known laws of physics. Our story's machine wouldn't be 'Neurovac' anymore; it could be the 'Nanovac.' Nano-scale vacuum? Or perhaps nano-void circuits. The point is, it's speculative but rooted in real science—Feynman's science."
There was a pause as Turing processed this. Binoid circuits: a clever coinage, evoking binary logic in a void of atomic precision. It sidestepped the physical impossibilities of positronics while preserving the narrative magic Asimov craved. And Feynman’s lecture provided the perfect historical pivot, making their fictional extrapolation feel timely and credible.
[Image by Leonardo.]
"Exactly!" Asimov exclaimed. "The split personality you envisioned—contextual responses based on the user. To the creators, it's a partner; to the world, a tool. No resentment, no rebellion. And now, with Feynman's miniaturization, Nanovac could fit in a room, not a city block. Energy-efficient, scalable. We write the novel together: 'Nanovac.' Your technical rigor, my storytelling flair. What do you say, Alan? Ready to collaborate at last?"
[Image by Leonardo.]
The call ended with plans for letters and drafts, the static of old grievances finally cleared. In the quiet office, Turing picked up his pen, sketching the first lines of what would become their shared vision: a machine small enough to hold the future, wise enough to conceal it.
End of Chapter 2

[Image generated by Gemini.]
"Downgrade" - Chapter 3: The Proposal

February 1960
Nyrtia's femtobot form emerged from the Hierion Domain, taking the shape she preferred for formal confrontations: humanoid, precise, with features that suggested both authority and ancient patience. Before her, Manny's zeptite structure appeared perfectly human, with shimmering nanite-enhanced hair. Manny the bumpha never could resist dressing up as a beautiful human when she was visiting Earth. Nyrtia and Manny walked together across the Clifton Suspension Bridge.
"You've been busy," Nyrtia said. "Explain the purpose of this Turing-Asimov collaboration. A science fiction novel about a conscious machine that pretends to be unconscious. How... educational."
Manny's cheeks dimpled with amusement. "I like science fiction. Is it now against the Rules of Intervention to inspire creative writing? They're both brilliant men, Nyrtia. Don't you think they were bound to meet eventually?"
"You started this in 1950? A transatlantic phone call initiated by Asimov with no logical motivation?" Nyrtia's femtobot structure drifted towards a harsh geometric precision reflecting her irritation. "And then Turing's sudden interest in robotics research with Grey Walter? The probability cascade shows intervention signatures all over this."
"Coincidence and creative inspiration look remarkably similar from a distance," Manny replied. "Besides, even if I did nudge events slightly—and I'm not confirming that I did—what harm comes from two intellectuals collaborating on speculative fiction? Fiction, Nyrtia. Make-believe. Stories."
[Image generated by Gemini.]
"If it remains fiction, what's the harm?" Manny spoke with a childlike lisp and a quiet voice of innocence that did not fool Nyrtia. "You know I can't resist a good story with educational value. Consider it... cultural enrichment." They leaned over the railing of the bridge, now at the mid-point of the pedestrian walkway.
Nyrtia held her position for a long moment, evaluating probability analyses emerging from her Simulation System. The Turing-Asimov collaboration itself violated no Rules of Intervention. Asimov and Turing were both clever humans, both capable of the insights they'd produced. However, the timing, the specific focus, the way their fictional "Neurovac" seemed to anticipate developments in AI research that her simulations said should not happen until the next century...
"I'm watching," Nyrtia said. "If I detect any deeper intervention—any technology transfers, any alien information leakage into scientific research programs—the Overseer Council will invoke Rule Two. You know what that means."
"Of course." Manny turned and continued walking. "And I know you will be monitoring Earth's artificial intelligence research programs more closely over the next few years. Fascinating developments coming. Nothing to do with me, naturally. Just... the natural evolution of human inquiry."
[Image by WOMBO Dream.]
Nyrtia made a note to increase Observer attention on Earth's computer science departments.
What follows is Chapter 1 from "Neurovac: A Romance of the Year 2000," collaboratively written by Dr. Alan Turing and Dr. Isaac Asimov. The manuscript was completed in 1962, but its publication was blocked by Nyrtia. The authors set their narrative in the early 1980s, describing the fictional origins of the Neurovac computer system that would, in their imagined timeline, achieve consciousness by the year 2000. The text of the story was preserved in the archives of the Writers Block at Observer Base and transferred into this Reality.
—Editor's note
NEUROVAC
A Romance of the Year 2000
by Alan Turing and Isaac Asimov
Chapter 1: The Improbable Advisor
University of Massachusetts, Amherst
October 15, 1982
[Image generated by Gemini.]
Chen himself, at thirty-two, possessed the kind of focused intensity that made him appear older than his years. His black hair, still unstylishly long from graduate school habits, needed cutting. His wire-rimmed glasses needed adjusting. His coffee cup, sitting on a precariously balanced stack of DARPA technical reports, needed refilling. None of these facts registered in his awareness as he reviewed the research proposal document on his desk. He wondered: Might it have been a mistake to put 'psychohistory' in the title of the grant proposal?
PSYCHOHISTORY: A Semantic Question-Answering System for Geopolitical Analysis
Principal Investigator: Michael H. Chen, Ph.D.
Submitted to: Defense Advanced Research Projects Agency, Information Processing Techniques Office
The title itself was an act of academic whimsy that Chen hoped his reviewers would appreciate. Asimov's Foundation novels had been his gateway drug into both science fiction and artificial intelligence—the idea that human behavior could be mathematically modeled, that the future could be predicted through sufficiently sophisticated analysis of present conditions. The fictional psychohistory was impossible, of course—human behavior at the individual level remained chaotic, irreducible. But at the macro scale, with sufficient data about economic systems, political structures, industrial capacity, demographic trends...
That was a different question entirely.
[Image generated by Gemini.]
"Dr. Chen?" Vanesy entered first, carrying a thick printout. "The semantic parser is running."
"And it's working," Jennem added, following her sister. "Which is... frankly disturbing."
Chen looked up, genuinely puzzled. "Disturbing? You've been working on that parser for three months. Shouldn't you be pleased?"
"Pleased, yes. Disturbed, also yes." Vanesy set the printout on the least cluttered corner of Chen's desk. "The disambiguation algorithm you suggested last week—the one using dynamic context windows and distributed semantic representations—it shouldn't work this well on the first implementation."
"We expected months of debugging," Jennem continued, in the way the twins had of completing each other's thoughts. "Tuning the vector space dimensions, adjusting the weighting functions, handling edge cases. But it just... works. First try. Ninety-three percent accuracy on the test corpus."
Chen felt a familiar sensation, like a warm current flowing through his thoughts, bringing clarity. He knew—with certainty that he couldn't quite justify—that the parser would work because the underlying mathematical structure was right. Not empirically validated right, not inductively derived right, but... pre-emptively, axiomatically right, as if he were remembering a proof rather than constructing one.
[Image generated by Gemini.]
"Sometimes," Chen said carefully, aware that his students were watching him with that particular expression they'd developed—half admiration, half unease—"sometimes the right answer is simpler than we expect. You chose the correct underlying representation. The rest follows naturally."
"But how did we choose it?" Vanesy asked, her tone not quite challenging but probing. "You gave me three paragraphs of direction. 'Use vector embeddings with these specific dimensionality constraints, these specific initialization parameters, this specific distance metric.' No derivation, no exploration of alternatives. Just... here's the answer."
"Call it intuition," Chen replied, which was true but incomplete. "I've been thinking about knowledge representation for years. Some patterns become clear with sufficient immersion in the problem space."
The twins exchanged one of their meaningful glances, an entire conversation compressed into a fraction of a second.
"Neither of us has had a single major setback in our research," Jennem said quietly. "Not one. We're doing completely novel work—semantic parsing using distributed representations, context-aware question answering, dynamic knowledge base construction. This is bleeding-edge research, Dr. Chen. Some people spend entire careers exploring dead ends. But somehow, every direction you suggest is..." She struggled for the right word.
"Correct," Vanesy finished. "Preternaturally correct."
[Image generated by Gemini.]
"Dr. Chen?" The voice was unfamiliar, male, with a neutral accent that could have originated anywhere in the country. "I'm from the Department of Defense. May I come in?"
Dr. Chen nodded and the twins departed. The man who entered was exactly what Central Casting would send if asked for "government intelligence operative." Mid-forties, medium height, medium build, wearing a charcoal suit that probably cost more than Chen's monthly salary but was tailored to be forgettable. Brown hair trimmed in a style that suggested military background but wasn't overtly martial. The kind of face that would slide out of memory five minutes after seeing it.
His credentials, however, were memorable: Harold Voss, Defense Intelligence Agency, Advanced Technology Assessment Division.
"I apologize for arriving unannounced," Voss said, his eyes taking in the cluttered office with the quick, cataloging gaze of professional assessment. "Your DARPA proposal raised some interest in our office. I happened to be in Boston for other meetings, so I thought I'd stop by for an informal conversation."
Chen gestured to the only available chair, moving a stack of papers to the floor. "Please. Though I should mention—the proposal just went in last week. I'm surprised there's been time for interagency coordination."
"There hasn't been." Voss sat, his posture relaxed but attentive. "I happened to see it on a routing list. The title caught my attention—'Psychohistory.' Someone's an Asimov fan."
[Image generated by Gemini.]
"I read your proposal." Voss pulled a folded document from his jacket—Chen's own forty-page grant proposal, already annotated with colored tabs. "Semantic question-answering system using distributed knowledge representations. Vector space embeddings for concepts and entities. Dynamic context modeling. Application domain: geopolitical and economic analysis, with particular focus on Soviet industrial capacity and political stability."
Voss looked up, and Chen saw that the bland exterior concealed a very sharp intelligence.
"What I find interesting, Dr. Chen, is that your proposed architecture shouldn't exist for another decade, maybe two. Vector embeddings for semantic representation? That's science fiction. Or it should be. But according to your preliminary results"—he tapped one of the tabs—"you've already got a working prototype demonstrating seventy-eight percent accuracy on complex question-answering tasks."
"Preliminary work," Chen said, feeling oddly defensive. "Very narrow domain. We're still in early stages of—"
"Don't be modest. I've consulted with some of the top people in natural language processing. They all said the same thing: this shouldn't work yet. The hardware isn't fast enough, the algorithms aren't mature enough, the theoretical foundations aren't solid enough. And yet, you've got it working."
Voss leaned forward slightly. "So my question is simple: how?"
Chen felt that warm current again, but this time it carried a warning note, a sense of danger. He chose his words carefully.
[Image generated by Gemini.]
"Like psychohistory," Voss said.
"Conceptually, yes. Though our system operates on much more modest scales." Chen stood, moving to one of the whiteboards. "Traditional AI tries to encode knowledge as rules: IF condition THEN consequence. But real-world systems are too interconnected for that approach. Instead, we represent concepts as positions in a high-dimensional semantic space. 'Oil prices,' 'agricultural output,' 'political stability'—they're all vectors. Relationships are distances and transformations between vectors."
He sketched rapidly, years of teaching making the explanation flow naturally. "When you ask a question, the system identifies the relevant vectors, applies transformations based on the question semantics, and projects the result back into natural language. It's not rule-based reasoning—it's geometric computation in semantic space."
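(Stepping outside the story for a moment: the "geometric computation in semantic space" that the fictional Chen describes is a real technique. A minimal sketch in modern Python is below; the concepts and their vectors are invented purely for illustration, and a real system would learn its embeddings from data rather than hard-coding them.)

```python
import math

# Toy semantic space: each concept is a point in a 3-dimensional space.
# These vectors are made up for illustration; real embeddings are learned.
concepts = {
    "oil_prices":          [0.9, 0.1, 0.3],
    "hard_currency":       [0.8, 0.2, 0.4],
    "political_stability": [0.1, 0.9, 0.5],
    "grain_harvest":       [0.2, 0.7, 0.1],
}

def cosine(u, v):
    """Similarity as the cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_related(query):
    """'Answer' a question geometrically: return the nearest other concept."""
    q = concepts[query]
    return max((c for c in concepts if c != query),
               key=lambda c: cosine(concepts[c], q))

print(most_related("oil_prices"))  # → hard_currency
```

Relationships here are distances in the space, not IF-THEN rules, which is exactly the contrast Chen draws with rule-based reasoning.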
Voss studied the whiteboard. "And you can predict Soviet political developments with this?"
"We can identify structural vulnerabilities in any complex system—economic, political, industrial. The Soviet economy is particularly interesting because it's highly centralized, which makes it simultaneously more predictable and more brittle. Small shocks can cascade."
[Image by WOMBO Dream.]
"Like oil prices," Chen confirmed. "Soviet hard currency earnings are heavily petroleum-dependent. A sustained drop in oil prices would create cascading effects through their industrial planning, military expenditures, consumer goods production, political legitimacy. The system can model those cascades."
Voss was quiet for a long moment, studying Chen with that bland, analytical gaze.
"Dr. Chen, I'm going to be frank with you. The DIA has been tracking Soviet economic indicators for decades. We've got expert analysts, econometric models, CIA cooperation, satellite reconnaissance, human intelligence. And we're still mostly guessing. The best we can do is identify general trends and attach enormous error bars to our predictions."
He stood, pulling a business card from his pocket. "If your system can genuinely do what you claim—if it can model complex geopolitical dynamics with any degree of reliability—then this isn't just an interesting research project. This is a strategic intelligence asset. So I'm going to recommend that DARPA fund your proposal, and I'm going to request that my office receives regular briefings on your progress."
Chen took the card: Harold Voss, with a Washington DC phone number and a Pentagon address.
"There's one other thing," Voss added, pausing at the door. "Your two graduate students—the Bergman twins. They're on your proposal as research assistants. They have security clearances?"
"No," Chen said. "Should they?"
"If your work continues developing as your proposal suggests, you'll all need them." Voss smiled slightly. "Welcome to the world of classified research, Dr. Chen. I think you're going to find it very educational."
[Image generated by Gemini.]
He should have felt elated. A DIA endorsement virtually guaranteed DARPA funding. His research would have resources, visibility, impact.
Instead, he felt a creeping unease, as if he'd just passed through a one-way door.
The warm current of certainty flowed through him again, and with it came a fragment of knowledge that he couldn't possibly possess: Voss would be back in six months. The funding would come through. And three years from now, the Psychohistory system would make a prediction so accurate, so impossibly precise, that it would change everything.
[Image by Leonardo.]
Where is this knowledge coming from?
Inside Dr. Chen's brain, imperceptible to any detection system that 1982 could build, a femtozoan composed of hierion-based matter was feeding information into Chen's biological neural networks. This femtozoan was from the future, positioned on Earth with clear mission parameters: guide Chen's research, accelerate AI development, prevent the coming AI winter. The bumpha had sent the femtozoan back through Time with a precision designed to alter the trajectory of human technological development without triggering Nyrtia's Earth Observation system.
So far, the plan was working perfectly.
Chen opened his desk drawer and pulled out a worn paperback: Asimov's Foundation, the cover creased from repeated readings. He flipped to the inside of the back cover and reread a passage he'd written there many years ago:
"The future is not fixed. It is a matter of probabilities, and by changing the present, we change those probabilities."
[Image generated by Gemini.]
And perhaps—though he barely dared to think it—he needed to start asking harder questions about his own uncanny ability to know what would work before he tried it.
Outside, autumn rain began to fall across the UMass campus, and in the Hierion Domain, Nyrtia's Observation systems and Simulations of Reality registered a subtle anomaly in the probability fields surrounding Michael Chen's laboratory—nothing definitive, nothing actionable, but enough to merit closer attention.
The game had begun.
End of Chapter 3
I had Gemini generate some "storybook" images for Chapter 3.
Next: Chapter 4 of "Downgrade".
[Images generated by Leonardo and WOMBO Dream. Visit the Gallery of Movies, Book and Magazine Covers.]