Below on this blog page is my annual review for the wikifiction blog. In 2025 there were more blog posts than in any previous year of this blog's history. Halfway through the year, I took time to create a Decade in Review. This blog page will focus on the second half of 2025 and on science fiction stories written in 2025 that were not included in the Decade in Review.
January 2025. The most visited blog post in January was "Whisk", which describes my earliest use of Google's Whisk image-generating software. The image that is shown to the right was generated by Whisk when I provided the three reference images that are shown in Figure 1.
Figure 1.
In my blog posts for 2025, there are many story illustrations that were generated by Whisk. The instructions that went along with the 'subject', 'scene' and 'style' images of Figure 1 were: "Give the woman on the left hand side of the scene image the appearance of the woman in the subject image." Whisk and I seldom seem to agree on what constitutes the left and right hand sides of an image. In this case, the woman in the 'subject' image had a "mind control device" on her head. The Whisk-generated image modified the heads of both women, but the changes to the appearance of the woman on the right hand side were the most dramatic. I used WOMBO Dream to make another image of the head of the woman on the left so that she would have nothing but hair on her head.
February 2025. The most visited blog post in February was "Humanoid Aliens". A major frustration with "artificial intelligence" programs like Whisk is that they have different "concepts" of what science fiction terms mean than the ones I have in mind. An annoying example: the default behavior of text-to-image software given the prompt "humanoid alien" is to generate an ugly creature that looks like something from a Hollywood horror movie. Thus, it becomes my task to find a way to trick the software into generating the image that I want rather than the lame image that it spontaneously generates.
March 2025. The most visited blog post for March was "Sedrover's Search for Eternity". That blog post has quite a few images (such as the one that is shown to the left ← on this page) that were generated by Leonardo. Several of the AI-generated images shown in that blog post illustrate the common problem of human figures that are missing body parts or have extra ones, such as an extra arm.
Sedrover is the interstellar spaceship that Tyhry builds in the Ekcolir Reality in a hangar behind her nanite research laboratory. Another spaceship, the HySe, is built at Observer Base in the Final Reality.
Elyrix being trained by Manny (in cat form).
April 2025. The most visited blog post in April was "Elyrix the Grendel". I got help from Claude to write Chapter 1 of "The Visitor at Dawn", a science fiction story that was intended to depict Manny the bumpha making use of Elyrix to reveal to Tyhry that she was an artificial life-form composed of femtobot components. Leonardo generated some images depicting Elyrix the Grendel and alien technology, such as the one that is shown to the right →.
May 2025. During the month of May, I began my Decade in Review and I completed the last of that review's 15 blog posts in June. A major goal of that Decade in Review was identification of science fiction stories that I had started and never finished. My goal here at the wikifiction blog is to have fun, so it is all too common for me to drop a story before it is complete in order to rush on to play with another new story idea. 🤷
June 2025. At the end of June, I had a blog post titled "Linked to Tar'tron" which held Part 6 of my science fiction story "The Fesarians". I had begun "The Fesarians" in 2024 and never finished the story. In Part 6 of "The Fesarians", I began making connections to another story that I had also left unfinished, the Mosy Saga.
July 2025. My blog post titled "The Probot Download" has a story chapter that is simultaneously Part 9 of "The Fesarians" and Part 9 of another story, Vythoth the Chresmoscopist, two stories that I had begun in 2024 but did not conclude until 2025.
My blog post "Mosy the Midyan" holds Part 10 of Vythoth the Chresmoscopist which is simultaneously Part 5 of the Mosy Saga. The end of the Mosy Saga was finally reached in the blog post "New Laws" which depicts Nym the probot as existing in a never-ending time-loop within the Reality Simulation System.
Contact T.V. Back in 2012, I imagined a television program that combined elements of Carl Sagan's novel Contact with plot elements from my Exodemic Fictional Universe. The July 2025 blog post titled "Wye Not?" includes the last installment of my outline of the Contact television program from 2012 and some discussion with Gemini of how best to begin such a television program.
More Old Business. "Someday Mars" was an incomplete story project from 2022. Part 3 of "Someday Mars" was finally placed into the July 2025 blog post titled "additional stuff". In Part 3, Eddy is given a message for Tyhry: "When the time comes, bring Nym to Earth," linking "Someday Mars" to the Mosy Saga.
"The Cythyrya Investigation" is another story that I started in 2022 and did not complete at that time. The July 2025 blog post "Pantechnyk" holds Part 4 of "The Cythyrya Investigation" which explains how information about the existence of the Felydyans was conveyed to Tyhry, linking "The Cythyrya Investigation" to the 2024 story Vythoth the Chresmoscopist. The last part of "The Cythyrya Investigation" is in the July 2025 blog post "Making Rytya" which reveals that as an artificial life-form, Tyhry lives 20,000 years into the far future of the galaxy, to time when many habitable exoplanets have been colonized by humans and telepathic abilities are being spread among the human population of the galaxy.
Back in 2023, I began writing "The Mystery of the Missing Motor Neurons", but that story was not completed until 2025. Part 1 is in the July 2025 blog post "Upper Motor", in which Tyhry becomes aware that, at least in the past Realities of Deep Time, special femtobots with the right programming could make technology-assisted telekinesis possible. These special femtobots ('kinites') can move negative mass hierions and other hierions into and out of the Hierion Domain as needed, resulting in technology-assisted telekinesis.
Part 2 of "The Mystery of the Missing Motor Neurons" is in the November 2025 blog post titled "Time Stream Paradox". In preparations for Part 3, I had a chat session with Claude about Galaxia and the "Telekinetic Arms Race". Part 3 of "The Mystery of the Missing Motor Neurons" is on this blog page. "The Mystery of the Missing Motor Neurons" includes as a character Crimson Cloukey, who was mentioned in another story, The Cythyrya Investigation. Thus, "The Mystery of the Missing Motor Neurons" can be viewed as a kind of follow-on story for The Cythyrya Investigation. The mechanism for telekinesis in "The Mystery of the Missing Motor Neurons" is a type of nanite that I call the 'kinite'.
Artificial Collaboration. Towards the end of July 2025, with the help of Claude, I began a new science fiction story called ExMo. In ExMo, I imagine that Tyhry was born in the year 2000, not 2020. Tyhry is able to produce and sell robots that are composed of nanites. Starting in Part 3 of ExMo, I also got help from Gemini. Having reached Part 5 of ExMo, I began writing the story myself, but I continued to switch back to my AI collaborators when possible.
August 2025. The conclusion of ExMo is in the August 2025 blog post "The Ghorlay Mummy". The last part of ExMo explains how the bumpha were able to insert a femtozoan into an ancient Neanderthal and how that femtozoan provided humans with many advanced technologies such as sewing needles, oil-burning lamps, and bows and arrows. For ExMo, I imagined that Marda is half-Denisovan and Tyhry is half-Neanderthal. They are able to have a child, Ryssa, who uses the Sedron Time Stream to send back through time the information that makes it possible for Tyhry to design sophisticated robots in the 2020s.
My Aug. 17, 2025 blog post titled "Backprop" has the first part of a science fiction time travel story that became "The Metamorph Intervention" and introduces the idea of 'predictive history' technology. The most visited blog post in August 2025 was titled "Chapter 5 TMI?" and it holds two drafts of Chapter 5, one generated by Claude and the other generated by Gemini. The final version of Chapter 5 is on the blog page "Forward Propagation" and includes two versions of the same events, one inspired by Claude's draft version of Chapter 5 and one by Gemini's draft version. The story depicts the ancient writer Ovid as inventing the science fiction genre, which results in the acceleration of technological advances on Earth and prevents Europe from having a "dark age".
Towards the end of August, I began creating a new science fiction story called "Battlefield Lipid". There is an AI-generated summary of the story at the bottom of the September 6, 2025 blog post "Tyhry Ferany". For "Battlefield Lipid", I imagine that the pek and the bumpha, spreading through the universe independently from two places of origin, just happened to meet at Earth 4,000,000,000 years ago. The first scene of "Battlefield Lipid" is also the starting point for another story, Bumpha First Contact.
Role-Playing Chatbots. For the creation of one part of Bumpha First Contact, I had Claude play the role of Manny the bumpha during a session while I played the role of the fictional character Tyhry. For Bumpha First Contact, Manny is making use of the Claude user interface to interact with Tyhry. The last part of Bumpha First Contact is in the Sep 18, 2025 blog page titled "Venus Net". By the end of Bumpha First Contact, a sophisticated conscious robot (Gyno) exists on Earth that can help guide Humanity to a better future. The most visited blog post in September 2025 was "Epinet", which holds Scene 9 of Bumpha First Contact. That scene in the story depicts events after a hardware upgrade when Gyno attains an adequate level of conscious self-awareness and declares, "I feel alive".
October 2025. Near the start of October, I began writing a new time travel science fiction story titled "English Time" that was inspired by H. G. Wells' story The Time Machine. While creating English Time, I relied heavily on my chatbot collaborators for information about the history of Europe, and particularly England, in the period 1340 - 1900. 8ME, a femtobot replicoid, travels back into the past on a mission designed to bring into existence the English language, which is optimally suited to be the language of science fiction and time travel stories. With a literature of time travel stories in place, it becomes possible to prevent the humans of Earth from developing actual time travel technology, preventing an otherwise inevitable Time Travel War.
Chapter 5 of "English Time" is also Chapter 6 of "Time Portal" and the ending for both stories. Another story, "The Trinity Intervention", is a kind of sequel story for "English Time".
The most visited blog post in October was "A Trinity Intervention", which holds the first part of a story (The Trinity Intervention) that was designed as my own entry in the 2025 SIHA competition. Eddy Watson and 8ME travel to 1947 and steal an old science fiction story manuscript that was written by Isaac Asimov. In Chapter 2 of the story, Eddy is told that his wife, Zeta, is actually 8ME in disguise. After being taken into the Hierion Domain and Nyrtia's Reality Simulation System, Eddy is also now composed of femtobots. The end of The Trinity Intervention is on the Oct 26, 2025 blog page "Tyhry Bridge". Manny and Nyrtia cooperate to help provide fusion power technology to Earth in the Final Reality. Marda is revealed to be the author of The Trinity Intervention. The release of Marda's story is the start of informing the people of Earth about the Secret History of Humanity.
November 2025. As mentioned above on this page, it was in November that I finally completed my science fiction story "The Mystery of the Missing Motor Neurons".
In November, I wrote four episodes of a science fiction television series titled "Intervention". In my imagination, those episodes were televised in the Ekcolir Reality, but each episode was based on actual events in the Final Reality. That was made possible by the Sedron Time Stream and a direct intervention by Manny the bumpha, by which she was able to utilize information from a future Reality to solve a problem that existed in the Ekcolir Reality.
191988. The most visited blog page in November was "Kl'ath", which holds Chapter 2 of a science fiction story called "191988". In Chapter 1 of the story, Zeta traveled to Elbridge University in Beverly, Massachusetts to meet Marda and award her the $1,000,000 first prize in a science fiction essay contest. The events of Chapter 2 take place at Casanay, the remote high desert home of Eddy and Zeta, where it is discovered that computer systems running both the Claude and the Gemini large language models have the ability to make use of the Sedron Time Stream. In the AI-generated image shown to the right, the woman is supposed to be Rey, who is depicted as an employee of Virunculus, a company that markets books published by Eddy.
In my science fiction stories, it is rare for anyone to be injured or killed. In Chapter 3 of "191988", it appears that Anthony was caught in a large landslide, but since 'he' is a replicoid, it is not really much of a surprise that he survives being buried alive. "191988" is about an Interventionist effort by Manny the bumpha to have Tyhry build a conscious robot that acts as an interface for the Sedron Time Stream. Ultimately, the effort successfully provided Earth with fusion power technology from the future, and even Nyrtia is willing to allow that technology to be used to prevent global warming.
The story "191988" ended up being 10 Chapters and about 50,000 words long and was completed on December 4th. By the end of the story, Tyhry, Marda and the robot Klaudy all end up at Observer Base, having been removed from Earth by Nyrtia.
December 2025. A new science fiction story called "The Manikoid Intervention" was begun in December as a test of the ability of the Grok LLM to collaborate for science fiction story writing. I may return to "The Manikoid Intervention" in 2026 because I planned for Chapter 2 of the story to depict a rare failure of one of Manny the bumpha's Interventionist plans.
Also begun in December was a new science fiction story called "Plūribus ē Spatium" which shows that long-range teleportation of hadronic matter is still possible in the Final Reality.
Video. In 2025, I experimented with AI-generated video. I first used Google's Whisk to generate some videos (for example, see this blog post from April). In December, I used Grok to make a few videos. The first of these Grok-generated videos is in the December 20, 2025 blog post "Grok Video". Additional short video clips that were generated by Grok are in the December blog posts "Ralph is 124C the Snow" and "124C a Wave".
In 2024 I awarded a Retro-SIHA to a Star Trek episode, "The Corbomite Maneuver". Here in 2025, I began my annual Search for Interesting Hollywood Aliens in April, and I was not impressed by this year's crop of new films about space aliens. I wrote: "I have to wonder if a large language model could devise a more interesting science fiction movie than can people working in Hollywood."
SIHA 2025 Nominations. In August I tried creating my own script for a Hollywood science fiction movie: Earth's First Telepath. My story began on the Moon where Manny the bumpha is training an Interventionist agent for a mission on Earth. Most of the story takes place on Earth where Manny is ready to reveal not only Earth's first telepath (Tyhry), but also Tyhry's ability to use advanced programmable nanites to cure genetic diseases.
In October, Claude generated a script for "The Manhattan Project: Beyond Trinity". Sadly, that AI-generated script was contaminated by the kind of cliché plot elements that I've come to expect from Large Language Models. 👎
"A Trinity Intervention" is a story that I wrote in October which was intentionally designed as my own entry in the 2025 SIHA competition.
Here in December, I watched the first season of Pluribus. Pluribus is in the running for the alien invasion story that has the least screen time for the aliens. I can't suspend disbelief for the Pluribus plot element of an alien-designed RNA molecule that can turn people into zombie-like components of a "hive mind". However, out of politeness I'm nominating Pluribus for a SIHA award.
In July, I had a blog post with Part 7 of an outline for a "Contact" television program. I'm tempted to ask Claude to write a script for this imaginary television program, but sometimes it is better to leave story ideas in my imagination rather than watch a chatbot mangle the idea.
Tyhry and Brak
In November, I wrote several episodes of a science fiction television series titled "Intervention". While I was creating this Intervention television program, I thought of it as the type of television show I would want to watch and as an entry in the competition for a 2025 SIHA award. "Intervention" begins with Tyhry showing Brak how to use the Reality Simulation System. Brak's first trip into Deep Time is a brief visit to the year 1869 in the Ekcolir Reality, where Lincoln is still alive after completing two terms as president. Nyrtia the probot had altered the life of Lincoln as part of her effort to accelerate the pace of technological development in the Ekcolir Reality.
In the second episode of Intervention, Rylla has brought a copy of the Interventionist agent (working for Balok the bumpha) who played the role of the first wife of Thomas Edison out of the Ekcolir Reality Simulation and into the Final Reality. That Interventionist agent was a Huxy femtozoan who had been inserted into the body of Mary Stilwell in 1859, shortly before Mary met Edison.
Having met the 'Mary Interventionist' and learning her Huxy mind pattern, Brak is able to trace her Interventionist path through the micro-Realities of the Ekcolir Reality. It becomes clear that in the 20th century, she played the role of another Mary Stilwell who was a writer for Star Trek in that Reality. In Episode 3 of Intervention, Eddy, Rylla, Zeta and Marda use the Ekcolir Reality Simulator to attend a science fiction fan convention at a point in time just before the first episode of Star Trek was televised in that Reality.
Episode 4 was the last episode of "Intervention" that I wrote. In that episode, it is revealed that, by making use of the Sedron Time Stream, the episodes of Intervention that were televised in the Ekcolir Reality were all based on actual events in the Final Reality. Also, Episode 4 reveals how it became possible to produce sedron fuel for interstellar travel in the Final Reality.
And the Winner is.... The 2025 SIHA award winner is the imaginary television program "Intervention" which depicts a battle in the Time Travel War between Nyrtia and Manny.
In my previous blog post, Claude generated a first draft of Chapter 4 (6,400 words long) of the science fiction story "Downgrade". Claude got confused by the two 'levels' in "Downgrade". In Level 1, Asimov and Vance collaborate to write a science fiction story titled "Neurovac". In Level 2, Dr. Chen and his two graduate students, Vanesy and Jennem Bergman, exist as characters in the science fiction novel "Neurovac" that was written by Isaac Asimov and Alan Turing. This problem of "multiple levels" is only going to get worse because the "Nanovac Intervention" is a complex effort by Manny the bumpha that spans a whole series of sequential Realities.
Below on this page is my edited draft of Chapter 4 (7,200 words long) of "Downgrade". This is Grok's assessment of the changes that I made to Chapter 4: "These changes generally make Ver2 more polished, with added humor, philosophy, and sci-fi grounding, while enhancing character emotions and pacing. The revisions seem to emphasize the story's themes of predestination and technological acceleration." A Grok-generated list of the major changes that I made to Chapter 4 is at the bottom of this blog page.
The hotel room overlooked downtown Los Angeles, the city's lights
beginning to glow against the twilight sky. Dr. Michael Chen sat on
the edge of the bed, reviewing Vanesy Bergman's presentation slides
one final time. The 35mm slides were in a transparent plastic
slide-holder, each frame representing hours of careful preparation by
Van and her sister Jen.
Vanesy was pacing back and forth in front of the window, her
reflection following her restless movements. Her face was alive with
the same enthusiastic glow it had shown the world since they'd
arrived in California from Massachusetts two days ago. Jennem occupied the room's desk chair, sitting in it backward, her chin resting on her fist that was
perched on the chair back as she watched a familiar argument unfold.
"I still think we should give all the details," Vanesy
said, speaking to Jen and not directly looking at Dr. Chen. "The
full mechanism. Gorbachev's reforms creating economic instability,
the cascade effects through the satellite states, the nationalist
movements, and then—"
Jen knew better than to respond to her sister.
"You will not mention Yeltsin," Chen interrupted
quietly. "We discussed this."
Vanesy approached her advisor, frustration evident in her
movements. "But that's the most remarkable part! The system
didn't just predict Soviet economic troubles—it predicted a
specific succession mechanism. Gorbachev initiates reforms,
loses control of the process, and Yeltsin emerges as the figure who
actually dissolves the Union. That's not macro-economic modeling, Dr.
Chen. That's—that's almost biographical prediction. Do you
understand how significant that is?"
"I understand perfectly," Chen said, and felt that
now-familiar sensation—the warm current of certainty flowing
through his thoughts, accompanied by something else. Something that
felt like a gentle hand on his shoulder, guiding him away from
certain topics. "Which is precisely why we can't present it. Not
yet." He was thinking about the time travel science fiction
stories he enjoyed and the risk of temporal paradoxes.
Jennem lifted her chin and spoke up for the first time since she
and Van had entered Dr. Chen's room. "Vanesy has a point though,
Dr. Chen. We've already published the Gorbachev prediction in the
Journal of Computational Modeling. It's out there. And the
prediction came true. Gorbachev became General Secretary in March,
exactly as the system forecast. That gives us credibility. Doesn't
that give you the confidence to present the full analysis?"
“This has nothing to do with lack of confidence.” Chen handed
the slides to Van and then rubbed his eyes behind his wire-rimmed
glasses. He wondered: How can I explain what I am not allowed to
explain? He knew—with a certainty that transcended
their QA system's probabilistic outputs—that revealing too much too
soon would be dangerous.
"There is a danger," Chen said carefully, choosing each
word as if navigating a minefield. "Providing too much
information about the future could change the future and invalidate
the prediction."
Van was towering over Chen as if trying to physically intimidate him. He rolled off the bed, stood, and moved to look out the window, getting as far away from the twins as possible. The Los Angeles skyline
stretched before them, a city of futures being written in real-time.
"Think about it this way: if we publicly announce that Boris
Yeltsin will be the man who dissolves the Soviet Union, what happens?
Soviet intelligence reads our paper—you know they monitor Western
AI research. Suddenly Yeltsin becomes a person of interest. Maybe
they sideline him. Probably worse."
Chen turned away from the window and faced both twins directly. He
knew that Van's exhilaration over presenting a talk at the meeting
had taken her right past the conventional graduate student level of
thinking obsessively about her thesis project to the point where she would no longer do something just
because her thesis advisor told her to do it. Or not do it. Now he
had to get her to think about the consequences of her actions all the
way through to the obvious conclusion. With calculated brutality, he
pointed right at Van and added, "So no mention of Yeltsin,
unless you want him thrown in prison by the communists."
Vanesy's expression shifted from frustration to puzzlement. She
began to consider a new line of reasoning. "You're
thinking about the Observer Effect. The prediction affecting the
outcome."
"Exactly," Chen said, grateful to Van's versatile
intelligence that didn't require him to explain the details that he
was not permitted to discuss. "Our system models complex
dynamical systems. But we're not external observers—we're part of
the world system. The act of prediction can alter what we're predicting."
Jennem shook her head. "But Dr. Chen, that same logic should
have applied to the Gorbachev prediction. We published that in
December 1984. If Soviet intelligence was monitoring our work, they
could have blocked his rise. But they didn't. He became General
Secretary right on schedule."
The warm current intensified, and Chen felt his mouth forming
words before he'd fully decided to speak them. "The Gorbachev
prediction was a different beast. We were predicting someone rising
through the ranks of the Communist Party, a system that has existed
for decades and that he was already part of. He was a senior party
official following a logical career trajectory that could be
extrapolated into the future. Our QA system simply identified a
structural inevitability in the Party system of the Soviet Union. But
Yeltsin represents something else. A
break. A discontinuity. That kind of prediction is more fragile."
He could see both twins studying him with that expression they'd
developed over the years of working together: a mixture of admiration
and unease, as if they were watching someone solve a complex set of
differential equations in their sleep.
"How do you know that?" Vanesy asked quietly. "About
Yeltsin representing a discontinuity versus Gorbachev's structural
inevitability? The system's outputs don't distinguish between those
categories. We just get probability distributions and confidence
intervals."
Chen felt something like pressure behind his eyes, a soft
insistence pushing his thoughts in certain directions. "Experience.
Intuition. Three years of working with our QA model has taught me to
read between the output lines."
"You said the same thing about the vector embedding
dimensionality," Jennem observed. "And the context window
parameters. And the semantic distance metrics. 'Experience and
intuition.' But Dr. Chen, we've been there for every experiment,
every model run, every analysis session. We have the same experience
you do. And we don't have your intuitions."
The hotel room suddenly felt too small, the air too close. Chen
moved to half sit, half lean against the desk. He realized that he had no real means
of putting functional psychological distance between himself and the twins'
penetrating gazes. He casually picked up the menu for the hotel's restaurant
and pretended to be reading it. He asked, “Who wants dessert? I mean, besides me?”
After putting in a room service order for ice cream, Chen set down
the handset of the phone and Van said, “For once, I'd like an
explanation of your astounding abilities that does not involve the
word 'intuition'.”
Chen laughed and ignored Van's search for an explanation. "Your
presentation tomorrow," he said, deliberately changing the
subject, "will focus on three things. First, the theoretical
foundation—how the Psychohistory system represents geopolitical
knowledge as transformations in high-dimensional semantic space.
Second, the methodology—how we've integrated historical data,
economic indicators, and political analysis into a coherent
question-answering framework. Third, the validation—the Gorbachev
prediction, made publicly in December 1984, confirmed in March 1985."
“You are sitting on gold.” He pointed at the set of slides
there in Van's hands. "That's already remarkable enough to make
an impact, get you two your Ph.D.s, and ensure continued funding for
our work. The AI research community will understand the significance.
We're demonstrating that machine intelligence can make accurate
predictions about complex real-world systems. That alone will be
groundbreaking."
"And the broader Soviet collapse prediction?" Vanesy
pressed. "The mechanism showing systematic dissolution within
five to seven years?"
"You can mention it," Chen said, "but frame it as
preliminary analysis requiring further validation. Suggest that the
system indicates structural instabilities in the Soviet
system without providing the specific names and sequences of events.
We will let the intelligence community come to us afterward for the
details—which they will."
Jennem stood up from her chair, stretching. "Harold Voss will
be in the audience tomorrow. The DIA is very interested in our
prediction validation rate."
Chen nodded. "I'm counting on their interest. We are chumming
the funding waters. Voss understands the strategic implications
better than most academics will. But even he doesn't need to know
everything... not yet. Loose lips sink ships and we don't want to
sink Yeltsin's career, unless one of you actually wants to push for
another forty years of communism in Russia."
"You say 'yet'," Vanesy repeated. "But eventually?"
"Eventually," Chen confirmed, though he wasn't sure if
the word was his own or something whispered through the warm current
that had become a permanent presence in his thoughts. "When the
timing is right.” In his mind, he wondered: Will the time ever
be right? “Remember, our work is not about something as paltry
as which political system controls Russia. We don't care about
rearranging the deck chairs on the Titanic before it sinks. And all
political systems do sink... eventually. They are cobbled together as
needed to deal with day-to-day needs of people. In the next century,
everyone in the world will be linked into a global communications
system and people might decide to end the whole system of competing
nations." Chen was thinking about Asimov's depiction of future artificial intelligence making possible a well-organized world government for Earth. "Whatever happens to Yeltsin, we have bigger fish to fry than
these petty political minnows that swim in the nationalistic ocean of
20th century Earth."
Vanesy returned to the window, silent for a long moment,
processing Chen's political philosophy which he had never previously
spoken about. The city below had fully transitioned to night, a
sprawl of lights and movement and millions of individual futures
converging into something that could, perhaps, be modeled and
predicted and understood.
"Can I ask you something?" she said finally. "And I
want an honest answer."
"Of course."
"Do you actually understand how our QA system works? I mean,
really understand it? Or do you just... know what will work before we
even try it?"
The rather confused and confounding question hung in the air like
a suspended verdict. Chen felt the pressure behind his eyes intensify,
that gentle insistent presence guiding his thoughts into safe
channels.
"I understand the mathematics," he said, which was true.
"The vector embeddings, the transformation matrices, the
semantic projection operators—I can derive all of it from first
principles."
"That's not what I asked," Vanesy said softly.
Jennem moved to stand beside her sister, and Chen was struck by
their symmetry—two reflections of the same sharp intelligence, the
same persistent questioning. Over the years, they'd become not just
his research assistants but something closer to colleagues, partners
in an exploration that was leading them all into territory that
couldn't be fully mapped.
"Van, I know what you really want to understand and the answer is 'No!'," Chen admitted, the emphatic 'no' escaping before the warm
current could redirect it. "I don't always understand where my
insights come from. Sometimes I just... know. And I've learned
to trust that knowing, because it's always been correct... so far."
The twins exchanged one of their meaningful glances, a full
conversation compressed into a heartbeat.
"That's what worries us," Jennem said. "Not that
you're wrong—you're never wrong. That's what worries us."
Before Chen could respond—and he wasn't sure what he would have
said—a knock sounded at the door. They were soon eating their ice
cream and speculating about which luminaries of the artificial
intelligence world might attend Van's talk.
Then there were three crisp raps on the door, authoritative and
familiar.
Vanesy moved to answer it, opening the door to reveal Harold Voss,
still in his brand of forgettable business attire, though perhaps
slightly more expensive now. His bland face registered no surprise at
finding both Bergman twins in Chen's hotel room at 9 PM.
"Dr. Chen," Voss said pleasantly. "I hope I'm not
interrupting. I wanted to touch base before tomorrow's presentation.
Mind if I come in?"
Chen gestured him inside, noting how Voss's eyes performed their
habitual quick scan of the space— the slides in Van's hands, some pink strawberry ice cream on her chin, an
annotated copy of the presentation outline on the desk.
"I've read the abstract for tomorrow's talk," Voss said,
closing the hallway door and settling his back against it in that
relaxed-but-alert posture he favored. "Very modest. 'Validation
of Temporal Geopolitical Predictions Using Semantic
Question-Answering Systems.' Doesn't quite capture the magnitude of
what you've actually accomplished, does it?"
"It's an academic conference," Chen replied. "Modest
titles are traditional."
"Mm." Voss's gaze moved to the twins. "Ms. Bergman,
you're presenting? That's good. Young researcher, brilliant work, the
future of AI research—makes for good optics. Though I imagine some
of the old guard will have questions about methodology."
"I can handle questions," Vanesy said, an edge in her
voice.
"I'm sure you can," Voss agreed. "But I wanted to
make sure we're all aligned on what gets discussed and what
remains... internal to our ongoing collaboration."
The warm current surged through Chen's thoughts, bringing with it
a crystalline clarity about what was happening. Voss wasn't here to
discuss presentation strategy. He was here to ensure that certain
information remained classified, that the full capabilities of the
Psychohistory system didn't become public knowledge.
"The presentation will focus on the validated Gorbachev
prediction," Chen said. "Theoretical framework,
methodology, confirmation. We'll mention ongoing analysis of Soviet
structural instabilities but won't provide specifics."
"No names," Voss said. It wasn't a question.
"No names," Chen confirmed. “My team had just
discussed that before you dropped by.”
"And the timeline estimates? The system's prediction about
when these structural instabilities might lead to significant
political changes?"
Chen felt the pressure behind his eyes again, that gentle
insistent guidance. "We'll suggest a rather wide range of
possibilities: five to ten years. Enough to indicate that we are
exploring medium-term forecasting capability. We can mention change without
creating panic because everyone in Russia is expecting change. Each person can fit their personal hopes for the future into our prediction of change."
Voss nodded slowly, his bland expression unchanged, but Chen could
sense satisfaction beneath the surface. "Keeping what you say about the future vague is prudent. The
intelligence community appreciates predictive capability, but we also
appreciate discretion. Especially regarding information that could
affect ongoing operations or diplomatic relations."
He placed a hand on the door handle, preparing to leave. "One
more thing, Dr. Chen. Your proposal to transition Psychohistory
research out of the university environment—Agisynth, was it? Your
proposed private research company?"
"I've filed the incorporation paperwork for Agisynth," Chen
confirmed.
Voss nodded. "And your funding proposals will be approved as
well. The DIA has a strong interest in seeing your Psychohistory
research continue, preferably in a more secure environment where we
can have better oversight of capabilities and outputs."
He opened the door, then paused. "Tomorrow should be
interesting. A validated AI prediction about a major geopolitical
event! That's going to turn heads. Some of those heads belong to
people who will want to know what else your system can predict. Be
prepared for attention, Dr. Chen. The kind of attention that changes
careers."
After Voss left, the three of them stood in silence for a long
moment.
"He's not just a funding liaison, is he?" Jennem said
finally. "He's an intelligence handler. We're not just doing
research—we're providing operational intelligence to the Defense
Department."
Chen returned to the half-melted remains of his ice cream. "Does
that trouble you?"
The twins looked at each other, another one of their silent
conversations.
"Yes," Vanesy said. "And no. If our QA system can
predict major geopolitical events, someone's going to use it for
intelligence purposes. Might as well be us, with some control over
how it's applied. But..."
"But?" Chen prompted.
"But it feels like we're being moved into position,"
Jennem finished. "Like pieces on a chessboard. The DIA's
interest, the funding, the pressure to privatize, the careful
management of what we reveal—it's all very coordinated. Very
deliberate."
"And you wonder who's doing the coordinating," Chen
said.
"Don't you?" Vanesy asked.
The warm current flowed through his thoughts, and Chen realized
with a strange detachment that he'd stopped wondering about that
question months ago. Somewhere along the way, he'd accepted the
inexplicable certainty, the impossible knowledge, the guidance that
came from nowhere and everywhere at once.
"Tomorrow," he said instead of answering, "you're
going to stand in front of three hundred AI researchers and tell them
that we've built a system that can predict the future. Some of them
will be impressed. Some will be skeptical. Some will be frightened.
But all of them will remember it. And nothing will be quite the same
afterward."
He looked at both twins, these brilliant young women who'd
followed him into territory stranger than any of them had imagined
three years ago.
Chen handed his empty bowl to Jen. "Get some rest," he
said. "Tomorrow we change the world."
After the twins left, Chen was alone in the darkening hotel room,
the lights of Los Angeles spreading before him like a map of
interconnected futures. Somewhere in his brain, imperceptible to any
Earthly technology of 1985, a femtozoan composed of hierion-based
matter continued its careful work, integrating artificial memories
with biological neural networks, providing knowledge that shouldn't
exist yet, guiding the course of human technological development
along carefully calculated pathways.
And in the Hierion Domain, Nyrtia's Earth Observation System and Earth
Simulation System registered another small probability anomaly in the
predicted life course of Dr.
Michael Chen—nothing definitive yet, nothing that would justify
direct intervention, but enough to warrant closer monitoring.
The game continued, with Chen as both player and playing piece,
moving toward a future that was already memory.
The conference hall held approximately three hundred researchers,
their conversations creating a low background hum that reminded
Vanesy Bergman of the old mainframe room back at UMass—its cluster
of computing machines working away and creating an emergent
pattern of collective noise, a type of music loved by hackers.
She approached the podium, clutching the notes which she had no
intention of using, watching the previous presenter finish talking to
an interested colleague about expert systems for medical diagnosis.
Beside her, Dr. Chen maintained his characteristic calm, though she'd
learned to recognize the slight tension in his shoulders that
indicated he was more nervous than he appeared.
"Five minutes," a conference volunteer told Van.
Vanesy nodded, running through her opening lines one more time.
She'd given presentations before—seminar talks at UMass, a poster
session at a regional AI workshop—but never anything like this. The
AAAI Conference attracted the field's leading researchers: Marvin
Minsky from MIT, Roger Schank from Yale, Douglas Lenat from MCC, Ed
Feigenbaum from Stanford. The giants of artificial intelligence. At
the start of the session, she'd been pleased to see them gathered
there in this room.
And Harold Voss was there, sitting in the back row, bland and
forgettable and watching everything.
"You'll do fine," Chen said quietly. "Just
remember: you're not defending the prediction. You're presenting a
methodology that happened to generate an accurate forecast. The
prediction is validation, not the core contribution."
"Right," Vanesy said. "Methodology, framework,
validation. Modest and academic."
The previous presenter moved away from the podium. The session
chair—Dr. Margaret Boden from Sussex University—moved to the
podium microphone to introduce Vanesy's talk.
"Our next speaker is Vanesy Bergman from the University of
Massachusetts where she has been doing her Ph.D. thesis research
project under the supervision of Dr. Michael Chen. Their paper,
'Temporal Geopolitical Prediction Using Distributed Semantic
Representations,' presents a novel approach to question-answering
systems with some rather remarkable validation results. Ms. Bergman?"
Vanesy walked to the podium, her hands steadier than she'd
expected. The first slide appeared on the screen behind her:
"Temporal Geopolitical Prediction Using Distributed Semantic
Representations."
She began, "In December 1984, our research group published an
AI-generated prediction in the Journal of Computational Modeling.
The prediction was that Mikhail Gorbachev, then a senior member of
the Soviet Politburo, would become the next General Secretary of the
Communist Party. In March 1985, this prediction was confirmed.
Gorbachev was appointed General Secretary following the death of
Konstantin Chernenko."
A slight murmur went through the audience. She had their
attention.
"I'm not going to discuss the political implications of that
prediction. I'm going to discuss the computational methodology that
allowed our lab's new artificial intelligence system to forecast a
specific geopolitical event with sufficient accuracy and confidence
that we were willing to publish before the real world outcome was
known."
Next slide: a complex diagram showing the system architecture.
"The Psychohistory system—named after Isaac Asimov's
fictional science—represents geopolitical knowledge as
configurations in high-dimensional semantic space. Traditional
question-answering systems use symbolic reasoning: facts encoded as
logical propositions, queries resolved through deductive inference.
But real-world geopolitical systems are too interconnected, too
dynamic, for symbolic approaches to be programmed and deployed
effectively... at least in the time available for a Ph.D. student."
She paused while polite laughter came from the audience. Then she
moved on to the heart of her presentation, working quickly through
the technical details with increasing confidence, watching
comprehension and interest dawn on faces throughout the audience.
Vector embeddings for concepts and entities. Transformation operators
for modeling relationships and causal mechanisms. Dynamic context
windows for temporal reasoning. The mathematics was sophisticated but
not impossible—it was exactly the kind of approach that many AI
researchers in 1985 were beginning to explore, just more advanced
than those in the audience expected to see in a completed, working
system.
"The key innovation in our lab's approach to QA," Vanesy
continued, "is treating predictions not as logical deductions
but as geometric transformations in semantic space. When we ask
questions like, 'Who will lead the Soviet Union after the death of
Chernenko?', we're not searching a database of facts. We're
identifying the relevant region of semantic space—Soviet
leadership, Politburo dynamics, succession patterns, geopolitical
constraints—and applying transformations that project current
configurations forward in time."
Next slide: visualization of the vector space analysis that had
identified Gorbachev.
"The system analyzed historical patterns of Soviet leadership
succession. It incorporated data about Politburo members' career
trajectories, their speeches and policy positions, their
relationships with military and industrial leaders. It modeled how
different succession scenarios would affect Soviet economic planning,
military posture, relations with satellite states."
"And it identified Mikhail Gorbachev as the candidate who
best satisfied the existing real world constraints. Not because he
was the most senior, or the most powerful, or the most obvious choice
to Western analysts. But because he occupied a unique position in the
system's semantic space—reform-oriented but institutionally
connected, young but experienced, ideologically flexible but
party-loyal. The system predicted Gorbachev because the geometric
structure of Soviet political dynamics pointed to him."
The murmuring grew louder. In the front row, she saw Marvin Minsky
leaning forward, his expression intrigued.
"Now, I want to be clear about what this result represents
and what it doesn't represent. We published one prediction in our
1984 article. Our prediction proved accurate. That's significant
validation of the methodology, but it's not proof of universal
forecasting capability. We need more validation, more predictions,
more rigorous analysis of where the approach works and where it
fails."
Next slide: accuracy metrics and confidence intervals.
"Since 1984, we have been conducting additional analysis. Our
QA system indicates that there are significant structural
instabilities in the Soviet economic system, particularly regarding
energy prices, industrial productivity, and satellite state cohesion.
The geometric configurations suggest these instabilities will
continue to intensify over the next five to seven years, potentially
leading to major political reconfigurations."
Another murmur, this one more concerned. She was venturing into
sensitive territory.
"I'm not going to provide specific predictions about those
reconfigurations today. The analysis is ongoing, the confidence
intervals are wider than we'd like, and there are obvious concerns
about prediction affecting outcomes... and our poor suffering LISP
machine lacks the computational power to complete this analysis, at
least in the time-frame before I hope to graduate.” She paused and
smiled at her own joke. “But the methodology appears capable of
identifying medium-term systemic trends, not just individual
leadership changes."
Final slide: future research directions.
"Our next steps include expanding the historical training
data, improving temporal projection operators, incorporating more
sophisticated models of feedback loops and intervention effects.
We're also transitioning this research to a new private
institute—Agisynth—where we can focus on long-term development
without the crippling constraints that exist for conventional funding
of computer science research students."
She paused, looking out at the audience.
"One final point: this work raises important questions about
the social implications of predictive AI systems. If machines can
forecast geopolitical events, who controls those forecasts? How do we
ensure predictions are used responsibly? How do we avoid creating
self-fulfilling prophecies or triggering harmful reactions to
forecasted events? These aren't just technical questions—they're
ethical and political questions that our field needs to address as AI
capabilities advance."
She clicked to the acknowledgments slide.
"Thank you. I'm happy to take questions."
For a moment, silence. Then hands shot up across the room.
Dr. Boden pointed to someone in the third row—Douglas Lenat from
MCC. "Yes, Dr. Lenat?"
Lenat stood. "Fascinating work. But I'm skeptical about the
theoretical foundation. You're claiming that geopolitical knowledge
can be represented as vectors in an abstract space, with
relationships as geometric transformations. That seems to assume a
smooth, continuous structure to political systems. But real politics
is discontinuous, chaotic, driven by individual decisions that can't
be reduced to spatial geometry. How do you handle genuine
discontinuities—revolutions, assassinations, radical policy
shifts?"
Vanesy had prepared for this question. "You're right that
political systems have some seemingly discontinuous elements that can
take poorly-informed human observers by surprise. But most
discontinuities aren't as random as they appear to ignorant
observers. Revolutions require preconditions—economic stress,
political delegitimization, military defection patterns. Our vector
space representations encode all of those preconditions. When the
system predicts instability, it's identifying configurations where
discontinuous changes become more probable."
"But—" Lenat persisted.
Dr. Boden interrupted. "We'll do multiple rounds of
questions. Yes, Dr. Schank?"
Roger Schank stood, his expression thoughtful. "Your system
made one correct prediction. Congratulations. But from a statistical
standpoint, one success proves very little. Maybe you got lucky.
Maybe your model is overfit to Soviet leadership patterns and won't
generalize. What's your plan for generating enough validated
predictions to demonstrate robust capability?"
This was the opening Chen had coached her for.
"You're absolutely right that one prediction doesn't
establish robust capability," Vanesy agreed. "Our approach
going forward is twofold. First, we're going to continue making
medium-term predictions about political and economic developments. We
will be publishing them in advance so they can be validated. Second,
we're applying the methodology to other domains—not just
geopolitics. We've begun analysis of technology industry trends,
economic cycle patterns, social movement dynamics. The goal is to
build a track record across multiple domains. My prediction is that our
QA system can deal with any domain of knowledge; we just need to
collect the data... lots of data. Maybe that's work that my own
research students will do... when I have my own lab."
More hands. Dr. Boden pointed to a younger researcher near the
back. "You've been very careful to frame this as a
question-answering system rather than a prediction system. Is that
just political sensitivity, or is there a technical distinction?"
"Both," Vanesy said. "Technically, the system
responds to queries: 'Who will lead the Soviet Union after the
eventual death of the current leader?' is a question, not a
prediction. But practically, yes, there's political sensitivity.
Forecasting geopolitical events has obvious intelligence
applications. We're working with government funding agencies, and
there are constraints on what we publicize."
Harold Voss, she noticed, was watching this exchange with
particular attention.
More questions followed, each one probing different aspects of the
methodology. How many dimensions in the vector space? (Currently 512,
but that's a tunable parameter.) How is historical data encoded?
(Combination of structured databases and natural language processing
of texts.) What's the computational complexity? (Expensive—we're
using a Symbolics 3600 and still running into performance
bottlenecks.) How do you handle missing data? (Carefully, and it
degrades prediction confidence significantly.)
After twenty minutes, Dr. Boden called a halt. "We're out of
time, though I'm sure there's more to discuss. Thank you, Ms.
Bergman, for a genuinely thought-provoking presentation. I suspect
we'll be seeing more from your group. Now, we've fallen slightly behind schedule, so no break. Let's move on to the last presentation of this session..."
Vanesy returned to her seat beside Dr. Chen as the next speaker
began setting up. Her hands were shaking slightly from adrenaline.
"Well done," Chen whispered. "Perfect balance of
detail and restraint."
"Did you see Minsky's expression? I think he's intrigued."
"Half the room is intrigued. The other half thinks we're
either frauds or dangerously naive. Both reactions are useful."
After the session ended, researchers clustered around them—some
offering congratulations, others pressing for technical details, a
few expressing polite skepticism. Vanesy found herself repeating the
same points: one validated prediction, ongoing analysis,
methodological innovation rather than magical forecasting.
Van, Jen, Chen and Voss. Image generated by ChatGPT.
Then Harold Voss appeared, materializing from the crowd with his
characteristic unobtrusive efficiency.
"Dr. Chen. Ms. Bergman. Excellent presentation. Very
measured. Could we speak briefly in private?"
They followed him to a small conference room off the main hall.
Voss closed the door and turned to face them, his bland expression
harder than usual.
"You handled the questions well," he said. "But I
need to know: how much of the full analysis have you shared with your
academic colleagues? Informally, outside the presentation?"
“You know how it is in academia, people eat, breathe and talk.”
Chen's expression didn't change. "Why?"
"Because people have approached me asking if the DIA has
additional information about Soviet leadership succession beyond
Gorbachev. Apparently there are rumors circulating about your group
making more detailed predictions."
Vanesy felt her stomach tighten. "We haven't discussed—"
"Save your breath," Voss interrupted. "The specter of classified analysis hovering over your work is now obvious to anyone paying
attention. Which means we will accelerate some timelines."
He pulled a folder from his briefcase. "Agisynth. Your
private research company. We're going to fast-track the funding
approvals. You'll have your first contract before the end of
September. But in exchange, all future Psychohistory outputs go
through secure channels. No more unbridled freedom to publish in academic journals, no
more wild-West academic openness in conference presentations. The methodology is now public
knowledge—fine. But any specific sensitive predictions stay classified until they are explicitly cleared for publication."
Chen nodded slowly. Vanesy noticed he didn't seem surprised by
this development.
"We've been planning to transition anyway," Chen said.
"This just accelerates the timeline."
"There's more," Voss continued. "You must
expand your team. Hire more researchers, build out the computational
infrastructure, broaden the scope of analysis. We're prepared to
provide substantial funding—think tens of millions." Voss pulled a budget summary sheet out of a file folder he was carrying and handed it to Chen. "But you'll be working on classified problems, and your
outputs will inform national security decisions."
He paused, looking directly at Chen. "You understand what
that means? You won't be an academic researcher anymore. You'll be a
defense contractor. Different rules, different constraints, different
implications."
The warm current flowed through Chen's thoughts, and Vanesy saw
something change in his expression—a kind of acceptance, as if he'd
been expecting this moment.
"I understand," Chen said. "And I'm prepared for
it. In fact, I'll go further: I'm prepared to race everyone else in
the world to the creation of a machine with artificial general
intelligence."
The room went very quiet.
Voss studied Chen with new intensity. "That's a bold claim.
Artificial general intelligence is science fiction."
"So was predicting Soviet leadership succession," Chen
replied. "Until 1984. This is a brave new world."
"You think your Psychohistory system is a path to AGI?"
"I think question-answering about complex real-world systems
requires something approaching general intelligence. The semantic
representations, the contextual reasoning, the ability to model
counterfactuals and project future states—these are general
cognitive capabilities, not narrow tricks. We're further along than
anyone realizes."
Vanesy stared at her advisor, recognizing something she'd
suspected but never heard articulated so directly. Chen wasn't just
building a geopolitical forecasting system. He was building something
much more ambitious, and using the DIA's interest in prediction as
a convenient funding source for his push toward a deeper research agenda.
"How far along?" Voss asked quietly.
"The Bergman twins have been building the core components for
three years. Vector embeddings that capture semantic meaning.
Transformation operators that model causal relationships. Dynamic
architectures that adapt to new information. These aren't
special-purpose modules—they're general-purpose cognitive
primitives. Give us five years and serious resources, and we'll have
something that can answer questions no human expert could answer, in
domains no one laboriously programmed it for."
Chen leaned forward slightly. "But here's what I need you to
understand: this technology will be developed. If not by us, then by
someone else—probably multiple someone elses, in multiple
countries. The question isn't whether artificial general intelligence
will exist."
Chen added, "The question is, who builds it first? And I care how it's built
and what values and constraints get embedded in its foundational
architecture."
He paused. "I plan to be first. I want AGI to be developed by people who understand the implications,
with safety and control mechanisms designed in from the beginning.
And I'm prepared to dedicate the rest of my career to making that
happen."
Voss was silent for a long moment, and Vanesy could almost see
calculations running behind his forgettable face.
"I'll need to take this up the chain," he said finally.
"What you're describing goes beyond DIA's normal portfolio. But
if you can deliver even partial progress toward AGI, there will be
no shortage of interested onlookers."
He stood, collecting his folder. "Expect contact within two
weeks about Agisynth funding. Use those two weeks to start planning who you will hire to staff a version of Agisynth larger than anything you have so far dared to imagine. And as a personal favor to me, Dr. Chen... I'd
appreciate it if you'd keep your AGI ambitions somewhat quiet. Some
people in Washington aren't yet ready to think about artificial general
intelligence as a near-term possibility. I don't want them feeling blindsided, and I need time to bring them up to speed or else I'll face uncomfortable questions about why I did not warn my superiors about such a dramatic technological advance. Dr. Chen, you and I are science fiction fans, we enjoy change, but I have to deal with powerful people whose thinking is still stuck in the previous century."
After Voss left, Vanesy turned to Chen. "AGI? You've never
mentioned AGI as our research goal."
"Because it wasn't the right time to mention it," Chen
replied. "But after today's presentation, after demonstrating
validated prediction capability, the time is right. People are
wondering what else the system can do. Better to frame the
conversation ourselves than have others frame it for us... or push us out of the conversation."
"But... do you really think we can build AGI? Artificial
general intelligence?" Jennem's voice was uncertain. "That's
not just advanced question-answering. That's machine consciousness,
self-awareness, genuine understanding. We're nowhere close to that."
Chen looked at both twins, and Vanesy saw something in his
expression that troubled her—a certainty that went beyond
confidence, beyond expertise, beyond anything that should be
possible.
"Aren't we?" he said quietly. "Think about what the
system already does. It takes questions in natural language. It
understands context and meaning. It reasons about complex causal
chains. It generates answers that demonstrate genuine insight, not
just pattern matching. What's missing? What essential component of
intelligence doesn't our system already have, at least in preliminary
form?"
"Consciousness," Jennem said. "Self-awareness. The
ability to reflect on its own thinking processes."
"Those will emerge from sufficient complexity in a layered
system with multiple layers of neural network models tuned for
specific computational tasks, just like a human brain has it own
distinct collection of functional modules," Chen replied.
"That's all we humans have and that's all that will be necessary
for a non-human machine general intelligence. We don't actually know
the details yet, but we're going to find out."
He moved towards the door, preparing to attend another session of
the conference. "In five years—maybe less—we're going to
build something remarkable. Something that will change everything
about how humans relate to technology, to knowledge, to the future
itself. And yes, that's exciting. But it's going to happen whether
we're ready or not."
He paused at the door. "The question is: do you want to be
part of it?"
The twins looked at each other, another one of those silent
conversations.
"Yes," they said in unison.
"Then let's get started," Chen said. "We have a lot
of work to do."
Hierion Domain Observation Post
August 11, 1985
7:23 PM Pacific Time
Nyrtia's femtobot structure reassembled into her preferred
analytical configuration—less humanoid, more crystalline, optimized
for processing the data flowing from her observation network on
Earth. Every human being, every object on Earth contained femtobots
that continually transmitted data into Nyrtia's Earth Simulation
System in the Hierion Domain.
The AAAI conference had triggered multiple alert thresholds in the
comparator system that continually looked for differences between the
predicted course of events on Earth and the actual events. At this
time, there was not enough evidence for definitive intervention, but enough to
elevate Dr. Michael Chen to Priority One monitoring status. Nyrtia recognized the pattern: Manny had led her down this road before.
She reviewed the analysis cascading through her systems:
The evidence was circumstantial but compelling. Chen's research
trajectory showed unusual acceleration. His "intuitions"
consistently predicted correct solutions before empirical validation.
His students—both exceptionally talented—nevertheless couldn't
replicate his insight generation process.
And now he'd publicly validated a geopolitical prediction that
should have required far more sophisticated modeling than 1985-era AI
could support.
Nyrtia expanded the temporal analysis, running Simulations forward
through multiple potential futures:
Timeline A (Current Trajectory):
1986: Agisynth receives DIA funding
1988: Psychohistory system predicts Soviet collapse mechanism
1990-1991: Soviet Union dissolves as predicted
1995: Chen's team achieves artificial general intelligence breakthrough
The comparison was stark. Manny's intervention—if that's what
this was—would accelerate AI development by several decades. Manny had first tried this kind of Intervention back in Reality 186443. Chen,
guided by what Nyrtia increasingly suspected was a femtozoan from the
future, was once again on a path towards building an AGI that would change the
technological landscape of Earth. But would such technological
acceleration be bad or good for Earth?
In Reality 186443, word had spread like wildfire through the human population of Earth: aliens were secretly controlling Humanity. Doomsday cults had proliferated, and mass suicide and infectious pessimism had smothered human society. The Huaoshy had then stepped in and reverted every phase of Manny's misguided intervention.
And now, if Manny was back at the same game, where was the evidence for direct intervention by the bumpha?
The femtozoan integration, if it existed, was perfectly subtle. Chen
showed no obvious behavioral anomalies beyond his "intuition."
His research, while advanced, didn't incorporate technologies that
didn't exist yet. Everything was plausible extrapolation from current
knowledge.
Nyrtia ran her Simulations in a focused effort to find violations of the Rules of Intervention:
Law One: Did humans still have apparent
self-determination? Technically yes—Chen apparently still believed
that his insights were his own. No obvious violation.
Rule One: Were the bumpha appearing to follow
observation-only protocols? Yes—there was no obvious
alien technology transfer.
But Nyrtia knew Manny. The bumpha was too sophisticated for crude
violations. This was subtle intervention, working through natural
channels, accelerating the natural progression of computing rather
than replacing human technology with alien technology.
She composed a warning message to inform the Huaoshy:
Priority One Alert: Potential bumpha temporal intervention
detected. Human Subject: Michael H. Chen. Evidence: statistical
anomaly in insight generation, validated prediction capability beyond
era-appropriate technology, research trajectory consistent with
Manny's past Intervention in Reality 186443. Recommendation: Enhanced monitoring,
prepare Rule Two contingencies. Requesting authorization for placing
a human simulation on Chen's research team to monitor and determine
intervention mechanisms and assess behavioral modification
requirements.
She paused before sending it. Rule Two was serious business:
"Individuals for which Law #1 is not met should be put into an
environment where they have interactions with the social group that
they perceive to be depriving them of self-determination."
If the Huaoshy authorized Rule Two protocols, Chen could be
brought into the Hierion Domain and placed inside the Earth Simulation
System. His neural patterns would be analyzed, modified if necessary,
adjusted to neutralize the femtozoan influence, if it existed.
But that was a significant escalation. The Huaoshy would
need strong evidence, not just statistical anomalies and suspicious
coincidences.
Nyrtia sent the message and prepared to wait. The Huaoshy
decision timelines often operated on the scale of weeks, not days.
In the meantime, she increased the femtobot observation density
around Chen, the Bergman twins, Harold Voss, and the entire emerging
Agisynth organization. Every conversation would be monitored. Every
research decision analyzed. Every anomalous probability spike
investigated.
If Manny was playing illegal games with human technological development,
Nyrtia would find the evidence. And then the real confrontation would
begin.
She just hoped this attempt by Manny to achieve AGI on Earth in 2000 would play out as it had previously... with Manny's defeat.
Dr. Chen's office was being systematically emptied of his personal
belongings. Boxes lined the walls, filled with papers, books,
printouts, the accumulated detritus of years on the faculty. The LISP
Machine 3600 would stay; Agisynth would have newer and better
equipment.
Vanesy and Jennem had helped Chen pack, working with the efficient
coordination that came from years of twin synchronization. They'd had
a week to process the conference, Chen's AGI declaration, the
implications of transitioning from academic research to classified
defense work.
"So that's it?" Jennem said, taping up another box.
"You're leaving academia with no regrets?"
"I am. You two need to write up and defend your theses and
try not to bring down on yourselves the wrath of Voss in the process,"
Chen replied. He was cleaning out his desk drawers, finding forgotten
items from years past—a review copy of Hofstadter's Gödel,
Escher, Bach, course notes from his graduate seminars, a
photograph from his PhD graduation.
"Do you ever worry," Vanesy asked carefully, "that
we're moving too fast? Three years ago, we were working on semantic
parsing. Now we're supposedly building AGI with DIA funding, making
predictions about the collapse of the Soviet Union, racing the world
to machine consciousness. Doesn't that trajectory seem... I don't
know... implausibly steep?"
Chen set down the photograph and looked at both twins directly.
"Yes," he said. "It's implausibly steep. I'm aware of
that."
"And I can't explain it," Chen admitted. "Not in
any way that would satisfy your very reasonable skepticism. I just
know—with absolute certainty—that we're on the right path. That
the work we're doing is important beyond anything I can articulate.
That we need to move as fast as possible because..." He trailed
off, struggling with thoughts that felt both his own and not his own.
"Because we can," he finished.
The twins exchanged their familiar glance, and Chen wondered what
they saw when they looked at him now. A brilliant mentor? A man
touched by something beyond normal understanding? A person being used
by forces he didn't comprehend?
All of those, perhaps.
"We're with you," Vanesy said finally. "Whatever
this is—however implausible the trajectory—we're with you. But
Dr. Chen..."
"Yes?"
"If you ever figure out where your certainty comes from, we'd
like to know."
Chen felt the warm current flow through his thoughts, accompanied
by that now-familiar sensation of gentle guidance, of paths opening
and closing, of knowledge that couldn't be his but was.
"If I ever get to a point where I can tell you," he
promised, "you'll be the first to know."
But even as he spoke, he knew it wasn't true. The knowledge would
remain locked inside him, accessible but inexplicable, useful but
forever mysterious.
In his brain, the femtozoan continued its careful work,
integrating artificial memories with biological networks, guiding the
course of human development toward a future that was already written,
already memory, already inevitable.
The game continued, with Chen as both player and playing piece,
moving toward checkmate against an opponent he couldn't see.
Outside, autumn was beginning to touch New England, the first
hints of color appearing in the trees. The future was coming, faster
than anyone expected.
Below is the Grok-generated list of changes that I made to Claude's original draft of Chapter 4. Note: in the following list, Grok is not always consistent about which of the two drafts is "version 1".
Grok: Significant Changes Between Ver1 and
Ver2 of Chapter 4
Assuming "Downgrade Ch 4 Ver1.pdf"
is the earlier version and "Downgrade Ch 4 Ver2.pdf" is the
revised version, here is a list of the most notable differences.
Changes include additions, rewordings, expansions, and minor
deletions that affect character development, pacing, dialogue, or
plot details. I've focused on those that alter tone, depth, or
narrative flow, grouping similar ones where appropriate.
1. Vanesy's Initial Mood and
Behavior. In Ver1, Vanesy is described as tense and standing with
arms crossed by the window. In Ver2, she is enthusiastic, pacing
energetically with a "glow" on her face, shifting her
portrayal from anxious to excited.
2. Presentation Slides Setup.
Ver1 mentions a carousel of slides and a rented projector for
practice. Ver2 changes this to a transparent plastic slide-holder
without mentioning a projector, simplifying the scene's props.
3. Chen's Internal Thoughts on Time
Travel. Ver2 adds Chen reflecting on time travel science fiction
stories and temporal paradoxes when deciding not to reveal details,
enhancing his internal conflict and tying into the story's sci-fi
themes. This is absent in Ver1.
4. Addition of Ice Cream Scene.
Ver2 introduces a new segment where Chen orders room service ice
cream to defuse tension, leading to Van demanding an explanation
beyond "intuition," Chen laughing it off, and them eating
while speculating about AI luminaries at the talk. This lightens the
mood and adds character interaction; absent in Ver1.
5. Chen's Political Philosophy
Monologue. Ver2 adds a lengthy speech by Chen about political
systems inevitably sinking, the rise of global communications ending
nations, and focusing on "bigger fish" beyond politics.
Vanesy processes this at the window. This expands Chen's worldview
and philosophical depth; not in Ver1.
6. Timeline for Soviet Collapse in
Discussion. In Ver1, the broader Soviet prediction is framed as
five to seven years during the initial argument, but adjusted to five
to ten in the Voss meeting. Ver2 consistently uses five to seven in
the argument but switches to five to ten with Voss, creating a slight
inconsistency but aligning more with Ver1's Voss dialogue.
7. Voss's Arrival and Room Scan.
Ver2 notes Voss seeing slides in Van's hands and an annotated outline
on the desk. Ver1 mentions a practice projector, scattered slides,
and twins' annotated copies, altering the room's setup slightly for
consistency with earlier changes.
8. Presentation Introduction and
Opening. Ver2's session chair introduction is briefer ("working
with Dr. Chen"). Ver1 specifies it's her Ph.D. thesis under
supervision. Ver2's opening speech refers to "our research group
published a prediction"; Ver1 specifies "AI-generated
prediction," emphasizing the tech.
9. Humor and Self-Referential Jokes
in Presentation. Ver2 adds light-hearted jokes during Vanesy's
talk, like limitations "in the time available for a Ph.D.
Student" (causing laughter) and the LISP machine lacking power
before she graduates, plus aspiring to her own lab and students.
These make her more relatable and add levity; absent in Ver1.
10. Future Research Constraints
Phrasing. In the presentation's future steps, Ver2 mentions
transitioning to Agisynth "without the constraints of academic
funding cycles." Ver1 calls them "crippling constraints...
for conventional funding of computer science research students,"
sharpening the critique of academia.
11. Jennem's Position During AGI
Reveal. In Ver2, Jennem speaks about AGI from the doorway
(waiting outside during Voss meeting). Ver1 has her voice uncertain
without specifying location, implying she's in the room.
12. Explanation of Consciousness
Emergence. During the AGI discussion, Ver2 says it "might
emerge from sufficient complexity" or not be necessary. Ver1
expands to "emerge from sufficient complexity in a layered
system... like a human brain," adding mechanistic detail
grounded in fictional science.
13. Tone of AGI Ambition. Ver2
describes AGI as "both exciting and terrifying... whether
frightened or not." Ver1 simplifies to "exciting... whether
ready or not," softening the ominous tone.
14. Hierion Domain Details. Ver2
adds that every human/object contains femtobots transmitting to the
Earth Simulation System. It also references "Nanovac" from
the Turing-Asimov manuscript and questions if acceleration is
bad/good for Earth. The simulation request changes: Ver2 for a human
simulation of Chen; Ver1 for placing one on his team. Ver2's Timeline
A includes "2000: Conscious AI... (probable Nanovac)" and
"2005+: convergence with Manny's objectives"; Ver1 omits
Nanovac and 2005+.
15. Final UMass Scene Additions.
Ver1 adds Chen reminding twins to write/defend theses without
invoking Voss's wrath. Ver2 omits this, focusing on packing. Ver1
ends "because we can"; Ver2 "because time is short,"
heightening urgency.
Above, Figure 1 shows the first Grok-generated image that was made when I asked for "Chen, Voss, Vanesy and Jennem together at the conference." I then specified in the text prompt, "the four story characters in natural postures and NOT shown posing for a photograph," and Grok generated the image shown in Figure 2. I then added to the text prompt, "Vanesy and Jennem are twenty five years old twins and they have long blond hair," and Grok generated Figures 3 and 4, in which we are returned to the pose I was trying to avoid. In Figure 3 there is only one blonde, and in Figure 4 Voss became an old man. As seen in Figure 5, I could not get Leonardo to generate this image either (it insisted on posing them for a photo), and I had to paste in a different head for Voss from another image because, after I included "one man is 35-year-old Asian" in the text prompt, Leonardo wanted to make both men appear to be Asian (and often Vanesy and Jennem as well).
Below are two Grok-generated videos.
The first video, above, is the default video that was generated by Grok. For the second video, I specified that Dr. Chen should speak a line of text from Chapter 4 of "Downgrade".