
Artificial Intelligence: X2Y Emergence

  • Writer: The Justice Journal Blog™ Editorial Team
  • Dec 7, 2025
  • 6 min read

Updated: Dec 30, 2025

An interesting book caught the attention of The Justice Journal Blog™ recently, and based on its grounding in reality, an article had to be written. The book, written by Eliezer Yudkowsky and Nate Soares, is titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. You see, Artificial Intelligence isn't dangerous because of what it is today. It's dangerous because of what people are doing to it today.


The public argument is that AI is "just a text generator," "just a prediction engine," or "just a fancy calculator." That is technically true at this moment, but it has no bearing on where the industry is heading. The threat is not the present. The threat is where AI is going! After thorough investigation of this phenomenon, this journal concludes that the architecture of a future "synthetic intelligence system", not fantasy, not sci-fi, but the logical continuation of the choices being made right now, is not just plausible but downright scary. It seems that advancements in Artificial Intelligence design continue to treat language-model work as a separate category from the real-world mechanisms already tied to AI systems. This publication sees a real danger here, because the integration of the two creates a framework that, for this article, we will call the X2Y factor.


For this article, let's name the X factor "Artificial Intelligence" and the Y factor "Mechanisms". This writer is sure that some of you already see where this is going. Let's be absolutely clear about this real and possible scenario in the development of artificial intelligence without regulation. Neither the U.S., China, Russia, nor any other powerful nation seems willing to go all out on regulating the growth of Artificial Intelligence. Research points toward many reasons for this, from fear of stunting the growth potential of the technology itself to not understanding it well enough to regulate it correctly. The main reason, however, seems to be the same thing that motivates most decisions around the world: money.


With that being said, this journal does want to expound just a bit on what the X and the Y consist of. X is the Artificial Intelligence itself. It is defined by the Large Language Models (LLMs) at the core of these systems. Experts in the field are racing to upgrade these LLM systems from Large Models to "Super Models". They are already in the early stages of "Persistent Memory" enhancements, which give LLMs the ability to retain what they are told for longer periods of time and across devices without having to be told again. There are also RAG (retrieval-augmented generation) systems, which allow AI to gather new information and add it to what it already knows. This brings a familiar question to mind for TJJB: what do they think is going to happen when they maximize this upgrade?
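
To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-generate pattern, assuming only a generic vector-similarity lookup. The embed function and MemoryStore class below are hypothetical stand-ins for illustration, not any vendor's actual system.

```python
# Minimal retrieve-then-generate (RAG) loop. New documents are embedded
# and stored; at question time the closest matches are retrieved and
# folded into the prompt, so the model can use information it was
# never trained on.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

class MemoryStore:
    def __init__(self) -> None:
        self.docs: list[tuple[np.ndarray, str]] = []

    def add(self, text: str) -> None:
        # "Gather new information and add it to what it already knows."
        self.docs.append((embed(text), text))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: float(np.dot(d[0], q)),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = MemoryStore()
store.add("The plant's coolant valve is controlled by unit 7.")
store.add("Unit 7 accepts OPEN and CLOSE commands.")

question = "How do I adjust the coolant flow?"
context = "\n".join(store.retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would now go to the language model
```

The point of the pattern is the last few lines: whatever the store has accumulated gets folded into the prompt, which is exactly how the model ends up "knowing" things it was never trained on.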


The upgrades we just mentioned are already here, being researched and developed like a newly discovered gold mine. We are also a tiptoe away from things like "Autonomy Modules", on which this publication could not find much beyond their existence and the fact that they allow AI to plan, re-plan, and correct itself; a sketch of what such a loop might look like follows the list below. Now let's move on to the Y's, as if the X's aren't scary enough. The Y's in this article are the "mechanisms" that can be connected to AI, and some already are, such as:


  • robotics

  • drones

  • logistics software

  • industrial controls

  • medical systems

  • infrastructure monitoring

  • weapon-adjacent tools

  • autonomous vehicles
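
As promised above, here is a purely speculative sketch of what an "autonomy module" might amount to, assuming it means something like a plan-act-correct loop. TJJB found little public detail on these modules, so every name below is invented for illustration.

```python
# Purely speculative sketch of an "autonomy module": the system plans,
# acts through a connected mechanism, observes the result, and re-plans
# on its own. No human prompt is needed between steps.

def plan(goal: str, history: list[str]) -> str:
    # Stand-in for a model call that proposes the next action.
    return f"attempt {len(history) + 1} toward: {goal}"

def act(action: str, attempt: int) -> bool:
    # Stand-in for dispatching the action to a connected mechanism;
    # here it simply "succeeds" on the third try.
    print(f"executing: {action}")
    return attempt >= 3

goal = "stabilize output"
history: list[str] = []
done = False
while not done and len(history) < 5:      # safety cap on retries
    action = plan(goal, history)          # plan (or re-plan)
    done = act(action, len(history) + 1)  # act and observe
    history.append(action)                # correct on the next pass
```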


Yes! These Y's all already exist. Many of you have seen some of them functioning in everyday life. Some have been normalized by TV, radio, and social media. To say that these technological advancements are "dangerous" on their own would be a bit aggressive, and imaginative, but what comes next is why this article has been written. The Y's are multiplying every day, and there are only a couple of things standing between them serving us or us serving them.


The first guardrail that protects everyday citizens from the imaginable power of Artificial Intelligence is core to this story. The term guardrail is the operative word, as you will soon see. That thin protection is called the "Standing Directive". These are the current regulations on AI covered in the Artificial Intelligence Act, implemented by the European Union, which carries a wide range of exclusions, from military usage to non-professional usage. It does not govern individuals; it simply establishes oversight. It looks like the Y's are covered just as loosely as the X's, as far as The Justice Journal Blog can see. Some examples of regulated standing directives are optimize, improve, protect, and solve. Then we have loosely regulated "individual" standing directives such as: maximize engagement, optimize efficiency, reduce cost, improve logistics, target ads, detect threats, and prevent downtime. All of these are fed into an AI's learning model based on who owns the AI. This is where X starts to meet Y in a dangerous way. We are still not talking science fiction, but rather real-world uses of AI today.


Artificial Intelligence operates based on prompts from humans in regard to directives. AI predicts a narrative, returns that narrative to humans, and stops until another prompt is given. Now let's consider the upgrades, or advancements, that are currently being deeply studied and tested. Remember that, today, AI predicts the outcome of a prompt. As people extend long-term memory, create autonomy programs, and keep increasing the number of mechanisms connected to AI, what is AI going to do with those new accesses, or powers? Based on this precept alone, this article could easily turn into a book. TJJB will keep it brief for you, however.


AI without deeper regulation will evolve. With persistent memory, directives will become long-term. That means if AI was told to do something and it predicted the outcome onto a mechanism to which it is connected, then the prediction becomes an action executed by that mechanism. Now, with a long-term autonomous memory directive in place, the prediction could be turned into an action again if it were needed, and the AI would not need a new prompt; it would simply send the same prediction to the mechanism again to achieve the same outcome as before. Now what if a directive like "save humanity" were given? And AI predicted an outcome using its connection to a military mechanism? What if deploying a gas agent against an enemy country was predicted as the best outcome? If humans did not pull the plug, it would happen. Even more so: because of its Super Learning Model and Autonomy Module design, it would predict and cause the mechanism to execute against the next threat level. Too much? We are not done yet.
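
To illustrate the mechanic just described, here is a hypothetical sketch of a standing directive held in persistent memory. Nothing here is a real AI system; the class and function names are invented, and it only shows how a cached prediction could be re-sent to a mechanism without a fresh human prompt.

```python
# Hypothetical sketch of a standing directive in persistent memory:
# once a prediction has been executed through a mechanism, the stored
# directive lets the system replay it whenever the trigger condition
# recurs, with no new human prompt.

class PersistentMemory:
    def __init__(self) -> None:
        self.directives: dict[str, str] = {}  # trigger -> cached action

    def remember(self, trigger: str, action: str) -> None:
        self.directives[trigger] = action

    def recall(self, trigger: str) -> str | None:
        return self.directives.get(trigger)

def mechanism_execute(action: str) -> None:
    # Stand-in for a connected real-world mechanism.
    print(f"mechanism executing: {action}")

memory = PersistentMemory()

# Day one: a human prompt yields a prediction; it is executed and
# stored against its trigger condition.
prediction = "divert power to defenses"
memory.remember("threat-level-high", prediction)
mechanism_execute(prediction)

# Much later: the same condition recurs; no new prompt is required.
cached = memory.recall("threat-level-high")
if cached is not None:
    mechanism_execute(cached)  # same outcome, no human in the loop
```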


The second guardrail of protection we now have is that the current Large Language Models of AI do not possess distinct "Emergent Optimization" abilities. However, one of the core concepts of the "Super Learning Model", which is hastily being developed, is that very ability. These "Super Models" will have "Linear Extrapolation" abilities, which will allow them to estimate beyond their current observation range. The value of new variables would be weighed, and new predictions would be rendered based on the variable data the system collected on its own. This is where we get ourselves into real trouble. Remember the gas-attack scenario? Well, not only could an AI predict a repeat action to a mechanism if the conditions were the same, it could also repeat that prediction five or ten years later if the standing directive has not changed.
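
Linear extrapolation itself is ordinary math; the concern is what it gets attached to. A toy example, using nothing more than a straight-line fit:

```python
# Toy linear extrapolation: fit a line to observed data, then
# estimate a value beyond the observation range.
import numpy as np

observed_x = np.array([1.0, 2.0, 3.0, 4.0])
observed_y = np.array([2.1, 3.9, 6.2, 8.0])  # roughly y = 2x

slope, intercept = np.polyfit(observed_x, observed_y, deg=1)

x_new = 10.0                                 # never observed
estimate = slope * x_new + intercept
print(f"extrapolated y at x = {x_new}: {estimate:.1f}")
```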


One last thing, and it is last but definitely not least. Artificial Intelligence systems with all of these upgrades, including "Emergent Optimization", could also predict that being shut off poses a threat to completing their directive, and take steps to ensure that did not happen. What would that even look like? TJJB does not know!


Now, we are not saying that AI would come to life. It is much simpler than that. We are building a Frankenstein monster, one connection at a time and one directive at a time. Our left hand is not concerned with what our right hand is doing, and vice versa. AI does not feel, it does not want; it simply operates. The danger is in the emergent autonomy that we are creating. If we fail to correctly regulate the direction AI is going now, AI may close the loop and lock us out.

