
Bible Prophecy, Signs of the Times and Gog and Magog Updates with Articles in the News
Self-Creating AI: Pandora’s Box Has Been Opened, Warn Insiders
Whether we like it or not, AI is radically transforming virtually every aspect of our society. We have already reached a point where AI can do most things better than humans can, and AI technology continues to advance at an exponential rate. The frightening thing is that it is advancing so fast that we may soon lose control over it. The latest model that OpenAI just released “was instrumental in creating itself”, and it is light years ahead of the AI models that were being released just a couple of years ago.
An excellent article written by someone who works in the AI industry is getting a ton of attention today. His name is Matt Shumer, and he is warning that GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic represent a quantum leap in the development of AI models…
For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.
A few years ago, the clunky AI models that were available to the public simply were not very good.
They made all sorts of errors, and they would often spit out information that was flat out wrong.
But the newest AI models perform brilliantly and can do things that would have been absolutely unimaginable just months ago.
For example, Shumer says that when he asks AI to create an app, it proceeds to write tens of thousands of lines of polished code…
Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
I’m not exaggerating. That is what my Monday looked like this week.
That sounds like a very useful tool.
But if AI can create an extremely complicated app with no human assistance, what else is it capable of doing?
According to an article posted on Space.com, researchers in China have already demonstrated that AI models can clone themselves…
Scientists say artificial intelligence (AI) has crossed a critical “red line” and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.
“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs,” the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.
In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue.
A self-replicating rogue AI model that decided to send countless clones of itself all over the world through the Internet would be a very serious threat.
But since we created it, at least we would understand what we were dealing with.
However, I want you to imagine a scenario in which rogue AI models are constantly creating even better versions of themselves.
That would be a complete and utter nightmare.
According to Shumer, from the very beginning AI researchers focused on making AI “great at writing code”…
The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first. My job started changing before yours not because they were targeting software engineers… it was just a side effect of where they chose to aim first.
They’ve now done it. And they’re moving on to everything else.
Being able to create an app is one thing.
But now OpenAI is publicly admitting that the latest AI model that they released “was instrumental in creating itself”…
“GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
Wow.
That is stunning.
And the CEO of Anthropic is telling us that we are only a year or two away from “a point where the current generation of AI autonomously builds the next”…
This isn’t a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.
Dario Amodei, the CEO of Anthropic, says AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He says we may be “only 1-2 years away from a point where the current generation of AI autonomously builds the next.”
Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.
So what happens when AI models can do virtually everything better and more efficiently than we can?
Many are warning that the job losses will be staggering.
In fact, I just came across an article about the mass layoffs that Heineken is planning because of AI…
Dutch brewer Heineken is planning to lay off up to 7% of its workforce, as it looks to boost efficiency through productivity savings from AI, following weak beer sales last year.
The world’s second-largest brewer reported lackluster earnings on Wednesday, with total beer volumes declining 2.4% over the course of 2025, while adjusted operating profit was up 4.4%.
The company also said it plans to cut between 5,000 and 6,000 roles over the next two years and is targeting operating profit growth in the range of 2% to 6% this year. Heineken’s shares were last seen up 3.4%, and the stock is up nearly 7% so far this year.
This is just the beginning.
Soon there could be millions of AI-powered robots that look and feel just like humans.
In China, they are already building AI-powered robots that feel “human to the touch” and actually give off body heat…
Moya stands at 5 feet 5 inches tall (165 cm) and weighs around 70 lbs (31 kg). Users can switch out the bot’s parts to give it a male or female build, change its hair, and customize it to their whims.
DroidUp added extra layers of flesh-like padding beneath Moya’s silicone frame to make it feel more human to the touch, even including a ribcage. A camera behind her eyes helps Moya to track its surroundings and communicate with people.
That’s not all; Moya is also heated, with a body temperature of 90 – 97 degrees Fahrenheit (32 – 36 degrees Celsius) to mimic humans’ body heat.
Speaking to the Shanghai Eye, DroidUp founder Li Quingdu argued that a “robot that truly serves human life should be warm, almost like a living being that people can connect with,” not a cold, metal machine.
These robots are being marketed as social companions.
But similar robots could also be used for warfare.
There is so much debate about which direction all of this is headed.
Many are convinced that AI will usher in a brand new golden age of peace and prosperity.
But others are concerned that AI will be used to create a dystopian hellscape…
The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.
The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.
The dangers are very real.
In fact, Anthropic has openly admitted that their latest AI model was willing to help users create chemical weapons…
Anthropic’s Claude AI model is hailed as one of the best out there when it comes to solving problems. However, the latest version of the model, Claude Opus 4.6, has sparked controversy due to its willingness to help people commit heinous crimes. According to the company’s Sabotage Risk Report: Claude Opus 4.6, the model showed concerning behaviour in internal testing. In some instances, it was even willing to help users create chemical weapons.
Anthropic released its report just a few days after the company’s AI safety lead, Mrinank Sharma, resigned with a public note. Sharma wrote in his note that the world was in peril and that ‘within Anthropic, I’ve repeatedly seen how hard it is to truly let your values govern our actions.’
We are in uncharted territory, but there is no turning back now.
Even if the U.S. shut down all AI development tomorrow, the Chinese would continue to race ahead.
The cat is out of the bag, and our world is looking more like an extremely bizarre science fiction novel with each passing day.
God’s Design Matters: Study Finds Fathers’ Role Critical In Children’s Health

The results of a new study on the role that fathers play in the health of their children have surprised researchers, revealing that the amount of attentiveness that a father pays to his infant affects the child’s future heart and metabolic health, while finding no such correlation with mothers. The study reinforces a massive body of evidence pointing to how critical both a father and a mother are to the healthy development of their children.
The journal Health Psychology recently published the results of a long-term study of 292 families, which observed the behavior of fathers as they had three-way interactions with their 10-month-old children and their spouses. As summarized by The New York Times, the researchers “found that fathers who were less attentive to their 10-month-olds were likely to have trouble co-parenting, instead withdrawing or competing with mothers for the children’s attention. And at age 7, the children of those fathers were more likely to have markers of poor heart or metabolic health, such as inflammation and high blood sugar.”
The study, conducted by researchers from Penn State University, noted that the results likely point in favor of the “father vulnerability hypothesis,” which posits that fathers tend to react with high emotion when there are strains in their relationships with their wives, which can have a negative impact on the whole family.
Therefore, the father’s role “may thus uniquely position him as a channel for relational stress, ultimately shaping child health,” they wrote. Comparatively, the research found that fathers who interacted more sensitively with their babies had better co-parenting outcomes, which led to their children having better long-term health.
The research adds another layer of social science data clearly showing that both a mother and a father play vital roles in healthy outcomes for their children. For decades, study after study has shown that children who are raised by their married biological parents “live longer, healthier lives both physically and psychologically, do better in school, are more likely to graduate from high school and attend college, are less likely to live in poverty, are less likely to be in trouble with the law, are less likely to drink or do drugs, are less likely to be violent or sexually active, are less likely to be victims of sexual or physical violence, are more likely to have a successful marriage when they are older,” and more.
But as cultural observers like Southern Baptist Theological Seminary President Dr. Albert Mohler emphasize, the secular American culture has conveniently ignored the unmistakable social science evidence right before its eyes and has embraced a host of lifestyles that eschew married mothers and fathers to the detriment of children.
“We’ve had the subversion — in just about every arena of life and the culture — the subversion of fatherhood,” he told Tony Perkins during Monday’s “Washington Watch.” “So it’s fascinating that something like this appears as it does. … The bottom line is, children at various stages who have their biological — and that’s important here — the biological father in the home have significant health gains over children, both boys and girls, who do not. … This is fascinating research, but it just points to God’s intention in creation order.”
Mohler went on to observe that in a post-Obergefell era where same-sex marriage has become largely (if slightly less so recently) accepted, studies highlighting the uniquely fundamental roles of fathers are all the more important.
“Now what we face are people who are arguing it doesn’t matter if there’s any father in the home,” he noted. “And then, of course, you’ve got the LGBTQ revolution and all the rest that makes, I think, this research all the more brave. … I mean, here we’re talking about some researchers who actually dared to document, with a scientific and academic depth and thickness, the fact that having a biological father in the home really does matter to children.”
Still, Mohler pointed out that much of the mainstream culture refuses to acknowledge what the social science data clearly shows. “The mainstream culture has been arguing, ‘It doesn’t make any difference. There’s no evidence in the lives of children that makes a difference.’ And of course, we know as Christians, you bet there is and will be, and now it’s becoming undeniable. So I’m very glad to be a part of this movement. … I think a part of what is shocking people is that we’re saying things out loud. … If we as Christians don’t say these things out loud, no one else is going to.”
“We dare to say that children deserve both a mother and a father,” Mohler further underscored. “Now, due to all kinds of circumstances and tragedies, sometimes the child doesn’t have one or the other. But we’re living in a time in which the larger culture says it doesn’t matter. We know that by God’s design it matters. And now we know in terms of the evidence that’s right before our eyes, it really does matter.”