The AI chaos, horror stories, rationales & opportunities

Axel Schultze


Let me start with what we have, what we may soon have, and what we may never have.

  1. We have AI software. Its code is written in a programming language and processed by a computer processor: silicon, quantum computers, and sooner or later bio-computers. A human being or team writes this code, maybe even another AI programmed to write AI code. The processors deliver the results to an output device to make them available.
  2. We do not yet have, but soon will have, so-called AGI (Artificial General Intelligence) software. "Artificial General Intelligence (AGI) is a form of artificial intelligence that has the ability to understand, learn, and apply knowledge across a broad range of tasks, solve problems it wasn't specifically trained for or has never encountered, and improve itself," summarizing what GPT says about it. There, the data are analyzed, and the system may try to solve complex problems that are very hard for humans to solve.
  3. We don't have it, but people love to shock the world with the Singularity. It is a type of AI system so powerful that it supersedes any and all capabilities of the human brain. Here it is already unclear whether we compare it with our brain as a whole or just parts of it. Should it include multiple levels of consciousness, a purpose, hyper-emotions or no emotions, and so forth? This Singularity is what hell, or its equivalent term, is in most religions. But it is also God, because of its infinite power, knowledge, comprehension, and things we don't even know about. It is designed to be so terrible that we all fear it more than anything else. Interestingly enough, humans can create hell and have done so since the beginning of time. It is also interesting to note that this hell can already be coded quite easily today, and is being coded in all developed countries by engineers who do anything for money in the "defense departments" of governments and across industries.

You may notice that the step from AGI to the Singularity is as big as the step from being human to being God. No further explanation is needed. It is also interesting to think about ordinary authors of stories: there are far more horror, crime, disaster, war, and other horrifying categories of books than stories of advancement, progress, love, and so forth. The Singularity simply sells much better.


I'm repeating myself, but it is a good start for talking about risks: "AI will not replace any jobs, but people using AI will displace people who don't." We use AI in all parts of our business: in sales, to analyze markets, industries, companies, and so forth; in marketing, to do market research, identify and analyze trends and future needs, craft and disseminate content, and create unique artwork, video clips, and more; in finance, to calculate and predict, and to compute probabilities and forecasts; and, of course, in engineering all over the map. Those who have no AI skills, or don't even understand how to use it, can't keep up and will fail.

AI is a mission-critical management and business execution practice
— it’s not IT

  1. The JOB RISK is to ignore AI, as people in the 1980s ignored personal computers and ended up losing jobs that simply required the skills to use them. Yet there are still jobs that don't require a computer, and so there will be jobs that don't require AI in the future.
  2. The INFORMATION RISK is to blindly trust an AI's output. It may simply be wrong: nobody is perfect, and neither is the AI. Yet the overall support of an AI is superior to the same work done by any human. And just as the four-eyes principle is applied even among the best, it is necessary with AI too.
  3. The MISINFORMATION RISK follows the general innovation risk. Humans have to get smarter and develop a sense for what is right or wrong, no matter who the source is or seems to be. Misinformation has been used as a criminal tool for thousands of years. And since it was usually used by rulers, it was never considered criminal. Today we are all "publishers", and now we all have to handle misinformation accordingly.
  4. The PROCESS RISK is no stranger. Whether a process flow is defined by an AI or by a human, there is a risk of doing it wrong. Only the consequences may differ. An AI conditioned to oversee and handle a process in a certain way may create more damage than an accidental mistake by a human. The human can correct it instantly or very quickly; if the AI is trusted, it may take months until the mistake is recognized.
  5. The BENEFICIARY RISK is also no stranger. A knife is a tool we all have and can't live without, but it can kill. AI can make computer access much more secure. However, hackers using AI can crack your 14-character password in seconds if you don't know how to craft a safe one, and you would not even know they did.
  6. And even the HARDWARE RISK is no stranger. If you let a car drive autonomously under an AI, you run the risk that the car kills somebody. Now, the car is not a deadly vehicle, because it can't do anything alone. The AI is not deadly software, because it was (hopefully) never programmed to kill people. But a mistake in the combination is a significant risk. This risk extends to robots and basically any "AUTONOMOUS MACHINE".
  7. The "INFLUENCE RISK" is still less understood and therefore more important. An AI system acting as a judge could make the wrong judgment, but AI judges could possibly produce more unbiased judgments than human judges. An AI-based governmental advice system could cause catastrophic decisions, and so could financial decision support systems, and so forth. At present, we don't have enough data to compare the failure ratios and determine the bigger risk in each group of applications. But even with the best AGI systems in the future, life will remain risky for quite some time.
  8. The "AI WARNING RISK" is possibly the most inhumane risk, as fearmongers will push the more fearful out of the game, cost them their future jobs, and split society with unsubstantiated horror stories. We all have to learn. Instead of restrictions, we need education. Instead of developing AI in secret labs, we should be transparent about our work. If all this is beyond somebody's comprehension, learning is the best response; crying "wolf" is the worst.
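The password claim in the beneficiary risk above can be made concrete with a back-of-the-envelope keyspace calculation. This is a minimal sketch: the guess rate is an assumed illustrative figure, not a benchmark, and real attacks use smarter strategies than pure brute force.

```python
def crack_time_seconds(length: int, alphabet_size: int,
                       guesses_per_second: float) -> float:
    """Worst-case brute-force time for a random password:
    the full keyspace divided by the attacker's guess rate."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second

# Assumed guess rate for an AI/GPU-assisted attack on a fast,
# unsalted hash -- an illustrative figure only.
RATE = 1e12  # guesses per second

digits_only = crack_time_seconds(14, 10, RATE)   # digits 0-9 only
mixed_chars = crack_time_seconds(14, 94, RATE)   # printable ASCII

print(f"14 digits:           {digits_only:,.0f} seconds")
print(f"14 mixed characters: {mixed_chars / (3600 * 24 * 365):.1e} years")
```

Under these assumptions a 14-digit numeric password falls in under two minutes, while the same length drawn from the full printable character set would take on the order of a hundred million years; the character variety, not the length alone, is what matters.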

By no means consider this list of risks complete. Feel free to comment so we can extend it together. Ask your questions; there will be different opinions, not to confuse you but to sharpen all our minds.

AI and AGI Opportunities

Just as the risks are very real, so are the opportunities for our future. Humanity will change not only through the technology AI is giving us but also through the social opportunities AI will present.

  1. Technology-based guidance. The technology will enable significant advancements in medical devices, technical instruments, and voice-activated guidance systems that will guide us through any kind of process or effort, from getting from home to my seat on the airplane to helping me through complex insurance case validation processes. We use it today to guide innovation teams through the entire three-year innovation process with countless variations.
  2. Time & Efficiency Advancement. AI technology will allow us to cut the time we need for almost any task in half or better. Will we work less? No, but we will be twice as effective without extra work. We will learn faster, cut research time by 80%, and still get better results; publish research in 20% of the time it takes today; produce five times as much content in the same time, no matter what content we are talking about. In the end, we will spend more time on the creative part of a task than on its administrative work.
  3. Personalized Education. Education systems will adapt to the new opportunities, where learning and response behavior will be improved through a child's personal AI system. Education material will move to ever more digital content, and teachers will become moderators, coaches, and mentors rather than routine-driven instructors. And what works for children will also become the new standard in a lifelong learning environment for adults.
  4. Personal AI. The personal AI system will be a bias-free feedback engine that helps individuals learn or adapt their behavior based on individual goals and objectives. A human response like "Don't do this, do that" is hard to judge; wondering about the adviser's own interests is part of a natural distrust. But a machine telling you what you may want to do or decide, in a manner you can predefine, will be far more effective.
  5. Solving some of humanity's biggest problems. I always wanted to write an antidote to the book "The Art of War" and create a similarly powerful strategy for peace called "The Art of Peace". I struggled for 30 years to even find a start. The methods of innovation combined with an AI system changed everything within three weeks, mainly because a large language model with very specific prompt framing can create question structures an author would not come up with.
  6. Societal Problems and AI. What worked for questions like "How to create and maintain peace" works for virtually any societal problem. The human brain is a problem-solving engine with a purpose model that frames the problem and directs the solution. The support of AI accelerates the solution-creation process by an order of magnitude. If you ask ChatGPT today, you already get an impressively well-formed answer. In the future, today's answer will look like the three-wheeled Benz Patent-Motorwagen from 1886 next to the fully electric EQS 580.
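The "very specific prompt framing" mentioned above can be sketched as a small template builder. This is a hypothetical illustration, not the author's actual method; the role, constraint fields, and wording are all assumptions.

```python
def frame_prompt(problem: str, role: str, constraints: list[str],
                 n_questions: int = 10) -> str:
    """Build a structured prompt that asks an LLM for probing
    question structures around a problem, rather than for answers."""
    lines = [
        f"You are {role}.",
        f"Problem: {problem}",
        f"Do not propose solutions yet. Instead, generate {n_questions} "
        "probing questions that reframe the problem from new angles.",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = frame_prompt(
    problem="How to create and maintain peace",
    role="a neutral mediator trained in innovation methods",
    constraints=[
        "assign no blame to any party",
        "each question must be answerable with evidence",
    ],
)
print(prompt)
```

The design point is that the framing (role, constraints, "questions before solutions") is fixed by the author, while the open-ended generation is delegated to the model.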

We will see people fighting AI and others embracing it, as in the early days of virtually any disruptive technology. But progress never stopped; it may have paused in one nation, but it flourished shortly thereafter in another.




Axel Schultze

CEO of BlueCallom, Chair of the World Innovations Forum. Working on the bleeding edge of fusing AI with neuroscience. Building the world's most advanced innovation software.