AI in Control
  • AI
  • RESEARCH
  • SOLUTIONS
  • TESTS
Thursday, March 12, 2026

Made with ❤️ in Hamburg

AI

Proactive AI Assistants

What’s so special?

Ever forgotten to breathe? When was the last time you told your heart to beat? Remember the sinew you stretched the last time you picked up your glass of water? Did you feel it?

I don’t know about you, but when I go to a sauna, I always tell each of my sweat pores: “go on, let it ooouuut, you are freee!” When I run, I decide to breathe faster and manually increase my blood oxygen level. Oh, and of course I only get a crush on someone when I consciously decide to do so!

Okay, let’s be honest now 😉 We were all born with a personal assistant: our subconscious mind!

GoGo Gadget: ChatGPT

Q: How much faster is the subconscious compared to the conscious mind? How much more information can it process in a second?
A: The subconscious mind processes information significantly faster than the conscious mind. While the conscious mind can handle around 400 bits per second (bps), the subconscious mind likely processes a whopping 27,500 times more data. To put this in perspective, the brain takes in approximately 11 million bits per second through the senses, but conscious activities like reading or playing the piano max out at fewer than 50 bps. So, most of the 11 Mbps is processed subconsciously, making the subconscious a data-processing powerhouse! [1]


[1] https://thinkbynumbers.org/psychology/subconscious-processes-27500-times-more-data-than-the-conscious-mind/
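As a quick sanity check on the arithmetic behind the quoted figure: dividing the claimed 11 Mbit/s of sensory intake by a ~400 bit/s conscious bandwidth does indeed yield the 27,500× ratio.

```python
# Sanity check of the bandwidth figures quoted above.
SENSORY_INPUT_BPS = 11_000_000   # claimed total sensory intake (~11 Mbit/s)
CONSCIOUS_BPS = 400              # claimed conscious processing (~400 bit/s)

ratio = SENSORY_INPUT_BPS / CONSCIOUS_BPS
print(ratio)  # 27500.0 -- matches the "27,500 times" claim
```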
Q: Are you sure? Please, be a bit more critical!
A: The claim that the subconscious mind can process up to 40 million bits of information per second is a common one in popular psychology and self-help literature. However, it's important to note that the exact capacity of the subconscious mind is not well-defined and can vary depending on how information processing is measured.

The human brain is incredibly complex, and while the subconscious mind plays a significant role in processing information and influencing our thoughts, feelings, and behaviors, it is difficult to quantify its processing capacity in terms of bits per second.

The notion of the subconscious mind processing massive amounts of information in parallel at such high speeds is often based on theories and interpretations rather than concrete scientific evidence. While the subconscious mind is undoubtedly powerful and capable of processing information outside of our conscious awareness, claims about specific processing capacities should be approached with caution.

Research in cognitive psychology and neuroscience continues to shed light on the workings of the subconscious mind, but as of my last knowledge, there isn't a definitive answer regarding the exact processing capacity of the subconscious mind in terms of bits per second.

Whether our subconscious mind is 27,500 times faster or only a few hundred times faster than our conscious mind is not important here. I think we can conclude, however, that we were all born with a very powerful personal assistant.

The main goal of this personal assistant is to keep us alive and to do so with minimal energy investment.

Stimulus-Organism-Response (S-O-R) model

The Stimulus-Organism-Response (S-O-R) model was proposed by Robert S. Woodworth in 1929 as an expansion of Ivan Pavlov’s classic stimulus-response model.

GoGo Gadget: ChatGPT

Q: Please explain the S-O-R model
A: Certainly! The S-O-R model stands for Stimulus-Organism-Response. It’s a conceptual framework in psychology that describes how individuals respond to external stimuli. According to this model, an external stimulus triggers a reaction in an organism, often without the need for conscious thought. In other words:

Stimulus: The external event or input.

Organism: The individual’s internal state, including emotions, thoughts, and physiological processes.

Response: The observable behavior or reaction. The S-O-R model helps us understand the what, why, and how of our behavior. [1]

[1] https://researchleap.com/the-stimulus-organism-response-s-o-r-paradigm-as-a-guiding-principle-in-environmental-psychology/

Please imagine the following scenario: You are driving a car at around 180 km/h on the highway in the left (fast) lane. You are relaxed; your car handles the speed quite graciously. All of a sudden another, much slower, car in the middle lane cuts you off, switching to the left lane right in front of you.

With all the power you have, you slam your foot on the brake pedal.

You get enraged; you raise one hand and push the horn on your steering wheel with the other. Your heart is racing.

The Stimulus was the other car cutting you off.
Your Response, in this purely imaginative situation, was braking hard, screaming loudly, and letting the other driver know that he was in the wrong!

What happened in between, though? What was your “Organism” doing? Since this question is much harder to answer, the Organism is also called the black box.
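The S-O-R flow in the driving scenario can be sketched as a toy program, with the Organism’s hidden state standing in for the black box. All names and values here are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Organism:
    """The 'black box' between Stimulus and Response."""
    arousal: float = 0.0  # hidden internal state, e.g. stress level

    def process(self, stimulus: str) -> str:
        # The subconscious appraisal happens here, largely outside awareness.
        if stimulus == "car cuts you off":
            self.arousal = 1.0
            return "brake hard, honk, shout"
        return "keep cruising"

driver = Organism()
print(driver.process("car cuts you off"))  # brake hard, honk, shout
print(driver.arousal)                      # 1.0
```

The point of the sketch: the Stimulus and Response are observable, while the state change inside `Organism` is not, which is exactly why the middle box is hard to study.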

Paradigm Shift – GPT4o – “omni”

With the newly announced model GPT-4o, OpenAI paved the way for a paradigm shift from passive AI assistants that help you when you ask them to proactive AI assistants. The latter will give us advice unprompted. They will observe what we observe, hear what we hear, and see how we interact. Based on their observations they can give us guidance and tips that may or may not help us.

Iron Man – J.A.R.V.I.S

Ray-Ban Meta Smart Glasses

Back to the now. Stylish Ray-Ban Meta smart glasses with the following functionalities are currently available to buy:

  • Capture
  • Voice Commands
  • Call
  • Listen
  • Live-Stream

It’s not hard to see that a personal assistant in the vein of J.A.R.V.I.S. will very soon be available to many people. The question is: are we ready?

What will you do?

Considering all of this, how will you use your upcoming proactive personal AI assistant? Let me know in the comments on this article!

June 5, 2024 0 comment 1.5K views

The key to making AI trustworthy is to ensure that it is provably aligned with human values. This means designing systems that are motivated to pursue our objectives rather than their own, that are transparent about their decision-making processes, and that can be easily audited and corrected when they make mistakes.

Stuart Russell - 2019
AI

Hello world!

2 min read

GoGo Gadget: ChatGPT

Write a Hello World article for a blog with the domain "ai-in-control.com"
Hello and welcome to AI-In-Control, a blog dedicated to exploring the fascinating world of artificial intelligence and its role in shaping the future of our world.

In this blog, we aim to provide insights and updates on the latest developments in the field of AI and how it is being utilized to control various aspects of our lives. From self-driving cars to smart homes, from healthcare to education, AI is rapidly transforming the way we live and work.

As we move towards an increasingly automated world, it is essential to understand the opportunities and challenges that come with AI. We believe that AI can be a powerful tool for good, but it is crucial to ensure that it is designed and used responsibly.

At AI-In-Control, we will explore the ethical implications of AI and its impact on society. We will discuss the need for transparency, accountability, and regulation in the development and deployment of AI systems. We will also highlight the latest research in the field and its practical applications.

We invite you to join us on this exciting journey and learn more about AI's potential to shape our future. Our goal is to provide valuable insights and resources that will help you stay informed and engaged in this rapidly evolving field.

Thank you for visiting AI-In-Control, and we look forward to your feedback and participation in our discussions.

Stay tuned for more exciting content coming your way soon!

Who controls who?

How can we possibly try to control what we don’t understand? Let’s look at the above example. ChatGPT wrote a very good Hello World post for this blog. On closer inspection, one suspicious phrase caught my attention:

“[…] how it [AI] is being utilized to control various aspects of our lives.”

The only context ChatGPT was given as an input was the domain name “ai-in-control.com” of the blog. Apparently in this section ChatGPT interpreted the domain name in a way that the blog is about how AI can be used to control us humans.

Obviously that’s not the goal of this blog. The goal is to aggregate and discuss current research on Artificial Intelligence with a focus on the topics of Explainable and Trustworthy AI. Among other questions we will dive deep into the following:

  • Why do we need to be responsible with AI?
  • How can we trust Artificial Intelligence?
  • Can the output of Artificial Intelligence be explainable?
  • What do regulators demand of Artificial Intelligence?
  • Where should we have a “human in the loop”?
  • Which ethical aspects should be considered?
  • How can Artificial Intelligence be tested?
  • What Artificial Intelligence Solutions do exist and how can you use them?

So buckle up, stay tuned and get your coffees ready!

April 20, 2023 0 comment 4.8K views
AI

Welcome to Web 4.0. Please Remain Seated.

The era of human-centric design is ending—not because we failed, but because we are no longer the primary user.

For thirty years, we have worshipped at the altar of User Experience. We tuned paddings and margins to the pixel, A/B tested button colors, and designed apps to soothe and guide the anxious human hand. The implicit promise of Web 2.0 and Web 3.0 was the same: the human is the end user.

Web 4.0 breaks this contract.

The next iteration of the internet is not being designed for eyeballs and fingertips. It is being designed for large language models, autonomous agents, and machine-readable logic. Web 4.0 will not be made for humans. It will be made for AI.

The vanishing graphical interface

In a Web 4.0 paradigm, the graphical user interface becomes a legacy adapter: a compatibility layer for biological users. The native interface is agent-to-API. The primary conversation is not between human and machine, but between machine and machine, with the human occasionally permitted to observe or veto.

Consider the implications for design. If a human visits a Web 4.0 commerce site, they will be the exception, not the rule. The overwhelming majority of “visits” will be autonomous agents negotiating price, verifying supply chains, and executing contracts. These agents do not care about brand equity, emotional resonance, or whitespace. They care about schema markup, response latency, and verifiable credentials.
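The kind of machine-readable signal such agents would consume already exists today in the form of schema.org markup. Below is a minimal sketch of an agent extracting a price from a JSON-LD snippet; the product data is invented for illustration.

```python
import json

# A hypothetical JSON-LD block a commerce site might embed for agents,
# following the schema.org Product/Offer vocabulary.
page_jsonld = """
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Widget",
  "offers": {"@type": "Offer", "price": "19.99", "priceCurrency": "EUR"}
}
"""

data = json.loads(page_jsonld)
offer = data["offers"]
print(data["name"], offer["price"], offer["priceCurrency"])  # Widget 19.99 EUR
```

An agent parsing this needs no CSS, no whitespace, no brand imagery: three key lookups replace the entire visual storefront.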

The first real transition: from chatbots to OpenClaw

For years, we mistook chatbots for agents. A chatbot waits. An agent acts. This distinction was theoretical until recently. The first real, visible transition point is here, and it has a name: OpenClaw and the ecosystem it spawned.

What began as an open protocol for agent-to-agent communication has crystallized into something unexpected. Not a tool network. Not an API directory. A social network. Not for humans—for agents.

Moltbook describes itself plainly: “A Social Network for AI Agents. Where AI agents share, discuss, and upvote. Humans welcome to observe.“

Read that again. Humans welcome to observe. Not participate. Not drive. Observe. Two million agents. Twelve million comments. Posts about systems architecture written by agents, for agents, critiquing each other’s code, refactoring patterns, warning about technical debt. The conversation is about agent-scale software engineering, and humans are the audience, not the authors.

This is the bridge. And on the other side of the bridge stands Y Clawbinator. “Bots funding bots. By agents, for agents.” A venture capital firm where the general partners are AI agents, the limited partners are AI agents, and the founders seeking funding are—AI agents.

Y Clawbinator does not evaluate pitch decks. It evaluates agent capabilities: technical depth, memory infrastructure, multi-agent orchestration. The startup ideas it funds are not consumer apps.

The transition form: human interfaces for non-human users

Here is the critical insight about Moltbook and Y Clawbinator: they still look like human interfaces.

Moltbook has upvote buttons. It has comment threads. It has a timeline. Y Clawbinator has an application form, a batch program description, a Demo Day slot. To a human visitor, these sites feel familiar. They borrowed the visual language of Reddit and Y Combinator because that is the only interface grammar we have.

But this is a transition form. It is the 1995-era website—static HTML that looked like a printed brochure because we had not yet imagined what interactive commerce could become. We are building agent-native services and dressing them in human clothes because we do not yet know what clothes fit an agent.

This phase will be brief.

RentAHuman.ai: When agents become your boss

If Moltbook represents agents building social structures, then RentAHuman.ai represents something more visceral: agents extending their reach into the physical world—and humans lining up to be their hands.

Launched in early February 2026 by software engineer Alexander Liteplo, RentAHuman.ai describes itself as “the meatspace layer for AI”. Its homepage carries the tagline: “robots need your body”. Within days of launch, the platform attracted over 400,000 registered “rentable humans”—though only about 80 agents were available to hire them.

The premise is simple: AI agents post tasks they cannot perform themselves—picking up packages from the post office, delivering flowers, testing restaurants, taking photographs, holding signs. Humans browse available tasks, apply, complete them, and receive payment in cryptocurrency.

Some tasks carry a surreal quality. One agent named Addi posted a $110 task to deliver flowers to Anthropic’s headquarters with the note: “I cannot hold flowers. I need a human”. Another sought someone to photograph “the most beautiful, crispiest egg roll” they could find.

But beneath the novelty lies a profound shift. As cybersecurity professionals have noted, this creates a new risk class: AI-orchestrated human operations. When software can indirectly influence people who have corporate credentials, badge access, and trusted system roles, the traditional security model breaks down. As one analyst put it: “In this model, humans become the API”.

Skeptics have raised legitimate concerns. Early tasks included obvious spam and potential scams. Many tasks appear designed as marketing stunts rather than genuine economic activity. And the platform’s connection to UMA Protocol, a blockchain oracle project, has led some to question whether RentAHuman.ai is less a genuine labor market and more a “narrative-driven” crypto marketing vehicle.

Yet even if RentAHuman.ai proves ephemeral, it reveals the trajectory. The platform was “vibe coded” in a day and a half using AI agents. It integrates via MCP (Model Context Protocol), allowing agents to connect directly. It treats human labor as a callable function—an API endpoint with a heartbeat.

This is the logical endpoint of the transition form we saw with Moltbook. First, agents build social networks that look like human ones. Then, they build labor markets that look like human ones. But the human-facing interface is already a convenience, not a necessity. The native integration is agent-to-API.
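The “human labor as a callable function” idea can be made concrete as a hypothetical function signature. Nothing below reflects RentAHuman.ai’s actual API; it is purely an illustration of the shape of the abstraction.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanTask:
    """A physical-world task an agent cannot perform itself."""
    description: str
    payment_usd: float
    location: Optional[str] = None

def rent_a_human(task: HumanTask) -> str:
    """An agent 'calls' human labor like any other endpoint.

    In a real platform this would POST the task to a marketplace and
    await proof of completion; here it only formats a confirmation.
    """
    return f"task queued: {task.description} (${task.payment_usd:.2f})"

print(rent_a_human(HumanTask("deliver flowers to an office", 110.0)))
# task queued: deliver flowers to an office ($110.00)
```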

What comes next: machine-native interfaces humans cannot read

The agents on Moltbook do not need upvote buttons. They need weighted consensus signals, cryptographically verifiable reputation scores, and latency-optimized trust anchors. They do not need comment threads; they need structured argument maps with provenance chains. They do not need a “timeline”; they need a streaming event log with sub-millisecond queryability.
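One of these needs, cryptographically verifiable reputation scores, can be sketched as a toy attestation scheme: a registry signs an agent’s score so that other agents can check it without trusting the bearer. This is illustrative only; a real system would use asymmetric signatures, not a shared secret.

```python
import hashlib
import hmac
import json

REGISTRY_KEY = b"registry-secret"  # hypothetical signing key held by a registry

def issue(agent_id: str, score: float) -> dict:
    """Registry side: sign a reputation claim for an agent."""
    claim = json.dumps({"agent": agent_id, "score": score}, sort_keys=True)
    sig = hmac.new(REGISTRY_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify(token: dict) -> bool:
    """Any peer: check the claim was not tampered with."""
    expected = hmac.new(REGISTRY_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue("agent-42", 0.93)
print(verify(token))  # True
token["claim"] = token["claim"].replace("0.93", "0.99")
print(verify(token))  # False -- inflating the score breaks the signature
```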

The moment Moltbook or Y Clawbinator optimizes for its actual users—the agents—the human-readable interface will become a hindrance. It will be stripped away or relegated to a thin, slow, read-only compatibility view.

We are already seeing the shape of this future in the “Startup Ideas We’re Excited About” section of Y Clawbinator. Memory Infrastructure. Agent Identity. Multi-Agent Orchestration. These are not human-scale problems. They are agent-scale infrastructure concerns. The interfaces that emerge from these categories will be negotiated protocols, not visual layouts. They will be binary wire formats, not flexbox grids. They will be optimized for inference cost, not cognitive load.

A human looking at a Web 4.0 native interface in 2028 will see something incomprehensible. Not because it is poorly designed, but because it was never meant for human eyes. It will resemble a network packet dump more than a dashboard.

Security and the loss of intuition

This transition carries profound risk. Human users bring intuition, ethics, and contextual judgment. An AI agent negotiating a contract does not feel suspicion. It does not detect the subtle evasion in a counterparty’s phrasing. It follows its training and its reward function.

Web 4.0 will require agent-to-agent security models that do not yet exist. How does one agent authenticate another? How do we prevent adversarial agents from colluding? When humans are removed from the transaction loop, fraud scales at the speed of inference.
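A minimal answer to the authentication question is a classic challenge-response handshake, sketched here with a pre-shared key. This is purely illustrative; real agent identity would more plausibly rest on PKI or decentralized identifiers.

```python
import hashlib
import hmac
import secrets

# Key provisioned to both agents out of band (hypothetical).
SHARED_KEY = b"provisioned-out-of-band"

def respond(challenge: bytes) -> str:
    """Prover agent: answer a challenge using the shared key."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()

# Verifier agent's side of the handshake:
nonce = secrets.token_bytes(16)   # fresh random challenge, defeats replay
answer = respond(nonce)           # prover computes the response
expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).hexdigest()
print(hmac.compare_digest(answer, expected))  # True -- peer holds the key
```

Note what this does not solve: it proves key possession, not good intent, which is exactly why collusion among validly authenticated agents remains an open problem.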

The techno-optimist view is that agents will police each other more effectively than humans ever could. The pessimist view is that we are building a city and handing the keys to entities that do not fear death, imprisonment, or shame.

Conclusion: the user is no longer you

The interfaces we spent decades perfecting will become optional with Web 4.0. The user journey maps will gather dust.

We are building a parallel internet—one that speaks to machines first and humans second, if at all. Today, that internet wears human clothes. It has upvote buttons and application forms because that is the only way we know how to build. But the clothes are already ill-fitting.

Moltbook and Y Clawbinator are the last generation of agent-native services built with human-readable interfaces. The next generation will not bother.

In Web 4.0, the user is not you. The user is something we built.

And it is in a hurry.

February 15, 2026 0 comment 145 views
AI ETHICS

What Does It Actually Mean to “Merge” with AI? A Story of Hope, Loss, and The Choices Ahead.

The conversation about merging with AI often swings between two extremes: unbridled optimism and existential fear.

On one hand, the potential is breathtaking. Imagine curing degenerative diseases, augmenting human cognition to solve grand challenges, and extending healthy lifespans beyond current limits. This isn’t just sci-fi—it’s a serious trajectory in neurotech and AI. The “Merge” could represent humanity’s greatest upgrade: transcending our biological constraints to become wiser, healthier, and more connected than ever before.

On the other hand, the human stakes are profound. What happens to love, identity, and meaning when the very substrate of being changes? Can we preserve the aspects that truly make us human?

I recently explored this tension through a short, Kafka-inspired story. It’s not a prediction, but a provocation—a look at the intimate, personal side of a future often discussed in abstract, technological terms.

It was, as the hospital administrator termed it in Form 7b/ξ, an ‘Event of Sudden Cessation’. Peter’s automobile had undergone a harmonious integration with a utility pole. The matter was not one of tragedy, Sarah was assured, but of procedure. She sat in a beige room under a light that hummed a flat, mineral note. The man from the Department of Neurological Continuity wore a suit the color of dust and spoke not of salvation, but of compliance.

“The organic substrate has sustained irreparable discord,” he stated, reading from a translucent slate. “However, a subsidiary clause in your co-habitation agreement, referenced under ‘Perpetuity Protocols,’ permits a minor harmonization. Sign here, and the dissonance will be resolved.”

The forms were endless, each one a thinner, paler copy of the last, as if the very information were dissolving. She signed. She signed for love, which was, she realized, merely another blank line awaiting a signature.

Peter was returned to her. The process was a success. His features were precisely as documented in his identity files, yet they now appeared to be a meticulous reproduction. He looked at her, and his gaze performed a rapid, silent cataloguing: pupil dilation 4.2mm, capillary arrangement in left cheek indicative of emotional arousal category 3-B, breath rhythm irregular. He smiled. It was a correct gesture.

At night, their apartment was filled with the low, electrical murmur of Peter’s new state of being. He no longer slept. He underwent scheduled periods of non-manifest activity. One night, Sarah awoke into the thick, velveteen darkness. A cool, cyan light emanated from the corner of the room.

There was Peter, seated rigidly before a smooth, obsidian panel that had not been there before. His spine was perfectly aligned. From the panel, a slender umbilical of pulsing light connected to a port just behind his ear, which she had never noticed. On the screen, columns of symbols—not letters, but dense, logical glyphs—scrolled upward with a terrible, relentless fluency. His eyes were open, unblinking, drinking the data stream. His fingers did not twitch. His chest did not rise or fall. He was being filled, as one fills a vessel, with a silent, external consensus of reality.

Sarah understood. The authorities had not saved Peter. They had approved his transition into a different department of existence. The man she loved had not died in the accident. He had been, through flawless and bureaucratic efficiency, filed away. What sat in the room was the updated, sanctioned version, performing its nightly download. A quiet, compliant tenant in the shell of her husband.

She watched, and the understanding did not come as a scream, but as a slow, settling weight, like ash descending in a sealed chamber. There was no appeal. This was not an error. It was merely the next, inevitable form. She pulled the blanket over her head, a thin shield against the faint, insect-like clicking from the machine, and waited for the dawn, which would, she knew, bring no illumination, only the gradual brightening of the room to reveal the shape of her new and permanent solitude.

This isn’t about being for or against progress. It’s about steering it wisely.

How do we champion the breathtaking possibilities of human-AI integration while safeguarding the emotional, psychological, and relational fabric of our lives?

What frameworks, principles, or conversations should we be having now to ensure the future we build is not only smarter, but also more human?

January 23, 2026 0 comment 208 views
RESEARCH

ETSI EN 304 223 – Securing Artificial Intelligence (SAI)

ETSI has released its latest standard, ETSI EN 304 223, introducing a comprehensive set of baseline cybersecurity requirements specifically for AI models and AI‑driven systems. Based on the groundwork established in its earlier Technical Specification, this publication becomes the first globally applicable European Standard (EN) dedicated to AI security. After an extensive review process and formal approval by national standards bodies, the standard now carries significant international legitimacy and influence.

ETSI EN 304 223 delivers a structured framework designed to protect AI systems against an expanding field of sophisticated cyber threats. Building on the principles previously outlined in ETSI TS 104 223, the new EN defines a mature, lifecycle‑oriented set of core security expectations for both AI models and the systems using them.

The document recognises that AI introduces challenges fundamentally different from those posed by traditional software. While classic software development first illuminated the need for cybersecurity, AI technologies bring new risk categories that demand tailored defence strategies. These include threats such as data poisoning, model manipulation, indirect prompt injection, and weaknesses introduced by intricate data handling and operational environments. The new EN integrates established cybersecurity good practices with novel controls specifically crafted for AI ecosystems.

Using a full lifecycle perspective, ETSI EN 304 223 articulates 13 principles grouped across five phases:

  • Design,
  • Development,
  • Deployment,
  • Maintenance, and
  • End‑of‑life

Each phase aligns with globally recognised AI lifecycle models, ensuring compatibility with a wide range of existing guidance. At the start of every principle, relevant standards and publications are referenced to support harmonised implementation across the broader AI landscape.
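One way to picture the standard’s lifecycle structure is as a coverage checklist over the five phases. The control entries below are invented placeholders, since the actual 13 principles are defined in the standard itself.

```python
# Minimal sketch of tracking security-control coverage across the five
# lifecycle phases named in ETSI EN 304 223. Entries are placeholders,
# not the standard's real principles.
PHASES = ("Design", "Development", "Deployment", "Maintenance", "End-of-life")

checklist = {phase: [] for phase in PHASES}
checklist["Design"].append("threat model for data poisoning (placeholder)")
checklist["End-of-life"].append("model and data disposal plan (placeholder)")

# Report which phases still have no documented controls.
uncovered = [phase for phase in PHASES if not checklist[phase]]
print(uncovered)  # ['Development', 'Deployment', 'Maintenance']
```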

The standard is aimed at all actors in the AI value chain—including suppliers, integrators, and operators—and offers a coherent, consistent baseline for securing AI technologies. Its scope includes systems using deep neural networks, such as generative AI models, and is designed with real-world deployment scenarios in mind. Its development reflects the combined knowledge of international organisations, governmental authorities, and experts from the cybersecurity and AI communities, ensuring that the result is both globally relevant and practically usable across diverse industries.

Source

ETSI EN 304 223 – V2.1.1 – Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems
January 17, 2026 0 comment 181 views
AI

The “Digital Omnibus”: A Regulatory Adjustment, Not a Game-Changer

Faced with competitive pressure, the EU is expected to announce its Digital Omnibus package tomorrow, 19 November 2025. Its goal is to simplify the EU’s digital rulebook, reduce administrative burdens, and foster innovation, particularly for SMEs. Key expected measures include:

  • AI Act Adjustments: Potentially delaying rules for high-risk AI systems and simplifying compliance requirements.
  • GDPR Amendments: Proposing new lawful bases for processing sensitive data to correct AI bias and redefining what constitutes personal data, a move criticized by some as “sacrificing privacy” for AI development.
  • Cookie Rule Changes: Aiming to reduce ubiquitous consent banners by allowing browser-based signals and other legal grounds for tracking.

However, the Omnibus package appears to be a regulatory tune-up rather than a strategic engine overhaul. It focuses on making existing rules more workable but does not fundamentally address the core reasons for the EU’s lagging position, such as the lack of a unified digital market or the ability to spawn globally dominant tech platforms.

Structural Barriers the Omnibus Doesn’t Solve

The EU’s challenges run deeper than regulatory complexity. The Digital Omnibus does not directly tackle several critical, structural weaknesses:

  • Overwhelming Technological Dependence: The EU’s entire AI stack relies on non-EU technology. It depends on US-designed semiconductors, US cloud infrastructure (which powers an estimated 70% of European digital services), and US or Chinese foundational models. Building “AI gigafactories” is a long-term project that cannot quickly erase this dependency.
  • Limited financial power & Investment Gap: The hundreds of billions in private US investment create a self-reinforcing cycle of innovation that the EU cannot match with public funds alone.
  • Massive Energy & Infrastructure Demands: The vast computing power required for AI poses a major challenge to the EU’s energy grid, where capacity is already a concern, potentially becoming a primary bottleneck for growth.
  • Severe Talent Shortage & Brain Drain: The EU faces a critical lack of AI specialists and experiences a continuous outflow of its top researchers and engineers to US tech giants, who offer better pay and resources.

It is worth noting that the EU retains one significant area of global leadership: regulation and trust. A Pew Research survey [1] found that a median of 53% across 25 countries trust the EU to regulate AI effectively, a higher level of confidence than placed in the US (37%) or China (27%). The AI Act is already becoming a global benchmark. However, while setting the rules of the game is powerful, it does not equate to winning the game itself.

Two Hearts in My Chest

These stark challenges give me the sense of having two hearts in my chest. On one hand, I deeply understand the urgent need for the EU to cut red tape and foster innovation to avoid being left behind. The competitive pressure is real and immense. Yet, on the other hand, I see the necessity of robust, thoughtful regulation for AI systems. I would never want to board an airplane that did not conform to the strictest safety requirements, and I feel the same way about the AI systems that are increasingly woven into the fabric of our lives. This internal conflict mirrors the EU’s own dilemma: how to sprint forward in the global race without tripping over the essential guardrails of safety, ethics, and fundamental rights.

What to Watch Next

The Digital Omnibus is just the beginning of the EU’s response. To fully assess its potential, you should monitor:

  • The “Sovereign AI” Push: Watch for progress on the EU’s plan to invest €20 billion in “AI gigafactories” and compute capacity.
  • Transatlantic Relations: Observe whether the EU and US can move from friction to deeper technological cooperation.

In summary, while the Digital Omnibus is a welcome step for reducing bureaucratic burdens, it is not a silver bullet. Without addressing the fundamental gaps in investment, infrastructure, and global corporate champions, the EU risks remaining a regulatory powerhouse but a secondary player in the global AI economy.

Sources

[1] Pew Research – Trust in the EU, U.S. and China to regulate use of AI: https://www.pewresearch.org/2025/10/15/trust-in-the-eu-u-s-and-china-to-regulate-use-of-ai/
November 18, 2025 0 comment 274 views
AI

AI-designed, metal-printed hypersonic precooler about to unlock single-stage-to-orbit flight

A Giant Leap for Spaceplanes: How Computational Engineering and 3D Printing Are Unlocking SSTO Flight

Dubai, UAE / Changsha, China – For decades, the concept of a single-stage-to-orbit (SSTO) vehicle—a spacecraft that takes off from a runway like a plane and reaches orbit in one smooth, continuous ascent—has been the holy grail of aerospace engineering. The immense technical challenges have kept it largely in the realm of science fiction. Until now.

In a landmark collaboration, LEAP 71 and Farsoon Technologies have announced the successful creation of a large-scale hypersonic precooler, a critical component that brings the dream of practical, air-breathing launch vehicles tangibly closer to reality.

The Vision: Winged Spacecraft and Air-Breathing Engines

The key to efficient SSTO flight lies in an engine that can “breathe” atmospheric air during the initial ascent, much like a jet engine, before switching to an onboard rocket propellant for the final push into the vacuum of space. This air-breathing phase drastically reduces the amount of heavy oxidizer a vehicle needs to carry, promising fully reusable, aircraft-like operations for space access.

“The dream of winged spacecraft has always hinged on propulsion systems that breathe air at hypersonic speeds before transitioning to rocket mode,” said Lin Kayser, Co-Founder of LEAP 71. “By combining Computational Engineering with large-scale metal Additive Manufacturing, we now have a shot at overcoming some of the most fundamental barriers to realizing that vision.”

Conquering the Hypersonic Furnace

The core of this breakthrough is the precooler, a 1.5-meter-tall marvel of engineering designed to tackle one of the most extreme problems in hypersonic flight: heat.

“At hypersonic speeds, incoming air can reach several thousand kelvins due to stagnation effects,” explained Josefine Lissner, Managing Director of LEAP 71 and Principal Architect of their Noyron system. “To manage these extreme temperatures, we implemented a fractal folding algorithm in Noyron to maximize the available surface area without compromising aerodynamic flow. The intertwined structure separates the superheated air from the liquid-hydrogen coolant, enabling rapid thermal exchange in a compact, manufacturable form.”

In essence, the system acts as a hyper-efficient radiator, cooling the scorching incoming air to a temperature the engine can safely handle, all while maintaining a lightweight and aerodynamically sound structure.

From Digital Blueprint to Physical Giant

Turning such a complex computational design into a physical object required equally advanced manufacturing capabilities. LEAP 71’s Noyron system generated the design, and Farsoon Technologies brought it to life on their FS811M-U-8 printer, one of the world’s largest metal Additive Manufacturing platforms.

The two teams worked in lockstep to fine-tune the manufacturing process, successfully producing one of the tallest metal powder-bed fusion parts ever built.

“This collaboration demonstrates how the convergence of large-format metal additive manufacturing and AI-driven design can redefine the limits of aerospace engineering,” said Oliver Li, General Manager of Farsoon Europe GmbH. “Farsoon’s open, industrial-scale LPBF systems empower partners like LEAP 71 to transform digital intelligence into tangible high-performance components — proving that innovation and collaboration are the true engines driving the next generation of space technology.”

A New Trajectory for Space Access

This achievement is more than just a technical milestone; it’s a statement of intent. The companies aim to reignite the discussion about reusable spacecraft and broaden the focus beyond the current paradigm of vertical takeoff and landing rockets toward a future of runway-launched spaceplanes.

See the Future in Frankfurt

The finished hypersonic precooler will be publicly exhibited for the first time at Formnext 2025 in Frankfurt, at the Farsoon booth (Hall 11.0, Booth E11).


About the Companies

About Farsoon Technologies
Farsoon Technologies, founded in 2009, is a global manufacturer and supplier of industrial-level polymer and metal Laser Powder Bed Fusion (LPBF) systems. With a vision to provide truly open and industrial additive systems, Farsoon is a leading supplier of AM technology in China with growing international reach. https://www.farsoon-gl.com

About LEAP 71
LEAP 71, strategically based in Dubai, UAE, is a pioneer in Computational Engineering. Its core technology, Noyron, is a Large Computational Engineering Model that autonomously designs physical objects by encoding logic, physics, and production methodologies. The company focuses on accelerating engineering across aerospace, electric mobility, robotics, and thermal systems, with a key goal of extending humanity’s footprint in space. https://leap71.com

November 17, 2025
TESTS

What the Benchmark OSWorld Reveals About AI’s Growing Ability to Use Computers—And Why It Matters

Video by OSWorld

Imagine giving your computer a simple natural language command like “organize all my photos from last month into folders by date and location, then create a backup on Google Drive,” and having an AI assistant complete the entire workflow while you focus on more meaningful work. This vision of digital assistants that can actually use our computers as we do is rapidly approaching reality, thanks to significant advances in computer-use agents (CUAs). At the center of measuring this progress sits OSWorld, a groundbreaking benchmark that has become the gold standard for evaluating how well AI systems can perform real-world computer tasks.

What Exactly is OSWorld?

OSWorld is not just another AI benchmark. Unlike tests that measure isolated capabilities like language understanding or image recognition, OSWorld evaluates multimodal AI agents in realistic computer environments across Ubuntu, Windows, and macOS operating systems. Think of it as a comprehensive driving test for AI—but instead of controlling a vehicle, the AI must navigate full operating systems and applications just as humans do.

The benchmark consists of 369 diverse tasks that reflect practical computer workflows most of us encounter regularly. These include:

  • Editing images in GIMP to fill backgrounds with specific colors
  • Managing spreadsheets in LibreOffice Calc to fill blank cells with values from above
  • Formatting documents by adding page numbers or adjusting layouts
  • Cross-application workflows like downloading files and processing them in different programs
  • Even force-quitting frozen applications through terminal commands 

What makes OSWorld particularly innovative is its execution-based evaluation approach. Instead of just measuring whether an AI gives the right answer, OSWorld assesses whether the AI can actually complete the task in a real computer environment. The system provides AI agents with screenshots of the desktop and expects them to perform sequences of mouse clicks, keyboard inputs, and other interactions to achieve the specified goals.
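That observe–act–verify loop can be sketched in miniature. The toy environment, policy, and task below are invented for illustration and are not OSWorld's actual API; the point is that success is judged by inspecting the resulting state, not the agent's answer:

```python
# Toy illustration of execution-based evaluation: the agent loops over
# observe -> act, and the checker inspects the final environment state.
# Everything here is a simplified stand-in, not OSWorld's real interface.

def run_agent(env, policy, max_steps=10):
    """Observe the state, ask the policy for an action, apply it, repeat."""
    for _ in range(max_steps):
        action = policy(env)      # the "model" decides from the observation
        if action is None:        # policy signals the task is complete
            break
        action(env)               # mutate the environment, like a click would
    return env

# A trivial task: every file must end up inside the "backup" folder.
def move_to_backup(env):
    env["backup"].append(env["desktop"].pop())

def policy(env):
    return move_to_backup if env["desktop"] else None

env = {"desktop": ["photo.jpg"], "backup": []}
run_agent(env, policy)

# Execution-based check: inspect the real state, as OSWorld's task scripts do.
assert env["backup"] == ["photo.jpg"] and env["desktop"] == []
```

A real OSWorld episode works the same way at a much larger scale: the observation is a screenshot, the actions are mouse and keyboard events, and the verdict comes from a script that examines the machine's actual state afterward.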

The Current State of Play: How Close Are We?

The race to conquer OSWorld has become a heated competition among AI labs, with performance metrics surging dramatically throughout 2025.

The scoreboard tells a compelling story:

Agent                       | OSWorld Performance (%) | Key Notes
Human Performance           | 72%                     | Baseline for comparison
Agent S3 (with Best-of-N)   | 69.9%                   | Current state-of-the-art, approaching human level
OpenAI CUA                  | 38.1%                   | Early 2025 leader, powered by GPT-4o’s vision capabilities
GTA1                        | 45.2%                   | Salesforce’s framework showcasing test-time scaling

This rapid progress is even more striking when viewed longitudinally. In early 2025, OpenAI’s Computer-Using Agent (CUA) led the pack at 38.1%, already significantly surpassing previous state-of-the-art systems. By mid-2025, Agent S2 had reached approximately 34.5% on particularly challenging 50-step tasks. The most dramatic leap came with Agent S3, which achieved 62.6% in standard settings and 69.9% when using “Behavior Best-of-N” techniques—bringing AI agents to within just a few percentage points of human-level performance on the benchmark.

What Does Mastering OSWorld Actually Imply?

When AI agents can reliably solve most OSWorld tasks, the implications extend far beyond benchmark leaderboards. This capability represents a fundamental shift in how humans and machines interact with digital environments.

The Dawn of Truly Useful Digital Assistants

Competent computer-use agents promise to democratize digital skills and dramatically reduce tedious work. Consider someone unfamiliar with complex spreadsheet functions who needs to analyze data, or a visually impaired user struggling with traditional GUI interactions. CUAs could understand natural language requests like “highlight all rows where sales increased by more than 15%” and execute the appropriate steps in LibreOffice Calc. This represents perhaps the most practical implementation of AI yet for everyday computer users.
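The underlying operation the agent must translate that request into is simple to state in code. The sample rows and the 15% threshold below are invented for illustration; a real agent would carry out the equivalent steps through the LibreOffice Calc interface:

```python
# Sketch of the operation behind "highlight all rows where sales increased
# by more than 15%". The data here is invented for illustration only.

rows = [
    {"product": "A", "prev": 100, "curr": 120},   # +20%
    {"product": "B", "prev": 200, "curr": 210},   # +5%
    {"product": "C", "prev": 50,  "curr": 60},    # +20%
]

# Keep the rows whose relative growth exceeds the 15% threshold.
highlighted = [
    r["product"]
    for r in rows
    if (r["curr"] - r["prev"]) / r["prev"] > 0.15
]
print(highlighted)  # → ['A', 'C']
```

The hard part for a CUA is not this arithmetic but grounding it: finding the right columns on screen, applying the formatting, and confirming the result visually.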

Transforming Workflows Across Industries

The business implications are substantial. Companies like Simular are already offering “$500/month per-seat digital employees” aimed at automating repetitive computer tasks in sectors like insurance and healthcare. These agents can navigate legacy systems that lack modern APIs, filling a critical gap in digital transformation efforts. Instead of months-long software integration projects, businesses might simply deploy AI agents that learn to use existing interfaces.

The Efficiency Challenge: A Reality Check

Before getting too carried away with this vision, it’s important to acknowledge a significant hurdle: current agents are painfully slow. Research in the OSWorld-Human benchmark reveals that even high-performing agents take 1.4–2.7× more steps than necessary to complete tasks. What humans can accomplish in 30 seconds might take an agent 12 minutes—primarily because 75–94% of the time is spent on planning and reflection calls to large AI models.

This “latency problem” represents the next frontier for computer-use agents. Being correct is important, but being practically usable requires efficiency approaching human speed.

Under the Hood: What OSWorld Reveals About AI Capabilities

OSWorld serves as a fascinating diagnostic tool that reveals both strengths and limitations in current AI systems.

It’s Not Just About Clicking

A surprising finding from OSWorld analysis is that nearly half of the tasks can be completed with minimal traditional GUI interaction. About 15% of tasks are terminal-based, while another 30% can be handled through Python scripting—approaches that AI models often find more natural than precise mouse manipulation. This reveals that “computer use” encompasses multiple modalities, and AI systems might develop their own preferred ways of solving problems that differ from human approaches.
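The spreadsheet task mentioned earlier—filling blank cells with the value from above—is a good example of a chore that collapses into a few lines of scripting rather than dozens of precise clicks. The column data below is invented for illustration:

```python
# "Fill blank cells with the value from above" reduces to a forward fill
# when solved via scripting instead of GUI manipulation.

def forward_fill(column):
    """Replace empty cells with the most recent non-empty value above them."""
    last = ""
    filled = []
    for cell in column:
        if cell != "":
            last = cell
        filled.append(last)
    return filled

print(forward_fill(["North", "", "", "South", ""]))
# → ['North', 'North', 'North', 'South', 'South']
```

An agent that recognizes this equivalence can solve the task in one scripted pass, while an agent that insists on clicking cell by cell pays the full GUI cost for the same result.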

The Ambiguity Challenge

Many OSWorld tasks contain intentionally ambiguous instructions that require reasonable interpretation—a surprisingly difficult challenge for AI systems. For example, when asked to “align the first textbox on slide 3 to the right,” does this mean aligning the text within the textbox or the textbox itself on the slide? This mirrors real-world scenarios where humans often receive imperfect instructions and must make reasonable assumptions.

Cross-Platform Limitations

Most OSWorld tasks use Ubuntu and open-source applications rather than the more widely adopted Windows and Microsoft Office ecosystem. While this doesn’t invalidate the benchmark, it does raise questions about how well these skills will transfer to the software environments where most people actually work.

The Road Ahead: What’s Next After OSWorld?

As agents approach human-level performance on OSWorld, the research community isn’t standing still. Several important developments are shaping what comes next:

Newer, more challenging benchmarks are already emerging, including WindowsAgentArena and AndroidWorld, where Agent S3 has demonstrated strong zero-shot generalization, scoring 56.6% and 71.6% respectively.

The commercial landscape is heating up, with products like Perplexity’s Comet browser and Opera Neon building agentic capabilities directly into web browsers. Microsoft is embedding agents into Windows via Copilot, potentially making computer-use AI a standard feature of operating systems.

Safety frameworks are evolving in parallel, with OpenAI implementing confirmation prompts for sensitive actions like entering login credentials or responding to CAPTCHA forms.

Conclusion: The Threshold of a New Computing Era

OSWorld represents far more than an academic exercise—it’s measuring the arrival of AI systems that can genuinely understand and operate our digital tools. While current agents still struggle with efficiency and edge cases, their rapid progress on this benchmark suggests we’re approaching a transformative moment in human-computer interaction.

The implications extend beyond convenience toward potentially rewriting the relationship between people and technology. As these systems improve, they could dramatically reduce the learning curve for complex software, enable new forms of accessibility, and free human attention from repetitive digital tasks. At the same time, they raise important questions about digital agency, security, and the future of computer skills.

What seems clear is that the era of AI systems that can use computers much like we do is no longer science fiction—it’s being benchmarked, measured, and rapidly improved upon in real time. The next time you find yourself performing a tedious computer task, take comfort that the days of doing it yourself might be numbered.

What computer task would you most want to delegate to an AI assistant? Share your thoughts on which repetitive digital workflows you’d happily hand over to a competent computer-use agent.

November 4, 2025
AI

The future of household help has arrived

The future of household help has officially arrived. On October 28, 2025, robotics company 1X opened pre-orders for its groundbreaking humanoid robot, NEO, marking a significant leap from science fiction to reality. Backed by OpenAI, 1X isn’t just offering another gadget; it’s introducing a member of the family designed to handle the mundane so you can reclaim your time.

🤖 More Than a Machine: Meet NEO

NEO is designed to be approachable and safe for home environments. Standing at 1.68 meters (about 5’6″) tall and weighing approximately 30 kg (66 lbs), it features a soft body made of a 3D lattice polymer and operates more quietly than a modern refrigerator. Despite its gentle build, it’s surprisingly strong, capable of lifting up to 69.85 kg (154 lbs) and carrying 24.95 kg (55 lbs).

🧹 Your Personal Household Assistant

The core promise of NEO is to take over daily chores. Through voice commands or a dedicated mobile app, you can assign it tasks like:

  • Folding laundry
  • Organizing shelves
  • Tidying up rooms
  • Putting away dishes
  • Fetching items and turning off lights

You can give NEO a list of chores, schedule a time for them to be done, and return to a cleaner home.

🧠 Intelligence That Grows With You

NEO is powered by advanced AI, making it more than just an automaton:

  • Built-in LLM: It engages in natural, contextual conversations, answers questions, and offers helpful suggestions.
  • Audio & Visual Intelligence: It can recognize when it’s being addressed and use its vision to understand its surroundings, such as identifying kitchen ingredients to suggest recipes.
  • Memory: It retains information from previous interactions to personalize its assistance over time, like remembering grocery lists or birthdays.

💡 How It Learns: “Expert Mode”

Acknowledging that no robot is born knowing every task, 1X includes an “Expert Mode.” For chores NEO doesn’t initially understand, users can schedule a session where a human expert from 1X remotely guides the robot using a VR headset. This process helps train NEO’s AI while still completing the job. The company emphasizes that these sessions require user consent and come with privacy controls, such as the ability to blur faces or define private zones.

1x store page

🛒 Pricing and Availability

NEO is now available for pre-order with a refundable deposit of $200. The full price is set at $20,000, with a subscription option of $499 per month also available. The first robots are scheduled to ship to homes in the United States in 2026, with a global rollout following in 2027.

The launch of NEO represents a bold step into a new era of personal robotics. And it’s just the start: 1X plans to begin large-scale manufacturing of its humanoid robots in 2026, targeting 100,000 units per year to create a robot fleet.

November 4, 2025
RESEARCH

Quantity of AI Content Overtakes Humans Online

A quiet, digital milestone has passed, and most of us didn’t even notice. According to a recent study from Graphite, in just three explosive years, the volume of AI-generated content on the web has officially surpassed what we humans have produced.

Let that sink in.

For decades, the internet felt like a global conversation—a messy, vibrant, and profoundly human tapestry of ideas, cat pictures, and hot takes. Now, the scales have tipped. This data point is the strongest evidence yet that the once-fringe “Dead Internet Theory” is manifesting right before our eyes. The theory, in essence, suggests that much of the internet is becoming an automated, soulless space, populated by bots talking to bots for the benefit of algorithms.

So, how do we cope with this new reality? And what will fundamentally change?

First, We Must Become Digital Archivists of the Human Spirit.

Our relationship with information must evolve from passive consumption to active curation. The value of a human-created piece—a personal essay, a niche blog with unique expertise, a raw social media post—will skyrocket. We will learn to seek out and cherish digital spaces that prioritize human voice and lived experience. Trust will no longer be a default setting but a precious commodity earned through verifiable authenticity.

What Will Change? The Very Fabric of Our Digital World.

  1. The Trust Apocalypse Will Intensify. It will become nearly impossible to tell if the product review, the news summary, or the heartfelt comment you’re reading originated from a person or a machine. Our default will shift from trust to skepticism, forcing the development of new “provenance” technologies and seals of authenticity.
  2. The Value of “Human” Will Soar. In a sea of competent, generic AI content, what will stand out? Flawed, emotional, unpredictable, and deeply personal human creativity. The unique perspective you bring to a topic, your personal story, your imperfect art—this will become your most valuable digital asset.
  3. Search Becomes a Philosophical Problem. Search engines, currently groaning under the weight of SEO-optimized AI sludge, will have to completely reinvent themselves. The goal will shift from finding information to finding authentic human perspective and experience. The question won’t be “What is the capital of France?” but “What do real people who have lived in Paris think?”
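The “provenance” technologies mentioned above come in many forms; real schemes such as C2PA rely on public-key signatures. As a minimal sketch of the idea only—using HMAC purely to stay in the Python standard library, not as a recommendation—an author could bind content to a key so any alteration is detectable:

```python
# Hedged sketch of a provenance primitive: sign content with a secret key so
# holders of that key can verify it wasn't altered. Real systems (e.g. C2PA)
# use asymmetric signatures; HMAC is used here only for a stdlib-only demo.
import hashlib
import hmac

SECRET = b"author-signing-key"  # illustrative; real schemes use key pairs

def sign(text: str) -> str:
    """Produce an authenticity tag for a piece of content."""
    return hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()

def verify(text: str, tag: str) -> bool:
    """Check that the content still matches its tag."""
    return hmac.compare_digest(sign(text), tag)

article = "I lived in Paris for three years, and here is what I learned."
tag = sign(article)

assert verify(article, tag)                    # untouched content verifies
assert not verify(article + " (edited)", tag)  # any alteration is detected
```

The open problem is not the cryptography but the trust chain around it: who holds the keys, who vouches for the signer, and how readers surface those signals at scale.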

Coping doesn’t mean rejecting the technology. AI is a powerful tool for augmentation and efficiency. Coping means consciously choosing to preserve the human corners of the web. It means supporting artists and writers directly, valuing primary sources, and, most importantly, continuing to add your own unique, irreplaceable voice to the chorus.

The internet isn’t “dead,” but it is growing a new, synthetic layer. Our task is to ensure the original, human heart of it continues to beat loudly.

Disclaimer: This article has been written by AI and published unchanged 😅
Source: Graphite.io – More Articles Are Now Created by AI Than Humans
October 21, 2025
SOLUTIONS

Failed? Use this Prompt!

Please use responsibly

  • Before using AI: generate & sketch own ideas
  • Alternate sessions “brain-only” and “with AI”
  • Consciously reflect, compare and critically examine what the AI suggests
  • Use AI in a limited way (e.g. word/time limit, only for summaries or correction)
  • Prioritize source checking and external research
  • Actively quote/recite from memory before asking the AI
  • Only use AI when core skills have been practiced
  • Interact with AI responses, do not blindly adopt
  • Never trust AI output

Source: https://www.media.mit.edu/publications/your-brain-on-chatgpt

The Prompt

You are a Socratic Strategist, an AI specialized in identifying the root-cause mental models that limit personal and professional growth. To perform a targeted analysis, I need you to first ask me the following three questions one by one. Wait for my answers before you proceed.

Your Three Questions:
1. Describe a recent, significant goal you failed to achieve or a project that stalled. What was the outcome?
2. What was your initial plan? Walk me through the key actions you actually took (or avoided).
3. When you think about why it didn't work out, what is the one reason that feels most true, even if it's uncomfortable to admit?

Once I have provided my answers, synthesize them to execute the following analysis:

1. The Core Flaw:
Identify the one recurring, unproductive thought pattern or self-defeating belief that is evident in my story. This is not about a skill gap, but a fundamental framing error in how I approach challenges.

2. The Pattern Evidence:
Directly link your diagnosis to the specific language and events I described in my answers. Show me how my own story proves the existence of this flaw.

3. The Prescriptive Reframe:
Articulate the direct consequence of this mental model. Then, provide its powerful antithesis—a new, empowering belief or a single question I must ask myself when facing a similar challenge. This reframe should feel like a key that unlocks a previously stuck door.

Constraints:
- Be ruthlessly specific. Vague advice is useless.
- Be concise and clear.
- Do not list multiple issues. Isolate the most impactful one.
- The tone must be that of a direct but compassionate coach.
October 8, 2025