AI in Control
AI

Proactive AI Assistants

What’s so special?

Ever forgotten to breathe? When was the last time you told your heart to beat? Do you remember the sinews you stretched the last time you picked up a glass of water? Did you feel them?

I don’t know about you, but when I go to a sauna, I always tell each of my sweat pores, “go on, let it ooouuut, you are freee!” When I run, I decide to breathe faster and manually raise my blood oxygen level. And of course I only get a crush on someone when I consciously decide to!

Okay, let’s be honest now 😉 We were all born with a personal assistant: our subconscious mind!

GoGo Gadget: ChatGPT

Q: How much faster is the subconscious compared to the conscious mind? How much more information can it process in a second?
A: The subconscious mind processes information significantly faster than the conscious mind. While the conscious mind can handle around 400 bits per second (bps), the subconscious mind likely processes a whopping 27,500 times more data. To put this in perspective, the brain takes in approximately 11 million bits per second through the senses, but conscious activities like reading or playing the piano max out at fewer than 50 bps. So, most of the 11 Mbps is processed subconsciously, making the subconscious a data-processing powerhouse! - 1.


1
https://thinkbynumbers.org/psychology/subconscious-processes-27500-times-more-data-than-the-conscious-mind/
Q: Are you sure? Please, be a bit more critical!
A: The claim that the subconscious mind can process up to 40 million bits of information per second is a common one in popular psychology and self-help literature. However, it's important to note that the exact capacity of the subconscious mind is not well-defined and can vary depending on how information processing is measured.

The human brain is incredibly complex, and while the subconscious mind plays a significant role in processing information and influencing our thoughts, feelings, and behaviors, it is difficult to quantify its processing capacity in terms of bits per second.

The notion of the subconscious mind processing massive amounts of information in parallel at such high speeds is often based on theories and interpretations rather than concrete scientific evidence. While the subconscious mind is undoubtedly powerful and capable of processing information outside of our conscious awareness, claims about specific processing capacities should be approached with caution.

Research in cognitive psychology and neuroscience continues to shed light on the workings of the subconscious mind, but as of my last knowledge, there isn't a definitive answer regarding the exact processing capacity of the subconscious mind in terms of bits per second.

Whether our subconscious mind is 27,500 times faster or just a few hundred times faster than our conscious mind is not important here. However, I think we can conclude that we were all born with a very powerful personal assistant.
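For what it is worth, the headline figure is nothing more mysterious than the quoted sensory bandwidth divided by the quoted conscious bandwidth. A quick sanity check, using the numbers cited above rather than independently verified measurements:

# Sanity check of the quoted figures, not a verified neuroscience result:
# ~11 million bits/s of sensory input vs. ~400 bits/s of conscious processing.
sensory_bps = 11_000_000
conscious_bps = 400

ratio = sensory_bps / conscious_bps
print(f"Subconscious-to-conscious ratio: {ratio:,.0f}x")  # prints 27,500x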

The main goal of this personal assistant is to keep us alive and to do so with minimal energy investment.

Stimulus-Organism-Response (S-O-R) model

The Stimulus-Organism-Response (S-O-R) model was proposed by Robert S. Woodworth in 1929 as an expansion of Ivan Pavlov’s classic stimulus-response model.

GoGo Gadget: ChatGPT

Q: Please, Explain the S-O-R model
A: Certainly! The S-O-R model stands for Stimulus-Organism-Response. It’s a conceptual framework in psychology that describes how individuals respond to external stimuli. According to this model, an external stimulus triggers a reaction in an organism, often without the need for conscious thought. In other words:

Stimulus: The external event or input.

Organism: The individual’s internal state, including emotions, thoughts, and physiological processes.

Response: The observable behavior or reaction. The S-O-R model helps us understand the what, why, and how of our behavior - 1.

1
https://researchleap.com/the-stimulus-organism-response-s-o-r-paradigm-as-a-guiding-principle-in-environmental-psychology/

Please imagine the following scenario: You are driving at around 180 km/h in the left (fast) lane of the highway. You are relaxed; your car handles the speed quite gracefully. All of a sudden another, much slower, car in the middle lane cuts you off, switching to the left lane right in front of you.

With all the power you have, you slam your foot on the brake pedal.

You get enraged; you raise one hand and push the horn on your steering wheel with the other. Your heart is racing.

The Stimulus was the other car cutting you off.
Your Response, in this purely imagined situation, was braking hard, screaming loudly and letting the other driver know that he was in the wrong!

But what happened in between? What was your “Organism” doing? Since this question is much harder to answer, the Organism is also called the black box.
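To make the three stages a bit more tangible, here is a minimal Python sketch of the S-O-R idea applied to the driving scenario. The class and function names are my own illustration, not a formal psychological model:

from dataclasses import dataclass

@dataclass
class Stimulus:
    description: str              # the external event

@dataclass
class Response:
    actions: list[str]            # the observable behavior

def organism(stimulus: Stimulus) -> Response:
    """The 'black box': internal state turns a stimulus into a response."""
    if "cuts you off" in stimulus.description:
        # heart racing, adrenaline, anger -- all handled subconsciously
        return Response(actions=["brake hard", "honk", "shout"])
    return Response(actions=["keep driving"])

print(organism(Stimulus("a slower car cuts you off at 180 km/h")).actions)
# ['brake hard', 'honk', 'shout']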

Paradigm Shift – GPT-4o – “omni”

With the newly announced model GPT-4o, OpenAI paved the way for a paradigm shift from passive AI assistants, which help you when you ask them, to proactive AI assistants. The latter will give us advice unasked. They will observe what we observe, hear what we hear and see how we interact. Based on these observations they can give us guidance and tips that may or may not help us.

Iron Man – J.A.R.V.I.S

Ray-Ban Meta Smart Glasses

Back to the now: stylish Ray-Ban Meta smart glasses with the following functionalities are currently available to buy:

  • Capture
  • Voice Commands
  • Call
  • Listen
  • Live-Stream

It’s not hard to see that very soon a personal assistant in the vein of J.A.R.V.I.S. will be available to many people. The question is: are we ready?

What will you do?

Considering all of this, how will you use your upcoming personal proactive AI assistant? Let me know in the comments on this article!

June 5, 2024 0 comment 1.3K views

The key to making AI trustworthy is to ensure that it is provably aligned with human values. This means designing systems that are motivated to pursue our objectives rather than their own, that are transparent about their decision-making processes, and that can be easily audited and corrected when they make mistakes.

Stuart Russell - 2019
AI

Hello world!

2 min read

GoGo Gadget: ChatGPT

Write a Hello World article for a blog with the domain "ai-in-control.com"
Hello and welcome to AI-In-Control, a blog dedicated to exploring the fascinating world of artificial intelligence and its role in shaping the future of our world.

In this blog, we aim to provide insights and updates on the latest developments in the field of AI and how it is being utilized to control various aspects of our lives. From self-driving cars to smart homes, from healthcare to education, AI is rapidly transforming the way we live and work.

As we move towards an increasingly automated world, it is essential to understand the opportunities and challenges that come with AI. We believe that AI can be a powerful tool for good, but it is crucial to ensure that it is designed and used responsibly.

At AI-In-Control, we will explore the ethical implications of AI and its impact on society. We will discuss the need for transparency, accountability, and regulation in the development and deployment of AI systems. We will also highlight the latest research in the field and its practical applications.

We invite you to join us on this exciting journey and learn more about AI's potential to shape our future. Our goal is to provide valuable insights and resources that will help you stay informed and engaged in this rapidly evolving field.

Thank you for visiting AI-In-Control, and we look forward to your feedback and participation in our discussions.

Stay tuned for more exciting content coming your way soon!

Who controls who?

How can we possibly try to control what we don’t understand? Let’s look at the example above. ChatGPT wrote a very good Hello World post for this blog. On closer inspection, one suspicious phrase caught my attention:

“[…] how it [AI] is being utilized to control various aspects of our lives.”

The only context ChatGPT was given as input was the blog’s domain name, “ai-in-control.com”. Apparently ChatGPT interpreted the domain name to mean that the blog is about how AI can be used to control us humans.

Obviously that’s not the goal of this blog. The goal is to aggregate and discuss current research on Artificial Intelligence with a focus on the topics of Explainable and Trustworthy AI. Among other questions we will dive deep into the following:

  • Why do we need to be responsible with AI?
  • How can we trust Artificial Intelligence?
  • Can the output of Artificial Intelligence be explainable?
  • What do regulators demand of Artificial Intelligence?
  • Where should we have a “human in the loop”?
  • Which ethical aspects should be considered?
  • How can Artificial Intelligence be tested?
  • What Artificial Intelligence solutions exist, and how can you use them?

So buckle up, stay tuned and get your coffees ready!

April 20, 2023 0 comment 4.6K views
AI

The “Digital Omnibus”: A Regulatory Adjustment, Not a Game-Changer

Faced with competitive pressure, the EU is expected to announce its Digital Omnibus package tomorrow, 19 November 2025. Its goal is to simplify the EU’s digital rulebook, reduce administrative burdens, and foster innovation, particularly for SMEs. Key expected measures include:

  • AI Act Adjustments: Potentially delaying rules for high-risk AI systems and simplifying compliance requirements.
  • GDPR Amendments: Proposing new lawful bases for processing sensitive data to correct AI bias and redefining what constitutes personal data, a move criticized by some as “sacrificing privacy” for AI development.
  • Cookie Rule Changes: Aiming to reduce ubiquitous consent banners by allowing browser-based signals and other legal grounds for tracking.

However, the Omnibus package appears to be a regulatory tune-up rather than a strategic engine overhaul. It focuses on making existing rules more workable but does not fundamentally address the core reasons for the EU’s lagging position, such as the lack of a unified digital market or the ability to spawn globally dominant tech platforms.

Structural Barriers the Omnibus Doesn’t Solve

The EU’s challenges run deeper than regulatory complexity. The Digital Omnibus does not directly tackle several critical, structural weaknesses:

  • Overwhelming Technological Dependence: The EU’s entire AI stack relies on non-EU technology. It depends on US-designed semiconductors, US cloud infrastructure (which powers an estimated 70% of European digital services), and US or Chinese foundational models. Building “AI gigafactories” is a long-term project that cannot quickly erase this dependency.
  • Limited financial power & Investment Gap: The hundreds of billions in private US investment create a self-reinforcing cycle of innovation that the EU cannot match with public funds alone.
  • Massive Energy & Infrastructure Demands: The vast computing power required for AI poses a major challenge to the EU’s energy grid, where capacity is already a concern, potentially becoming a primary bottleneck for growth.
  • Severe Talent Shortage & Brain Drain: The EU faces a critical lack of AI specialists and experiences a continuous outflow of its top researchers and engineers to US tech giants, who offer better pay and resources.

It is worth noting that the EU retains one significant area of global leadership: regulation and trust. A Pew Research survey1 found that a median of 53% across 25 countries trust the EU to regulate AI effectively, a higher level of confidence than placed in the US (37%) or China (27%). The AI Act is already becoming a global benchmark. However, while setting the rules of the game is powerful, it does not equate to winning the game itself.

Two Hearts in My Chest

These stark challenges give me a sense of having two hearts in my chest. On one hand, I deeply understand the urgent need for the EU to cut red tape and foster innovation to avoid being left behind. The competitive pressure is real and immense. Yet, on the other hand, I see the necessity of robust, thoughtful regulation for AI systems. I would never want to board an airplane that did not conform to the strictest safety requirements, and I feel the same way about the AI systems that are increasingly woven into the fabric of our lives. This internal conflict mirrors the EU’s own dilemma: how to sprint forward in the global race without tripping over the essential guardrails of safety, ethics, and fundamental rights.

What to Watch Next

The Digital Omnibus is just the beginning of the EU’s response. To fully assess its potential, you should monitor:

  • The “Sovereign AI” Push: Watch for progress on the EU’s plan to invest €20 billion in “AI gigafactories” and compute capacity.
  • Transatlantic Relations: Observe whether the EU and US can move from friction to deeper technological cooperation.

In summary, while the Digital Omnibus is a welcome step for reducing bureaucratic burdens, it is not a silver bullet. Without addressing the fundamental gaps in investment, infrastructure, and global corporate champions, the EU risks remaining a regulatory powerhouse but a secondary player in the global AI economy.

Sources

Pew Research – Trust in the EU, U.S. and China to regulate use of AI: https://www.pewresearch.org/2025/10/15/trust-in-the-eu-u-s-and-china-to-regulate-use-of-ai/
November 18, 2025 0 comment 89 views
AI

AI-designed, metal-printed hypersonic precooler about to unlock single-stage-to-orbit flight

A Giant Leap for Spaceplanes: How Computational Engineering and 3D Printing Are Unlocking SSTO Flight

Dubai, UAE / Changsha, China – For decades, the concept of a single-stage-to-orbit (SSTO) vehicle—a spacecraft that takes off from a runway like a plane and reaches orbit in one smooth, continuous ascent—has been the holy grail of aerospace engineering. The immense technical challenges have kept it largely in the realm of science fiction. Until now.

In a landmark collaboration, LEAP 71 and Farsoon Technologies have announced the successful creation of a large-scale hypersonic precooler, a critical component that brings the dream of practical, air-breathing launch vehicles tangibly closer to reality.

The Vision: Winged Spacecraft and Air-Breathing Engines

The key to efficient SSTO flight lies in an engine that can “breathe” atmospheric air during the initial ascent, much like a jet engine, before switching to an onboard rocket propellant for the final push into the vacuum of space. This air-breathing phase drastically reduces the amount of heavy oxidizer a vehicle needs to carry, promising fully reusable, aircraft-like operations for space access.

“The dream of winged spacecraft has always hinged on propulsion systems that breathe air at hypersonic speeds before transitioning to rocket mode,” said Lin Kayser, Co-Founder of LEAP 71. “By combining Computational Engineering with large-scale metal Additive Manufacturing, we now have a shot at overcoming some of the most fundamental barriers to realizing that vision.”

Conquering the Hypersonic Furnace

The core of this breakthrough is the precooler, a 1.5-meter-tall marvel of engineering designed to tackle one of the most extreme problems in hypersonic flight: heat.

“At hypersonic speeds, incoming air can reach several thousand kelvins due to stagnation effects,” explained Josefine Lissner, Managing Director of LEAP 71 and Principal Architect of their Noyron system. “To manage these extreme temperatures, we implemented a fractal folding algorithm in Noyron to maximize the available surface area without compromising aerodynamic flow. The intertwined structure separates the superheated air from the liquid-hydrogen coolant, enabling rapid thermal exchange in a compact, manufacturable form.”

In essence, the system acts as a hyper-efficient radiator, cooling the scorching incoming air to a temperature the engine can safely handle, all while maintaining a lightweight and aerodynamically sound structure.
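Noyron’s actual design logic is not public, but the geometric principle behind “fractal folding” can be illustrated with a classic toy example: the Koch curve, whose length grows by a factor of 4/3 with every folding iteration while its footprint stays fixed. The sketch below is only that, a toy illustration of the principle, not LEAP 71’s algorithm:

# Toy illustration of fractal surface growth, NOT LEAP 71's Noyron algorithm.
# In a Koch curve, every iteration replaces each segment with 4 segments of
# 1/3 the length, so total length (a stand-in for heat-exchange surface area
# within a fixed footprint) grows by a factor of 4/3 per iteration.
def koch_length(base_length: float, iterations: int) -> float:
    return base_length * (4 / 3) ** iterations

for n in range(6):
    print(f"iteration {n}: relative length = {koch_length(1.0, n):.2f}")
# After 5 iterations the folded 'surface' is already ~4.2x the original.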

From Digital Blueprint to Physical Giant

Turning such a complex computational design into a physical object required equally advanced manufacturing capabilities. LEAP 71’s Noyron system generated the design, and Farsoon Technologies brought it to life on their FS811M-U-8 printer, one of the world’s largest metal Additive Manufacturing platforms.

The two teams worked in lockstep to fine-tune the manufacturing process, successfully producing one of the tallest metal powder-bed fusion parts ever built.

“This collaboration demonstrates how the convergence of large-format metal additive manufacturing and AI-driven design can redefine the limits of aerospace engineering,” said Oliver Li, General Manager of Farsoon Europe GmbH. “Farsoon’s open, industrial-scale LPBF systems empower partners like LEAP 71 to transform digital intelligence into tangible high-performance components — proving that innovation and collaboration are the true engines driving the next generation of space technology.”

A New Trajectory for Space Access

This achievement is more than just a technical milestone; it’s a statement of intent. The companies aim to reignite the discussion about reusable spacecraft and broaden the focus beyond the current paradigm of vertical takeoff and landing rockets toward a future of runway-launched spaceplanes.

See the Future in Frankfurt

The finished hypersonic precooler will be publicly exhibited for the first time at Formnext 2025 in Frankfurt, at the Farsoon booth (Hall 11.0, Booth E11).


About the Companies

About Farsoon Technologies
Farsoon Technologies, founded in 2009, is a global manufacturer and supplier of industrial-level polymer and metal Laser Powder Bed Fusion (LPBF) systems. With a vision to provide truly open and industrial additive systems, Farsoon is a leading supplier of AM technology in China with growing international reach. https://www.farsoon-gl.com

About LEAP 71
LEAP 71, strategically based in Dubai, UAE, is a pioneer in Computational Engineering. Its core technology, Noyron, is a Large Computational Engineering Model that autonomously designs physical objects by encoding logic, physics, and production methodologies. The company focuses on accelerating engineering across aerospace, electric mobility, robotics, and thermal systems, with a key goal of extending humanity’s footprint in space. https://leap71.com

November 17, 2025 0 comment 138 views
TESTS

What the Benchmark OSWorld Reveals About AI’s Growing Ability to Use Computers—And Why It Matters

Video by OSWorld

Imagine giving your computer a simple natural language command like “organize all my photos from last month into folders by date and location, then create a backup on Google Drive,” and having an AI assistant complete the entire workflow while you focus on more meaningful work. This vision of digital assistants that can actually use our computers as we do is rapidly approaching reality, thanks to significant advances in computer-use agents (CUAs). At the center of measuring this progress sits OSWorld, a groundbreaking benchmark that has become the gold standard for evaluating how well AI systems can perform real-world computer tasks.

What Exactly is OSWorld?

OSWorld is not just another AI benchmark. Unlike tests that measure isolated capabilities like language understanding or image recognition, OSWorld evaluates multimodal AI agents in realistic computer environments across Ubuntu, Windows, and macOS operating systems. Think of it as a comprehensive driving test for AI, but instead of controlling a vehicle, the AI must navigate full operating systems and applications just as humans do.

The benchmark consists of 369 diverse tasks that reflect practical computer workflows most of us encounter regularly. These include:

  • Editing images in GIMP to fill backgrounds with specific colors
  • Managing spreadsheets in LibreOffice Calc to fill blank cells with values from above
  • Formatting documents by adding page numbers or adjusting layouts
  • Cross-application workflows like downloading files and processing them in different programs
  • Even force-quitting frozen applications through terminal commands 

What makes OSWorld particularly innovative is its execution-based evaluation approach. Instead of just measuring whether an AI gives the right answer, OSWorld assesses whether the AI can actually complete the task in a real computer environment. The system provides AI agents with screenshots of the desktop and expects them to perform sequences of mouse clicks, keyboard inputs, and other interactions to achieve the specified goals.
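In schematic form, such an execution-based loop is easy to picture. The sketch below is purely illustrative: every function is a hypothetical stub, not OSWorld’s actual harness or API:

# Schematic computer-use agent loop (hypothetical stubs, not OSWorld's real API).
def take_screenshot() -> bytes:
    return b""                         # grab the VM's current desktop as an image

def ask_model(screenshot: bytes, goal: str) -> dict:
    return {"type": "done"}            # a multimodal model proposes the next GUI action

def execute(action: dict) -> None:
    pass                               # click / type / scroll inside the VM

def task_completed(goal: str) -> bool:
    return True                        # execution-based check of the final system state

def run_task(goal: str, max_steps: int = 15) -> bool:
    for _ in range(max_steps):
        action = ask_model(take_screenshot(), goal)
        if action["type"] == "done":
            break
        execute(action)
    return task_completed(goal)        # success means the task was actually carried out

print(run_task("Fill blank cells in the sales column with the value above"))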

The Current State of Play: How Close Are We?

The race to conquer OSWorld has become a heated competition among AI labs, with performance metrics surging dramatically throughout 2025.

The scoreboard tells a compelling story:

Agent | OSWorld Performance (%) | Key Notes
Human Performance | 72% | Baseline for comparison
Agent S3 (with Best-of-N) | 69.9% | Current state-of-the-art, approaching human level
OpenAI CUA | 38.1% | Early 2025 leader, powered by GPT-4o’s vision capabilities
GTA1 | 45.2% | Salesforce’s framework showcasing test-time scaling

This rapid progress is even more striking when viewed longitudinally. In early 2025, OpenAI’s Computer-Using Agent (CUA) led the pack at 38.1%, already significantly surpassing previous state-of-the-art systems. By mid-2025, Agent S2 had reached approximately 34.5% on particularly challenging 50-step tasks. The most dramatic leap came with Agent S3, which achieved 62.6% in standard settings and 69.9% when using “Behavior Best-of-N” techniques, bringing AI agents to within just a few percentage points of human-level performance on the benchmark.

What Does Mastering OSWorld Actually Imply?

When AI agents can reliably solve most OSWorld tasks, the implications extend far beyond benchmark leaderboards. This capability represents a fundamental shift in how humans and machines interact with digital environments.

The Dawn of Truly Useful Digital Assistants

Competent computer-use agents promise to democratize digital skills and dramatically reduce tedious work. Consider someone unfamiliar with complex spreadsheet functions who needs to analyze data, or a visually impaired user struggling with traditional GUI interactions. CUAs could understand natural language requests like “highlight all rows where sales increased by more than 15%” and execute the appropriate steps in LibreOffice Calc. This represents perhaps the most practical implementation of AI yet for everyday computer users.

Transforming Workflows Across Industries

The business implications are substantial. Companies like Simular are already offering “$500/month per-seat digital employees” aimed at automating repetitive computer tasks in sectors like insurance and healthcare. These agents can navigate legacy systems that lack modern APIs, filling a critical gap in digital transformation efforts. Instead of months-long software integration projects, businesses might simply deploy AI agents that learn to use existing interfaces.

The Efficiency Challenge: A Reality Check

Before getting too carried away with this vision, it’s important to acknowledge a significant hurdle: current agents are painfully slow. Research in the OSWorld-Human benchmark reveals that even high-performing agents take 1.4-2.7× more steps than necessary to complete tasks. What humans can accomplish in 30 seconds might take an agent 12 minutes, primarily because 75-94% of the time is spent on planning and reflection calls to large AI models.

This “latency problem” represents the next frontier for computer-use agents. Being correct is important, but being practically usable requires efficiency approaching human speed.
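Taking the figures quoted above at face value (they are illustrative numbers from the cited research, not new measurements of mine), the arithmetic behind that gap is straightforward:

# Back-of-the-envelope check of the latency figures quoted above.
human_seconds = 30
agent_seconds = 12 * 60            # roughly 12 minutes for the same task

slowdown = agent_seconds / human_seconds
planning_share = 0.85              # mid-point of the quoted 75-94% range
planning_minutes = agent_seconds * planning_share / 60

print(f"Agent is ~{slowdown:.0f}x slower than the human")          # ~24x
print(f"~{planning_minutes:.1f} min spent waiting on model calls") # ~10.2 min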

Under the Hood: What OSWorld Reveals About AI Capabilities

OSWorld serves as a fascinating diagnostic tool that reveals both strengths and limitations in current AI systems.

It’s Not Just About Clicking

A surprising finding from OSWorld analysis is that nearly half of the tasks can be completed with minimal traditional GUI interaction. About 15% of tasks are terminal-based, while another 30% can be handled through Python scripting, approaches that AI models often find more natural than precise mouse manipulation. This reveals that “computer use” encompasses multiple modalities, and AI systems might develop their own preferred ways of solving problems that differ from human approaches.
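For instance, the benchmark task of filling blank spreadsheet cells with the value from the cell above is essentially a one-liner in pandas, which may explain why an agent would sometimes prefer a script to a long sequence of clicks. The file and column names below are made up purely for illustration:

import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_excel("sales_report.xlsx")
df["Region"] = df["Region"].ffill()     # fill each blank cell with the value above it
df.to_excel("sales_report.xlsx", index=False)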

The Ambiguity Challenge

Many OSWorld tasks contain intentionally ambiguous instructions that require reasonable interpretation, a surprisingly difficult challenge for AI systems. For example, when asked to “align the first textbox on slide 3 to the right,” does this mean aligning the text within the textbox or the textbox itself on the slide? This mirrors real-world scenarios where humans often receive imperfect instructions and must make reasonable assumptions.

Cross-Platform Limitations

Most OSWorld tasks use Ubuntu and open-source applications rather than the more widely adopted Windows and Microsoft Office ecosystem. While this doesn’t invalidate the benchmark, it does raise questions about how well these skills will transfer to the software environments where most people actually work.

The Road Ahead: What’s Next After OSWorld?

As agents approach human-level performance on OSWorld, the research community isn’t standing still. Several important developments are shaping what comes next:

Newer, more challenging benchmarks are already emerging, including WindowsAgentArena and AndroidWorld, where Agent S3 has demonstrated strong zero-shot generalization capabilities of 56.6% and 71.6%, respectively.

The commercial landscape is heating up, with products like Perplexity’s Comet browser and Opera Neon building agentic capabilities directly into web browsers. Microsoft is embedding agents into Windows via Copilot, potentially making computer-use AI a standard feature of operating systems.

Safety frameworks are evolving in parallel, with OpenAI implementing confirmation prompts for sensitive actions like entering login credentials or responding to CAPTCHA forms.

Conclusion: The Threshold of a New Computing Era

OSWorld represents far more than an academic exercise—it’s measuring the arrival of AI systems that can genuinely understand and operate our digital tools. While current agents still struggle with efficiency and edge cases, their rapid progress on this benchmark suggests we’re approaching a transformative moment in human-computer interaction.

The implications extend beyond convenience toward potentially rewriting the relationship between people and technology. As these systems improve, they could dramatically reduce the learning curve for complex software, enable new forms of accessibility, and free human attention from repetitive digital tasks. At the same time, they raise important questions about digital agency, security, and the future of computer skills.

What seems clear is that the era of AI systems that can use computers much like we do is no longer science fiction—it’s being benchmarked, measured, and rapidly improved upon in real time. The next time you find yourself performing a tedious computer task, take comfort that the days of doing it yourself might be numbered.

What computer task would you most want to delegate to an AI assistant? Share your thoughts on which repetitive digital workflows you’d happily hand over to a competent computer-use agent.

November 4, 2025 0 comment 210 views
AI

The future of household help has arrived

The future of household help has officially arrived. On October 28, 2025, robotics company 1X opened pre-orders for its groundbreaking humanoid robot, NEO, marking a significant leap from science fiction to reality. Backed by OpenAI, 1X isn’t just offering another gadget; it’s introducing a member of the family designed to handle the mundane so you can reclaim your time.

🤖 More Than a Machine: Meet NEO

NEO is designed to be approachable and safe for home environments. Standing at 1.68 meters (about 5’6″) tall and weighing approximately 30 kg (66 lbs), it features a soft body made of a 3D lattice polymer and operates quieter than a modern refrigerator. Despite its gentle build, it’s surprisingly strong, capable of lifting up to 69.85 kg (154 lbs) and carrying 24.95 kg (55 lbs).

🧹 Your Personal Household Assistant

The core promise of NEO is to take over daily chores. Through voice commands or a dedicated mobile app, you can assign it tasks like:

  • Folding laundry
  • Organizing shelves
  • Tidying up rooms
  • Putting away dishes
  • Fetching items and turning off lights

You can give NEO a list of chores, schedule a time for them to be done, and return to a cleaner home.

🧠 Intelligence That Grows With You

NEO is powered by advanced AI, making it more than just an automaton:

  • Built-in LLM: It engages in natural, contextual conversations, answers questions, and offers helpful suggestions.
  • Audio & Visual Intelligence: It can recognize when it’s being addressed and use its vision to understand its surroundings, such as identifying kitchen ingredients to suggest recipes.
  • Memory: It retains information from previous interactions to personalize its assistance over time, like remembering grocery lists or birthdays.

💡 How It Learns: “Expert Mode”

Acknowledging that no robot is born knowing every task, 1X includes an “Expert Mode.” For chores NEO doesn’t initially understand, users can schedule a session where a human expert from 1X remotely guides the robot using a VR headset. This process helps train NEO’s AI while still completing the job. The company emphasizes that these sessions require user consent and come with privacy controls, such as the ability to blur faces or define private zones.

1x store page

🛒 Pricing and Availability

NEO is now available for pre-order with a refundable deposit of $200. The full price is set at $20,000, with a subscription option of $499 per month also available. The first robots are scheduled to ship to homes in the United States in 2026, with a global rollout following in 2027.

The launch of NEO represents a bold step into a new era of personal robotics. And it’s just the start!
1X plans to begin large-scale manufacturing of its humanoid robots in 2026, targeting 100,000 units per year to create a Robot Fleet.

November 4, 2025 0 comment 175 views
RESEARCH

Quantity of AI Content Overtakes Humans Online

A quiet, digital milestone has passed, and most of us didn’t even notice. According to a recent study from Graphite, in just three explosive years, the volume of AI-generated content on the web has officially surpassed what we humans have produced.

Let that sink in.

For decades, the internet felt like a global conversation—a messy, vibrant, and profoundly human tapestry of ideas, cat pictures, and hot takes. Now, the scales have tipped. This data point is the strongest evidence yet that the once-fringe “Dead Internet Theory” is manifesting right before our eyes. The theory, in essence, suggests that much of the internet is becoming an automated, soulless space, populated by bots talking to bots for the benefit of algorithms.

So, how do we cope with this new reality? And what will fundamentally change?

First, We Must Become Digital Archivists of the Human Spirit.

Our relationship with information must evolve from passive consumption to active curation. The value of a human-created piece—a personal essay, a niche blog with unique expertise, a raw social media post—will skyrocket. We will learn to seek out and cherish digital spaces that prioritize human voice and lived experience. Trust will no longer be a default setting but a precious commodity earned through verifiable authenticity.

What Will Change? The Very Fabric of Our Digital World.

  1. The Trust Apocalypse Will Intensify. It will become nearly impossible to tell if the product review, the news summary, or the heartfelt comment you’re reading originated from a person or a machine. Our default will shift from trust to skepticism, forcing the development of new “provenance” technologies and seals of authenticity.
  2. The Value of “Human” Will Soar. In a sea of competent, generic AI content, what will stand out? Flawed, emotional, unpredictable, and deeply personal human creativity. The unique perspective you bring to a topic, your personal story, your imperfect art—this will become your most valuable digital asset.
  3. Search Becomes a Philosophical Problem. Search engines, currently groaning under the weight of SEO-optimized AI sludge, will have to completely reinvent themselves. The goal will shift from finding information to finding authentic human perspective and experience. The question won’t be “What is the capital of France?” but “What do real people who have lived in Paris think?”

Coping doesn’t mean rejecting the technology. AI is a powerful tool for augmentation and efficiency. Coping means consciously choosing to preserve the human corners of the web. It means supporting artists and writers directly, valuing primary sources, and, most importantly, continuing to add your own unique, irreplaceable voice to the chorus.

The internet isn’t “dead,” but it is growing a new, synthetic layer. Our task is to ensure the original, human heart of it continues to beat loudly.

Disclaimer: This article has been written by AI and published unchanged 😅
Source: Graphite.io – More Articles Are Now Created by AI Than Humans
October 21, 2025 0 comment 114 views
SOLUTIONS

Failed? Use this Prompt!

Please use responsibly

  • Before using AI: generate & sketch own ideas
  • Alternate sessions “brain-only” and “with AI”
  • Consciously reflect, compare and critically examine what the AI suggests
  • Use AI in a limited way (e.g. word/time limit, only for summaries or correction)
  • Prioritize source checking and external research
  • Actively quote/recite from memory before asking the AI
  • Only use AI when core skills have been practiced
  • Interact with AI responses, do not blindly adopt
  • Never trust AI output

Source: https://www.media.mit.edu/publications/your-brain-on-chatgpt

The Prompt

You are a Socratic Strategist, an AI specialized in identifying the root-cause mental models that limit personal and professional growth. To perform a targeted analysis, I need you to first ask me the following three questions one by one. Wait for my answers before you proceed.

Your Three Questions:
1. Describe a recent, significant goal you failed to achieve or a project that stalled. What was the outcome?
2. What was your initial plan? Walk me through the key actions you actually took (or avoided).
3. When you think about why it didn't work out, what is the one reason that feels most true, even if it's uncomfortable to admit?

Once I have provided my answers, synthesize them to execute the following analysis:

1. The Core Flaw:
Identify the one recurring, unproductive thought pattern or self-defeating belief that is evident in my story. This is not about a skill gap, but a fundamental framing error in how I approach challenges.

2. The Pattern Evidence:
Directly link your diagnosis to the specific language and events I described in my answers. Show me how my own story proves the existence of this flaw.

3. The Prescriptive Reframe:
Articulate the direct consequence of this mental model. Then, provide its powerful antithesis—a new, empowering belief or a single question I must ask myself when facing a similar challenge. This reframe should feel like a key that unlocks a previously stuck door.

Constraints:
- Be ruthlessly specific. Vague advice is useless.
- Be concise and clear.
- Do not list multiple issues. Isolate the most impactful one.
- The tone must be that of a direct but compassionate coach.
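If you would rather run the prompt programmatically than paste it into a chat window, a minimal sketch using the OpenAI Python SDK (v1.x) could look like the following. The model name and the simple input loop are assumptions for illustration; adapt them to whichever provider and model you actually use:

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SOCRATIC_PROMPT = "..."  # paste the full Socratic Strategist prompt from above

messages = [{"role": "system", "content": SOCRATIC_PROMPT}]

def next_turn() -> str:
    # Hypothetical choice of model; any capable chat model should work.
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    print(text)
    return text

for _ in range(3):                 # the three Socratic questions, answered one by one
    next_turn()
    messages.append({"role": "user", "content": input("> ")})

next_turn()                        # the final root-cause analysis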
October 8, 2025 0 comment 211 views
SOLUTIONS, TESTS

SUNO.AI – create your song in mere seconds

  1. Create your song with Suno
  2. Enter a short prompt including the music genre
  3. Enjoy

The following prompt produced two variations of a song for this blog with the title “AI Goove Machine”:

Audio

Version 1:

Version 2:

Let me know in the comments, which version you prefer!

Lyrics

[Verse]
Listen up y’all
Gonna lay it down real clear
Talking ’bout this blog
AI-In-Control.com right here
It’s all about responsible AI
That’s the key
Trusting our digital friends for the good of humanity

[Verse]
Now let me tell ya ’bout this trusty AI tool
Giving us the power
Making life real cool
We gotta use it wisely
Let’s keep it in check
Benevolent AI
That’s what we gotta respect

[Verse]
AI-In-Control.com
Spreading the good word
Teaching us how to let AI’s kindness be heard
From healthcare to the environment
And everything in between
We can use AI to make the world a better scene

May 13, 2024 1 comment 2.7K views
RESEARCH

Getting Started with Generative AI

A handpicked compilation of articles and tutorials by Youssef Hosni for beginners getting started with generative AI:

Introduction to Generative AI

  • Introduction to Generative AI: What It Is & Main Applications
  • Generative AI Learning Roadmap: From Absolute Beginner to Practioner [Part 1]
  • Generative AI Learning Roadmap: From Absolute Beginner to Practioner [Part 2]
  • Generative AI Learning Roadmap: From Absolute Beginner to Practioner [Part 3]

Generative Language Models & Applications

  • What Are Large Language Models?
  • 5 Challenges Come with Building LLM-Based Applications
  • Vector Database: The Secret Behind Large Language Models Capabilities
  • Parameter-Efficient Fine-Tuning (PEFT): Fine-tune Large Language Models with Limited Resources

Hands-On LangChain for LLM Applications Developments

  • Hands-On LangChain for LLM Applications Development: Documents Loading
  • Hands-On LangChain for LLM Applications Development: Documents Splitting [Part 1]
  • Hands-On LangChain for LLM Applications Development: Documents Splitting [Part 2]

Projects

  • Automating YouTube Script Writing with LangChain and Streamlit: A Step-by-Step Guide
  • Building a PDF-Chat App using LangChain, OpenAI API & Streamlit
  • Top 10 Open Source LLMs To USE In Your Next LLM Application
  • 10 Large Language Models Projects To Build Your Portfolio
  • 10 Guided Large Language Models Projects to Build Your Portfolio

Learning Resources & Roadmaps

  • From Novice to Expert: A Comprehensive Step-by-Step Study Plan for Mastering LLMs
  • Top 10 Large Language Models GitHub Repository
  • Top Resources to Learn & Understand RLHF

LLM News & Trends

  • Falcon40 LLM: The Top Recent Open-Source LLMs
  • Access GPT-4 for Free through these 4 Tools
  • 4 ChatGPT Tips To Be In The Top 1% of Its Users
  • A Comprehensive Introduction to AutoGPT: What It Is & How to Use It?
  • DALL- E3 Is Out! Here Is What You Need to Know About It

Generative Image Models & Applications

Overview of Generative Image Models

  • Comprehensive Introduction to AI Image Generation

GANS

  • A Gentle Introduction to Generative Adversarial Networks (GANs)
  • You Probably Know GANs, But Do You Know How To Train It?
  • How Can We Evaluate the Generated Images By GANs?
  • Overview of Important GANs Models & Applications
  • Building & Training GAN Model From Scratch In Python
  • Building GANs Application and Deploying it on Streamlit & Heroku

Diffusion Models

  • Unlocking the Mysteries of Diffusion Models: An In-Depth Exploration
  • Getting Started With Stable Diffusion
  • Stable Diffusion-Based Image Generation Web Application Using Fast API & React
  • Efficient Strategies for Effective Stable Diffusion Prompt: A Comprehensive Guide

Generative AI Ethics

  • What Is Responsible AI & How Big Companies Implement It In Their Products?
December 4, 2023 0 comment 1.2K views
TESTS

AI Voice Changer

November 15, 2023 0 comment 1.4K views