AI in Control
  • AI
  • RESEARCH
  • SOLUTIONS
  • TESTS
Wednesday, July 9, 2025

Made with ❤️ in Hamburg

AI

Proactive AI Assistants

What’s so special?

Have you ever forgotten to breathe? When was the last time you told your heart to beat? Do you remember the sinew you stretched the last time you picked up your glass of water? Did you feel it?

I don’t know about you, but when I go to a sauna, I always tell each of my sweat pores to “go on, let it ooouuut, you are freee”! When I run, I decide to breathe faster and manually increase my blood oxygen level. Oh, and of course I only get a crush on someone when I consciously decide to do so!

Okay, let’s be honest now 😉 We were all born with a personal assistant: our subconscious mind!

GoGo Gadget: ChatGPT

Q: How much faster is the subconscious compared to the conscious mind? How much more information can it process in a second?
A: The subconscious mind processes information significantly faster than the conscious mind. While the conscious mind can handle around 400 bits per second (bps), the subconscious mind likely processes a whopping 27,500 times more data. To put this in perspective, the brain takes in approximately 11 million bits per second through the senses, but conscious activities like reading or playing the piano max out at fewer than 50 bps. So, most of the 11 Mbps is processed subconsciously, making the subconscious a data-processing powerhouse! - 1.


1
https://thinkbynumbers.org/psychology/subconscious-processes-27500-times-more-data-than-the-conscious-mind/
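For what it’s worth, the famous “27,500×” factor is nothing more than the ratio of the two figures the answer itself cites, as a quick arithmetic check shows:

```python
# Figures quoted in the ChatGPT answer above (their accuracy is debatable,
# as the next answer points out; this only checks the arithmetic).
sensory_input_bps = 11_000_000   # total sensory intake, bits per second
conscious_bps = 400              # conscious processing, bits per second

# The widely repeated "27,500x" factor is just the ratio of the two.
ratio = sensory_input_bps / conscious_bps
print(ratio)  # 27500.0
```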
Q: Are you sure? Please, be a bit more critical!
A: The claim that the subconscious mind can process up to 40 million bits of information per second is a common one in popular psychology and self-help literature. However, it's important to note that the exact capacity of the subconscious mind is not well-defined and can vary depending on how information processing is measured.

The human brain is incredibly complex, and while the subconscious mind plays a significant role in processing information and influencing our thoughts, feelings, and behaviors, it is difficult to quantify its processing capacity in terms of bits per second.

The notion of the subconscious mind processing massive amounts of information in parallel at such high speeds is often based on theories and interpretations rather than concrete scientific evidence. While the subconscious mind is undoubtedly powerful and capable of processing information outside of our conscious awareness, claims about specific processing capacities should be approached with caution.

Research in cognitive psychology and neuroscience continues to shed light on the workings of the subconscious mind, but as of my last knowledge, there isn't a definitive answer regarding the exact processing capacity of the subconscious mind in terms of bits per second.

Whether our subconscious mind is 27,500 times faster or just a few hundred times faster than our conscious mind is not important here. Either way, I think we can conclude that we were all born with a very powerful personal assistant.

The main goal of this personal assistant is to keep us alive and to do so with minimal energy investment.

Stimulus-Organism-Response (S-O-R) model

The Stimulus-Organism-Response (S-O-R) model was proposed by Robert S. Woodworth in 1929 as an expansion of Ivan Pavlov’s classic stimulus-response model.

GoGo Gadget: ChatGPT

Q: Please explain the S-O-R model
A: Certainly! The S-O-R model stands for Stimulus-Organism-Response. It’s a conceptual framework in psychology that describes how individuals respond to external stimuli. According to this model, an external stimulus triggers a reaction in an organism, often without the need for conscious thought. In other words:

Stimulus: The external event or input.

Organism: The individual’s internal state, including emotions, thoughts, and physiological processes.

Response: The observable behavior or reaction. The S-O-R model helps us understand the what, why, and how of our behavior - 1.

1
https://researchleap.com/the-stimulus-organism-response-s-o-r-paradigm-as-a-guiding-principle-in-environmental-psychology/

Please imagine the following scenario: You are driving at around 180 km/h in the left (fast) lane of the highway. You are relaxed; your car handles the speed gracefully. All of a sudden another, much slower, car in the middle lane cuts you off, switching to the left lane right in front of you.

With all the power you have, you slam your foot on the brake pedal.

You are enraged: you raise one hand and push the horn on your steering wheel with the other. Your heart is racing.

The Stimulus was the other car cutting you off.
Your Response, in this purely imagined situation, was braking hard, screaming loudly and letting the other driver know he was in the wrong!

What happened in between, though? What was your “Organism” doing? Since this question is much harder to answer, the Organism is also called the black box.
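The highway scenario can be caricatured as a tiny S-O-R pipeline. This is a toy sketch, not a cognitive model; the class, the stress threshold and the string labels are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Organism:
    """The 'black box': internal state that mediates between stimulus and response."""
    stress_level: float = 0.0

    def process(self, stimulus: str) -> str:
        # Internal state changes first (emotion, physiology, subconscious appraisal)...
        if stimulus == "car cuts you off":
            self.stress_level = 1.0
        # ...and the observable response follows from that state, not from the
        # stimulus directly. That indirection is the "O" in S-O-R.
        return "brake hard and honk" if self.stress_level > 0.5 else "keep cruising"

driver = Organism()
response = driver.process("car cuts you off")
print(response)  # brake hard and honk
```

The point of the sketch: the same stimulus fed to an organism in a different internal state (say, a relaxed passenger) could yield a different response, which is exactly why the middle box is hard to inspect.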

Paradigm Shift – GPT4o – “omni”

With the newly announced model GPT-4o, OpenAI paved the way for a paradigm shift from passive AI assistants, which help you when you ask them, to proactive AI assistants. The latter will give us advice unprompted. They will observe what we observe, hear what we hear and see how we interact. Based on these observations they can give us guidance and tips that may or may not help us.

Iron Man – J.A.R.V.I.S

Ray-Ban Meta Smart Glasses

Back to the now. Stylish Ray-Ban Meta smart glasses with the following capabilities are currently available to buy:

  • Capture
  • Voice Commands
  • Call
  • Listen
  • Live-Stream

It’s not hard to see that very soon a personal assistant in the vein of J.A.R.V.I.S. will be available to many people. The question is: are we ready?

What will you do?

Considering all of this, how will you use your upcoming proactive AI assistant? Let me know in the comments on this article!

June 5, 2024

The key to making AI trustworthy is to ensure that it is provably aligned with human values. This means designing systems that are motivated to pursue our objectives rather than their own, that are transparent about their decision-making processes, and that can be easily audited and corrected when they make mistakes.

Stuart Russell - 2019
AI

Hello world!


GoGo Gadget: ChatGPT

Write a Hello World article for a blog with the domain "ai-in-control.com"
Hello and welcome to AI-In-Control, a blog dedicated to exploring the fascinating world of artificial intelligence and its role in shaping the future of our world.

In this blog, we aim to provide insights and updates on the latest developments in the field of AI and how it is being utilized to control various aspects of our lives. From self-driving cars to smart homes, from healthcare to education, AI is rapidly transforming the way we live and work.

As we move towards an increasingly automated world, it is essential to understand the opportunities and challenges that come with AI. We believe that AI can be a powerful tool for good, but it is crucial to ensure that it is designed and used responsibly.

At AI-In-Control, we will explore the ethical implications of AI and its impact on society. We will discuss the need for transparency, accountability, and regulation in the development and deployment of AI systems. We will also highlight the latest research in the field and its practical applications.

We invite you to join us on this exciting journey and learn more about AI's potential to shape our future. Our goal is to provide valuable insights and resources that will help you stay informed and engaged in this rapidly evolving field.

Thank you for visiting AI-In-Control, and we look forward to your feedback and participation in our discussions.

Stay tuned for more exciting content coming your way soon!

Who controls who?

How can we possibly try to control what we don’t understand? Let’s look at the example above. ChatGPT wrote a very good Hello World post for this blog. On closer inspection, one suspicious phrase caught my attention:

“[…] how it [AI] is being utilized to control various aspects of our lives.”

The only context ChatGPT was given as input was the blog’s domain name, “ai-in-control.com”. Apparently, in this passage ChatGPT interpreted the domain name to mean that the blog is about how AI can be used to control us humans.

Obviously that’s not the goal of this blog. The goal is to aggregate and discuss current research on Artificial Intelligence with a focus on the topics of Explainable and Trustworthy AI. Among other questions we will dive deep into the following:

  • Why do we need to be responsible with AI?
  • How can we trust Artificial Intelligence?
  • Can the output of Artificial Intelligence be explainable?
  • What do regulators demand of Artificial Intelligence?
  • Where should we have a “human in the loop”?
  • Which ethical aspects should be considered?
  • How can Artificial Intelligence be tested?
  • What Artificial Intelligence Solutions do exist and how can you use them?

So buckle up, stay tuned and get your coffees ready!

April 20, 2023
SOLUTIONS, TESTS

SUNO.AI – create your song in mere seconds

  1. Create your song with Suno
  2. Enter a short prompt including the music genre
  3. Enjoy

The following prompt produced two variations of a song for this blog with the title “AI Groove Machine”:

Audio

Version 1:

Version 2:

Let me know in the comments, which version you prefer!

Lyrics

[Verse]
Listen up y’all
Gonna lay it down real clear
Talking ’bout this blog
AI-In-Control.com right here
It’s all about responsible AI
That’s the key
Trusting our digital friends for the good of humanity

[Verse]
Now let me tell ya ’bout this trusty AI tool
Giving us the power
Making life real cool
We gotta use it wisely
Let’s keep it in check
Benevolent AI
That’s what we gotta respect

[Verse]
AI-In-Control.com
Spreading the good word
Teaching us how to let AI’s kindness be heard
From healthcare to the environment
And everything in between
We can use AI to make the world a better scene

May 13, 2024
RESEARCH

Getting Started with Generative AI

A handpicked compilation of articles and tutorials for beginners to start with generative AI by Youssef Hosni:

Introduction to Generative AI

  • Introduction to Generative AI: What It Is & Main Applications
  • Generative AI Learning Roadmap: From Absolute Beginner to Practitioner [Part 1]
  • Generative AI Learning Roadmap: From Absolute Beginner to Practitioner [Part 2]
  • Generative AI Learning Roadmap: From Absolute Beginner to Practitioner [Part 3]

Generative Language Models & Applications

  • What Are Large Language Models?
  • 5 Challenges Come with Building LLM-Based Applications
  • Vector Database: The Secret Behind Large Language Models Capabilities
  • Parameter-Efficient Fine-Tuning (PEFT): Fine-tune Large Language Models with Limited Resources

Hands-On LangChain for LLM Applications Developments

  • Hands-On LangChain for LLM Applications Development: Documents Loading
  • Hands-On LangChain for LLM Applications Development: Documents Splitting [Part 1]
  • Hands-On LangChain for LLM Applications Development: Documents Splitting [Part 2]

Projects

  • Automating YouTube Script Writing with LangChain and Streamlit: A Step-by-Step Guide
  • Building a PDF-Chat App using LangChain, OpenAI API & Streamlit
  • Top 10 Open Source LLMs To USE In Your Next LLM Application
  • 10 Large Language Models Projects To Build Your Portfolio
  • 10 Guided Large Language Models Projects to Build Your Portfolio

Learning Resources & Roadmaps

  • From Novice to Expert: A Comprehensive Step-by-Step Study Plan for Mastering LLMs
  • Top 10 Large Language Models GitHub Repository
  • Top Resources to Learn & Understand RLHF

LLM News & Trends

  • Falcon40 LLM: The Top Recent Open-Source LLMs
  • Access GPT-4 for Free through these 4 Tools
  • 4 ChatGPT Tips To Be In The Top 1% of Its Users
  • A Comprehensive Introduction to AutoGPT: What It Is & How to Use It?
  • DALL-E 3 Is Out! Here Is What You Need to Know About It

Generative Image Models & Applications

Overview of Generative Image Models

  • Comprehensive Introduction to AI Image Generation

GANS

  • A Gentle Introduction to Generative Adversarial Networks (GANs)
  • You Probably Know GANs, But Do You Know How To Train It?
  • How Can We Evaluate the Generated Images By GANs?
  • Overview of Important GANs Models & Applications
  • Building & Training GAN Model From Scratch In Python
  • Building GANs Application and Deploying it on Streamlit & Heroku

Diffusion Models

  • Unlocking the Mysteries of Diffusion Models: An In-Depth Exploration
  • Getting Started With Stable Diffusion
  • Stable Diffusion-Based Image Generation Web Application Using Fast API & React
  • Efficient Strategies for Effective Stable Diffusion Prompt: A Comprehensive Guide

Generative AI Ethics

  • What Is Responsible AI & How Big Companies Implement It In Their Products?
December 4, 2023
voice.ai
TESTS

AI Voice Changer

November 15, 2023
AI

What evidential value do images still have?

The AI excuse

On Wednesday 26th April 2023, a judge in California ruled that Tesla CEO Elon Musk must testify under oath regarding certain statements he made about the safety and capabilities of Tesla’s Autopilot features. This order is part of a lawsuit filed by the family of Walter Huang, an Apple engineer who died in a 2018 car crash. Tesla’s lawyers have argued that Musk cannot remember the specifics of his statements that the plaintiffs want to question him about, and that Musk, as a high-profile CEO, is often the target of convincing “deepfake” videos.

The Huang family alleges that Tesla’s semi-automated driving software failed, while Tesla argues that Huang was distracted by a video game on his phone and ignored vehicle warnings at the time of the crash.

Musk is expected to be questioned about a 2016 statement he allegedly made, claiming that a Model S and Model X can drive autonomously more safely than a human. Tesla has opposed this request in court documents, arguing that Musk cannot remember the details of his statements.

On Friday, a California state court jury ruled that Tesla’s Autopilot feature did not fail in what seems to be the first trial related to a crash involving the semi-automated driving software.

End of the evidential value of visual and audio recordings?

It’s clear that AI-generated images are now so realistic that they’re virtually indistinguishable from actual photos to the untrained human eye. This is increasingly true for images of people, known as “deep fakes”, and sound recordings, referred to as “deep voice”. In the United States, there have been instances where criminals have used deep voice technology to impersonate relatives over the phone and ask for money, essentially a modern version of the grandparent scam.

AI programs like Midjourney, DALL-E, and others have the ability to create incredibly realistic images, sounds, and videos, which presents two significant issues for the formation of public opinion. The conversation has largely centered around the risk of fake content and the subsequent spread of misinformation. However, as more people become aware of AI’s capabilities, they’re likely to become more skeptical of images and sound recordings.

In the near future, the issue will likely shift from scandals caused by fake images to real scandals being dismissed due to the decreasing credibility of image and sound evidence. This could also severely undermine trust in historical documentaries, as the ability to alter and stage the past could lead to more people questioning historical facts. This blurring of the past into the realm of fiction could have serious implications for societal consensus.

In this new reality, “It wasn’t me” could become a common defense for individuals confronted with incriminating images. This presents a significant challenge for journalists who receive potentially explosive material that they struggle to verify. As demonstrated by the Tesla case, judges will also increasingly have to consider the possibility that evidence may not be authentic, but rather AI-generated. The question is, are they ready for this?

Consequences for court proceedings

In a discussion with LTO, Dr. Christian Rückert, a legal academic and expert in cybercrime, voiced his worries about the German judiciary’s hesitance to critically evaluate digital evidence. He highlighted that the judiciary needs to enhance its comprehension of digital evidence, even without taking into account AI-created deep fakes or audio recordings. He used the EncroChat cases as an illustration, where the courts accepted the authenticity and completeness of chat logs, despite the lack of original data from the French authorities.

Rückert also expressed disapproval of the unquestioning use of WhatsApp chats as evidence in court, where chat conversation printouts are frequently deemed trustworthy. He considers this method to be negligent, especially considering the existence of free online tools that can generate fake WhatsApp chat logs. He asserts that if courts were genuinely dedicated to their disclosure responsibilities, they would need to inspect the original WhatsApp file on the smartphone to detect any possible manipulation.

Rückert stressed that, given the rise in AI-produced images, it’s vital for courts to examine evidence more thoroughly and stay vigilant. He counsels judges to always remember that any evidence presented to them could potentially be doctored.

Judge is “deeply concerned”

This all ties back to the lawsuit against Tesla for damages and the assertion credited to Elon Musk that Tesla’s Autopilot is safer than human driving. As it stands, all signs point to a potentially embarrassing claim made by Tesla’s legal team. There’s no evidence suggesting a deepfake video: the event was organized by the well-known US media platform Recode, which also uploaded the video to YouTube. This happened nearly six years ago, at a time when artificial intelligence was not yet advanced enough to create such flawless deepfakes. The nature of the conversation, specifically Musk’s engagement with an audience member, doesn’t suggest a fake either. Judge Evette Pennypacker of California responded with appropriate outrage. She described Tesla’s arguments as deeply concerning, as they would essentially protect celebrities from being held accountable for their actions and words.

The judge responded by scheduling a preliminary three-hour hearing for the Tesla CEO at the end of July to question Musk about the legitimacy of the statement made at the event, an event, by the way, that was attended by a large crowd. In the end, Musk’s statement could be traced back to him using the most ancient form of legal evidence: witness testimony.

April 29, 2023
AI, ETHICS

EU – Ethics Guidelines for Trustworthy AI

GoGo Gadget: ChatPDF

Summarize the Ethics Guidelines for Trustworthy AI by the European Commission
The Ethics Guidelines for Trustworthy AI by the European Commission seek to foster responsible and sustainable AI innovation in Europe. They aim to make ethics a core pillar for developing a unique approach to AI that benefits, empowers, and protects both individual human flourishing and the common good of society. The guidelines were created by the High-Level Expert Group on Artificial Intelligence and present an overall framework for Trustworthy AI. The document includes a Trustworthy AI assessment list that will undergo a piloting phase by stakeholders to gather practical feedback, with a revised version of the assessment list presented to the European Commission in early 2020.

Conditions for Trustworthy AI

According to the Ethics Guidelines for Trustworthy AI, the following conditions need to be fulfilled by an AI system:

  1. Lawful – AI should respect all applicable laws and regulations
  2. Ethical – AI should respect ethical principles and values
  3. Robust – AI should be robust from a technical perspective while taking into account its social environment

Key Requirements for Trustworthy AI

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches
  • Technical Robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieving this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered. 
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
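As a purely illustrative aside, the seven key requirements can serve as a minimal self-assessment checklist. The official Assessment List for Trustworthy AI is far more detailed; the tiny scoring scheme below is invented here:

```python
# The seven key requirements, as named in the guidelines.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def unmet(assessment: dict) -> list:
    """Return the requirements a system has not (yet) addressed."""
    return [r for r in REQUIREMENTS if not assessment.get(r, False)]

# Hypothetical draft system that has only tackled two of the seven so far.
draft_system = {"Transparency": True, "Accountability": True}
print(unmet(draft_system))  # the five remaining requirements
```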

Sources

Website of the European Commission: Ethics guidelines for trustworthy AI | Shaping Europe’s digital future (europa.eu)
High-Level Expert Group on AI set up by the European Commission: Ethics guidelines for trustworthy AI – 2019, PDF
April 28, 2023
SOLUTIONS

Microsoft Copilot

What is Microsoft 365 Copilot

Microsoft Copilot is a large-language-model-based personal assistant integrated into Microsoft apps like Word, Excel, PowerPoint, Outlook and Teams. Furthermore, Microsoft will introduce a new experience called “Business Chat”.

Copilot is the latest step in Microsoft’s long line of personal assistants for its products. Many of you might remember “Clippy”, the advice-giving paper clip built into early Microsoft Office products. “Cortana” was the next personal assistant by Microsoft, installed in various products. Copilot covers the same need, and more.

The promise is to unleash creativity, unlock productivity and uplevel skills. Or will it destroy creativity? It will certainly produce vast amounts of data.

How does Copilot work

By being integrated into the most-used business apps, Microsoft Copilot will change the work of millions of people.

Word: Copilot can integrate information from shared documents within the organization, and is able to generate an initial draft based on a given prompt. In addition, it has the ability to adjust the tone of the text to suit the intended audience, whether it be formal or informal.

Excel: Copilot can interpret natural language prompts to provide answers and recommendations, minimizing the need for complex formulas. It can produce various visualizations and charts without altering the spreadsheet.

PowerPoint: Copilot can generate a slide deck based on an outline or a Word document, simplifying the creation of professional presentations.

Outlook: Copilot can not only respond quickly to routine emails, but also gather content from email threads and other Microsoft 365 sources. It can also adjust the tone and length of responses.

Teams: Copilot is capable of setting meeting agendas and schedules in Teams, and acting as an effective decision-maker by listing the pros and cons of any discussion and suggesting next steps.

Business Chat: Copilot enables users to access their calendar, emails, chats, documents, meetings, and contacts all in one place, facilitating collaboration and streamlining project management.

A whole new way to work

It is easy to see that Microsoft Copilot will transform our understanding of work. Summarizing meetings and pointing out the key facts? Consider it done! You want a PowerPoint presentation on the client’s subject? Here you have it. Microsoft is giving every office employee a powerful personal assistant that will increase the capacity, throughput and efficiency of its users.

Hatespeech in your proposal deck

As impressive as Copilot is, using its generated output unfiltered can lead to problems. Large language models can fail by inserting, for example, gender or racial bias into text, or by making things up (hallucinating). It’s alarming enough that those quirks can be found in the output of search engines, but imagine the consequences of such biases in Excel spreadsheets or your email inbox.

Here we also touch on another important subject this blog will dive deeper into: who is responsible if Copilot insults your client? Should the large language model be blamed, or the user who didn’t quality-check the output?

The value of a handwritten letter

Since Copilot will certainly boost the generation of documents, presentations, emails and more, we must ask: who will still be able to take in all that data? This flood will certainly increase the need to prioritize your informational intake. Considering this, how much would you appreciate a handwritten letter?

April 28, 2023
RESEARCH

The false hope of current approaches to explainable artificial intelligence in health care

GoGo Gadget: ChatPDF

Summarize the Research Paper "The false hope of current approaches to explainable artificial intelligence in health care"
The research paper "The false hope of current approaches to explainable artificial intelligence in health care" argues that the current methods of explainable AI are unlikely to achieve the goals of patient-level decision support. The authors suggest that while AI will have a significant impact on medicine in the coming decades, it is important to ensure that this technology is implemented in a way that maximizes patient benefit. However, despite its intuitive appeal, explainability for patient-level decision making is unlikely to advance these goals in meaningful ways. The paper recommends that healthcare workers exercise appropriate caution when using explanations from an AI system and urges regulators to be judicious in listing explanations among the requirements needed for clinical deployment of AI.

Why this paper was selected

This paper has been selected for the following reasons:

  1. The paper provides an overview of approaches towards explainable AI
  2. Health Care and Medicine are high-stake areas that can’t rely on approximations
  3. The authors outline possible failures of explainable AI approaches
  4. As a solution, rigorous and thorough validation procedures are advised

Explainability of individual or local decisions by AI

The authors discuss techniques for explainability that can generate general descriptions of how AI systems operate, but they note that these methods have limitations and can only offer unreliable or shallow explanations for specific decisions made by AI.

Inherent vs post-hoc explainability

Explainable AI approaches are categorized into two groups: inherent and post-hoc explainability:

Inherent explainability can be used for machine learning models that use a finite amount of input data with limited complexity, in which the relationship between input arguments can be clearly quantified and understood.

Post-hoc explainability, on the other hand, is used for modern AI systems with high-dimensional, complex data and models, in which the relationship between inputs and outputs can no longer be directly quantified.
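A minimal illustration of the inherent case, as a sketch on made-up data: in a plain linear model, the fitted weights themselves are the explanation, because each input’s contribution to the output can be read off directly.

```python
import numpy as np

# Toy data: the target depends strongly on feature 0 and weakly on feature 1
# (the coefficients 3.0 and 0.5 are invented for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

# Fit y = X @ w by least squares; the weights ARE the explanation:
# "feature 0 matters six times as much as feature 1" is directly visible.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # approximately [3.0, 0.5]
```

No comparable readout exists for a deep network with millions of weights, which is why the post-hoc branch of the field exists at all.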

Approaches towards AI explainability

For AI systems that take images as input, saliency maps can be used: heatmaps that reflect how important each area of the image was for the given output.
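The intuition behind such heatmaps can be sketched with a simple occlusion test, a toy stand-in for the saliency methods the paper discusses: mask each region of the image and record how much the model’s output drops. The dummy “model” and the 4×4 “image” below are invented for illustration:

```python
import numpy as np

def occlusion_map(image, model, patch=2):
    """Toy occlusion saliency: importance of a patch = output drop when it is masked."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0  # mask one patch
            heat[i:i+patch, j:j+patch] = base - model(occluded)
    return heat

# Dummy "model" that only ever looks at the top-left corner of the image.
model = lambda img: img[0, 0] + img[0, 1]
image = np.ones((4, 4))
heat = occlusion_map(image, model)
print(heat)  # hot (2.0) in the top-left patch, 0 everywhere else
```

Note what the heatmap does and does not tell us: it shows *where* the model was sensitive, not *whether* it was reasonable to look there, which is exactly the gap the paper criticizes.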

Interpretability gaps

Explainability methods suffer from an interpretability gap: they rely on humans to decide what a given explanation might mean. Unfortunately, the human tendency is to ascribe a positive interpretation: we assume that the feature we would find important is the one that was used (an example of a famously harmful cognitive error called confirmation bias). This problem is well summarised by computer scientist Cynthia Rudin: “You could have many explanations for what a complex model is doing. Do you just pick the one you ‘want’ to be correct?”

Reasonability of decisions by AI

In the example of heat maps, the important question for users trying to understand an individual decision is not where the model was looking but instead whether it was reasonable that the model was looking in this region.

Sources

Ghassemi M., Oakden-Rayner L., Beam A. L.: The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health – 2021
April 23, 2023
