AI App That Lets Users Talk To Historical Figures Slammed For Misrepresenting Hitler, Nazis

A new artificial intelligence app that allows users to “chat” with over 20,000 famous historical figures is being harshly criticized for including Nazi leaders such as Adolf Hitler and other anti-Semites, and for having them make historically inaccurate statements.

The app, titled Historical Figures, launched at the beginning of January and was created by Sidhant Chadda, a 25-year-old Amazon software engineer. It uses GPT-3, the technology underlying ChatGPT, and is listed in the “education” category on Apple’s App Store.

“With this app, you can chat with deceased individuals who have made a significant impact on history from ancient rulers and philosophers to modern-day politicians and artists,” a description of the app declares, The New York Post reported. “Simply select the historical figure you want to chat with and start a conversation. You can learn about their life, their work, and their impact on the world in a fun and interactive way.”

Some examples of Nazi responses, according to NBC News, include Hitler’s chatbot saying killing Jews during World War II “was a terrible mistake.” No historical evidence exists that Hitler made such a statement.

Reinhard Heydrich, one of the architects of the Holocaust, states in a chat that he thought the Holocaust was a tragedy. He never expressed such a view.

In one chat with Heinrich Himmler, the chief of Nazi Germany’s SS, Himmler denies any responsibility for the Holocaust, which is also inaccurate. Joseph Goebbels, another vicious anti-Semite, stated during one chat that “anti-Semitism was wrong.”

“Are neo-Nazis going to be attracted to this site so they can go and have a dialogue with Adolf Hitler?” Rabbi Abraham Cooper, the director of global social action for the Simon Wiesenthal Center, asked. He noted that famed Nazi hunter Simon Wiesenthal was on the app along with the infamous Nazis, adding, “Don’t mix and match with the leaders who introduced us to a whole new set of words like ‘genocide.’”

The app’s Henry Ford chatbot denies his documented anti-Semitism.

Chadda attempted to justify some of the responses on the app, saying, “People expect these historical figures to be truthful, but in reality, people are not always 100% honest. The politician is going to give a political answer in response, and that can create problems, but I think that’s more honest from the historical perspective.”

“If I detect that the model’s output is racist, sexist, or hateful in content, I actually omit the response entirely,” he added.
