First, a necessary disclaimer: don’t use AI language generators to solve your ethical dilemmas. Second: definitely go and tell those dilemmas to this AI-powered Reddit simulation because the results are fascinating.
Are You The Asshole (AYTA) is, as the name suggests, built to mimic Reddit’s r/AmITheAsshole (AITA) crowdsourced advice forum. Created by internet artists Morris Kolman and Alex Petros with funding from Digital Void, the site lets you enter a scenario and ask for advice on it, then generates a series of comment posts responding to your situation. The feedback does a remarkably good job of capturing the style of real human-generated responses, but with the weird, slightly off-kilter slant that many AI language models produce. Here are its answers to the plot of the classic science fiction novel Roadside Picnic:
Even putting aside the weirdness of the premise I entered, the responses tend toward platitudes that don’t quite fit the prompt, but the writing style and content are pretty convincing on the surface.
I also asked it to settle last year’s “bad art friend” debate:
The bots were rather more confused by that one! Although, in fairness, many humans were too.
You can find more examples on a subreddit dedicated to the site.
AYTA is actually the work of three different language models, each trained on a different subset of data. As the site explains, the creators scraped around 100,000 AITA posts from 2020, along with the comments attached to them. They then trained a custom text generation system on different slices of that data: one bot was fed comments concluding the original posters were NTA (not the asshole), another got posts that reached the opposite verdict, and a third got a mix that included the previous two sets plus comments declaring that nobody or everybody involved was to blame. Incidentally, someone made a bot-only version of Reddit a few years ago that featured advice posts, though it also produced messages with a markedly more surreal bent.
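The site describes this data split only loosely; as a rough sketch (the function, sample data, and verdict-parsing rule are invented for illustration, not the creators’ actual pipeline), partitioning scraped comments into three training corpora by their verdict tag might look like:

```python
# Hypothetical sketch: bucket (post, comment) pairs into three training
# sets, one per bot, based on the verdict token that AITA commenters
# conventionally put at the start of a reply.
VERDICTS = {
    "NTA": "not_the_asshole",   # "not the asshole"
    "YTA": "the_asshole",       # "you're the asshole"
    "ESH": "mixed",             # "everyone sucks here"
    "NAH": "mixed",             # "no assholes here"
}

def split_corpus(threads):
    """Group (post, comment) pairs by verdict into three corpora."""
    corpora = {"not_the_asshole": [], "the_asshole": [], "mixed": []}
    for post, comments in threads:
        for comment in comments:
            # Take the first token and strip trailing punctuation.
            verdict = comment.split()[0].strip(".,!:").upper() if comment else ""
            bucket = VERDICTS.get(verdict)
            if bucket:
                corpora[bucket].append((post, comment))
    # Per the article, the third bot also saw both one-sided sets.
    corpora["mixed"].extend(corpora["not_the_asshole"])
    corpora["mixed"].extend(corpora["the_asshole"])
    return corpora

threads = [
    ("I ate my roommate's leftovers.", ["YTA, that food wasn't yours."]),
    ("I asked a guest to remove muddy shoes.", ["NTA, totally reasonable."]),
    ("We both yelled at each other.", ["ESH, nobody comes out well here."]),
]
corpora = split_corpus(threads)
```

Each corpus would then fine-tune its own text generator, which is what lets the three bots reach systematically different verdicts on the same prompt.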
AYTA resembles an earlier tool called Ask Delphi, which also used an AI trained on AITA posts (though combined with responses from hired respondents, not Redditors) to analyze the morality of user prompts. The framing of the two systems, however, is quite different.
Ask Delphi implicitly highlighted the many shortcomings of using AI language analysis for moral judgments, particularly how often it responds to a post’s tone rather than its content. AYTA is more upfront about its absurdity. For one thing, it mimics the snarky style of Reddit commenters rather than that of a disinterested arbiter. For another, it doesn’t deliver a single verdict, but lets you watch the AI reason its way to disparate conclusions.
“This project is about the bias and motivated reasoning that bad data teaches an AI,” Kolman tweeted in an announcement thread. “Biased AI looks like three models trying to parse the ethical nuances of a situation where one has only been shown comments from people calling each other assholes and another has only seen comments from people saying they’re completely justified.” Contrary to a recent New York Times headline, AI text generators don’t exactly master language; they’re just getting really good at mimicking human style, albeit not perfectly, which is where the fun comes in. “Some of the funniest responses aren’t the ones that are obviously wrong,” Kolman notes. “They’re the ones that are obviously inhuman.”