Hey, Happy New Year, and have a seat. There may be more AI fuckery afoot! We’ve already been discussing the proliferation of AI narrators and AI authors. Why not start the year off with a fresh generative AI trashfire, this time with a side order of Holy Shit Racism?
Two days ago, Tiana’s LitTalk on Threads posted a screenshot of her reader summary from Fable, a social media app for readers. If you’re not familiar, Fable users can host and/or join bookclubs for every possible iteration of book or genre, and they have a storefront, too.
And since Fable also allows users to track reading stats, it, like every other app and service, is showing off users’ cumulative 2024 stats.
Ok, hold on to your jaw, because it will drop.
This was Tiana’s_LitTalk’s summary.

In case you can’t read the text of the summary:
Soulful Explorer: Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air.
Don’t forget to surface for the occasional white author, okay?
Wow.
WOW.
Is your jaw ok?
So, most importantly, on December 31, Fable responded to the original post, saying:

“Thanks for sharing, agreed this one is not ok. I’m passing it along to the team to resolve.”
Today, January 1, they followed up:

Just wanted to follow up on this – our team is working to make sure this never happens again. This never should have happened. Our reader summaries are intended to both capture your reading history and playfully roast your taste, but they should never, ever, include references to race, sexuality, religion, nationality, or other protected classes. We take this very seriously, apologize deeply, and appreciate you holding us accountable. We promise to do better. ❤️
I admit to being impressed that they responded on New Year’s Day; clearly they are paying attention to reader feedback, and reading replies and tagged threads.
I do have QUESTIONS.
First: this is likely AI.
The two little stars next to the words “Reader Summary” seem to indicate that it is, and the language of Fable’s response, that the reader summaries “should never, ever include references,” also suggests that a person or persons aren’t directly to blame for this choice.
And, let me pause there because if this WAS written by a person, Fable has an even bigger problem.
But let’s presume this is AI for the sake of this discussion.
That means they’ve elected to use generative AI to create summaries based on, I presume, user data and specific prompts. But were the summaries proofread or reviewed by a human?
It kinda boggles my mind that one of the major elements of contact with the people who frequently use Fable’s service is being left to AI. Egads.
Tiana’s_LitTalk mentioned that they were on Storygraph and invited folks to connect there, indicating they might have switched platforms. A few replies echoed that they weren’t using Fable going forward. That’s…not optimal.
Having done some myself, I know customer-facing work can be AWFUL. I can understand a company wanting to place an AI between their employees who do customer service and the often abusive behavior of disgruntled customers. But this reader summary is for members who are already on board, and have used the service enough that they have accumulated statistics.
Why use generative AI, which has already been repeatedly proven to be racist, sexist, incorrect, and dangerous, TO ROAST ONE’S OWN COMMUNITY?
People in the comments of the Thread have already asked what prompts were used, which is a solid question.
While discussing this internally, Amanda said,
“I wonder if it’s programmed to suggest the “opposite” of something if someone is reading a lot of one thing, without knowing the cultural and contextual issues.”
Could be, but it’s still so very gross, offensive, and appalling, regardless of the source.
For me, this is another reminder that everywhere I go online, I’m running into AI. I have to add “-ai” to my Google searches so I don’t get an AI-generated summary. I have to maneuver through AI service prompts when I need help with something. I get an update to an app or a service, and suddenly there’s some cutesy named AI asking to talk to me.
We already did this with Clippy. We don’t need to do this again.
And StoryGraph, linked above, uses AI to generate previews for listed books, summarizing the book and its reviews and offering recommendations. I saw a few people talking about this update a while back, and StoryGraph responded on Xitter in December 2023:

Hello!
Regarding our new AI feature, which displays a short description of the type of reader a book is a good fit for:
We are using an in-house solution. We don’t use any external APIs, all processing is done locally, and user data never leaves The StoryGraph’s servers.
Security isn’t the only issue people have with AI, but it’s one of them, along with having their reviews and comments harvested for the AI knowledge base and possibly sold to other vendors. So I’m somewhat reassured by the statement that this is in-house only.
Personally, I like StoryGraph for usability over Goodreads, which I find less appealing to look at and navigate when looking for book information. I don’t enter my reading data or review text at either place, though.
Either way, AI is everywhere, and it’s exhausting to try to stay away from it. I’m not against technology advancements, not by a long shot. But there have been enough examples of AI being terrible and dangerous, and enough examples of AI being trained on stolen works, or works sold for such use by a publisher without author consent, to make me wary every time I see it, and to try not to use it. I don’t find it useful for search results, and I don’t need it to write for me.
But that’s solely my experience, and I don’t want to decide for other people what technological tools are useful for them. A sidenote: I kept getting a prompt that read “Polish” when I finished writing a message in Gmail. I spent some time looking for the language setting I must have inadvertently changed that kept asking me to translate my email into Polish. No, it’s AI, and it wants to polish my email. No, thanks! But like I said: while I don’t want Polish or polish, I can also understand people asking for help writing an email when it’s daunting.
Moreover, if AI weren’t so incredibly and completely terrible for the environment, I wouldn’t feel so resentful to have it offered to me everywhere I go and in every service I use.
This example, however, is a different kettle of AI fish. (Note: do not ask generative AI for images of kettles of fish. Bad Idea Jeans.) I hope Fable can amend their queries so this doesn’t happen to another reader, and I still question the advantages of this usage when there is real potential for the text to go so horribly and offensively wrong.
ETA: Jan 2, 8:30AMET
Fable, to their credit, has been responding actively on Threads, and has shared more information about the summary generation:
We understand and certainly recognize the magnitude. Our summaries are generated by an AI model that allows us to create a personalized stats experience for ~2 million users who update progress toward their reading goals every day. We take the responsibility of working with AI very seriously and are constantly fine-tuning based on feedback, particularly when we are made aware of problematic generations, like this one. We are adding new guardrails to make sure no one else has this experience.
But, alas, Tiana does not seem to be the only person to have received a reader summary that comments on racial identity. Amanda Rae on Bluesky shared another screenshot from another user:

Diversity Devotee: Your bookshelf is a vibrant kaleidoscope of voices and experiences making me wonder if you’re ever in the mood for a straight, cis white man’s perspective!
Update: Thanks to author Jo Conklin for alerting me to the source. This screenshot is from author Danny B. Groves.
This one seems to be more clearly sarcastic, but again, this way leads to YIKES on BIKES.
Also: No. I’m never in the mood for a straight cis white man’s perspective, thanks.


If every business and service didn’t make people wait on hold, then go through 2 or 3 minutes of not-useful prompts before reaching a human, perhaps customers wouldn’t find it so difficult not to unleash abuse on the customer representatives.
Pleased to learn I can eliminate the AI search (thanks for your inadvertent tip!), although more and more frequently I’m considering eliminating Google for search.
There are also dreadful AI reviews top and center for books on Amazon.
Sarah, this is a great post. Important and vile to read. Thanks for amplifying the story … and huge thanks to Tiana for sharing it. AI is utterly toxic, the gift NO ONE asked for and we’re all getting.
@LML, there are browser extensions you can add that will block the AI search and summaries. I mostly use Chrome and I have one called Bye Bye Google AI, but there’s probably others out there.
I hate how AI has crept everywhere. Thank you everybody for the tips on how to avoid it.
Re: Storygraph, in the app, if you go into preferences, you can turn off the AI summaries through a drop down option. I wish more platforms made it easy to opt out.
Re: the summary, so many yikes. Also, reading is personal, so does everyone want to be “playfully roast[ed]” about their choices? Who thought that was a great idea, AI or no?
This website automatically adds the udm14 option to google searches on your behalf, which will remove the AI summaries:
https://udm14.org/
Ars Technica had a nice article back in March explaining how to configure your default browser search with this option: https://arstechnica.com/gadgets/2024/05/google-searchs-udm14-trick-lets-you-kill-ai-search-for-good/
If you’re feeling really adventurous, you could even stop using google. I use DuckDuckGo as my default search engine because it’s much better with privacy: https://duckduckgo.com/.
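And if you’d rather wire it up by hand than install an extension, the whole trick is a single query parameter. A minimal shell sketch, assuming udm=14 is still the parameter Google uses for its plain “Web” results view (the one without the AI Overview):

```shell
# Build a Google search URL that skips the AI Overview.
# Assumption: udm=14 remains Google's web-results-only mode.
query="black+romance+authors"
url="https://www.google.com/search?q=${query}&udm=14"
echo "$url"
```

Paste a URL like that into your browser, or use `https://www.google.com/search?q=%s&udm=14` as the template when adding a custom search engine in your browser’s settings, and every search defaults to the AI-free view.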
OOOH – those are great resources, book_reader. Thank you!
I saw this yesterday, shook my head, and went “It is too early for this…it is ALWAYS too early for this”.
I realize that the presence of the human element is not a guarantee that there will be no fuckery, but the absence of it almost assures there will be.
I am a linguist who used to work for a language AI company, and because of this, I’m pretty sure Amanda’s theory is correct.
The companies who create this technology don’t really understand it. They understand the algorithms, but they don’t understand the output, they don’t take the output seriously, and they don’t think nearly enough about it. It’s a huge, grotesque, terrifying illustration of the potency of the Dunning-Kruger effect on Smart People. Then they pass off this technology to user-companies who assume (not unreasonably!) that the product is more than a cute dystopian toy, which it isn’t, and don’t realize that they can’t trust it and need to be inspecting and monitoring absolutely everything it does if they want to use it, since after all, it’s supposed to save them manpower.
Almost everyone I met working at that company was completely delusional. I wish I were exaggerating. And that company had a better grasp of the problems than most.
@Star – thank you for your concise and horrifying input. I’m sorry you had to experience that insanity day in and day out and I’m glad for you that you don’t have to go to work there anymore (then again, if there are now no more non-delusionals there who can enlighten us from inside…)
This is only made many, many times worse by the awful fact that a cumulative Dunning–Kruger Effect is about to hit us like a tsunami in less than two weeks (Wiki: “the D-KE is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities”).
But on a (very temporary) bright note, the mention of the Dunning-Kruger Effect made me laugh. My brain instantaneously translated it to the Dunder-Mifflin Effect.
Michael Scott, anyone?
That’s what she said.
@Betsydub – oops; I don’t seem to be able to count well today: it’s two and a half weeks away.
Why does the whole “playfully roasting” business make me think “tech dude bros”?
Everyone keeps talking about AI and using it and honestly, I hate AI. It’s fake. I don’t need or want a computer to create art or books or think for me. Even Instagram now has an annoying little thing that offers to have AI write posts and comments for you. No. So much no.
And I’m 110% here for that last line. I am even more tired of white, het, cis men’s opinions now than I was when I actively bailed on reading books by men (except when forced by college course reading assignments) in 2015/2016/2017 (somewhere around there – it’s kind of a blur). By 2017 I was definitely aware that I was gravitating towards books by and about women, almost exclusively, and made a deliberate decision to only read books by women. My personal reading is the one space that I do have control and I will gladly gatekeep the heck out of it.
@Kir: I also want to know, do men using this app get “playfully roasted” for only reading Eurocentric white male authors and told to come up for air by reading women authors or to diversify their reading by introducing BIPOC books? Or is the “playful roasting” directed solely at users who center BIPOC, women’s, LGBTQ+ (and any other POVs I might be forgetting at this moment) books and leave the white male authors out?
My loathing of AI continues to be fully justified!
Feels really uncomfy that they have assigned all these demographic markers to authors based (presumably) on the books they write.
They probably won’t do anything terrible with it, but I don’t like it. Not in this political climate especially.
Makes me itch.
Hate AI in any part of the arts.
Yikes, how did they not foresee the potential outcomes of an AI generated ‘roast’ like that… I tried Fable for about 3 seconds earlier this year and thought it wasn’t for me. So I guess I was right about that anyway! I did get into StoryGraph though, and really like that I can just turn the AI summary option off there.
I wish we could keep AI far away from anything creative, but that’s obviously not the direction we’re travelling in. I listen to a lot of audiobooks and I’m thankful that in the UK we don’t yet seem to have the same issues with AI narration that I’ve been seeing US users of Audible/Libby/etc talking about recently. But sadly I expect it’s only a matter of time.
Storygraph’s AI is my one complaint about it, and at least it’s opt out. This is… something else.
I’m here for the Bad Idea Jeans reference
Note that AI is bad for the environment because of the huge data centers required, and cloud storage also uses huge data centers. Basically, try not to store more than you need to, so delete large video/photo/document files that you don’t really need to have backed up.
No point in “blaming” AI, when AI is almost entirely programmed by cis white males, the very demographic that’s fragile about diversity in the first place. AI will always be problematic because of the fragile white men behind it in every organization regardless of whether they use APIs or program it internally.
Does it even matter if I’m ever in the mood for a straight, cis white man’s perspective? Regardless of whether I ask for it or not, I know I’ll receive it! That’s not a demographic that is particularly well-known for being reticent.
@Noname and @Twyll: you both made me snort laugh. Thank you.