On Scratch, I have seen various Scratchers get muted repeatedly for posting a comment that supposedly "breaks the Community Guidelines," even when it does not. Many users take screenshots of these alerts and share them on their projects, though the Scratch Team considers such alerts sensitive and says they should not be shared publicly. Others dismiss the filter as a "bot," describing it with negative adjectives. The bad word detector and the muting system both rely on a blacklist: a list of words and phrases that get filtered out. Scratch does not use machine-learning algorithms or AI for these filters, which could filter words and phrases more accurately, but should Scratch use them in the future?
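
Scratch's actual filter is not public, so the following is only a minimal sketch of how a blacklist filter can misfire. The blacklist entries and the matching strategy here are assumptions for illustration: a naive substring check flags innocent words (the classic "Scunthorpe problem," where "class" contains "ass"), which may be one reason harmless comments get muted, while a whole-word check avoids that particular false positive.

```python
import re

# Hypothetical blacklist for illustration; Scratch's real list is private.
BLACKLIST = ["ass", "darn"]

def is_blocked_naive(comment: str) -> bool:
    """Naive substring matching: flags innocent words like 'class'."""
    text = comment.lower()
    return any(bad in text for bad in BLACKLIST)

def is_blocked_word_aware(comment: str) -> bool:
    """Whole-word matching: only complete words count as hits."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(w in BLACKLIST for w in words)

print(is_blocked_naive("I love my art class"))       # True  (false positive)
print(is_blocked_word_aware("I love my art class"))  # False
print(is_blocked_word_aware("what a darn shame"))    # True  (real hit)
```

Even the word-aware version is crude: it cannot tell context or intent apart, which is exactly the gap a machine-learning classifier would try to close.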