Get a better AI engine for reviewing comments. It censors comments it does not understand, especially lengthy ones.
Improve the LLMs used to train the AI engine. It appears to flag certain users based on their previous comments and then almost automatically flags or censors anything new they post. I've posted several comments just to test this theory, and the engine fails miserably by the standard of its own community guidelines. My comments advocate neither violence nor hate speech, yet several of them were flagged; I appealed, and they were then approved. I've also noticed that after a certain number of flagged comments, the option to appeal the decision is no longer presented, which further indicates a flaw in the evaluation engine.

My most recent comment on "Immigration agents arrest Palestinian activist who helped lead Columbia University protests" was rejected for violating community guidelines. I'd like to know EXACTLY which guideline was violated. This flaw in practice makes me question Yahoo News's policies and authenticity. Free speech is owed to everyone; hate speech isn't, and none of my comments qualify as hate speech. The AI engine is deeply flawed.
