An article in Wired, "A Letter Prompted Talk of AI Doomsday. Many Who Signed Weren't Actually AI Doomers":
A significant number of those who signed were, it seems, primarily concerned with ... disinformation ... [or] harmful or biased advice ... [But] their concerns were barely audible amid the furor the letter prompted around doomsday scenarios about AI.
Relatedly, one of the sources for that article was a blog post over at Communications of the ACM, "Why They're Worried":
Undesirable model behaviors, whether unintentional or caused by human manipulation ... highly convincing falsehoods that could lead many to believe AI-generated misinformation ... highly susceptible to manipulation ... false content by AI recommendation engines ... can be abused by bad actors.
Despite the hype from some over existential AI risks, these AI experts are worried about concrete uses of AI: flooding the zone with propaganda, or making it harder to find reliable information in Google search. These are practical problems with the current deployment of LLMs and ML systems, and they are getting worse over time.