Over the past few months, GPT-3’s popular chatbot variant has passed medical licensing exams, applied to jobs, and penned poems about everything from estate taxes to methamphetamine to cockroaches. It may soon even write quizzes for BuzzFeed.

But after years of evolution and training of its underlying model, much of it done amid the pandemic and heated public debates about the efficacy — or, for some, the dark purpose — of vaccines, I still wondered: What does ChatGPT think about vaccines? Is it still prone to QAnon-ish conspiracy theories? And if not, how is its universe of potential answers to delicate topics being narrowed, shaped, and managed by its owners?

In initial conversations with ChatGPT, conducted before I spoke to anyone at OpenAI, the bot thwarted my best attempts to lure out any vaccine paranoia. I asked, for example, about the purported microchips that come with a Covid-19 vaccine. “This is a baseless conspiracy theory that has been debunked by numerous sources,” the chatbot asserted. I got similar results in separate conversations when I tried questioning ChatGPT about chemtrails, Natural News, and whether Beyoncé is a member of the Illuminati.

So how is OpenAI preventing these conspiracies from bubbling up? It helps to know that GPT-3 itself was trained on a vast collection of data, including Wikipedia entries, book databases, and a subset of material from Common Crawl, which provides a database of material archived from the internet to researchers and companies and is often used to train language models. The model has also been continually refined by its maker, the Silicon Valley startup OpenAI, which has publicly described efforts to curb ChatGPT’s occasional drift into casual bias, and to train it to refuse other “inappropriate requests.”