Meta Galactica AI

Enter Galactica, an LLM aimed at writing scientific literature. Its authors trained Galactica on “a large and curated corpus of humanity’s scientific knowledge,” including over 48 million papers, textbooks and lecture notes, scientific websites, and encyclopedias. According to Galactica’s paper, Meta AI researchers believed this purported high-quality data would lead to high-quality output.

While some people found the demo promising and useful, others soon discovered that anyone could type in racist or potentially offensive prompts, generating authoritative-sounding content on those topics just as easily. For example, someone used it to author a wiki entry about a fictional research paper titled “The benefits of eating crushed glass.”

Even when Galactica’s output didn’t offend social norms, the model could contradict well-established scientific facts, producing inaccuracies such as wrong dates or animal names that required deep knowledge of the subject to catch.

DB2


The further AI development strays from classical odds, the less useful it becomes: the more inaccurate it is, and the less it seems to humans to know anything or do anything intelligent.

Classical odds belong to the synthetic worlds of gambling and board games.

The invitation to push the coding beyond classical odds lies in decision making. The problem is that the computer on its own fails to distinguish fact from fiction. When it comes to dealing with human beings, AI is not up to it and won’t be up to it.