Since the advent of generative AI systems such as ChatGPT, much research has focused on the question-answering capabilities of large language models (LLMs), showcasing their remarkable skill at summarising knowledge from extensive training data.
However, rather than emphasising their backward-looking ability to retrieve past information, we explored whether LLMs could synthesise knowledge to predict future outcomes.
Scientific progress often relies on trial and error, but each meticulous experiment demands time and resources. Even the most skilled researchers may overlook critical insights from the literature.
Our work investigates whether LLMs can identify patterns across the vast scientific literature and forecast the outcomes of experiments.
Our international team began the study by developing BrainBench, a tool to evaluate how well LLMs can predict neuroscience results.
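To make this kind of evaluation concrete, here is a minimal sketch of one way a benchmark like BrainBench can score a model: present an original abstract alongside a subtly altered version and check whether the model assigns lower perplexity to the genuine finding. The model choice (`gpt2`) and the example sentences are illustrative placeholders, not the study's actual materials or method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small open model used purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more plausible to the model)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels yields the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Hypothetical pair: a real-style finding and an altered version of it.
original = "Stimulation of the hippocampus improved memory recall in mice."
altered = "Stimulation of the hippocampus impaired memory recall in mice."

# The model "predicts" the result if the genuine abstract reads as more likely.
correct = perplexity(original) < perplexity(altered)
print(f"Model preferred the {'original' if correct else 'altered'} abstract.")
```

In a set-up like this, the model never needs to generate text: accuracy is simply the fraction of abstract pairs for which the genuine result is scored as more probable than the altered one.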