Description
Chair: Dr. Jennifer D'Souza (TIB Hannover)
Content/Abstract:
This session will delve into the intricacies of Large Language Models (LLMs). The session aims to address the impact of these AI systems along the topics of interest listed below, exploring their capabilities, implications, and potential applications.
Topics of interest:
- Pretraining Techniques for LLMs: Exploring foundational strategies and algorithms in the development of LLMs.
- Testing and Evaluating LLM Fitness: Methods for assessing LLM performance on well-known tasks and benchmarks.
- Application of LLMs in Scientific Research: Case studies and examples of LLMs driving discovery and innovation in various scientific fields.
- Innovative Insights Generation: Strategies for leveraging LLMs to generate novel insights and accelerate research outcomes.
- Challenges and Solutions in LLM Application: Discussing the practical challenges encountered in applying LLMs to scientific research and potential solutions.
The growth and evolution of our civilization have been based on the use of biodiversity. Throughout history, human survival has been intricately intertwined with the utilization of diverse plant species, serving as vital sources of nutrition and medicinal remedies. The exploration of natural products is a cornerstone in the quest for innovative pharmaceuticals. A staggering 67% of all globally...
The advent of Large Language Models (LLMs), most notably ChatGPT, has fascinated researchers and the public alike. The main attraction of LLMs is their capability to interpret prompts formulated in natural language and to respond accordingly, allowing for more organic interactions with LLM-based AI systems and increasing their accessibility, especially for less tech-savvy users....
During pre-training, a Large Language Model (LLM) learns language features such as syntax, grammar and, to a certain degree, semantics. In this process, the model not only acquires language but also implicitly acquires knowledge; in this sense, knowledge is a byproduct of language acquisition. This characteristic is inherent in the architecture of modern LLMs. Hence, much like language...
Abstract
Mental health has become a paramount concern in contemporary society, as an increasing number of individuals are grappling with depression (WHO, 2023). People increasingly rely on partners, friends, or even chatbots as effective means to prevent and alleviate symptoms of depression (Rauws, 2022). When observing interpersonal interactions, it's evident that conversations are...
Large language models (LLMs) have shown promising capabilities in several domains, but “Inherent Bias,” “Data Privacy & Confidentiality,” “Hallucinations,” “Stochastic Parrot,” and “Inadequate Evaluations” limit the reliability of LLMs for direct and unsupervised use. These challenges are exacerbated in complex, sensitive, low-resource domains with scarce large-scale, high-quality datasets....
A brief intro on how Sparse Autoencoders (SAEs) can be leveraged to extract interpretable, monosemantic features from the opaque intermediate activations of LLMs, providing a window into their internal representations. We hope to initiate discussions on the methodology of training SAEs on LLM activations, the resulting sparse and high-dimensional representations, and how these can be...
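As a rough illustration of the training setup the session discusses, the sketch below fits a tiny tied-weight sparse autoencoder in plain NumPy. The "activations" here are synthetic vectors standing in for LLM intermediate activations, and all dimensions, hyperparameters, and the tied-weight/L1 design are illustrative assumptions, not any specific published recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for LLM activations: each vector is a sparse combination
# of a few ground-truth "feature" directions (sizes are illustrative).
d_model, d_hidden, n_samples = 16, 64, 2048
features = rng.normal(size=(d_hidden, d_model))
codes = (rng.random((n_samples, d_hidden)) < 0.05) * rng.random((n_samples, d_hidden))
acts = codes @ features

# Tied-weight sparse autoencoder: z = ReLU(x W + b), reconstruction z W^T.
# Loss = 0.5 * ||recon - x||^2 + l1 * ||z||_1; the L1 term pushes the
# overcomplete codes (d_hidden > d_model) toward sparse, interpretable units.
W = rng.normal(scale=0.1, size=(d_model, d_hidden))
b = np.zeros(d_hidden)
lr, l1 = 1e-2, 1e-3

def mse(X):
    """Mean squared reconstruction error on a batch of activations."""
    Z = np.maximum(X @ W + b, 0.0)
    return float(np.mean((Z @ W.T - X) ** 2))

init = mse(acts)
for step in range(500):
    x = acts[rng.integers(0, n_samples, 256)]
    z = np.maximum(x @ W + b, 0.0)            # sparse codes
    err = z @ W.T - x                          # reconstruction residual
    # Backprop through the decoder and the ReLU-masked encoder paths.
    dz_pre = (err @ W + l1 * np.sign(z)) * (z > 0)
    gW = (err.T @ z + x.T @ dz_pre) / len(x)   # tied weights: both paths
    W -= lr * gW
    b -= lr * dz_pre.mean(axis=0)
final = mse(acts)  # reconstruction error should drop during training
```

After training, individual hidden units can be inspected by looking at which inputs activate them most strongly, which is the starting point for the kind of monosemanticity analysis the session describes.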