Description
Scholarly knowledge curation faces challenges arising from the diverse methodologies used across scientific fields. Tailored approaches and protocols, attuned to each domain's unique characteristics, are essential to address these challenges. Machine assistance, particularly through Large Language Models (LLMs) such as GPT-3.5, offers significant potential for navigating these complexities and enhancing knowledge curation. The Open Research Knowledge Graph (ORKG) is a structured, semantic platform designed to mitigate issues associated with traditional document-centric scholarly communication, including publication overload and peer-review deficiencies.
The rapid evolution of multi-modal foundation models in artificial intelligence has transformed the integration of diverse modalities, including language and visual data. This session explores innovative approaches that combine these multi-modal models with prompt engineering techniques. Emphasizing explainability, transparency, and trustworthiness, our research focuses on the curation of scholarly knowledge. By integrating LLMs (e.g., GPT-3.5) with visual models, we aim to deepen insights and improve the accessibility of knowledge extracted from academic literature.
We demonstrate our approach of using LLMs (e.g., GPT-3.5) to extract data from research articles, facilitating novel insights and accelerating research outcomes. Our method integrates LLM capabilities with the ORKG, with extracted data validated by domain experts to ensure accuracy and relevance. This collaborative framework merges neural LLM capabilities with symbolic knowledge graphs and human expertise, addressing practical challenges in applying LLMs to scientific research. We leverage both language and visual models to enrich the understanding and interpretation of scholarly content, thereby broadening the scope of insights generated.
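The expert-in-the-loop step described above can be sketched as a simple review queue: LLM-extracted statements are held as pending, and only statements a domain expert approves are released for insertion into the knowledge graph. This is a minimal illustrative sketch, not the ORKG's actual curation interface; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Statement:
    """A candidate subject-predicate-object statement extracted by an LLM."""
    subject: str
    predicate: str
    obj: str
    approved: bool = False

class ReviewQueue:
    """Hypothetical expert-in-the-loop gate between LLM output and the graph."""

    def __init__(self) -> None:
        self.pending: list[Statement] = []
        self.approved: list[Statement] = []

    def submit(self, statement: Statement) -> None:
        """Queue an LLM-extracted statement for expert review."""
        self.pending.append(statement)

    def review(self, index: int, accept: bool) -> None:
        """Record an expert decision; rejected statements are discarded."""
        statement = self.pending.pop(index)
        if accept:
            statement.approved = True
            self.approved.append(statement)

queue = ReviewQueue()
queue.submit(Statement("Paper:123", "uses_method", "transformer"))
queue.submit(Statement("Paper:123", "reports_metric", "F1"))
queue.review(0, accept=True)   # expert confirms the first statement
queue.review(0, accept=False)  # expert rejects the remaining pending one
```

Only the contents of `queue.approved` would then be written into the ORKG, keeping humans responsible for what enters the graph.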
We employ prompt engineering with predefined templates to interact effectively with LLMs, ensuring consistent and precise data extraction. Our discussion will cover strategies for leveraging LLMs to generate novel insights, challenges encountered in their application, and the solutions developed to overcome these obstacles. Emphasis will be placed on tools for enhancing AI explainability and trustworthiness, ensuring transparent and reliable use of LLMs in scientific research.
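Template-based prompting of the kind mentioned above can be sketched as follows: a fixed template is filled with a concrete abstract, the model is asked to answer in a machine-readable JSON shape, and the response is parsed into structured fields. The template wording and field names here are assumptions for illustration, and the LLM call is stubbed so the example runs offline; in practice it would be replaced by a GPT-3.5 API call.

```python
import json

# Predefined template (illustrative wording, not the project's actual prompt).
# Doubled braces {{ }} are literal JSON braces for str.format.
EXTRACTION_TEMPLATE = (
    "You are a scholarly knowledge curator.\n"
    "Extract the research problem, method, and result from the abstract below "
    'and answer ONLY with JSON of the form {{"problem": "...", "method": "...", '
    '"result": "..."}}.\n\n'
    "Abstract:\n{abstract}"
)

def build_prompt(abstract: str) -> str:
    """Fill the predefined template with a concrete abstract."""
    return EXTRACTION_TEMPLATE.format(abstract=abstract)

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned JSON response for the demo."""
    return ('{"problem": "scholarly knowledge curation", '
            '"method": "LLM-based extraction", '
            '"result": "structured ORKG data"}')

def extract(abstract: str) -> dict:
    """Prompt the (stubbed) LLM and parse its JSON answer into fields."""
    response = call_llm(build_prompt(abstract))
    return json.loads(response)

record = extract("We curate scholarly knowledge with LLMs and the ORKG.")
```

Constraining the output format in the template is what makes extraction consistent across articles: the same fields come back every time and can be validated before entering the graph.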