Data science and machine learning have become core topics in academia and industry over the last decade, with major breakthroughs in several application fields. Purely data-driven methods, often called “modern AI”, still dominate this sector; but there are increasing efforts to merge them with the knowledge-based symbol-manipulation approaches known from much earlier work and associated with what used to be called “GOFAI”.
The Symposium is organised by the three working groups:
We are looking for contributions on the following topics:
• Newest developments in AI and ML (theory and implementations)
• Current AI challenges
• AI Safety (technical & AI governance)
• AI Privacy and Security
• Ethics in ML and AI applications
• Consciousness of Machines
• AI services compared to standards in scientific research
• AI Law and governance
You can either register as a participant or contribute a presentation. Poster presentations are not possible.
Alte Mensa, University of Göttingen.
We have reserved several rooms for the participants of the symposium. Please follow the link for more information.
The reserved rooms are only available until August 20th.
Welcome Desk at the Foyer of the Alte Mensa
Welcome by Prof. Ramin Yahyapour
Robotics and AI are often thought of as science. The Robotics Society of Japan is "Nippon Robotto Gakkai" in Japanese. "Nippon" means "Japan", "Robotto" means "robot", and "Gakkai" is the combination of two words, "Gaku" and "Kai". "Kai" means "society" in English, but "Gaku" is neither "science" nor "technology". "Gaku" means the disciplines related to a field. "Robotto Gaku" means the disciplines related to robots. The Japanese Society for Artificial Intelligence is "Jinkou Chinou Gakkai" in Japanese. "Jinkou" means "artificial", and "Chinou" means "intelligence". Like the Robotics Society of Japan, "Jinkou Chinou Gaku" means the disciplines related to artificial intelligence. From the point of view of science and technology, "Robotto Gaku" means science and technology (engineering) related to robots, and "Jinkou Chinou Gaku" means science and technology related to artificial intelligence. Robots and AI do not exist in nature from the beginning and are artificially created by us for various purposes. If robots were created only out of scientific curiosity, funding agencies and taxpayers would have no reason to invest so much. We need to look at how robotics and AI contribute to our society. One of the contributions is to advance science, but the other contributions are expected to benefit humanity. In my talk, I will discuss how different research groups approach AI and robotics from a robotics researcher's point of view.
Time for networking!
The generalizations of complex numbers considered here differ from well-known generalizations such as quaternions, octonions, bicomplex and multicomplex numbers, and Clifford algebras in at least two fundamental respects.
First, products of elements of such traditional algebraic structures are defined by formally treating bracketed expressions as in the multiplication of bracketed expressions of real numbers, with additional assumptions made for the multiplication of so-called basis elements. In the present work, by contrast, suitable vector-valued vector products are introduced and used which are geometrically motivated as rotations and stretches.
Furthermore, traditional generalizations of complex numbers do not say which concrete mathematical objects satisfy the stated requirements on multiplication, or whether these requirements determine them uniquely or ambiguously. In the present work, concrete objects are always specified, and in particular no room is left for the mystification of imaginary numbers.
The new vector products allow the introduction of vector-valued vector division, vector powers and exponential functions. A corresponding generalization of the Euler formula applies to the theory of directional probability laws. New consequences also arise in the special case of classical complex numbers, as shown by the Fourier transformation of probability densities. The new general vector division opens new perspectives on differentiability and function theory.
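For reference, the classical special case that is generalized here is the ordinary Euler formula for complex numbers (quoted only as standard background, not as part of the new results):

    \[
      e^{i\varphi} \;=\; \cos\varphi + i\,\sin\varphi ,
      \qquad
      z \;=\; r\,e^{i\varphi} \;=\; r\,(\cos\varphi + i\,\sin\varphi).
    \]

In the classical complex case this identity underlies, for example, the Fourier transformation of probability densities mentioned above.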
In the Space Robotics Lab (SRL) at Tohoku University, Japan, we are developing heterogeneous multi-robot systems for lunar and planetary exploration. So far, robotic exploration missions on the surface of the Moon and of remote planets have been conducted by a single capable mobile robot. By deploying multiple robots, however, we expect improved performance in terms of coverage area, mapping accuracy, adaptability to challenging terrain and robustness to contingencies, even though the capability of each individual robot is limited.
In this presentation, an ongoing research project on collaborative heterogeneous multi-robot systems for resource exploration and human outpost construction, supported by the Japanese “Moonshot R&D Program”, is introduced. Modular robotic design is one of the key points of the development. In space missions, where the delivery of new hardware parts and components is not easy, the capability to change the mechanical configuration of the robots by rearranging modular components allows the functionality of the existing robots to be updated on site. One challenge is the successful application of state-of-the-art AI technologies to evolve the controllers so that they cope with reconfigurations of the robot system under changing task requirements in different environments.
The project will bring robust and sustainable robotics-based solutions to exploring the Moon and beyond.
Features measured from a protein ensemble, such as atom distances, are often not recovered on average in molecular dynamics simulations due to imperfections in the simulated force fields. To remedy this problem, various ensemble refinement methods have been developed. We approach the problem from the maximum entropy point of view, in order to determine a least-biased force field modification for a specific system. The problem then takes the form of a doubly intractable Bayesian inference problem, which requires an adaptive two-step Monte Carlo method. This approach goes beyond typical ensemble refinement approaches by providing a system-specific force field refinement and variance estimates for the force field coefficients.
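As background for the maximum-entropy viewpoint, the following is a minimal, self-contained Python sketch of standard maximum-entropy ensemble reweighting, i.e. finding the least-biased weights over simulation frames so that the average of one observable matches a target value. It is not the adaptive two-step Monte Carlo method of the talk, and the observable values and target are made-up toy numbers.

    # Toy maximum-entropy reweighting: choose weights w_i ∝ exp(-lam * f_i)
    # over simulation frames so that the reweighted average of f hits a target.
    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(0)
    f = rng.normal(loc=0.5, scale=0.2, size=1000)   # observable per frame (hypothetical)
    target = 0.45                                   # experimental target (hypothetical)

    def reweighted_mean(lam):
        # maximum-entropy (minimum-bias) weights, shifted for numerical stability
        w = np.exp(-lam * (f - f.mean()))
        w /= w.sum()
        return np.sum(w * f)

    # Solve reweighted_mean(lam) = target for the Lagrange multiplier lam.
    lam = brentq(lambda l: reweighted_mean(l) - target, -50.0, 50.0)
    w = np.exp(-lam * (f - f.mean())); w /= w.sum()
    print(f"lambda = {lam:.3f}, reweighted mean = {np.sum(w * f):.3f}")

In the Bayesian formulation of the talk, such Lagrange multipliers become force field coefficients with associated uncertainty, rather than being fixed by a single root-finding step.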
We introduce AI Usage Cards, a standardized way to report the use of AI in scientific research. Our model and cards allow users to reflect on key principles of responsible AI usage. They also help the research community trace, compare, and question various forms of AI usage and support the development of accepted community norms.
Statistical inference in the field of phylogenetics requires the adaptation of many classical methods to more general metric frameworks. The Fréchet mean of a probability distribution is a generalisation of the expectation to metric spaces. It has been observed that the sample mean of certain probability distributions in Billera-Holmes-Vogtmann (BHV) phylogenetic spaces is confined to a lower-dimensional subspace for large enough sample sizes. The umbrella term for such non-standard behavior is stickiness, which poses difficulties in statistical applications when comparing samples of sticky distributions. We introduce multiple flavors of stickiness and extend previous results to show their equivalence in the special case of BHV spaces. Furthermore, we propose to facilitate the statistical comparison of sticky distributions by including the directional derivatives of the Fréchet function: the degree of stickiness.
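For readers unfamiliar with the term, the Fréchet mean referred to here is the standard metric-space generalization of the expectation (a standard definition, not specific to this talk): for a probability measure P on a metric space (M, d), it minimizes the Fréchet function,

    \[
      F(x) \;=\; \int_M d(x, y)^2 \, \mathrm{d}P(y),
      \qquad
      \mu \;=\; \operatorname*{arg\,min}_{x \in M} F(x),
    \]

which coincides with the ordinary expectation when (M, d) is a Euclidean space with the usual metric.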
KISSKI, a pioneering force in AI research and services, centres its efforts on sensitive and critical infrastructures. Collaborating with experts spanning medicine, energy, and AI domains, KISSKI is dedicated to forging a robust AI service centre. Its holistic range of provisions, encompassing hardware, software, consultancy, and training, is poised to empower SMEs, start-ups, and research institutions, catalysing a dynamic synergy that propels the progress of AI research and its real-world implementations.
This talk introduces our Moonshot project, which is part of the national Research and Development (R&D) Moonshot program in Japan. The Moonshot program promotes high-risk, high-impact R&D aiming to achieve ambitious Moonshot Goals and to solve issues facing future society, such as super-aging populations. Our project was accepted under Moonshot Goal 3: realization of AI robots that autonomously learn, adapt to their environment, evolve in intelligence, and act alongside human beings, by 2050. Our project aims to create adaptable AI-enabled robots that can be used in a variety of places. We are now developing a variety of assistive robots, called the Robotic Nimbus, which can change their shape and form according to the user's condition, the environment, and the purpose of the task, and provide appropriate assistance to encourage the user to take independent action. In particular, this talk focuses on human-assistive and human-function-enhancing robots in the fields of nursing care and healthcare.
In this talk, I will give an overview of ongoing and planned research of my new chair "Optimization and Biomechanics for Human-Centred Robotics" at KIT and my CERC "Human-Centred Robotics and Machine Intelligence" at the University of Waterloo.
Human-centred robots are predicted to have a large societal impact in the future, e.g. in the form of humanoid robots supporting people in dangerous or monotonous jobs. They can also take the form of wearable robots or physical assistive systems enhancing and restoring the mobility and independence of seniors or patients with impairments. In order to take human-centred robots to this level, a number of challenges still have to be solved, since these robots have to enter into close physical interaction with humans in a safe and socially acceptable manner. For this, they require motion intelligence that makes them aware of the mechanics of their own motions and lets them predict the actions of humans. In our research, we aim to gain a fundamental understanding of the biomechanics of human movement and of human-human and human-robot interaction, and to develop tailored multibody system models for humans and robots, including detailed contact models. Model-based optimization and optimal control play an important role in motion analysis, prediction and control, and can be efficiently combined with model-free methods. This talk covers examples ranging from human movement studies in sports, to balancing and bimanual manipulation tasks in humanoid robots, to robotic rollators and exoskeletons that support activities of daily living. I will also talk about our very active collaboration with Y. Hirata from Tohoku University and would be happy to identify possibilities for further collaborations with HeKKSaGOn partners in robotics.
The rapid advancement of Artificial Intelligence (AI) technologies has increased interest in their application within educational contexts. In particular, Short Answer Scoring (SAS), which focuses on the automated assessment of brief, descriptive answers, is attracting increasing attention. The primary reasons for researching SAS include reducing grading costs and facilitating real-time, interactive assessments in large-scale online courses (i.e., MOOCs). However, practical implementation faces two significant challenges: 1) ensuring the reliability of scoring results and 2) reducing the cost of building these models. In this talk, we tackle these two challenges. First, we introduce a practical human-in-the-loop framework for deploying automated SAS models that maintains scoring quality by having human experts re-grade low-reliability predictions generated by the model. We also propose 'cross-prompt training', which can make the development of SAS models more cost-effective. Finally, building on these studies, we will discuss the potential for the practical application of automated SAS.
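To illustrate the general human-in-the-loop idea (only the confidence-based triage logic, not the actual SAS model or framework of the talk), here is a hedged Python sketch in which predictions whose confidence falls below a threshold are routed to a human grader; the data structures, scores and threshold are hypothetical placeholders.

    # Hedged sketch of confidence-based re-grading: low-confidence automated scores
    # are routed to a human expert, high-confidence scores are accepted as-is.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class ScoredAnswer:
        answer: str
        score: int          # score predicted by a (hypothetical) SAS model
        confidence: float   # model's confidence in that score, in [0, 1]

    def triage(predictions: List[ScoredAnswer],
               human_grader: Callable[[str], int],
               threshold: float = 0.8) -> List[Tuple[str, int, str]]:
        """Accept confident automated scores; send the rest to a human."""
        results = []
        for p in predictions:
            if p.confidence >= threshold:
                results.append((p.answer, p.score, "auto"))
            else:
                results.append((p.answer, human_grader(p.answer), "human"))
        return results

    # Toy usage with made-up predictions and a dummy human grader.
    preds = [ScoredAnswer("Photosynthesis converts light into chemical energy.", 3, 0.95),
             ScoredAnswer("Because of the sun.", 1, 0.55)]
    print(triage(preds, human_grader=lambda ans: 2))

The practical trade-off is that a lower threshold reduces human workload while a higher threshold protects scoring quality; the talk discusses how to balance this in deployment.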
The rapid advancement of large-scale language models (LLMs) is bringing about a transformative era in language processing technology. Traditionally, developing models that are capable of effectively handling language expressions with the same level of freedom and complexity as human intelligence has been extremely difficult. Specifically, in the field of semantic parsing, there has been a significant gap in achieving the performance required for real-world applications. However, LLMs have greatly improved this situation and are expanding the possibilities for social applications.
In the first part of this presentation, I will use the example of ellipsis resolution, a benchmark task that requires a deep understanding of semantic structures in texts and general knowledge, to provide an overview of the challenges faced in the field of semantic analysis and how neural language processing and LLMs have partly addressed these challenges.
In the second part, I will present our efforts in language education support technologies made possible by these technological advancements, such as automated grading of short text answers and writing assistance. I will discuss the considerations that need to be taken into account when applying these technologies in practical educational settings, including robustness, reliability, and explainability.
In recent months, there has been increasing discussion about the advances of AI technologies and their potential effects on society, particularly the negative ones.
In this talk, we will start with an overview of the current field of AI Safety and present the most prominent research agendas. Then, we will move on to interpretability research, specifically focusing on "Discovering Latent Knowledge" and "Concept Mapping" in LLMs.
We stand at the threshold of a transformative period, defined by the remarkable advancements in large language models (LLMs). Given their prowess, there's a burgeoning interest in expanding LLMs to vision and language (VL) tasks, where models harness the capabilities of LLMs to analyze both visual and textual data concurrently.
In this talk, I will introduce our research that delves into utilizing VL models, fortified with LLMs, to predict driving hazards that drivers may encounter while driving a car. This challenge compels VL models to forecast and reason about imminent events from ambiguous observations—a task characterized as visual abductive reasoning. Recognizing the nascent state of this domain, I will also unveil our novel dataset and the baseline methods we've developed to catalyze further inquiry.
Since the use of point cloud data in deep learning has become widespread, neural networks for handling point clouds and methods for handling 3D data have become a major focus of this research area. When data points are not arranged on a grid and are not sequential, they are referred to as "irregular data structures". For such data, including point clouds, graphs, and tabular data, it is difficult to apply existing deep learning methods (such as convolution) designed purely for regular data structures. In this talk, I will present a deep learning approach for dealing with such irregular data structures, using 3D point clouds as an example.
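As a generic illustration of how networks can handle unordered point sets (a PointNet-style sketch under my own assumptions, not the speaker's specific method), a shared per-point MLP followed by a symmetric max-pooling operation yields a global feature that is invariant to the ordering of the points:

    # Generic PointNet-style classifier sketch: a shared per-point MLP followed by
    # max pooling, which makes the output invariant to the order of the points.
    import torch
    import torch.nn as nn

    class TinyPointNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.point_mlp = nn.Sequential(      # applied to every point independently
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, 128), nn.ReLU(),
            )
            self.head = nn.Linear(128, num_classes)

        def forward(self, points: torch.Tensor) -> torch.Tensor:
            # points: (batch, num_points, 3); the same MLP is shared across points.
            per_point = self.point_mlp(points)       # (batch, num_points, 128)
            global_feat, _ = per_point.max(dim=1)    # symmetric pooling over points
            return self.head(global_feat)            # (batch, num_classes)

    # Toy usage: a batch of 2 clouds with 1024 random dummy points each.
    logits = TinyPointNet()(torch.randn(2, 1024, 3))
    print(logits.shape)  # torch.Size([2, 10])

The key design choice is the symmetric pooling operation, which removes the dependence on point order that grid-based methods such as convolution implicitly rely on.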
Recently, penalized regression methods have been extensively studied in the literature, but it is still difficult to generalize a single method so that it performs well across various applications, particularly those that suffer from heterogeneity and collinearity. For that reason, we compare the developed machine learning methods using numerical simulations for various scenarios. The methods applied in this research are the elastic net and the adaptive lasso with weights based on a quantile regression estimator at τ = 0.25, 0.5, 0.75, as remedies for these issues; in addition, we apply the adaptive lasso with weights based on ridge regression. The methods' performance is assessed by the criteria RSS, RMSE, MAE, MAPE, and MASE. Overall, the findings of the application show that the quantile elastic-net regression generally outperforms all other methods with respect to all measures.
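The following is a rough, hedged Python sketch of one of the compared estimators (an adaptive lasso whose penalty weights come from a preliminary quantile regression fit at τ = 0.5), using scikit-learn on simulated data; the simulation design, penalty values and γ are placeholder assumptions, not the study's actual setup.

    # Hedged sketch: adaptive lasso with weights from a pilot quantile regression
    # (tau = 0.5), evaluated by RMSE on simulated heavy-tailed data.
    import numpy as np
    from sklearn.linear_model import QuantileRegressor, Lasso

    rng = np.random.default_rng(1)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    y = X @ beta + rng.standard_t(df=3, size=n)     # heavy-tailed noise (assumption)

    # Step 1: pilot quantile regression at tau = 0.5 gives initial coefficients.
    pilot = QuantileRegressor(quantile=0.5, alpha=0.0, solver="highs").fit(X, y)
    gamma = 1.0
    weights = 1.0 / (np.abs(pilot.coef_) + 1e-6) ** gamma

    # Step 2: adaptive lasso = ordinary lasso on rescaled columns X_j / w_j,
    # then rescale the coefficients back.
    X_scaled = X / weights
    lasso = Lasso(alpha=0.1).fit(X_scaled, y)
    coef_adaptive = lasso.coef_ / weights

    rmse = np.sqrt(np.mean((y - X @ coef_adaptive - lasso.intercept_) ** 2))
    print(np.round(coef_adaptive, 2), "RMSE:", round(rmse, 3))

Because the pilot fit targets the median rather than the mean, the resulting weights are less affected by outliers, which is the motivation for combining quantile regression with the adaptive lasso in heterogeneous settings.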
In medicine, artificial intelligence (AI) and machine learning (ML) promise vast advances. We start by providing an overview of current applications of AI and ML in cardiovascular medicine. Furthermore, we will provide an example for the development and validation of an explainable ML model from cardiovascular medicine. Finally, we discuss the role of statistics in AI/ML applications.
Alice Violaine Saletta, Gustavo Hernan Diaz, Shreya Santra and Kazuya Yoshida
Department of Aerospace Engineering, Tohoku University, Japan
“Teaching by showing” is a topic that has been widely explored in the history of robotics research. The idea of having a system that can understand, using vision, what action is performed by a human, and make a robot able to reproduce it, is indeed an interesting approach. While past decades' research was limited by the technologies available at the time, it is our purpose to exploit current state-of-the-art AI algorithms along with up-to-date computer vision systems to bring it to the next level. Our main goal is to use the “teaching by showing” technique in an AI scenario for assembly applications of space structures, benefiting from the advanced state of today's tools.
The most important aspect is to ensure that the robot does not just repeat the same movement it sees, but actually understands the action it is performing. For example, in the case of an assembly task, the robot is supposed to perform it regardless of the initial conditions of the individual pieces or the position of the final assembled piece; it should act as closely as possible to a human, recognizing the individual pieces and understanding the correct way to join them. What is required, then, is a semantic and logical understanding of the connections.
For this purpose, we will develop an AI system in charge of identifying an action from a demonstration. We will then integrate its output commands into our controller and motion planner so that the robot can achieve the assembly task.
Despite their widespread adoption, the inner workings of Deep Neural Networks (DNNs) remain largely unknown. One key aspect of DNN learning is how the hidden layers' activation patterns encode the ground-truth label that the network is supposed to predict. This task-relevant information is represented jointly by groups of neurons within a layer. However, the specific way in which this mutual information about the classification label is distributed among the individual neurons is not well understood: While parts of it may only be obtainable from specific single neurons, other parts are carried redundantly or synergistically by multiple neurons. A better understanding of this representation may in the future help to identify problems during the training process or guide the selection of a network architecture.
We show how Partial Information Decomposition (PID), a recent extension of information theory, can disentangle these different contributions. From this, we introduce the measure of "Representational Complexity", which quantifies the difficulty of accessing information spread across multiple neurons. The quantity reflects the smallest number of neurons that need to be considered jointly to retrieve a piece of information on average: A representational complexity of C=1 means that all pieces of information can be obtained from single neurons, while C is equal to the total number of neurons if the information is encoded synergistically between all of them. We show how this complexity is directly computable for smaller layers. For larger layers, we propose subsampling and coarse-graining procedures and prove corresponding bounds on the latter.
Empirically, for quantized deep neural networks solving the MNIST and CIFAR10 image classification tasks, we observe that representational complexity decreases both through successive hidden layers and over training, and compare the results to related measures. Overall, we propose representational complexity as a principled and interpretable summary statistic for analyzing the structure and evolution of neural representations and complex systems in general.
Current deep-learning-based AI allows integrating a lot of the written material humans have created over generations and helps people in abstracting information, learning, making decisions, etc. However, whether this type of AI is directly transferable to robots is not yet clear. In order to act in an environment, robots have to bridge the gap between continuous sensing and action on one side and the pool of symbolic knowledge on the other. A way for an artificial system to perform a smooth transition from sensor information to concept-like entities will be discussed in this contribution.
With the progress of machine learning and artificial intelligence, various research is being conducted in dentistry to support diagnosis and treatment, prevent medical errors, and improve patients' QOL by introducing these technologies. In order to infer the contents of dental treatments and acquire feature representations of surgeons' behavior, the Joint Research Department for Oral Data Science of Osaka University Dental Hospital, with the cooperation of Morita Corp. and its group companies, collects various data such as videos recorded in the operating room and sensor data obtained from a dental chair during treatments. We are conducting AI research using these data to support dental practice, for instance by automatically filling out electronic medical records (EMR) and implementing safety measures. We will present the findings of our study as well as the initiatives we hope to implement.
Neural networks rely on coordination among individual neurons to perform complex tasks, but in the brain, they must operate within the constraints of locality for both computation and learning. Our research uses an information-theoretic approach to better understand how locality affects neural networks' structure and operation. We employ Partial Information Decomposition (PID) to quantify unique, redundant, and synergistic information contributions to a neuron's output from multiple groups of inputs. Using this conceptualization, we derive a general, parametric local learning rule. This rule allows for the construction of networks that consist of locally learning neurons, which can perform tasks from supervised, unsupervised, and associative memory learning. We have recently scaled our approach, demonstrating its potential as an alternative to deep neural networks. Our framework provides a powerful tool for investigating the information-theoretic principles underlying the operation of living neural networks and may facilitate the development of locally learning artificial neural networks that function more closely to the brain.
In order to continue the exchange and lively discussion we invite you to stay at the venue a bit longer. We will provide some drinks and finger food in the foyer of the Alte Mensa.
Joint session with AI conference participants and presidents on “What Opportunities and Challenges does Artificial Intelligence offer to the Higher Education Sector?”
The panel consists of the presidents of the HeKKSaGOn partner universities.
Please see the program of the HeKKSaGOn Presidents Meeting for more information and venues for the different working groups.
https://www.uni-goettingen.de/en/hekksagon+2023/668295.html