Scientific program

November 19, 2021, London, UK

Webinar on

Artificial Intelligence and Robotics


Keynote Forum

Chaouachi Wassim

Technische Universität München, France

Title: Facial detection and recognition in social networks (images, videos)

Abstract:

Within the framework of expert advice learning strategies, one needs experts that are good enough in terms of performance, causality, and stability. Indeed, an expert advice online learning algorithm is an algorithm that deduces its prediction from the advice of its experts. Having well-performing experts increases the performance of the algorithm; it is therefore necessary to improve their performance. To achieve this objective, we constructed an objective function that reflects the performance of our experts, and we defined a notion of causality inspired by the causality of brain neurons. In the case of our experts (technical indicators), we cannot determine with certainty the regularity of our objective functions, which differs from one expert to another. This lack of information on regularity, together with the large number of functions to be optimized, pushed us to look beyond classical convex optimization and to consider another type of optimization: evolutionary learning.
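The abstract does not spell out either the aggregation rule for the expert advice or the evolutionary optimizer it alludes to. As an illustration of the latter point only, the following sketch shows a generic (1+1) evolution strategy, a derivative-free method that can optimize an objective whose regularity is unknown; the function names, the step-size rule, and the toy objective are assumptions made here for illustration, not the speaker's actual method.

import numpy as np

def one_plus_one_es(objective, x0, sigma=0.1, iterations=2000, seed=0):
    """Minimal (1+1) evolution strategy (illustrative sketch).

    Derivative-free: it only evaluates `objective`, so it needs no
    smoothness or convexity assumptions about the function."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    for _ in range(iterations):
        candidate = x + sigma * rng.standard_normal(x.shape)  # Gaussian mutation
        fc = objective(candidate)
        if fc <= fx:                 # keep the candidate only if it is no worse
            x, fx = candidate, fc
            sigma *= 1.5             # success: widen the search (simplified 1/5 rule)
        else:
            sigma *= 0.9             # failure: narrow the search
    return x, fx

# Toy stand-in for an expert's (negated) performance score, assumed for this example.
objective = lambda p: float(np.sum(p**2) + 0.3 * np.sum(np.abs(np.sin(5 * p))))
best_params, best_value = one_plus_one_es(objective, x0=np.ones(4))

In the expert-advice setting the abstract describes, each expert's parameters could be tuned with such a black-box optimizer and the resulting experts combined by any standard prediction-with-expert-advice rule.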

Biography:

Chaouachi Wassim completed his Master's degree in Applied Mathematics and Machine Learning at Ecole Normale Supérieure and Paris-Dauphine University at the age of 24. He is a Quantitative Portfolio Manager at one of the leading hedge funds in Europe.

Darius Chile

LFIS Capital, United Kingdom

Title: Prevalence, risk factors, and antibiotic resistance of Staphylococcus aureus and MRSA nasal carriage among the healthy population in Ibadan, Nigeria

Abstract:

The current AI approaches based on Deep Learning were originally developed for fast data queries in large datasets for search engines, social media, and advertising. The common property of these fields is that the results are not used in critical decision (control) loops of a robotic system; they serve instead as an index key for finding previously stored information that is similar to the current situation. This origin resulted in a strong development of the data-labeling direction, which is essential for fast data association. In my talk, I want to discuss the extensions that need to be added to current AI approaches to make them applicable to decisions on robotic systems. While these approaches become increasingly better at answering the "what is there?" question, a robotic system additionally requires information about the confidence of each query. A 95% accurate system running for 24 hours fails for 72 minutes per day; the control system needs to identify these periods to prevent damage to the system and the surrounding environment. Additionally, control usually relies on more than a single sensor, and for robust data fusion a (metric) error covariance is important. I show ways to achieve this goal in the DL context. The last step is a discussion of temporal extensions of current AI approaches, which need to understand not only the current snapshot of the scene but also its temporal evolution in order to grasp the current context and model dynamic events. I will present our initial work on temporal scene modeling and discuss the updates to benchmarking in current AI that are necessary to make it applicable to robotics.
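As a hedged illustration of the covariance point above (not the speaker's implementation), the sketch below turns an ensemble of metric network outputs into a mean plus an error covariance and fuses it with a second sensor estimate in a Kalman-style update; the ensemble-as-uncertainty choice and all names are assumptions made for this example.

import numpy as np

# 5% downtime of a "95% accurate" detector: 0.05 * 24 h * 60 min/h = 72 min/day.

def ensemble_estimate(predictions):
    """Mean and sample covariance of an ensemble of metric predictions,
    e.g. 3-D object positions from N independently trained networks
    (shape (N, d)). Using the ensemble spread as uncertainty is an
    assumption of this sketch."""
    predictions = np.asarray(predictions, dtype=float)
    return predictions.mean(axis=0), np.cov(predictions, rowvar=False)

def fuse(mean_a, cov_a, mean_b, cov_b):
    """Covariance-weighted fusion of two estimates of the same quantity
    (static Kalman update): the more confident estimate dominates."""
    gain = cov_a @ np.linalg.inv(cov_a + cov_b)
    mean = mean_a + gain @ (mean_b - mean_a)
    cov = (np.eye(mean_a.size) - gain) @ cov_a
    return mean, cov

# Example: fuse a vision-based position estimate with a lidar-based one.
rng = np.random.default_rng(0)
vision_mean, vision_cov = ensemble_estimate(rng.normal([1.0, 2.0, 0.5], 0.05, size=(8, 3)))
lidar_mean, lidar_cov = np.array([1.02, 2.01, 0.48]), np.diag([1e-4, 1e-4, 4e-4])
fused_mean, fused_cov = fuse(vision_mean, vision_cov, lidar_mean, lidar_cov)

A controller can then gate its decisions on the fused covariance, for example by refusing to act when its trace exceeds a threshold, which is one way to handle the 72 minutes per day when the perception stream is unreliable.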

Biography:

Darius Burschka received his Ph.D. degree in Electrical and Computer Engineering in 1998 from the Technische Universität München in the field of vision-based navigation and map generation with binocular stereo systems. In 1999, he was a Postdoctoral Associate at Yale University, Connecticut, where he worked on laser-based map generation and landmark selection from video images for vision-based navigation systems. From 1999 to 2003, he was an Associate Research Scientist at Johns Hopkins University, Baltimore, Maryland, and from 2003 to 2005 he was an Assistant Research Professor in Computer Science there. Currently, he is a Professor in Computer Science at the Technische Universität München, Germany, where he heads the Machine Vision and Perception group, and he is a member of the Scientific Board of the Munich School for Robotics and Machine Intelligence (MSRM). His areas of research are sensor systems for mobile and medical robots and human-computer interfaces. The focus of his research is on vision-based navigation and three-dimensional reconstruction from sensor data. He is a Senior Member of the IEEE.

Speakers

Marcela Del

Universidad Militar Nueva Granada, Colombia

Title: Discussion on explainable AI for Robotic applications

Abstract:

Gender violence is a public health issue that affects women and children globally. According to the UN, “35 percent of women worldwide have experienced either physical and/or sexual intimate partner violence or non-partner sexual violence” (UN Women, n.d.). Indeed, the WHO found that “almost one-third of all women who have had a relationship have suffered physical or sexual violence at the hands of their partner” (WHO, 2017). Within the scope of gender violence, femicide is a phenomenon that occurs as a consequence of cycles of violence against a woman, and its rates continue to grow globally. “A total of 87,000 women were intentionally killed in 2017. More than half of them (58 percent), 50,000, were killed by intimate partners or family members […] More than a third (30,000) […] were killed by their current or former intimate partner, someone they would normally expect to trust” (UNODC, 2018, p. 10). Meanwhile, we are seeing great advances in AI and the use of machine learning and deep learning to create algorithms for risk prediction. Tools that aim to determine the level of femicide risk have been developed in Spain and Canada, for example VioGén, the Ontario Domestic Assault Risk Assessment (ODARA), and the Domestic Violence Risk Appraisal Guide (DVRAG). When building such tools, and considering that risk determination will be carried out by an algorithm, it is pertinent to analyze how the algorithm should be built, how information is collected, and how to decide which variables to include or exclude. Also, as the algorithm becomes autonomous thanks to machine learning, the so-called black box plays an important role: we cannot know the internal workings of the algorithm and how it determines the level of risk. Therefore, the research question that arises is: which variables need to be considered when building algorithms to determine risk in the prevention of gender violence? To answer this, an inductive qualitative methodology is used to analyze primary sources, secondary sources, and case studies (algorithms). The results show that there is a need to evaluate situational and trigger factors, as well as factors related to the perpetrator, the victim, and the type of relationship (prior violence, threats of homicide).

Biography:

Marcela completed a Ph.D. at the age of 40 at Tehran University and postdoctoral studies at the Tehran University School of Surveying and Geospatial Engineering, Department of Surveying and Geomatics Engineering. He is the director of the Directorate of Engineering and Transportation, a premier service organization. He has published more than 15 papers in reputed journals and has been serving as an editorial board member of journals of repute. His responsibilities include opening and studying financial offers, organizing the fundamental record, supervising the efficiency of electrical generators at the Nseeb border center, and supervising the efficiency of agricultural machinery at the Ministry of Agriculture.