We are delighted to announce the draft program for SafeComp 2021. Join us for four days of workshops, keynotes, presentations, and discussions.
SafeComp 2021 is a hybrid conference, taking place in York (UK) and online.
"Empowering data with knowledge and reasoning"
Adnan Darwiche is a professor and former chairman of the computer science department at UCLA.
He directs the Automated Reasoning Group, which focuses on symbolic reasoning, probabilistic reasoning and machine learning.
Professor Darwiche is a Fellow of AAAI and ACM. He is a former editor-in-chief of the Journal of Artificial Intelligence Research (JAIR) and the author of "Modeling and Reasoning with Bayesian Networks," published by Cambridge University Press.
"Driver reactions to autonomous vehicles"
For almost three decades, Professor Stanton and his research team have been testing the effects of automated driving on drivers: in simulators, on test tracks, and on open roads.
These studies have revealed that drivers of automated vehicles are less able to respond in an emergency than when driving manually. Professor Stanton argues that the role of continuously monitoring automation while intervening only very occasionally is (almost) impossible for drivers to perform effectively, particularly over an extended duration. In fact, if drivers attempt to monitor the automation as they are expected to, it places a greater mental demand on them than driving manually; in any case, they cannot sustain this level of attention for long. What happens in reality is that drivers adopt a more passive ‘passenger’ mentality and start engaging with other tasks and devices in their vehicles, because watching vehicle automation for any extended period is very boring. These studies have led Professor Stanton and his team to the conclusion that partially automated driving, especially where the driver is expected to monitor and intervene, is a really bad idea. In this provocative presentation, he will present some of his research team’s studies in simulators and on UK roads to explain why partially automated vehicles crash.
Professor Neville Stanton, PhD, DSc, is a Chartered Psychologist, Chartered Ergonomist and Chartered Engineer. He is a Professor Emeritus in Human Factors Engineering at the University of Southampton in the UK. He has degrees in Occupational Psychology, Applied Psychology and Human Factors Engineering and has worked at the Universities of Aston, Brunel, Cornell and MIT. His research interests include modelling, predicting, analysing and evaluating human performance in systems, as well as designing the interfaces and interaction between humans and technology.
Professor Stanton has worked on the design of automobiles, aircraft, ships and control rooms over the past 30 years, on a variety of automation projects. He has published 50 books and almost 400 journal papers on Ergonomics and Human Factors. In 1998 he was presented with the Institution of Electrical Engineers Divisional Premium Award for research into System Safety. The Institute of Ergonomics and Human Factors in the UK awarded him The Otto Edholm Medal in 2001, The President’s Medal in 2008 and 2018, The Sir Frederic Bartlett Medal in 2012 and The William Floyd Medal in 2019 for his contributions to basic and applied ergonomics research. The Royal Aeronautical Society awarded him and his colleagues the Hodgson Prize in 2006 for research on design-induced flight-deck error, published in The Aeronautical Journal. The University of Southampton awarded him a Doctor of Science in 2014 for his sustained contribution to the development and validation of Human Factors methods.
"Designing interaction and interfaces for automated vehicles", Neville Stanton, Kirsten M. A. Revell, Patrick Langdon
"Thoughts on a cybersecurity framework for protecting machine learning /AI systems"
Sadie Creese is Professor of Cyber Security in the Department of Computer Science at the University of Oxford, where she teaches operational aspects of cybersecurity, including threat detection and security architectures. Her current research portfolio includes: predicting organisational Cyber-Value-at-Risk; the potential for systemic cyber-risk; agent-based simulations for understanding malware and ransomware attack propagation; threat detection, especially insider threats and threats to AI; visual analytics; risk-propagation logics; resilience strategies for business; vulnerability of blockchains; and the Cyber Security Capacity Maturity Model for Nations (CMM). Sadie is the founding Director of the Global Cyber Security Capacity Centre (GCSCC) at the Oxford Martin School, where she continues to serve as a Director conducting research into what constitutes national cybersecurity capacity, working with countries and international organisations around the world. She was the founding Director of Oxford’s cybersecurity network, launched in 2008 and now called CyberSecurity@Oxford, and a member of the World Economic Forum’s Cyber Security Centre’s Strategic Advisory Board. She was also a Technical Advisor to the Government of Japan (GOJ) and the World Economic Forum joint project on International Data Flow Governance, ‘Advancing the Osaka Track’. Most recently, Sadie has become Course Director for the Saïd Business School’s online programme Cybersecurity for Business Leaders.