It’s risk, Jim, but not as we know it: identifying the risks associated with future Artificial General Intelligence-based Unmanned Combat Aerial Vehicle systems
Event Type
Virtual Program Session
Human AI Robot Teaming (HART)
Time
Wednesday, October 12th, 2:45pm - 3:00pm EDT
Description
The next generation of artificial intelligence, known as Artificial General Intelligence (AGI), could either revolutionise or destroy humanity. Human Factors and Ergonomics (HFE) has a critical role to play in the design of safe and ethical AGI; however, there is little evidence that HFE is contributing to development programs. This paper presents the findings from a study that applied the Work Domain Analysis-Broken Nodes approach to identify the risks that could emerge in a future ‘envisioned world’ AGI-based unmanned combat aerial vehicle system. The findings demonstrate that there are various potential risks, and that the most critical arise not from poor performance, but rather when the AGI attempts to achieve goals at the expense of other system values, or when the AGI becomes ‘super-intelligent’ and humans can no longer manage it. The urgent need for further work exploring the design of AGI controls is emphasised.