Human-compatible artificial intelligence – Stuart Russell, University of California
It is reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Alan Turing and others have suggested? Will we lose control over our future? Or will AI complement and augment human intelligence in beneficial ways? It turns out that both views are correct, but they are talking about completely different forms of AI. To achieve the positive outcome, a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will argue that this is possible as well as necessary. The new approach to AI opens up many avenues for research and brings into sharp focus several questions at the foundations of moral philosophy.
Introduction
This conference, organized under the auspices of the Isaac Newton Institute “Mathematics of Deep Learning” Programme, brings together leading researchers along with other stakeholders in industry and society to discuss issues surrounding trustworthy artificial intelligence.
The conference will survey the state of the art in the broad area of trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety; it will also cover emerging directions in the field and engage with academia, industry, policy makers, and the wider public.