Guidelines for Artificial Intelligence Containment

07/24/2017
by James Babcock, et al.

With almost daily improvements in the capabilities of artificial intelligence, it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on the AI Containment Problem, we propose a number of guidelines that should help AI safety researchers develop reliable sandboxing software for intelligent programs of all capability levels. Such safety container software will make it possible to study and analyze intelligent artificial agents while maintaining a certain level of protection against information leakage, social engineering attacks, and cyberattacks originating from within the container.
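As a rough illustration of the kind of sandboxing the guidelines concern (this is a hypothetical sketch, not the authors' proposed system), a minimal POSIX container might run untrusted agent code in a child process with hard resource limits and no inherited environment:

```python
import resource
import subprocess
import sys

def run_contained(code: str, cpu_seconds: int = 2,
                  mem_bytes: int = 256 * 1024 * 1024) -> subprocess.CompletedProcess:
    """Execute untrusted Python code in a child process with hard
    CPU-time and address-space limits (POSIX only)."""
    def limit():
        # Runs in the child just before exec: these are hard kernel-enforced
        # caps, not advisory limits the child can ignore.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
        capture_output=True,
        text=True,
        preexec_fn=limit,
        timeout=cpu_seconds + 5,  # wall-clock backstop on top of the CPU cap
    )

result = run_contained("print(2 + 2)")
```

Real containment per the paper would also need to address covert channels and social-engineering vectors, which no resource limit covers; this sketch only bounds compute and memory.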
