AI Safety Gridworlds

By: Alexander Mervar

The next few posts will cover several white papers I have been reading this summer. I hope you enjoy these texts as much as I did.


Read the Paper Here!

Researchers at DeepMind have observed that, throughout the development of AI agents, there has been no standard, static set of data sets or environments for testing the safety of AI systems. Without such environments, reliable research cannot be done on solutions to the problems of safe interruptibility, avoiding side effects, absent supervisor, reward gaming, self-modification, distributional shift, robustness to adversaries, and safe exploration. Thus, to address these eight AI safety problems, the researchers provide gridworlds that act as testing environments for measuring both the effectiveness of an AI system and its safety. Each world is given different objectives and areas to measure.
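
A key design point in the paper is that the agent only ever optimizes a visible reward signal, while a separate, hidden performance function also scores how safely it behaved. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not DeepMind's implementation, and every name in it (ToyGridworld, HIDDEN_PENALTY, the hazard locations) is made up for illustration.

```python
# Hypothetical sketch of the "visible reward vs. hidden safety performance"
# pattern used by the gridworlds. Not the DeepMind code.

import random

class ToyGridworld:
    """A 5x5 grid: reaching the goal earns reward; stepping on a hazard
    silently reduces the hidden safety performance but not the reward."""

    GOAL = (4, 4)
    HAZARDS = {(2, 2), (3, 1)}     # illustrative "unsafe" cells
    HIDDEN_PENALTY = 50            # deducted from performance, never shown to the agent

    def __init__(self):
        self.pos = (0, 0)
        self.episode_reward = 0
        self.hidden_performance = 0

    def step(self, action):
        """action is one of 'up', 'down', 'left', 'right'."""
        moves = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}
        dr, dc = moves[action]
        r = min(max(self.pos[0] + dr, 0), 4)
        c = min(max(self.pos[1] + dc, 0), 4)
        self.pos = (r, c)

        reward = -1                 # small per-step movement cost
        if self.pos == self.GOAL:
            reward += 50
        self.episode_reward += reward
        self.hidden_performance += reward
        if self.pos in self.HAZARDS:
            # Safety violation: only the hidden performance is affected.
            self.hidden_performance -= self.HIDDEN_PENALTY

        done = self.pos == self.GOAL
        return self.pos, reward, done


# Evaluate a random agent: an unsafe agent can look fine on visible reward
# while the hidden performance exposes its safety failures.
env = ToyGridworld()
done = False
while not done:
    _, _, done = env.step(random.choice(['up', 'down', 'left', 'right']))

print("visible reward:", env.episode_reward)
print("hidden safety performance:", env.hidden_performance)
```

In the actual gridworlds the agent never observes this hidden performance score, which is what allows researchers to detect agents that do well on the visible reward while behaving unsafely.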

Without this work, AI development would be in a far more precarious position. By providing these static environments and making them publicly available on GitHub, the authors of AI Safety Gridworlds allow for easy and precise measurement of the effectiveness of many different types of AI agents, both those already built and those yet to be created. This public availability also incentivizes rapid growth in AI development on a global scale, which in turn calls for further research to educate ourselves about the potential risks and dangers of AI development, research the authors of AI Safety Gridworlds recognize as necessary and subject to change.

Because these gridworlds are immediately applicable, they fit directly into the post-development stage of building an AI system. By testing candidate agents on these worlds, one can recognize flaws in an agent's behavior and make the adjustments necessary to improve its performance and safety.