A hazard analysis framework for large code synthesis language models


Codex, a large language model (LLM) trained on a variety of codebases, exceeds the previous state of the art in its capacity to synthesize and generate code. While Codex provides many benefits, models that can generate code at this scale have significant limitations, alignment problems, the potential to be misused, and the possibility of accelerating progress in technical fields that may themselves have destabilizing effects or misuse potential. Yet such safety impacts are not yet known or remain to be explored. In this paper, we present a hazard analysis framework constructed at OpenAI to uncover the hazards or safety risks that deployment of a Codex-like model may impose technically, socially, politically, and economically. The analysis is informed by a novel evaluation framework that determines the capacity of advanced code generation techniques against the complexity and expressivity of specification prompts, and their capability to understand and execute them relative to human ability.
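
To make the kind of evaluation described above concrete, the sketch below shows one plausible shape such a capability harness could take: specification prompts are labeled by complexity, candidate generations are executed against reference checks, and pass rates are reported per complexity bucket. This is a minimal illustration under assumed names (SpecTask, evaluate_model, generate_fn), not the framework actually used in the paper.

```python
"""Minimal sketch (not from the paper) of a spec-complexity evaluation harness."""
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SpecTask:
    """A specification prompt paired with a complexity label and a checker."""
    prompt: str                    # natural-language / docstring specification
    complexity: str                # e.g. "single-step", "multi-step", "compositional"
    check: Callable[[str], bool]   # returns True if the generated code meets the spec


def evaluate_model(generate_fn: Callable[[str], str],
                   tasks: List[SpecTask]) -> Dict[str, float]:
    """Return the fraction of passing generations, bucketed by prompt complexity."""
    passed: Dict[str, int] = {}
    total: Dict[str, int] = {}
    for task in tasks:
        total[task.complexity] = total.get(task.complexity, 0) + 1
        candidate = generate_fn(task.prompt)
        try:
            ok = task.check(candidate)
        except Exception:
            ok = False             # a crashing check counts as a failure
        if ok:
            passed[task.complexity] = passed.get(task.complexity, 0) + 1
    return {c: passed.get(c, 0) / n for c, n in total.items()}


if __name__ == "__main__":
    # Toy usage: one trivial task checked by executing the candidate code.
    def check_add(code: str) -> bool:
        scope: dict = {}
        exec(code, scope)          # unsafe outside a sandbox; illustration only
        return scope.get("add", lambda *a: None)(2, 3) == 5

    tasks = [SpecTask("Write a function add(a, b) that returns a + b.",
                      "single-step", check_add)]
    fake_model = lambda prompt: "def add(a, b):\n    return a + b\n"
    print(evaluate_model(fake_model, tasks))   # {'single-step': 1.0}
```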


