Assured Automatic Programming
Abstract
With the advancement of AI-based code generation, natural language requirements can now be transformed into executable code in standard programming languages. However, AI-generated code is often unreliable: it may introduce safety risks or fail to accurately reflect the user's actual intent. Existing software assurance techniques, such as testing and software verification, can help assess its reliability, but they typically depend on additional trustworthy artifacts, such as test cases or formal specifications. When these artifacts are themselves auto-generated, the outcomes of testing and verification become inherently unreliable. This talk discusses the challenges that auto-generated code poses for existing software assurance techniques and proposes an approach to establishing trust in auto-generated artifacts even when no single artifact is fully trustworthy.
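
To make the circularity concrete, the minimal Python sketch below is a hypothetical illustration (not drawn from the talk itself): an AI-generated function and an AI-generated test that share the same misreading of a requirement, so the test passes even though the user's intent is violated.

    # Hypothetical illustration: the requirement asks for the *median* of a list,
    # but the model interprets it as the *mean*.

    def middle_value(xs: list[float]) -> float:
        """Intended requirement: return the median of xs."""
        # AI-generated body: computes the mean instead of the median.
        return sum(xs) / len(xs)

    # An auto-generated test derived from the same misunderstanding of the
    # requirement. It passes, yet for [1, 2, 9] the median is 2, not 4.
    def test_middle_value():
        assert middle_value([1.0, 2.0, 9.0]) == 4.0

    if __name__ == "__main__":
        test_middle_value()
        print("Test passed, but the code does not compute the median.")

Because the code and the test were produced from the same flawed interpretation, the passing test offers no independent evidence of correctness; this is the kind of gap the talk's proposed approach aims to address.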