We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems. Our analysis yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for Network Robustness. One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (Safety Alignment with Introspective Reasoning), guides models to think more carefully before responding. While, as we mentioned earlier, there can be thorny “Clever Hans” issues when humans prompt LLMs, an automated verifier that mechanically backprompts the LLM does not suffer from them.
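The verifier-in-the-loop setup in the last sentence can be made concrete with a small driver loop: the model proposes a solution, an automated checker either accepts it or returns an error report, and that report is fed back verbatim as the next prompt. The `generate` and `verify` callables and the prompt wording below are placeholders rather than part of any system described here; this is only a sketch of the control flow.

```python
def backprompt_loop(task, generate, verify, max_rounds=5):
    """Mechanical backprompting: the verifier, not a human, drives revisions.

    generate(prompt)  -> candidate solution text    (assumed interface)
    verify(candidate) -> (ok: bool, feedback: str)  (assumed interface)
    """
    prompt = task
    for _ in range(max_rounds):
        candidate = generate(prompt)
        ok, feedback = verify(candidate)
        if ok:
            return candidate  # verified solution
        # Feed the verifier's error report back verbatim; no human rephrasing,
        # so there is no room for "Clever Hans"-style hinting.
        prompt = (
            f"{task}\n\nYour previous attempt failed verification with:\n"
            f"{feedback}\n\nPlease fix the solution."
        )
    return None  # no verified solution within the round budget
```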
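Returning to the CLEVER robustness score named above: it bounds how far an input must be perturbed before the margin between the predicted class and a target class can reach zero, by dividing that margin by an extreme-value estimate of the local Lipschitz constant of the margin. The sketch below follows that recipe under simplifying assumptions (an L2 ball, user-supplied `margin` and `grad_margin` callables, and illustrative sampling sizes); it approximates the published estimator rather than reproducing it exactly.

```python
import numpy as np
from scipy.stats import weibull_max


def clever_score(x0, margin, grad_margin, radius=0.5,
                 n_batches=50, batch_size=100, rng=None):
    """Extreme-value estimate of a robustness lower bound at x0 (numpy array).

    margin(x)      -> f_c(x) - f_j(x), score gap between predicted and target
                      class (user-supplied, assumed here).
    grad_margin(x) -> gradient of margin at x (user-supplied, assumed here).
    """
    rng = np.random.default_rng() if rng is None else rng
    batch_maxima = []
    for _ in range(n_batches):
        norms = []
        for _ in range(batch_size):
            # Sample a point uniformly inside an L2 ball of the given radius.
            direction = rng.normal(size=x0.shape)
            direction /= np.linalg.norm(direction)
            r = radius * rng.uniform() ** (1.0 / x0.size)
            x = x0 + r * direction
            norms.append(np.linalg.norm(grad_margin(x)))
        batch_maxima.append(max(norms))
    # Fit a reverse Weibull distribution to the batch maxima; its location
    # parameter estimates the maximum gradient norm, i.e. the local
    # Lipschitz constant of the margin.
    _, loc, _ = weibull_max.fit(batch_maxima)
    lipschitz_estimate = max(loc, 1e-12)
    # Robustness lower bound: margin at x0 divided by the Lipschitz estimate.
    return margin(x0) / lipschitz_estimate
```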
Hook it up with TaskConfig, our handy layer for crafting clever input templates and grabbing outputs reliably via JMESPath, and switching agents becomes effortless, with no extra fiddling needed. Our benchmark structure ensures reproducibility by locking in versions. With a clever use of the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines (i) a preference optimization loss that directly aligns the policy with human preferences, and (ii) a supervised learning loss that explicitly pushes the policy to imitate a baseline distribution.
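Spelled out, an objective of that shape pairs a preference term with a supervised imitation term toward a baseline distribution. The version below uses a DPO-style preference loss purely for illustration; $\sigma$ is the logistic sigmoid, $\pi_{\mathrm{ref}}$ the baseline/reference policy, and $\beta$ and $\lambda$ are coefficients that this document does not specify.

$$
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,y^{+},\,y^{-})}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y^{+}\mid x)}{\pi_{\mathrm{ref}}(y^{+}\mid x)} - \beta \log \frac{\pi_\theta(y^{-}\mid x)}{\pi_{\mathrm{ref}}(y^{-}\mid x)}\right)\right] \;+\; \lambda\,\mathbb{E}_{(x,\,y)\sim \pi_{\mathrm{ref}}}\!\left[-\log \pi_\theta(y\mid x)\right]
$$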
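Going back to the TaskConfig layer mentioned at the start of this paragraph, here is a minimal sketch of how such a layer could look, assuming the real `jmespath` Python package for output extraction; the class name, field names, and the example response layout are illustrative and not taken from any particular implementation.

```python
from dataclasses import dataclass

import jmespath


@dataclass
class TaskConfig:
    """Minimal sketch: render a prompt template and extract the reply field."""
    prompt_template: str   # e.g. "Summarize: {text}"
    output_path: str       # JMESPath expression into the agent's raw response

    def render(self, **kwargs) -> str:
        return self.prompt_template.format(**kwargs)

    def extract(self, raw_response: dict):
        # jmespath.search returns None if the expression does not match.
        return jmespath.search(self.output_path, raw_response)


# Hypothetical usage with an OpenAI-style response layout.
config = TaskConfig(
    prompt_template="Summarize: {text}",
    output_path="choices[0].message.content",
)
prompt = config.render(text="Large language models can be backprompted by verifiers.")
raw = {"choices": [{"message": {"content": "Verifiers can drive LLM revisions."}}]}
print(config.extract(raw))  # -> "Verifiers can drive LLM revisions."
```

Keeping the extraction path as a JMESPath expression in the config is what makes agent swaps cheap: when the response layout changes, only the expression changes, not the surrounding code.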