Authors: Michael Thompson, Timothy Vidas



The DARPA Cyber Grand Challenge (CGC) pitted autonomous machines against one another in a battle to discover, mitigate, and take advantage of software vulnerabilities. Competitors repeatedly formulated and submitted binary software both to attack opponents and to mitigate the attacks those opponents mounted. The US Government sought confidence that competitors legitimately won their rewards (a prize pool of up to $6.75 million USD), and competitors deserved evidence that all parties operated in accordance with the rules, which prohibited attempts to subvert the competition infrastructure. To support those goals, we developed an analysis system to vet competitor software submissions destined for execution on the competition infrastructure: the classic problem of running untrusted software.