Fine Grained Dataflow Tracking with Proximal Gradients
Dataflow tracking with Dynamic Taint Analysis (DTA) is an important method in systems security with many applications, including exploit analysis, guided fuzzing, and side-channel information leak detection. However, DTA is fundamentally limited by the Boolean nature of taint labels, which provide no information about the significance of detected dataflows and lead to false positives and false negatives on complex real-world programs. We introduce proximal gradient analysis (PGA), a novel, theoretically grounded approach that can track more accurate and fine-grained dataflow information than dynamic taint analysis. We observe that the gradients of neural networks precisely track dataflow and have been widely used for dataflow-guided tasks such as generating adversarial inputs and interpreting model decisions. However, programs, unlike neural networks, contain many discontinuous operations for which gradients cannot be computed. Our key insight is that we can efficiently approximate gradients over these discontinuous operations by computing proximal gradients, a mathematically rigorous generalization of gradients to discontinuous functions. Proximal gradients allow us to apply the chain rule of calculus to accurately compose and propagate gradients over a program with minimal error. We compare our prototype PGA implementation against two state-of-the-art DTA implementations, DataFlowSanitizer and libdft, on 7 real-world programs. Our results show that PGA can improve the F1 accuracy of dataflow tracking by up to 33% (… on average). We further demonstrate the effectiveness of PGA by discovering 23 previously unknown security vulnerabilities and 2 side-channel leaks, and by analyzing 9 existing CVEs in the tested programs.
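For readers unfamiliar with the notion, the following is a standard definition from proximal calculus (not a formula taken from this abstract) that conveys the sense in which proximal gradients generalize ordinary gradients. For a suitable (e.g., proper, closed, convex) function f and smoothing parameter \lambda > 0, the proximal operator and the gradient of the associated Moreau envelope M_{\lambda f} are

\[
\operatorname{prox}_{\lambda f}(x) \;=\; \arg\min_{u}\Big( f(u) + \tfrac{1}{2\lambda}\,\lVert u - x\rVert_2^2 \Big),
\qquad
\nabla M_{\lambda f}(x) \;=\; \frac{x - \operatorname{prox}_{\lambda f}(x)}{\lambda}.
\]

The Moreau envelope is a smoothed surrogate of f, so \nabla M_{\lambda f} is well defined even where f itself is discontinuous, and for smooth f it recovers the ordinary gradient as \lambda \to 0; this is the kind of generalized gradient that can be composed with the chain rule as described above.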