Date: Tuesday, September 17th
Location: Campus Cyber (5-7 rue Bellini, La Défense, Metro 1 Station “Esplanade de la Défense”)
Time: 9:30–17:30
The workshop will take place at the Campus Cyber; the presentation room is on the Inria floor (3rd floor).
Fuzzing is one of the most effective techniques to discover vulnerabilities in software. Unfortunately, most fuzzers (including the most successful ones like AFL++ or Honggfuzz) only work well for stateless systems. This hinders automated testing for systems that inherently rely on a state model, such as the protocol implementations that underpin the Internet and critical infrastructures. Fuzzing stateful programs is much more challenging due to the complexity of tracking states and dealing with dependencies between messages. In this talk, we investigate and highlight the challenges involved in fuzzing stateful systems and present LibAFL*, a fast and effective state-aware fuzzer.
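To make the difficulty concrete, here is a minimal, hypothetical Rust sketch (my illustration, not code from LibAFL*) of a message handler whose bug is only reachable after a specific message sequence; a fuzzer that mutates single messages against a fresh session, without modelling the state, is unlikely to ever hit it. All names are invented, and the program deliberately panics to stand in for a memory-safety bug.

// Hypothetical protocol handler: the crash depends on the session state,
// not on the bytes of any single message.
enum State {
    Init,
    Greeted,
    Authenticated,
}

struct Session {
    state: State,
}

impl Session {
    fn new() -> Self {
        Session { state: State::Init }
    }

    /// Processes one message and updates the session state.
    fn handle(&mut self, msg: &[u8]) {
        match (&self.state, msg.first().copied()) {
            // HELLO is only accepted in the initial state.
            (State::Init, Some(b'H')) => self.state = State::Greeted,
            // AUTH is only accepted after HELLO.
            (State::Greeted, Some(b'A')) => self.state = State::Authenticated,
            // The bug hides behind a two-message prefix: a stateless fuzzer
            // that resets the session for every input never reaches it.
            (State::Authenticated, Some(b'D')) => {
                panic!("simulated memory-safety bug in the DATA handler");
            }
            // Anything else is silently ignored.
            _ => {}
        }
    }
}

fn main() {
    let mut session = Session::new();
    // Only this exact sequence of message types reaches the buggy code path.
    for msg in [b"H".as_slice(), b"A".as_slice(), b"D".as_slice()] {
        session.handle(msg);
    }
}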
Bio: Cristian is a third-year PhD student at Radboud University in the Netherlands, doing research in software security. His primary research area is security testing. In particular, he focuses on stateful fuzzing, i.e., fuzzing techniques for systems that must implement a state model to work properly. He is also interested in combining fuzzing with active or passive learning to improve fuzzing performance, benchmark stateful fuzzers, and perform differential testing.
We’ve been looking into passive learning, as opposed to active learning, as a way to obtain state machines, at least as an initial approximation of the state machine. It seems much faster than, say, active learning, and it could be used in various ways and in combination with other techniques.
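As a rough illustration of what passive learning starts from (a sketch under my own simplifying assumptions, not the actual tooling): recorded input/output traces are first folded into a prefix tree, which is already a tree-shaped Mealy machine approximating the system; state-merging algorithms such as RPNI then generalize it into a compact model.

use std::collections::HashMap;

// Minimal prefix-tree construction from observed input/output traces,
// e.g. recorded passively from network captures.
#[derive(Default)]
struct Node {
    // input symbol -> (observed output, child state index)
    transitions: HashMap<String, (String, usize)>,
}

struct PrefixTree {
    nodes: Vec<Node>,
}

impl PrefixTree {
    fn new() -> Self {
        PrefixTree { nodes: vec![Node::default()] }
    }

    /// Folds one observed trace into the tree, creating states on demand.
    /// (Conflicting outputs on an existing edge are ignored for simplicity.)
    fn add_trace(&mut self, trace: &[(&str, &str)]) {
        let mut current = 0;
        for &(input, output) in trace {
            let existing = self.nodes[current]
                .transitions
                .get(input)
                .map(|&(_, next)| next);
            current = match existing {
                Some(next) => next,
                None => {
                    let next = self.nodes.len();
                    self.nodes.push(Node::default());
                    self.nodes[current]
                        .transitions
                        .insert(input.to_string(), (output.to_string(), next));
                    next
                }
            };
        }
    }
}

fn main() {
    let mut tree = PrefixTree::new();
    // Two hypothetical traces captured passively from a TLS-like server.
    tree.add_trace(&[("ClientHello", "ServerHello"), ("Finished", "Finished")]);
    tree.add_trace(&[("ClientHello", "ServerHello"), ("Alert", "Closed")]);
    println!("prefix tree has {} states", tree.nodes.len());
}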
Critical and widely used cryptographic protocols have repeatedly been found to contain flaws in their design and their implementation. A prominent class of such vulnerabilities is logical attacks, i.e., attacks that exploit flawed protocol logic. Automated formal verification methods, based on the Dolev-Yao (DY) attacker, formally define and excel at finding such flaws, but operate only on abstract specification models. Fully automated verification of existing protocol implementations is today still out of reach. This leaves open whether such implementations are secure. Unfortunately, this blind spot hides numerous attacks, such as recent logical attacks on widely used TLS implementations introduced by implementation bugs.
We answer this question by proposing a novel and effective technique that we call DY model-guided fuzzing, which precludes logical attacks against protocol implementations. The main idea is to consider as possible test cases the set of abstract DY executions of the DY attacker, and to use a novel mutation-based fuzzer to explore this set. The DY fuzzer concretizes each abstract execution to test it on the program under test. This approach enables reasoning at a more structural and security-related level of messages represented as formal terms (e.g., decrypting a message and re-encrypting it under a different key), as opposed to random bit-level modifications that are much less likely to produce relevant logical adversarial behaviors. We implement a full-fledged and modular DY protocol fuzzer. We demonstrate its effectiveness by fuzzing three popular TLS implementations, resulting in the discovery of four novel vulnerabilities.
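A hypothetical Rust sketch of the core idea (my own illustration, not tlspuffin’s actual data structures): messages are kept as symbolic terms over cryptographic function symbols, so a mutation can operate at the level of “swap the encryption key” instead of flipping bits in a serialized packet. The term shape, the "enc" symbol, and the key names are assumptions made for the example.

// Messages as formal (Dolev-Yao style) terms: atoms (keys, nonces,
// constants) and applications of cryptographic function symbols.
#[derive(Clone, Debug)]
enum Term {
    Atom(String),             // e.g. a key or a constant
    Apply(String, Vec<Term>), // e.g. enc(payload, key), hash(x)
}

/// Example of a structural, security-relevant mutation: wherever a term is
/// encrypted under `old_key`, re-encrypt the same payload under `new_key`.
/// Bit-level mutations of the serialized message would almost never produce
/// such a well-formed but adversarial variant.
fn reencrypt(term: &Term, old_key: &str, new_key: &str) -> Term {
    match term {
        Term::Apply(f, args) if f == "enc" && args.len() == 2 => {
            if matches!(&args[1], Term::Atom(k) if k.as_str() == old_key) {
                // Keep the payload, swap the key.
                return Term::Apply(
                    "enc".to_string(),
                    vec![
                        reencrypt(&args[0], old_key, new_key),
                        Term::Atom(new_key.to_string()),
                    ],
                );
            }
            Term::Apply(
                f.clone(),
                args.iter().map(|a| reencrypt(a, old_key, new_key)).collect(),
            )
        }
        Term::Apply(f, args) => Term::Apply(
            f.clone(),
            args.iter().map(|a| reencrypt(a, old_key, new_key)).collect(),
        ),
        Term::Atom(a) => Term::Atom(a.clone()),
    }
}

fn main() {
    // enc(finished_msg, server_key): a hypothetical handshake message.
    let msg = Term::Apply(
        "enc".to_string(),
        vec![
            Term::Atom("finished_msg".to_string()),
            Term::Atom("server_key".to_string()),
        ],
    );
    let mutated = reencrypt(&msg, "server_key", "attacker_key");
    println!("{:?}", mutated);
}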
To discover security vulnerabilities in protocol implementations, a common approach attempts to infer a Mealy machine describing the protocol logic. The inference can be performed with several active learning algorithms (e.g., L*, L#). These algorithms rely on a special type of query, the equivalence query, which submits an automaton hypothesis to an oracle. The equivalence query either receives a response that validates the hypothesis or a counter-example that invalidates it. Most of the inference time is spent in the oracle (or equivalence method) answering equivalence queries. An exhaustive equivalence method is particularly expensive, as its complexity grows exponentially with the length of the counter-example.
In our approach, we assume a greybox setting in which the SUL’s memory can be read. We motivate our work with the intuition that finding a counter-example of length k should be slower than extracting a state difference from the SUL’s memory. We present the challenges involved in building a greybox equivalence method, along with early results from our prototype implementation.
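For readers less familiar with the setup, here is a small, hypothetical Rust sketch of the greybox intuition (names and structures are mine, not the speakers’ prototype): instead of answering an equivalence query by searching for ever longer counter-examples, the oracle fingerprints the SUL’s state-relevant memory after each step, and an unseen fingerprint directly signals behaviour the current hypothesis may still be missing.

use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Toy SUL (system under learning): a server whose internal state can be
// read directly, which is the greybox assumption made in the talk.
struct ToySul {
    authenticated: bool,
}

impl ToySul {
    fn new() -> Self {
        ToySul { authenticated: false }
    }

    fn step(&mut self, input: &str) -> &'static str {
        match input {
            "AUTH" => {
                self.authenticated = true;
                "OK"
            }
            "DATA" if self.authenticated => "ACCEPTED",
            _ => "REJECTED",
        }
    }

    /// Greybox hook: fingerprint the memory that encodes the protocol state.
    fn state_fingerprint(&self) -> u64 {
        let mut hasher = DefaultHasher::new();
        self.authenticated.hash(&mut hasher);
        hasher.finish()
    }
}

fn main() {
    // Black-box equivalence checking would enumerate input words of growing
    // length to find a disagreement with the hypothesis. Here we instead
    // record a fingerprint of the SUL state after every step: an unseen
    // fingerprint is immediate evidence of a potentially missing state.
    let mut seen: HashSet<u64> = HashSet::new();
    let mut sul = ToySul::new();
    seen.insert(sul.state_fingerprint()); // initial state

    for input in ["AUTH", "DATA", "DATA"] {
        let output = sul.step(input);
        let new_state = seen.insert(sul.state_fingerprint());
        println!("{input} -> {output} (new state reached: {new_state})");
    }
}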
Implementations of network protocols such as DTLS or SSH must carefully handle complex message flows. To do so, they effectively implement a state machine which keeps track of the type and order of exchanged messages. Bugs in this state machine, such as the absence of important messages, can pose a severe risk to security. To find such bugs, in the past we have employed model learning to automatically generate the implementation’s state machine (or an approximation of it), which we then analyzed for bugs. Unfortunately, this analysis was done manually, a process that is time-consuming and prone to missing bugs.
In this talk, I will present an automata-based technique which can detect all the state machine bugs found in learned models, and do so fully automatically. The technique uses the idea that bug-exposing flows can be captured by finite automata, which, when compared against an implementation’s state machine, can reveal corresponding bugs in the implementation. My talk will continue by presenting SMBugFinder, the tool implementing this technique. I will conclude by presenting ongoing efforts to extend the tool and the technique. The talk is based on two works: one presenting the technique, published at NDSS, the other describing the tool, to be presented at ISSTA (pre-print available here).
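To give a feel for the underlying check (an illustrative Rust sketch, not SMBugFinder’s code; the model, the pattern, and the message names are invented): a bug pattern is a small finite automaton over input/output labels, and a breadth-first search over its product with the learned state machine either returns a witness input sequence exposing the bug or shows that the pattern cannot occur.

use std::collections::{HashMap, HashSet, VecDeque};

// Learned model: a Mealy machine, (state, input) -> (output, next state).
type Mealy = HashMap<(u32, String), (String, u32)>;
// Bug pattern: a small DFA over "input/output" labels. Reaching `accept`
// means the flow described by the pattern is possible in the learned model.
type Pattern = HashMap<(u32, String), u32>;

/// Breadth-first search over the product of model and pattern, returning a
/// witness input sequence that exposes the bug (if any such sequence exists).
fn find_witness(
    model: &Mealy,
    inputs: &[&str],
    pattern: &Pattern,
    accept: u32,
) -> Option<Vec<String>> {
    let mut queue = VecDeque::from([(0u32, 0u32, Vec::<String>::new())]);
    let mut visited: HashSet<(u32, u32)> = HashSet::from([(0, 0)]);

    while let Some((m, p, path)) = queue.pop_front() {
        if p == accept {
            return Some(path); // input sequence that exhibits the bug
        }
        for &input in inputs {
            if let Some((output, m_next)) = model.get(&(m, input.to_string())) {
                let label = format!("{input}/{output}");
                // A label the pattern does not mention leaves it in place.
                let p_next = *pattern.get(&(p, label)).unwrap_or(&p);
                if visited.insert((*m_next, p_next)) {
                    let mut next_path = path.clone();
                    next_path.push(input.to_string());
                    queue.push_back((*m_next, p_next, next_path));
                }
            }
        }
    }
    None
}

fn main() {
    // Hypothetical learned model of an SSH-like server that (incorrectly)
    // accepts a service request even before authentication succeeded.
    let model: Mealy = HashMap::from([
        ((0, "AUTH".to_string()), ("AUTH_OK".to_string(), 1)),
        ((0, "SERVICE_REQUEST".to_string()), ("SERVICE_ACCEPT".to_string(), 0)), // bug
        ((1, "SERVICE_REQUEST".to_string()), ("SERVICE_ACCEPT".to_string(), 1)),
    ]);
    // Bug pattern: "SERVICE_ACCEPT observed without a prior AUTH_OK".
    let pattern: Pattern = HashMap::from([
        ((0, "AUTH/AUTH_OK".to_string()), 2),                   // safe sink
        ((0, "SERVICE_REQUEST/SERVICE_ACCEPT".to_string()), 1), // accepting
    ]);
    let witness = find_witness(&model, &["AUTH", "SERVICE_REQUEST"], &pattern, 1);
    println!("witness: {:?}", witness); // Some(["SERVICE_REQUEST"])
}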
Many network protocol specifications are long and lack clarity, which paves the way for implementation errors. Such errors have led to vulnerabilities in secure protocols such as SSH and TLS. Active automata learning, a black-box method, is an efficient way to discover discrepancies between a specification and its implementation: it extracts state machines by interacting with a network stack, and it can be (and has been) combined with model checking to analyze the obtained state machines. Model checking, however, is designed to exhibit a single model violation rather than all of them, and thus leads to a limited understanding of implementation errors. As far as we are aware, only one specialized exhaustive method is available for analyzing the outcomes of active automata learning applied to network protocols, namely Fiterau-Brostean’s method. We propose an alternative method to improve the discovery of new bugs and vulnerabilities and to enhance the exhaustiveness of model verification. In this work, we apply our method to two use cases: SSH, where we focus on the analysis of existing state machines, and OPC UA, for which we present a full workflow from state machine inference to state machine analysis.
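The contrast drawn in the abstract can be made concrete with a toy check (again an illustrative Rust sketch with invented states and message names, not the authors’ method): a model-checking style query reports only one violation, whereas an exhaustive analysis enumerates every transition of the learned machine that breaks the property, giving a complete picture of the implementation errors.

use std::collections::HashMap;

// Learned model: (state, input) -> (output, next state), as produced by
// active automata learning on an SSH-like server.
type Mealy = HashMap<(u32, String), (String, u32)>;

// A simple property: these outputs must never be produced before key
// exchange has completed (states 0 and 1 in this toy model).
const PRE_KEX_STATES: [u32; 2] = [0, 1];
const FORBIDDEN_PRE_KEX_OUTPUTS: [&str; 2] = ["SERVICE_ACCEPT", "USERAUTH_SUCCESS"];

/// Model-checking style answer: report only one violation.
fn first_violation(model: &Mealy) -> Option<(u32, String, String)> {
    all_violations(model).into_iter().next()
}

/// Exhaustive analysis: report *every* transition violating the property,
/// so no bug hides behind the first counter-example.
fn all_violations(model: &Mealy) -> Vec<(u32, String, String)> {
    let mut violations = Vec::new();
    for ((state, input), (output, _next)) in model {
        if PRE_KEX_STATES.contains(state)
            && FORBIDDEN_PRE_KEX_OUTPUTS.contains(&output.as_str())
        {
            violations.push((*state, input.clone(), output.clone()));
        }
    }
    violations
}

fn main() {
    // Toy learned model with two independent violations in pre-KEX states.
    let model: Mealy = HashMap::from([
        ((0, "SERVICE_REQUEST".to_string()), ("SERVICE_ACCEPT".to_string(), 0)),
        ((1, "USERAUTH_REQUEST".to_string()), ("USERAUTH_SUCCESS".to_string(), 2)),
        ((2, "SERVICE_REQUEST".to_string()), ("SERVICE_ACCEPT".to_string(), 2)),
    ]);
    println!("one violation only: {:?}", first_violation(&model));
    println!("all violations:     {:?}", all_violations(&model));
}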
tlspuffin from LORIA
We will give a presentation about tlspuffin, but you can already install and run the tool to test out our fuzzer before the talk. We suggest you follow the instructions below.
Visit the tlspuffin Quickstart guide and follow the instructions from https://tlspuffin.github.io/docs/guides/quickstart
Run the tlspuffin fuzzer on wolfSSL v5.4.0 with ASAN enabled:
cargo run --bin tlspuffin --features wolfssl540,asan --release -- \
--cores 0-4 --tui experiment -t Workshop -d test
The fuzzer should find at least one objective after a few minutes (if not, rerun the campaign). The found objectives are stored in
./experiments/<name_experiment>/objective/
Re-execute the trace to confirm the vulnerability:
cargo run --bin tlspuffin --features wolfssl540,asan --release -- \
execute ./experiments/<name_experiment>/objective/<found_objective>.trace
Generate a graphical representation of the attack trace triggering the vulnerability:
cargo run --bin tlspuffin --features wolfssl540,asan --release -- \
plot ./experiments/<name_experiment>/objective/<found_objective>.trace \
pdf attack_trace.pdf
Open attack_trace.pdf and observe the steps the adversary has to execute to trigger the vulnerability.