Multimatics Insight

Fuzzing with AFL: Discover More Bugs Accurately As Easy As It Can Be


Rapid advances in technology and innovation also create opportunities for cyber threats that exploit vulnerabilities in software, leading to serious damage and disrupted business activity. Notable examples include Distributed Denial of Service (DDoS) attacks, in which a server is flooded with so much traffic that the system crashes, and the WannaCry ransomware attack of 2017. WannaCry exploited a vulnerability in the Server Message Block (SMB) protocol and was reported to have infected more than 230,000 computers in over 150 countries within one day. To improve information security and prevent such attacks, several techniques have been developed to detect vulnerabilities and bugs, including static analysis, dynamic analysis, symbolic execution, and fuzzing (Liu et al. 2012).

Among these techniques, fuzzing is one of the most widely adopted: many corporations use it to find vulnerabilities in their software. According to brightsec.com, several international enterprises rely on fuzzing to improve their cybersecurity. Google uses fuzzing to check and protect millions of lines of code in Chrome; in 2019, Google discovered more than 20,000 vulnerabilities in Chrome through internal fuzz testing. Microsoft uses fuzzing as one of the stages in its software development lifecycle to find vulnerabilities and improve the stability of its products. The US Department of Defense (DoD) issued a DevSecOps reference design and an application security guide, both of which require fuzz testing as a standard part of the software development process.

Fuzzing: History and The Basic Principles

Fuzzing is an automatic bug- and vulnerability-discovery technique that continuously generates inputs and reports those that crash the program (Bohme et al. 2021). The purpose of fuzzing is to discover bugs in programs quickly. Because it is a non-human approach, fuzzing adds another perspective to classical software techniques such as manual code review and debugging. It discovers implementation bugs by injecting malformed or semi-malformed data in an automated fashion. Fuzzing was developed in 1989 at the University of Wisconsin-Madison by Professor Barton Miller and his students. The first fuzzing project focused mainly on command-line and UI fuzzing and showed that modern operating systems are vulnerable to even simple fuzzing. The earliest technique was to supply random inputs without using any model of program behavior, application type, or system description, an approach usually known as black-box testing. The reliability criterion is simple: if the application crashes or hangs, it is considered to fail the test; otherwise, it passes. This criterion allows the use of a simple test oracle. Keep in mind that the application does not have to respond in a sensible manner to the input; it can even quietly exit.

The first project of fuzzing proceeded in four steps:
a. construct a program to generate random characters, plus a program to help test interactive utilities
b. use these programs to test a large number of utilities on random input strings to see if they crash
c. identify the strings (or types of strings) that crash these programs; and
d. identify the cause of the program crashes and categorize the common mistakes that cause these crashes

The results of the testing showed that, of almost 90 different utility programs across seven versions of UNIX (a multi-user operating system), more than 24% could be crashed. The project produced a list of bugs (and fixes) for the crashed programs and a set of tools made available to the systems community. Fuzzing is the art of automatic bug finding; its role is to find software implementation faults and, where possible, identify them. Compared with other techniques, fuzzing is easy to deploy, highly extensible and applicable, and can be performed with or without the source code (Li et al. 2018).
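The original random-input approach and its simple crash oracle can be sketched in a few lines of Python. The target function below is a hypothetical stand-in for a fragile utility (it is not from the original study); the fuzzer simply generates random byte strings and records any input that makes the target raise an exception.

```python
import random

def parse_record(data: bytes) -> int:
    # Hypothetical fragile target: expects b"key=value" with a nonzero
    # integer value. Anything else makes it blow up, like a buggy parser.
    key, _, value = data.partition(b"=")
    return 100 // int(value)

def fuzz(target, trials=1000, max_len=8, seed=1234):
    """Feed random byte strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)          # no exception: the input "passes"
        except Exception:
            crashes.append(data)  # exception = crash/hang oracle: "fails"
    return crashes

crashing_inputs = fuzz(parse_record)
print(f"{len(crashing_inputs)} of 1000 random inputs crashed the target")
```

Note how little the fuzzer needs to know about the target: the oracle is just "did it crash?", exactly the black-box criterion described above.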

There are three fuzzing approaches: white-box, grey-box, and black-box. White-box fuzzing is assumed to have access to the source code, so more information can be collected by analyzing the source and observing how test cases affect the program's running state. Grey-box fuzzing also works without source code, but gains internal information about the target program through program analysis. Black-box fuzzing tests without any knowledge of the target program's internals.

Introducing AFL: American Fuzzy Lop

American Fuzzy Lop (AFL) is an open-source fuzzer written in C and assembly. AFL was first introduced by Michal Zalewski, a Polish security researcher. Zalewski first demonstrated AFL's effectiveness by pulling JPEGs out of thin air: he created a text file containing just "hello" and asked the fuzzer to keep feeding it to a program that expects a JPEG image. While developing AFL, testers and developers ran several persuasive experiments using gcov block coverage to select optimal test cases out of a large corpus of data, then used them as a starting point for traditional fuzzing workflows. AFL does its best not to focus on any single principle of operation and not to be a proof of concept for any specific theory.

You can start using AFL for fuzzing by following these steps:

a. Download, Compile, and Install AFL
b. Download, Instrument, and Install Target
c. Get data that will be used to feed AFL
d. Create Ramdisk and Store AFL fuzzing session input and output directories
e. Start Fuzzing
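On a Linux host with build tools installed, the steps above might look roughly as follows. The target name `mytarget`, the sample files, and the ramdisk size are placeholders, not prescriptions:

```shell
# a. Download, compile, and install AFL
git clone https://github.com/google/AFL.git && cd AFL
make && sudo make install

# b. Instrument the target by building it with AFL's compiler wrapper
cd ../mytarget
CC=afl-gcc ./configure && make

# c. Collect a small corpus of valid sample inputs to feed AFL
mkdir -p testcases && cp samples/*.jpg testcases/

# d. Create a ramdisk and store the fuzzing session's input/output there
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512M tmpfs /mnt/ramdisk
cp -r testcases /mnt/ramdisk/input

# e. Start fuzzing (@@ is replaced by the path of the current test case)
afl-fuzz -i /mnt/ramdisk/input -o /mnt/ramdisk/output -- ./mytarget @@
```

The ramdisk in step d is optional but common practice, since AFL writes test cases at a very high rate and a tmpfs spares the disk.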

Why Should You Use AFL for Fuzzing?

a. Speed

AFL lets you fuzz most targets at roughly their native speed. Many fuzzing tools improve bug-finding accuracy at the cost of running speed, so they end up running slowly in the background. AFL instead uses its instrumentation to reduce the amount of work, for example by carefully trimming the corpus or skipping non-functional but non-trimmable regions in the input files.

b. Rock-solid reliability

AFL is attractive because it implements automated testing that is simple to use and scalable. Many fuzzing tools are based on symbolic execution, taint tracking, or complex syntax-aware instrumentation, which can be unreliable against real-world targets. AFL, on the other hand, is designed around a range of interesting, well-researched strategies that help fuzzers focus their effort on the tasks that matter.

c. Simplicity

In contrast to fuzzing tools that offer countless knobs and fuzzing ratios to tune, AFL offers only three: the output file, the memory limit, and the ability to override the default, auto-calibrated timeout. The more knobs there are, the more complicated the fuzzing process becomes; this can cause confusion between the author of the test and the operator, which in turn can lead to more vulnerabilities and risks. By simplifying its knobs, AFL avoids these risks and eases the fuzzing process. Even when mishaps happen, AFL provides user-friendly error messages that outline the likely causes so the problem can be solved immediately.
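The three knobs correspond to afl-fuzz command-line flags; a sketch, assuming an instrumented target binary and a seed corpus already exist (all file and directory names here are placeholders):

```shell
# -f: write each generated test case to this file instead of stdin
# -m: memory limit for the child process, in MB
# -t: override the auto-calibrated timeout, in milliseconds
afl-fuzz -i seeds -o findings -f /tmp/fuzzed.jpg -m 200 -t 1000 \
    -- ./mytarget /tmp/fuzzed.jpg
```

Everything else, such as mutation strategies and queue scheduling, is handled by AFL without configuration.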

d. Chainability

Most general-purpose fuzzers are difficult to apply to resource-hungry or interaction-heavy tools, forcing their users either to develop specialized in-process fuzzers or to invest in massive CPU power. That effort is largely wasted, since much of it goes into work unrelated to the code you actually want to test. AFL avoids this by letting users fuzz more lightweight targets (e.g., standalone image parsing libraries) to create small corpora of interesting test cases, which can then be fed into a manual testing process or a UI harness.

It is a real burden to deal with unfamiliar code while also having to fix hidden bugs and vulnerabilities. Moreover, when a fuzzing tool adds complexity rather than useful functionality, testers are likely to spend more time fighting the tool than using it to discover vulnerabilities. AFL eases that burden by offering scalable, highly reliable automated testing, letting fuzzers apply well-researched strategies and interesting test cases to discover more bugs.

Liu B., Shi L., Cai Z., Li M. (2012) Software vulnerability discovery techniques: A survey. In: Multimedia Information Networking and Security (MINES), 2012 Fourth International Conference on. IEEE, Nanjing, pp. 152-156. https://doi.org/10.1109/MINES.2012.202
Bohme M., Cadar C., Roychoudhury A. (2021) Fuzzing: Challenges and Reflections. IEEE Software, vol. 38, May-June 2021, pp. 79-86. https://doi.org/10.1109/MS.2020.3016773
