Originally posted 2022-02-02. Last updated 2022-04-18.
I find it easy to handle views different from my own. I feel more troubled when I see people agree with me for the wrong reasons.
It's no secret that I'm a passionate supporter of software freedom: I've written two posts about how Free, Libre, and Open-Source Software (FLOSS) is necessary but insufficient to preserve user autonomy:
Whatsapp and the domestication of users
After two posts spanning over 5000 words, I need to add some nuance.
One of the biggest parts of the Free and Open Source Software definitions is the freedom to study a program and modify it; in other words, access to editable source code. I agree that such access is essential; however, far too many people support source availability for the *wrong* reasons. One such reason is that source code is necessary to have any degree of transparency into how a piece of software operates, and is therefore necessary to determine if it is at all secure or trustworthy. Although security through obscurity is certainly not a robust measure, this claim has two issues:
* Source code isn't the only way to gain insight into what a program actually does; binaries can be studied directly.
* Access to source code doesn't, by itself, demonstrate that software is secure or trustworthy.
I'd like to expand on these issues, focusing primarily on compiled binaries. Bear in mind that I do not think that source availability is *useless* from a security perspective (it certainly makes audits easier), and I *do* think that source availability is required for user freedom. I'm arguing only that *source unavailability doesn't imply insecurity*, and *source availability doesn't imply security*. It's possible (and often preferable) to perform security analysis on binaries, without necessarily having source code. In fact, vulnerability discovery doesn't typically rely on source code analysis.
I'll update this post occasionally as I learn more on the subject. If you like it, check back in a month or two to see if it has something new.
(PS: this stance is not absolute; I concede to several good counter-arguments at the bottom!)
I don't think anyone seriously claims that software's security instantly improves the second its source code is published. The argument I'm responding to is that source code is necessary to understand what a program does and how (in)secure it is, and without it we can't know for sure.
Assuming a re-write that fundamentally changes a program's architecture is not an option¹, software security typically improves by fixing vulnerabilities via something resembling this process:
1. Someone discovers a vulnerability
2. Developers are informed of the vulnerability
3. Developers reproduce the issue and understand what caused it
4. Developers patch the software to fix the vulnerability
Source code is typically helpful (sometimes essential) to Step 3. If someone has completed Step 3, they will require source code to proceed to Step 4. Source code *isn't necessary for Steps 1 and 2*; these steps rely upon understanding how a program misbehaves. For that, we use *reverse engineering* and/or *fuzzing*.
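To give a taste of the fuzzing side: a minimal sketch, assuming AFL built with QEMU mode support (which instruments binaries at run time, so no source or recompilation is needed); the target name and corpus contents are placeholders.

```
# Fuzz a closed-source binary: -Q enables AFL's QEMU instrumentation mode,
# and "@@" is replaced with the path of each generated test case.
mkdir -p corpus findings
cp sample-input corpus/
afl-fuzz -Q -i corpus -o findings -- ./name-of-program @@
```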
Understanding *how a program is designed* is not the same as understanding *what a program does.* A reasonable level of one type of understanding does not imply the other.
Source code² is essential to describe a program's high-level, human-comprehensible design; it represents a contract that outlines how a developer *expects* a program to behave. A compiler or interpreter³ must then translate it into machine instructions. But source code isn't always easy to map directly to machine instructions because it is part of a complex system:
Furthermore, all programmers are flawed mortals who don't always fully understand source code. Everyone who's done a non-trivial amount of programming is familiar with the feeling of encountering a bug during run-time for which the cause is impossible to find...until they notice it staring them in the face on Line 12. Think of all the bugs that *aren't* so easily noticed.
Reading the source code, compiling, and passing tests isn't sufficient to show us a program's final behavior. The only way to know what a program does when you run it is to...run it.⁴
Almost all programmers are fully aware of their limited ability, which is why most already employ techniques to analyze run-time behavior that don't depend on source code. For example, developers working in several compiled languages⁵ can build binaries with sanitizers to detect undefined behavior, data races, uninitialized reads, etc. that human eyes may have missed when reading source code. While source code is necessary to *build* these binaries, it isn't necessary to run them and observe failures.
Distributing binaries with sanitizers and debug information to testers is a valid way to collect data about a program's potential security issues.
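A minimal sketch, assuming Clang and a C source file named example.c; the same idea applies to other compilers and languages with sanitizer support.

```
# Build with AddressSanitizer and UndefinedBehaviorSanitizer, plus debug info
# so reports include file and line numbers; then just run the binary.
clang -g -fsanitize=address,undefined -o example example.c
./example    # detected issues are reported on stderr at run time
```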
It's hard to figure out which syscalls and files a large program needs by reading its source, especially when certain libraries (e.g. the libc implementation/version) can vary. A syscall tracer like strace(1)⁶ makes the process trivial.
A personal example: the understanding I gained from `strace` was necessary for me to write my bubblewrap scripts. These scripts use bubblewrap(1) to sandbox programs with the minimum permissions possible.
Analyzing every relevant program and library's source code would have taken me months, while `strace` gave me everything I needed to know in an afternoon: analyzing the `strace` output told me exactly which syscalls to allow and which files to grant access to, without even having to know what language the program was written in. I generated the initial version of the syscall allow-lists with the following command:⁷
```
strace name-of-program program-args 2>&1 \
  | rg '^([a-z_]*)\(.*' --replace '$1' \
  | sort | uniq
```
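To illustrate where that information ends up, here's a stripped-down sketch of a bubblewrap invocation (the paths are placeholders, not my real scripts): only what the trace showed the program touching gets exposed, read-only where possible.

```
# Expose only the paths the trace showed the program using; everything else
# is invisible inside the sandbox.
bwrap \
  --ro-bind /usr/bin /usr/bin \
  --ro-bind /usr/lib /usr/lib \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --dev /dev \
  --proc /proc \
  --unshare-all \
  --share-net \
  name-of-program program-args
```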
This also extends to determining how programs utilize the network: packet sniffers like Wireshark can determine when a program connects to the network, and where it connects.
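A minimal sketch with tshark, Wireshark's command-line counterpart, assuming the program's traffic leaves through eth0:

```
# Record traffic while the program runs, then list the addresses it contacted.
tshark -i eth0 -a duration:60 -w program-traffic.pcap &
name-of-program program-args
wait
tshark -r program-traffic.pcap -T fields -e ip.dst | sort | uniq -c
```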
These methods are not flawless. Syscall tracers are only designed to shed light on how a program interacts with the kernel. Kernel interactions tell us plenty (it's sometimes all we need), but they don't give the whole story. Furthermore, packet inspection can be made a bit painful by transit encryption⁸; tracing a program's execution alongside packet inspection can offer clarity, but this is not easy.
For more information, we turn to *core dumps*, also known as memory dumps. Core dumps share the state of a program during execution or upon crashing, giving us greater visibility into exactly what data a program is processing. Builds containing debugging symbols (e.g. DWARF) have more detailed core dumps. Vendors that release daily snapshots of pre-release builds typically include some symbols to give testers more detail concerning the causes of crashes. Web browsers are a common example: Chromium dev snapshots, Chrome Canary, Firefox Nightly, WebKit Canary builds, etc. all include debug symbols. Until recently, *Minecraft: Bedrock Edition* included debug symbols which were used heavily by the modding community.⁹
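A minimal sketch of grabbing such a dump from a running process with gcore (shipped with gdb); the process name and search string are placeholders:

```
# Snapshot the process's memory, then search the dump for data that should
# never appear in plaintext if the vendor's claims hold up.
gcore -o program-dump "$(pgrep -x name-of-program)"
strings program-dump.* | grep -i 'supposedly-encrypted-data'
```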
In 2020, Zoom Video Communications came under scrutiny for marketing its "Zoom" software as a secure, end-to-end encrypted solution for video conferencing. Zoom's documentation claimed that it used "AES-256" encryption. Without source code, did we have to take the docs at their word?
The Citizen Lab didn't. In April 2020, it published a report revealing critical flaws in Zoom's encryption:
Move Fast and Roll Your Own Crypto: A Quick Look at the Confidentiality of Zoom Meetings
The researchers utilized Wireshark and mitmproxy to analyze networking activity, and inspected core dumps to learn about Zoom's encryption implementation. They found that Zoom actually used a flawed implementation of AES-128 in ECB mode, a notoriously weak construction, and easily bypassed it.
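To give a flavor of the mitmproxy side, here's a generic sketch (not the report's exact methodology); it assumes the target honors proxy environment variables and trusts mitmproxy's certificate authority.

```
# Record decrypted HTTPS flows for later inspection in mitmproxy.
mitmdump --listen-port 8080 -w captured-flows &
HTTPS_PROXY=http://127.0.0.1:8080 name-of-program program-args
```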
Syscall tracing, packet sniffing, and core dumps are great, but they rely on manual execution which might not hit all the desired code paths. Fortunately, there are other forms of analysis available.
Tracing execution and inspecting memory dumps can be considered forms of reverse engineering, but they only offer a surface-level view of what's going on. Reverse engineering gets much more interesting when we analyze a binary artifact.
Static binary analysis is a powerful way to inspect a program's underlying design. Decompilation (especially when supplemented with debug symbols) can re-construct a binary's assembly or source code. Symbol names may look incomprehensible in stripped binaries, and comments will be missing. What's left is more than enough to decipher control flow to uncover how a program processes data. This process can be tedious, especially if a program uses certain forms of binary obfuscation.
The goal doesn't have to be a complete understanding of a program's design (incredibly difficult without source code); it's typically to answer a specific question, fill in a gap left by tracing/fuzzing, or find a well-known property. When developers publish documentation on the security architecture of their closed-source software, reverse engineering tools like decompilers are exactly what you need to verify their honesty (or lack thereof).
Decompilers are seldom used alone in this context. Instead, they're typically a component of reverse engineering frameworks that also sport memory analysis, debugging tools, scripting, and sometimes even IDEs. Here are two popular frameworks:
The radare project (I use this)
The Ghidra software reverse engineering suite
Their documentation should help you get started if you're interested.
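For a first taste before diving into the docs, a quick triage of an arbitrary binary with radare2's tooling might look like this ("some-program" is a placeholder):

```
rabin2 -I some-program   # metadata: architecture, whether it's stripped, mitigations
rabin2 -i some-program   # imported symbols: which library calls it makes
rabin2 -z some-program   # strings embedded in the data sections
r2 -A some-program       # interactive session with auto-analysis (try afl, pdf @ main)
```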
These reverse-engineering techniques (a combination of tracing, packet sniffing, binary analysis, and memory dumps) form the core of most modern malware analysis. See this example of a fully-automated analysis of the Zoom Windows installer:
Falcon Sandbox report for ZoomInstaller.exe
It enumerates plenty of information about Zoom without access to its source code: reading unique machine information, anti-VM and anti-reverse-engineering tricks, reading config files, various types of network access, scanning mounted volumes, and more.
To try this out yourself, use a sandbox designed for dynamic analysis. Cuckoo is a common and easy-to-use solution, while DRAKVUF is more advanced.
Cuckoo Sandbox: automated malware analysis
DRAKVUF® Black-box Binary Analysis System
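As a sketch, assuming a working Cuckoo installation with a configured analysis VM:

```
# Queue a binary for automated dynamic analysis, then browse the report
# in Cuckoo's web interface once the run finishes.
cuckoo submit ZoomInstaller.exe
cuckoo web
```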
The Intel Management Engine (ME) is a mandatory subsystem of all Intel processors (after 2008) with extremely privileged access to the host system. Active Management Technology (AMT) runs atop it on the subset of Intel processors with "vPro" branding. The latter can be disabled and is intended for organizations to remotely manage their inventory (installing software, monitoring, remote power-on/sleep/wake, etc).
The fact that Intel ME has such deep access to the host system and the fact that it's proprietary have both made it the subject of a high degree of scrutiny. Many people (most of whom have little experience in the area) connected these two facts together to allege that the ME is a backdoor, often by confusedly citing functionality of Intel AMT instead of ME. Is it really impossible to know for sure?
I picked Intel ME+AMT to serve as an extreme example: it shows both the power and limitations of the analysis approaches covered. ME isn't made of simple executables you can just run in an OS because it sits far below the OS, in what's sometimes called "Ring -3".¹⁰ Analysis is limited to external monitoring (e.g. by monitoring network activity) and reverse-engineering unpacked partially-obfuscated firmware updates, with help from official documentation. This is slower and harder than analyzing a typical executable or library.
The answers are a bit complex and...more boring than sensationalized headlines would suggest. Reverse engineers such as Igor Skochinsky and Nicola Corna (the developers of me-tools and me_cleaner, respectively) have analyzed ME, while researchers such as Vassilios Ververis thoroughly analyzed AMT in 2010. Interestingly, the former pair argues that auditing binary code is preferable to relying on potentially misleading source code: binary analysis allows auditors to "cut the crap" and inspect what software is truly made of. However, this was balanced by a form of binary obfuscation that the pair encountered; I'll describe it in a moment.
Intel ME: Myths and Reality (PDF)
Security Evaluation of Intel's Active Management Technology
Simply monitoring network activity and systematically testing all claims made by the documentation allowed Ververis to uncover a host of security issues in Intel AMT. However, no undocumented features have (to my knowledge) been uncovered. The problematic findings revolved around flawed/insecure implementations of documented functionality. In other words: there's been no evidence of AMT being "a backdoor", but its security flaws could have had a similar impact. Fortunately, AMT can be disabled. What about ME?
This is where some binary analysis comes in. Neither of Skochinsky's linked presentations seems to enumerate any contradictions with official documentation. Unfortunately, some components are poorly understood because they're obfuscated using Huffman compression with unknown dictionaries.
Understanding the inner workings of the obfuscated components blurs the line between software reverse-engineering and figuring out how the chips are actually made, the latter of which is nigh-impossible if you don't have access to a chip lab full of cash. However, black-box analysis does tell us about the capabilities of these components: see page 21 of "ME Secrets". Thanks to zdctg for clarifying this.
Skochinsky's and Corna's analysis was sufficient to clarify (but not completely contradict) sensationalism claiming that ME can remotely lock any PC (it was