Chrome's user interface (UI) code is complex, and it sometimes has bugs.
Are those bugs security bugs? Specifically, if a user's clicks and actions trigger memory corruption, can an attacker exploit that to harm the user?
We usually answer "sometimes." For example, an attacker might easily convince someone to click on an autofill prompt, but it would be much harder to persuade them to step through a whole sequence of dialogs.
Even when a vulnerability isn't straightforwardly exploitable, it can still take a security team an inordinate amount of time to evaluate it and confirm that. UI bugs are also notoriously flaky and often fail to reproduce consistently. And even a bug judged non-exploitable still causes crashes that frustrate the user.
Wouldn't it be great if we could find these bugs automatically?
If only the vast tree of Chrome UI controls were exposed somehow, so that we could systematically enumerate and interact with each control programmatically.
Aha! Chrome exposes all of its UI controls to assistive technology. Chrome goes to significant effort to expose its entire user interface to braille displays, screen readers, and other assistive technologies. That tree of controls includes all the toolbars, menus, and the structure of the page itself. This existing structural description of the browser's UI is already used in other contexts, for example by some password managers, which highlights the value of investing in accessibility. We're now leveraging that same investment to proactively find security bugs and other bugs.
Specifically, we're now fuzzing that accessibility tree: interacting with the various UI controls semi-randomly to see whether we can make things crash. This technique has a long history.
Screen reader support differs across platforms, but on Linux the accessibility tree can be explored from the command line.
All that remains is to explore that same tree of controls with a fuzzer. How hard can it be?
It turned out to be harder than we imagined, requiring several recent technologies to come together to make it possible. Specifically:
- There are an enormous number of possible ways to interact with Chrome. Truly random interaction with UI controls is unlikely to uncover bugs; instead, we use coverage-guided fuzzing to nudge the fuzzer toward combinations of controls that appear to reach new code within Chrome.
- We want any bugs we find to be real and accurately reported. That means testing the actual Chrome UI in a realistic environment, rather than unit-testing isolated pieces of code. That's where our fuzzing framework comes in: it runs fuzz cases within a browser test, which is essentially a real copy of Chrome.
- However, browser tests have a high startup cost. We need to amortize that cost over many test cases by running lots of iterations after each browser startup (there's a minimal sketch of this pattern after this list). Centipede is designed to do exactly this.
- However, not all test cases are idempotent. Within a single browser invocation, the UI state may be successively modified by each test case. We've made changes to Centipede to address this.
- Chrome is a noisy test environment, with lots of timers that can confuse coverage-guided fuzzers by perturbing execution paths and timing. Instrumenting such a large piece of software for coverage has to be done carefully and incrementally. So it remains uncertain whether coverage-guided fuzzing will successfully explore UI code paths here.
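To make the amortization point concrete, here's a minimal, self-contained sketch of the general pattern. It is not Chrome's actual harness: the BrowserEnvironment class and its methods are invented for illustration, and the real setup involving browser tests and Centipede is far more involved. The key idea is simply that fuzzing engines call the entry point many times within one process, so expensive one-time setup can be reused across iterations.

#include <cstddef>
#include <cstdint>

// Stand-in for an expensive-to-start environment such as a full browser.
// Hypothetical class for illustration only.
class BrowserEnvironment {
 public:
  BrowserEnvironment() {
    // Imagine multi-second startup work here: launching processes,
    // loading a profile, building the initial UI, and so on.
  }

  // Apply one fuzz case to the running environment.
  void RunOneCase(const uint8_t* data, size_t size) {
    // A real harness would interpret |data| as a sequence of UI
    // interactions and perform them; omitted here.
    (void)data;
    (void)size;
  }
};

// Fuzzing engines call this entry point repeatedly within a single
// process, so the static environment below is constructed once and its
// startup cost is amortized over all subsequent iterations.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  static BrowserEnvironment* env = new BrowserEnvironment();
  env->RunOneCase(data, size);
  return 0;
}

The catch, as noted in the list, is that any state kept in that long-lived environment makes test cases non-hermetic, which is exactly the idempotency problem described above.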
Most of these considerations are common to fuzzers that run in the browser test context, and we've discussed them in previous blog posts. Even so, the UI fuzzer, despite its promise of simplicity, came with complications of its own.
Finding UI bugs is only useful if they're actionable, so that they can actually be fixed. Ideally, that means:
- Our fuzzing infrastructure provides a full set of diagnostics for each crash.
- It can bisect to find when the bug was introduced and when it was fixed.
- It can minimize complex test cases down to the smallest possible reproducer.
- The test case clearly describes which UI controls were used, so a human may be able to reproduce the issue.
Together, these requirements mean that test cases must be stable across Chrome versions: if a given test case reproduces a bug in Chrome 125, it should also do so in Chrome 124 and Chrome 126 (assuming the bug is present in both). That's tough, because Chrome's UI controls are deeply nested and often lack clear identifiers.
The original fuzzer simply used ordinals, acting on controls such as "control 3" inside "control 5" inside "control 0". That isn't stable, because the Chrome UI changes constantly. Instead, we settled on identifying controls by name whenever possible, or otherwise by a combination of their role and ordinal position. This yields test cases like the following:
action {
  path_to_control {
    named {
      name: "Test - Chromium"
    }
  }
  path_to_control {
    anonymous {
      role: "panel"
    }
  }
  path_to_control {
    anonymous {
      role: "panel"
    }
  }
  path_to_control {
    anonymous {
      role: "panel"
    }
  }
  path_to_control {
    named {
      name: "Bookmarks"
    }
  }
  take_action {
    action_id: 12
  }
}
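To illustrate how a path like the one above can be resolved against a tree of controls, here's a simplified, hypothetical C++ sketch. The Node and Selector types and the Resolve function are invented for illustration and are not Chrome's real accessibility API; each step either matches a control by exact name or picks the Nth child with a given role, mirroring the named/anonymous split in the test case.

#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Hypothetical, simplified model of an accessibility node; Chrome's real
// accessibility tree is much richer than this.
struct Node {
  std::string name;
  std::string role;
  std::vector<Node> children;
};

// One step of a path: either an exact control name, or a role plus an
// ordinal among same-role siblings.
struct Selector {
  std::optional<std::string> name;  // set for "named" steps
  std::optional<std::string> role;  // set for "anonymous" steps
  size_t ordinal = 0;               // which same-role sibling to pick
};

// Walks the tree from |root|, consuming one selector per level.
// Returns nullptr if any step fails to match.
const Node* Resolve(const Node& root, const std::vector<Selector>& path) {
  const Node* current = &root;
  for (const Selector& step : path) {
    const Node* next = nullptr;
    size_t seen = 0;
    for (const Node& child : current->children) {
      if (step.name && child.name == *step.name) {
        next = &child;
        break;
      }
      if (step.role && child.role == *step.role && seen++ == step.ordinal) {
        next = &child;
        break;
      }
    }
    if (!next) return nullptr;  // Path no longer matches this Chrome version.
    current = next;
  }
  return current;
}

Because names and roles change less often than raw positions, paths expressed this way tend to keep resolving across adjacent Chrome versions.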
Even with instrumentation applied to string comparisons, fuzzers are unlikely to stumble upon these control names by chance. In fact, the name-based approach initially proved to be only about 20% as effective as the ordinal-based one. To address this, we added a custom mutator that is smart enough to substitute control names and roles known to be valid. We randomly alternate between this mutator and the standard libprotobuf-mutator to get the best of both worlds. This approach has proven to run at roughly 80% of the speed of the original ordinal-based approach, while reliably generating stable test cases.
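As a rough sketch of what such a mutator does, here is a self-contained illustration. The types, dictionaries, and functions are invented for this example; the real fuzzer works on protobuf test cases and plugs its custom mutator in alongside libprotobuf-mutator rather than using this simplified structure.

#include <random>
#include <string>
#include <vector>

// Simplified stand-in for a fuzz case; the real one is a protobuf message.
struct Selector {
  std::string name;  // empty if this step is anonymous
  std::string role;  // empty if this step is named
};
struct FuzzCase {
  std::vector<Selector> path;
  int action_id = 0;
};

// Hypothetical dictionaries of names and roles that are known to be valid;
// in practice these would be harvested from the live accessibility tree.
const std::vector<std::string> kKnownNames = {"Bookmarks", "Reload", "New Tab"};
const std::vector<std::string> kKnownRoles = {"panel", "button", "menuItem"};

const std::string& PickRandom(const std::vector<std::string>& v,
                              std::mt19937& rng) {
  return v[std::uniform_int_distribution<size_t>(0, v.size() - 1)(rng)];
}

// Dictionary-based mutation: overwrite one path step with a name or role
// that is known to exist, keeping the test case meaningful and stable.
void MutateWithDictionary(FuzzCase& fc, std::mt19937& rng) {
  if (fc.path.empty()) return;
  size_t i = std::uniform_int_distribution<size_t>(0, fc.path.size() - 1)(rng);
  if (rng() % 2) {
    fc.path[i] = {PickRandom(kKnownNames, rng), ""};
  } else {
    fc.path[i] = {"", PickRandom(kKnownRoles, rng)};
  }
}

// Generic mutation, standing in for what libprotobuf-mutator provides.
void MutateGenerically(FuzzCase& fc, std::mt19937& rng) {
  fc.action_id = static_cast<int>(rng() % 32);
}

// Flip a coin between the two strategies to get both the reachability of
// valid names and the diversity of generic structural mutation.
void Mutate(FuzzCase& fc, std::mt19937& rng) {
  if (rng() % 2) {
    MutateWithDictionary(fc, rng);
  } else {
    MutateGenerically(fc, rng);
  }
}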
So, does it actually work?
We don't know yet, but you can follow along as we find out. So far the fuzzer has found a few bugs in its own accessibility code, but it hasn't yet explored far enough to uncover bugs in Chrome's fundamental UI. It has run for only a few hours in our ClusterFuzz environment and hasn't yet been deployed to our production fuzzing fleet. If you'd like to follow along, keep an eye on our coverage dashboard as it extends to cover UI code.