Live from Yokogawa Tech Fair: To Fix System Bugs, Wurldtech Gets Fuzzy Wid It

April 9, 2008
Anything Can Be Hacked. The list of people who enjoy finding bugs more than Dr. Nate Kube isn’t very long. His company’s customers might be at the top of that short list. “When vendors or manufacturers use our stuff to find bugs, they’re just bugs,” explained the co-founder and CTO of Wurldtech Security Technologies. “When those bugs are found in the field, they’re security issues.”

Kube explained some of his company’s techniques for detecting software faults and failures at the 2008 Yokogawa Technology Fair and Users Conference this week in Houston. “Anything can be hacked,” said Kube. “There need to be layers of protection to everything. People, quality and devices are all components of security. At the end of the day, don’t get comfortable just because you have antivirus protection.”

Kube oversees development of security technologies for SCADA and process control domains at Wurldtech, and he explained that the best way to find potential problems is to figure out where they might enter. “When you determine the vulnerability profile, you have a better idea of how to protect it,” he said. “In a house, if you know where the doors and windows are, you have a better chance at securing it.”

Kube divides software bugs into two categories: faults and failures. A fault is a static defect in source code, while a failure is incorrect system behavior or output. He recommends two complementary approaches to verification: source code review, which uses manual analysis to find faults, and testing, which uses program execution to find failures. Two properties shape how far testing can be automated: observability, the ease with which outputs from the device under test (DUT) can be observed, and controllability, the ease with which inputs can be supplied to the DUT.
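Kube’s fault/failure distinction can be illustrated with a toy example (not from his talk; the parser and inputs below are hypothetical):

```python
def parse_length_prefixed(data: bytes) -> bytes:
    """Return the payload of a message whose first byte declares its length."""
    length = data[0]
    # Fault: a static defect in the source -- the slice is never checked
    # against the declared length, so a short buffer silently yields a
    # truncated payload instead of an error.
    return data[1:1 + length]

# The fault sits in the code whether or not it is ever triggered.
ok = parse_length_prefixed(b"\x03abc")   # b'abc' -- correct behavior
# The failure is the incorrect output observed on a particular input:
bad = parse_length_prefixed(b"\x05ab")   # b'ab'  -- declared 5 bytes, got 2
```

Source code review would aim to spot the unchecked slice (the fault); testing would aim to provoke the truncated output (the failure).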
Fuzzing, or fuzz testing, is a type of security testing that feeds invalid or random data—the “fuzz”—to program inputs and notes a defect whenever the program fails. “It’s like sending negative packets,” explained Kube. “IP or TCP is a language. You can send invalid statements in the language. It’s a way to send invalid traffic.” The issues discovered this way are usually implementation issues rather than design issues, he said. “When we’re testing an implementation, the more information we can get from the device, the more tests we can send,” said Kube.

Fuzzing can be done during software development by in-house test teams, QA teams or the developers themselves, with the results handled as ordinary bugs. But when organizations fuzz to verify their own networks, the issues found require fixing by the organization and may need to be reported, publicly or discreetly.

Fuzzing has come a long way since Barton Miller at the University of Wisconsin-Madison gave it its formal designation in 1990. Originally, fuzzing wasn’t concerned with state, message structure or oracles; the only repeatable part of a run was the random seed used to start the random sequences, and 32-bit messages were the limit. Advanced fuzzers understand protocol structure, carry simple rule libraries and offer some support for multi-message or stateful protocols. Real hard-core fuzzers, Kube says, combine a high degree of controllability and observability with an understanding of protocol semantics and states, an extensive rule library, and feedback mechanisms that optimize test design.
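A minimal sketch of the classic, structure-blind style of fuzzing described above; the target parser and all names here are hypothetical illustrations, not Wurldtech’s tooling. Note the fixed random seed, which makes the run repeatable—the only repeatable part of the earliest fuzzers:

```python
import random

def fuzz_inputs(seed: int, valid: bytes, n_cases: int = 100):
    """Yield mutated variants of a known-valid message.

    The seed fully determines the sequence, so a failing run can be replayed.
    """
    rng = random.Random(seed)
    for _ in range(n_cases):
        data = bytearray(valid)
        for _ in range(rng.randint(1, 4)):          # flip 1-4 random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        yield bytes(data)

def strict_parse(data: bytes) -> bytes:
    """Hypothetical target: parse a 1-byte-length-prefixed message."""
    if not data:
        raise ValueError("empty message")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated message")
    return payload

def run_fuzz(target, seed: int, valid: bytes, n_cases: int = 100):
    """Drive the target with fuzzed inputs; each crash is a found failure."""
    crashes = []
    for case in fuzz_inputs(seed, valid, n_cases):
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, type(exc).__name__))
    return crashes

crashes = run_fuzz(strict_parse, seed=1234, valid=b"\x03abc")
```

An “advanced” fuzzer in Kube’s progression would replace the blind byte-flipping with mutations that respect the protocol’s message structure, and a stateful one would sequence multiple messages to reach deeper protocol states before mutating.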