Vasileios Kemerlis spends a good bit of his time hacking into operating systems and other software. But don’t worry. He’s on our side.
“I try to understand the ways in which software is insecure and why deployed defenses are failing,” said Kemerlis, who joins the Department of Computer Science as an assistant professor. “This approach can help us understand how to design more robust countermeasures. This is what’s known as software hardening. You make it harder for someone to take advantage of vulnerabilities that exist in the software.”
There are two sides to Kemerlis’s work. There’s the offensive side — hacking into systems to find out what the vulnerabilities are — and the defensive — trying to fix those holes.
He isn’t so much interested in isolated vulnerabilities that arise from coding errors or other bugs. Instead, he focuses on fundamental flaws in the way software systems are designed that can lead to security problems. These are the kinds of problems that may occur in products from multiple vendors and put millions of computers at risk.
“If we find something that exists in Windows and Mac and in another operating system, it’s fundamental,” Kemerlis said. “We want to say, ‘You shouldn’t be doing it this way because there’s this problem.’”
For example, Kemerlis and a few colleagues recently found a vulnerability in the way most operating systems deal with physical memory — the chips that provide computers with random access memory. It’s a design flaw that can cancel out other security protections that a computer may have.
Several vendors have already started to address the issue. Kemerlis hopes more will follow.
“This is an example where you have one particular design choice that reduces the effectiveness of other security mechanisms,” he said. “That’s the kind of thing I try to study. Operating systems and other systems are becoming more and more complex. They have many different components and we might not understand the security of the whole thing even if we know the security properties of individual elements. The composability can be tricky.”
On the defensive side of his work, Kemerlis hopes to help change the general approach to system and software security.
Currently, he says, software is protected in much the same way as a knight heading to battle. Security measures are layered on like armor in the hopes that at least one of those layers will deflect enemy swords and arrows. But all that armor can slow the knight down and reduce flexibility.
“It’s the same with software,” Kemerlis said. “With all those layers, it cannot scale. It becomes slower. You end up paying too much for security — whether the cost is in resources, energy, or performance.”
Kemerlis wants to work on new, smarter, and more agile security approaches.
“I think we can do something more targeted. Instead of protecting everything all the time, we can develop something that’s applied when you need it and only when you need it. In other words, the software understands whether it’s under attack or not, and it intensifies the protections only when it needs to.”
He hopes that these new approaches might also help to break what he calls the “security monoculture,” in which most vendors tend to rely on the same types of defenses. As it is now, once attackers develop a strategy to get past one vendor’s defenses, they’ve got the keys to the kingdom. But a more varied set of defenses would reduce the value of a given attack strategy.
“It helps prevent a situation where attackers develop one strategy and then use it a thousand times,” Kemerlis said. “That reduces the return on investment for attackers.”
Kemerlis comes to Brown from Columbia University, where he received his Ph.D. earlier this year. He says he’s excited to start working with Brown students and get them thinking in a “security mindset.”
“I want students to always be asking questions: What are we trying to protect and from whom? What are the assumptions? What are the defenses? How can we go around them?
“Sometimes you really need to think in adversarial terms.”