Tuesday, 30 August 2011 07:00

Qubes OS: An Operating System Designed For Security


What would an operating system look like if it were redesigned with security in mind? Joanna Rutkowska thinks she has the answer with the development of Qubes OS. We sit down for an interview with Joanna to discuss how Qubes OS strengthens security.

Alan: Hi Joanna, thanks again for taking the time to chat with us.

Joanna: You're welcome.

Alan: Since I know you’re busy, I’ll just throw in a link to our previous interview (Exclusive Interview: Going Three Levels Beyond Kernel Rootkits), in which you talked about the risks beyond the rootkit, and ask that our readers skim through it first.

I really want to get to talking about Qubes OS, though.

For the benefit of our audience, I want to review the three approaches to system security. We have security by obscurity, with things like memory randomization, code obfuscation, and system administrators mandating complex passwords. This acts as a first line of defense—if the bad guys can’t find your house, they can’t break in. It’s a deterrent that encourages the bad guys to look for an easier target. But it doesn’t work when they really want your data.
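To make the memory-randomization example concrete, here is a small, purely illustrative C sketch (not tied to any particular OS hardening feature). On a system with address space layout randomization (ASLR) enabled, the stack and heap addresses it prints change from run to run, so an exploit cannot rely on a hard-coded address to aim at.

```c
/* Purely illustrative sketch of memory randomization (ASLR).
 * Compile it once and run it several times: with ASLR enabled, the
 * stack and heap addresses printed below differ between runs. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack;                   /* lives on the (randomized) stack */
    void *on_heap = malloc(16);     /* lives on the (randomized) heap  */

    printf("stack variable at %p\n", (void *)&on_stack);
    printf("heap allocation at %p\n", on_heap);

    free(on_heap);
    return 0;
}
```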

Then we have security by correctness, where software developers try to write bug-free code so that there are no vulnerabilities. Every time software gets patched, it’s a little bit more correct. But as we see every second Tuesday of the month, even the resources that Microsoft has are insufficient to come up with a perfectly correct OS. Modern software is so big and complex that it’s almost impossible to validate code to be perfect.

Finally, we have security by isolation, which takes a somewhat pessimistic (though more realistic) view that, at some point, the bad guys will break through whatever security measures you have, so the focus should be on stopping them from getting access to the rest of the system. Fair summary?

Joanna: Ha! I wish more interviewers were so well-prepared. :)

I would perhaps add one more category to this: reactive security, which in practice comes down to patches and signatures (for IDS and AV). Of course, this approach is the least effective, as we all well know.

Alan: The problem with security by isolation is that popular implementations like Safari’s or Chrome’s sandboxing or Internet Explorer’s Protected Mode are great in concept, but less secure in real life. Is developing perfect isolation just as difficult as security by correctness?

Joanna: Well, one still needs security by correctness when implementing isolation (sandboxing). The difference is that this is only needed for the code that enforces the isolation, not for all of the code.

If we can design a system where the isolation-enforcing code is very small, then there is a clear win: we have much less code to write correctly.

On the other hand, if one tries to build a sandbox on top of a huge, buggy, monolithic system, which exposes numerous complex APIs to applications, then the amount of code that must be written correctly is not that small, and the potential gain is much less obvious.

As a side note, I find it funny how the word "sandbox" became such a buzzword. Since the early days of multitasking OSes, the system was supposed to provide isolation between processes and users (address space isolation, access control to file system objects, etc.).

Thus, we can say that on any multitasking OS, for decades, every process has always been "sandboxed." It's just that, first, the sandboxing was designed for server applications and not for desktop applications (where all processes usually run as the same user), and second, OS kernels turned out to be buggy, and not so effective at enforcing this isolation.
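As a quick illustration of the address space isolation Joanna mentions (a generic sketch, not Qubes-specific code): after fork(), the parent and child processes run in separate copies of memory, so one process cannot simply reach over and modify the other's variables.

```c
/* Generic illustration of per-process address space isolation.
 * After fork(), the child's write to `counter` happens in its own
 * copy of memory and is never seen by the parent. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0;

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {              /* child: its own address space */
        counter = 42;
        printf("child  sees counter = %d\n", counter);
        _exit(0);
    }

    waitpid(pid, NULL, 0);       /* parent waits for the child */
    printf("parent sees counter = %d\n", counter);   /* still 0 */
    return 0;
}
```

The parent still prints 0 because the child's write landed in its own copy of the address space; that separation is exactly what the kernel is trusted to enforce.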

Today's sandboxing technologies attempt to address the first problem in that they try to be more suited for desktop applications.

This might, for example, require splitting a browser into several processes: one for rendering, another for user interface handling, and so on. This is all good, but the second problem mentioned above still remains unsolved. Can we rely on a big, fat, and buggy kernel that has hundreds of drivers inside, networking stacks, and so forth to enforce strong isolation?

People who regularly release kernel exploits for popular OSes (Linux being no exception) seem to be yelling: NO!
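For readers curious what desktop sandboxes actually build on, one Linux primitive is seccomp, which limits the system calls a process may make. The sketch below is illustrative only (it uses seccomp's original strict mode, just one of several mechanisms a real browser sandbox would combine), and it also shows why the kernel stays in the trusted base: it is the kernel that enforces, or fails to enforce, those restrictions.

```c
/* Illustrative sketch: locking a worker process into seccomp strict mode.
 * After the prctl() call, only read(), write(), exit() and sigreturn()
 * are permitted; any other system call gets the process killed. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main(void)
{
    const char msg[] = "sandboxed: only read/write/exit allowed now\n";

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }

    write(STDOUT_FILENO, msg, strlen(msg));   /* permitted */
    /* open(), socket(), etc. would now get the process killed (SIGKILL). */
    syscall(SYS_exit, 0);                     /* leave via the raw exit syscall */
}
```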
