The Open Source Software Engagement Award
Outside of my day job, my life revolves around three primary foci — Open Source Software, in that I am a contributor to FreeBSD and from time to time release other small projects independently; classical music, in that I play with the West Coast Symphony and am the Treasurer of the West Coast Amateur Musicians Society; and my Alma Mater, Simon Fraser University, where I am one of four alumni on the university Senate, and serve on three committees dealing with the creation and adjudication of scholarships, bursaries, and awards. While these foci are usually quite separate, I am always happy when they overlap; and so it is that I am delighted to announce the establishment, with funding from Tarsnap, of the $1000 Open Source Software Engagement Award at Simon Fraser University.

Simon Fraser University, some years ago, adopted an explicit strategy of being "engaged with the community": It is not enough, the thinking goes, for a university to be an "ivory tower" of research and learning; instead, a modern university must participate in the world around it. Programs such as the Trottier Observatory are thus not merely outreach activities which attempt to recruit future students, but rather a core part of the University's mission, bringing science to students of all ages. Similarly, SFU now has a long list of awards (generally valued between $500 and $1000) which recognize students' non-academic activities — covering everything from serving in the student society, to helping at local hospitals, to teaching English to refugees, to running kids' sports camps. Indeed, one of the few communities which I never see mentioned is the one to which I have the strongest connection: The community of open source software developers.
To me, this seems like an obvious place to encourage extra-curricular activity: Like other forms of community service, contributions to open source software constitute a clear public good; in many cases such contributions allow students to directly exercise the skills they are developing during their education; and while it is unusual in not being geographically localized or propagated by lineal descent, there is a very distinct culture within the open source community — one which has striking similarities to the gift cultures of the indigenous populations which inhabited the area where the university is now located, in fact. Unfortunately I can do nothing to direct university funding in this direction; but since I run an online backup service which has an explicit policy of donating money to support open source software, I was able to make the funding available for this award nonetheless.
Unlike Google's Summer of Code, this isn't an award which pays a student to work on open source software; rather, it is "free money" to recognize the contributions a student has already made. To quote the terms of reference for the award:

One award [will be granted each year] to an undergraduate student who meets the following criteria:
- is enrolled full-time in a Bachelor's degree program;
- is in good academic standing [GPA 2.0 or higher]; and
- has demonstrated excellence in contributing to Open Source Software project(s) on a volunteer basis, consisting of code and/or documentation.
Preference will be given to students who have taken a leadership role within a project.
Applications must include:
- a list of contributions to the Open Source Software project(s); and
- a letter of reference from another project member describing the project and the applicant's contributions.
A few notes about this: First, as a developer I know the importance of good documentation — and the fact that it is often overlooked — so I asked for it to be explicitly included as an accepted form of contribution. Second, I know that trying to lead volunteers is similar to trying to herd cats; but I also know that having people step into (or sometimes fall into) leadership positions is essential for the smooth progress of open source software projects, so I wanted to recognize those less quantifiable contributions. Third, because this award will be adjudicated by a committee which is not very familiar with open source software (or software generally, for that matter), the letters of reference are absolutely crucial. While requiring a letter from another project member does rule out single-person projects, I don't particularly mind this: I'd rather give money to a student who works with other developers than to a student who writes code alone anyway. And finally, because this is an award rather than a scholarship or bursary, it is disbursed entirely based on the above terms — there is no need for a high GPA (as with scholarships) or financial need (as with bursaries).
This award should be disbursed for the first time in the Spring 2015 term, and the deadline for applications is January 16th — although given the need for a letter of reference, I would encourage students to apply well before the deadline. In future academic years this will be awarded in the Fall term.
If you are an SFU student who contributes to open source software, please apply!
Zeroing buffers is insufficient
On Thursday I wrote about the problem of zeroing buffers in an attempt to ensure that sensitive data (e.g., cryptographic keys) which is no longer wanted will not be left behind. I thought I had found a method which was guaranteed to work even with the most vexatiously optimizing C99 compiler, but it turns out that even that method wasn't guaranteed to work. That said, with a combination of tricks, it is certainly possible to make most optimizing compilers zero buffers, simply because they're not smart enough to figure out that they're not required to do so — and some day, when C11 compilers become widespread, the memset_s function will make this easy.
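For what it's worth, here is a minimal sketch of what that will look like — assuming, of course, an implementation which actually provides the optional Annex K interfaces:

#define __STDC_WANT_LIB_EXT1__ 1	/* Request the Annex K interfaces. */
#include <stdint.h>
#include <string.h>

void
dosomethingsensitive(void)
{
	uint8_t key[32];
	...
	/* Unlike memset, memset_s is specified to not be optimized away. */
	memset_s(key, sizeof(key), 0, sizeof(key));
}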
There's just one catch: We've been solving the wrong problem.
With a bit of care and a cooperative compiler, we can zero a buffer — but that's not what we need. What we need to do is zero every location where sensitive data might be stored. Remember, the whole reason we had sensitive information in memory in the first place was so that we could use it; and that usage almost certainly resulted in sensitive data being copied onto the stack and into registers.
Now, some parts of the stack are easy to zero (assuming a cooperative compiler): The parts which contain objects which we have declared explicitly. Sensitive data may be stored in other places on the stack, however: Compilers are free to make copies of data, rearranging it for faster access. One of the worst culprits in this regard is GCC: Because its register allocator does not apply any backpressure to the common subexpression elimination routines, GCC can decide to load values from memory into "registers", only to end up spilling those values onto the stack when it discovers that it does not have enough physical registers (this is one of the reasons why gcc -O3 sometimes produces slower code than gcc -O2).
Even without register allocation bugs, however, all compilers will store temporary values on the stack from time to time, and there is no legal way to sanitize these from within C. (I know that at least one developer, when confronted by this problem, decided to sanitize his stack by zeroing until he triggered a page fault — but that is an extreme solution, which is both non-portable and very clearly C "undefined behaviour".)
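A gentler cousin of that trick is a heuristic stack scrubber: after a sensitive function returns, call a function which does nothing but zero a large volatile local array, overwriting some of the stale stack below the caller's frame. The sketch below is best-effort only — the name scrub_stack and the 8192-byte size are my own inventions, the size is pure guesswork, and nothing guarantees that the sensitive values actually fell within the scrubbed region.

#include <stddef.h>
#include <stdint.h>

/* Heuristic, not a guarantee: overwrite a chunk of stack by filling a
 * large volatile local array with zeroes.  Call this immediately after
 * the sensitive function returns, so that this frame overlaps the
 * region of stack which the sensitive function used. */
static void
scrub_stack(void)
{
	volatile uint8_t scratch[8192];
	size_t i;

	for (i = 0; i < sizeof(scratch); i++)
		scratch[i] = 0;
}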
One might expect that the situation with sensitive data left behind in registers is less problematic, since registers are liable to be reused more quickly; but in fact this can be even worse. Consider the "XMM" registers on the x86 architecture: They are only used by the SSE family of instructions, which most applications rarely touch — so once a value is stored in one of those registers, it may remain there for a long time. One of the rare cases where those registers are used by cryptographic code, however, is for AES computations, using the "AESNI" instruction set.
It gets worse. Nearly every AES implementation using AESNI will leave two values in registers: The final block of output, and the final round key. The final block of output isn't a problem for encryption operations — it is ciphertext, which we can assume has leaked anyway — but for encryption an AES-128 key can be computed from the final round key, and for decryption the final round key is the AES-128 key. (For AES-192 and AES-256, revealing the final round key provides 128 bits of key entropy.) I am absolutely certain that there is software out there which inadvertently keeps an AES key sitting in an XMM register long after it has been wiped from memory. As with "anonymous" temporary space allocated on the stack, there is no way to sanitize the complete CPU register set from within portable C code — which should probably come as no surprise, since C, being designed to be a portable language, is deliberately agnostic about the registers and even the instruction set of the target machine.
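While portable C can't help here, a platform-specific best effort is possible. The sketch below — the function name wipe_xmm is my own invention — uses GCC/Clang extended asm on x86-64 to clear the first four XMM registers; a real implementation would clear all sixteen, and even then the compiler may have spilled copies of the sensitive values elsewhere:

#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
/* Best effort only: zero xmm0-xmm3.  The clobber list tells the
 * compiler that these registers are overwritten, so it must not keep
 * live values in them across this asm statement. */
static void
wipe_xmm(void)
{
	__asm__ __volatile__(
	    "pxor %%xmm0, %%xmm0\n\t"
	    "pxor %%xmm1, %%xmm1\n\t"
	    "pxor %%xmm2, %%xmm2\n\t"
	    "pxor %%xmm3, %%xmm3"
	    : : : "xmm0", "xmm1", "xmm2", "xmm3");
}
#endif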
Let me say that again: It is impossible to safely implement any cryptosystem providing forward secrecy in C.
If compiler authors care about security, we need a new C language extension. After discussions with developers — of both cryptographic code and compilers — over the past couple of years I propose that a function attribute be added with the following meaning:
"This function handles sensitive information, and the compiler must ensure that upon return all system state which has been used implicitly by the function has been sanitized."While I am not a compiler developer, I don't think this is an entirely unreasonable feature request: Ensuring that registers are sanitized can be done via existing support for calling conventions by declaring that every register is callee-save, and sanitizing the stack should be easy given that that compiler knows precisely how much space it has allocated.
With such a feature added to the C language, it will finally be possible — in combination with memset_s from C11 — to write code which obtains cryptographic keys, uses them without leaking them into other parts of the system state, and then wipes them from memory so that a future system compromise can't reveal the keys. People talk a lot about forward secrecy; it's time to do something about it.
But until we get that language extension, all we can do is hope that we're lucky and our leaked state gets overwritten before it's too late. That, and perhaps avoid using AESNI instructions for AES-128.
Erratum
In my blog post yesterday concerning zeroing arrays without interference from compiler optimization I incorrectly claimed that the following code was guaranteed to zero an array on any conforming C compiler:
static void * (* const volatile memset_ptr)(void *, int, size_t) = memset;

static void
secure_memzero(void * p, size_t len)
{
	(memset_ptr)(p, 0, len);
}

void
dosomethingsensitive(void)
{
	uint8_t key[32];
	...
	/* Zero sensitive information. */
	secure_memzero(key, sizeof(key));
}
While I was correct in stating that the compiler is required to access memset_ptr and is forbidden from assuming that it will not change to point at some other function, I was wrong to conclude that these meant that the compiler could not avoid zeroing the buffer: The requirement to access the memset_ptr function pointer does not equate to a requirement to make a call via that pointer. As "davidtgoldblatt" pointed out on Hacker News, a compiler could opt to load memset_ptr into a register, compare it to memset, and only make the function call if they are unequal, since a call to memset in that place is known to have no observable effect.
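In other words, a conforming compiler may legally transform secure_memzero into something like the following sketch:

static void
secure_memzero(void * p, size_t len)
{
	/* The volatile read of memset_ptr must still be performed... */
	void * (* fp)(void *, int, size_t) = memset_ptr;

	/* ... but the call itself may be guarded: if fp == memset, the
	 * call is known to have no observable effect here, so it can be
	 * skipped entirely. */
	if (fp != memset)
		(fp)(p, 0, len);
}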
In light of this and other observations, I do not believe that there is any way to force a C99 compiler (i.e., one which conforms to the standard but is otherwise free to act as perversely as it wishes) to generate code to zero a specified non-volatile object.
How to zero a buffer
In cryptographic applications, it is often useful to wipe data from memory once it is no longer needed. In a perfect world, this is unnecessary since nobody would gain unauthorized access to that data; but if someone is able to exploit an unrelated problem — a vulnerability which yields remote code execution, or a feature which allows uninitialized memory to be read remotely, for example — then ensuring that sensitive data (e.g., cryptographic keys) is no longer accessible will reduce the impact of the attack. In short, zeroing buffers which contained sensitive information is an exploit mitigation technique.

Alas, this is easier said than done. Consider the most obvious approach:
void
dosomethingsensitive(void)
{
	uint8_t key[32];
	...
	/* Zero sensitive information. */
	memset(key, 0, sizeof(key));
}
This looks like it should zero the buffer containing the key before returning; but a "sufficiently intelligent" compiler — in this case, most of them — is allowed to recognize that key is not accessible via conforming C code after the function returns, and silently optimize away the call to memset. While this completely subverts our intention, it is perfectly legal: The observable behaviour of the program is unchanged by the optimization.
Now, we don't want to truly change the observable behaviour of our software — but fortunately the C standard has a more liberal concept of "observable" than most people. In particular, the C standard states that the observable behaviour includes accesses to volatile objects. What is a volatile object, you ask? It is an object defined with a volatile type — originally intended for memory-mapped device registers, where the mere act of reading or writing the "memory" location can have side effects. These days, the volatile keyword essentially means "you can't assume that this acts like normal memory".
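For illustration, the canonical use looks something like this — the address 0x40001000 is entirely made up:

#include <stdint.h>

/* A memory-mapped device status register (hypothetical address). */
#define DEVICE_STATUS (*(volatile uint32_t *)0x40001000)

uint32_t
poll_status(void)
{
	/* Each read is a real load: the compiler may not cache the value
	 * or elide the access, since reading the register may have side
	 * effects. */
	return (DEVICE_STATUS);
}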
This brings us to a common attempt at zeroing buffers:
void
dosomethingsensitive(void)
{
	uint8_t key[32];
	...
	/* Zero sensitive information. */
	memset((volatile void *)key, 0, sizeof(key));
}
On most compilers this is no better: While there is a cast to a volatile type, the pointer is immediately cast back to void * since that is the type of the first parameter to memset. This may produce a warning message, but it won't prevent the optimization: The double cast will be collapsed and the compiler will recognize that it is not handling a volatile object.
A somewhat more nuanced attempt is the following:
static void
secure_memzero(void * p, size_t len)
{
	volatile uint8_t * _p = p;

	while (len--) *_p++ = 0;
}

void
dosomethingsensitive(void)
{
	uint8_t key[32];
	...
	/* Zero sensitive information. */
	secure_memzero(key, sizeof(key));
}
This does trick a few more compilers, but it isn't guaranteed to work either: The C standard states that accesses to volatile objects are part of the unalterable observable behaviour — but it says nothing about accesses via lvalue expressions with volatile types. Consequently a sufficiently intelligent compiler can still optimize the buffer-zeroing away in this case — it just has to prove that the object being accessed was not originally defined as being volatile.

Some people will try this with secure_memzero in a separate C file. This will trick yet more compilers, but no guarantees — with link-time optimization the compiler may still discover your treachery.
Is it possible to zero a buffer and guarantee that the compiler won't optimize it away? Yes, and here's one way to do it:
static void * (* const volatile memset_ptr)(void *, int, size_t) = memset;
static void
secure_memzero(void * p, size_t len)
{
	(memset_ptr)(p, 0, len);
}

void
dosomethingsensitive(void)
{
	uint8_t key[32];
	...
	/* Zero sensitive information. */
	secure_memzero(key, sizeof(key));
}
The trick here is the volatile function pointer memset_ptr. While we know that it points to memset and will never change, the compiler doesn't know that — and most importantly, even if it figures out that we will never change the value of the function pointer, the compiler is forbidden from assuming that the function pointer won't change on its own (since that's what volatile objects do). If the function pointer might change, it might point at a function which has side effects; and so the compiler is forced to emit the function call which causes the key buffer to be zeroed.
UPDATE 2014-09-04: The above code is not guaranteed to work after all.
Now, I'm not the first person to look at this problem, of course, and if you're willing to limit yourself to narrow platforms, you don't need to write secure_memzero yourself: On Windows, you can use the SecureZeroMemory function, and on C11 (are there any fully C11-compliant platforms yet?) you can use the memset_s function. Both of these are guaranteed (or at least specified) to write the provided buffer and to not be optimized away.
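Putting those together, with the volatile-pointer loop from above as a fallback, a portable wrapper might look like the following sketch (best-effort only on platforms providing neither function, for the reasons discussed earlier):

#if defined(_WIN32)
#include <windows.h>
#else
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
#endif
#include <stddef.h>
#include <stdint.h>

static void
secure_memzero(void * p, size_t len)
{
#if defined(_WIN32)
	SecureZeroMemory(p, len);
#elif defined(__STDC_LIB_EXT1__)
	memset_s(p, len, 0, len);
#else
	/* Best effort only; a sufficiently clever compiler could still
	 * defeat this, as described above. */
	volatile uint8_t * _p = p;

	while (len--)
		*_p++ = 0;
#endif
}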
There's just one catch: We've been solving the wrong problem.