base64_decode_nopad() destination buffer length problem
TL;DR: base64_decode_nopad() doesn't work.
Here is a concrete example. We have 40 bytes of binary data that we want to encode. With padding, that is using base64_encode(), we end up with 56 bytes. When those resulting bytes are passed to base64_decode(), the destination buffer check done in that function means we need 42 bytes, not the original 40 bytes. This is because of the padding.
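For reference, here is that size arithmetic as a quick standalone program (my own throwaway sketch, not code from the tree; the assumption is that the decode-side check is derived from the padded source length):

```c
#include <stdio.h>
#include <stddef.h>

int
main(void)
{
  size_t bin_len = 40;

  /* Padded base64: every 3 input bytes become 4 output characters. */
  size_t b64_padded = ((bin_len + 2) / 3) * 4;   /* 56 */

  /* Unpadded base64 drops the two trailing '=' in this case. */
  size_t b64_nopad = b64_padded - 2;             /* 54 */

  /* The decode-side check (as I read it) sizes the destination off the
   * padded source length, not off the real decoded size. */
  size_t dst_required = (b64_padded / 4) * 3;    /* 42 */

  printf("encoded (padded): %zu, encoded (nopad): %zu\n",
         b64_padded, b64_nopad);
  printf("decode wants dstlen >= %zu, actual data is %zu\n",
         dst_required, bin_len);
  return 0;
}
```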
One solution, instead of explicitly adding 2 bytes as has been done in many places in the code, is to use the _nopad() interface. However, base64_decode_nopad() simply appends the missing = characters to a new source buffer and passes it along to base64_decode(). The dstlen, that is the destination buffer length where the decoded data will go, is not updated to reflect the new length of the source buffer, so the call fails because of the dstlen check in the decode function.
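To make that concrete, here is roughly the shape I'm describing, simplified from memory (not the actual code: the function name is mine, I'm using plain malloc()/free(), and the base64_decode() prototype is as I recall it):

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Prototype as I recall it; use the real header in-tree. */
int base64_decode(char *dest, size_t destlen, const char *src, size_t srclen);

static int
decode_nopad_sketch(uint8_t *dest, size_t destlen,
                    const char *src, size_t srclen)
{
  /* New source buffer, with room for up to two '=' and a NUL. */
  char *buf = malloc(srclen + 3);
  if (!buf)
    return -1;
  memcpy(buf, src, srclen);
  size_t buflen = srclen;
  if (srclen % 4 == 2) {
    memcpy(buf + buflen, "==", 2);
    buflen += 2;
  } else if (srclen % 4 == 3) {
    buf[buflen++] = '=';
  }
  buf[buflen] = '\0';

  /* destlen is forwarded untouched, so base64_decode()'s destination
   * check is now made against the *padded* source length. */
  int n = base64_decode((char *)dest, destlen, buf, buflen);
  free(buf);
  return n;
}
```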
Passing 40 bytes for dstlen and 54 for srclen (which is the expected value without padding), the nopad() call grows srclen to 56 bytes, but then dstlen needs to be 42 bytes or the call fails.
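In caller terms, this is the pattern that trips it (sketch only, with the base64_decode_nopad() prototype written from memory):

```c
#include <stddef.h>
#include <stdint.h>

/* Prototype as I recall it; use the real header in-tree. */
int base64_decode_nopad(uint8_t *dest, size_t destlen,
                        const char *src, size_t srclen);

void
example(const char *b64_nopad /* 54 chars, no '=' */)
{
  uint8_t out[40];  /* exactly the size of the original binary data */

  /* srclen 54 is padded to 56 internally, so the decode-side check
   * wants destlen >= 42 and this fails even though the 40 bytes of
   * output would fit. */
  int n = base64_decode_nopad(out, sizeof(out), b64_nopad, 54);
  (void)n;
}
```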
I'm not sure how to fix that properly apart from having the _nopad() call allocate a new destination buffer if needed. I would much prefer that over the caller adding bytes beforehand, which makes the code cryptic and, honestly, unsafe against any future length changes.
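Something along these lines is what I have in mind (a rough sketch only: the helper name is mine, it uses plain malloc()/free() rather than the tree's allocators, and it assumes base64_decode() returns the number of decoded bytes, as I recall it does):

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Prototype as I recall it; use the real header in-tree. */
int base64_decode(char *dest, size_t destlen, const char *src, size_t srclen);

/* Sketch of the decode step once the wrapper has already built its padded
 * copy of the source (buf/buflen, with buflen % 4 == 0): decode into a
 * temporary buffer large enough to satisfy base64_decode()'s check, then
 * copy only what the caller's buffer can actually hold. */
static int
decode_padded_into(uint8_t *dest, size_t destlen,
                   const char *buf, size_t buflen)
{
  size_t worst_case = (buflen / 4) * 3;

  /* Caller's buffer already big enough for the existing check: no need
   * for an extra allocation. */
  if (destlen >= worst_case)
    return base64_decode((char *)dest, destlen, buf, buflen);

  char *tmp = malloc(worst_case);
  if (!tmp)
    return -1;

  int n = base64_decode(tmp, worst_case, buf, buflen);
  if (n < 0 || (size_t)n > destlen) {
    free(tmp);
    return -1;
  }
  memcpy(dest, tmp, (size_t)n);
  free(tmp);
  return n;
}
```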
Thoughts?