Removing difference between CLI and FFI use for computing a message digest

Removing difference between CLI and FFI use for computing a message digest

Sage Gerard
I have a Racket program that uses libcrypto through FFI bindings to compute digests. It's wrong because it returns different answers than `openssl dgst`, regardless of hash algorithm.

The code is here:
https://github.com/zyrolasting/xiden/blob/libcrypto/openssl.rkt#L76
It is based on the example in:
https://wiki.openssl.org/index.php/EVP_Message_Digests.

I'm not expecting anyone to run this program or review Racket code in detail. The links are just there for context. I just want to know if there are common C-level mistakes libcrypto users make that would make their digests disagree with the CLI. As far as I can tell, I replicated the example on wiki.openssl.org well enough to deterministically compute a digest with any byte string.

Let me know if there is any other context I can provide.

~slg



Re: Removing difference between CLI and FFI use for computing a message digest

Matt Caswell-2


On 15/09/2020 22:48, Sage Gerard wrote:

> I have a Racket program that uses libcrypto through FFI bindings to
> compute digests. It's wrong because it returns different answers than
> `openssl dgst`, regardless of hash algorithm.
>
> The code is here:
> https://github.com/zyrolasting/xiden/blob/libcrypto/openssl.rkt#L76
> It is based on the example in:
> https://wiki.openssl.org/index.php/EVP_Message_Digests.
>
> I'm not expecting anyone to run this program or review Racket code in
> detail. The links are just there for context. I just want to know if
> there are common C-level mistakes libcrypto users make that would make
> their digests disagree with the CLI. As far as I can tell, I replicated
> the example on wiki.openssl.org well enough to deterministically compute
> a digest with any byte string.
>
> Let me know if there is any other context I can provide.

Common "rookie" errors that spring to mind are:

1) Use strlen on binary data and end up passing the wrong length of data
to the functions.

2) Include carriage return/line feed in the input data in one context
but not in another.

Matt

Re: Removing difference between CLI and FFI use for computing a message digest

Sage Gerard
Thank you. I resolved the issue. The root cause was an incorrect type cast when crossing the FFI boundary.


~slg

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, September 15, 2020 6:06 PM, Matt Caswell <[hidden email]> wrote:

>
>
> On 15/09/2020 22:48, Sage Gerard wrote:
>
> > I have a Racket program that uses libcrypto through FFI bindings to
> > compute digests. It's wrong because it returns different answers than
> > `openssl dgst`, regardless of hash algorithm.
> > The code is here:
> > https://github.com/zyrolasting/xiden/blob/libcrypto/openssl.rkt#L76
> > It is based on the example in:
> > https://wiki.openssl.org/index.php/EVP_Message_Digests.
> > I'm not expecting anyone to run this program or review Racket code in
> > detail. The links are just there for context. I just want to know if
> > there are common C-level mistakes libcrypto users make that would make
> > their digests disagree with the CLI. As far as I can tell, I replicated
> > the example on wiki.openssl.org well enough to deterministically compute
> > a digest with any byte string.
> > Let me know if there is any other context I can provide.
>
> Common "rookie" errors that spring to mind are:
>
> 1.  Use strlen on binary data and end up passing the wrong length of data
>     to the functions.
>
> 2.  Include carriage return/line feed in the input data in one context
>     but not in another.
>
>     Matt
>