Performance Issue With OpenSSL 1.1.1c


Performance Issue With OpenSSL 1.1.1c

Jay Foster-2
I built OpenSSL 1.1.1c from the recent release, but have noticed what
seems like a significant performance drop compared with 1.1.1b.  I
notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
few seconds, but with 1.1.1c, it takes several minutes.

I also noticed that with 1.1.1b, the CFLAGS automatically included
'-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
CFLAGS.  Was this dropped?  I added '-Wall -O3' to the CFLAGS, but this
did not seem to have any effect on the performance issue (unrelated?).

This is for a 32-bit ARM build.

Jay

Re: Performance Issue With OpenSSL 1.1.1c

Jay Foster-2
On 5/28/2019 10:39 AM, Jay Foster wrote:

> I built OpenSSL 1.1.1c from the recent release, but have noticed what
> seems like a significant performance drop compared with 1.1.1b.  I
> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
> few seconds, but with 1.1.1c, it takes several minutes.
>
> I also noticed that with 1.1.1b, the CFLAGS automatically included
> '-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
> CFLAGS.  was this dropped?  I  added '-Wall -O3' to the CFLAGS, but
> this did not seem to have any affect on the performance issue
> (unrelated?).
>
> This is for a 32-bit ARM build.
>
> Jay
>
I think I have tracked down the change in 1.1.1c that is causing this.
It is the addition of the DEVRANDOM_WAIT functionality for Linux in
e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
a select() call on /dev/random.  After the select() eventually wakes up,
it then reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but
instead just read from /dev/urandom.  Is there more information about
this change (i.e., a rationale)?  I did not see anything in the CHANGES
file about it.
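
For illustration, a minimal sketch (not the actual OpenSSL source; paths
and error handling simplified from what rand_unix.c does) of what the
DEVRANDOM_WAIT behaviour roughly amounts to: block in select() until
/dev/random becomes readable, then read from /dev/urandom as before.

    /* Sketch only: simplified illustration of the 1.1.1c wait, not the
     * real implementation in crypto/rand/rand_unix.c. */
    #include <fcntl.h>
    #include <sys/select.h>
    #include <unistd.h>

    static void wait_for_dev_random(void)
    {
        int fd = open("/dev/random", O_RDONLY);   /* DEVRANDOM_WAIT */

        if (fd >= 0) {
            fd_set fds;

            FD_ZERO(&fds);
            FD_SET(fd, &fds);
            /* Blocks until the kernel CRNG is initialized and /dev/random
             * becomes readable; on an idle, freshly booted system this can
             * take minutes. */
            select(fd + 1, &fds, NULL, NULL, NULL);
            close(fd);
        }
        /* ... seeding then continues from /dev/urandom, as 1.1.1b did. */
    }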

Jay

Re: Performance Issue With OpenSSL 1.1.1c

Steffen Nurpmeso-2
Jay Foster wrote in <[hidden email]>:
 |On 5/28/2019 10:39 AM, Jay Foster wrote:
 |> I built OpenSSL 1.1.1c from the recent release, but have noticed what
 |> seems like a significant performance drop compared with 1.1.1b.  I
 |> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
 |> few seconds, but with 1.1.1c, it takes several minutes.
 |>
 |> I also noticed that with 1.1.1b, the CFLAGS automatically included
 |> '-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
 |> CFLAGS.  was this dropped?  I  added '-Wall -O3' to the CFLAGS, but
 |> this did not seem to have any affect on the performance issue
 |> (unrelated?).
 |>
 |> This is for a 32-bit ARM build.
 |>
 |> Jay
 |>
 |I think I have tracked down the change in 1.1.1c that is causing this. 
 |It is the addition of the DEVRANDOM_WAIT functionality for linux in
 |e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
 |a select() call on /dev/random.  After this eventually wakes up, it then
 |reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but instead
 |just read from /dev/urandom.  Is there more information about this
 |change (i.e., a rationale)?  I did not see anything in the CHANGES file
 |about it.

I do not know why lighttpd ends up on /dev/random for you, but in
my opinion the Linux random stuff is sophisticated, and it sucks.
The latter because it seems that many people end up using haveged or
similar to pump up their entropy artificially, whereas on the other
hand the initial OS seeding is no longer truly supported.  Writing
some seed to /dev/urandom does not bring any entropy to the "real"
pool.

This drove me insane on my older boxes, and on my VM server (which
suddenly required minutes for booting, though mind you it was actually
OpenSSH hanging; only the boot messages made me think it was something
else) I even had to log in twice to end a hang of half an hour -- by
pressing one (maybe two) keys!

Once it is up, that box does reasonable work by generating I/O and thus
I/O-based entropy, but the pool cannot be fed until we get there.  I
installed haveged, but this is ridiculous!  Therefore I have written a
small program, entropy-saver.c, which saves and restores entropy to the
real pool; that is still possible (though the interface is deprecated).
It works just fantastically, and even on my brand-new laptop it is of
value.  Note that Linux does not take the claimed number of bits for
granted, but credits only about half of them.  Feel free to use it.  Do
not use it in conjunction with haveged or similar, or take care of the
order in which they run.
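
For comparison only, a minimal sketch (an illustration, not the attached
entropy-saver.c; assumes Linux and root/CAP_SYS_ADMIN) of how saved bytes
can be credited back to the kernel pool through the RNDADDENTROPY ioctl:

    /* Sketch only: credit saved seed bytes to the kernel entropy pool.
     * The entropy_count we claim is in bits; the kernel may credit less. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/random.h>

    static int credit_entropy(const unsigned char *buf, int len)
    {
        struct rand_pool_info *info;
        int fd, rc = -1;

        info = malloc(sizeof(*info) + len);
        if (info == NULL)
            return -1;
        info->entropy_count = len * 8;   /* claimed bits */
        info->buf_size = len;            /* payload size in bytes */
        memcpy(info->buf, buf, len);

        fd = open("/dev/urandom", O_WRONLY);
        if (fd >= 0) {
            rc = ioctl(fd, RNDADDENTROPY, info);
            close(fd);
        }
        free(info);
        return rc;
    }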

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)

[Attachment: entropy-saver.c (8K)]

Re: Performance Issue With OpenSSL 1.1.1c

Dennis Clarke-2
In reply to this post by Jay Foster-2

> I also noticed that with 1.1.1b, the CFLAGS automatically included
> '-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
> CFLAGS.  was this dropped?  I  added '-Wall -O3' to the CFLAGS, but this
> did not seem to have any affect on the performance issue (unrelated?).
>
> This is for a 32-bit ARM build.
>
> Jay

Well, for what it is worth, on ye old Solaris 10 sparc world things were
horrific before and horrific after an upgrade. No optimization at all.
Slightly better but still horrific:

beta # uname -a
SunOS beta 5.10 Generic_150400-65 sun4u sparc SUNW,SPARC-Enterprise
beta #
beta # psrinfo -pv
The physical processor has 8 virtual processors (0-7)
   SPARC64-VII+ (portid 1024 impl 0x7 ver 0xa1 clock 2860 MHz)
beta #

     * * * before * * *

beta # /usr/local/bin/openssl speed rsa
Doing 512 bits private rsa's for 10s: 12665 512 bits private RSA's in 9.99s
Doing 512 bits public rsa's for 10s: 239095 512 bits public RSA's in 10.00s
Doing 1024 bits private rsa's for 10s: 2453 1024 bits private RSA's in 9.99s
Doing 1024 bits public rsa's for 10s: 95296 1024 bits public RSA's in 10.00s
Doing 2048 bits private rsa's for 10s: 400 2048 bits private RSA's in 10.01s
Doing 2048 bits public rsa's for 10s: 29899 2048 bits public RSA's in 10.00s
Doing 3072 bits private rsa's for 10s: 164 3072 bits private RSA's in 10.04s
Doing 3072 bits public rsa's for 10s: 14204 3072 bits public RSA's in 10.00s
Doing 4096 bits private rsa's for 10s: 78 4096 bits private RSA's in 10.00s
Doing 4096 bits public rsa's for 10s: 8257 4096 bits public RSA's in 10.00s
Doing 7680 bits private rsa's for 10s: 16 7680 bits private RSA's in 10.56s
Doing 7680 bits public rsa's for 10s: 2439 7680 bits public RSA's in 10.00s
Doing 15360 bits private rsa's for 10s: 3 15360 bits private RSA's in 13.18s
Doing 15360 bits public rsa's for 10s: 622 15360 bits public RSA's in 10.00s
OpenSSL 1.1.1b  26 Feb 2019
built on: Tue Mar 26 06:51:39 2019 UTC
options:bn(64,32) rc4(char) des(int) aes(partial) idea(int) blowfish(ptr)
compiler: /opt/developerstudio12.6/bin/cc -KPIC -m64 -Xa -g
-errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff
-xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc
-ftrap=%none -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc
-xdebugformat=dwarf -xstrconst -m64 -xarch=sparc -g -Xa -errfmt=error
-erroff=%none -errshort=full -xstrconst -xildoff -xmemalign=8s
-xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -ftrap=%none
-xbuiltin=%none -xunroll=1 -Qy -xdebugformat=dwarf -DFILIO_H -DB_ENDIAN
-DBN_DIV2W -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT
-DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM
-DAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -D_REENTRANT
-DNDEBUG -I/usr/local/include -D_POSIX_PTHREAD_SEMANTICS
-D_LARGEFILE64_SOURCE -D_TS_ERRNO
                   sign    verify    sign/s verify/s
rsa  512 bits 0.000789s 0.000042s   1267.8  23909.5
rsa 1024 bits 0.004073s 0.000105s    245.5   9529.6
rsa 2048 bits 0.025025s 0.000334s     40.0   2989.9
rsa 3072 bits 0.061220s 0.000704s     16.3   1420.4
rsa 4096 bits 0.128205s 0.001211s      7.8    825.7
rsa 7680 bits 0.660000s 0.004100s      1.5    243.9
rsa 15360 bits 4.393333s 0.016077s      0.2     62.2
beta #


     * * * after * * *

beta # /usr/local/bin/openssl version
OpenSSL 1.1.1c  28 May 2019
beta # /usr/local/bin/openssl speed rsa
Doing 512 bits private rsa's for 10s: 13654 512 bits private RSA's in 9.99s
Doing 512 bits public rsa's for 10s: 238275 512 bits public RSA's in 10.00s
Doing 1024 bits private rsa's for 10s: 2665 1024 bits private RSA's in 10.00s
Doing 1024 bits public rsa's for 10s: 95371 1024 bits public RSA's in 9.99s
Doing 2048 bits private rsa's for 10s: 431 2048 bits private RSA's in 9.99s
Doing 2048 bits public rsa's for 10s: 29914 2048 bits public RSA's in 10.00s
Doing 3072 bits private rsa's for 10s: 164 3072 bits private RSA's in 10.04s
Doing 3072 bits public rsa's for 10s: 14256 3072 bits public RSA's in 9.99s
Doing 4096 bits private rsa's for 10s: 80 4096 bits private RSA's in 10.06s
Doing 4096 bits public rsa's for 10s: 8278 4096 bits public RSA's in 10.00s
Doing 7680 bits private rsa's for 10s: 16 7680 bits private RSA's in 10.34s
Doing 7680 bits public rsa's for 10s: 2437 7680 bits public RSA's in 9.99s
Doing 15360 bits private rsa's for 10s: 3 15360 bits private RSA's in 13.17s
Doing 15360 bits public rsa's for 10s: 621 15360 bits public RSA's in 10.01s
OpenSSL 1.1.1c  28 May 2019
built on: Tue May 28 19:37:03 2019 UTC
options:bn(64,32) rc4(char) des(int) aes(partial) idea(int) blowfish(ptr)
compiler: /opt/developerstudio12.6/bin/cc -KPIC -m64 -xarch=sparc -g -Xa
-errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff
-xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc
-ftrap=%none -xbuiltin=%none -xunroll=1 -Qy -xdebugformat=dwarf
-xstrconst -Xa -m64 -xarch=sparc -g -Xa -errfmt=error -erroff=%none
-errshort=full -xstrconst -xildoff -xmemalign=8s -xnolibmil -xcode=pic32
-xregs=no%appl -xlibmieee -mc -ftrap=%none -xbuiltin=%none -xunroll=1
-Qy -xdebugformat=dwarf -DFILIO_H -DB_ENDIAN -DBN_DIV2W -DOPENSSL_PIC
-DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_GF2m
-DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DGHASH_ASM
-DECP_NISTZ256_ASM -DPOLY1305_ASM -D_REENTRANT -DNDEBUG
-I/usr/local/include -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE
-D_TS_ERRNO
                   sign    verify    sign/s verify/s
rsa  512 bits 0.000732s 0.000042s   1366.8  23827.5
rsa 1024 bits 0.003752s 0.000105s    266.5   9546.6
rsa 2048 bits 0.023179s 0.000334s     43.1   2991.4
rsa 3072 bits 0.061220s 0.000701s     16.3   1427.0
rsa 4096 bits 0.125750s 0.001208s      8.0    827.8
rsa 7680 bits 0.646250s 0.004099s      1.5    243.9
rsa 15360 bits 4.390000s 0.016119s      0.2     62.0
beta #

The fact that it all works is good enough.


--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional

Re: Performance Issue With OpenSSL 1.1.1c

Hal Murray
In reply to this post by Jay Foster-2

[hidden email] said:
> I think I have tracked down the change in 1.1.1c that is causing this.   It
> is the addition of the DEVRANDOM_WAIT functionality for linux in  e_os.h and
> crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in  a select() call
> on /dev/random.  ...

I have seen similar delays that don't involve OpenSSL.

[   10.585102] [drm] Initialized qxl 0.1.0 20120117 for 0000:00:02.0 on minor 0
[   10.605881] EDAC sbridge: Seeking for: PCI ID 8086:3ca0
[   10.605887] EDAC sbridge:  Ver: 1.1.2
[  540.286117] random: crng init done
[  540.287631] random: 7 urandom warning(s) missed due to ratelimiting

May 18 20:59:00 ntp1 kernel: [drm] Initialized qxl 0.1.0 20120117 for 0000:00:02.0 on minor 0
May 18 21:09:55 ntp1 kernel: random: crng init done


--
These are my opinions.  I hate spam.




Re: Performance Issue With OpenSSL 1.1.1c

Dr. Matthias St. Pierre
In reply to this post by Jay Foster-2
> I think I have tracked down the change in 1.1.1c that is causing this.
> It is the addition of the DEVRANDOM_WAIT functionality for linux in
> e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
> a select() call on /dev/random.  After this eventually wakes up, it then
> reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but instead
> just read from /dev/urandom.  Is there more information about this
> change (i.e., a rationale)?  I did not see anything in the CHANGES file
> about it.

The original discussions for this change can be found on GitHub:

- issue #8215, fixed by pull request #8251
- issue #8416, fixed by pull request #8428

(see links below).

And you are right, the change should have been mentioned in
the CHANGES file. Apologies for that.


HTH,
Matthias


https://github.com/openssl/openssl/issues/8215
https://github.com/openssl/openssl/pull/8251

https://github.com/openssl/openssl/issues/8416
https://github.com/openssl/openssl/pull/8428


Re: Performance Issue With OpenSSL 1.1.1c

OpenSSL - User mailing list
In reply to this post by Steffen Nurpmeso-2
On 28/05/2019 23:48, Steffen Nurpmeso wrote:

> Jay Foster wrote in <[hidden email]>:
>   |On 5/28/2019 10:39 AM, Jay Foster wrote:
>   |> I built OpenSSL 1.1.1c from the recent release, but have noticed what
>   |> seems like a significant performance drop compared with 1.1.1b.  I
>   |> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
>   |> few seconds, but with 1.1.1c, it takes several minutes.
>   |>
>   |> I also noticed that with 1.1.1b, the CFLAGS automatically included
>   |> '-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
>   |> CFLAGS.  was this dropped?  I  added '-Wall -O3' to the CFLAGS, but
>   |> this did not seem to have any affect on the performance issue
>   |> (unrelated?).
>   |>
>   |> This is for a 32-bit ARM build.
>   |>
>   |> Jay
>   |>
>   |I think I have tracked down the change in 1.1.1c that is causing this.
>   |It is the addition of the DEVRANDOM_WAIT functionality for linux in
>   |e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
>   |a select() call on /dev/random.  After this eventually wakes up, it then
>   |reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but instead
>   |just read from /dev/urandom.  Is there more information about this
>   |change (i.e., a rationale)?  I did not see anything in the CHANGES file
>   |about it.
>
> I do not know why lighttpd ends up on /dev/random for you, but in
> my opinion the Linux random stuff is both sophisticated and sucks.
> The latter because (it seems that many) people end up using
> haveged or similar to pimp up their entropy artificially, whereas
> on the other side the initial OS seeding is no longer truly
> supported.  Writing some seed to /dev/urandom does not bring any
> entropy to the "real" pool.
Something equivalent to your program (but not storing a bitcount field)
used to be standard in Linux boot scripts before systemd.  But it
typically used the old method of just writing the saved random bits
into /dev/{u,}random.
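
For illustration, a minimal sketch of that old method (a sketch, not an
actual distribution boot script; assumes Linux): the saved seed is simply
written into /dev/urandom, which mixes it into the pool but does not raise
/proc/sys/kernel/random/entropy_avail -- exactly the limitation described
above.

    /* Sketch only: classic pre-systemd seed restore -- no entropy credit. */
    #include <fcntl.h>
    #include <unistd.h>

    static int restore_seed(const char *seed_path)
    {
        unsigned char buf[512];
        ssize_t n;
        int rc = -1;
        int in = open(seed_path, O_RDONLY);
        int out = open("/dev/urandom", O_WRONLY);

        if (in >= 0 && out >= 0) {
            while ((n = read(in, buf, sizeof(buf))) > 0)
                write(out, buf, (size_t)n);   /* mixed in, but not credited */
            rc = 0;
        }
        if (in >= 0)
            close(in);
        if (out >= 0)
            close(out);
        return rc;
    }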

It surprises me very much that they removed such a widely used
interface; can you point out when it was removed from the Linux
kernel?

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Performance Issue With OpenSSL 1.1.1c

Steffen Nurpmeso-2
Jakob Bohm via openssl-users wrote in <23f8b94d-0078-af3c-b46a-929b9d005\
[hidden email]>:
 |On 28/05/2019 23:48, Steffen Nurpmeso wrote:
 |> Jay Foster wrote in <[hidden email]\
 |> >:
 |>|On 5/28/2019 10:39 AM, Jay Foster wrote:
 |>|> I built OpenSSL 1.1.1c from the recent release, but have noticed what
 |>|> seems like a significant performance drop compared with 1.1.1b.  I
 |>|> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
 |>|> few seconds, but with 1.1.1c, it takes several minutes.
 ...
 |>|I think I have tracked down the change in 1.1.1c that is causing this.
 |>|It is the addition of the DEVRANDOM_WAIT functionality for linux in
 |>|e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
 |>|a select() call on /dev/random.  After this eventually wakes up, it then
 |>|reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but instead
 |>|just read from /dev/urandom.  Is there more information about this
 |>|change (i.e., a rationale)?  I did not see anything in the CHANGES file
 |>|about it.
 ...
 |> I do not know why lighttpd ends up on /dev/random for you, but in
 |> my opinion the Linux random stuff is both sophisticated and sucks.

P.S.: I have now looked at the OpenSSL code and understand what
you have said.  It indeed selects on /dev/random.

 |> The latter because (it seems that many) people end up using
 |> haveged or similar to pimp up their entropy artificially, whereas
 |> on the other side the initial OS seeding is no longer truly
 |> supported.  Writing some seed to /dev/urandom does not bring any
 |> entropy to the "real" pool.

 |Something equivalent to your program (but not storing a bitcount field)
 |used to be standard in Linux boot scripts before systemd.  But it
 |typically used the old method of just writing the saved random bits
 |into /dev/{u,}random .

Oh, still, AlpineLinux for example did (and I think still does,
using a script originating from Gentoo aka OpenRC) save a kilobyte
of /dev/urandom output, to restore it upon the next boot.  But that
does not feed the pool which feeds /dev/random; it does not count
against /proc/sys/kernel/random/entropy_avail.

Even that I can understand a little bit (physical access would
reveal the data stored in the seed file), even though the saved
entropy is not used directly but passed through state machines,
which could be randomized further when it is fed back in, depending,
I'd say, on the hardware interrupts that happen on the actual
devices while doing so.

But you lose all the entropy that the machine collected during
its last uptime, so you depend solely on some CPU features and the
noise that system startup produces to create the startup entropy.
After running into the problem and looking around I realized that
many people seem to run the haveged daemon (there is also a kernel
module which does something like this, but using it did not help
me), which applies some maths, and it is mystifying that it can
produce thousands of random bits in less than a second!

Even on my brand-new laptop, which has an 8th-generation i5 (and
skipped a decade of hardware development for me), I see hangs of
several seconds (IIRC) without the little helper I attached in the
last message.  With it I get a (SysV init/BSD rc script, aka
CRUX Linux) boot time of two seconds, which is so impressive I have
to write it down.

 |This makes me very surprised that they removed such a widely used
 |interface, can you point out when that was removed from the Linux
 |kernel?

Hm, OK, what they actually removed was the RNDGETPOOL
ioctl(2) (according to random(4)).  So my claim regarding
deprecation was misguided and wrong.

Nonetheless it has to be said that today an administrator does not
have the possibility to simply feed good entropy back in via a shell
script -- unless I am mistaken; I have no idea whether systemd
provides something to overcome this.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)

Re: Performance Issue With OpenSSL 1.1.1c

Tomas Mraz-2
In reply to this post by Jay Foster-2
On Tue, 2019-05-28 at 10:39 -0700, Jay Foster wrote:

> I built OpenSSL 1.1.1c from the recent release, but have noticed what
> seems like a significant performance drop compared with 1.1.1b.  I
> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
> few seconds, but with 1.1.1c, it takes several minutes.
>
> I also noticed that with 1.1.1b, the CFLAGS automatically included
> '-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
> CFLAGS.  was this dropped?  I  added '-Wall -O3' to the CFLAGS, but this
> did not seem to have any affect on the performance issue (unrelated?).
>
> This is for a 32-bit ARM build.

To work around the /dev/random blocking issue, you can just add:

-DDEVRANDOM="\"/dev/urandom\""

as a parameter to ./Configure

This will remove the special handling of /dev/urandom and /dev/random
in 1.1.1c.
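
For example, for the 32-bit ARM build mentioned earlier, the full command
might look like this (the linux-armv4 target is only an assumption about
the poster's configuration):

    ./Configure linux-armv4 -DDEVRANDOM="\"/dev/urandom\""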

--
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
                                              Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your
conscience.]



Re: Performance Issue With OpenSSL 1.1.1c

Dr. Matthias St. Pierre

> To workaround the /dev/random blocking issue, you can just add:
>
> -DDEVRANDOM="\"/dev/urandom\""
>
> as a parameter to ./Configure
>
> This will remove the special handling of /dev/urandom and /dev/random
> in 1.1.1c.


Tomáš, Jay,

I'm afraid this suggestion won't help, because `DEVRANDOM_WAIT` is defined
unconditionally in e_os.h:

https://github.com/openssl/openssl/blob/OpenSSL_1_1_1c/e_os.h#L30-L34

This means that the select() call will happen on Linux independently of what
`DEVRANDOM` is defined to be:

https://github.com/openssl/openssl/blob/OpenSSL_1_1_1c/crypto/rand/rand_unix.c#L509-L535

I think that pull request #8251 needs to be reconsidered. Give me a day or two;
I'll create a GitHub issue for that and post the link here when it's ready.

Matthias



Re: Performance Issue With OpenSSL 1.1.1c

Dr. Matthias St. Pierre
Correction, Tomáš was correct: there is an ` # ifndef DEVRANDOM` surrounding
the problematic code:

https://github.com/openssl/openssl/blob/OpenSSL_1_1_1c/e_os.h#L25-L34

Nevertheless, I still think this code needs to be changed, because the seeding
should just work correctly out of the box without having to add special
defines on the command line.

Matthias


Re: Performance Issue With OpenSSL 1.1.1c

Dr. Matthias St. Pierre
In reply to this post by Dr. Matthias St. Pierre
Hi,

I opened an issue on GitHub to discuss this problem in more detail.

https://github.com/openssl/openssl/issues/9078

It would be nice if you could join the discussion there.


Matthias


@Jay: in particular I'm interested to learn which Linux version and distribution
you are using. On newer systems, `getentropy()` should be the method of
choice, because it does not share the deficiencies of the `/dev/urandom` device.
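
For reference, a minimal sketch (assumes glibc >= 2.25 and Linux >= 3.17) of
obtaining seed material with `getentropy()`; unlike reading `/dev/urandom`,
it blocks only until the kernel CRNG has been initialized and never returns
uninitialized randomness:

    /* Sketch only: fetch 32 bytes of seed material via getentropy(3). */
    #include <stdio.h>
    #include <sys/random.h>

    int main(void)
    {
        unsigned char seed[32];   /* getentropy() allows at most 256 bytes */

        if (getentropy(seed, sizeof(seed)) != 0) {
            perror("getentropy");
            return 1;
        }
        printf("got %zu bytes of seed material\n", sizeof(seed));
        return 0;
    }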




On 30.05.19 02:11, Dr. Matthias St. Pierre wrote:

>> To workaround the /dev/random blocking issue, you can just add:
>>
>> -DDEVRANDOM="\"/dev/urandom\""
>>
>> as a parameter to ./Configure
>>
>> This will remove the special handling of /dev/urandom and /dev/random
>> in 1.1.1c.
>
> Tomáš, Jay,
>
> I'm afraid this suggestion won't help, because `DEVRANDOM_WAIT` is defined
> unconditionally in e_os.h:
>
> https://github.com/openssl/openssl/blob/OpenSSL_1_1_1c/e_os.h#L30-L34
>
> This means that the select() call will happen on linux independently of what
> `DEVRANDOM` is defined to be:
>
> https://github.com/openssl/openssl/blob/OpenSSL_1_1_1c/crypto/rand/rand_unix.c#L509-L535
>
> I think that pull request #8251 needs to be reconsidered. Give me one day or two,
> I'll create a GitHub issue for that and post the link here when it's ready.
>
> Matthias
>
>


Re: Performance Issue With OpenSSL 1.1.1c

Dr. Matthias St. Pierre
Yay,

there are some controversial discussions taking place on

https://github.com/openssl/openssl/issues/9078

It would be great if you could join us and provide more details about the
circumstances of your issue. In particular, information like kernel/OS version,
and whether the significant startup delay is encountered only at early boot time
or also when you start the daemon manually once the system is up and running.

Matthias