Why does TLSv1 need two tls1_enc calls to get decrypted data while TLSv1.1/TLSv1.2 needs one in OpenSSL 1.1.0f?


Why does TLSv1 need two tls1_enc calls to get decrypted data while TLSv1.1/TLSv1.2 needs one in OpenSSL 1.1.0f?

Ma chunhui
Hi, 

I met a problem when using OpenSSL 1.1.0f with the TLSv1 protocol.
In brief, with TLSv1, after the server side receives encrypted data and the function tls1_enc finishes, the decrypted data is not put in the result buffer; only after another tls1_enc call does it appear there. With TLSv1.1/TLSv1.2, a single tls1_enc is enough.


The way to reproduce it is quite simple:

1. Preparation: openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes
2. Start the server: openssl s_server -key key.pem -cert cert.pem -accept 44330 -www
    (it is better to start the server under gdb, set a breakpoint at tls1_enc, then continue)
3. Start the client: openssl s_client -connect localhost:44330 -tls1 -debug

After the client starts, the server side stops at the breakpoint; issue several "c" commands so it continues and waits for the client's messages.
Then on the client side, type a simple "hello" message and press Enter. The server stops at tls1_enc; the input is the same as the encrypted data from the client, but after EVP_Cipher and some padding removal, the decrypted data length is 0. After another tls1_enc, the decrypted "hello" is put in the result buffer.

But if the client uses -tls11 or -tls12, the decrypted "hello" is put in the result buffer after the first tls1_enc.

Could anyone explain why the decryption behavior differs between TLSv1 and TLSv1.1/TLSv1.2?

Thanks.

--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

Re: Why does TLSv1 need two tls1_enc calls to get decrypted data while TLSv1.1/TLSv1.2 needs one in OpenSSL 1.1.0f?

Matt Caswell-2


On 27/09/17 15:44, Ma chunhui wrote:


In TLSv1 and below the CBC IV is the previous record's last ciphertext
block. This can enable certain types of attack where an attacker knows
the IV that will be used for a record in advance. The problem was fixed
in the specification of TLSv1.1 and above where a new IV is used for
each record. As a countermeasure to this issue, OpenSSL (in TLSv1) sends
an empty record before each "real" application data record to
effectively randomise the IV and make it unpredictable so that an
attacker cannot know it in advance.

Therefore a TLSv1 OpenSSL client will send two records of application
data where a TLSv1.1 or above OpenSSL client will just send one. This
results in tls1_enc being called twice on the server side.

This behaviour can be switched off by using the option
SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS - but since this is considered
insecure that would probably be unwise.

Matt

Re: Why does TLSv1 need two tls1_enc calls to get decrypted data while TLSv1.1/TLSv1.2 needs one in OpenSSL 1.1.0f?

Ma chunhui
Hi Matt,
<sorry, I replied to this mail and copied the replies from the daily digest>
Thanks for your quick response.

And yes, with the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS option set on the client side, the result can be obtained from one decryption. But the problem is that sometimes we can't control the client's behavior. The client may be openssl s_client, a Python script, or something else, and the option is not that safe anyway.

Another interesting thing: if the server uses OpenSSL 1.0.2 or 1.0.1, the result can be obtained from just one decryption (one tls1_enc call) with TLSv1, even though the client did not set that option either (in fact, my client uses OpenSSL 1.1.0f). So it seems the processing changed somewhere in OpenSSL 1.1.0.
Could you please explain a bit more about why OpenSSL 1.1.0f made this change? (I mean, the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS option has existed since 0.9.6d, yet OpenSSL 1.0.1 and 1.0.2 can get the result in one tls1_enc while OpenSSL 1.1.0f needs two.)

The reason I'm focused on this: I'm calling OpenSSL through JNI, and my usage is like tomcat-native's: first use BIO_write to write data into OpenSSL, then use SSL_read with a zero-length buffer to trigger decryption, then use SSL_pending to check how much data there is. If the data is not put into the result buffer by one decryption, SSL_pending returns a length of 0, and the whole process would need to be changed, which is not what we want.

Thanks.



Re: Why does TLSv1 need two tls1_enc calls to get decrypted data while TLSv1.1/TLSv1.2 needs one in OpenSSL 1.1.0f?

Matt Caswell-2


On 28/09/17 14:38, Ma chunhui wrote:

> Hi, Matt
> <sorry I  repied this mail and copied replies from daily digest>
> Thanks for your quickly response. 
>
> And yes, with this option SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS in client
> side  the result can be get from one decryption. But the problem is,
> sometimes we can't control client's behavior.  Maybe the client is
> openssl s_client, or maybe it's a python script or some other client,
> and the option is not that safe.
>
> Another interesting thing is,  if server is using OpenSSL 1.0.2 or1.0.1,
> the result can be get from just one decryption(one tls1_enc method) with
> protocol TLSv1, and the client didn't add that option either(In fact,
> I'm using client OpenSSL1.1.0f). So it seems the process is changed in
> some version of OpenSSL1.1.0.  
> Could you please explain a bit more on why openSSL 1.1.0f made this
> change? (I mean, the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS option is added
> since 0.9.6d but openSSL1.0.1 and 1.0.2 can get result in one tls1_enc,
> while OpenSSL1.1.0f needs two tls1_enc)

Well I can't replicate that result. If using an OpenSSL 1.0.2 server I
still see two calls to tls1_enc (with both a 1.1.0 client and a 1.0.2
client). Note - that doesn't necessarily translate to two SSL_read()
calls (see below).


> The reason why I'm focus on this is because:  I'm using JNI call
> OpenSSL, my usage is like this(Just like tomcat native ):  first, use
> BIO_write to write data to openssl, and then use SSL_read to read 0
> length to trigger decryption, and then use SSL_pending to check how much
> data there is. If the data length is not put in result buffer in one
> decryption. then SSL_pending will get a 0 length result. and the whole
> process needs to be changed ,which is not we want.

If OpenSSL reads an empty record then it will immediately try to read
the next record if one is available without returning control back to
the calling application. Therefore a single SSL_read() call can result
in multiple tls1_enc calls. However this is highly dependent on timing.
If the empty record and the following non-empty record arrive at the
destination slightly separated by time then when OpenSSL reads the first
empty record it will attempt to read the next record. This will fail
because it has not arrived yet and control will return to the calling
application. So sometimes you will have to call SSL_read() twice and
sometimes you will have to call it once. This is possibly a reason why
you see different behaviour between 1.0.2 and 1.1.0, i.e. because this
is very timing sensitive.

Basically what you are doing is wrong. You cannot rely on the fact that
calling SSL_read() will definitely result in readable data being
decrypted. It might do - it might not. Another scenario where this could
occur is if a record arrives that is split across multiple TCP packets.
You call SSL_read() when the first TCP packet arrives - but because a
full record isn't there yet you get no readable application data back.
Yet another scenario is if the client attempts a renegotiation: network
packets arrive but when decrypted they don't actually contain any
application data - just handshake data.

All of this is very reliant on timing, how the client behaves (which you
cannot control) and how the network behaves. If this was working for you
before then it sounds like you've been lucky so far.

Matt



Re: Why does TLSv1 need two tls1_enc calls to get decrypted data while TLSv1.1/TLSv1.2 needs one in OpenSSL 1.1.0f?

Ma chunhui
Hi Matt,
First, sorry for the mistake I made: in fact, with OpenSSL 1.0.2 as the server, tls1_enc is also called twice under TLSv1, so it's not a timing issue.
Besides, we don't rely on SSL_read definitely producing readable decrypted data; I was just describing the most general flow. Of course we check for and handle every situation, and my server has been working fine at high concurrency with the old version of OpenSSL for a long time.

After some more debugging of my server (not openssl s_server) against different OpenSSL versions, 1.0.2l and 1.1.0f, I found the reason my server passes with TLSv1 on OpenSSL 1.0.2 but not on OpenSSL 1.1.0.

The reason is that with OpenSSL 1.1.0, after the first decryption the result buffer length is 0, and at the end of ssl3_get_record, RECORD_LAYER_set_numrpipes is called to set numrpipes to 1. Then, on the second SSL_read call (note: we pass 0 as the length parameter, so each SSL_read triggers only one tls1_enc, but we call SSL_read twice), ssl3_read_bytes checks numrpipes before calling ssl3_get_record; since it is not 0, ssl3_get_record is not called to process the second record, and hence the second tls1_enc never runs. I also checked the place where numrpipes would be set back to 0, but unfortunately, since rr[0].read is always 0, it is never reset. That is why SSL_read can't get the decrypted data.

While with OpenSSL 1.0.2, the condition guarding ssl3_get_record is if (rr->length || xxx); on the second call to SSL_read, rr->length is 0 in this case, so the second tls1_enc is called and the decrypted data is produced.
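
To make the difference concrete, here is a toy model (plain C, not OpenSSL code) of the two guards as described above; the names only mimic the real numrpipes counter and rr fields:

```c
/* Toy model of the control flow described above -- NOT OpenSSL code.
 * Scenario: two records arrive together, the empty TLSv1 fragment and
 * the 5-byte "hello" record. */
struct rec { int length; int read; };

/* OpenSSL 1.1.0 style (as described): ssl3_read_bytes fetches a new
 * record only when the pipeline counter has dropped back to 0. Because
 * rr[0].read never gets set for the zero-length record, the counter
 * stays at 1 and the "hello" record is never processed. */
int will_fetch_next_record_110(int num_recs)
{
    return num_recs == 0;
}

/* OpenSSL 1.0.2 style (as described): a new record is fetched whenever
 * the current record is exhausted, so the "hello" record is decrypted
 * by the second tls1_enc. */
int will_fetch_next_record_102(const struct rec *rr)
{
    return rr->length == 0;
}
```

Under this model, after the empty fragment the 1.1.0-style guard (counter stuck at 1) refuses to fetch the next record, while the 1.0.2-style guard (length 0 means exhausted) fetches and decrypts it.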

I don't quite understand numrpipes; could you please explain a bit more about the numrpipes change? It looks like a bug to me, because numrpipes does not seem to be reset to 0 correctly.

Thanks.


