openssl commandline client use


Paul Chubb
Hi,
      I am in the process of using the OpenSSL suite for many things, including encrypting private information. There is a heap of information on the internet suggesting the openssl command-line client for these sorts of purposes. However, in a very few places there are also statements that the client is only for testing the library and shouldn't be used in anger, that it isn't secure, or that only certain protocols are correctly implemented. There isn't a statement in the documentation about this, and we all know the religiosity of some statements on the internet.

I am using aes-256-cbc to encrypt streamed data (a backup).

I am running OpenSSL 1.1.0g 2 Nov 2017

Any comments would be helpful.

many thanks in advance

Paul


Re: openssl commandline client use

Michael Wojcik
> From: openssl-users [mailto:[hidden email]] On Behalf Of Paul Chubb
> Sent: Wednesday, October 10, 2018 19:16

> I am in the process of using the openssl suite for many things including
> encrypting private information. There is a heap of information on the internet
> suggesting using the openssl client for these sort of purposes. However in a very few
> places there are also statements that the client is only for testing the library and
> shouldn't be used in anger, that it isn't secure or that only certain protocols are
> correctly implemented. ...

> I am using aes-256-cbc to encrypt streamed data (a backup).

You haven't explained your threat model.

If you're encrypting your diary so your kid sister can't read about your crush at school, then sure, use the openssl utility with aes-256-cbc.

If the data you're encrypting falls under a regulatory regime with potentially stiff legal consequences, like HIPAA or the GDPR, then this is a Bad Idea.

Here's the thing: Forget, for a moment, the question of whether you should script crypto using the openssl utility. The real issue, to my mind, is that cryptographic systems assembled by non-cryptographers are very often - more often than not - fatally flawed. And AES is not a cryptosystem; it's a primitive. All the openssl utility is giving you there is a cipher, a key length, and a combining mode.

Some potential problems that are obvious right off the bat:

- No integrity protection over the data. You've run straight into Moxie Marlinspike's Cryptographic Doom Principle.

- CBC has problems. *All* the block cipher combining modes have problems. Stream ciphers have problems. How are you avoiding those problems? (Note that experienced people make implementation mistakes in this area.)

- What are you doing about padding? Do you have predictable plaintext near the beginning? When do you re-key?

- How do you generate and manage your keys? Are you practicing good key hygiene?

- Data recovery from an encrypted backup is tough. With CBC, one bit goes astray and you've lost everything after that. (Well, you can brute-force a single-bit error, but it's going to be a tiresome exercise unless you automate it. Multi-bit errors will obviously be worse, and combinatorial explosion will bite you quickly.) ECC (an error-correcting code) applied after encryption would be a good idea here, perhaps with an erasure code. Maybe you're storing to a suitable RAID mode or something; you haven't told us.

I can't really suggest alternatives, partly because this isn't an area I pay a lot of attention to, but mostly because you haven't explained your use case. "Data backup" doesn't mean much unless we have some idea of how much data, how often, what sort of data, what it's being backed up to, how sensitive it is, and so forth.

Sorry if this sounds overly negative, but in my opinion your question is severely underdetermined, and it sounds like you're potentially open to some rather serious failures. That may not be a concern - again, I don't know what your use case or threat model is.

--
Michael Wojcik
Distinguished Engineer, Micro Focus




Re: openssl commandline client use

Viktor Dukhovni
On Thu, Oct 11, 2018 at 01:23:41AM +0000, Michael Wojcik wrote:

> - Data recovery from an encrypted backup is tough. With CBC, one bit goes
> astray and you've lost everything after that.

No, a 1 bit error in CBC ciphertext breaks only the current block,
and introduces a 1 bit error into the plaintext of the next block.
After that, you're back in sync.
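
This is easy to check with a throwaway file (a rough sketch; the file
names and the corruption offset are arbitrary):

   openssl rand -out plain.bin 1048576
   KEY=$(openssl rand -hex 32); IV=$(openssl rand -hex 16)
   openssl enc -aes-256-cbc -K "$KEY" -iv "$IV" -in plain.bin -out cipher.bin
   # overwrite one ciphertext byte in the middle of the file
   printf '\377' | dd of=cipher.bin bs=1 seek=524288 count=1 conv=notrunc
   openssl enc -d -aes-256-cbc -K "$KEY" -iv "$IV" -in cipher.bin -out recovered.bin
   # expect roughly 17 differing bytes: the damaged 16-byte block plus one
   # byte of the next block; everything after that decrypts correctly
   cmp -l plain.bin recovered.bin | wc -l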

But yes, indeed "openssl enc" offers little integrity protection.
One should probably break the data into chunks and encrypt and MAC
each chunk with the MAC covering the chunk sequence number, and
whether it is the last chunk.
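
In shell terms, something along these lines (a rough sketch of the idea,
not a vetted design; key handling, chunk size and file names are
placeholders, and passing keys on the command line is itself a weakness):

   ENCKEY=$(openssl rand -hex 32)
   MACKEY=$(openssl rand -hex 32)
   split -b 64M backup.gz chunk.
   seq=0; last=$(ls chunk.?? | tail -n 1)
   for c in chunk.??; do
       iv=$(openssl rand -hex 16)
       openssl enc -aes-256-cbc -K "$ENCKEY" -iv "$iv" -in "$c" -out "$c.enc"
       flag=0; [ "$c" = "$last" ] && flag=1
       # MAC covers the chunk sequence number, last-chunk flag, IV and ciphertext
       { printf '%d %d %s ' "$seq" "$flag" "$iv"; cat "$c.enc"; } |
           openssl dgst -sha256 -hmac "$MACKEY" > "$c.mac"
       seq=$((seq + 1))
   done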

--
        Viktor.

Re: openssl commandline client use

Paul Chubb
Hi, thanks for the responses. I try not to do crypto for the very reasons you raise - I simply don't know enough, and your (good) pointed questions have demonstrated that.

 Context:

We are trying for GDPR and other privacy law compliance. We probably need to meet GDPR, US, Australian, Japanese and UK requirements. The data is not hugely critical. It contains names and exercise metrics. It doesn't contain credit card details or anything above the level of names. I don't think it contains addresses, but it probably does contain names of recognizable organisations, which could provide a tuple for identification purposes if the data were compromised.

A mysqldump of the db in production at present is around 170 GB, however that is text based and we are using a binary solution based on Percona XtraBackup, so the final size should be smaller for the current time. The documentation on this by the backup software provider is very simplistic and simply pipes the stream of data through gzip and then openssl:

mariabackup --user=root --backup --stream=xbstream | gzip | openssl enc -aes-256-cbc -k mypass > backup.xb.gz.enc

There are thousands of posts that do something similar, and in non-crypto circles it is the accepted way of doing things. That was my starting point.

I am not using a password but generating keys. The symmetric key is generated by "openssl rand -hex 32", which I have read is suitable. The nonce or IV is generated by "openssl rand -hex 16". These values are used once and then kept for decryption of that file. They in turn are encrypted before storing - see below.

The two key values are held in RAM while the backup occurs. They are applied to openssl using the -K and -iv switches. They are then written out to disk, encrypted with a list of public RSA keys, and the originals deleted from disk. I then package it all up and delete the intervening encrypted files, leaving me with an archive containing the encrypted backup and several copies of the nonce and key, each encrypted with a different person's public key.
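
In shell terms the flow is roughly this (a sketch of what I described,
not the actual script; the use of pkeyutl for the RSA wrapping and the
file names are stand-ins):

   KEY=$(openssl rand -hex 32)
   IV=$(openssl rand -hex 16)
   mariabackup --user=root --backup --stream=xbstream | gzip | \
       openssl enc -aes-256-cbc -K "$KEY" -iv "$IV" -out backup.xb.gz.enc
   printf 'key=%s\niv=%s\n' "$KEY" "$IV" > keyfile
   # wrap the key material separately for each recipient's RSA public key
   for pub in pubkeys/*.pem; do
       openssl pkeyutl -encrypt -pubin -inkey "$pub" -in keyfile \
           -out "keyfile.$(basename "$pub" .pem).enc"
   done
   shred -u keyfile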

The backup regime has not been decided as yet. I expect it will be something like a full backup per week and then either incrementals or differentials on the other days. I expect that the fulls will be kept for 30 days and the deltas for 14 days. The database backups will sit on a secured server disk, which in turn will be backed up by the hosting provider with whatever process and rotation they use.

I would expect that headers in the backup stream would be predictable; whether they provide a good enough attack surface I don't know. In addition, the clients of course know their own data, which may also provide an attack surface. Finally, I have included an encrypted file containing a known plaintext phrase. Based on your comments, this will probably not get into production, but it provides an easy way, for testing and debugging, to check whether things are encrypted or not.

The kind of statement that prompted my question was this: https://security.stackexchange.com/questions/182277/is-openssl-aes-256-cbc-encryption-safe-for-offsite-backup whose comments suggest that openssl should never be used for production purposes. Their suggestion was GnuPG, which isn't suitable for this purpose because its password/key management assumes a desktop/laptop environment and a manual process. I also looked at ccrypt and mcrypt but then went back to openssl.

Cheers Paul








Re: openssl commandline client use

Peter Magnusson
You would be better off with AES-CCM or similar for your backup; that
gives you an integrity check, i.e. you can be reasonably sure that what
you decrypt was encrypted with your key.

So the first question would be: why even consider AES-CBC? Somewhere in
the decision process you ought to ask "Is the major goal to support some
very strange, extremely limited legacy embedded environment where the
library developers claim CBC is the only option?" and, if not, don't
consider CBC. Since you are using OpenSSL, you clearly do not have any
problem that would give you a compelling reason to use CBC.

Using CBC in any new design does not make much sense.
- CBC is weak against oracle attacks (online interactions with a
decryption oracle).
- CBC has no protection against modifications. If decryption succeeds,
you don't know whether the resulting plaintext is what was originally
encrypted or the product of a tampered ciphertext.

However, your use case as explained ( openssl enc -aes-256-cbc -k
mypass > backup.xb.gz.enc ) might not be one where the AES-CBC
vulnerabilities are most dangerous, if the decryption process is a slow
manual one. E.g. padding oracle attacks against CBC require on average
128 decryptions to crack one byte, and need an online oracle (such as a
backup decryption/restore service) to be executed. With a human entering
the decryption key manually for each attempt, you'd expect the human to
get suspicious after thousands of decryption requests. But as soon as
you want to automate restore functions and remove the human, you might
enable oracle-style attacks.


Re: openssl commandline client use

Matt Caswell


On 11/10/18 09:47, Peter Magnusson wrote:
> You would be better off with AES-CCM or such for your backup, that
> gives you the integrity check.
>  i.e. you would be reasonably sure what you decrypt is encrypted with your key.

I'd just point out that CCM and other AEAD modes are not supported in
the openssl enc app.

Matt



Re: openssl commandline client use

Blumenthal, Uri - 0553 - MITLL
On Oct 11, 2018, at 05:03, Matt Caswell <[hidden email]> wrote:
> On 11/10/18 09:47, Peter Magnusson wrote:
>> You would be better off with AES-CCM or such for your backup, that
>> gives you the integrity check.
>> i.e. you would be reasonably sure what you decrypt is encrypted with your key.
>
> I'd just point out that CCM and other AEAD modes are not supported in
> the openssl enc app.

Yes, and many of us are eagerly waiting for this deficiency to be remedied! ;-)


>> Using CBC in anything new design does not make much sense.

This depends on the use case and the threat model.

>> - CBC is weak against oracle attacks (online interactions with a
>> decryption oracle)

Assuming a decryption oracle is applicable in the given use case. Not everything is online, and not everything is a web service. ;-)

>> - CBC has no protection against modifications. If decryption succeeds,
>> you don't know if the resulting plaintext originated from

Which is why non-AE modes should be accompanied by MAC'ing the ciphertext. (Moxie Marlinspike's principle ;)

On the other hand, AEAD modes tend to fail catastrophically if a key+nonce pair is reused. Unlike, e.g., CBC, which merely reveals that two ciphertexts came from the same plaintext ("pick your poison").
Again, it depends on the use case and the threat model.



Re: openssl commandline client use

Michael Wojcik
In reply to this post by Viktor Dukhovni
> From: openssl-users [mailto:[hidden email]] On Behalf
> Of Viktor Dukhovni
> Sent: Wednesday, October 10, 2018 23:12
>
> On Thu, Oct 11, 2018 at 01:23:41AM +0000, Michael Wojcik wrote:
>
> > - Data recovery from an encrypted backup is tough. With CBC, one bit goes
> > astray and you've lost everything after that.
>
> No, a 1 bit error in CBC ciphertext breaks only the current block,
> and introduces a 1 bit error into the plaintext of the next block.
> After that, you're back in sync.

Right, right. Emailing at bedtime again... Still, this is trouble enough.

> But yes, indeed "openssl enc" offers little integrity protection.
> One should probably break the data into chunks and encrypt and MAC
> each chunk with the MAC covering the chunk sequence number, and
> whether it is the last chunk.

Clearly an improvement (and better than a single MAC over the entire message, for reasons we've discussed in the past on this list). But we're back to designing and implementing a cryptosystem, and that's fraught with dangers for non-experts (and for experts too, if we're honest).

--
Michael Wojcik
Distinguished Engineer, Micro Focus




Re: openssl commandline client use

Michael Wojcik
In reply to this post by Matt Caswell-2
> From: openssl-users [mailto:[hidden email]] On Behalf
> Of Matt Caswell
> Sent: Thursday, October 11, 2018 05:04
>
>
> On 11/10/18 09:47, Peter Magnusson wrote:
> > You would be better off with AES-CCM or such for your backup, that
> > gives you the integrity check.
> >  i.e. you would be reasonably sure what you decrypt is encrypted with your
> key.
>
> I'd just point out that CCM and other AEAD modes are not supported in
> the openssl enc app.

And even if they were, the AEAD modes are fragile (vulnerable to misuse). GCM is completely compromised by nonce reuse, which is why some people (e.g. Bernstein) reject it outright. CCM is similarly vulnerable to key+counter reuse, which is why RFC 4309, for example, requires fresh keys for each encryption.

That was the main point of my original message: roll-your-own cryptosystems are a Bad Idea. I think providing advice like "use an AEAD mode" is bad, because it implies that crypto non-experts can safely create cryptosystems that avoid well-known pitfalls. History suggests otherwise.

--
Michael Wojcik
Distinguished Engineer, Micro Focus




Re: openssl commandline client use

OpenSSL - User mailing list
In reply to this post by Paul Chubb

As with essentially all open source software, there is no warranty with OpenSSL.

 

Having said that, people use the OpenSSL applications for all sorts of things, including what you are doing.



Re: openssl commandline client use

OpenSSL - User mailing list
In reply to this post by Paul Chubb

I have not tested it with your huge data sizes, but I have had a lot of
success with the following pipeline, which avoids a number of security
pitfalls by using higher-level OpenSSL command-line features:

BackupCmd | \
  openssl smime -sign -binary -nodetach -signer SomeDir/mycert.pem \
    -inkey SomeDir/mycert.key -outform DER | \
  gzip -n -9 | \
  openssl smime -encrypt -binary -out backup.enc -outform DER -aes256 \
    SomeDir/restorecert.pem

Where mycert.pem is a certificate issued to the system being backed up
by an internal company CA (also used for intranet https servers) and
restorecert.pem is another such certificate where the private key is
available only to restore procedures.

A feature of this pipeline is that backups can be re-encrypted with a
different key/mechanism without ruining the integrity signature, and can
also be recompressed with a better compression algorithm in the same
way.
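
For example, re-wrapping an existing archive for a new restore
certificate, without touching the inner signature, is roughly this (a
sketch under the same naming assumptions; newrestorecert.pem is a
placeholder):

  openssl smime -decrypt -inform DER -in backup.enc \
      -recip SomeDir/restorecert.pem -inkey SomeDir/restorecert.key | \
    openssl smime -encrypt -binary -outform DER -aes256 \
      -out backup.reenc SomeDir/newrestorecert.pem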

Another feature is that the server being backed up does not need to know
the decryption key (restorecert.key), while the server doing restores or
backup verifications does not need to know the integrity signing key
(mycert.key).

Dealing with the risk of not being able to decrypt a corrupted backup is
handled by having more than one backup, just like the risk of completely
losing a backup (fire, flood, ...).

Special note: Because the openssl smime (and openssl cms) signature
verification commands do not have an option to verify signatures as of
some past date (such as the date a backup was made), my restore scripts
have to run openssl under the "faketime" utility to make openssl think
it is being run on the day the backup was made.
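
The verify/restore side then looks roughly like this (again a sketch
under the same naming assumptions; companyca.pem and the timestamp are
placeholders):

  faketime '2018-10-11 00:00:00' sh -c '
    openssl smime -decrypt -inform DER -in backup.enc \
        -recip SomeDir/restorecert.pem -inkey SomeDir/restorecert.key |
      gunzip |
      openssl smime -verify -binary -inform DER \
          -CAfile SomeDir/companyca.pem -out RestoredStream
  '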

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
