- Automated Encryption
- Trust Levels
- Conflict handling
- Key discovery and Opportunistic Encryption (Mail only)
- Screenshots from GpgOL
The idea behind automated encryption is simple: enable encryption without (much) user support. The goal is not just to protect users from passive attackers (e.g., someone listening), but also, as far as possible, to protect them from man-in-the-middle attacks and forgeries. In particular, we should only require help from the user if there is a good chance that she is being attacked.
This page is intended as a discussion base for validity display and opportunistic mail encryption and how to use the trust-model tofu+pgp for automated encryption.
Things to do after discussion and feedback not integrated in this draft:
- The yellow checker does not work as a symbol.
- Level 5 is not part of the screenshots.
- Write more about threats, mitigations, and attack scenarios.
What are our goals and how do we achieve them?
- Prevent mass surveillance:
It's not difficult to imagine that all clear text emails are saved by many governments and immediately analyzed. By encrypting mail by default whenever possible, we dramatically increase the cost of this type of surveillance. For instance, the government would have to interfere with key discovery (similar to how [[https://www.eff.org/de/deeplinks/2014/11/starttls-downgrade-attacks|Verizon inhibited transport level security over SMTP]]) to prevent users from learning about their communication partners' keys.
By providing a fully automated encryption mode that requires no user interaction by default and uses a trust model based on the history of communication.
- Spam phishing:
Phishing is a common type of fraud. A simple example is an email that is apparently from your bank prompting you to take some action that requires you to log in. The link to the login site actually leads to a site controlled by the attacker, which steals your credentials. Another example of such an attack is a mail containing malware in an attachment.
This type of attack can be prevented by using signatures to verify the sender's address. Since we don't require users to actively authenticate their communication partners, preventing this type of attack requires recognizing that the sender is attempting an impersonation.
There are two ways to detect this type of attack.
First, since the attacker will try to imitate the real identity to avoid detection, a common technique is to use an email address that is a homograph of the real email address, e.g., using a Cyrillic 'a' in place of a Latin 'a'. Google detects these types of phishing attacks by checking that email addresses fall into Unicode's "highly restricted" restriction level. We could do something similar and show a warning if an email address doesn't pass.
Second, if we assume that the user will regularly receive signed emails from her bank, then we can exploit the communication history to show that signed messages from previously unseen / rarely seen email addresses shouldn't be trusted. This requires vigilance on the part of the user to realize that the message didn't verify, but should have. It also requires that the user be educated. Further, if a spammer uses the same email address & key many times, the email address may eventually appear to be trustworthy using this metric.
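The first check (mixed-script detection) can be sketched in a few lines. This is a rough illustration only, not Google's actual implementation and much weaker than Unicode TR39's full restriction-level algorithm: it merely flags addresses that mix Latin letters with Cyrillic or Greek ones, the most common homograph trick.

```python
import unicodedata

# Scripts whose letters are commonly confusable with Latin ones.
CONFUSABLE_SCRIPTS = {"CYRILLIC", "GREEK"}

def looks_like_homograph(address: str) -> bool:
    """Flag an address that mixes Latin with Cyrillic or Greek letters.

    A crude approximation of a mixed-script check; Unicode TR39's real
    "highly restricted" level is considerably stricter."""
    scripts = set()
    for ch in address:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        if name.startswith("CYRILLIC"):
            scripts.add("CYRILLIC")
        elif name.startswith("GREEK"):
            scripts.add("GREEK")
        elif name.startswith("LATIN"):
            scripts.add("LATIN")
    return "LATIN" in scripts and bool(scripts & CONFUSABLE_SCRIPTS)
```

For example, "p\u0430ypal@example.com" (with a Cyrillic а) would be flagged, while a pure-Latin address passes.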
- Targeted (spear) phishing or CEO-Fraud:
This attack is similar to the spam phishing described above, but the stakes are higher. An example of this type of attack is when an assistant receives an email allegedly from the CEO requesting that the assistant immediately transfer some funds to a particular account. Unlike the above attack, in this case, the victim is targeted, and the potential monetary damage is much higher.
Again, automated techniques or the use of history cannot mitigate this attack; the employee must be trained to recognize certain signals. A possible mitigation is to have a list of fully trusted keys, and show messages that are signed with these keys differently. Note: this doesn't mean that the employee must necessarily curate this list; this can be done by the IT department.
- Man in the Middle attacks:
A Man-in-the-Middle (MitM) attack is when an adversary is actively decrypting and re-encrypting email. For this to work, the MitM must 1) get Alice and Bob to use keys that he controls and 2) re-encrypt every communication to avoid detection.
To get Alice and Bob to use keys that he controls, the MitM must intervene during the initial key discovery. In this case, we can detect the MitM attack when a valid message eventually gets through. This could occur if Alice receives a message via a channel that the attacker doesn't control.
If a good message gets through and is encrypted, Alice will be unable to decrypt it and she will probably tell Bob that something went wrong. Most likely, Alice and Bob will not be sufficiently technically savvy to diagnose the actual problem. Since everything will work when they use their usual communication channel, they will ignore the issue. To actually detect a conflict in this situation, Alice's MUA needs to fetch all of the keys specified in the PK-ESK packets. This will allow Alice's gpg to detect the conflict.
If the message is only signed, then Alice will see that the message is signed with the wrong key. In this case, we can prompt Alice to contact Bob to figure out what the right key is. In this case, we have a chance of defeating the man-in-the-middle.
Note: if the attacker sends a forged message to Bob, and Bob just downloads the specified key from the key server, then the attacker has successfully intervened. On the other hand, if Bob proactively sends a message to Alice, then he will (hopefully) access Alice's key via WKD, in which case the attacker would have to break TLS to make sure Bob gets the attacker's key.
If the MitM attempts to intervene after Alice and Bob have already successfully communicated, e.g., by sending Bob a forged message, then we can detect the MitM due to the conflict and we can prompt the users to exchange fingerprints to figure out the right key.
- Forensic detection of attacks: If an attack happened it will be detectable after the fact by the state of the attacked gnupg data directories. (See limitations below)
Limitations of the automation
The automated system can't protect against an attacker that controls the initial key exchange and persistently re-encrypts all messages with keys controlled by the attacker. This would be detectable if one message gets through, and using a WKD would make this attack more expensive, but until a user has manually exchanged / checked the fingerprint with the intended communication partner, the user can't be sure that the communication is really secure.
This system still caters to users who need that level of assurance (see Levels 3 / 4 below), and it can be argued that the costs and detectability of such an attack would likely be higher than those of other attacks on the user's system that such an attacker may be capable of.
Definitions of wording:
userid: The userid on a key that matches the SMTP address of the sender.
tofu: Information we have about the communication history; this roughly reflects the API of gpgme_tofu_info_t. As tofu info is bound to a given key + userid, it is also sometimes called a "binding".
key source: The source the key was imported from, e.g. whether it was automatically imported over https or comes from public keyservers.
Key with enough history for basic trust: The tofu validity that GnuPG assigns once enough signed messages from a binding have been seen over a long enough period (GPGME calls this "basic history").
Level 0:
(userid.validity <= Marginal AND tofu.signcount == 0 AND tofu.enccount == 0 AND key.source NOT in [cert, pka, dane, wkd])
With trust-model tofu+pgp, this level is used only for keys that were never used to verify a signature and were not obtained from a source that gives some indication that this is an actual key for this address.
The key should not be used for opportunistic encryption, to avoid the problem that the recipient might not be able to decrypt the message because it is a wrong or outdated key.
- Do not automatically encrypt to Level 0.
Level 1:
((userid.validity == Marginal AND tofu.validity < "Key with enough history for basic trust") AND key.source NOT in [cert, pka, dane, wkd]):
This level means that there is some confidence that the recipient actually can use the key as we have seen at least one signature or we have a weak trust path over the web of trust.
So it is unlikely that the recipient can't decrypt, and thus it's OK to use that key for opportunistic encryption. When receiving a signed message, however, it should only be shown in the details that the message was signed.
There may be some indication that the sender demonstrated a willingness to use crypto mails, especially if opportunistic encryption is disabled.
- Automatically encrypt to Level 1
- Don't show verified messages as much more trusted than an unverified message.
Level 2:
(userid.validity == Marginal AND ((tofu.validity >= "Key with enough history for basic trust" AND tofu.signfirst >= 3 Days ago) OR key.source IN [cert, pka, dane, wkd])):
The "automated user" that never uses any certificate manager or GnuPG directly will never see more than Level 2, as this is the highest level reachable through full automation.
We have basic confidence that the sender is who he claims to be, because we either have an established communication history or some "good enough" source (e.g. the mail provider + https) provided the key for this sender.
Conflicts at this level are discussed in more detail below in the part about conflict handling.
- Automatically encrypt to Level 2
- Show verified messages as more trusted than an unverified message.
Level 3:
userid.validity == Full AND "no key with ownertrust ultimate signed this userid."
Level 2 is good enough for most use cases, but some organizations, individuals, or policies may require an assurance of confidentiality that can never be reached through automation.
The idea is that Level 3-4 provide flexibility for Organizational Measures like: You may only send restricted documents to Level 4 keys.
The level is reached either through Web of Trust or if a user explicitly set the tofu Policy to "Good" for this key.
Automatically, this level can only be reached through the WoT, and only if the user has trusted at least one other key.
This is also the level for S/MIME Mails.
- Automatically encrypt to Level 3
- You can assert in the UI that, according to GnuPG, the sender is who he claims to be.
Level 4:
userid.validity == ultimate OR (userid.validity == full AND "any key with ownertrust ultimate signed this userid.")
Same reason as for Level 3, but even more restricted, to direct trust. This means that either you yourself, or someone who is allowed to make such assertions (e.g. a central "CA"-style person you may have in an organization), has signed that key.
For a user this could also mean something like: "With this communication partner I want to be absolutely sure that I'm always using the right key. So I manually verify the fingerprint and mark that with a local or public signature on that key."
- Automatically encrypt to level 4
- Show verified messages as "the best". Stars and sprinkle level ;-)
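The level rules above can be summarized as a small classification function. This is an illustrative sketch: the field names, the numeric validity constants, and the `KeyInfo` record are assumptions made for this sketch, not the gpgme API, and the tofu threshold stands in for "Key with enough history for basic trust".

```python
from dataclasses import dataclass

# Assumed numeric encodings for this sketch (not gpgme's actual values).
UNKNOWN, MARGINAL, FULL, ULTIMATE = 0, 1, 2, 3   # userid validity
TOFU_BASIC = 3        # tofu validity: "enough history for basic trust"
AUTHENTICATED_SOURCES = {"cert", "pka", "dane", "wkd"}

@dataclass
class KeyInfo:
    userid_validity: int
    tofu_validity: int
    sign_count: int
    enc_count: int
    sign_first_days_ago: int   # days since the first verified signature
    source: str                # e.g. "wkd", "keyserver"
    ultimate_signed: bool      # signed by a key with ownertrust ultimate

def trust_level(k: KeyInfo) -> int:
    """Map key information to the Levels 0-4 described above."""
    if k.userid_validity == ULTIMATE or (
            k.userid_validity == FULL and k.ultimate_signed):
        return 4                       # direct trust
    if k.userid_validity == FULL:
        return 3                       # WoT or tofu policy "good"
    if k.userid_validity == MARGINAL and (
            (k.tofu_validity >= TOFU_BASIC and k.sign_first_days_ago >= 3)
            or k.source in AUTHENTICATED_SOURCES):
        return 2                       # basic trust (history or HTTPS source)
    if k.userid_validity == MARGINAL:
        return 1                       # some history, no basic trust yet
    return 0                           # unknown key
```

Note that a marginal key with enough tofu history but a first signature less than three days old deliberately stays at Level 1, matching the time delay discussed below.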
Time delay for level 2
A time delay is supposed to make it more expensive for an attacker to reach level 2 as we then start to make claims about the attacker.
The assumption is that an attack that is kept up over some time is more difficult. Especially if it involves some external factors, like a phishing website etc. which might be turned off.
A time delay gives others the chance to intervene if they detect an attack e.g. if it is against a whole organization.
It also mitigates user-experience problems arising from the use of the encryption count in GnuPG's calculation of basic history: if you quickly encrypt 20 drafts or 20 mails / files to the same key, the key should not become Level 2 before you have seen a signature.
A concern is that using the time of the first signature verification before reaching Level 2 may lead to a bad user experience. E.g.: You look at the same message after three days. Now it's Level 2; last time you looked it was Level 1.
HTTPS Trust as shortcut to level 2
Using HTTPS for key discovery will automatically bring a key to level 2 because in that case we have a claim by some authenticated source that this key really belongs to the according mail address.
If an attacker controls your HTTPS, there are very likely cheaper and less detectable attacks on your communication than intercepting pgp encryption, e.g. compromising your system.
It's also harder to break HTTPS compared to SMTPS/IMAPS because every MUA offers to ignore certificate errors (which dirmngr does not) and a compromised router could claim that your MSP only offers SMTP / IMAP without encryption.
There should only be prominent information when reading a signed mail if:
- There is additional information that the sender really is the intended communication partner. (Level >= 2)
This could be displayed as a checker or a seal ribbon or something. It should be prominent and next to the signed content. There should be a distinction between Levels two, three and four but it may be slight.
Don't treat signed mails worse than unsigned mails
A MUA should not treat any signed mail worse than an unsigned mail. If the sender is not verified, the mail should be displayed similarly to an unsigned mail, because in both cases you have no information that the sender is actually your intended communication partner. You may want to show a tofu conflict more prominently, as user interaction is required at this point.
Especially: ignore GPGME's "Red" suggestion. An attacker would have removed the signature instead of invalidating it, so such a mail should be treated like an unsigned mail, with additional info shown only in the details for diagnostic purposes. The same holds when "Red" is set because a key is expired or similar. It's not more negative than an unsigned mail, so only if your MUA shows unsigned mails as "Red" may you treat signed mails this way, too ;-)
What happens when there is a conflict in the tofu trust model, i.e. you have seen two keys for a sender's mailbox, the keys are not cross-signed, and both are valid?
When can this happen:
- There is a man in the middle trying to intercept encrypted communication and he did not control all of your communication history.
- There is an impostor trying to claim a false identity.
- There is a Troll trying to hurt usability so much that automated encryption is no longer used.
- There is a man in the middle that controlled all your communication history (e.g. your router)
- A user generated two keys e.g. on two devices and did not cross sign them and uses both.
- A user lost control of his old key, and did not have a revocation certificate.
Both misuse cases should be handled on the sender's side, because he controls (or lost control of) the involved keys and can take steps / inform himself about what went wrong.
Case two is very likely more common, as case one would already lead to other problems: the communication partners of the second key would not know to which key they should encrypt, the user would be able to read mails on one device but not on the other, et cetera.
Losing keys can also be facilitated by software that provides a bad user experience or does not follow common practices.
Resolving conflicts on the senders side
A tofu-aware GUI should (even when TOFU is disabled) check, when a secret key is used for signing, whether other secret keys with the same uid are available; if they are not cross-signed, the GUI should offer to cross-sign them.
Similarly, if a signature is verified by a MUA and there are secret keys available with the same userid that are not cross-signed, the user should be told about this and, if possible, offered the option to cross-sign.
A tofu-aware GUI should also check, when signing or from time to time, whether a different key has been uploaded to a WKD for this userid, and warn in that case. This also makes a permanent man-in-the-middle attack by a mail service provider more expensive, as it would mean providing a different key to the attacked user than to others.
Automated conflict resolution by recipients
Recipient here means that if two keys have been seen for one sender address when verifying a mail, the recipient is the person doing the verification.
A conflict should then be automatically resolved by keeping the old key in use for automated encryption to this address (until it is revoked or explicitly marked as bad) and treating both keys as not "green". In the level system this means Level 0 for the new key and Level 1 for the old key.
On the policy level this means:
- Policy ask for the new key.
- Policy ask for the old key.
The conflict would be detectable in the interface because the sender will no longer be shown as verified. A details dialog should provide a clear and easy option to switch keys for this recipient or to mark the old key as explicitly good.
If a key has trust through the pgp model (Level 3 or higher) it should be kept at that level to further mitigate the "troll" attack.
If the old key is still published in the Web Key Directory for its domain, it should still be shown as a Level 2 key; in that case we have an indication that the old key is still good. (See the rationale about WKD as a shortcut to Level 2.) This means policy ask (or bad) for the new key and policy auto for the old key.
In case the new key is published in the Web Key Directory, it should not be switched over automatically, because in that case we would have two good keys and something is wrong. If a user has lost control of his old key and is unable to revoke it, we want to create problems for the sender, so that we get notified by the sender that the new key should be used and can mark the old key explicitly as bad.
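The recipient-side rules above can be sketched as a small policy function. This is a hedged sketch: the parameter names are illustrative assumptions, and the returned strings merely mirror GnuPG's tofu policies ("auto", "ask", "good").

```python
# Illustrative sketch of the recipient-side conflict rules above.
# Parameter names are assumptions for this sketch, not a real API.

def resolve_conflict(old_has_wot_trust: bool,
                     old_in_wkd: bool,
                     new_in_wkd: bool) -> dict:
    """Return the tofu policy to apply to each key in a conflict."""
    if old_has_wot_trust:
        # Level 3+ through the pgp model: keep that level, which
        # further mitigates the "troll" attack.
        return {"old": "good", "new": "ask"}
    if old_in_wkd and not new_in_wkd:
        # The old key is still published: indication it is still good,
        # so keep encrypting to it automatically.
        return {"old": "auto", "new": "ask"}
    # Default, including the "new key in WKD" case: never switch
    # automatically; the sender must tell us which key is right.
    return {"old": "ask", "new": "ask"}
```

The deliberate asymmetry (the new key never gets "auto") is what keeps an impostor's key from ever being encrypted to without user interaction.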
Does this model protect against our threats?
- Man in the middle with full control of communication history: If Alice and Bob always encrypt & sign but Mallory had full control of their communication history and re-encrypts / resigns the messages regularly this attack could be detected:
a) When a mail gets through once, e.g. if Mallory controls Alice's router and Alice uses an Internet cafe once. In that case Bob would not be able to decrypt the message. That communication failure could lead to more investigation. A MUA may assist with this by showing to which keys a mail was encrypted in case decryption fails.
b) If Alice publishes her key in a Web key directory Bob's MUA can detect that the key used for Alice does not match the one he always used and can signal this by not showing Alice's signatures as "Good" indicating that the communication is not protected.
The attack is more problematic if Alice and Bob don't encrypt but just sign. In case a) this would mean that the mails that get through from the real Alice would not be shown as valid. But once a mail gets through from the real Alice, the messages from both Alice and Mallory will no longer be shown as valid, indicating that the communication is not secure.
- Man in the Middle attack with established communication:
This attack will be prevented by keeping the established key in use so messages won't get encrypted to Mallory. By not showing the validity indication anymore this attack is also detectable.
- Impostor attack: still prevented, because the new key is never shown as valid / verified unless the user intervenes.
- The troll attack is mitigated because conflicts are no longer shown so prominently. Using a Web Key Directory can also mitigate this attack, because it prevents marking many keys as bad through spam.
Key discovery and Opportunistic Encryption (Mail only)
A MUA should offer automated key discovery and opportunistic encryption. The WKD / WKS helps with automated key discovery and should be used (e.g. via --locate-key).
To determine if a mail can be sent automatically encrypted one of the following rules must match for each recipient:
- There is a Fully / Ultimately valid key for the recipient.
- There is a marginally valid key for the recipient and the first userid that matches the recipient's mailbox has a signcount of at least one, or the recipient's key was obtained through a slightly authenticated source (e.g. WKD).
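The two rules can be sketched as a predicate evaluated per recipient. Again the constants and parameter names are assumptions for this sketch, not the gpgme API.

```python
# Assumed validity encoding for this sketch (not gpgme's actual values).
MARGINAL, FULL, ULTIMATE = 1, 2, 3
SLIGHTLY_AUTHENTICATED = {"cert", "pka", "dane", "wkd"}

def may_auto_encrypt(validity: int, sign_count: int, source: str) -> bool:
    """Decide whether a mail may be sent automatically encrypted
    to a recipient with this key."""
    if validity >= FULL:
        # Rule 1: fully / ultimately valid key.
        return True
    if validity == MARGINAL and (sign_count >= 1
                                 or source in SLIGHTLY_AUTHENTICATED):
        # Rule 2: marginal key with at least one verified signature,
        # or obtained from a slightly authenticated source.
        return True
    return False
```

A mail is only auto-encrypted when this predicate holds for every recipient.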
Auto Key Retrieve
In addition to the WKD lookup, if you receive a message signed by an unknown key, a MUA should automatically retrieve that key from a public keyserver or a Web Key Directory. This key can then be used for opportunistic encryption: because you have seen a signature, it is very likely that the recipient can decrypt. (auto-key-retrieve in gnupg)
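For illustration, WKD's "direct method" maps a mail address to a well-known HTTPS URL: the local part is lowercased, SHA-1-hashed, and z-base-32 encoded. A minimal sketch per the Web Key Directory draft (the `l` query parameter should strictly be percent-encoded, which is omitted here):

```python
import hashlib

# z-base-32 alphabet as used by GnuPG's WKD implementation.
ZB32_ALPHABET = "ybndrfg8ejkmcpqxot1uwisza345h768"

def zbase32(data: bytes) -> str:
    """Encode bytes as z-base-32 (5 bits per character, MSB first).

    For a 20-byte SHA-1 digest this yields exactly 32 characters."""
    bits = int.from_bytes(data, "big")
    nbits = len(data) * 8
    return "".join(ZB32_ALPHABET[(bits >> shift) & 31]
                   for shift in range(nbits - 5, -1, -5))

def wkd_direct_url(address: str) -> str:
    """Build the WKD "direct method" lookup URL for a mail address."""
    local, domain = address.rsplit("@", 1)
    digest = hashlib.sha1(local.lower().encode("utf-8")).digest()
    return (f"https://{domain.lower()}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")
```

A MUA (or gpg via dirmngr) fetches this URL over TLS, which is what makes the key source "slightly authenticated" compared to a plain keyserver lookup.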
Screenshots from GpgOL
No key was found. Level 0.
The first signed message from someone else. Level 1.
The second message. Level 1.
Basic trust! Level 2.
Full trust! Level 3.
Invalid sig. Level 0.
Tofu conflict. User attention required.