Automated Encryption
The idea behind automated encryption is simple: enable encryption without (much) user support. The goal is to protect users from passive attackers (e.g., someone listening) and from man-in-the-middle attacks and forgeries. At least, as much as possible without requiring much user support. We should only require help from the user if there is a good chance that she is being attacked.
This page is intended as a basis for discussion of validity display, opportunistic mail encryption, and how to use the trust model tofu+pgp for automated encryption.
What are our goals and how do we achieve them?
Prevent mass surveillance
Clear text emails can be saved and automatically analyzed by a number of players that can gain access to the transport layer. It is documented that a number of governments do this. By encrypting mail by default whenever possible, we significantly increase the cost of this type of surveillance. For instance, a government would have to interfere with pubkey discovery (similar to how Verizon inhibited transport level security over SMTP) to prevent users from learning about their communication partners' pubkeys.
Solution: a way to find a reasonable pubkey without the user's help. (See: WKD / WKS.)
Spam phishing
Phishing is a common type of fraud. A simple example is an email that is apparently from your bank prompting you to take some action that requires you to log in. The link to the log-in site is actually to a site controlled by the attacker, which steals your credentials. Another example of such an attack is a mail containing malware in an attachment.
This type of attack can be prevented by using signatures to verify the sender's address. Since we don't require users to actively authenticate their communication partners, preventing this type of attack requires recognizing that the sender is attempting an impersonation.
Solution: there are two ways to detect this type of attack.
First, phishing attacks are successful because the mail content and headers look authentic. That is, an integral part of phishing is imitating a person or institution to trick the mark. It is straightforward to detect if there are multiple keys associated with a single email address. Thus, to get around this, a phisher could employ the common technique of using an email address that is a homograph of the real email address, e.g., using a Cyrillic 'a' in place of a Latin 'a'. Google detects these types of phishing attacks by checking that email addresses conform to Unicode's "highly restricted" restriction level. We could do something similar and show a warning if an email address doesn't pass this test.
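As a minimal sketch of such a warning, assuming a much cruder heuristic than Unicode TR39's full restriction-level algorithm (flag any address that mixes non-ASCII letters into otherwise ASCII text), the check could look like this; the function name is invented for illustration:

def looks_like_homograph(address: str) -> bool:
    """Crude stand-in for a TR39 'highly restricted' check: warn if an
    address mixes ASCII letters with non-ASCII letters, e.g. a Cyrillic
    'а' (U+0430) hiding among Latin characters."""
    has_ascii_letter = any(c.isascii() and c.isalpha() for c in address)
    has_other_letter = any(not c.isascii() and c.isalpha() for c in address)
    return has_ascii_letter and has_other_letter

# "ex\u0430mple" contains a Cyrillic 'а' in place of the Latin 'a'.
assert looks_like_homograph("phishy@ex\u0430mple.org")
assert not looks_like_homograph("alice@example.org")

A real implementation should follow TR39's script-mixing rules, since an attacker could also use an all-Cyrillic lookalike address that this heuristic would miss.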
Second, if we assume that the user will regularly receive signed emails from her bank, then we can exploit the communication history to show that signed messages from previously unseen / rarely seen email addresses shouldn't be trusted. For instance, we can show the number of signed mails from the email address in question. Recognizing a discrepancy requires that the user be vigilant, which requires some user education. But, even this can be defeated: if a spammer uses the same email address & key many times, the email address may eventually appear to be trustworthy using this metric.
Targeted (spear) phishing or CEO-Fraud
This attack is similar to the spam phishing described above, but the stakes are higher. An example of this type of attack is when an assistant receives an email allegedly from the CEO requesting that the assistant immediately transfer some funds to a particular account. Unlike the above attack, in this case the victim is targeted, and the potential monetary damage is much higher.
Solution: again, automated techniques or the use of history cannot mitigate this attack; the employee must be trained to recognize certain signals. A possible mitigation is to have a list of fully trusted keys, and show messages that are signed with these keys differently. Note: this doesn't mean that the employee must necessarily curate this list; this can be done by the IT department.
Man in the Middle attacks
A Man-in-the-Middle (MitM) attack is when an adversary is actively decrypting and re-encrypting email. For this to work, the MitM must 1) get Alice and Bob to use keys that he controls and 2) re-encrypt every communication to avoid detection.
To get Alice and Bob to use keys that he controls, the MitM must intervene during the initial key discovery. In this case, we can detect the MitM attack when a valid message eventually gets through. This could occur if Alice receives a message via a channel that the attacker doesn't control.
If a good message gets through and is encrypted, Alice will be unable to decrypt it and she will probably tell Bob that something went wrong. Most likely, Alice and Bob will not be sufficiently technically savvy to diagnose the actual problem. Since everything will work when they use their usual communication channel, they will ignore the issue. To actually detect a conflict in this situation, Alice's MUA could fetch all of the keys specified in the PK-ESK packets. This may allow Alice's gpg to detect a conflict: normally the sender's MUA encrypts emails to not just the recipients, but also the sender so that the sender can later review the sent message. Note: this scenario can occur due to a misconfiguration, e.g., the message is not encrypted to Alice.
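A sketch of how Alice's MUA could enumerate those keys, assuming gpg is on PATH: gpg --list-packets prints one ":pubkey enc packet:" line per PK-ESK, from which the key IDs can be collected and then fetched and compared:

import re
import subprocess

def pkesk_keyids(path: str) -> list[str]:
    """Return the long key IDs named in the PK-ESK packets of an
    OpenPGP message, i.e. every key the message was encrypted to.
    (A key ID of all zeros means a hidden/anonymous recipient.)"""
    # Decryption of the payload may fail (no matching secret key), but
    # the PK-ESK lines are printed first, so don't insist on exit code 0.
    out = subprocess.run(
        ["gpg", "--batch", "--list-packets", path],
        capture_output=True, text=True,
    ).stdout
    return re.findall(r":pubkey enc packet:.*keyid ([0-9A-F]{16})", out)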
If the message is only signed, then when we fetch the key used to sign the message, we will detect a conflict. In this case, we can prompt Alice to contact Bob to figure out what the right key is, which gives us a chance of defeating the man-in-the-middle.
Note: if Bob proactively sends a message to Alice, then he will (hopefully) access Alice's key via an authenticated key store, such as WKD, in which case the attacker would have to break the store's protection (e.g., TLS) to make sure Bob gets the attacker's key. On the other hand, if the attacker sends a forged message to Bob, and Bob just downloads the specified key from the key server, then the attacker has successfully intervened. This suggests that we should always check WKD for the right key, if possible.
If the MitM attempts to intervene after Alice and Bob have already successfully communicated, e.g., by sending Bob a forged message, then we can detect the MitM due to the conflict and we can prompt the users to exchange fingerprints to figure out the right key.
Forensic detection of attacks
If an attack happened, it will typically be possible to determine what happened (which messages were read, and which messages were sent) after the fact, based on the state maintained by gpg. (See limitations below.)
Limitations of automation
When communication partners authenticate each other over a secure channel, e.g., exchanging business cards in person, a MitM cannot hijack the channel without the communication partners knowing. An automated system doesn't normally have access to a secure channel. But there are different degrees of insecurity. For instance, if the initial key discovery is done over TLS to an accountable entity (i.e., something like WKD), this makes a MitM attack significantly more expensive than just looking up a key from a key server. But given that adversaries capable of monitoring all channels are probably capable of circumventing TLS, or at least of compelling mail providers to compromise their WKD using tools like National Security Letters (NSLs), WKD should not be regarded as completely authoritative.
To protect against more sophisticated attackers, an automated system can be combined with something like the web of trust. To be usable, the user must be able to distinguish these different trust levels, i.e., the cases where we have reason to believe that the connection is secure, and the cases where we know the connection is secure. (In the system described below, levels 1 & 2 correspond to connections secured using the automated approach, and levels 3 & 4 correspond to connections that were bootstrapped over a secure channel.)
We also need to distinguish between keys that are automatically looked up and keys that the user explicitly looks up. WKD works by identifying a key that belongs to an email address. If the user explicitly enters an email address (e.g., when encrypting), then we know that the user intended that email address. On the other hand, if we fetch a key via WKS in order to verify a signature (e.g., we use WKS to find the key for phishy@examp1e.org), then we should be less confident that the key is trustworthy, and that the message is authentic.
Details
Trust Levels
Definitions of wording:
userid: The userid on a key that matches the SMTP address of the sender. userid.validity is set as follows. By default, it is Marginal. If the key is fully trusted via the web of trust, then it is Full. If the key is explicitly marked as bad or unknown, then it is Never or Unknown, respectively.
tofu: Information we have about the communication history. This also partially reflects the gpgme_tofu_info_t data structure. This data structure describes a key + userid pair, which is also sometimes called a "binding". Note: if a key has multiple user ids, then there are multiple bindings. A TOFU conflict occurs if a user id is bound to multiple keys. A conflict is a strong sign that the user is either being attacked or there is a configuration error.
key source: Where the key was imported from, e.g. if it was automatically imported over https, or if it comes from public keyservers.
Level 0
Defined as:
(userid.validity <= Marginal AND tofu.signcount == 0 AND tofu.enccount == 0 AND key.source NOT wkd)
Explanation:
This trust level is assigned to keys that were never used to verify a signature, never used to encrypt a message, and not obtained from a source that gives some basic indication that the key is actually controlled by the alleged owner. That is, we have no evidence that the user can actually decrypt messages encrypted with this key. Consequently, this key should not be automatically used to encrypt messages to this recipient.
Usage:
- Encryption: Do not automatically encrypt to Level 0.
- Validation: N/A
Level 1
Defined as:
((userid.validity == Marginal AND tofu.validity < "Key with enough history for basic trust") AND key.source NOT wkd):
Explanation:
We have verified at least one message signed using this key, or encrypted at least one message to this key, but not more than a handful, and we didn't find the key via a convincing source (e.g., WKD).
This trust level means that there is some evidence that the recipient actually controls the key. Thus, it is okay to automatically encrypt to this recipient using this key. (Note: if there is a MitM, then the intended recipient will still be able to read the message, so this decision will not negatively impact usability. Nevertheless, we shouldn't give the impression that secrecy is in any way guaranteed.)
At this trust level, we don't have much evidence that the key belongs to the stated person. For instance, the key could have been imported when the user examined a phishing mail. As such, we should not indicate to the user that the contents of the messages are in any way valid.
Usage:
- Encryption: Automatically encrypt.
- Validation: Display the message as if it wasn't signed. (But, it is okay to show this in the details window.)
Level 2
Defined as:
(userid.validity == Marginal AND ((tofu.validity >= "Key with enough history for basic trust") OR key.source == wkd)):
Note: validity is computed based on the number of days on which the user verified a message / encrypted a message, not the total number of verified messages / encrypted messages.
Explanation:
We have verified a bunch of signatures at different times from this key and/or encrypted messages to this key a bunch of times; or, the key came from a semi-trusted source.
Given the evidence that the key is actually controlled by the stated person, it makes sense to both encrypt to this key and to show that the message is signed.
Note: a common phisher is unlikely to take the time to get to this level. Thus, non-targeted phishing mails will normally not show up as trusted.
Usage:
- Encryption: Automatically encrypt
- Validation: Show that the message was signed and is probably valid
Level 3
Defined as:
userid.validity == Full AND "no key with ownertrust ultimate signed this userid."
Explanation:
The user id is fully trusted, but only indirectly so, i.e., via the WoT, or because the TOFU policy was explicitly (by the user!) set to good.
Level 3 and level 4 are indications that the communication partner is authenticated. The distinction between levels 3 and 4 is to provide flexibility for organizational measures like: you should only send restricted documents to certain keys. This can be realized by making an organization key ultimately trusted and responsible for signing those keys.
This is also the level for S/MIME Mails.
Usage:
- Encryption: Automatically encrypt to Level 3
- Validation: Indicate that the message came from the sender.
Level 4
Defined as:
userid.validity == ultimate OR (userid.validity == full AND "any key with ownertrust ultimate signed this userid.")
Explanation:
The key is either ultimately trusted or signed by an ultimately trusted key.
See level 3 for an explanation of this level.
Usage:
- Encryption: Automatically encrypt to level 4
- Validation: Show verified messages as "the best". Stars and sprinkle level ;-)
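To make the five definitions above concrete, here is a sketch that computes the level from the quantities they reference. The type and field names are illustrative stand-ins, not the actual gpgme API:

from dataclasses import dataclass
from enum import IntEnum

class Validity(IntEnum):
    UNKNOWN = 0
    NEVER = 1
    MARGINAL = 2
    FULL = 3
    ULTIMATE = 4

@dataclass
class KeyInfo:
    validity: Validity     # userid.validity from the web of trust
    sign_count: int        # tofu.signcount
    enc_count: int         # tofu.enccount
    from_wkd: bool         # key.source == wkd
    basic_history: bool    # tofu.validity >= "enough history for basic trust"
    ultimate_signed: bool  # userid signed by a key with ownertrust ultimate

def trust_level(k: KeyInfo) -> int:
    if k.validity == Validity.ULTIMATE or (
            k.validity == Validity.FULL and k.ultimate_signed):
        return 4                                  # direct trust
    if k.validity == Validity.FULL:
        return 3                                  # WoT / explicit "good"
    if k.validity == Validity.MARGINAL and (k.basic_history or k.from_wkd):
        return 2                                  # basic trust
    if k.validity == Validity.MARGINAL and (k.sign_count or k.enc_count):
        return 1                                  # some evidence
    return 0                                      # validity unknown

Automatic encryption is then allowed for levels 1 and above; validity display starts at level 2.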
Rationale
Time delay for level 2
Using the number of days on which we saw signed messages, rather than the total number of messages that we saw, to determine a key's "believability" makes it more expensive for a simple attack to work. For instance, a phisher might first send a bunch of signed spam so that if the user opens them, the key will be marked as valid (level 2). But, it is much more expensive for the attacker to retain state (the key, how many messages were sent, etc.) than to just fire and forget.
A time delay also gives others the chance to intervene if they detect an attack, e.g., if it is against a whole organization or if it comes from one identifiable source like a particular botnet or open mail relay.
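A sketch of the counting rule, assuming we have the timestamps of all verified signatures for a binding; what matters is the number of distinct days, not the raw message count (the min_days threshold is illustrative, not GnuPG's actual parameter):

from datetime import datetime

def distinct_days(timestamps: list[datetime]) -> int:
    # Twenty messages verified in one burst still count as one day.
    return len({t.date() for t in timestamps})

def meets_basic_history(timestamps: list[datetime], min_days: int = 3) -> bool:
    return distinct_days(timestamps) >= min_days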
WKS as shortcut to level 2
Using WKS for key discovery automatically brings a key to level 2, because in this case we have a claim by some (weakly, because of TLS & NSLs) authenticated source that this key really belongs to the stated email address. Since this trust level doesn't claim to protect you from persistent adversaries, this is perfectly reasonable.
Note: It is thought that circumventing HTTPS is harder than circumventing SMTPS/IMAPS, because MUAs might ignore, or offer to ignore, certificate errors (which dirmngr does not). It is also easier to mount a downgrade attack on SMTPS.
If the key in the WKD changes and the old and new keys are not cross signed, then this may be a sign that the mail provider got an NSL, or there is a user error, for example. In this case, we may want to downgrade the keys to level 0. Preferably, the user should resolve the resulting conflict.
Presentation
There should be prominent information when reading a signed mail only if:
- There is additional information that the sender really is the intended communication partner. (Level >= 2)
In other words, we do not display that a message is unsigned, and we do not display that a message has a bad signature. These are treated equivalently. (See below.)
When we do show that a message is signed / encrypted, it should be displayed as a check or a seal ribbon or something. It should be prominent and next to the signed content. There should be a distinction between levels 2, 3 & 4. The distinction between levels 3 & 4 may be slight. But, the distinction between levels 2 & 3 should be noticeable since people are going to make important decisions based on this difference.
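One way a MUA might encode this policy, as a sketch (the badge names are invented for illustration):

def presentation(level: int, signed: bool) -> str:
    """Map a trust level to a display treatment. Levels 0 and 1 are
    shown exactly like unsigned mail; the step from level 2 to 3 must
    be clearly visible, the one from 3 to 4 may be subtle."""
    if not signed or level < 2:
        return "plain"       # no indicator at all
    if level == 2:
        return "check"       # signed, probably valid
    if level == 3:
        return "seal"        # sender authenticated
    return "seal+stars"      # level 4: "the best"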
Don't treat signed mails worse than unsigned mails
A MUA should not treat signed mails worse than unsigned mails. Thus, if a sender is not verified, the mail should be displayed similar to an unsigned mail, because in both cases we have no information that the sender is actually the alleged communication partner.
You may want to show a tofu conflict more prominently as user interaction is required at this point.
Especially: ignore GPGME's Red suggestion. An attacker would have removed the signature instead of invalidating it. A message with an invalid signature should be treated like an unsigned mail, with additional info shown only in the details, for diagnostic purposes. The same holds when Red is set because a key is expired or so. It's not more negative than an unsigned mail, so only if your MUA shows unsigned mails as "Red" may you treat signed mails this way, too ;-)
Conflict handling
A TOFU conflict occurs when there are multiple keys with the same mailbox and it is not possible to automatically determine which ones are good. For instance, if two keys are cross-signed, then they are not considered to conflict; this is just a case of the user rotating her primary key.
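A sketch of the conflict test, with a hypothetical Key type carrying its mailbox and the fingerprints of keys that have certified it ("cross-signed" is taken here to mean each key certifies the other):

from dataclasses import dataclass, field

@dataclass
class Key:
    fingerprint: str
    mailbox: str
    certified_by: set[str] = field(default_factory=set)

def in_conflict(k1: Key, k2: Key) -> bool:
    # A cross-signature marks a legitimate key rotation, not a conflict.
    if k1.mailbox != k2.mailbox:
        return False
    crossed = (k2.fingerprint in k1.certified_by
               and k1.fingerprint in k2.certified_by)
    return not crossed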
Conflicts occur in two situations:
Attacks:
- A MitM controlled the initial key exchange. If a good message gets through, there will be a conflict. The "new" key is the correct key.
- An attacker attempts a MitM attack, but the user already has the right key. The "old" key is the correct key.
- An attacker sends a forged message. The "old" key is the correct key, or both are bad (the first key was also due to a forgery).
- There is a Troll trying to hurt usability so much that automated encryption is no longer used (i.e., many forgeries resulting in gratuitous conflicts)
Misuse:
- A user generated two keys e.g. on two devices and did not cross sign them and uses both.
- A user lost control of his old key, and did not have a revocation certificate.
Both misuse cases should be handled on the sender's side, because he controls (or has lost control of) the involved keys and can take steps to find out what went wrong.
The second misuse case (inaccessible key) is likely more common than the first misuse case (multiple, valid keys). The first misuse case already leads to problems: communication partners need to choose which key to use.
We think it might be so common that we also need automated handling for it (see the self-healing section below).
Key loss can also be promoted by software that provides a bad user experience or does not follow common practices.
=== Resolving conflicts on the sender's side ===
When an application makes a signature, it should check that there are no other keys with the same user ID. If there are, and the private key material is available, the application should prompt the user to make a cross signature to correct this misuse.
Similarly, if a signature is verified by an MUA and there are secret keys available with the same user ID that are not cross-signed, the user should be prompted to make a cross signature.
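A minimal sketch of both checks; keys_for(), has_secret(), cross_signed() and prompt_cross_sign() are hypothetical helpers around the OpenPGP backend:

{{{
# Sender-side misuse detection: run before signing and after verifying.
def check_own_keys(user_id):
    keys = keys_for(user_id)                  # all keys with this user ID
    own  = [k for k in keys if has_secret(k)]
    for mine in own:
        for other in keys:
            if mine is not other and not cross_signed(mine, other):
                # Two keys for the same mailbox without a cross
                # signature: ask the user to create one now.
                prompt_cross_sign(mine, other)
}}}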
=== WKD Checks ===
A TOFU-aware GUI should periodically check that the key stored in the WKD is the user's actual key. If done via Tor, this helps detect whether the mail provider has replaced the user's key (due, perhaps, to an NSL). (Note: even if Tor is not used, this check still makes it harder for a rogue mail provider to hide.) This check could be triggered by use of the private key (e.g., when signing a message).
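A sketch of such a check, shelling out to gpg (>= 2.1.16) and querying only the WKD; ideally the lookup is routed through Tor (e.g., via torsocks):

{{{
import subprocess

def wkd_fingerprints(address):
    """Fetch the key for `address` from the WKD only and return the
    fingerprints found (the `fpr` records of the colon listing)."""
    out = subprocess.run(
        ["gpg", "--batch", "--with-colons",
         "--auto-key-locate", "clear,nodefault,wkd",
         "--locate-keys", address],
        capture_output=True, text=True).stdout
    return {line.split(":")[9] for line in out.splitlines()
            if line.startswith("fpr:")}

def warn_user(address, fingerprints):
    # Hypothetical UI hook; a real MUA would show a prominent warning.
    print("WARNING: WKD for %s serves unexpected key(s): %s"
          % (address, ", ".join(fingerprints)))

def wkd_self_check(address, own_fingerprint):
    fprs = wkd_fingerprints(address)
    if fprs and own_fingerprint not in fprs:
        # The provider serves a key that is not ours: possibly an NSL,
        # possibly a misconfiguration. Warn prominently either way.
        warn_user(address, fprs)
}}}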
=== Automated conflict resolution by recipients ===
Let us assume that there are two keys, K1 and K2, with the email address alice@example.org, and that they are in conflict. K1 is a key with an established communication history; K2 is a key without history. Let's assume that we discover K2 either via a signed message or via example.org's WKD.
In this case, we cannot with certainty resolve the conflict. Consider:
- If we discover K2 because a good message or WKD access finally got through a MitM attack, then K2 is the correct key.
- If we discover K2 due to a forgery, then K1 is the correct key.
In fact, the "correct key" could also be bad! For instance, if the user never interacted with Alice and received two different forgeries, then neither key is the correct one. But when both keys are bad, choosing one to encrypt to rather than none does little harm: we are then in the case where an attacker controls all of our communication and does not care about detection, and this attack is out of scope for automated encryption.
In case K1 is provided by the WKD, we can accept K1 as still valid, because it is still publicly available and we can assume that a conflict would have been detectable by the sender.
If no WKD is available, we still want to use K1 for encryption, but we no longer show it as valid.
We don't want to auto-resolve to K2, because then we would have two apparently good keys, and something is clearly wrong. If a user has lost control of her old key and is unable to revoke it, we want the sender to run into problems, so that the sender notifies us that the new key should be used and we can explicitly mark the old key as bad.
If K2 is available in the Web Key Directory, we also don't want to auto-resolve to it, because that would make an attack by the mail service provider too easy. Imagine the MSP wants (or is forced) to intercept some specific communication: it could ensure that, for a specific communication partner, a MitM key (K2) is provided. If we automatically fetched that key through the WKD and automatically accepted K2 as valid, the attacker would have reached its goal. So we stay with K1 for encryption, but no longer show K1 as valid, to make it detectable that there is a problem.
If a WKD is not available and we assume that at least one of the keys is probably good, then in all of the situations outlined above except one (a good message getting through a MitM attack), the old key is the correct key.
If we consider the good message from Bob getting through a MitM attack, we find that there are two situations: the good message is signed, or the good message is signed and encrypted.
If the message is signed and encrypted, then Alice will be unable to decrypt it (because the MitM didn't re-encrypt it), and since we never see the signature, we never see a conflict. The failure to decrypt will likely cause Alice to talk to Bob, but it is unlikely that they will correctly diagnose the problem; in particular, once the MitM is back in control, everything will continue to work, so they will likely conclude that there was a misconfiguration or a bug.
If the message is only signed, then Alice fetches the key used to sign the message (since she hasn't seen it before) and we detect a conflict.
We conclude that the above scenario (MitM + good signed message getting through) is sufficiently rare that we automatically decide for the old key if there is a conflict. But if the old key is no longer available through the WKD, we show the verification status as "not trusted" (i.e., level 1), which might spur the user to figure out why mails from that particular sender are no longer marked as verified. So even the rare case of a good signed (but not encrypted) message getting through a MitM remains detectable, because the messages will no longer be shown as "green" / valid.
Pseudocode:
{{{
if (K1.source == WKD) {
    K1.validity = Level 2   (Tofu.validity == Basic Trust)
    K1.policy   = auto
    K2.validity = Level 0   (Tofu.validity == bad)
    K2.policy   = bad
    encrypt_to  = K1
} else {
    K1.validity = Level 1   (Tofu.validity == conflict)
    K1.policy   = ask
    K2.validity = Level 1   (Tofu.validity == conflict)
    K2.policy   = ask
    encrypt_to  = K1
}
}}}
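The same logic as a runnable Python sketch; Key is a plain record type here, and the policies correspond to GnuPG's --tofu-policy values auto, ask and bad:

{{{
from dataclasses import dataclass

@dataclass
class Key:
    source: str = ""       # e.g. "WKD", "keyserver", "message"
    validity: int = 0      # the levels used in this document
    policy: str = "ask"    # TOFU policy: "auto", "ask" or "bad"

def resolve_conflict(k1: Key, k2: Key) -> Key:
    """k1: the key with history; k2: the newly seen, conflicting key.
    Returns the key to encrypt to."""
    if k1.source == "WKD":
        k1.validity, k1.policy = 2, "auto"   # basic trust
        k2.validity, k2.policy = 0, "bad"
    else:
        k1.validity, k1.policy = 1, "ask"    # conflict: not shown as valid
        k2.validity, k2.policy = 1, "ask"
    return k1                                # keep encrypting to the old key
}}}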
=== Does this model protect against our threats? ===
- Man in the middle with full control of communication history: if Alice and Bob always encrypt & sign, but Mallory has full control of their communication history and regularly re-encrypts / re-signs the messages, this attack can be detected:
a) When a mail gets through once, e.g., if Mallory controls Alice's router and Alice uses an Internet cafe once. In that case, Bob would not be able to decrypt the message. That communication failure could lead to further investigation. A MUA may assist with this by showing, when decryption fails, to which keys a mail was encrypted.
b) If Alice publishes her key in a Web Key Directory, Bob's MUA can detect that the key used for Alice does not match the one he has always used, and can signal this by no longer showing Alice's signatures as "Good", indicating that the communication is not protected.
The attack is more problematic if Alice and Bob don't encrypt but only sign. In case a) this means that the mails that get through from the real Alice are not shown as valid. But once a mail from the real Alice gets through, the messages from both Alice and Mallory are no longer shown as valid, indicating that the communication is not secure.
- Man-in-the-middle attack with established communication:
This attack is prevented by keeping the established key in use, so messages won't get encrypted to Mallory. By no longer showing the validity indication, the attack is also made detectable.
- Impostor attack: still prevented, because the new key is never shown as valid / verified unless the user intervenes.
- The troll attack is mitigated because conflicts are no longer shown prominently. Using a Web Key Directory also helps, because it prevents many keys from being marked as bad through spam.
== Self-healing when a user lost her key ==
If a user loses access to her private key by mistake, she must inform every communication partner, forcing all of them, even partners she rarely contacts, to interact with the crypto system. To avoid that, we think the system must be self-healing.
For this scenario, we assume that there are two kinds of communication partners: regular contacts (at least weekly) and rare contacts. E.g., a university professor might have rare contact with her students but regular contact with her support staff.
In this case, we may need to avoid conflicts entirely for the rare contacts, because a university-wide mail saying "I have lost my encryption stuff, please click here and there to use my new one" might lower security:
- Reacting to such mails becomes common, but it is very bad practice.
- Usability suffers, and users might stop using crypto mail.
One proposal is to handle this through Web Key Directories: if there is a new, conflicting key in the Web Key Directory, we download it automatically but do not encrypt to it, because we don't fully trust the mail provider. In particular, the attack where the mail service is asked to help attack a user would become too cheap and too hidden if we did.
To avoid a conflict for the rare communication partners, we could accept this key as a level 2 key once it has been in the WKD for a long time, e.g., one month. The rationale is that a sustained attack in which the user's mail provider serves a compromised key is either detectable (through communication failures, our own key fetching, etc.) or so expensive that other attacks on the user / endpoint, like some of the examples Schneier gives in his 1999 Attack Trees paper for mail and OpenPGP, would be cheaper.
So our conclusion is that the advantage of avoiding the problems mentioned above outweighs the drawbacks of switching to a WKD key after it has been published for some timespan, leading to overall higher security.
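A minimal sketch of this rule, reusing the Key record from the sketch above; first_seen_in_wkd would be recorded by the periodic WKD lookups, and the 30-day grace period is the example timespan from the text:

{{{
from datetime import datetime, timedelta

WKD_GRACE_PERIOD = timedelta(days=30)   # example timespan: one month

def self_heal(new_key, first_seen_in_wkd, still_in_wkd, now=None):
    now = now or datetime.utcnow()
    if still_in_wkd and now - first_seen_in_wkd >= WKD_GRACE_PERIOD:
        # A sustained MSP attack over the whole period is considered
        # detectable or too expensive, so promote the key.
        new_key.validity, new_key.policy = 2, "auto"
        return True
    return False    # keep the conflict state; don't encrypt to new_key
}}}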
== Key discovery and opportunistic encryption (mail only) ==
A MUA should offer automated key discovery and opportunistic encryption. The WKD / WKS helps with automated key discovery and should be used (by using --locate-key)
To determine whether a mail can be sent automatically encrypted, check:
- Is there a key of at least level 1 for each recipient of the mail? (See the sketch below.)
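As a sketch; key_level() is a hypothetical lookup into the local TOFU data that returns None when no key is known:

{{{
def can_auto_encrypt(recipients):
    """Encrypt automatically only if every recipient has a usable key."""
    levels = [key_level(r) for r in recipients]
    return all(lvl is not None and lvl >= 1 for lvl in levels)
}}}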
=== Auto Key Retrieve ===
In addition to the WKD lookup, if you receive a message signed by an unknown key, the MUA should automatically retrieve that key from a public keyserver or a Web Key Directory (auto-key-retrieve in GnuPG). The key can then be used for opportunistic encryption: because you have seen a signature, it is very likely that the recipient can decrypt.
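A sketch of a verification call with retrieval enabled; msg.asc is a hypothetical signed message, and the same effect is achieved by putting auto-key-retrieve into gpg.conf:

{{{
import subprocess

def verify_with_retrieve(path="msg.asc"):
    # --auto-key-retrieve makes gpg fetch the unknown signing key
    # (from a keyserver or the WKD) as a side effect of verification.
    return subprocess.run(
        ["gpg", "--auto-key-retrieve", "--verify", path],
        capture_output=True, text=True)
}}}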