Users should not sign documents

Just as not signing paper documents can prevent lives from being destroyed, when the signer hasn’t thought through the consequences of the instrument before signing it.

But one of the things most desperately needed online is accountability.

OK, so let’s do that. We build the Web for the Elites where everything is signed and there is accountability everywhere. What problem does it solve?

Obviously, it won’t stop misinformation, as anyone can still pretend to be anybody.

It won’t stop monopolies from invading privacy in order to collect data; in fact it will make things worse, since the data can now be traded without losing its accuracy.

It won’t stop massive global surveillance either; it will make it worse, for the same reasons. is working to change that.

I’m sure they will change the world™.

re: Obviously, it won’t stop misinformation, as anyone can still pretend to be anybody.

Yes. That’s probably the root of all problems on the Internet. But PKI and digital signatures, associating a key pair to a natural person in a reliable way, can precisely “stop misinformation” because people can’t pretend to be anyone… any more. You’re getting it. You just need to flip it a bit.

Great, but suppose I make a key pair in the name of “Dr Divoplade, professor at Swen Ekaf University”, publish something false and sign with my name.

Now, everywhere on the internet, there will be my statement with a green lock next to it with a message saying “Verified by Dr Divoplade, professor at the Swen Ekaf University”, and it will be OK.

So this kind of reliable way of stopping misinformation only works for people who are able to check the fingerprint of a public key, as opposed to now, where they only need to check a website URI.
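Checking a fingerprint usually means comparing a short digest of the public key out of band. A minimal sketch of the idea, assuming SHA-256 as the digest (the key bytes and formatting here are illustrative, not any particular standard):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Return a human-comparable fingerprint of a public key:
    a SHA-256 digest rendered as colon-separated hex pairs."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# A hypothetical raw public key; real keys would come from a certificate
# or a profile document.
key = bytes(range(32))
print(fingerprint(key))
```

The point of the thread stands: comparing these 32 hex pairs carefully is a very different ask from glancing at a URL bar.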

The world is not as black and white as depicted in this thread, there are many shades of grey. The standards we adopt online should reflect how things are already done in the real world. Sometimes you want accountability, sometimes not.

You could be either anonymous, pseudonymous or fully verified, depending on the situation, and use many identities for that.

If I go to a festival, a ticket is enough to gain me entrance, but at a hotel I have to account for myself by showing my passport. These are identity claims. If I buy a holiday trip, I want the agency to be accountable for delivering it, or to allow me to redeem my ticket. If I am an investigative journalist, I want to sign my research article that covers a years-long investigation, and others want to see it signed, so they know it is credible. Etcetera.

I don’t know them, but they mention standards the Rebooting the Web of Trust group is working on, like DIDs and Verifiable Claims. I don’t know much about them yet, but I find the approach they are taking to tackle the enormous complexity underlying this standardization effort very interesting.

You are talking at the app level. This is fine, you can use any app you want.

The discussion is about the platform. My point is that the platform should not require people to sign data. I understand that it is incompatible with the RWOT (not the verifiable identity claims, though), but it is better that way.

I was not talking about the app level, but more in general, and not really Solid-specific. I am not really sure what their approach is. From what I’ve seen so far, my gut feeling gives the RWOT standards a higher chance of widespread adoption, but it’s gut only, as I’m no expert in this and have a lot to learn :slight_smile:

Judging from the success of PGP as a precursor of the RWOT future, and that of ActivityPub as a precursor of Solid (they are very similar), I would draw a few conclusions.

PGP has never seen wide adoption. There is a correlation between having few users and poor usability, and I think the former drove the latter: if there had been more interest in PGP, email providers would have adopted measures to ease its use. As of now, only ProtonMail does significant work on that front.

ActivityPub has a vastly more diversified public, with minorities and groups that may want to protect their privacy (sex workers and Indian protesters, for instance, have hit the news). I have to say, this is very different from the traditional demographics we find using the web of trust, which I take as a hint that the future of the web lies in this direction.

There is a widespread mentality in web-of-trust approaches, “Don’t speak up if you can’t take the responsibility”, that simply cannot work for anyone but a few, and it is definitely not implemented in the successful platforms (be they silos or AP). We need to recognize that this mentality has failed, and prepare the technology for the time when protection will come from the law. This is how I understand the interaction between Solid as a platform, the Contract for the Web, and current legislative advances such as the GDPR.

Agreed. There was once this very complex and unapproachable thing called a computer. A company called Apple made it user friendly. Most everyone has a computer now, even if they only own the computer in their pocket. is making X.509 identity user friendly. references RWOT because it’s an example of efforts in this area. issues X.509 identity certificates. They aren’t based on Blockchain. But, they recognize that Blockchain, Solid, DID and other efforts can all contribute to a reliable identity on the Internet.

Another astute observation. Some protections really need to originate from the law and from authorities. That doesn’t mean that all protections must originate there. makes use of the system (a.k.a. notary public and others) to verify and evaluate the things that contribute to a reliable identity.

In addition, complete anonymity is dangerous. has something called Accountable Anonymity. In other words, your privacy when using certain X.509 identity certificates is assured, unless you break the law and a valid court order requires that the lawbreaker be identified. That’s a fine line to walk. Anonymity and Accountability aren’t always mutually exclusive.

I’m not specifically talking about Solid. I’m talking in general terms.

I agree with your notions on AP and PGP. It should be clear that the next generation of “Web of Trust” standards, regardless of which ones are adopted, needs to be implemented much better, so they are understood and easy to use for the masses. I also agree on the need for much tighter regulation, but this will be a very long road, while tech speeds along (plus laws are one thing, effective enforcement an entirely different beast).

I have come across this mentality, and it is very bad, but I have the feeling that this is not reflected in the RWOT standards. On the contrary, it goes very far in supporting all the shades of grey. Like in Verifiable Credentials (use cases) the data model allows incredibly fine-grained creation of credentials, tailored to any situation that corresponds to how trust is represented in the real world.

The spec recognizes there are many privacy considerations and states:

Spectrum of Privacy

The Verifiable Credentials Data Model strives to support the full privacy spectrum and does not take philosophical positions on the correct level of anonymity for any specific transaction. The following sections provide guidance for implementers who want to avoid specific scenarios that are hostile to privacy.

It is up to the implementers to do all of this the right way, and for the consumer to choose the right implementations (implying they need to be well-informed on their choices).
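For concreteness, a credential in that data model is roughly a JSON-LD document shaped like the sketch below. The issuer URL, dates, proof value, and the `AgeOver21Credential` type are made-up placeholders, not values from the spec; the field names follow the W3C Verifiable Credentials data model:

```python
import json

# A minimal credential in the shape of the W3C Verifiable Credentials
# data model; issuer, dates and subject values are invented.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AgeOver21Credential"],
    "issuer": "https://issuer.example/keys/1",
    "issuanceDate": "2021-01-01T00:00:00Z",
    "credentialSubject": {
        # Note: no name or birth date, only the derived claim.
        "ageOver21": True
    },
    # The proof block is where an implementation attaches the signature.
    "proof": {"type": "Ed25519Signature2018", "jws": "..."},
}

print(json.dumps(credential, indent=2))
```

This is where the “shades of grey” live: the subject section can carry as little or as much as the situation demands.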

My point about PGP was the opposite. It was not desirable for the masses so it never got implemented correctly. In this regard, I don’t think a new iteration will do much to that problem.

Everything lies in implementation details. If the verifiable credential is signed by the user, that is bad. If it is signed by an application under the user’s control, whose key can and will be deleted (not revoked) without consequences for the user, then it’s fine.

If it leads to the user losing contacts or losing access to something, then we have a problem.

Regarding the personal information, I don’t really understand why it matters so much, because I could set up my identity provider to sign absolutely anything (provided it has my consent). I fear that the issuer (identity provider) is not controlled by the subject in this standard (see, for instance, the disputes section). If Solid wants interoperability with it, we need to make sure that the issuer can be chosen by the user, and that it will sign anything the user wants with a disposable key, including incorrect claims.

By the way, I don’t understand how “age over 21” is non-correlatable to an identity.

Yes, I agree with that. Free choice will not always be possible (e.g. government issuing passport), but at least there could be decentralized mirrors (e.g. in case of conflict, the EU).

Just an example, I think, where that’s the only thing you know. More of these combined and you are microtargeted, of course.

This would typically be implemented as an application. I am more concerned with Solid as a platform: do you need a user signature to keep your contacts or to access your friends’ data? As opposed to different use cases for different applications, there are really only two cases: either the standard for the platform requires that user data be signed regardless of the application, or it does not. Solid as a platform should be in the latter case.

Where is this coming from? What’s the point of signing anything if you then throw away the private key? If you do, you can digitally sign anything while pretending to be anyone (exactly what creates most problems on the Internet today), and nothing can be verified afterwards. It negates any value created by PKI (asymmetric cryptography).

You keep both keys and work to reliably associate the key pair with you. Then, if you and someone else sign a contract, the parties who signed it can be identified and later held to the contract in a court of law. Without that, there is no value to either signature or to the contract.

Turning from generalities to Solid specifically: The user SHOULD digitally sign stuff that’s done in the name of that Solid identity. But, it’s good practice to use separate keys for Authentication, Digital Signatures and Key Management. WebID+TLS is Authentication. That same key pair shouldn’t also be used for Digital Signatures. You should list your Signing public key in your profile just as you do your Authentication public key.
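As a sketch of that key separation, a profile could list a distinct key per purpose. The profile shape and field names below are invented for illustration (real WebID profiles are RDF documents, not JSON), and key generation is only simulated by random fingerprints:

```python
import hashlib
import json
import secrets

def new_key_fingerprint() -> str:
    # Stand-in for real key-pair generation: we model only the *public*
    # half by a short fingerprint, since the point here is key
    # separation per purpose, not the cryptography itself.
    return hashlib.sha256(secrets.token_bytes(32)).hexdigest()[:16]

profile = {
    "webid": "https://alice.example/profile/card#me",
    # Distinct key pairs for distinct purposes, as recommended:
    "authenticationKey": new_key_fingerprint(),  # WebID+TLS login
    "signingKey": new_key_fingerprint(),         # document signatures
}

print(json.dumps(profile, indent=2))
```

Compromise or rotation of one key then never forces re-issuing the other.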

And, yes. Things done by that Solid pod should be digitally signed by the Signing key pair associated with the Pod. And don’t throw away the keys.

You should ask @divoplade, as he wrote that, not me.

You throw away the key once the other party has verified the signature and has thus linked the document to your WebID. Once the other party has verified that the document comes from you, the job is done; you can delete the key.

You keep them for a reasonable time (a few minutes maybe?), i.e. until the other party has verified the signature.

Exactly, so the only trust others put in your key comes from the fact that it is listed in your profile. The public key in itself does not matter, so if you change it and make new signatures with the new key, verification of the new signatures will work as expected.
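To make the disposable-key flow concrete, here is a toy sketch using insecure textbook RSA with tiny fixed primes as a stand-in for a real signature scheme (a deliberate simplification; a real implementation would use something like Ed25519): the other party verifies once, and the private exponent can then simply be deleted.

```python
import hashlib

# Toy RSA parameters -- insecure, purely to illustrate the flow.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign(message: bytes) -> int:
    # Hash the message and "decrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public (e, n) can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

doc = b"claim published under my WebID"
sig = sign(doc)
assert verify(doc, sig)  # the other party verifies once...

# ...and once they have linked the document to the WebID, the private
# key can simply be deleted: verification still works without it.
del d
assert verify(doc, sig)
```

Whether this deletion-instead-of-revocation model actually preserves accountability is exactly what the rest of the thread disputes.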

Sorry to intrude but I’m confused about how you could change public keys so often.

You might have sent messages that sit in inboxes for days before the receiver needs to check your public key, but if you change it so often the messages you sent them will no longer be verifiable.

If your public key is in the profile on a pod whose WebID is a pseudonym, then only you and the pod provider know your true identity anyway, and the pod provider should need a court order to divulge it. So everybody can know whether any message came from that WebID, but the identity is still protected if it’s a pseudonym.

Or maybe I’m not understanding this correctly since I’m no expert.