Thinking about permissions for social

(this is a crosspost from our blog, but I think it is useful information and musings for this community, as others might run into the same issues)

During the development of our second prototype “Shepherd”, we realised that permissions for truly distributed social media are a thorny thing. Within the decentralised design of Solid, we have to define how spaces are controlled in interactions between users. We also have to be mindful of preserving the context of the interaction, while also respecting the privacy of individuals who might belong to different networks, technical or otherwise.

Three models of organising content

To explain the whole issue, we need to lay some groundwork on what design decisions have been made, here at Darcy, and for Solid as a whole. Those design decisions define how users can control the spaces of interaction. Let me start with how posts and comments are modelled. All in all, there are three major models of how to structure content and reactions to the content in social media:

1. The Forum model

This assumes that someone sets up a space for conversations. That person (or group) takes ownership of the space and can usually control the terms on which people interact in that space: Who is allowed to post, what gets deleted, or who is allowed in the space. Inside that space, people can start a conversation around a topic and others can reply to it.

We see this in web forums that run, for example, the venerable phpBB or Simple Machines Forum, or, more recently, Discourse.

When it comes to permissions, forums are usually managed by either an individual or a group of “admins” who then set out the rules of who can see, write, react to, or even delete things. The individual user then rarely has to think about permissions at all.

Those administrators can remove whole conversations or just individual replies, although the latter can leave the remaining content stripped of context.

A byproduct of this model is that most forums form somewhat close-knit groups. They foster a group identity, and often everyone knows who else is around.

2. The Posts-and-Comments model

In this model, everyone can post content and then others can react to it – be it through emoticons, likes, or comments. We see this model very often; it is used by Facebook, YouTube, Diaspora, and countless other systems.

The upside of this model is that whoever initially publishes content can usually set fine-grained controls over who gets to see the content and who can react to it. These permissions then seamlessly apply to the reactions and comments attached to this particular content. This means that those who comment on something usually have no control over who gets to see their comments or reactions – they are visible to the same people who can see the original content.

When the original post gets removed, this usually removes all of the comments and reactions to it too. Comments therefore only exist in the context of the initial post.

As far as I know, most of these systems do not tell the commenters who exactly is able to see the original post and consequently the comments. Most often, everything is public. A lot of those systems also have a “visible to friends-of-friends” setting, so you can reach out to people you do not actively know, but still be sort of private.

3. The Everything-is-a-Post model

Twitter and its open-source derivative Mastodon feature this model. Instead of a strict hierarchical structure between original content and the reactions to it, comments are modelled as original content in their own right. If you reply to a tweet or toot, that reply is its own new and original content.

This has interesting effects: Replies can be the starting point of their own conversations, completely decoupled from the original starting point. Even if that original post gets removed, the rest stays untouched and can stand on its own. But it also gives a somewhat disjointed user experience, as people often find it hard to correctly piece together the whole conversation. And when people who reply to an original post limit who can see their content, others might only be able to see a portion of the conversation.

Another side effect is the lack of noise control. “Sliding into one’s mentions” can range from being just slightly impolite to outright harassing. The larger the network gets, the harder it becomes to control who can partake in a conversation without limiting oneself.

Why we chose the second model

Darcy makes use of the Posts-and-Comments model of organising these things. We find this to be the best compromise between discoverability, self-moderation (as the original poster can remove comments attached to their post), and data sovereignty (the comments are still stored on the commenter’s Solid pod, even when unlinked from the original post).

Distributed content and permissions

In Solid, the permissions to a piece of content are set alongside the content, on the Solid pod where it is stored. As a quick reminder: if Alice posts a picture and Bob comments on it, we have the following data structure:

  • Alice’s Solid pod
    • Picture
      • Link to Bob’s comment
  • Bob’s Solid pod
    • Comment on Alice’s picture
      • Link to Alice’s picture

Now, Alice and Bob can freely decide on who gets to see what on their own pods. Alice might decide to show their picture to Bob, Carol, David, and Eve, while Bob has set his comments to be visible to Alice, David, and Frank.


Frank can now read Bob’s comment, but is not able to see the original post (Alice would need to have granted visibility to Frank too).

Carol and Eve can see Alice’s picture, however they cannot see Bob’s comment underneath.
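The asymmetry can be made concrete with a small sketch. All names and structures here are purely illustrative, not Darcy’s actual data model:

```python
# Minimal model of per-pod access control lists (illustrative only).
# Each piece of content lives on its owner's pod with its own ACL.

post_acl = {"Bob", "Carol", "David", "Eve"}   # set by Alice on her pod
comment_acl = {"Alice", "David", "Frank"}     # set by Bob on his pod

def visible_parts(viewer):
    """Return which parts of the conversation a viewer can piece together."""
    parts = []
    if viewer in post_acl or viewer == "Alice":
        parts.append("picture")
    if viewer in comment_acl or viewer == "Bob":
        parts.append("comment")
    return parts

# Frank sees the comment but not the picture it refers to:
# visible_parts("Frank") -> ["comment"]
# Carol sees the picture but not Bob's comment underneath:
# visible_parts("Carol") -> ["picture"]
# Only David (besides the two authors) sees the whole conversation:
# visible_parts("David") -> ["picture", "comment"]
```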

In the interest of having a non-disjointed conversation, we need to solve this. And when thinking of solutions, we have to imagine that an attacker has a good grasp of technical knowledge, while at the same time assuming that the potential victim does not. That way we cover the broadest range of scenarios.

Possible solutions

The easiest solution is to simply make everything public, but that clearly does not work long term. People rightfully want the option to show their content only to a specific audience.

Solid supports two different approaches, both of which have up- and downsides:

Option 1: When commenting on a post, the commenter copies the permission list from the post and applies it to the comment. This has the advantage that the same set of people who can see the post can also see all the comments. The downside is that it exposes the whole (or part of the) contact list of the original poster to all commenters. This is obviously fine in a close-knit group, but it may not be fine in larger groups. If Alice makes a post visible to all of her friends, family, and coworkers, do they want Bob from accounting to know the names of everyone in that group, just by looking at the metadata of the post?
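A sketch of how this ACL copy leaks the contact list, under the same illustrative model as above (names and structures are hypothetical):

```python
# Option 1 (sketch): the comment inherits a copy of the post's ACL.

post_acl = {"Bob", "Carol", "David", "Eve"}  # Alice's friends/family/coworkers

def create_comment(text, inherited_acl):
    # The copied ACL travels with the comment to the commenter's pod,
    # where every reader of the comment can inspect it as metadata.
    return {"text": text, "acl": set(inherited_acl)}

comment = create_comment("Nice picture!", post_acl)

# Bob (from accounting) now holds Alice's audience list in his pod's metadata:
exposed_names = sorted(comment["acl"] - {"Bob"})
# exposed_names -> ["Carol", "David", "Eve"]
```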

Another downside is that if Alice later on alters the permissions on the original post, one would need to alter these permissions on all the comments too, otherwise newcomers to the post could not read all the comments. But there currently is no mechanism to apply those changes to the Solid pods of all commenters.

(On the other hand, this second downside could be read as a feature: After all, Bob posted his comment under certain conditions. If Alice changes the conditions later on, then it might make sense that these conditions don’t apply retroactively, because Bob didn’t consent to that. Of course, that implies that Bob took notice of who could read the original post in the first place. And, for example on a site like Facebook, Bob might not have such fine-grained information in the first place.)

Option 2: All comments are always set to public, while the initial post remains subject to the permissions set by the initial poster. This isn’t as bad as it initially sounds. Due to the data structure of Darcy, the only way to access a comment is by either being the owner of that comment, or by knowing the exact path and filename of the comment. This path and filename are near-randomly generated from the unique IDs of the original post and the comment. Doing things this way somewhat contextualises discoverability.

So when accessing a post, it will come with a list of URLs to the comments that are linked to that specific post. That means that only those who have permission to read the post get to know the URLs of the comments.
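One way to picture those “near-randomly generated” paths – the actual scheme in Darcy may well differ, so treat the hashing step and the `.ttl` naming as assumptions:

```python
import hashlib
import uuid

# Option 2 (sketch): comments are world-readable, but their paths are
# derived from unguessable unique IDs, so only people who can read the
# post (and thus its list of comment links) learn where the comments live.

post_id = uuid.uuid4()      # unique ID of Alice's post
comment_id = uuid.uuid4()   # unique ID of Bob's comment

def comment_path(post_id, comment_id):
    digest = hashlib.sha256(f"{post_id}/{comment_id}".encode()).hexdigest()
    return f"/comments/{digest}.ttl"

url = "https://bob.pod.example" + comment_path(post_id, comment_id)
# Without the link from the post, guessing this URL means guessing a
# 256-bit hash -- security through unguessability, not through an ACL.
```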

The downside is that if someone leaks the URL to a comment, everyone who receives that leak can access the comment. Anyone who can read the post can get to those comment URLs. One might argue that people could do that with a screenshot or copy-pasted text too, but those can always be claimed to be faked. In contrast, a comment loaded from my own pod is clearly created by myself. This makes such a leak, in theory, much more sensitive.

Ideal Option 3: The ideal solution from our point of view would involve cryptography and an extension of the current Solid protocol. There would be no access control list of WebIDs attached to a comment. Instead, the Solid pod where the comment is stored relies on a cryptographic certificate proving that the WebID trying to access the comment currently has permission to access the original post. Only then is access to the comment granted.

In the above example, Eve reads the original post on Alice’s pod. She sees a link to Bob’s comment and follows it. In order to load the comment, Eve’s client first loads a timestamped certificate from Alice’s pod that carries a signature from Alice and asserts that Eve’s WebID is allowed to see the post. Bob’s Solid pod can then verify the signature, the timestamp, and Eve’s WebID, granting Eve access to the comment.

This would keep both the comment as well as the whole of Alice’s contact list confidential – Bob will only know about those people who actively try to access his comments.
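To make the certificate flow concrete, here is a deliberately simplified sketch. It uses an HMAC with a shared key purely as a stand-in for Alice’s signature; a real protocol would use a public-key signature so that Bob’s pod only needs Alice’s public key, not a shared secret. The key, the validity window, and the payload format are all assumptions:

```python
import hashlib
import hmac
import time

ALICE_KEY = b"alice-signing-key"  # hypothetical; stand-in for a real keypair
MAX_AGE = 300                     # seconds a certificate stays valid (assumed)

def issue_certificate(webid, post_url, now=None):
    """Alice's pod: assert that `webid` may currently read `post_url`."""
    ts = int(now if now is not None else time.time())
    payload = f"{webid}|{post_url}|{ts}"
    sig = hmac.new(ALICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"webid": webid, "post": post_url, "ts": ts, "sig": sig}

def verify_certificate(cert, webid, now=None):
    """Bob's pod: check signature, timestamp, and that the WebID matches."""
    now = now if now is not None else time.time()
    payload = f"{cert['webid']}|{cert['post']}|{cert['ts']}"
    expected = hmac.new(ALICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cert["sig"], expected)
            and cert["webid"] == webid
            and now - cert["ts"] <= MAX_AGE)

cert = issue_certificate("https://eve.example/#me", "https://alice.example/post1", now=1000)
# Valid shortly after issuing, rejected once the timestamp is stale:
assert verify_certificate(cert, "https://eve.example/#me", now=1100)
assert not verify_certificate(cert, "https://eve.example/#me", now=9999)
```

The timestamp is what makes later permission changes on Alice’s side effective: once Alice stops issuing fresh certificates for Eve, Eve’s old ones expire and Bob’s pod refuses access.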

Why is this a problem specific to Solid?

In a centralised system like Facebook for example, this is actually a non-issue. All permissions are managed by the same central instance. That means that this central instance (and those who control it) is the single source of truth about which permission set is visible for whom in the user interface. Thus the central instance acts as the neutral trusted arbiter, making sure that the permission sets for posts and comments are in sync without leaking any information.

The highly decentralised nature of Solid is what makes this a problem in the first place, as the different actors need to synchronise with each other while making sure to transmit only a minimum of information.


Alas, option three, even though we think it would be best, is currently way outside the Solid specification. We had a lot of internal debate about which of the theoretical harms of options one and two would be worse.

In the end, I consulted a few privacy experts and internet activists, and their unanimous vote was that the exfiltration of contact lists through option 1 would be far worse than potentially being criticised for something you really did write on the Internet. The latter means that instead of, for example, relying on a (potentially forged) screenshot as proof of what someone has written, people who have access to the comment could now forward a link to the comment to others.

So we will adopt option 2 for now: Comments are formally public, but only addressable from the original post. To mitigate harm, we make sure that those who use Darcy will know that their comments are potentially public if someone leaks the link outside of the group. That way, users can take this into consideration when writing a comment.

For us, this was a very interesting challenge. It shows that there are a lot of tricky problems and opportunities when we commit to decentralised spaces. Any social media concept needs to spend a lot of time to think about the various levels of privacy, consent, information, ease of use, and other factors. And we need to make the outcome of these deliberations clear to those who then use the platform.

At the time I am writing this blogpost, the Parler hack happened. This prompted me to draw another conclusion: Decentralising the social network through Solid, while making sure to handle permissions and discoverability right, makes it a lot safer against this kind of data breach. There is no single point of entry that can be exploited, and no single service that would take the whole network down if compromised.

Of course, a lot of that changes once we introduce the moderation services, as those could in theory become an attack vector, but we do plan to implement them properly and safely.


Thank you for this detailed explanation, it really makes it more understandable.

I wonder if some kind of automatic policy negotiation is possible, something like the following, but done in automated exchanges between some kind of policy control modules controlled by chat hosts, posters, and commenters. The modules would be kept in linked data files on participants’ pods.

Chat manners, negotiated automatically. If you don’t like the result of the negotiation, you can adjust your chatting style or even decline to participate.

To the chat host, a poster would ask:

Can I post to your chat?
Ok, if I can post to your chat, then can I also control approval (but not editing) of comments on my post?
Ok, if I can control approval (but not editing) of comments on my post, then can I control approval of comments on those comments?
and so on.

To the poster (if the poster has approval to control comments) or to the chat host (if the poster does not have such approval), commenters would ask:

Can I comment on that post?
Ok, if I can comment on that post, then can I also control approval (but not editing) of comments on my comment?
Ok, if I can control approval (but not editing) of comments on my comment, then can I control approval of comments on those comments?
and so on
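The negotiation chain above could bottom out in something like this rough sketch, where a host policy caps how deep a participant’s approval rights go (everything here, including the policy fields, is hypothetical):

```python
# Hypothetical policy negotiation: a participant asks for escalating
# rights, and the host's policy module grants them up to a fixed depth.

HOST_POLICY = {"max_approval_depth": 1}  # host lets posters approve direct
                                         # comments, but nothing deeper

def negotiate(requested_depth, policy):
    """Return the approval depth actually granted (0 = may post only)."""
    return min(requested_depth, policy["max_approval_depth"])

# Poster asks: may I post, approve comments, and approve comments-on-comments?
granted = negotiate(requested_depth=2, policy=HOST_POLICY)
# The poster can now adjust their chatting style -- or decline to
# participate -- based on `granted`.
```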

I don’t know, maybe this would get too complicated?

I am new to this Solid community. I played around a bit with the technology, but I am a bit frustrated. The setup of the WebID, certificates, and pods is far too complicated for normal people. Additionally, I get the strong impression that a big part of the pod vision is still missing. Having our private repositories reachable somewhere on the net via common standards is OK. The fun starts once we can easily administer the access to our data. So far, I was not able to see this access control and access monitoring part of Solid. Your question is related to chats, but is just as relevant to all other types of data in a pod. All apps I tested so far are nice but quite basic. Have you seen the access control part of Solid?

It’s still early days. With the clients and SolidOS currently available, setting access controls is basically like programming, with the same hazards.

No. I think of this as a feature and I want to be able to choose. I don’t want an option taken away.

There are scenarios when one of the options is better than the other:

  1. e.g. the picture is of an item Alice sells and I have a private question or question only for the original group Alice shared it with
  2. e.g. Alice is a political activist and we strongly disagree. She will delete all my comments. Then, AAA, I’ll choose option 3. Then on my pod viewers shall see my comment below her post.

And, I think the three ways are wrong. There are only two: the central and the decentral model. Which leads to where follow-up comments/questions are stored.

For a lot of scenarios and tastes, this can indeed be a feature. For what we want to build with Darcy, it is a problem. (I didn’t fully explain the interaction model we want to implement, because that would be a series of posts if I were to go into details)

To sum things up, our chosen interaction model demands several things:

  • the creator of a post is supposed to have control over what is visible alongside this post, and who sees it.
  • comments are different from posts and those are different from private messages
  • there is a shared experience, where people who talk about a thing can be reasonably sure that they all see the same thing.
  • there is the possibility for (transparent) moderation, preferably from within the community, but if need be also on a more global level

And at the same time we are committed to the Solid strategy where everything user A creates will be stored on the pod of user A, be that a post, a like, a picture, or a comment.

Going back to the original example, Alice would never be able to actually delete an unwanted comment under their posts – they can only unlink the comment. The original commenter can afterwards decide to publish their now-unlinked comment as a post on their own stream, and then show a link to the original post. But that content will not be visible or discoverable coming from Alice’s post, if Alice does not want it to be.
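The unlink-versus-delete distinction can be sketched like this (the pod structures are hypothetical stand-ins, not Darcy’s actual storage layout):

```python
# Unlinking (sketch): Alice removes the link from her pod; the comment
# itself stays on Bob's pod, where he is free to republish it.

alice_pod = {"post1": {"comment_links": ["https://bob.example/comments/c1"]}}
bob_pod = {"comments/c1": "I disagree!"}

def unlink_comment(pod, post, comment_url):
    """Moderation on Alice's side: hide the comment from her post's context."""
    pod[post]["comment_links"].remove(comment_url)

unlink_comment(alice_pod, "post1", "https://bob.example/comments/c1")

# The comment is no longer discoverable from Alice's post...
assert alice_pod["post1"]["comment_links"] == []
# ...but Bob still owns and stores it on his own pod:
assert bob_pod["comments/c1"] == "I disagree!"
```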

The reasoning for that is that we want to allow and encourage Darcy users to maintain and manage their own online presence. Their posts and how things are discussed in their direct context are an extension of that.

And if we want to have the interaction model we prefer, and also want to store data in the Solid way as outlined above, then we end up with the conundrum of permission sets as outlined in my original post. :slight_smile:

We do encourage other social media products on Solid to implement whatever model they deem fit. I certainly don’t claim that this is the objectively best solution for everyone – but I do think that it is the best solution for the interaction model I want to promote.

I like this. To me this is much more #3 and not #2.

Is there anything to try out already? Do you have any relationship to polypoly?

For that, nothing more than the concept – as outlined, we are currently going with option 2, mainly because the Darcy team, small as it is, does not have the time and coders to start working on option 3.

As to the other question: Being the product owner at polypoly is my day job :slight_smile: So there is an overlap of ideas and interests, but formally, the two projects are unrelated.