Best practices for ownership in social?

This might belong in The Basics, but I think it is more suitable here. There should be some best practices for applying Solid to social applications, so that end users can carry their understanding across several systems.

A common example given for Solid is this scenario:

  • A publishes a picture and stores it on their Solid pod, PodA
  • B comments on the picture and stores the comment on their PodB
  • C looks at the picture and also sees the comment, thanks to the magic of RDF

That is the simple use case, and it is easily understood: the picture belongs to A, the comment to B.
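To make the cross-pod linking concrete, here is a minimal sketch of how C's viewer app could stitch the picture and the comment together. All URIs and predicate names are hypothetical placeholders, not a fixed Solid vocabulary; triples are plain `(subject, predicate, object)` tuples rather than a real RDF store.

```python
# Hypothetical URIs: one resource on A's pod, one on B's pod.
PICTURE = "https://podA.example/photos/sunset.jpg"

# Triples published on A's pod
pod_a = [
    (PICTURE, "rdf:type", "schema:Photograph"),
    (PICTURE, "schema:author", "https://podA.example/profile#me"),
]

# Triples published on B's pod: a comment that links back to A's picture
pod_b = [
    ("https://podB.example/comments/1", "rdf:type", "schema:Comment"),
    ("https://podB.example/comments/1", "schema:about", PICTURE),
    ("https://podB.example/comments/1", "schema:text", "Lovely colours!"),
]

def comments_about(resource, *graphs):
    """Merge the graphs and return subjects that comment on `resource`."""
    merged = [t for g in graphs for t in g]
    return [s for (s, p, o) in merged
            if p == "schema:about" and o == resource]

# C's viewer finds B's comment even though it lives on a different pod.
found = comments_about(PICTURE, pod_a, pod_b)
```

The point is that neither pod holds the other's data; the link in B's comment is enough for any app that reads both graphs.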

But what if A objects to the comment being shown on their picture? I can imagine that A could then disable the link on PodA, so the comment is no longer shown when the picture is loaded.

People can probably still go to PodB and see the comment there, and probably also still get to the picture, because the comment on PodB still links to it?

Should A be able to prohibit B from making that comment link?

Things get interesting when we have a group discussion:

  • A hosts a place where people can discuss topics.
  • B posts a statement there.
  • C comments on the statement.

That leads me to several questions:

  1. Who should be able to modify or delete the comment or original statement?
  2. Where is the statement stored?
  3. Where is the comment stored?
  4. What happens if A eventually decides to hand over the place to D?

The expected behavior would be that when joining the hosted place, people agree that the host acts as a moderator and thus controls everything shown there. That control is usually partly shared, so that everyone can at least modify their own content while some act as additional moderators.

For Solid, I would expect that adding content to such a hosted place means you still keep your own content in your own pod, but also have to give control of that content to the host and their intermediaries?

Or do they only have the permission to remove the link to your content from the hosted place so that your content does not show up there anymore but is still kept at your pod? (I’m currently favoring this approach)


It’s said about Linked Data that

anyone can say anything about any resource.

So you “cannot” stop anybody from saying something about a picture whose URI they know, and consequently you can’t stop an app from showing it. However, you could flag that comment as “inappropriate” or something similar, and the app could then hide the comment based on that flag. But of course that depends on the app itself.
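The flag-then-filter idea can be sketched in a few lines. This is purely illustrative: the comment structure, the `flags` field, and the notion of “trusted flaggers” are assumptions, not part of any Solid specification, and the flag deletes nothing; each app decides whether to honour it.

```python
# Hypothetical comment records, each carrying a list of WebIDs that flagged it.
comments = [
    {"uri": "https://podB.example/comments/1",
     "text": "Nice shot!", "flags": []},
    {"uri": "https://podX.example/comments/9",
     "text": "Rude remark",
     "flags": ["https://podA.example/profile#me"]},
]

def visible_comments(comments, trusted_flaggers):
    """Hide a comment if anyone the viewer trusts has flagged it."""
    return [c for c in comments
            if not set(c["flags"]) & trusted_flaggers]

# A viewer that trusts A's flags sees only the unflagged comment.
shown = visible_comments(comments, {"https://podA.example/profile#me"})
```

A different app, trusting nobody's flags, would simply show everything; that is exactly the “depends on the app” point above.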

OK, got that. So instead of “removing the link”, it should read “prevent the link from being displayed”. That makes sense and has a very similar end result.

Still, I see a problem coming up nonetheless:

Think of spammers: if enough people use Solid, spammers will simply say “A should buy more Bitcoin at $insertSpamlinkHere” about everyone. That then needs to be flagged so it won’t clog up the viewport.

If we think this through to its inevitable end, sensible apps will likely only show linked data from pre-approved sources (which could be people, but also other apps), because it will probably be easy to mass-create such linked spam.

(And I haven’t even gotten into the topic of harassment and other abuse. Corollary: Who is in charge of flagging?)

Requests such as your Bitcoin one could then be signed, so we would know whether someone really wants to buy or not.
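Here is a minimal sketch of the verify-before-display idea. It is an illustration only: a real deployment would use asymmetric signatures tied to a WebID, not the shared-secret HMAC used here, and the key and message are invented for the example.

```python
import hmac
import hashlib

SECRET = b"demo-shared-secret"  # hypothetical; real systems use key pairs

def sign(statement: bytes, key: bytes) -> str:
    """Produce a hex signature for a statement."""
    return hmac.new(key, statement, hashlib.sha256).hexdigest()

def verify(statement: bytes, signature: str, key: bytes) -> bool:
    """Check a signature in constant time before displaying the statement."""
    return hmac.compare_digest(sign(statement, key), signature)

msg = b"A wants to buy more Bitcoin"
sig = sign(msg, SECRET)

genuine = verify(msg, sig, SECRET)          # True: statement checks out
forged = verify(b"forged spam", sig, SECRET)  # False: spammer's claim fails
```

An app could then refuse to display any “A said X” claim whose signature does not verify against A's key.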


There’s a conflation here between apps and pods.

I think of it as:

  • data owned by each user - this is what is currently in their pod, but in practice it could be stored in other places and in other ways. What matters is the Solid protocol, not whether the data is stored in a server-based pod
  • apps that can access data on behalf of a user, and display it to that user - again, this is not a particular app on the pod of the user creating the data. It can be this app, or that app, on any Web server and isn’t tied to a pod. It just needs to use the Solid protocol.

There is more to apps than display, but in general they can display any data accessible to the user, but only change data the user owns. They need have no association with any pod. A pod is a bit like a folder on a massive Internet disk which is allocated to a particular user. What matters is that all apps use the Solid protocol to access that massive Internet ‘disk’.

An exception to that is when a user lets selected others modify data in a space controlled by that user. Allowing anyone to do so would invite spam, so such a feature would require whitelisting (via ACL) or filtering and moderation (e.g. to approve or delete contributions from others).
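A whitelist-style ACL check might look like the sketch below. The resource URIs, WebIDs, and access modes are hypothetical; real Solid pods express this with Web Access Control (WAC) documents rather than a Python dict, but the logic is the same: named agents get extra modes, everyone else falls back to a default.

```python
# Hypothetical ACL: per-resource map of agent WebID -> allowed modes,
# with "*" as the fallback for everyone else.
acl = {
    "https://podA.example/discussions/": {
        "https://podA.example/profile#me": {"read", "write", "control"},
        "https://podB.example/profile#me": {"read", "write"},  # invited
        "*": {"read"},                                         # public read
    },
}

def allowed(agent, resource, mode, acl):
    """True if `agent` may perform `mode` on `resource`."""
    rules = acl.get(resource, {})
    modes = rules.get(agent, rules.get("*", set()))
    return mode in modes

space = "https://podA.example/discussions/"
invited_ok = allowed("https://podB.example/profile#me", space, "write", acl)
spammer_ok = allowed("https://podZ.example/spammer#me", space, "write", acl)
```

B, being whitelisted, can write into A's discussion space; an unknown agent can only read.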

Forgetting that for now, and going back to the bullets above, let’s think about that blog with comments app.

I, as A, publish a post with an image on my pod. B and C can see this by loading an app (not necessarily hosted on my pod, and possibly a different app from the one I used) that displays blog posts published in the way my blogging app published them.

We don’t have to use the same app, so long as the apps understand the underlying format (an RDF blog ontology) and are told where the data is.

One way for B to comment is for their comment to be created and stored in their space, not A’s. This way, their comment could include a link to the post or image they are commenting on.

A and C won’t see B’s comment unless they (and the app they use) know how to find B’s comments, and have permission to access them.

So B can choose:

  • to make private comments (their personal notes on the public Web)
  • which comments they share with selected individuals and groups (and those ‘comment readers’ might choose to subscribe in order to keep updated with B’s comments)
  • which comments they make public to everyone, and to which anyone can subscribe.

This is a very different model to the one we are used to. So it takes time to even begin to understand both how it changes the user experience, and what different possibilities will arise from it.

I hope, though, it is quickly obvious how decentralising this is. Rather than A publishing a blog and being able to see and control everything in it, the boundary blurs: B is, to an extent, publishing their own take on that blog (B could add not just comments but posts too), and C can also control what they consume, choosing to see A’s posts and B’s comments, but not B’s posts, and so on.

In fact, A and B are at the same level now, each publishing a feed of data which they make available selectively (private, selected, public). Neither can force their content on the other, nor can they prevent anyone from publishing something linked to their content. Each person is producing a social feed while choosing which other feeds to view, what they see from each feed, and with which app. This is the Social in Social Linked Data (Solid), as opposed to a central publisher versus consumers.

The whole thing is very fluid, and creates many new ways of sharing, discovering, and consuming information. I imagine we’ll change our idea of publishing in radical ways.

It’s a bit of a head stretcher, for me anyway :slight_smile:


Classic blog software actually has pingback functionality. There, B doesn’t write a comment on A’s blog, but instead writes their own blog post pointing at A’s post. B’s blog software then sends a pingback to A’s blog, so that A can choose to display that pingback on their own post. So the basics are known :). The thing is that the blog model of social apparently only works for certain people and use cases, otherwise we wouldn’t have all those other platforms…

The main reason I’m asking all these questions here is that the new model cannot be too alien right away, otherwise people face too steep a learning curve adapting to the new system.

And this here, is the reason why my original post here exists: :slight_smile:

We need to agree on such a format and how to deal with the abuse cases. Because as soon as we leave the experiment stage, people will abuse things and there needs to be a framework in place to deal with that.

Right now, the way to handle these cases is to delete, block and ban. Not necessarily all of that or in that order, but these are the options available. What I want to figure out is which of these still apply in the Solid world, and for which we need to invent new ideas.


Well put. This was obvious to me as this is how the web functions right now, but it seems not obvious to most others so glad to see it spelt out and done well. I think this is a feature and an important one.

There will still be a need for a directory, and the directory will have moderators with all the current controversies. The hope is that a decentralised system (such as Solid) will help people use multiple directories and change them as appropriate, much like newspapers. To be precise: directories are centralised and moderated, but the data hosting is separate, and we should all be able to switch away (from Facebook, say). However, although we can all switch from Google, we actually tend not to, which is a different story and out of scope here, I think.


Great question, @JollyOrc, and a nice clarification, @happybeing. One thing that I think is worth highlighting is that apps will have a great deal of control over what they choose to display and how they choose to display it because they will have logic that allows for the encoding of business rules.

One app may skew toward a more laissez faire, open culture where any comment can be attached to any other piece of content. Another app might build processes that require the original poster to approve or at least enable deletion after the fact. That won’t stop the comment from floating out there in the great linked data web and it won’t delete it from the commenter’s pod, but it will control what gets displayed within that particular app.

Apps will serve many functions, one of which will be content discovery and display. Their logic will determine how well they serve their end-users. We will no doubt have some apps tailored to end-users that prefer laissez faire approaches and others that prefer ‘safer’ social settings. What will be different, this time, is that the underlying data won’t be siloed away from each other, preventing the possibility of content being displayed simultaneously across many different types of user experiences.

So many exciting implications here.


@JollyOrc , good questions. My thoughts:

Q1/2/3: Who should be able to modify or delete the comment or original statement? Where are the statement and comment stored?
That depends on the logic of the apps A and B used to access their pods, and the permissions they granted to those apps. Basically,

  1. Don’t worry too much about A’s ability to remove B’s comment link. A post that A has made public is, by design, open to discussion by anybody. The only way to keep information from spreading further is for people to lose interest and forget it; no technology, design, or policy in human history has done better. A celebrity cannot stop a scandal by deleting their own tweet.
  2. However, from a business point of view, it is helpful for service providers (whether a conventional web service or a pod service) to offer a “Delete Comment” feature to attract users, and such a feature is also feasible in a pod-based system. The logic is: (1) A can ask B’s service to delete B’s comment referring to A’s post. (2) If B’s service does not provide such a function, A’s pod can blacklist B’s service in advance; in other words, A can whitelist only those services that support deleting comments at the source. (3) If B’s service refuses to support source-side comment deletion, fewer and fewer pods will allow it access, it will have very limited content to serve its client B, and B will move to another service.

Q4: What happens if A eventually decides to hand over the place to D?
In my understanding, the “handover” means moving the WebID and post URIs to another pod server. In that case, a new set of URIs would be generated, and the existing comments would still point to the old ones. If the pod services are designed to be friendly enough, A’s service should keep redirects in place for a certain period of time, while B’s service should run a periodic check to update its references.
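The redirect-plus-periodic-check scheme can be sketched as follows. The URIs and the in-memory redirect map are invented for illustration; a real implementation would issue HTTP requests and look at 3xx responses rather than consult a dict.

```python
# Hypothetical redirect map: what A's old server would answer during the
# grace period after handing the place over to D.
redirects = {
    "https://old.podA.example/posts/42": "https://podD.example/posts/42",
}

def resolve(uri, redirects, max_hops=5):
    """Follow redirects until the URI stops changing (or give up)."""
    for _ in range(max_hops):
        if uri not in redirects:
            return uri
        uri = redirects[uri]
    return uri  # too many hops; return the last URI seen

# B's periodic check rewrites the stored reference in the comment.
comment = {"about": "https://old.podA.example/posts/42"}
comment["about"] = resolve(comment["about"], redirects)
```

Once the grace period ends and the redirect disappears, any comment B never updated is left with a dangling link, which is exactly why this scheme is fragile.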


Q1/2/3: There are lots of legitimate reasons to have content removed. Think posting nude pictures of someone without their consent. Or linking violent and graphic content to a place where children hang out.

Of course, there are also plenty of legitimate reasons not to have content removed: dissidents and activists shouldn’t be silenced by oppressive regimes, and companies shouldn’t be able to force whistleblowers into silence.

So when designing a system, it is important to know how to distinguish these two cases. Personally, I think more people are affected by malignant undeletable content than by suppressed whistleblowers. The latter do need protected outlets for their content, but those need not be the same as social media. Social media, however, becomes toxic and thus useless to people when it doesn’t protect them.

Q4 That sounds… error prone :slight_smile:


I think the divergence is technical solutions vs social problems. Tools will help but won’t solve everything. Locks deter thieves but don’t stop them.

To reiterate previous posts: we need better tools (locks) and clearer use cases.

Q4 is discussed here "move it at any time, without interruption of service"? but not solved obviously.

This post is relevant to this question of separating apps and their underlying data.


Interesting and important issue.

Considering control, I tend to think of the new scheme as divided into access and display domains. Producers control access to their intellectual product (comment, article, image) via link permissions, and displayers control the display of the content those links point to in their own domain (curation or moderation). It seems pretty straightforward. I haven’t thought it all through, but it shouldn’t matter too much what the content or display domain is.

In a comment app (which could be a component of another app) for example, commenters own their comments and agree to access by posting (providing a link within the comment app domain). They can revoke the access at will.

It’s up to the comment app and its owner to deal with the comments and with later changes by the comment owner (revocation of access, revisions, etc.), and the app owner has full control over all display issues. The policies of access and display control can get messy, but I think the ownership and responsibilities are relatively clear. Deletion (prevention of display) can be accomplished by either the content owner or the display-domain owner. Blocking and banning remain measures of control for the display owner.

The copying and republishing of content without the creator’s permission (usurping a degree of control) is an issue raised in this thread and others. It is a broad and difficult problem that currently has no true solution (copyright laws don’t work as originally intended), and it probably cannot be prevented. A technical solution might be an app that searches for unpermitted republication.


It sounds quite similar to the posting options in Diaspora, where posts can be private, public, or available to ‘aspects’ (groups of other users pre-defined by the posting user).