Community Solid Server & Persisting User Data to SPARQL Endpoints

I’m looking to start a Pod provider soon, and one thing I would love to do is provide a triplestore for my users – if I’m doing that, I would also like their Solid Pod data to persist to this same store.

How … possible is this idea, storing each user’s data in their own SPARQL endpoint?


I am not a pro (although I run a provider), but in case it helps, this might be related: SOLID Server SPARQL Endpoint

I would also like their Solid Pod data to persist to this same store.

One option would be to use the Community Solid Server with a SPARQL backend. However, this will put the data for all users into the same triplestore, so you’d then need some kind of custom permissioning on the triplestore, or a proxy in front of the SPARQL endpoint that handles the access control – along the lines of the sketch below.
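Roughly, the proxy idea could look something like this sketch – assuming a Fuseki-style endpoint at http://localhost:3030/pods/sparql, one named graph per user, and that something upstream has already authenticated the request and passed the WebID along in a (purely illustrative) webid header:

```typescript
import express from 'express';

// Assumptions: a single triplestore with one named graph per user, reachable
// at this (placeholder) SPARQL endpoint.
const SPARQL_ENDPOINT = 'http://localhost:3030/pods/sparql';
const app = express();

// Accept SPARQL queries POSTed directly as application/sparql-query.
app.post('/sparql', express.text({ type: 'application/sparql-query' }), async (req, res) => {
  // Illustrative only: assume an upstream auth layer set this header.
  const webId = req.header('webid');
  if (!webId) {
    res.status(401).send('No authenticated WebID');
    return;
  }

  // Pin the dataset to the user's own graph via the SPARQL Protocol dataset
  // parameters; per the protocol these take precedence over any FROM / FROM
  // NAMED clauses inside the query itself.
  const userGraph = `urn:pod:graph:${encodeURIComponent(webId)}`;
  const params = new URLSearchParams({
    query: req.body,
    'default-graph-uri': userGraph,
    'named-graph-uri': userGraph,
  });

  // Forward to the real endpoint using URL-encoded POST and relay the result.
  const upstream = await fetch(SPARQL_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      Accept: 'application/sparql-results+json',
    },
    body: params,
  });

  res
    .status(upstream.status)
    .type('application/sparql-results+json')
    .send(await upstream.text());
});

app.listen(8080, () => console.log('SPARQL proxy listening on :8080'));
```

This only covers read queries; updates and anything touching other users’ graphs would need their own handling.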

I believe a QPF-like interface was also recently added to the Community Solid Server (GitHub - SolidLabResearch/derived-resources-component: Adds support for derived resources); so you could use this to expose a better query interface to users, and use something like Comunica on the client to extend that to full SPARQL queries (see the snippet below).
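For example, a minimal client-side query with Comunica could look like this (assuming the @comunica/query-sparql package; the Pod URL is just a placeholder):

```typescript
import { QueryEngine } from '@comunica/query-sparql';

const engine = new QueryEngine();

// Run a SPARQL query over a document exposed by the Pod (placeholder URL).
const bindingsStream = await engine.queryBindings(`
  SELECT ?person ?name WHERE {
    ?person <http://xmlns.com/foaf/0.1/name> ?name.
  } LIMIT 10
`, {
  sources: ['https://pods.example.org/alice/profile/card'],
});

for (const bindings of await bindingsStream.toArray()) {
  console.log(bindings.get('person')?.value, bindings.get('name')?.value);
}
```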

You can also use Comunica with Link Traversal to perform SPARQL queries over any Pod from the client side.
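A sketch of that, assuming the @comunica/query-sparql-link-traversal-solid engine and placeholder URLs:

```typescript
import { QueryEngine } from '@comunica/query-sparql-link-traversal-solid';

const engine = new QueryEngine();

// Seed the query with a WebID document (placeholder URL) and let the engine
// follow links to discover the other documents needed to answer it.
const bindingsStream = await engine.queryBindings(`
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?friendName WHERE {
    <https://pods.example.org/alice/profile/card#me> foaf:knows ?friend.
    ?friend foaf:name ?friendName.
  }
`, {
  sources: ['https://pods.example.org/alice/profile/card'],
  lenient: true, // don't abort when a followed document fails to dereference or parse
});

bindingsStream.on('data', (bindings) => console.log(bindings.get('friendName')?.value));
```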

Thank you! I was eyeing that SPARQL backend, and that’s what led me in this direction. Wrapping a proxy layer around the store that manages access control to separate named graphs for each user seems like it would be Okay and Fine, but the idea of scaling that to n thousand users who are suddenly shouldering each other for resources on the same query endpoint feels kinda bad when there’s no real benefit to having them colocated.
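For the non-colocated route, something like the Fuseki administration protocol could spin up a dataset per user at signup – a hypothetical sketch (Fuseki itself, the URLs, and the naming scheme are all assumptions here, and the admin API would normally sit behind authentication):

```typescript
// Hypothetical: create a dedicated dataset for a new user on a Fuseki server
// running at localhost:3030, so each user gets their own SPARQL endpoint.
async function createUserDataset(username: string): Promise<string> {
  const response = await fetch('http://localhost:3030/$/datasets', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    // dbType 'tdb2' on recent Fuseki versions; 'tdb' or 'mem' on older ones.
    body: new URLSearchParams({ dbName: `pod-${username}`, dbType: 'tdb2' }),
  });
  if (!response.ok) {
    throw new Error(`Failed to create dataset: ${response.status}`);
  }
  // The per-user endpoint a Pod server or backend could then be pointed at.
  return `http://localhost:3030/pod-${username}/sparql`;
}

console.log(await createUserDataset('alice'));
```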

More thinking required I guess, or just let the file system solution be fine!
