I’m only halfway through part 1, but the issues about deep and diverse hierarchies must go way beyond Solid.
It must be similar to issues in biology and psychology, where two or more applications sharing data are comparable to two or more people (or whatever), one being the pod owner, talking about the same thing (the data). How are things agreed on (if they really even are), remembered, recalled, reorganized?
If one doesn’t consider how it’s done in nature (not that I know much about that), then I don’t see how one can make something that’s scalable.
I will finish parts 1 and 2 and hopefully find the answer.
Anyway, thank you for this response on the forum. Sorry to be so blunt. I know that in the do-ocracy I don’t have a visa (maybe that’s why I can be so blunt), and the way to really get answers is to participate in the panel, but I’ve got all I can do right now to understand this “great reset” thing.
Shape expressions can filter changes to a pod, but can those filtering shape expressions themselves be changed? How can a pod change its mind? Can that be suggested with the weight or quality of the info that is rejected by the filters, or can it be done only unilaterally by the pod owner?
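Not an answer, just the question made concrete: here is a toy Python sketch (invented names, nothing like real ShEx or SHACL) of a filter that is itself data, so a pod could “change its mind”, possibly prompted by the accumulated weight of what it has rejected rather than only by a unilateral edit from the owner:

```python
# Toy sketch (not real ShEx): a pod's write filter as data that can
# itself be rewritten, so the pod can "change its mind" over time.

class PodFilter:
    def __init__(self, allowed_fields):
        self.allowed_fields = set(allowed_fields)
        self.rejected = []  # keep rejected writes as evidence

    def accept(self, record):
        if set(record) <= self.allowed_fields:
            return True
        self.rejected.append(record)
        return False

    def reconsider(self, threshold):
        """If enough rejected records share a field, admit that field.

        A crude stand-in for 'the weight of rejected info' changing
        the filter, instead of a unilateral edit by the pod owner."""
        counts = {}
        for record in self.rejected:
            for field in set(record) - self.allowed_fields:
                counts[field] = counts.get(field, 0) + 1
        for field, n in counts.items():
            if n >= threshold:
                self.allowed_fields.add(field)

pod = PodFilter({"name", "email"})
pod.accept({"name": "Ada", "phone": "555-1234"})     # rejected: 'phone' unknown
pod.accept({"name": "Bob", "phone": "555-5678"})     # rejected again
pod.reconsider(threshold=2)                          # the pod changes its mind
print(pod.accept({"name": "Cy", "phone": "555-9"}))  # True
```

Whether anything like this is desirable, or how it maps onto actual shape expressions, is exactly the open question.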
It’s crazy to try to organize the world and expect it to work for everybody.
You have to be adaptable.
OK, trusted access to trusted agents, I guess.
Thanks for working on this, I agree with you that interoperability is very important :).
From the point of view of an app developer, I think something we’re missing in the tooling space is a way to apply reasoning, as I learned from @RubenVerborgh in his post: Shaping Linked Data apps.
It’d be awesome if apps were documented properly and used the same shapes for the same concepts, but in practice I’m not sure how feasible that is. What I think is more feasible is declaring shape equivalences. I have no idea how that would work in practice, though, so I’d love to learn more. I may tackle it myself at some point, but right now I’m focusing my efforts on other areas.
I’m also convinced that automated conversion between shapes is the way to go, but it’s out of my comfort zone to explore further myself at the moment.
Both the app and the pod could provide shape equivalence data. Ideally the conversion algorithm would be fairly generic, and it seems like the kind of thing the server should provide, because it potentially requires bringing together linked data from multiple documents and saving to multiple documents.
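To make the idea concrete, here is a minimal sketch (shape names and fields are invented, and this is not any real Solid API) of equivalences declared as plain data and applied by a generic converter; both the app and the pod could contribute entries to the table:

```python
# Hypothetical sketch: shape equivalences declared as data, applied by
# a generic converter. Shape names and fields are made up for illustration.

# An equivalence maps fields of one shape onto fields of another.
EQUIVALENCES = {
    ("app/ContactV1", "pod/Person"): {
        "fullName": "name",
        "mail": "email",
    },
}

def convert(record, source_shape, target_shape):
    """Rewrite a record from source_shape to target_shape using a
    declared equivalence; fields without a mapping are dropped."""
    mapping = EQUIVALENCES[(source_shape, target_shape)]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

app_record = {"fullName": "Ada Lovelace", "mail": "ada@example.org"}
print(convert(app_record, "app/ContactV1", "pod/Person"))
# {'name': 'Ada Lovelace', 'email': 'ada@example.org'}
```

The hard part the sketch skips over is everything beyond field renaming: structural differences, value transformations, and data spread across multiple documents.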
Registering shapes and building client side tooling to use it sounds useful in the meantime but feels like a stopgap to me.
Some shapes might seem completely orthogonal but they might be related. How would the distance between them be measured? And how would that be related to the difference between them?
If you had one shape with only zip codes and phone numbers, it would share no fields at all with another shape that has only email addresses and names. But the two are still pretty closely related: both describe contact information.
The distance might be measured using different maps or axes too. The distance between those two shapes might be small if you’re talking about personal contacts, but it might be large if you’re talking about a voip network.
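One way to sketch that axis-dependence (everything here is invented for illustration): group fields into context-dependent categories first, and only then compare the shapes, so the same pair of shapes can be near or far depending on the axis:

```python
# Sketch: shape distance depends on the axis you measure along.
# Group fields into per-axis categories, then take the Jaccard
# distance between the category sets. Categories are invented.

AXES = {
    "personal-contacts": {
        "zip": "contact", "phone": "contact",
        "email": "contact", "name": "contact",
    },
    "voip-network": {
        "zip": "location", "phone": "endpoint",
        "email": "mailbox", "name": "label",
    },
}

def distance(shape_a, shape_b, axis):
    """Jaccard distance between the category sets of two shapes,
    as seen along one axis."""
    cats = AXES[axis]
    a = {cats[f] for f in shape_a}
    b = {cats[f] for f in shape_b}
    return 1 - len(a & b) / len(a | b)

shape1 = {"zip", "phone"}
shape2 = {"email", "name"}
print(distance(shape1, shape2, "personal-contacts"))  # 0.0 -- both just 'contact'
print(distance(shape1, shape2, "voip-network"))       # 1.0 -- nothing shared
```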
Having different fields is one thing, but even representing data with the same semantic meaning can be very hard. There’s this W3C internationalization best practice on representing Personal names around the world (discussed here on HN). Things get very complex very quickly.
Anyway, I came here just to mention a different take on the matter: a different approach to tackling a similar problem area, using Lenses:
I found this on the fediverse here. They’re not using Linked Data but JSON Schema; still, they have a large section on their findings, with some interesting general observations with regard to interoperability.
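For a flavor of the lens idea, here is my own minimal Python sketch (not the actual API of that project): each lens is a pair of functions that translate a document forward to a new schema and back, and lenses compose:

```python
# Minimal lens sketch: a lens is a (forward, backward) pair of
# functions between two document schemas. Inspired by the lens idea;
# not the real implementation from the linked project.

def rename(old, new):
    """Lens that renames a field, in both directions."""
    def fwd(doc):
        doc = dict(doc)
        doc[new] = doc.pop(old)
        return doc
    def bwd(doc):
        doc = dict(doc)
        doc[old] = doc.pop(new)
        return doc
    return fwd, bwd

def compose(*lenses):
    """Chain lenses; the backward direction runs them in reverse."""
    def fwd(doc):
        for f, _ in lenses:
            doc = f(doc)
        return doc
    def bwd(doc):
        for _, b in reversed(lenses):
            doc = b(doc)
        return doc
    return fwd, bwd

fwd, bwd = compose(rename("fullName", "name"), rename("mail", "email"))
v2 = fwd({"fullName": "Ada", "mail": "ada@example.org"})
print(v2)       # {'name': 'Ada', 'email': 'ada@example.org'}
print(bwd(v2))  # round-trips back to the original fields
```

The appeal is that an app written against one schema can talk to data in another by running it through a chain of small, declarative, reversible edits.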
Couldn’t JSON Schema be made interoperable with Linked Data through JSON-LD?
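In principle, yes: the same JSON document can be validated with JSON Schema and, with a JSON-LD `@context` attached, read as RDF. A minimal example using FOAF terms (the document shape itself is invented):

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "email": { "@id": "http://xmlns.com/foaf/0.1/mbox", "@type": "@id" }
  },
  "name": "Ada Lovelace",
  "email": "mailto:ada@example.org"
}
```

Expanding this with a JSON-LD processor yields `foaf:name` and `foaf:mbox` triples, while a JSON Schema can still validate the plain JSON structure. The friction tends to show up with JSON-LD features (lists, nested nodes, multiple values) that plain JSON Schemas don’t anticipate.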
What about a tool that analyzes a pod’s resources as data fragments with machine learning and suggests changes to the pod owner, or automatically converts data representing a Person (or another recognized shape) to another shape?
One way could be with ml5 (easy access to machine learning/TensorFlow), using word2vec to represent the content of a pod in the form of vectors.
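A toy version of that idea in Python (the 3-dimensional vectors below are invented; real word2vec embeddings have hundreds of dimensions and come from a trained model, e.g. via ml5/TensorFlow): represent a resource by the average of its word vectors and compare by cosine similarity:

```python
# Toy word2vec-style sketch: embed a resource as the mean of its word
# vectors, then compare resources by cosine similarity. Vectors invented.

import math

VECTORS = {
    "person":  [0.9, 0.1, 0.0],
    "name":    [0.8, 0.2, 0.1],
    "invoice": [0.1, 0.9, 0.3],
}

def embed(words):
    """Mean of the word vectors."""
    dims = zip(*(VECTORS[w] for w in words))
    return [sum(d) / len(words) for d in dims]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

doc = embed(["person", "name"])
# A Person-like resource sits closer to 'person' than to 'invoice':
print(cosine(doc, VECTORS["person"]) > cosine(doc, VECTORS["invoice"]))  # True
```

A tool like this could only ever *suggest* that a fragment looks like a Person shape; the conversion itself would still need explicit mappings.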
Maybe profile based content negotiation as written about by @RubenVerborgh will be flexible enough to be based on many different shapes incrementally built and agreed on by trusted networks of pods.
Like for example, “we all in this trusted network have over time agreed that basketball is superior to soccer and here is a constitutional shape that includes explanations and assumptions for determining truth and efficiency”.
So these biases, and every pod will not only have biases but need them, can evolve while also being used in negotiated exchanges.
Preferences for the process of revealing biases and the consequences for privacy could itself be evolved with biases.
I am not at this moment deep-diving the subject, but have these resources listed for consideration.
We have the same problems to tackle for ActivityPub / Fediverse. In early days the decision was to KISS. Therefore there’s a very basic machine-readable ‘standardized’ NodeInfo endpoint to learn something about a remote instance. Then there’s a convention to write a FEDERATION.md in a project repo describing how interop works for the particular app.
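For readers unfamiliar with it, an abridged NodeInfo document in the style of the 2.0 schema (the values are invented):

```json
{
  "version": "2.0",
  "software": { "name": "exampleapp", "version": "1.4.2" },
  "protocols": ["activitypub"],
  "services": { "inbound": [], "outbound": [] },
  "openRegistrations": false,
  "usage": { "users": { "total": 120 }, "localPosts": 4300 },
  "metadata": {}
}
```

It tells a remote server which software and protocols an instance speaks, but nothing about which vocabularies or message shapes the apps on it actually understand.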
This does not scale, and we are looking to improve on this mechanism. Grishka of Smithereen wants to build Capability Negotiation, but we need to standardize on that, and a full-blown solution would drag in all the complexity.
Update: Note that both of these methods are concessions that help to avoid the complexity of tackling the full problem, and thus provide a practical way forward without having full-blown solutions in place.
Many years ago, I was doing some research related to interoperability in decentralized environments. Perhaps something can be tried in solid.
I used an artifact I called a ‘web blackboard’ (honoring the old AI blackboards): a versioned RDF document describing a concept through several coexistent representations. It had an RDF adaptation of JSON Schema (for validation) and rules attached.
The idea is that the user collects concepts (commits to them), which creates a relation between their personal dataspace and a web of blackboards. Concepts are discovered by users while interacting with other users.
At that time, I learned some things:
- it’s better to see schemas as agreements between people;
- people quickly agree on some things and never on others;
- people describe things in many ways, at the same time;
- people understand/accept flat structures quicker;
- people prefer lattices to hierarchies;
- people want to describe things beyond reality.
I was researching this stuff because I wanted a ‘meaning support system’ that constantly evolves the mappings in communities, does inference, etc., to pave the way to a (real) personal assistant.
I’m thinking a meme or concept (or blackboard, as you describe it) would be an RDF document, but it would also represent a collection of SPARQL queries and/or neural networks. The collection might take, for example, text in natural language and return an RDF graph representing a concept like ‘mermaid’.
So suppose an inexperienced sailor has never heard of a mermaid, and he (I’m old school, but OK, she) listens to two experienced sailors tell him about mermaids, but they differ on whether the mermaid has a fish tail. The blackboards they give him might then differ in only a small way: probably not in the neural network that converts natural language to RDF, but just in the small part of the SPARQL queries having to do with the fish tail. The inexperienced sailor could keep both blackboards around until he has more experience and decides to discard one or both of them. The neural network converting natural-language text to RDF might be a separate, more fundamental sort of blackboard.
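The mermaid scenario can be sketched as set operations on triples (the triples themselves are invented): keep what both sailors agree on, and keep the disputed parts side by side until experience decides:

```python
# Sketch: two competing 'blackboards' for one concept, and where
# exactly they differ. In the scenario above, the disagreement is
# only about the fish tail.

blackboard_a = {
    ("mermaid", "is-a", "sea-creature"),
    ("mermaid", "upper-body", "human"),
    ("mermaid", "lower-body", "fish-tail"),
}
blackboard_b = {
    ("mermaid", "is-a", "sea-creature"),
    ("mermaid", "upper-body", "human"),
    ("mermaid", "lower-body", "human-legs"),
}

agreed   = blackboard_a & blackboard_b   # both sailors concur
disputed = blackboard_a ^ blackboard_b   # keep both until experience decides
print(sorted(disputed))
# [('mermaid', 'lower-body', 'fish-tail'), ('mermaid', 'lower-body', 'human-legs')]
```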
That post is by @how who is also admin at SocialHub. He’s involved in DREAM research, I believe most recently in UPSYCLE.
I think these insights point to one of the major factors that stood in the way of the Semantic Web’s success: it is nearly impossible to describe semantics in a universal, globally machine-readable way. At TerminusDB they wrote a 5-part blog series addressing many issues and shortcomings, and in part 3 they talk about open- and closed-world reasoning.
On the whole set of standards surrounding RDF they say “This is where things get both really brilliant and ridiculously absurd”. SHACL is also mentioned: “the standard that came out was once again full of logical inconsistencies and impractical and wrong design decisions”.
Whatever you think of that, their decision was to focus on closed-world vocabularies, and that has an appeal to me: it makes app design much more practical. It also fits really well with DDD strategic design, where a bounded context maps onto a vocabulary. These vocabularies can be agreed upon and standardized for specific application/business domains. In the case of the Fediverse, you can create ActivityStreams extensions, also defined as separate vocabularies, specifying federated message formats, and these can be the basis for a more building-block-like approach to app development.
There’s much complexity to tackle, of course, but I hope that with Linked Data, AS/AP and Solid we can move from silo-first to task-oriented federated app design. I have my eye on Go-Fed where you start with a vocabulary defined in a subset of OWL, generate code from that, and then build your (modular) logic and UI on top of that.
That is just an old term I used in the slides. Probably now, I would call it ‘schema,’ to avoid confusion.
It was ignored; I believe the dominant idea was to only use ‘global semantics.’ I moved on to other things.
I think this is relevant stuff, especially now with the ‘tools for thought’ of the last two years. There is a lot of data produced by thousands of people who ‘weave’ their personal data spaces (or text spaces) using tools like obsidian.md, Roam, Foam, and org-roam, to name a few. I see lots of enthusiastic people asking in their respective forums: how do we connect our thoughts? How do we collaborate?
I’m also open to experimenting. I would start by connecting existing vaults, always honoring their representation formats. Afterward, perhaps, add Solid adapters.
…, the open-world-assumption shoe does not always fit. We can use both assumptions and be explicit about which parts of the knowledge should be considered complete and which could be seen as incomplete. We lack simple enough mechanisms to partition triples, though.
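A small sketch of that partitioning idea (partition names, triples, and the tagging scheme are all invented; in RDF terms this would likely be done with named graphs): tag each partition with an assumption, and only run completeness checks on the closed-world parts:

```python
# Sketch: be explicit about which triples are 'complete'. Completeness
# checks apply only to partitions tagged closed-world; in open-world
# partitions, absence means 'unknown', not 'false'.

partitions = {
    "profile": {  # closed world: we claim this part is complete
        "assumption": "closed",
        "triples": {("ada", "name", "Ada"), ("ada", "email", "a@x.org")},
    },
    "interests": {  # open world: may always be extended
        "assumption": "open",
        "triples": {("ada", "likes", "math")},
    },
}

def check_complete(parts, required_predicates):
    """Report missing required predicates, but only in closed partitions."""
    missing = []
    for name, part in parts.items():
        if part["assumption"] != "closed":
            continue
        present = {p for (_, p, _) in part["triples"]}
        missing += [(name, p) for p in required_predicates - present]
    return missing

print(check_complete(partitions, {"name", "email", "phone"}))
# [('profile', 'phone')] -- 'interests' is open, so nothing is reported there
```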
This looks like a good pointer. Where can I learn more about this? Do you recommend a comprehensive source?