Looking at what is actually out there on the web in terms of linked data, one can only conclude that it is a big, sprawling mess. It is really hard to pull good ontologies together in an application, and - once you have - to avoid creating your own unique interpretation of the semantics within.
This problem applies to the entire Solid + linked data world. For linked data to be widely adopted we need to be on the same page as much as possible, not only in terms of spec standardization, but especially in the semantic models / ontologies we adopt.
The problem we need to overcome is well-explained by TerminusDB team member Kevin Feeney in this Medium article:
The 4 basic [linked data] principles were:
- Use URIs for things
- Use HTTP URIs
- Make these HTTP URIs dereferenceable, returning useful information about the thing referred to
- Include links to other URIs to allow discovery of more things.
We can supplement these 4 principles with a fifth, which was originally defined as a ‘best practice’ but which effectively became a core principle:
- “People should use terms from well-known RDF vocabularies such as FOAF, SIOC, SKOS, DOAP, vCard, Dublin Core to make it easier for client applications to process Linked Data”
[…]
However, the big problem is that the well-known ontologies and vocabularies such as foaf and dublin-core that have been reused, cannot really be used as libraries in such a manner. They lack precise and correct definitions and they are full of errors and mutual inconsistencies [1] and they themselves use terms from other ontologies — creating huge and unwieldy dependency trees. If, for example, we want to use the foaf ontology as a library, we need to also include several dozen dependent libraries, some of which no longer exist. So, the linked data approach, in fact, just uses these terms as untyped tags — there is no clear and usable definition of what any of these terms actually mean — people just bung in whatever they want — creating a situation where there are effectively no reliable semantics to any of these terms.
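To make the "untyped tags" point concrete, here is a minimal Python sketch (assuming rdflib is installed; the example.org URIs are made up) showing that nothing enforces the intended semantics of FOAF terms when you build a graph:

```python
# Minimal sketch using rdflib: FOAF predicates are accepted wherever we put
# them, so in practice they behave like loosely defined tags.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
g.bind("foaf", FOAF)

alice = URIRef("https://example.org/people/alice")
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))

# Nothing stops us from attaching foaf:name to something that is clearly
# not an Agent; the graph still serializes without complaint.
book = URIRef("https://example.org/books/moby-dick")
g.add((book, FOAF.name, Literal("Moby-Dick")))

print(g.serialize(format="turtle"))
```

Whether that second usage is "wrong" depends entirely on prose documentation and convention, which is exactly the reliability problem Feeney describes.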
I was in a vidcall with @pukkamustard the other day - we share a common interest in offering LD-based knowledge to local communities - about the need for a new initiative that takes a fresh, modern approach to collecting schemas / ontologies for practical application in software designs, rather than the academic data-research contexts in which you normally find these things. Maybe this should be a new wikimedia project, or something similar - a big pattern library maybe, idk.
Regarding streamlined app creation, I am interested in exploring a DDD + Linked Data approach, on which I just posted in the TerminusDB community forum:
So I’m interested in looking into combining Domain Driven Design + Linked Data for the fediverse apps I’m elaborating on. This DDD + LD pairing is a bit unusual, and there is hardly any information in the wild on combining the two fields. Usually LD leads you to the more academic, data-science corners of the web, while DDD leads to enterprise business applications.
This combo is interesting, I think, as a way to make rich semantic models available to the masses in well-designed (clean architecture) applications. Event sourcing, CQRS and DDD have gotten better tool, framework and library support, to the extent that they are now within easy reach for a large part of the developer community. Many large, production-ready projects use ES and CQRS, and DDD is following along.
Curious what your thoughts are about this subject area…
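To give a flavour of what I mean by the combination, here is a rough Python sketch (all names hypothetical, no particular framework assumed): the domain model stays a plain DDD aggregate that records its own events, and linked data only appears at the boundary, where the aggregate projects its state as JSON-LD onto well-known vocabulary terms.

```python
# Rough sketch: a DDD aggregate root with event-sourcing style events,
# projecting its state as JSON-LD for LD-aware consumers. Hypothetical names.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class MemberRegistered:
    """Domain event recorded when a member joins the community."""
    member_id: str
    name: str

@dataclass
class Member:
    """Aggregate root for a community member."""
    member_id: str
    name: str
    events: list = field(default_factory=list)

    @classmethod
    def register(cls, member_id: str, name: str) -> "Member":
        member = cls(member_id=member_id, name=name)
        member.events.append(MemberRegistered(member_id, name))
        return member

    def to_jsonld(self) -> dict[str, Any]:
        # The ontology is treated as a published read model, not as the
        # internal shape of the domain.
        return {
            "@context": {"foaf": "http://xmlns.com/foaf/0.1/"},
            "@id": f"https://example.org/members/{self.member_id}",
            "@type": "foaf:Person",
            "foaf:name": self.name,
        }

member = Member.register("42", "Alice")
print(member.to_jsonld())
```

The appeal, for me, is that app developers keep the clean-architecture workflow they already know, while the shared vocabularies only have to be agreed on at the edges.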