But if I have some concept that needs a new schema, for example an IPFS multihash, how can I publish that schema and make it searchable for other developers?
I think one way is to write the schema definition in RDF and put that RDF document on a POD, so there are permanent URLs pointing to the terms. Then I could share the URL of the RDF document in this forum so others know there is a schema about IPFS.
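Just as a sketch, a minimal version of such an RDF document could be produced with the N3.js library like this (the POD namespace https://example.pod/schemas/ipfs# and the term names are made up for illustration):

```typescript
// Sketch only: namespace and term names are invented for the example.
// Uses N3.js (npm install n3) to serialize a tiny RDFS vocabulary as Turtle,
// which could then be uploaded to the POD at the namespace URL.
import { Writer, DataFactory } from 'n3';

const { namedNode, literal, quad } = DataFactory;

const NS = 'https://example.pod/schemas/ipfs#';            // hypothetical POD URL
const RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';
const RDFS = 'http://www.w3.org/2000/01/rdf-schema#';

const writer = new Writer({ prefixes: { ipfs: NS, rdfs: RDFS } });

// Declare a class for the concept...
writer.addQuad(quad(
  namedNode(NS + 'Multihash'),
  namedNode(RDF + 'type'),
  namedNode(RDFS + 'Class'),
));
writer.addQuad(quad(
  namedNode(NS + 'Multihash'),
  namedNode(RDFS + 'comment'),
  literal('A self-describing IPFS content hash.'),
));
// ...and a property that can point to it.
writer.addQuad(quad(
  namedNode(NS + 'multihashValue'),
  namedNode(RDF + 'type'),
  namedNode(RDF + 'Property'),
));

writer.end((error, turtle) => {
  if (error) throw error;
  console.log(turtle); // store this document on the POD at the namespace URL
});
```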
Another way might be to publish the schema's term URIs (fake or real) as an npm package, so developers can find the package and import those URI constants. That way they get interoperability.
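The package itself could be little more than a module of constants, something like this (the package name and namespace are hypothetical):

```typescript
// Sketch of a hypothetical npm package (e.g. "@example/ipfs-schema") that
// exports the schema's term URIs as constants; the namespace is made up.
export const NS = 'https://example.pod/schemas/ipfs#';

export const IPFS = {
  Multihash: NS + 'Multihash',            // class URI
  multihashValue: NS + 'multihashValue',  // property URI
} as const;
```

Any app that installs the package and imports `IPFS.Multihash` writes exactly the same URI into its data, which is where the interoperability would come from.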
Two questions:
Is there a better way to publish a new schema so other developers can search for it?
Is there a template or generator that can speed up the process of building a schema?
The first idea of the app @anon36056958 mentioned was more of a (fast) search index for existing ontologies, with a curated list of ontologies. During development I realised that the RDF files of those ontologies, although accessible by URI, are nowhere near as available as expected. Even commonly used ontologies like VCARD or SIOC sometimes failed to download due to server overload or network issues (timeouts), and others weren't available at their URI at all - I think this is ridiculous. The core ontology files should be hosted on a CDN and be available 24/7. I'm therefore not sure if hosting your own ontology on your Solid POD is a good idea, but on the other hand I don't know how resilient PODs are against heavy network traffic.
Yes, the URL of some ontologies is even blocked by a CDN, and we have to enter a captcha to visit it. (And it's not served over HTTPS.)
As for infrastructure, I thought there could be a centralized place like http://npmjs.com to hold all schemas. This could be done by the W3C to build consensus.
There is a proposal about decentralized schemas that cares about the content of a schema rather than its URI: https://github.com/sandhawke/schemove. A centralized place is the most reliable solution, but if there is no such place, movable schemas may be an option.
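I haven't looked at how schemove actually works internally, but the general idea of identifying a schema by its content rather than its URI could be illustrated very roughly like this (a naive normalization, not schemove's real algorithm):

```typescript
// Naive illustration of content-based schema identity (not schemove's actual
// mechanism): hash a normalized copy of the schema text, so two copies hosted
// at different URIs can still be recognized as the same schema.
import { createHash } from 'crypto';

function schemaId(turtle: string): string {
  const normalized = turtle
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith('#'))
    .join('\n');
  return createHash('sha256').update(normalized).digest('hex');
}

// Two mirrors serving the same document get the same id, regardless of URL.
```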
The server would send the cached file if available; otherwise it would try to request the file from its original location and cache it.
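A minimal sketch of that kind of caching proxy, assuming Express and Node's built-in fetch (a real version would also need cache expiry and proper content negotiation):

```typescript
// Sketch of a caching ontology proxy: serve from cache if present,
// otherwise fetch from the original location and remember the result.
import express from 'express';

const app = express();
const cache = new Map<string, string>(); // ontology URL -> RDF document

app.get('/ontology', async (req, res) => {
  const url = String(req.query.url ?? '');
  if (!url) {
    res.status(400).send('missing ?url= parameter');
    return;
  }
  // Serve from cache when possible...
  const cached = cache.get(url);
  if (cached !== undefined) {
    res.type('text/turtle').send(cached);
    return;
  }
  // ...otherwise fetch from the original location and cache the result.
  try {
    const upstream = await fetch(url, { headers: { Accept: 'text/turtle' } });
    if (!upstream.ok) throw new Error(`upstream returned ${upstream.status}`);
    const body = await upstream.text();
    cache.set(url, body);
    res.type('text/turtle').send(body);
  } catch (err) {
    res.status(502).send(`could not fetch ${url}: ${err}`);
  }
});

app.listen(3000);
```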
Despite all the open-source effort around all things linked data, those ontology files are very hard to come by and are extremely inconsistent. I'm working on improving https://whattheontology.herokuapp.com with the ability to add custom ontologies. To be able to parse them, I need to validate them. Writing a JSON Schema validator for ontology files turned out to be a real challenge.
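For what it's worth, a simplified illustration of that kind of validation with Ajv could look like this (the metadata shape below is made up for the example, not the app's real format):

```typescript
// Sketch only: the metadata shape is an assumption for illustration.
// Uses Ajv (npm install ajv) to validate data extracted from a parsed ontology.
import Ajv, { JSONSchemaType } from 'ajv';

interface OntologyMeta {
  uri: string;
  prefix: string;
  classes: string[];
  properties: string[];
}

const schema: JSONSchemaType<OntologyMeta> = {
  type: 'object',
  properties: {
    uri: { type: 'string' },
    prefix: { type: 'string' },
    classes: { type: 'array', items: { type: 'string' } },
    properties: { type: 'array', items: { type: 'string' } },
  },
  required: ['uri', 'prefix', 'classes', 'properties'],
  additionalProperties: false,
};

const ajv = new Ajv();
const validate = ajv.compile(schema);

const candidate: unknown = {
  uri: 'http://www.w3.org/2006/vcard/ns#',
  prefix: 'vcard',
  classes: ['Individual'],
  properties: ['hasEmail'],
};

if (validate(candidate)) {
  console.log('ontology metadata looks well-formed');
} else {
  console.error(validate.errors);
}
```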