Linked Data as a first step

I think RDF (the description of resources and their links) is just the basis for what has to come.
Next: Our goal is to develop intelligent systems that can draw conclusions from this data.
In my opinion we have to get a better understanding of systems and processes with feedback (as we find in life, neural networks, and communication).
Programming languages need to evolve too - the step from the procedural to the object-oriented paradigm is just a starting point.
We will see a revolution in thinking.


And what do you think about Agent Oriented Programming?
I think this is closer to the concept of decentralisation :thinking:


Hi @Smag0 - I am sorry it took me so long to answer your question.
I think agent-oriented programming is an approach worth considering.
My vision is a system not so strongly coupled to classes and objects. I prefer to speak of rooms and spaces we are operating with. Everything opens a space, is a space, and is in a space. A human being, for example, is a space associated with life functions, thinking, and more. Spaces or rooms communicate with each other, and are indefinitely contained in each other and overlapping. I guess it is a question of what philosophy you prefer: do you want to live in a world of definite objects to manipulate, controlling everything, or do you want to live and think in the free and open cosmos of life itself?
I hope I managed to communicate something of the spirit I want to transmit.
Please don’t stop communicating with me :slight_smile:


Dear @Joytag2
Since I’ve discovered multi-agent systems, I consider everything as a complex system interacting with other complex systems, composed of complex systems, and part of complex systems.
The cosmos, a human, an enterprise, a government, a country, an association, a town, a building, a robot, a piece of software, a tree, a rock …
All have a space or an environment.
All are complex systems that can be resources to others; some are active, others passive.
I try to develop a vision of holonic multi-agent systems: everything must scale up or down, and every system can be considered as an agent that has its proper aim (or doesn’t - I’m not sure a rock can have one :thinking:), but considering everything as an agent leaves the doors open, not blocked if someone else has more info than me about ‘the language of the trees’ …
All depends on the level of abstraction you use, the context, and the point of view from which you consider a fact or a thing.
I think it’s a ‘systemic’ vision, and it is very close to linked data & decentralized knowledge, where many agents can each own a part of the knowledge, and all together make a big knowledge base.
Not sure if this is really clear, given the translation from French to English, but that drives my perception and my contribution to projects like Solid, or my participation in weekend hackathons in museums.
Collaboration, mix, test, exploration… That’s what I love
PS: perhaps this is not the post in which to talk about that, but it could be a part of my presentation that I haven’t filled in :crazy_face::upside_down_face:


Great to hear your insights. MAS is also the thing I looked into for integrating Solid. Its premises suit Solid perfectly. Actually, the agent you design doesn’t have to ensure that other agents (no matter what we define as a rock or a tree) fit the belief-desire-intention model of the system you mentioned, as long as we can perceive “their” information through some channels and reconstruct (or imagine, depending on your view of the world) their existence. Therefore, such systems don’t even need the premise of an “environment” or “space” to define agents.

Practically, we don’t have (and it’s very hard to take) a ‘systemic’ vision to adapt to Solid. You can develop one agent as the interpreter of the Wikipedia API, another agent hooking into the real museum cameras… As long as they adopt SPARQL/OWL as the service language, we can gradually migrate the newly generated data to the agents’ pods.
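To make that interpreter-agent idea concrete, here is a minimal plain-Python sketch - not Solid or Wikipedia code; the input shape and the `wd:` / `rdfs:` / `schema:` names are just illustrative - of a wrapper that turns an API-style response into triples that could later be migrated to a pod:

```python
# Hypothetical sketch: wrap a non-RDF source and emit (s, p, o) triples.
# The dict shape and vocabulary prefixes are invented for illustration.

def summary_to_triples(page):
    """Map a Wikipedia-style summary dict into simple triples."""
    subject = f"wd:{page['title'].replace(' ', '_')}"
    triples = [(subject, "rdfs:label", page["title"])]
    if "description" in page:
        triples.append((subject, "schema:description", page["description"]))
    return triples

page = {"title": "Tim Berners-Lee", "description": "English computer scientist"}
for t in summary_to_triples(page):
    print(t)
```

A real interpreter agent would of course emit proper IRIs and serialize to Turtle or answer SPARQL, but the shape of the job - source format in, triples out - is the same.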

The real challenges to making use of decentralized data are two:

  1. How to make machines deal with logical conflicts between statements.
  2. How to make machines deal with reference changes. (Note on Nov 13: the shift of the reference-content mapping. Humans as data sources don’t always assign the same symbol the same meaning, and it changes over time.)

So glad to see your statement “All depends on the level of abstraction that you use, the context, and the point of view that you use to consider a fact or a thing.” It’s the first time I’ve seen someone else in this forum bring the focus from ontology to epistemology, which I believe is where the answers reside.


In my opinion, this is not a real challenge for now. In the beginning, perhaps we could develop agents that signal a blockage and ask questions in a human way, like a chatbot. For example: ‘oh oh, there is a conflict here, what do I have to do?’ or ‘there was a reference here yesterday and it is not there anymore, what is wrong?’
Considering both (human & machine) at the same level, as agents that have their own intelligence, capacities, intentions… and facilitating communication between agents, there should not be a gap.
The machine could share the human response with others. And perhaps with machine learning, the machine could learn to anticipate or reduce the conflict.
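That chatbot-style fallback could look something like this minimal sketch (the class, the callback, and the example values are all hypothetical): when the agent hits a conflict it cannot resolve, it asks a human and remembers the answer so other agents can reuse it.

```python
# Hypothetical sketch: escalate a conflict to a human, cache the answer,
# and make it available to other agents.

class ConflictResolver:
    def __init__(self, ask_human):
        self.ask_human = ask_human   # callback: question string -> answer
        self.shared_answers = {}     # resolutions other agents can read

    def resolve(self, subject, predicate, candidates):
        key = (subject, predicate)
        if key not in self.shared_answers:
            question = f"Conflict on {subject} {predicate}: {candidates}. Which is right?"
            self.shared_answers[key] = self.ask_human(question)
        return self.shared_answers[key]

# Stand-in for a chatbot prompt; a real agent would actually ask someone.
resolver = ConflictResolver(ask_human=lambda q: "1901")
print(resolver.resolve("museum:item42", "ex:createdIn", ["1901", "1910"]))
```

The second time any agent asks about the same (subject, predicate) pair, the cached human answer is returned without bothering the human again.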


Well, you know the multi-agent assumption of limited-scope knowledge. By introducing personalized data pods, logic conflicts will be ubiquitous between any agents, due to their different knowledge backgrounds. Today, even dealing with data heterogeneity (same meanings, different formats) costs endless effort. Compared with the speed at which data is generated, dealing with logic conflicts (different meanings, ideally the same formats) is not a job humans can do in practice. Not to mention the unverifiable (or too-expensive-to-verify) ones.

However, I didn’t say logic conflict is a bad thing. I said “deal with” rather than “reduce”. Conflicts are one of the most important parts (if not, under a more general definition, the only part) of wills, and the driving factor in the emergence of different communities. We need to release their potential.

For reference change, sorry I didn’t say it clearly. I didn’t mean reference availability issues. I meant changes in the mapping from a reference to its content. Like what you mentioned: “All depends on the level of abstraction that you use, the context, and the point of view that you use to consider a fact or a thing.”

For any one of us, concepts are never consistent. When a Grade 5 student and a PhD both say the word “Math”, are they talking about the same thing? No. Even today’s me and tomorrow’s me grant the same concept (reference, icon, symbol, whatever) different meanings. So the semantic triples (although rigorously defined) that I generate at different moments - although the subjects read as identical - don’t always mean that I was talking about the same thing. Especially when the triples are generated without rationality.

In a closed environment for multi-agent systems, we don’t need to care about this, since references are always constant: time is short and the rules are static. But when it comes to the Internet, it’s way more complex, and the agents (the pods or their assisting logic) are directly managed by humans.
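To illustrate the shift of the reference-content mapping, here is a toy sketch (the reference name, the years, and the contents are invented): each reference keeps a history of time-stamped bindings, so a triple can be interpreted against the mapping that was in force when it was asserted.

```python
import bisect

# Hypothetical sketch: the same symbol "me:Math" means different things
# depending on when I used it.

history = {
    "me:Math": [(2010, "school arithmetic"), (2020, "graduate analysis")],
}

def resolve(ref, when):
    """Return the content a reference mapped to at a given time."""
    entries = history.get(ref, [])
    idx = bisect.bisect_right([t for t, _ in entries], when) - 1
    return entries[idx][1] if idx >= 0 else None

print(resolve("me:Math", 2015))
print(resolve("me:Math", 2021))
```

Two triples with the identical subject `me:Math` asserted in 2015 and 2021 would resolve to different contents - which is exactly why comparing them by symbol alone is not enough.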


Well @dprat0821, it seems that we are focusing at different levels of abstraction.

Effectively, from your scope mine can look limited, as I’m focusing on the end-user level - the level of a house - imagining robots / IoT / software at a human level.

I’ve got a preference for ‘little data’ over ‘big data’ :blush:. At this level, the number of users is limited, the number of ‘data generators’ is limited too, and as you may know if you’ve followed me a little, I’m working on Spoggy to make it easier for a user to write / read triples on a pod.
MAS can be applied at this level first.
I think that if one day this robot needs to know where to put a new unknown thing that we drop in the room, it could ask like a chatbot, or with Spoggy, storing the answer on the house’s pod or its own pod, and it should not be too hard for it to learn and share with the other IoT devices / robots in the house. I think that here the power of MAS can make the difference, all of them being sub-agents of the ‘house agent’.

Then the house can be considered as a sub-agent of its district, the district as a sub-agent of a town… the town as a sub-agent of a country.
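A toy sketch of that holonic escalation (the names and knowledge entries are invented): each agent answers what it knows locally and passes everything else up to its parent holon, room to house to district to town.

```python
# Hypothetical sketch of holonic agents: answer locally, else escalate.

class Holon:
    def __init__(self, name, knowledge=None, parent=None):
        self.name = name
        self.knowledge = knowledge or {}
        self.parent = parent

    def ask(self, question):
        if question in self.knowledge:
            return f"{self.name}: {self.knowledge[question]}"
        if self.parent:
            return self.parent.ask(question)  # escalate up the hierarchy
        return f"{self.name}: unknown"

town     = Holon("town", {"recycling day": "Tuesday"})
district = Holon("district", parent=town)
house    = Holon("house", {"where are the keys": "kitchen drawer"}, parent=district)

print(house.ask("where are the keys"))
print(house.ask("recycling day"))
```

The same shape works at any scale: swap ‘house’ for ‘enterprise’ and the escalation logic is unchanged.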

What can be applied to a house can be applied to an enterprise too…

If the aim of Solid is to give users power over their data, let’s start with ‘little data’, giving users the possibility to interact with it first, before drowning them in some data lake :bath::rowing_man::surfing_man:


Great work on Spoggy :+1:
Also, a great and meaningful start with ‘little data’. Let’s see what will happen :grinning:

I’m not that familiar with agents as a paradigm, but I suspect it would be good to learn more, so I’m interested in this discussion.

It reminds me of a few years ago: before Holochain added a blockchain, I was looking into various parts of their model, and one of the most fascinating parts was their Ceptr demos. It looks like they’ve collected the Ceptr-related parts here, although I don’t have time to check it out now.

It’s possibly a little diversion from the core of this thread, but definitely worth a look if you haven’t seen their ideas.

Looking at the repo, the code hasn’t been touched for years, but I think I did run it, and they had some nice demos and screencast videos.


Saw this in the Ceptr video:

  1. receptors have membranes and are organized fractally
  2. receptors receive and send signals
  3. receptors are lightweight VMs that manage their coherence

so receptors:

  1. manage their membranes (in / out / coherence)
  2. store their state (in memory and persistently)
  3. transform their state (by running code)
  4. scape data (index collections, compute relationships)
  5. sync with other instances (holographic storage and distributed Byzantine fault tolerance)

The receptors with membranes sound kind of like pods, and the transforms could be in-band hypermedia, like Hydra, maybe.
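To make those properties concrete, here is a toy sketch of a receptor - this is only my reading of the list above, not Ceptr’s actual API, and every name in it is invented: a membrane that filters incoming signals, stored state, and a transform step run as code.

```python
# Hypothetical sketch of the receptor properties listed above.

class Receptor:
    def __init__(self, accepted_kinds):
        self.accepted_kinds = set(accepted_kinds)  # the "membrane"
        self.state = []                            # stored state
        self.outbox = []                           # signals to send onward

    def receive(self, kind, payload):
        if kind not in self.accepted_kinds:
            return False                           # membrane rejects it
        self.state.append(payload)
        return True

    def transform(self, fn):
        # transform state by running code, then emit the results
        self.state = [fn(x) for x in self.state]
        self.outbox.extend(self.state)

cell = Receptor(accepted_kinds={"temperature"})
cell.receive("temperature", 21)
cell.receive("noise", 90)          # rejected at the membrane
cell.transform(lambda c: c * 9 / 5 + 32)   # Celsius -> Fahrenheit
print(cell.outbox)
```

Syncing with other instances and scaping (indexing) are left out here - those are exactly the parts where a pod-like storage layer could plug in.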


It’s years since I looked, but receptors are a very interesting idea, and IIRC they have some scheme for defining their inputs and outputs so that it becomes easy to join them together. It was before I knew much about the Semantic Web, so I can’t recall if they used RDF at all, but it would seem a very appropriate way to define those interfaces.