I think RDF (the description of resources and their links) is just the foundation for what has to come.
Next: Our goal is to develop intelligent systems that can draw conclusions from this data.
In my opinion, we have to get a better understanding of systems and processes with feedback (as we find in life, neural networks, and communication).
Programming languages need to evolve too - the step from the procedural to the object-oriented paradigm is just a starting point.
We will see a revolution in thinking.
And what do you think about Agent Oriented Programming? https://en.m.wikipedia.org/wiki/Agent-oriented_programming
I think this is closer to the concept of decentralisation.
Hi @Smag0 - I am sorry it took me so long to answer this question.
I think agent-oriented programming is an approach worth considering.
My vision is a system not so strongly coupled to classes and objects. I prefer to speak of rooms and spaces we operate with. Everything opens a space, is a space, and is in a space. A human being, for example, is a space associated with life functions, thinking, and more. Spaces, or rooms, communicate with each other, are indefinitely contained in each other, and overlap. I guess it is a question of which philosophy you prefer: do you want to live in a world of definite objects to manipulate, controlling everything, or do you want to live and think in the free and open cosmos of life itself?
I hope I managed to communicate something of the spirit I want to transmit.
Please don't stop communicating with me.
Dear @Joytag2
Since I discovered multi-agent systems, I consider everything as a complex system interacting with other complex systems, composed of complex systems, and part of complex systems.
The cosmos, a human, an enterprise, a government, a country, an association, a town, a building, a robot, a piece of software, a tree, a rock…
All have a space or an environment.
All are complex systems that can be resources to others; some are active, others passive.
I try to develop a vision of a holonic multi-agent system: everything must scale up or down, and every system can be considered an agent that has its own aim (or doesn't - I'm not sure a rock can have one). But considering everything as an agent leaves the door open, and doesn't block things if someone else has more information than me about "the language of the trees"…
All depends on the level of abstraction that you use, the context, and the point of view that you use to consider a fact or a thing.
I think it's a "systemic" vision, and it is very close to linked data and decentralized knowledge, where many agents can each own a part of the knowledge, and all together make a big knowledge base.
I'm not sure if this is really clear, given the translation from French to English, but that drives my perception and my contribution to projects like Solid, and my participation in weekend hackathons in museums https://www.museomix.org/
Collaboration, mixing, testing, exploration… that's what I love.
PS: perhaps this is not the right post to talk about this, but it could be part of my presentation, which I haven't filled in yet.
Great to hear your insights. MAS is also what I looked into for integrating Solid; its premises suit Solid perfectly. Actually, the agent you design doesn't have to ensure that other agents (whatever we define as a rock or a tree) fit the belief-desire-intention model for the system you mentioned, as long as we can perceive "their" information through some channels and reconstruct (or imagine, depending on your view of the world) their existence. Therefore, such systems don't even need the premise of an "environment" or "space" to define agents.
Practically, we don't have (and it's very hard to take) a "systemic" vision to adapt to Solid. You could develop one agent as an interpreter of the Wikipedia API, another agent hooking into the real museum cameras… As long as they adopt SPARQL/OWL as the service language, we can gradually migrate the newly generated data to the agents' pods.
There are two real challenges to making use of decentralized data:
- How to make machines deal with logical conflicts between statements.
- How to make machines deal with reference changes. (Note on Nov 13: I mean the shift in the reference-to-content mapping. Humans, as the data sources, don't always assign the same symbol the same meaning, and the meaning changes over time.)
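To make the first challenge concrete, here is a minimal Python sketch (the predicate names and pod contents are hypothetical, not any real vocabulary) of detecting a direct conflict between two agents' triple sets, where a predicate assumed to be single-valued receives different objects from different sources:

```python
# Predicates we assume admit exactly one value per subject (illustrative).
FUNCTIONAL = {"bornIn"}

def find_conflicts(triples_a, triples_b):
    """Return (subject, predicate) pairs where the two sources assign
    different objects to a predicate expected to be single-valued."""
    index_a = {(s, p): o for s, p, o in triples_a if p in FUNCTIONAL}
    conflicts = []
    for s, p, o in triples_b:
        # A conflict: source A recorded a different object for the same key.
        if p in FUNCTIONAL and index_a.get((s, p), o) != o:
            conflicts.append((s, p))
    return conflicts

# Two hypothetical pods disagreeing about the same subject.
pod_a = [("ex:alice", "bornIn", "ex:Paris")]
pod_b = [("ex:alice", "bornIn", "ex:Lyon")]
print(find_conflicts(pod_a, pod_b))  # [('ex:alice', 'bornIn')]
```

Real conflict detection would of course need reasoning over OWL axioms rather than a hard-coded predicate list; this only shows the shape of the problem.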
So glad to see your statement "All depends on the level of abstraction that you use, the context, and the point of view that you use to consider a fact or a thing." This is the first time I have found someone else in this forum with the idea of shifting the focus from ontology to epistemology, which I believe is where the answers reside.
In my opinion, this is not a real challenge for now. At the beginning, perhaps we could develop agents that signal a blockage and ask questions in a human way, like a chatbot. For example: "Oh, there is a conflict here, what do I have to do?" or "There was a reference here yesterday and it is gone now, what is wrong?"
If we consider both human and machine at the same level, as agents with their own intelligence, capacities, and intentions, and we facilitate communication between agents, there shouldn't be a gap.
The machine could share the human's response with others, and perhaps, with machine learning, the machine could learn to anticipate or reduce conflicts.
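The escalate-to-a-human idea above can be sketched as a tiny Python class (all names are hypothetical): on a detected conflict, the agent asks a human once through a chatbot-style callback, remembers the answer, and can reuse it for other agents:

```python
# Illustrative sketch only - not an existing framework or API.
class ConflictEscalatingAgent:
    def __init__(self, ask_human):
        self.ask_human = ask_human  # callback, e.g. a chatbot prompt
        self.resolutions = {}       # shared memory of past human answers

    def resolve(self, subject, predicate, values):
        """Ask the human about a conflict once, then reuse the answer."""
        key = (subject, predicate, frozenset(values))
        if key not in self.resolutions:  # only bother the human once
            question = (f"Conflict on {subject} {predicate}: "
                        f"{sorted(values)}. Which is right?")
            self.resolutions[key] = self.ask_human(question)
        return self.resolutions[key]

# A fake "human" that always picks Paris, standing in for a chat UI.
agent = ConflictEscalatingAgent(ask_human=lambda q: "ex:Paris")
print(agent.resolve("ex:alice", "bornIn", {"ex:Paris", "ex:Lyon"}))  # ex:Paris
```

The cached `resolutions` dict is the part that would be shared with other agents, and is also the obvious training data for the machine-learning step mentioned above.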
Well, you know the multi-agent assumption of limited-scope knowledge. By introducing personalized data pods, logical conflicts will be ubiquitous between agents due to their different knowledge backgrounds. Today, even dealing with data heterogeneity (same meanings, different formats) costs endless effort. Compared with the speed at which data is generated, dealing with logical conflicts (different meanings, ideally the same formats) is not a job humans can do in practice - not to mention the conflicts that are unverifiable (or too expensive to verify).
However, I didn't say logical conflict is a bad thing. I said "deal with" rather than "reduce". Conflicts are one of the most important parts (if not, under a more general definition, the only part) of wills, and the driving factor behind the emergence of different communities. We need to release their potential.
On reference change: sorry, I didn't say it clearly. I didn't mean reference availability issues. I meant changes in the mapping from a reference to its content - like what you mentioned: "All depends on the level of abstraction that you use, the context, and the point of view that you use to consider a fact or a thing."
For any of us, concepts are never consistent. When a fifth-grader and a PhD both say the word "Math", are they talking about the same thing? No. Even today's me and tomorrow's me grant the same concept (reference, icon, symbol, whatever) different meanings. So the semantic triples (although rigorously defined) that I generate at different moments - even when the subjects read as identical - don't always mean that I was talking about the same thing. Especially when the triples are generated without any recorded rationale.
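One way to sketch this drift in code (the names, dates, and meanings below are purely illustrative, not any existing API) is to time-stamp the mapping from a symbol to its intended meaning, so that symbol equality alone is never mistaken for sameness of meaning:

```python
# Hypothetical sketch: a time-stamped log of what a symbol meant to its author.
meaning_log = {}

def assert_meaning(symbol, meaning, timestamp):
    """Record that, from `timestamp` on, `symbol` denotes `meaning`."""
    meaning_log.setdefault(symbol, []).append((timestamp, meaning))

def meaning_at(symbol, timestamp):
    """Return the latest recorded meaning at or before `timestamp`."""
    current = None
    for t, m in sorted(meaning_log.get(symbol, [])):
        if t <= timestamp:
            current = m
    return current

# The same symbol, two different concepts at two different times.
assert_meaning("ex:Math", "arithmetic drills", "2015")
assert_meaning("ex:Math", "category theory", "2024")
print(meaning_at("ex:Math", "2020"))  # arithmetic drills
print(meaning_at("ex:Math", "2025"))  # category theory
```

Two triples mentioning `ex:Math` in 2020 and 2025 would then resolve to different concepts, which is exactly the reference-to-content shift described above.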
In a closed environment for multi-agent systems, we don't need to care about this, since references are always constant: time is short and the rules are static. But when it comes to the Internet, it's far more complex, and the agents (the pods, or their assisting logic) are directly managed by humans.
Well @dprat0821, it seems that we are focusing on different levels of abstraction.
Indeed, from your scope mine may look limited, as I'm focusing on the end-user level - the level of a house - imagining robots, IoT devices, or software at a human scale.
I have a preference for "little data" over "big data". At this level, the number of users is limited, and the number of "data generators" is limited too. As you may know if you've followed me a little, I'm working on Spoggy to make it easier for a user to write and read triples on a pod.
MAS can be applied at this level first.
I think that if, one day, a robot needs to know where to put a new, unknown thing that we drop in the room, it could ask, like a chatbot, or via Spoggy, storing the answer on the house's pod or its own pod. It shouldn't be too hard for it to learn that and share it with the other IoT devices and robots in the house. I think this is where the power of MAS can make the difference - all of them being sub-agents of the "house agent".
Then the house can be considered a sub-agent of its district, the district a sub-agent of a town… the town a sub-agent of a country.
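That holonic chain can be sketched in a few lines of Python (purely illustrative, not an existing framework): every agent may contain sub-agents, and a question is delegated down the hierarchy until some holon can answer it:

```python
# Hypothetical sketch of holonic delegation: town -> district -> house.
class HolonicAgent:
    def __init__(self, name, knowledge=None):
        self.name = name
        self.knowledge = knowledge or {}  # what this holon knows itself
        self.sub_agents = []

    def add(self, agent):
        self.sub_agents.append(agent)
        return agent

    def ask(self, question):
        """Answer from local knowledge, or delegate to sub-holons."""
        if question in self.knowledge:
            return self.knowledge[question]
        for sub in self.sub_agents:
            answer = sub.ask(question)
            if answer is not None:
                return answer
        return None

town = HolonicAgent("town")
district = town.add(HolonicAgent("district"))
house = district.add(HolonicAgent("house",
                                  {"where are the keys?": "kitchen drawer"}))
print(town.ask("where are the keys?"))  # kitchen drawer
```

The same structure scales in both directions: a country would simply be another `HolonicAgent` with towns as sub-agents, matching the "scalable for bigger or smaller" idea earlier in the thread.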
What can be applied to a house can be applied to an enterprise too…
If the point of Solid is to give users power over their data, let's start with "little data", giving users the possibility to interact with it first, before drowning them in some data lake.
Great work on Spoggy :+1:
Also, "little data" is a great, meaningful start. Let's see what happens.
I'm not that familiar with agents as a paradigm, but I suspect it would be good to learn more, so I'm interested in this discussion.
It reminds me of a few years ago: before Holochain added a blockchain, I was looking into various parts of their model, and among the most fascinating were their CEPTR demos. It looks like they've collected the CEPTR-related parts here: http://ceptr.org/projects/core although I don't have time to check it out now.
It's possibly a little diversion from the core of this thread, but definitely worth a look if you haven't seen their ideas.
Looking at the repo, the code hasn't been touched for years, but I think I did run it, and they had some nice demos and screencast videos.
I saw this in the CEPTR video:
- receptors have membranes and are organized fractally
- receptors receive and send signals
- receptors are lightweight VMs that manage their own coherence
So receptors:
- manage their membranes (in / out / coherence)
- store their state (in memory and persistently)
- transform their state (by running code)
- scape data (index collections, compute relationships)
- sync with other instances (holographic storage and distributed Byzantine fault tolerance)
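As a rough, speculative sketch only (CEPTR's actual implementation differs, and all names here are mine), the first three receptor properties - a filtering membrane, stored state, and transforms that run as code - could look like this:

```python
# Hypothetical receptor sketch: membrane, state, and code-driven transforms.
class Receptor:
    def __init__(self, accepts):
        self.accepts = set(accepts)  # membrane: which signal types pass
        self.state = {}              # stored state
        self.children = []           # fractal organization: nested receptors

    def receive(self, signal_type, payload, transform):
        """Let a signal through the membrane and apply a transform."""
        if signal_type not in self.accepts:
            return False             # membrane rejects the signal
        transform(self.state, payload)  # transform state by running code
        return True

def record_temperature(state, payload):
    state.setdefault("temps", []).append(payload)

room = Receptor(accepts={"temperature"})
room.receive("temperature", 21.5, record_temperature)
room.receive("humidity", 0.4, record_temperature)  # rejected by membrane
print(room.state)  # {'temps': [21.5]}
```

Scaping and sync are left out here; the point is only how a membrane-plus-transform agent resembles a pod with access control, as noted below.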
The receptors with membranes sound kind of like pods, and the transforms could be in-band hypermedia like Hydra, maybe.
It's years since I looked, but receptors are a very interesting idea, and IIRC they have some scheme for defining their inputs and outputs so that it becomes easy to join them together. It was before I knew much about the semantic web, so I can't recall whether they used RDF at all, but it would seem a very appropriate way to define those interfaces.