Linked Data as a first step

I think RDF (the description of resources and the links between them) is just the basis for what has to come.
Next: our goal is to develop intelligent systems that can draw conclusions from this data.
In my opinion, we need a better understanding of systems and processes with feedback (as we find in life, neural networks, and communication).
Programming languages need to evolve too - the step from the procedural to the object-oriented paradigm is just a starting point.
We will see a revolution in thinking.
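Just to make "draw conclusions from this data" a little more concrete, here is a minimal sketch in Python. The triples and the single transitivity rule are invented for the example, not taken from any real dataset:

```python
# A minimal sketch of rule-based inference over RDF-style triples.
# The data and the single rule are invented for illustration only.

triples = {
    ("lyon", "locatedIn", "france"),
    ("france", "locatedIn", "europe"),
    ("rhone", "flowsThrough", "lyon"),
}

def infer_transitive(facts, predicate):
    """Forward-chain a transitive predicate until no new facts appear."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        new = {
            (s1, predicate, o2)
            for (s1, p1, o1) in inferred if p1 == predicate
            for (s2, p2, o2) in inferred if p2 == predicate and o1 == s2
        }
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

for fact in sorted(infer_transitive(triples, "locatedIn") - triples):
    print("inferred:", fact)   # e.g. ('lyon', 'locatedIn', 'europe')
```

In a real system the rules would of course come from an ontology (for example OWL transitive properties) rather than being hard-coded like this.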

3 Likes

And what do you think about Agent Oriented Programming? https://en.m.wikipedia.org/wiki/Agent-oriented_programming
I think this is closer to the concept of decentralisation :thinking:

1 Like

Hi @Smag0 - I am sorry it took me so long to answer your question.
I think agent-oriented programming is an approach worth considering.
My vision is a system not so strongly coupled to classes and objects. I prefer to speak of rooms and spaces we operate in. Everything opens a space, is a space, and is in a space. A human being, for example, is a space associated with life functions, thinking, and more. Spaces or rooms communicate with each other, are indefinitely contained in each other, and overlap. I guess it is a question of which philosophy you prefer: do you want to live in a world of definite objects to manipulate, controlling everything, or do you want to live and think in the free and open cosmos of life itself?
I hope I have managed to communicate something of the spirit I want to transmit.
Please don't stop communicating with me :slight_smile:

1 Like

Dear @Joytag2
Since I discovered multi-agent systems, I consider everything as a complex system interacting with other complex systems, composed of complex systems, and part of complex systems.
The cosmos, a human, an enterprise, a government, a country, an association, a town, a building, a robot, a piece of software, a tree, a rock…
All have a space or an environment.
All are complex systems that can be resources for others; some are active, others passive.
I try to develop a vision of a holonic multi-agent system: everything must scale up and down, and every system can be considered as an agent that has its own aim (or doesn't - I'm not sure a rock can have one :thinking:). But considering everything as an agent leaves the doors open, not blocked, in case someone else has more info than me about 'the language of the trees'…
It all depends on the level of abstraction that you use, the context, and the point of view from which you consider a fact or a thing.
I think it's a 'systemic' vision, and it is very close to linked data and decentralized knowledge, where many agents can each own a part of the knowledge and all together make up a big knowledge base.
Not sure if this is really clear, given the translation from French to English, but that drives my perception and my contribution to projects like Solid, and my participation in these weekend hackathons in museums https://www.museomix.org/
Collaboration, mixing, testing, exploration… That's what I love.
PS: perhaps this is not the right post to talk about that, but it could be part of the presentation that I haven't filled in yet :crazy_face::upside_down_face:
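Here is a tiny sketch in Python of what I mean by a holonic agent - all the names and aims are invented for the example: every agent has (or may not have) its own aim and can contain sub-agents, and the same structure repeats at every scale.

```python
# A toy sketch of a holonic agent: every system is an agent with its own aim,
# and may itself be composed of sub-agents. Names and aims are invented.

class Holon:
    def __init__(self, name, aim=None):
        self.name = name
        self.aim = aim          # a rock may have no aim; that's allowed
        self.parts = []         # sub-agents contained in this agent

    def add(self, part):
        self.parts.append(part)
        return part

    def describe(self, depth=0):
        aim = self.aim or "no known aim"
        print("  " * depth + f"{self.name}: {aim}")
        for part in self.parts:
            part.describe(depth + 1)

town = Holon("town", "coordinate districts")
house = town.add(Holon("house", "keep its inhabitants comfortable"))
house.add(Holon("robot", "tidy the rooms"))
house.add(Holon("rock in the garden"))   # an agent without an explicit aim

town.describe()
```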

4 Likes

Great to hear your insights. MAS is also the thing I looked into for integrating with Solid. Its premises suit Solid perfectly. Actually, the agent you design doesn't have to ensure that other agents (whatever we define as one - a rock or a tree) fit the belief-desire-intention model of the system you mentioned, as long as we can perceive "their" information through some channels and reconstruct (or imagine, depending on your view of the world) their existence. Therefore, such systems don't even need the premise of an "environment" or "space" to define agents.

Practically, we don't have (and it's very hard to adopt) a 'systemic' vision to adapt to Solid. You can develop one agent as an interpreter of the Wikipedia API, another agent hooked up to real museum cameras… as long as they adopt SPARQL/OWL as the service language, we can gradually migrate the newly generated data to the agents' pods.

There are two real challenges in making use of decentralized data:

  1. How to make machines deal with logical conflicts between statements (see the sketch after this list).
  2. How to make machines deal with reference changes. (Note on Nov 13: the shift in the reference-to-content mapping. Humans, as the data sources, don't always assign the same meaning to the same symbol, and it changes over time.)
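To make challenge 1 concrete, here is a toy sketch in Python - the pods, predicates and values are all invented: two pods describe the same subject, and a simple checker flags predicates we expect to be single-valued but that receive different values from different sources.

```python
# A toy sketch of detecting logical conflicts between statements coming
# from different pods. Pods, predicates and values are invented.

from collections import defaultdict

pod_a = [("museum:1", "openOn", "Monday"), ("museum:1", "city", "Lyon")]
pod_b = [("museum:1", "openOn", "Monday"), ("museum:1", "city", "Paris")]

SINGLE_VALUED = {"city"}   # predicates we expect to have exactly one value

def find_conflicts(*sources):
    """Group (subject, predicate) pairs and flag disagreeing single-valued ones."""
    values = defaultdict(set)
    for source in sources:
        for s, p, o in source:
            values[(s, p)].add(o)
    return {key: vals for key, vals in values.items()
            if key[1] in SINGLE_VALUED and len(vals) > 1}

for (s, p), vals in find_conflicts(pod_a, pod_b).items():
    print(f"conflict on {s} {p}: {sorted(vals)}")   # needs resolution, not silent merging
```

Detecting the disagreement is the easy part; deciding what a machine should then do with it is exactly the open question.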

So glad to see your statement "It all depends on the level of abstraction that you use, the context, and the point of view from which you consider a fact or a thing." It's the first time I have found someone else on this forum with the idea of shifting the focus from ontology to epistemology, which I believe is where the answers reside.

3 Likes

In my opinion, this is not a real challenge right now. At the beginning, perhaps we could develop agents that flag a blockage and ask questions in a human way, like a chatbot. For example: 'oh oh, there is a conflict here, what do I have to do?' or 'there was a reference here yesterday and it is not there anymore, what is wrong?'
Considering both (human and machine) at the same level, as agents that each have their own intelligence, capacities, and intentions… and facilitating communication between agents, there should not be a gap.
The machine could share the human's response with others, and perhaps with machine learning the machine could learn to anticipate or reduce the conflict.
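A rough sketch of that idea in Python - the conflict format, the store and the 'ask' callback are all invented for illustration: the agent escalates a conflict it cannot resolve, remembers the human's answer, and can reuse or share it next time.

```python
# A toy sketch of an agent that escalates conflicts to a human and
# remembers the answers so they can be reused or shared later.
# The conflict format and the 'ask' callback are invented for illustration.

class EscalatingAgent:
    def __init__(self, ask_human):
        self.ask_human = ask_human      # callback standing in for a chatbot
        self.resolutions = {}           # learned answers, could live on a pod

    def resolve(self, subject, predicate, candidates):
        key = (subject, predicate)
        if key not in self.resolutions:
            question = (f"oh oh, there is a conflict here: {subject} {predicate} "
                        f"is {sorted(candidates)}. What do I have to do?")
            self.resolutions[key] = self.ask_human(question)
        return self.resolutions[key]

agent = EscalatingAgent(ask_human=lambda q: (print(q) or "Lyon"))
print(agent.resolve("museum:1", "city", {"Lyon", "Paris"}))   # asks once
print(agent.resolve("museum:1", "city", {"Lyon", "Paris"}))   # reuses the stored answer
```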

2 Likes

Well, you know the multi-agent assumption of limited-scope knowledge. By introducing personalized data pods, logical conflicts will be ubiquitous between agents due to their different knowledge backgrounds. Today, even dealing with data heterogeneity (same meanings, different formats) is endlessly costly. Compared with the speed at which data is generated, dealing with logical conflicts (different meanings, ideally the same formats) is not a job humans can do in practice - not to mention the conflicts that are unverifiable (or too expensive to verify).

However, I didn't say logical conflict is a bad thing. I said "deal with" rather than "reduce". Conflicts are one of the most important parts (if not, under a more general definition, the only part) of wills, and the driving factor in the emergence of different communities. We need to release their potential.

As for reference change, sorry, I didn't say it clearly. I didn't mean reference availability issues; I meant changes in the mapping from a reference to its content. Like what you mentioned: "It all depends on the level of abstraction that you use, the context, and the point of view from which you consider a fact or a thing."

For any of us, concepts are never consistent. When a fifth-grader and a PhD both say the word "math", are they talking about the same thing? No. Even today's me and tomorrow's me grant the same concept (reference, icon, symbol, whatever) different meanings. So semantic triples (although rigorously defined) generated by me at different moments - even though the subjects read as identical - don't always mean that I was talking about the same thing. Especially when the triples are generated without rationality.
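One way to at least keep that drift visible in the data is to never store a bare triple but always record who asserted it and when - essentially named graphs or provenance annotations. A toy sketch in Python, with invented data:

```python
# A toy sketch of context-stamped statements: the same subject and predicate
# asserted at different times may carry different intended meanings, so we
# keep the asserter and timestamp alongside every triple. Data is invented.

from datetime import date

statements = [
    # (subject, predicate, object, asserted_by, asserted_on)
    ("me", "studies", "math", "me", date(2015, 9, 1)),   # grade-school "math"
    ("me", "studies", "math", "me", date(2023, 9, 1)),   # graduate-level "math"
]

def assertions_about(subject, predicate):
    """Return every contextualised assertion instead of a single merged value."""
    return [(o, who, when) for s, p, o, who, when in statements
            if s == subject and p == predicate]

for obj, who, when in assertions_about("me", "studies"):
    print(f'"{obj}" as asserted by {who} on {when}')  # identical strings, different contexts
```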

In a closed environment for multi-agent systems, we don't need to care about this, since references are always constant: time is short and the rules are static. But when it comes to the Internet, things are way more complex, and the agents (the pods, or the logic assisting them) are directly managed by humans.

2 Likes

Well @dprat0821, it seems that we are focusing on different levels of abstraction.

Indeed, from your scope mine may seem limited, as I'm focusing on the end-user level, at the level of a house, imagining robots / IoT devices / software at a human scale.

I've got a preference for 'little data' over 'big data' :blush:. At this level, the number of users is limited, the number of 'data generators' is limited too, and as you may know if you've followed me a little, I'm working on Spoggy to make it easier for a user to write / read triples on a pod.
MAS can be applied at this level first.
I think that if one day this robot needs to know where to put a new, unknown thing that we drop in the room, it could ask via a chatbot, or via Spoggy, storing the answer on the house's pod or its own pod, and it should not be too hard for it to learn that and share it with the other IoT devices/robots in the house. I think this is where the power of MAS can make the difference, all of them being sub-agents of the 'house agent'.
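A minimal sketch of that flow in Python - the 'house pod' here is just an in-memory dict standing in for a real pod, and all the names are invented: one robot learns where an unknown object goes, writes it to the shared store, and another device in the house reads it back.

```python
# A toy sketch of a shared "house pod": one robot stores what it has learned
# and the other devices in the house can reuse it. The store is just an
# in-memory dict standing in for a real pod; all names are invented.

house_pod = {}   # (thing, "storedIn") -> place

def learn_placement(pod, thing, ask_human):
    """If the robot does not know where a thing goes, ask and remember."""
    key = (thing, "storedIn")
    if key not in pod:
        pod[key] = ask_human(f"Where should I put the {thing}?")
    return pod[key]

# The vacuum robot asks once...
place = learn_placement(house_pod, "umbrella", ask_human=lambda q: "hallway stand")
print("vacuum robot:", place)

# ...and the butler robot in the same house reuses the shared answer.
print("butler robot:", house_pod[("umbrella", "storedIn")])
```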

Then the house can be considered as a sub-agent of its district, the district as a sub-agent of a town… the town as a sub-agent of a country.

What can be applied to a house can be applied to an enterprise too…

If the point of Solid is to give users power over their data, let's start with 'little data', giving users the possibility to interact with it first, before drowning them in some data lake :bath::rowing_man::surfing_man:

1 Like

Great work on Spoggy :+1:
Also, a great and meaningful start with "little data". Let's see what happens :grinning:

I'm not that familiar with agents as a paradigm, but I suspect it would be good to learn more, so I am interested in this discussion.

It reminds me of a few years ago: before Holochain added a blockchain, I was looking into various parts of their model, and one of the most fascinating was their Ceptr demos. It looks like they've collected the Ceptr-related parts here: http://ceptr.org/projects/core although I don't have time to check it out now.

It's possibly a little diversion from the core of this thread, but definitely worth a look if you haven't seen their ideas.

Looking at the repo, the code hasn't been touched for years, but I think I did run it, and they had some nice demos and screencast videos.

1 Like

Saw this in the Ceptr video:

  1. receptors have membranes and are organized fractally
  2. receptors receive and send signals
  3. receptors are lightweight VMs that manage their coherence

so receptors:

manage their membranes (in / out / coherence)
store their state (in-memory and persistent)
transform their state (by running code)
scape data (index collections, compute relationships)
sync with other instances (holographic storage and distributed Byzantine fault tolerance)


The receptors with membranes sound kind of like pods, and the transforms could be in-band hypermedia like Hydra, maybe.
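As a rough sketch of how the receptor list above could map onto code - this is my own toy interpretation in Python, not Ceptr's actual model: a membrane that filters incoming signals, some stored state, and a transform step that runs code on accepted signals.

```python
# A toy interpretation of the receptor idea above: a membrane that filters
# signals, internal state, and a transform run on accepted signals.
# This is an illustration only, not Ceptr's actual implementation.

class Receptor:
    def __init__(self, name, accepts):
        self.name = name
        self.accepts = set(accepts)   # the "membrane": which signal types get in
        self.state = []               # stored state (in memory here)
        self.children = []            # fractal organisation: receptors inside receptors

    def receive(self, signal_type, payload):
        if signal_type not in self.accepts:
            return None               # membrane rejects the signal
        self.state.append(payload)    # store
        return self.transform(payload)

    def transform(self, payload):
        # "transform their state by running code" - here just a trivial example
        return {"handled_by": self.name, "upper": str(payload).upper()}

cell = Receptor("cell", accepts={"nutrient"})
print(cell.receive("nutrient", "glucose"))   # accepted and transformed
print(cell.receive("toxin", "lead"))         # filtered out by the membrane
```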

1 Like

It's years since I looked, but ceptrs are a very interesting idea, and IIRC they have some scheme for defining their inputs and outputs so that it becomes easy to join them together. It was before I knew much about the semantic web, so I can't recall whether they used RDF at all, but it would seem a very appropriate way to define those interfaces.