I created an issue about creating a tool for auto-blocking based on patterns (https://github.com/solid/solid/issues/241). But I realized this is part of a bigger question, namely which tools we want to have as part of Solid that stop (or at least limit) harassment.
I want to raise this issue for the community and hear what people think.
It’s a complex issue with many difficult sides, so I hope we get some productive discussions that might result in ideas/solutions we can integrate into the core parts of Solid.
Ah, lots of fun :-)
If blocking by pattern, you might need a way to unblock individuals you like! The human creature is a peculiar one; two people can say the same thing and, by inflection, mean just the opposite.
This is really a fascinating topic, because abuse can manifest itself in so many different ways.
For example, a piece of content or data could be considered abusive. If I have a chat and messages coming into my pod are threatening or abusive, I would want the ability to block the source, or to block similar behavior if I so choose.
Additionally, abuse could be based on volume of activity. For example, if I share some media (like images or video) and usage skyrockets, far exceeding my expectations, I would want the choice to apply some rate limiting or outright blocking, either across the board or to specific parties. Otherwise I could run the risk of getting slammed with bandwidth costs or, even worse, making the rest of my data inaccessible because of system/network resource constraints.
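The volume-based case could be sketched as a per-agent token bucket on the pod server. This is only a sketch under my own assumptions: the `check_request` hook and the WebID keys are hypothetical illustrations, not part of any Solid spec or server.

```python
import time

class TokenBucket:
    """Per-agent token bucket: allows `rate` requests/second, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical server-side hook: one bucket per requesting agent's WebID.
buckets: dict[str, TokenBucket] = {}

def check_request(webid: str, rate: float = 5.0, burst: float = 10.0) -> bool:
    bucket = buckets.setdefault(webid, TokenBucket(rate, burst))
    return bucket.allow()
```

Per-party limits (or an outright block) would just mean choosing different `rate`/`burst` values, or zero, for specific WebIDs.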
(Cross-posting from issue, but I think the text works as well here)
I would think there is a market for multiple tools that can be used for this, each with its own take on how the problem can be solved. They should also work in alignment rather than exclude each other, i.e. like normal Solid apps.
The tools/services/apps/whatever could be targeted toward POD providers or users directly, depending on the underlying mechanics provided by Solid and the needs of the users and POD providers. I would imagine most users not wanting to bother with the details, but having tools easy enough that they can block users, and maybe auto-block based on patterns.
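To make "auto-block based on patterns, with a way to unblock individuals you like" concrete, here is a minimal sketch. The patterns, the allowlist, and the example WebIDs are all invented for illustration; a real tool would presumably store these rules in the user's pod.

```python
import re

# Hypothetical rules: block any WebID matching a pattern...
BLOCK_PATTERNS = [re.compile(p) for p in [
    r"https://spammy\.example/.*",  # an entire pod provider
    r".*troll.*",                   # a name pattern
]]
# ...unless the individual is explicitly allowed (overrides the patterns).
ALLOWLIST = {"https://spammy.example/profile/alice#me"}

def is_blocked(webid: str) -> bool:
    if webid in ALLOWLIST:
        return False
    return any(p.fullmatch(webid) for p in BLOCK_PATTERNS)
```

The allowlist check comes first, which is exactly the "unblock individuals you like" escape hatch mentioned above: Alice's WebID matches a blocked provider pattern but still gets through.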
How these mechanisms work might be an interesting topic for the Solid Community Group, and maybe even be formalized as standards.
We should start by looking at the ways Solid could be abused, and then see what to do about it.
Off the top of my head, there is for example also the concept of malicious linking: if these links are machine-readable and can change a person’s reputation, adding lots of links with bogus or harmful info about someone could harm them.
Notification spamming could also become a problem.
Very interesting topic. Another one is “abusive behavior of my data” by third parties, e.g. Facebook. So many fun/challenging issues to handle.
Another issue is grave misrepresentation of others, as this tweet expresses: https://twitter.com/Yair_Rosenberg/status/1083036616283832321?s=20
There are probably no easy answers, solutions, or tools to stop this from happening. But unless we can discuss it in a fruitful and constructive manner, there is very little chance that our attempts at resolving these issues will work.
Will you post it to the Solid CG, as you mentioned earlier, to start the constructive process of sorting this out?
That is the plan, just want to sort out with the team the proper way of taking this forward.
In the aftermath of the Great Storm of 1900 in Galveston, Texas, looters and photographers found taking pictures of the dead were executed on the spot. Now the internet hosts much horrendous content; where do we draw the line between blocking such content and preserving freedom?
Well, the thing is that you could decide for yourself. Thus, we would need tools/methods to hide or stop behaviour we don’t want to see. If you send me pictures of dead people, I can block you from sending me pictures. Newspapers might do the opposite.
Another thing the current implementation somewhat lacks: all the Solid apps I have seen so far seem to get access to the whole pod, regardless of what else is there.
Users really need an easy and consistent way of understanding what access an app needs, and of limiting that access to a subfolder of the pod or in other ways. A chess game app shouldn’t even theoretically be able to change my health data.
@JollyOrc True, it would be nice to have an app that shows an overview of the access rights of the different apps. On my phone, for example, I can see per app what they can access: location, mic, camera, photos, and so on.
I would make instance containers/folders visible only to the owner and authorized apps/people, and then do away with the public folder (or at least make created folders visible only to the owner). When you parse the publicTypeIndex (as per the data discovery workflow), you get the whole document in the response. With the above change, the user could instead approve read/append authorization for the specific containers an app asks for, and the app would not be able to fetch anything else, even though it otherwise has the information to do so.
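One way such per-container scoping could be expressed is with the Web Access Control (WAC) vocabulary that Solid servers already use for ACL resources. The sketch below is illustrative only: the WebID, app origin, and container path are invented, and `acl:origin` is an app-identification mechanism implemented by some servers (e.g. node-solid-server) rather than something every server supports.

```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#>.

# Hypothetical: the chess app, acting for Alice, may read/append
# inside /games/chess/ only — nothing else in the pod.
<#chessAppAccess>
    a acl:Authorization;
    acl:agent <https://alice.example/profile/card#me>;  # still bound to the owner's WebID
    acl:origin <https://chess-app.example>;             # the app's origin
    acl:accessTo </games/chess/>;
    acl:default </games/chess/>;                        # inherited by resources in the container
    acl:mode acl:Read, acl:Append.
```

With no other authorization naming that origin, the chess app could not touch health data even though the owner herself has full control.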
As for misrepresentation: if this is all supposed to be indexable, maybe in real time as content goes out, then you could have one app that notifies a user when they are mentioned, and another app that handles reporting content found on a POD. The report would be sent to the provider and the app creator, kind of like a ticket system that updates you on the progress being made. The app creator would get the choice to ban/suspend/warn a user, and could also have the content removed automatically. There is, though, the technical issue that we are making files unmodifiable to begin with.