Which tools do we want to stop abusive behavior on the Solid platform?


#1

I created an issue about creating a tool for auto-blocking based on patterns (https://github.com/solid/solid/issues/241). But I realized this is part of a bigger question, namely: which tools do we want to have as part of Solid that stop (or at least limit) harassment?

I want to raise this issue for the community, and hear what people think :slight_smile:

It’s a complex issue with a lot of difficult sides, so hope to get some productive discussions that might result in ideas/solutions that we can integrate into the core parts of Solid.


#2

ah lots of fun:-)

If you block by pattern, you might also need a way to unblock individuals you like! The human creature is a peculiar one; two people can say the same thing and, by inflection, mean just the opposite.


#3

This is really a fascinating topic, because abuse can manifest itself in so many different ways.

For example, a piece of content or data could be considered abusive. If I have a chat and messages coming into my pod are threatening or abusive, I would want the ability to block the source, or to block similar behavior if I so choose.

Additionally, abuse could be based on volume of activity. For example, if I share some media (like images or video) and the usage skyrockets, far exceeding my expectations, I would want the choice to apply some rate limiting or outright blocking, either across the board or to specific parties. Otherwise I could run the risk of getting slammed with bandwidth costs, or even worse, making the rest of my data inaccessible because of system/network resource constraints.
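To make the rate-limiting idea a bit more concrete, here is a rough token-bucket sketch in TypeScript. Everything here (class names, the per-agent bookkeeping, the burst and refill numbers) is invented for illustration; Solid does not currently specify anything like this, and a real pod server would enforce it at the HTTP layer.

```typescript
// Hypothetical per-client token-bucket rate limiter. Nothing here is part
// of the Solid specification; it is a sketch of one way a pod server could
// throttle an agent whose request volume skyrockets.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,        // maximum burst size
    private refillPerSecond: number, // sustained request rate
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it should be throttled.
  tryConsume(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// A pod server could keep one bucket per requesting agent (or per IP):
const buckets = new Map<string, TokenBucket>();

function allowRequest(agent: string, now: number = Date.now()): boolean {
  let bucket = buckets.get(agent);
  if (!bucket) {
    bucket = new TokenBucket(10, 1, now); // burst of 10, 1 req/s sustained
    buckets.set(agent, bucket);
  }
  return bucket.tryConsume(now);
}
```

The nice property of a token bucket is that it allows short bursts (the capacity) while capping the sustained rate (the refill), which matches the "my usage skyrocketed" scenario better than a hard per-second cap.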


#4

(Cross-posting from issue, but I think the text works as well here)

I would think there is a market for multiple tools that can be used for this, each with its unique take on how the problem can be solved. They should also work in alignment, and not exclude each other, i.e. like a normal Solid app.

The tools/services/apps/whatever could be targeted toward pod providers or users directly, depending on the underlying mechanics provided by Solid and the needs of the users and pod providers. I would imagine most users would not want to bother with it, but should have tools easy enough that they can block users, and maybe auto-block based on patterns.
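As a rough illustration of what such a pattern-based auto-block tool might look like on the client side (all type names and rules here are hypothetical, not part of any Solid spec), a filter could match incoming messages against user-defined patterns and add matching senders' WebIDs to a block list:

```typescript
// Hypothetical sketch of pattern-based auto-blocking for a Solid app.
// None of these types or rules come from a Solid specification; they are
// illustrative only.

interface IncomingMessage {
  senderWebId: string; // e.g. "https://sender.example/profile#me"
  text: string;
}

class AutoBlocker {
  private blocked = new Set<string>();

  // User-defined patterns that mark a message as abusive (hypothetical).
  constructor(private patterns: RegExp[]) {}

  isBlocked(webId: string): boolean {
    return this.blocked.has(webId);
  }

  // Manual override, since pattern matching will have false positives.
  unblock(webId: string): void {
    this.blocked.delete(webId);
  }

  // Returns true if the message should be shown to the user.
  accept(msg: IncomingMessage): boolean {
    if (this.blocked.has(msg.senderWebId)) return false;
    if (this.patterns.some((p) => p.test(msg.text))) {
      this.blocked.add(msg.senderWebId);
      return false;
    }
    return true;
  }
}
```

The `unblock` method matters as much as the blocking itself, echoing the earlier point in this thread that patterns cannot capture inflection and will sometimes misfire on people you actually want to hear from.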

How these mechanisms work might be an interesting topic for the Solid Community Group, and maybe even be formalized as standards.


#5

We should start by looking at ways in which Solid could be abused, and then see what to do about it.

Off the top of my head, there is for example also the concept of malicious linking: if links are machine-readable and can affect a person's reputation, adding lots of links with bogus or harmful info about someone could harm them.

Also notification-spamming could become a problem.


#6

Very interesting topic. Another one is "abusive handling of my data" by a 3rd party, e.g. Facebook. So many fun/challenging issues to handle.


#7

Another issue is grave misrepresentation of others, such as this tweet expresses: https://twitter.com/Yair_Rosenberg/status/1083036616283832321?s=20

There are probably no easy answers, solutions, or tools to stop this from happening. But unless we can discuss it in a fruitful and constructive manner, there is very little chance that our attempts at resolving these problems will work.


#8

Will you post it to the Solid CG as you mentioned earlier, to start the constructive process of sorting this out?


#9

That is the plan; I just want to sort out with the team the proper way of taking this forward.


#10

In the aftermath of the Great Storm of 1900 in Galveston, Texas, looters and photographers found taking pictures of the dead were executed on the spot. Now the internet has much horrendous content; where do we draw the line between that and freedom?


#11

Well, the thing is that you could decide for yourself. Thus, we would need tools/methods to hide/stop behaviour we don't want to see. If you send me pictures of dead people, I can block you from sending me pictures. Newspapers might do the opposite.
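That "decide for yourself" idea can be sketched as per-recipient policies: each pod owner applies their own rule, so the same content is blocked by one recipient and welcomed by another. The `Policy` type and content tags below are made up for illustration; nothing here is specified by Solid.

```typescript
// Hypothetical sketch: each pod owner keeps their own acceptance policy,
// so identical content can be rejected by one recipient and accepted by
// another. The Policy shape and tags are invented for illustration.

type Policy = (senderWebId: string, tags: string[]) => boolean;

// Private individual: block specific senders outright.
const personalPolicy = (blocked: Set<string>): Policy =>
  (sender, _tags) => !blocked.has(sender);

// Newsroom: accept everything, even graphic material.
const newsroomPolicy: Policy = () => true;
```

The point is that the judgment lives with the recipient, not with the platform: Solid would only need to supply the hooks where such a policy can run.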