Am I correct (as I think you suggest in the final paragraph) that this is a limitation in Solid overall as currently specified, rather than just in Inrupt's implementation? Again, I'm new here, but as I dig into the Solid ecosystem via the docs, GitHub, forum, etc., I don't see a technical guarantee built in that data isn't exfiltrated. The "guarantee" is that the business has to ask explicitly for permission, rather than burying it in a lengthy ToS?
I say "limitation" rather than "flaw" because different levels of security guarantees for different use cases seems potentially fine to me. Architecturally, I can see an argument like: "there should be a base-level Solid protocol/design that supports diverse use cases, with additional levels of security layered on top where it makes sense".
(Also, maybe I'm misunderstanding the goals of Solid? I do see at least some discussion here suggesting that interoperability, i.e. the decoupling of app and data, is more central than the data-control piece.)
Has anything been planned or discussed in which Pods would integrate with Confidential Computing solutions like TEEs, restricted browser worklets, etc., even if not as part of "core Solid"? This would seem to open up a class of use cases requiring both intense compute and strong privacy guarantees: a business provides the compute resources in a sandbox (an organization-hosted pod, or a server fetching the data), and the user's pod releases data only to a server that has passed remote attestation.
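To make the idea concrete, here is a minimal sketch of the gate I have in mind, in Python. Everything here is hypothetical: `Pod`, `sign_quote`, the shared key, and the HMAC-based "quote" are stand-ins for a real remote-attestation flow (e.g. SGX DCAP or TPM quotes verified against a vendor trust root), which is far more involved. The point is only the access-control shape: the pod checks (1) that the quote is vouched for by a verifier it trusts, and (2) that the attested code measurement is on the user's allow-list, before releasing any data.

```python
import hmac
import hashlib

# Hypothetical stand-in for trust in an attestation verifier / CA.
ATTESTATION_SERVICE_KEY = b"shared-secret-with-attestation-service"

def sign_quote(measurement: bytes) -> bytes:
    # Played by the attestation service: vouches that a real enclave
    # with this code measurement is running. (Real quotes are signed
    # with hardware-rooted keys, not a shared secret.)
    return hmac.new(ATTESTATION_SERVICE_KEY, measurement, hashlib.sha256).digest()

class Pod:
    def __init__(self, data: dict, trusted_measurements: set):
        self.data = data
        # Code hashes the user has explicitly approved to receive data.
        self.trusted = trusted_measurements

    def fetch(self, resource: str, measurement: bytes, quote: bytes) -> str:
        # Gate 1: is the quote genuine (i.e. does the verifier vouch for it)?
        expected = hmac.new(ATTESTATION_SERVICE_KEY, measurement, hashlib.sha256).digest()
        if not hmac.compare_digest(quote, expected):
            raise PermissionError("attestation quote invalid")
        # Gate 2: is this specific enclave code on the user's allow-list?
        if measurement not in self.trusted:
            raise PermissionError("enclave code not on user's allow-list")
        return self.data[resource]

# Usage: the business's enclave binary was measured at build time,
# and the user approved that measurement for their health records.
enclave_hash = hashlib.sha256(b"audited-analytics-enclave-v1").digest()
pod = Pod({"/health/records": "sensitive-data"}, {enclave_hash})
print(pod.fetch("/health/records", enclave_hash, sign_quote(enclave_hash)))
```

A server presenting a valid quote for a measurement the user never approved would fail at the second gate, which is what would distinguish this from today's "the app promised to behave" model.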