Understanding security model: session restore, why not store in localStorage?

I am not a cryptography nor a browser security expert. I was wondering where there is more information / documentation I could read to understand Solid’s security & usability design decisions for federated authentication. I understand the initial sign-in needs your-app.com to redirect to pod-identity-provider.com, where your users authenticate before being redirected back to your-app.com with URL parameters containing session info. When this has occurred, why is this information (the state and code URL parameters) only held in memory and not stored in localStorage?
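For context, here is roughly how I picture the first redirect (an illustrative sketch only; real requests carry more parameters and the endpoint path is a placeholder):

```javascript
// Step 1: your-app.com sends the browser to the identity provider,
// telling it where to redirect back to, plus a random `state` value.
const authorizeUrl = new URL("https://pod-identity-provider.com/authorize");
authorizeUrl.searchParams.set("response_type", "code");
authorizeUrl.searchParams.set("redirect_uri", "https://your-app.com/signin");
authorizeUrl.searchParams.set("state", "abc123"); // anti-CSRF correlation value
// (real requests also include client_id, scope, a PKCE challenge, ...)
window.location.assign(authorizeUrl.toString());
```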

If it is stored in localStorage then any script running on that subdomain could access the information and send it to a different website, from which the user’s session could be misused, correct? However, the user is still trusting your-app.com not to be malign, i.e. your-app.com can misuse its access to the Solid pod resources you have given it access to. I do not yet understand what additional security comes from not storing the information in localStorage. Is the security model of other services like Supabase (which stores your session / session refresh tokens in localStorage) making the assumption that your-app.com and all the scripts you load onto it are not malign?

A separate but related question: if your-app.com captures the parameters and saves them to local storage, and then when the user refreshes the page your-app.com sets these same parameters before calling handleIncomingRedirect, does this work, or are the URL parameters one-time use only? … I’m asking because if they are not one-time use, one could easily subvert the current security model to avoid the “refresh” step, i.e. when a user first logs in you use a custom client that stores code and state in localStorage, so that if the user immediately refreshes the page they can still use the application straight away, without the redirect to the auth service and back to your app (with a new pair of state and code URL parameters). And more importantly, please could you point me to where I could have found that answer myself? Thank you. I guess this is in the code somewhere.

I don’t think I’ve worded this very succinctly, but any assistance in understanding this security flow would be appreciated.

I think you are talking about the JavaScript client authentication libraries, is that right?

What do you mean by local storage?

Also, what information specifically do you mean by “the information”?

These decisions seem like choices made by the code you are using to do authentication.

I’d say the full security model is still being finalised, including being able to restrict access to resources based on the client application, i.e. it is ultimately the intention that a non-secure Solid app that is trusted for some data would not be permitted access to more sensitive data.

Yes, avoiding storing secrets in local storage is a strategy used by Inrupt’s solid-client-authn precisely to avoid security vulnerabilities.

Some discussion you might already have come across:

My intuition is also that yes, it would make sense to call handleIncomingRedirect first. This is also the pattern adopted by onSessionRestore. In addition to any possible security risks, it makes sense to restore state only after you know whether your app is authenticated.
https://docs.inrupt.com/developer-tools/javascript/client-libraries/tutorial/restore-session-browser-refresh/#session-restore-event-handler
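From that page, my understanding of the restore pattern is roughly the following (a hedged sketch; the history.replaceState call is just my example of restoring app state):

```javascript
import {
  handleIncomingRedirect,
  onSessionRestore,
} from "@inrupt/solid-client-authn-browser";

// Called when a previous session is silently restored after a refresh;
// a typical use is routing back to where the user was.
onSessionRestore((currentUrl) => {
  history.replaceState(null, "", currentUrl);
});

// Run on every page load: consumes a fresh code/state pair if present,
// otherwise attempts to restore the previous session.
const info = await handleIncomingRedirect({ restorePreviousSession: true });
if (info?.isLoggedIn) {
  console.log(`Logged in as ${info.webId}`);
}
```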

Yes, that’s correct; apologies for the ambiguity.

Window: localStorage property - Web APIs | MDN :slight_smile:

When the identity provider completes authentication of the user, it redirects the user’s browser back to the path the user first signed in from and sets some query parameters, e.g. your-app.com/signin?state=abc123&code=secret456. In this case abc123 and secret456 are what I am referring to, as I assume that with that information the application at your-app.com can obtain whatever it needs to access the user’s resources that it has been authorised to access.
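In code terms, this is what I mean (standard Web APIs, nothing Solid-specific):

```javascript
// Reading the query parameters the identity provider appended on redirect.
const params = new URLSearchParams(window.location.search);
const state = params.get("state"); // "abc123": correlates request and response
const code = params.get("code");   // "secret456": the one-time authorization code
```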

Correct, specifically solid-client-authn as @josephguillaume mentioned.

I want to understand why this security model has been chosen, as it adds a couple of redirects, which is slower (some might say a very poor user experience). Most other web applications, if the user has just signed in and then refreshes the page or opens a second tab, will load the user’s valid session state synchronously from local storage or a cookie and immediately allow the user access to their data via your-app.com (for example). I’m curious, but I primarily want a good justification to give to someone who asks “why do they not just do …?”, where … might be JWTs stored in localStorage etc.
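Roughly the pattern I have in mind (illustrative only; the storage key and token shape are made up, and this is what Supabase-style apps do, not anything the Solid libraries do):

```javascript
// On page load: restore the session synchronously from localStorage.
// No redirects, so the app is usable immediately, but any script running
// on the page can read the same tokens just as easily.
const raw = localStorage.getItem("your-app.session");
if (raw) {
  const { accessToken, refreshToken } = JSON.parse(raw);
  // ... attach accessToken to API requests straight away ...
}
```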

Thank you. Yes, I’d read that before but could not recall where when I first posted this question. Thanks.

It’s the comments from Vinnl and NSeydoux which I’d like to grok. I understand that putting some auth info into localStorage would allow malicious third-party scripts you have added to your-app.com to quickly copy and send the auth info to evil.com and then do bad things, without your browser tab at your-app.com even being open any more. But I think if you have these malicious scripts running on your-app.com, they would still be able to access the Solid client session, ask it to read/write info in your Solid storage, and do bad things that way? Could they also add their own malicious users as editors of all your data?

In my understanding, the primary defence against malicious scripts on your-app.com is to not trust apps you don’t actually trust, and then (eventually) to limit what data each client app has access to.

Beyond that, it’s a question of trusted JS app security generally.
In principle you could ensure the session object is not stored in the global scope but is instead kept, for example, inside an anonymous function (sketched below).
You can also limit which scripts are permitted to run using a Content Security Policy: Content Security Policy (CSP) - HTTP | MDN
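As a hedged sketch of the first idea (the wrapper function name is made up, and this only raises the bar for an attacker rather than being a complete defence):

```javascript
import { Session } from "@inrupt/solid-client-authn-browser";

// Keep the Session captured inside a closure rather than on `window`,
// so an injected script cannot reach it by a well-known global name.
const makeAuthedFetch = () => {
  const session = new Session();
  // ... session.login() / session.handleIncomingRedirect() happen here ...
  return (url, init) => session.fetch(url, init);
};

const authedFetch = makeAuthedFetch();
```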

This is also why you shouldn’t grant “control” privileges to apps that you don’t truly trust.

I assume by “not trust apps you don’t actually trust” you mean “not include third-party scripts in your-app.com that you (as the developer of your-app.com) don’t actually trust”? In which case I completely agree, and my follow-up is: once the developer of your-app.com is content that they are providing trustworthy third-party scripts to support their website, and if the user trusts your-app.com, then why the decision not to store the session and session restore token in localStorage? This is what I would very much like to grok :slight_smile:

I actually meant “trust” in the sense of acl:trustedApp in foaf:PersonalProfileDocument but yes, third party scripts also need to be trusted.

Because it’s still possible there is a security vulnerability somewhere in there :slight_smile:

There’s an abundance of caution here because this could be medical or financial data.

Yes, it’s an interesting counterfactual to explore, i.e. when there are 10,000+ users of Solid apps storing medical / financial data in their Pods, what are the justifiable/best/necessary ways to increase the security of their data? For example, it might be that the current approach is insufficient, as there is no two-factor auth for login, and perhaps even no synchronous message to a second channel, like your mobile phone, when your data is being read. Or, going even further down the security path: a “two-factor read”, i.e. before any data in your pod (which you have marked sensitive) is accessed, you get a message via text, or another application, asking you to confirm you are OK with this data being read. I personally would really like to have that safety around my medical data; data which just got sold for a pittance by my government without my consent.

Having said that, I have very sensitive medical, financial and personal data in numerous websites already. Credit card info stored all over the place. Medical notes in online file stores and email servers. Facebook / Google / Twitter, from my likes, comments and activity, likely know exactly where I live and know me better than my partner… that’s dangerous information I’d prefer they did not have.

I personally feel we need to take the “we have to be safe” approach and also explicitly balance it with “we have to provide a good experience for our users and developers”, so that we can maximise the number of Solid users and thus maximise success: maximise the number of users whose data we are helping to protect. That is, there will be many local optima of security versus usability (and users). As we make it more secure it may also become more onerous to use, and at one extreme most people will not use Solid, they will not protect their data, and the vision will have failed to maximise its goal.
Obviously we will iterate on these trade-offs and ideas; there will be many right answers to find. I would dearly like to explore those dimensions in this thread (or, preferably, be pointed to previous documentation of their exploration). I don’t think we’ll have any data yet, but we could get some: e.g. ask users which they prefer, and mock up a website that does the current double-redirect approach versus one with “instant” session restore that might be less safe than option 1 but safer than the users’ current apps.

If you will be compassionate and forgiving with me, I would go so far as to say this current security approach might be a form of premature optimisation: premature security. I am very open to being persuaded on this, but my primary fear is that there will never be a day when there are millions or billions of Solid users with their data safer in Solid pods, because the user experience is significantly poorer / more confusing for new users of Solid apps (and their Solid app developers). @timbl hinted at this previously: Does not stay logged in after refresh · Issue #423 · inrupt/solid-client-authn-js · GitHub


To re-explore this from one technical angle: if there is malicious code, then the current technical solution would not prevent it from doing harm, because instead of copying secrets from local storage and using them from another site, the code could just use the Solid session to fetch the data and send that?

It would seem like the next iteration of the Solid spec, server & client would benefit from the approach @NoelDeMartin outlined here.

This is a very interesting thread, and I’ll try my best to respond to some of the questions that have been asked. Before getting into the details, though, I think it’s good to remember that what we do with Solid is only an additional piece on top of many existing standards. In the domain of authentication, far from reinventing the wheel, the work of the panel extends OpenID Connect, itself based on OAuth 2.0. The (very relevant) questions asked here also apply to those standards, so you may find answers outside the Solid sphere too :slightly_smiling_face:

Actually, the code isn’t stored at all: it is only valid once. Once it has been exchanged for an Access Token, an ID token, and optionally a refresh token, it may be discarded; re-using it would not only fail, it would also revoke the tokens previously issued using this code.
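To illustrate, the exchange performed under the hood looks roughly like this (a hedged sketch of the standard OAuth 2.0 request, not our exact implementation; the endpoint, client_id, and PKCE verifier are placeholders):

```javascript
// Exchanging the one-time authorization code for tokens (RFC 6749 §4.1.3).
const response = await fetch("https://pod-identity-provider.com/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "authorization_code",
    code: "secret456",                        // from the redirect; single use
    redirect_uri: "https://your-app.com/signin",
    client_id: "your-app-client-id",
    code_verifier: "placeholder-pkce-verifier",
  }),
});
const { access_token, id_token, refresh_token } = await response.json();
// Replaying the same `code` fails, and the server SHOULD revoke any tokens
// previously issued based on it (RFC 6749 §4.1.2).
```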

What isn’t stored in local storage, though, is the resulting Access Token (and associated DPoP key).

I’m not 100% sure, but I think it would indeed be possible for a malicious script to make an authenticated request under some circumstances (it’s already bad enough that it has been loaded by a trusted domain). However, having a malicious script acting on your behalf for a limited period of time in a limited context is bad; having it able to keep being nefarious forever, because it can issue itself fresh tokens, is much worse. This is one of the reasons why the tokens aren’t stored in local storage: it limits the scope of potential attacks.

I understand this feeling, and I’m also very frustrated when we feel like security is getting in the way of user experience. However, I look at it this way: users don’t think about security because they assume things are secure. Phrased differently, security is so core to the user experience that they don’t imagine using an insecure system. The problem with not thinking about security from the very beginning is: when do you start? How can you get an ecosystem that has started relying on insecure mechanisms to adopt secure ones? It’s much harder than growing on a secure basis. And if some data breach happens because of security issues, it’s game over: the people who lost their data will never trust Inrupt again, probably not even Solid, and everyone who hears about it will remember that Solid is insecure.

That said, I very much agree that both user and developer experience are essential, and should not be neglected under any circumstances. Enabling desirable use cases while keeping them secure is challenging, but it is a challenge we must overcome for Solid to prevail :slight_smile:
