5
votes

I have a Rails app acting as an OAuth 2.0 provider (using the oauth2-provider gem). It stores all the information related to users (accounts, personal information, and roles). There are 2 client apps that both authenticate through this app. The client apps can use the client_credentials grant type to find users by email and do other things that don't require an authorization code. Users can also log in to the client apps using the password grant type.
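For reference, the two grants look roughly like this from a client app's point of view. This is only a minimal sketch; the /oauth/token path, parameter values, and credentials are placeholders, since the exact routes depend on how the oauth2-provider gem is mounted:

```ruby
require "net/http"
require "uri"
require "json"

# Assumed token endpoint; the real path depends on the provider's routes.
TOKEN_URL = URI("https://provider.example.com/oauth/token")

# client_credentials grant: the client app acts as itself, e.g. to look
# up users by email without any user being involved.
client_response = Net::HTTP.post_form(TOKEN_URL,
  "grant_type"    => "client_credentials",
  "client_id"     => "app1-client-id",     # placeholder credentials
  "client_secret" => "app1-secret"
)

# password grant: the user logs in to the client app with the
# credentials that live on the resource host.
user_response = Net::HTTP.post_form(TOKEN_URL,
  "grant_type"    => "password",
  "client_id"     => "app1-client-id",
  "client_secret" => "app1-secret",
  "username"      => "[email protected]",      # placeholder user
  "password"      => "secret"
)

client_token = JSON.parse(client_response.body)["access_token"]
user_token   = JSON.parse(user_response.body)["access_token"]
```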

Now the issue we're facing is that the users' roles are defined globally on the resource host. So if a user is given an admin role on the resource host, that user is an admin on both clients. My question is: what should we do to get more fine-grained access control, i.e. so that a user can be an editor for app1 but not for app2?

I guess the easy way to do this would be to change the role names like so: app1-admin, app2-admin, app1-editor, app2-editor, etc. The bigger question is: are we implementing this whole system correctly; that is, should we be storing so much info on the resource host, or should we denormalize the data onto the client apps?
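Instead of baking the app name into the role string, one alternative is to scope each role assignment to a registered client on the resource host. The following is only a rough sketch; the model and column names (ClientApp, RoleAssignment, client_app_id) are hypothetical:

```ruby
# Hypothetical resource-host models: instead of mangling role names
# (app1-admin, app2-admin, ...), each role assignment is scoped to the
# OAuth client it applies to.
class ClientApp < ActiveRecord::Base     # one row per registered client app
  has_many :role_assignments
end

class Role < ActiveRecord::Base          # name: "admin", "editor", ...
  has_many :role_assignments
end

class RoleAssignment < ActiveRecord::Base
  belongs_to :user
  belongs_to :role
  belongs_to :client_app
end

class User < ActiveRecord::Base
  has_many :role_assignments

  # Does this user hold the named role for the given client app?
  def role_for?(role_name, client_app)
    role_assignments.joins(:role)
                    .where(roles: { name: role_name }, client_app_id: client_app.id)
                    .exists?
  end
end

# user.role_for?("editor", app1)   # => true
# user.role_for?("editor", app2)   # => false
```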

A denormalized architecture would look like this: all user data on the resource host, localized user data on each client host. So [email protected] would have his personal info on the resource host and his editor role stored on client app1. If he never uses app2, that app could remain completely unaware of his existence.
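On the client side, the denormalized variant might look roughly like this, assuming the client keys its local records by an account id it gets back from the resource host (LocalUser, LocalRole, and the token payload fields are hypothetical names):

```ruby
# Hypothetical client-app (app1) models: only the localized data lives
# here, keyed by the account id handed out by the resource host.
class LocalUser < ActiveRecord::Base
  # columns: provider_account_id, cached_email, ...
  has_many :local_roles
end

class LocalRole < ActiveRecord::Base
  # columns: local_user_id, name ("editor", "admin", ...)
  belongs_to :local_user
end

# On first login via the password grant, app1 creates its local record;
# app2 never does, so it stays unaware of the user.
def find_or_create_local_user(token_payload)
  LocalUser.find_or_create_by(provider_account_id: token_payload["account_id"]) do |user|
    user.cached_email = token_payload["email"]
  end
end
```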

The drawback of the denormalized model is that there would be a lot of duplication of data (account IDs, roles) and code (User and Role models on each client, separate management interfaces, etc.).

Are there any drawbacks to keeping the data separate? The client apps are both highly trusted (we made them both), but in the future we are likely to add additional client apps that are not under our control.


1 Answer

3
votes

As I see it, the proper way to use OAuth and similar external authorization methods is strictly for authentication purposes. All the business/authorization logic should be handled on your server side at all times, and you should always keep a central record of the user, linking to the external info per type of external auth service.
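As a rough illustration of the central-record idea (not taken from any particular gem; User, Identity, and the column names are hypothetical), authentication only resolves an external identity to your own user record, and all authorization then happens against that record:

```ruby
# Hypothetical models: the User record is yours; each external auth
# service only contributes an Identity row linked to it.
class User < ActiveRecord::Base
  has_many :identities
end

class Identity < ActiveRecord::Base
  # columns: user_id, provider ("oauth-provider", "google", ...), external_uid
  belongs_to :user
end

# Authentication resolves an external identity to your own User record;
# every authorization decision is then made against that record.
def user_for(provider, external_uid)
  Identity.find_by(provider: provider, external_uid: external_uid)&.user
end
```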

Having a multilevel/multipart set of access levels is also a must if you want your setup to be scalable and future-proof. This is a standard design that is kept separate from any authorization logic and is always in direct relation to your business rules.

Stack Overflow does something like this, asking you to create an actual account on the site after you log in using an external method.

Update: If the sites are really similar, you can extend this design with an object per application that holds the application-specific access rules. This object should also inherit from a global object that holds the global rules (so you can, for example, impose a ban application-wide or enterprise-wide).
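A minimal sketch of that inheritance, assuming plain Ruby objects where the per-application settings fall back to a shared global object (all class and attribute names are hypothetical):

```ruby
# Hypothetical access-settings objects: application-specific rules
# inherit from and fall back to the global (enterprise-wide) rules.
class GlobalAccessSettings
  attr_accessor :banned

  def initialize(banned: false)
    @banned = banned
  end

  def banned?
    banned
  end
end

class ApplicationAccessSettings < GlobalAccessSettings
  def initialize(global:, banned: false)
    super(banned: banned)
    @global = global
  end

  # An enterprise-wide ban always wins over the per-application setting.
  def banned?
    @global.banned? || super
  end
end

global = GlobalAccessSettings.new
app1   = ApplicationAccessSettings.new(global: global)
app2   = ApplicationAccessSettings.new(global: global, banned: true)

app1.banned?   # => false (banned only in app2)
app2.banned?   # => true
# Flipping global.banned to true would ban the user everywhere at once.
```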

I would go for objects that contain access settings, plus roles that can be related to instances of both application-level and global settings, purely to automate/compact the assignment of access.

Actually, you can use this design even if the applications are not that similar. It will help you avoid redundant settings and roles that are meaningless business-wise. You can identify a role purely by its job title/purpose, and then impose your restrictions by linking it to an appropriate access-settings setup.
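Building on the hypothetical objects above, a role could then be identified only by its job title and simply bundle the access-settings instances it applies to:

```ruby
# Hypothetical: the role carries no business rules itself; it is just a
# job title linked to access-settings instances (global or per-application).
class Role
  attr_reader :title, :access_settings

  def initialize(title, access_settings)
    @title = title
    @access_settings = access_settings
  end

  # The restrictions come from the linked settings, not from the role name.
  def allowed_in?(settings)
    access_settings.include?(settings) && !settings.banned?
  end
end

editor = Role.new("Editor", [app1])   # an editor in app1 only
editor.allowed_in?(app1)              # => true
editor.allowed_in?(app2)              # => false
```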