I'm setting up integration between Vault and a Postgres instance running in AWS RDS. My strategy is as follows:
1. Database registration
a. Use the RDS master credentials to create a role named server_admin:
CREATE ROLE server_admin WITH LOGIN PASSWORD '${password}' CREATEROLE CREATEDB
b. Register the database with Vault, using the server_admin Postgres role (Python hvac library):
vault.write("database/config/{0}".format(database_identifier), plugin_name=plugin_name, connection_url=connection_url, allowed_roles="*")
c. Next, create a Vault role for administration (schema creation):
vault.write("database/roles/{0}".format(role_name), db_name=database_identifier, creation_statements='CREATE ROLE "{{name}}" WITH LOGIN INHERIT PASSWORD \'{{password}}\' IN ROLE server_admin;', default_ttl="10000h")
2. Schema registration
a. Use Vault to obtain a user with the server_admin role.
b. Create the schema:
CREATE SCHEMA IF NOT EXISTS ${schema_name} AUTHORIZATION server_admin
c. Create three Vault roles (read_only, read_write, admin), sketched below.
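Here is roughly how those three roles are created; the schema name (myschema), the specific grants and the TTLs are illustrative:
# Illustrative role definitions for a schema named "myschema"; adjust grants/TTLs as needed.
vault.write(
    "database/roles/myschema_read_only",
    db_name=database_identifier,
    creation_statements='CREATE ROLE "{{name}}" WITH LOGIN PASSWORD \'{{password}}\'; '
                        'GRANT USAGE ON SCHEMA myschema TO "{{name}}"; '
                        'GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO "{{name}}";',
    default_ttl="1h",
)
vault.write(
    "database/roles/myschema_read_write",
    db_name=database_identifier,
    creation_statements='CREATE ROLE "{{name}}" WITH LOGIN PASSWORD \'{{password}}\'; '
                        'GRANT USAGE ON SCHEMA myschema TO "{{name}}"; '
                        'GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA myschema TO "{{name}}"; '
                        'GRANT USAGE ON ALL SEQUENCES IN SCHEMA myschema TO "{{name}}";',
    default_ttl="1h",
)
vault.write(
    "database/roles/myschema_admin",
    db_name=database_identifier,
    creation_statements='CREATE ROLE "{{name}}" WITH LOGIN INHERIT PASSWORD \'{{password}}\' IN ROLE server_admin;',
    default_ttl="1h",
)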
3. Application initialisation
a. Obtain a database user from Vault with the admin role and run a Flyway migration (sketched below).
b. Obtain a database user from Vault with the read_write role for normal operation.
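The migration step in (a) looks roughly like this; the Flyway CLI invocation, role name, JDBC URL and schema are illustrative:
import subprocess

# Obtain a short-lived admin user for the migration (role name illustrative).
creds = vault.read("database/creds/myschema_admin")["data"]

# Run Flyway as that dynamic user; JDBC URL and schema are placeholders.
subprocess.run([
    "flyway",
    "-url=jdbc:postgresql://my-db.abc123.eu-west-1.rds.amazonaws.com:5432/mydb",
    "-user={0}".format(creds["username"]),
    "-password={0}".format(creds["password"]),
    "-schemas=myschema",
    "migrate",
], check=True)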
My issue is that during the initial schema initialisation by my application (Flyway), the tables are owned by the dynamically generated user that ran the migration. The next time I attempt a migration, I obtain a new user (via Vault) with the admin role, but it does not have permission to access the tables created by the first user.
What is the recommended strategy for database/schema/table ownership when using Vault integration with PostgreSQL?