Historically we have done a full Tomcat stop/restart after deploying an update. We're switching to using the Tomcat manager to redeploy new versions of our WAR files, and in some cases also deploying a WAR file with a version suffix (e.g. mywar.war##1234) using Codehaus Cargo. While HikariCP has worked smoothly in the past, we're now seeing FATAL errors after Maven and Cargo do a deploy/redeploy. The error is:
[FATAL] java.sql.SQLException: HikariDataSource HikariDataSource (HikariPool-1) has been closed.
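For context, the Cargo side is wired up roughly like this in the pom and driven with mvn cargo:deploy / cargo:redeploy (this is only a sketch; the host, credentials and artifact coordinates are placeholders):

<plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven3-plugin</artifactId>
    <configuration>
        <!-- Deploy to an already-running remote Tomcat through the manager app -->
        <container>
            <containerId>tomcat9x</containerId>  <!-- adjust to the Tomcat version -->
            <type>remote</type>
        </container>
        <configuration>
            <type>runtime</type>
            <properties>
                <cargo.hostname>tomcat-host</cargo.hostname>                      <!-- placeholder -->
                <cargo.servlet.port>8080</cargo.servlet.port>
                <cargo.remote.username>manager-user</cargo.remote.username>       <!-- placeholder -->
                <cargo.remote.password>manager-password</cargo.remote.password>   <!-- placeholder -->
            </properties>
        </configuration>
        <deployables>
            <deployable>
                <groupId>com.example</groupId>   <!-- placeholder coordinates -->
                <artifactId>mywar</artifactId>
                <type>war</type>
            </deployable>
        </deployables>
    </configuration>
</plugin>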
I've tried adding singleton="true" to the resource under GlobalNamingResources and to each JNDI reference, but that didn't solve the issue.
Note: our JNDI datasources are defined in ~tomcat/conf/server.xml under GlobalNamingResources, and they are referenced from each WAR file's context.xml.
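The per-WAR reference looks roughly like this (a sketch; only the ResourceLink is shown, and the name matches the global resource below):

<Context>
    <!-- Makes the global pool visible to this webapp as java:comp/env/jdbc/global_mysql -->
    <ResourceLink name="jdbc/global_mysql"
                  global="jdbc/global_mysql"
                  type="javax.sql.DataSource" />
</Context>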
Also, what is the correct MySQL wait_timeout value to use? It is currently set to 60 (seconds), which is slightly higher than the maxLifetime in our JDBC settings (55000 ms, i.e. 55 seconds). Here is the JNDI config:
<Resource name="jdbc/global_mysql" auth="Container"
          factory="com.zaxxer.hikari.HikariJNDIFactory"
          type="javax.sql.DataSource"
          minimumIdle="1"
          singleton="true"
          maximumPoolSize="3"
          maxLifetime="55000"
          connectionTimeout="300000"
          driverClassName="com.mysql.cj.jdbc.Driver"
          dataSource.implicitCachingEnabled="true"
          dataSource.user="<user>"
          dataSource.password="<password>"
          dataSource.cachePrepStmts="true"
          dataSource.prepStmtCacheSize="250"
          dataSource.prepStmtCacheSqlLimit="2048"
          dataSource.useServerPrepStmts="true"
          dataSource.useLocalSessionState="true"
          dataSource.rewriteBatchedStatements="true"
          dataSource.cacheResultSetMetadata="true"
          dataSource.cacheServerConfiguration="true"
          dataSource.elideSetAutoCommits="true"
          dataSource.maintainTimeStats="false"
          jdbcUrl="jdbc:mysql://<host>:3306/db"
/>