I'm getting started with Spring Cloud Data Flow and want to implement a simple Spring Cloud Task to use with it.
I created a hello-world example from the documentation. When I run it in my IDE, it executes without any problems and prints 'hello world'. It uses the following JDBC connection:
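For reference, my task is essentially the minimal example from the docs; it looks roughly like this (class and bean names are my own, the structure is what the documentation shows):

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableTask // registers the task lifecycle listener that records executions in the task repository
public class HelloWorldTaskApplication {

    // The task's actual "work": print a line and exit
    @Bean
    public CommandLineRunner runner() {
        return args -> System.out.println("hello world");
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloWorldTaskApplication.class, args);
    }
}
```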
o.s.j.datasource.SimpleDriverDataSource : Creating new JDBC Driver Connection to [jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false]
I use the dockerized local Spring Cloud Data Flow server, which uses the following JDBC connection for its metadata:
o.s.c.d.s.config.web.WebConfiguration : Starting H2 Server with URL: jdbc:h2:tcp://localhost:19092/mem:dataflow
When I deploy my task to the server and start it I get the following exception:
org.springframework.context.ApplicationContextException: Failed to start bean 'taskLifecycleListener'; nested exception is java.lang.IllegalArgumentException: Invalid TaskExecution, ID 1 not found
This is because the task and the server use different H2 databases, and I somehow cannot override the database configuration of the task. I have H2 on the classpath and the following application.yml configuration to match the server:
spring:
  datasource:
    url: jdbc:h2:tcp://localhost:19092/mem:dataflow
    username: sa
    password:
    driver-class-name: org.h2.Driver
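In case it matters, I register and launch the task from the Data Flow shell roughly like this (the app and definition names are mine; the datasource arguments in the second launch are an attempt to force the override explicitly):

```shell
dataflow:> app register --name hellotask --type task --uri maven://com.example:hellotask:0.0.1-SNAPSHOT
dataflow:> task create hello --definition "hellotask"
dataflow:> task launch hello
dataflow:> task launch hello --arguments "--spring.datasource.url=jdbc:h2:tcp://localhost:19092/mem:dataflow --spring.datasource.username=sa"
```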
It never gets applied; the task always uses the preconfigured jdbc:h2:mem:testdb connection. How can I get this to run?