I've figured out how to do this by picking up the certificates I generated for the local SQL client and re-using them in my Node app, like this:
```js
const fs = require('fs');
const { Pool } = require('pg');

const pool = new Pool({
  host: 'xxxx', // external IP assigned to the LoadBalancer service below
  port: 26257,
  user: 'root',
  database: 'xxxx',
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
  ssl: {
    // skips server identity verification, so no MITM protection
    rejectUnauthorized: false,
    ca: fs.readFileSync('./ca/ca.crt').toString(),
    key: fs.readFileSync('./ca/client.xxxx.key').toString(),
    cert: fs.readFileSync('./ca/client.xxxx.crt').toString(),
  },
});
```
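A quick way to sanity-check the connection, assuming the pool above is configured (the query just asks the server for its version string):

```js
pool.query('SELECT version()')
  .then((res) => console.log(res.rows[0]))
  .catch((err) => console.error('connection failed:', err))
  .finally(() => pool.end()); // release the pool when done
```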
I exposed the existing cockroachdb-public service externally (as a LoadBalancer) in Kubernetes like this:
```
kubectl expose service cockroachdb-public --port=26257 --target-port=26257 --name=cp --type=LoadBalancer
```
and picked up the external IP address that eventually got assigned to the new cp service.
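If you don't want to keep re-running kubectl get service by hand, something like this waits for the address and then prints it (the jsonpath works for IP-based load balancers; some cloud providers assign a hostname instead):

```
kubectl get service cp --watch
# or, once assigned:
kubectl get service cp -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```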
Pretty straightforward actually, but head-scratching when you approach it for the first time. Thanks to those who took the time to comment.
@samstride just noticed your comment. It's probably better to use a user other than root (there's a sketch of that below), but you can get these certs like this (there are probably other ways too).
ca (using the cockroachdb-client-secure pod, if you still have it running):

```
kubectl exec cockroachdb-client-secure -it -- cat /cockroach-certs/ca.crt > ./ca.crt
```
key:

```
kubectl get secret default.client.root -o jsonpath='{.data.key}' | base64 --decode > client.root.key
```
cert:

```
kubectl get secret default.client.root -o jsonpath='{.data.cert}' | base64 --decode > client.root.crt
```
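A quick way to confirm the extracted files came out intact (plain openssl, nothing CockroachDB-specific; the rsa check assumes an RSA key, which is the cockroach CLI default):

```
openssl x509 -in client.root.crt -noout -subject -dates
openssl rsa -in client.root.key -check -noout
```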
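And for the non-root user idea, a minimal sketch. The username myapp is hypothetical, the second command needs access to the CA key, and the paths are assumptions from the standard secure-cluster setup:

```
# create the user and grant it access (run via the secure client pod)
kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public -e "CREATE USER myapp; GRANT ALL ON DATABASE xxxx TO myapp;"

# generate a client cert/key pair for the new user (paths are assumptions)
cockroach cert create-client myapp --certs-dir=certs --ca-key=my-safe-directory/ca.key
```

Then point the key/cert entries in the Pool config at client.myapp.key and client.myapp.crt instead of the root pair.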
You can also just use sslmode=require (warning: no MITM protection with that mode). - Marc