
I have set up an Elasticsearch/Kibana Docker configuration, and I want to connect to Elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is timing out.

The project is inspired by Patrick Triest's post: https://blog.patricktriest.com/text-search-docker-elasticsearch/

However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.

I am using the following docker-compose file:

version: "3"
services:
  api:
    container_name: mp-backend
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "http.cors.allow-origin=*"
      - "http.cors.enabled=true"
      - "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
      - "http.cors.allow-credentials=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic

  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    networks:
      - elastic
    depends_on:
      - elasticsearch

volumes:
  data01:
    driver: local

networks:
  elastic:
    driver: bridge

When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" (the "You Know, for Search" banner), and Kibana is running and able to connect to the index.

I have the following file located in the backend container (connection.js):

const { Client } = require("@elastic/elasticsearch");

const client = new Client({ node: "http://localhost:9200" });

/*Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();

If I run it outside of the container, I get the expected response:

node server/connection.js

Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}

However, if I run it inside of the container:

docker exec mp-backend "node" "server/connection.js"

Then I get the following response:

Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}

So I tried changing the client connection to the following (I read somewhere that this might help):

const client = new Client({ node: "http://172.24.0.1:9200" });

Then I am just "stuck" waiting for a response: there is only a single console.log of "Connecting to Elasticsearch".

I am using the following version:

"@elastic/elasticsearch": "7.5.1"

As you can probably tell, I do not have a full grasp of what is happening here... I have also tried adding:

links:
  - elasticsearch
networks:
  - elastic

To the api service, without any luck.

Does anyone know what I am doing wrong here? Thank you in advance :)

EDIT:

I did a docker network inspect on the *_elastic network and saw the following:

"IPAM": {
    "Driver": "default",
    "Options": null,
    "Config": [
        {
            "Subnet": "172.22.0.0/16",
            "Gateway": "172.22.0.1"
        }
    ]
},

Changing the client to connect to the "Gateway" IP:

const client = new Client({ node: "http://172.22.0.1:9200" });

Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
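(For reference, docker network inspect can print just that value with a Go-template format filter; the network name below is a placeholder for whatever docker network ls shows for your project:)

docker network inspect -f '{{(index .IPAM.Config 0).Gateway}}' myproject_elastic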


1 Answer


In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
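You can see this with your existing setup (assuming curl is available in the api image; the second command only succeeds once both services share a network, as described below):

docker exec mp-backend curl -s http://localhost:9200      # refused: nothing inside this container listens on 9200
docker exec mp-backend curl -s http://elasticsearch:9200  # reaches the ES container once they share a network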

In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.

So to make this work, you need to tell your client code to honor the ES_HOST environment variable you're already setting to point at Elasticsearch:

const { Client } = require("@elastic/elasticsearch");

// Fall back to localhost for host-side development
const esHost = process.env.ES_HOST || 'localhost';
const client = new Client({ node: `http://${esHost}:9200` });
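Since ES_HOST=elasticsearch is already set in the api service's environment:, the client will pick up the service name automatically when it runs inside Compose.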

In your docker-compose.yml file, delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary, and Compose provides a reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
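For concreteness, a trimmed sketch of the file with those changes applied (the remaining Elasticsearch environment settings from your original carry over unchanged):

version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000
    depends_on:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
      # ...remaining environment settings as in the original...
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  data01:
    driver: local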

Since we've provided a fallback for $ES_HOST, this still works in a host development environment: the client defaults to localhost, which outside of Docker means "the current host", and so reaches the published port of the Elasticsearch container.
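After rebuilding and restarting (docker-compose up -d --build), the check from the question should then succeed from inside the container:

docker exec mp-backend node server/connection.js

printing the same cluster-health object you saw when running it from the host.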