
I am trying to create a YAML file to deploy a GKE cluster in a custom network I created, but I get this error:

JSON payload received. Unknown name "network": Cannot find field.

I have tried a few names for the resources, but I am still seeing the same issue.

resources:
- name: myclus
  type: container.v1.cluster
  properties:
    network: projects/project-251012/global/networks/dev-cloud
    zone: "us-east4-a"
    cluster:
      initialClusterVersion: "1.12.9-gke.13"
      currentMasterVersion: "1.12.9-gke.13"
      ## Initial NodePool config.
      nodePools:
      - name: "myclus-pool1"
        initialNodeCount: 3
        version: "1.12.9-gke.13"
        config:
          machineType: "n1-standard-1"
          oauthScopes:
            - https://www.googleapis.com/auth/logging.write
            - https://www.googleapis.com/auth/monitoring
            - https://www.googleapis.com/auth/ndev.clouddns.readwrite
          preemptible: true
## Duplicates node pool config from v1.cluster section, to get it explicitly managed.
- name: myclus-pool1
  type: container.v1.nodePool
  properties:
    zone: us-east4-a
    clusterId: $(ref.myclus.name)
    nodePool:
      name: "myclus-pool1"

I expect it to place the cluster nodes in this network.


1 Answer


The network field needs to be part of the cluster spec: the top level of properties should contain only zone and cluster, and network should sit at the same indentation level as initialClusterVersion. See the container.v1.cluster API reference page for more details.

Your manifest should look more like:

EDIT: there is some confusion in the API reference docs concerning deprecated fields. I originally offered a YAML that applies to the new API, not the one you are using. I've updated with the correct syntax for the basic v1 API, and further down I've added the newer API (which currently relies on gcp-types to deploy).

resources:
- name: myclus
  type: container.v1.cluster
  properties:
    projectId: [project]
    zone: us-central1-f
    cluster:
      name: my-clus
      zone: us-central1-f
      network: [network_name]
      subnetwork: [subnet]   ### leave this field blank if using the default network
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 0
        config:
          imageType: cos
- name: my-pool-1
  type: container.v1.nodePool
  properties:
    projectId: [project]
    zone: us-central1-f
    clusterId: $(ref.myclus.name)
    nodePool:
      name: my-clus-pool2
      initialNodeCount: 0
      version: "1.13"
      config:
        imageType: ubuntu

The newer API (which provides more functionality and allows you to use more features including the v1beta1 API and beta features) would look something like this:

resources:
- name: myclus
  type: gcp-types/container-v1:projects.locations.clusters
  properties:
    parent: projects/shared-vpc-231717/locations/us-central1-f
    cluster:
      name: my-clus
      zone: us-central1-f
      network: shared-vpc
      subnetwork: local-only   ### leave this field blank if using the default network
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 0
        config:
          imageType: cos
- name: my-pool-2
  type: gcp-types/container-v1:projects.locations.clusters.nodePools
  properties:
    parent: projects/shared-vpc-231717/locations/us-central1-f/clusters/$(ref.myclus.name)
    nodePool:
      name: my-clus-separate-pool
      initialNodeCount: 0
      version: "1.13"
      config:
        imageType: ubuntu

Another note: you may want to modify your scopes. The current scopes will not allow you to pull images from gcr.io, so some system pods may not spin up properly, and if you are using Google's container registry you will be unable to pull those images.
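For example, a scope list along these lines (a sketch based on the config block from your question; adjust to your own needs) adds devstorage.read_only, which is what lets the nodes pull images from gcr.io:

        config:
          machineType: "n1-standard-1"
          oauthScopes:
            # read access to Cloud Storage, needed to pull images from gcr.io
            - https://www.googleapis.com/auth/devstorage.read_only
            - https://www.googleapis.com/auth/logging.write
            - https://www.googleapis.com/auth/monitoring
            - https://www.googleapis.com/auth/ndev.clouddns.readwrite
          preemptible: true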

Finally, you don't want to repeat the node pool resource in both the cluster spec and as a separate resource. Instead, create the cluster with a basic (default) node pool, and create all additional node pools as separate resources so they can be managed without going through the cluster. There are very few updates you can perform on a node pool, aside from resizing.
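If you later need to resize one of the separately managed pools, a gcloud command along these lines works (this is a sketch using the cluster and pool names from the example above and a recent gcloud SDK; substitute your own names and zone):

    gcloud container clusters resize my-clus --node-pool my-clus-pool2 --num-nodes 5 --zone us-central1-f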