
Here is my ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
    name: chart-1591249502-zeppelin
    namespace: ra-iot-dev
    labels:
        helm.sh/chart: zeppelin-0.1.0
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: chart-1591249502
        app.kubernetes.io/version: "0.9.0"
        app.kubernetes.io/managed-by: Helm
data:
  log4j.properties: |-
    log4j.rootLogger = INFO, dailyfile
    log4j.appender.stdout = org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
    log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
    log4j.appender.dailyfile.DEBUG = INFO
    log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
    log4j.appender.dailyfile.File = ${zeppelin.log.file}
    log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
    log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
    log4j.logger.org.apache.zeppelin.python=DEBUG
    log4j.logger.org.apache.zeppelin.spark=DEBUG

I'm trying to mount this file as /zeppelin/conf/log4j.properties inside the pod.

Here is my deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-1591249502-zeppelin
  labels:
    helm.sh/chart: zeppelin-0.1.0
    app.kubernetes.io/name: zeppelin
    app.kubernetes.io/instance: chart-1591249502
    app.kubernetes.io/version: "0.9.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeppelin
      app.kubernetes.io/instance: chart-1591249502
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: chart-1591249502
    spec:
      serviceAccountName: chart-1591249502-zeppelin
      securityContext:
        {}
      containers:
        - name: zeppelin
          securityContext:
            {}
          image: "apache/zeppelin:0.9.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
          env:
            - name: ZEPPELIN_PORT
              value: "8080"
            - name: ZEPPELIN_K8S_CONTAINER_IMAGE
              value: apache/zeppelin:0.9.0
            - name: ZEPPELIN_RUN_MODE
              value: local
          volumeMounts:
            - name: log4j-properties-volume
              mountPath: /zeppelin/conf/log4j.properties
      volumes:
        - name: log4j-properties-volume
          configMap:
            name: chart-1591249502-zeppelin
            items:
              - key: log4j.properties
                path: keys

I'm getting this error event in Kubernetes:

Error: failed to start container "zeppelin": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\"/var/lib/origin/openshift.local.volumes/pods/63ac209e-a626-11ea-9e39-0050569f5f65/volumes/kubernetes.io~configmap/log4j-properties-volume\\" to rootfs \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged\\" at \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged/zeppelin/conf/log4j.properties\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

Keep in mind that I only want to replace an existing file. The /zeppelin/conf/ directory contains several files, and I only want to replace /zeppelin/conf/log4j.properties.

Any ideas?

There are two things: using subPath in your spec.containers.volumeMounts, and also an issue with the namespace (the ConfigMap is in namespace: ra-iot-dev while the Deployment is in the default namespace). Writing an answer with a wider explanation. – PjoterS

2 Answers

1 vote

From the logs I can see that you are working on OpenShift; however, I was able to do it on GKE.

I've deployed a plain Zeppelin deployment from your example.

zeppelin@chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ cat log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
...
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
...
# limitations under the License.
#

log4j.rootLogger = INFO, stdout

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
zeppelin@chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ 

If you want to replace one specific file, you need to use subPath. There is also an article with another example, which can be found here.

Issue 1. A ConfigMap belongs to a namespace

Your Deployment did not contain any namespace, so it was deployed in the default namespace, while the ConfigMap included namespace: ra-iot-dev.

$ kubectl api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
...
configmaps                        cm                                          true         ConfigMap
...

If you keep this namespace mismatch, you will probably get an error like:

MountVolume.SetUp failed for volume "log4j-properties-volume" : configmap "chart-1591249502-zeppelin" not found
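
A quick way to confirm where the ConfigMap actually lives is to look it up in both namespaces (a sanity check, assuming you have kubectl access to both):

kubectl get configmap chart-1591249502-zeppelin -n ra-iot-dev
kubectl get configmap chart-1591249502-zeppelin -n default

Alternatively, instead of removing the namespace from the ConfigMap, you can keep everything in ra-iot-dev by adding namespace: ra-iot-dev to the Deployment metadata (or by passing --namespace ra-iot-dev to helm install / kubectl apply).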

Issue 2. Use subPath to replace a single file

I've changed one part of the Deployment (added subPath):

    volumeMounts:
      - name: log4j-properties-volume
        mountPath: /zeppelin/conf/log4j.properties
        subPath: log4j.properties
volumes:
  - name: log4j-properties-volume
    configMap:
      name: chart-1591249502-zeppelin

and another in the ConfigMap (removed the namespace and set the proper names):

apiVersion: v1
kind: ConfigMap
metadata:
    name: chart-1591249502-zeppelin
    labels:
        helm.sh/chart: zeppelin-0.1.0
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: chart-1591249502
        app.kubernetes.io/version: "0.9.0"
        app.kubernetes.io/managed-by: Helm
data:
  log4j.properties: |-
...

After that, the content of the file looks like this:

$ kubectl exec -ti chart-1591249502-zeppelin-64495dcfc8-ccddr -- /bin/bash
zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~$ cd conf
zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ ls
configuration.xsl  log4j.properties   log4j_yarn_cluster.properties  zeppelin-env.cmd.template  zeppelin-site.xml.template
interpreter-list   log4j.properties2  shiro.ini.template             zeppelin-env.sh.template
zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ cat log4j.properties
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.DEBUG = INFO
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.logger.org.apache.zeppelin.python=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUGzeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$
0 votes
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-config-test
  namespace: ***
  labels:
    app: test
    environment: ***
    tier: backend
data:
  application.properties: |-
    ulff.kafka.configuration.acks=0
    ulff.kafka.configuration[bootstrap.servers]=IP
    ulff.kafka.topic=test-topic
    ulff.enabled=true
    logging.level.com.anurag.gigthree.ulff.kafka=DEBUG

    management.port=9080
    management.security.enabled=false 
    management.endpoints.web.exposure.include= "metrics,health,threaddump,prometheus,heapdump"
    management.endpoint.prometheus.enabled=true
    management.metrics.export.prometheus.enabled=true
    ## For apigee PROD
    apigee.url=****   
    
    ### Secrets in Kubenetes accessed by ENV variables
    apigee.clientID=apigeeClientId
    apigee.clientsecret=apigeeClientSecret

    spring.mvc.throw-exception-if-no-handler-found=true

    #For OAuth details for apigee
    oauth2.config.clientId=${apigee.clientID}
    oauth2.config.clientSecret=${apigee.clientsecret}
    oauth2.config.authenticationScheme=form
    oauth2.config.scopes[0]=test_INTEGRATION_ALL
    oauth2.config.accessTokenUri=${apigee.url}/oauth2/token
    oauth2.config.requestTimeout=55000

    oauth2.restTemplateBuilder.enabled=true

    #spring jackson properties
    spring.jackson.default-property-inclusion=always
    spring.jackson.generator.ignore-unknown=true
    spring.jackson.mapper.accept-case-insensitive-properties=true
    spring.jackson.deserialization.fail-on-unknown-properties=false

    # service urls for apply profile
    services.apigeeIntegrationAPI.doProfileChangeUrl=${apigee.url}/v1/testintegration
    services.apigeeIntegrationAPI.modifyServiceOfSubscriberUrl=${apigee.url}/v1/testintegration/subscribers

    # service urls for retrieve profile
    services.apigeeIntegrationAPI.getProfileUrl=${apigee.url}/v1
    services.apigeeIntegrationAPI.readKeyUrl=${apigee.url}/v1/testintegration

    test.acfStatusConfig[1].country-prefix=
    test.acfStatusConfig[1].country-code=
    test.acfStatusConfig[1].profile-name=
    test.acfStatusConfig[1].adult=ON
    test.acfStatusConfig[1].hvw=ON
    test.acfStatusConfig[1].ms=ON
    test.acfStatusConfig[1].dc=ON
    test.acfStatusConfig[1].at=OFF
    test.acfStatusConfig[1].gambling=
    test.acfStatusConfig[1].dating=OFF
    test.acfStatusConfig[1].sex=OFF
    test.acfStatusConfig[1].sn=OFF


    logging.pattern.level=%X{ulff.transaction-id:-} -%5p
    logging.config=/app/config/log4j2.yml
  log4j2.yml: |-
    Configuration:
      name: test-ms
      packages:
      Appenders:
        Console:
          - name: sysout
            target: SYSTEM_OUT
            PatternLayout:
              pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
          - name: syserr
            target: SYSTEM_ERR
            PatternLayout:
              pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
            Filters:
              ThresholdFilter:
                level: "WARN"
                onMatch: "ACCEPT"
        Kafka:
          name: kafkaAppender
          topic: af.prod.ms.test.tomcat
          JSONLayout:
            complete: "false"
            compact: "false"
            eventEol: "true"
            includeStacktrace: "true"
            properties: "true"
          Property:
            name: "bootstrap.servers"
            value: ""
      Loggers:
        Root:
          level: INFO
          AppenderRef:
           - ref: sysout
           - ref: syserr
        #### test 1 test 2 Separate kafka log from application log
        Logger:
          - name: com.anurag
            level: INFO
            AppenderRef:
              - ref: kafkaAppender
          - name: org.springframework
            level: INFO
            AppenderRef:
              - ref: kafkaAppender
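
For reference, a minimal sketch of how such a ConfigMap could be mounted with subPath so that both files end up under /app/config (the path referenced by logging.config above). The Deployment name, container name, image, and labels below are placeholders, not taken from the original setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: ***                 # same namespace as the ConfigMap
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test                    # placeholder container name
          image: example/test:latest    # placeholder image
          volumeMounts:
            # each key is mounted as a single file via subPath, so any other
            # files already present in /app/config are left untouched
            - name: application-config
              mountPath: /app/config/application.properties
              subPath: application.properties
            - name: application-config
              mountPath: /app/config/log4j2.yml
              subPath: log4j2.yml
      volumes:
        - name: application-config
          configMap:
            name: application-config-test

Keep in mind that files mounted via subPath are not updated automatically when the ConfigMap changes; the pod has to be recreated to pick up new values.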