26 votes

According to https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints there's a rich array of Service Connection types. I can easily manage a set of service connections at the project level and set permissions to limit which users are able to view/edit them -- this is all good.

But I can't figure out how to access a Service Connection with a script step in my build pipeline. For example, let's say I have a Service Connection representing credentials for an Azure Service Principal. I'd like to access those credentials in a script step.

How can I write a script step that makes use of them?

4
you mean in the YAML file? -- Sajeetharan

Well, all pipelines are defined in YAML files today, so yes. But more specifically, Azure DevOps defines many task types, like docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/… or docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/… -- and the various tasks each have their own input parameters that can be used to pass Service Connections. I'm interested in providing a Service Connection as an input to a generic bash task (docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/…). -- Bosh

@Bosh did you find a way to work around this? It seems like writing a custom task that takes a service connection and exports the variables might be the best option. Maybe there is already such a task available. -- DELUXEnized

4 Answers

7 votes

Because a Service Connection involves data shaped specifically to the connected service (the Generic Service Connection being the exception that proves the rule...), you won't be able to make use of strongly typed properties in your Bash task. Instead, you may want to examine environment variables and process the service connection data manually.

Based on a survey of some of the tasks in the Azure DevOps repos, it appears that service connections and their data are populated as environment variables on the agent running the build task. The service connections are retrieved via a method that runs a given name string through the following regex before retrieving the resultant environment key's value:

process.env[name.replace(/\./g, '_').toUpperCase()];

The retrieval of various Service Endpoint data is wrapped in the vsts-task-lib/task module, allowing consuming tasks to write code like so:

taskLib.getEndpointAuthorization('SYSTEMVSSCONNECTION', false);

taskLib.getEndpointDataParameter('MYSERVICECONNECTION', 'SOME_PARAMETER_NAME', false);

taskLib.getEndpointUrl('MYSERVICECONNECTION', false) // <-- last param indicates required or not

Therefore, if you wanted to access service connections in a bash script without any additional customization, I would recommend that you:

a) Validate the availability of service connection information in the build script task by iterating over and writing out environment variables, with the system.debug variable set to true. There's some indication that build tasks aren't "seeded" with connections they aren't requesting specifically, so you may need to create a custom build task which has as one of its inputs the name of the service connection you want to use (see the sketch below).

b) Read the desired values from those environment variables in your bash script. Service connection variable names may be computed similarly to this:

   var dataParam = getVariable('ENDPOINT_DATA_' + id + '_' + key.toUpperCase());  

You may need to iterate against this to determine the data schema/structure.
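
A minimal sketch of (a) and (b) might look like the following. The ENDPOINT_URL_MYSERVICECONNECTION and ENDPOINT_DATA_MYSERVICECONNECTION_SOME_PARAMETER_NAME names are assumptions derived from the naming scheme above; the key may turn out to be the connection's internal ID rather than its display name, so dump the environment first and adjust the specific reads afterwards:

# Sketch only: the ENDPOINT_* names are assumptions derived from the task-lib
# naming scheme above, and whether they are populated for a plain Bash step
# depends on how the agent seeds connections.
variables:
  system.debug: 'true'   # surfaces additional diagnostic output in the logs

steps:
- task: Bash@3
  displayName: 'Inspect service connection environment variables'
  inputs:
    targetType: 'inline'
    script: |
      # Dump anything that looks like service endpoint data.
      env | grep -i '^ENDPOINT_' | sort
      # Hypothetical reads, mirroring the getEndpointUrl/getEndpointDataParameter
      # calls above; the key may be the connection's internal ID, not its name.
      echo "URL:  $ENDPOINT_URL_MYSERVICECONNECTION"
      echo "Data: $ENDPOINT_DATA_MYSERVICECONNECTION_SOME_PARAMETER_NAME"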

6 votes

I've been wondering about this too. The solution I've settled on is to use the 'Azure CLI' task rather than the basic 'Script' (or 'Bash') task. This is ostensibly for running Az CLI commands, but there's nothing to stop you running only standard Bash scripts (or PSCore if that's your thing).

If you examine the environment variables present when you run this task, you'll see a bunch of information about the Service Connection in variables prefixed with 'ENDPOINT_DATA_'. This tallies up with what Josh E was saying. It includes the Azure Subscription ID, name, Service Principal Object ID, etc.

Optionally, you can have the Service Principal details added to the environment too. This will then include the SPN key, Tenant ID, etc. as secret environment variables.

Here's what the tasks look like:

- task: AzureCLI@2
  displayName: 'Azure CLI'
  inputs:
    scriptType: bash
    scriptLocation: inlineScript
    azureSubscription: '<Service Connection Name>'
    inlineScript: |
      env | sort

- task: AzureCLI@2
  displayName: 'Azure CLI, with SPN info'
  inputs:
    scriptType: bash
    scriptLocation: inlineScript
    azureSubscription: '<Service Connection Name>'
    addSpnToEnvironment: true
    inlineScript: |
      env | sort

Of course this is all only applicable to Azure Cloud Service Connections. There might be similar techniques you could use for other Service Connections, but I haven't investigated them.
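
For completeness, here is a hedged sketch of how the SPN variables surfaced by addSpnToEnvironment could be consumed from plain bash inside the task. The names servicePrincipalId, servicePrincipalKey and tenantId come from the env dump above (they apply when the connection authenticates with a service principal key); the ARM_* exports and my-tool.sh are purely illustrative:

- task: AzureCLI@2
  displayName: 'Azure CLI, consuming SPN info in plain bash'
  inputs:
    scriptType: bash
    scriptLocation: inlineScript
    azureSubscription: '<Service Connection Name>'
    addSpnToEnvironment: true
    inlineScript: |
      # These names appear in the env dump above when addSpnToEnvironment is true
      # and the connection authenticates with a service principal key.
      echo "Tenant: $tenantId"
      # Illustrative: hand the credentials to a tool other than the Azure CLI.
      export ARM_CLIENT_ID="$servicePrincipalId"
      export ARM_CLIENT_SECRET="$servicePrincipalKey"
      export ARM_TENANT_ID="$tenantId"
      ./my-tool.sh   # hypothetical script consuming the ARM_* variables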

5 votes

I found that if I use the Kubernetes (kubectl) task with the login command right before my Bash task, I do not need to authenticate or specify a hostname in the bash script.

KUBERNETESNODE and SERVICEPROTOCOL are pipeline variables that I set beforehand (a sketch of how they might be declared follows the example).

      - task: Kubernetes@1
        displayName: 'Kubernetes Login'
        # This is needed to run kubectl command from bash.
        inputs:
          connectionType: 'Kubernetes Service Connection'
          kubernetesServiceEndpoint: '<Service Connection Name>'
          command: 'login'

      - task: Bash@3
        displayName: 'Run Component Test'        
        inputs:
          targetType: 'inline'
          script: |
            #Get the Node Port
            nodePort=`kubectl get --namespace $(Build.BuildId) svc <service name> -o=jsonpath='{.spec.ports[0].nodePort}'`
            #Run Newman test
            newman run postman/Service.postman_collection.json --global-var host=$KUBERNETESNODE --global-var protocol=$SERVICEPROTOCOL --global-var port=$nodePort -r junit
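
For reference, a minimal sketch of how those two pipeline variables could be declared; the values are placeholders:

variables:
  KUBERNETESNODE: '<node hostname or IP>'   # placeholder
  SERVICEPROTOCOL: 'http'                   # placeholder
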
3 votes

I am using the same service connection in my scripts/tools as for the ARM deployments.

In order to export the variables, I created the following template.

parameters:
- name: azureSubscription
  type: string
- name: exportAsOutput
  type: boolean
  default: false
  
steps:  
- task: AzureCLI@2
  name: exported_azure_credentials
  displayName: 'Export Azure Credentials'
  inputs:
    azureSubscription: '${{ parameters.azureSubscription }}'
    scriptType: pscore
    scriptLocation: inlineScript
    addSpnToEnvironment: true
    ${{ if eq(parameters.exportAsOutput, true) }}:
      inlineScript: |
        Write-Host "##vso[task.setvariable variable=AZURE_TENANT_ID]$env:tenantId"
        Write-Host "##vso[task.setvariable variable=AZURE_TENANT_ID;isOutput=true]$env:tenantId"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_ID]$env:servicePrincipalId"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_ID;isOutput=true]$env:servicePrincipalId"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_SECRET]$env:servicePrincipalKey"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;isOutput=true]$env:servicePrincipalKey"
    ${{ if eq(parameters.exportAsOutput, false) }}:
      inlineScript: |
        Write-Host "##vso[task.setvariable variable=AZURE_TENANT_ID]$env:tenantId"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_ID]$env:servicePrincipalId"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_SECRET]$env:servicePrincipalKey"

DevOps is really clever about secrets, so they do not show up in the pipeline logs.
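
A hypothetical way to consume this template might look like the following; the template path and deploy.sh are assumptions, and the Bash step receives the exported values through its env mapping:

steps:
# The path to the template file is an assumption.
- template: templates/export-azure-credentials.yml
  parameters:
    azureSubscription: '<Service Connection Name>'
    exportAsOutput: false

- task: Bash@3
  displayName: 'Use exported credentials'
  env:
    # Map the exported pipeline variables into the script's environment.
    ARM_TENANT_ID: $(AZURE_TENANT_ID)
    ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
    ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
  inputs:
    targetType: 'inline'
    script: |
      ./deploy.sh   # hypothetical script consuming the ARM_* environment variables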