194
votes

I use curl to get the HTTP headers so I can find the HTTP status code, and I also want the response body returned. I get the HTTP headers with the command

curl -I http://localhost

To get the response, I use the command

curl http://localhost

As soon as I use the -I flag, I get only the headers and the response body is no longer there. Is there a way to get both the HTTP response body and the headers/HTTP status code in one command?


19 Answers

188
votes

I was able to get a solution by looking at the curl docs, which say to use - as the output file to send the output to stdout.

curl -o - http://localhost

To get just the HTTP status code, I could do

curl -o /dev/null -s -w "%{http_code}\n" http://localhost
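
If you need that code in a shell variable (for instance, for a quick check), here's a minimal sketch assuming a bash-like shell:

status=$(curl -o /dev/null -s -w "%{http_code}" http://localhost)
if [ "$status" -eq 200 ]; then
    echo "OK"
else
    echo "unexpected status: $status" >&2
fi
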
90
votes

Verbose mode will tell you everything:

curl -v http://localhost
57
votes

I use this command to print the status code without any other output. Additionally, it performs only a HEAD request and follows redirects (-I and -L respectively).

curl -o /dev/null -I -L -s -w "%{http_code}" http://localhost

This makes it very easy to check the status code in a health script:

sh -c '[ $(curl -o /dev/null -I -L -s -w "%{http_code}" http://localhost) -eq 200 ]'
57
votes

I found this question because I wanted independent access to BOTH the status code and the response content, in order to add some error handling for the user.

Curl allows you to customize its output. You can print the HTTP status code to stdout and write the content to another file.

curl -s -o response.txt -w "%{http_code}" http://example.com

This allows you to check the return code and then decide if the response is worth printing, processing, logging, etc.

http_response=$(curl -s -o response.txt -w "%{http_code}" http://example.com)
if [ "$http_response" != "200" ]; then
    # handle error
else
    echo "Server returned:"
    cat response.txt    
fi

The %{http_code} is a variable substituted by curl. You can do a lot more, or send the code to stderr, etc. See the curl manual and the --write-out option.

-w, --write-out

Make curl display information on stdout after a completed transfer. The format is a string that may contain plain text mixed with any number of variables. The format can be specified as a literal "string", or you can have curl read the format from a file with "@filename" and to tell curl to read the format from stdin you write "@-".

The variables present in the output format will be substituted by the value or text that curl thinks fit, as described below. All variables are specified as %{variable_name} and to output a normal % you just write them as %%. You can output a newline by using \n, a carriage return with \r and a tab space with \t.

The output will be written to standard output, but this can be switched to standard error by using %{stderr}.

https://man7.org/linux/man-pages/man1/curl.1.html
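
As a small illustration of the %{stderr} switch mentioned above (the file name code.txt is just an example), you can keep the body on stdout and send only the status code to stderr:

# body is written to stdout untouched; the -w output (the status code)
# is switched to stderr and captured in code.txt
curl -s -o - -w "%{stderr}%{http_code}\n" http://example.com 2>code.txt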

35
votes

The -i option is the one that you want:

curl -i http://localhost

-i, --include Include protocol headers in the output (H/F)

Alternatively you can use the verbose option:

curl -v http://localhost

-v, --verbose Make the operation more talkative

20
votes

I have used this:

    request_cmd="$(curl -i -o - --silent -X GET --header 'Accept: application/json' --header 'Authorization: _your_auth_code==' 'https://example.com')"

To get the HTTP status

    http_status=$(echo "$request_cmd" | grep HTTP |  awk '{print $2}')
    echo $http_status

To get the response body I've used this

    output_response=$(echo "$request_cmd" | grep body)
    echo $output_response
14
votes

This command

 curl http://localhost -w ", %{http_code}"

will return the body and the status code separated by a comma; you can then split them apart.

You can change the delimiter as you like.
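
For example, a minimal sketch of splitting that combined output in bash (assuming the body itself does not end with the same ", " delimiter):

response=$(curl -s -w ", %{http_code}" http://localhost)
status="${response##*, }"   # everything after the last ", "
body="${response%, *}"      # everything before the last ", "
echo "status: $status"
echo "body: $body"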

11
votes

This is a way to retrieve the body AND the status code and format them as proper JSON, or whatever format works for you. Some may argue it's an incorrect use of the write-out format option, but this works for me when I need both the body and the status code in my scripts to check the status code and relay the responses from the server.

curl -X GET -w "%{stderr}{\"status\": \"%{http_code}\", \"body\":\"%{stdout}\"}" -s -o - "https://github.com" 2>&1

Run the code above and you should get back JSON in this format:

{
"status" : <status code>,
"body" : <body of response>
}

With the -w write-out option, since stderr is printed first, you can format your output with the http_code variable and place the body of the response in a value (body), then close the JSON using the stdout variable. Then redirect the stderr output to stdout and you'll be able to combine both the http_code and the response body into one neat output.

8
votes

For programmatic usage, I use the following:

curlwithcode() {
    code=0
    # Run curl in a separate command, capturing output of -w "%{http_code}" into statuscode
    # and sending the content to a file with -o >(cat >/tmp/curl_body)
    statuscode=$(curl -w "%{http_code}" \
        -o >(cat >/tmp/curl_body) \
        "$@"
    ) || code="$?"

    body="$(cat /tmp/curl_body)"
    echo "statuscode : $statuscode"
    echo "exitcode : $code"
    echo "body : $body"
}

curlwithcode https://api.github.com/users/tj

It shows following output :

statuscode : 200
exitcode : 0
body : {
  "login": "tj",
  "id": 25254,
  ...
}
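
If you prefer not to hard-code /tmp/curl_body, a variation using mktemp (just a sketch, not part of the original function) avoids clashes when the function runs concurrently:

curlwithcode() {
    local code=0 tmpfile
    tmpfile=$(mktemp)   # per-call temp file instead of a fixed path
    statuscode=$(curl -w "%{http_code}" -o "$tmpfile" "$@") || code="$?"
    body="$(cat "$tmpfile")"
    rm -f "$tmpfile"
    echo "statuscode : $statuscode"
    echo "exitcode : $code"
    echo "body : $body"
}
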
5
votes

To get response code along with response:

$ curl -kv https://www.example.org

To get just response code:

$ curl -kv https://www.example.org 2>&1 | grep -i 'HTTP/1.1 ' | awk '{print $3}'| sed -e 's/^[ \t]*//'
  • 2>&1: stderr (where -v writes) is merged into stdout for parsing
  • grep: filters the response code line from the output
  • awk: extracts the response code from that line
  • sed: removes any leading whitespace
4
votes

My way to achieve this:

To get both (header and body), I usually perform a curl -D- <url> as in:

$ curl -D- http://localhost:1234/foo
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 20:59:21 GMT

{"data":["out.csv"]}

This will dump headers (-D) to stdout (-) (Look for --dump-header in man curl).

IMHO also very handy in this context:

I often use jq to get that JSON data (e.g. from some REST APIs) formatted. But as jq doesn't expect an HTTP header, the trick is to print the headers to stderr using -D/dev/stderr. Note that this time we also use -sS (--silent, --show-error) to suppress the progress meter (because we write to a pipe).

$ curl -sSD/dev/stderr http://localhost:1231/foo | jq .
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 21:08:22 GMT

{
  "data": [
    "out.csv"
  ]
}

I guess this can also be handy if you'd like to print the headers (for quick inspection) to the console but redirect the body to a file (e.g. when it's some kind of binary that would mess up your terminal):

$ curl -sSD/dev/stderr http://localhost:1231 > /dev/null
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 21:20:02 GMT

Be aware: this is NOT the same as curl -I <url>! -I performs a HEAD request rather than a GET request (look for --head in man curl). Yes, for most HTTP servers this will yield the same result, but I know a lot of business applications that don't implement the HEAD request at all ;-)

4
votes

A one-liner, just to get the status-code would be:

curl -s -i https://www.google.com | head -1

Changing it to head -2 will give the date header as well.


If you want a while-true loop over it, it would be:

URL="https://www.google.com"

while true; do
    echo "------"
    curl -s -i $URL | head -2
    sleep 2;
done

Which produces the following, until you press Ctrl+C.

------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:38 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:41 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:43 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:45 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:47 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:49 GMT
3
votes

Some good answers here, but like the OP I found myself wanting, in a scripting context, all of:

  • any response body returned by the server, regardless of the response status-code: some services will send error details e.g. in JSON form when the response is an error
  • the HTTP response code
  • the curl exit status code

This is difficult to achieve with a single curl invocation and I was looking for a complete solution/example, since the required processing is complex.

I combined some other bash recipes on multiplexing stdout/stderr/return-code with some of the ideas here to arrive at the following example:

{
  IFS= read -rd '' out
  IFS= read -rd '' http_code
  IFS= read -rd '' status
} < <({ out=$(curl -sSL -o /dev/stderr -w "%{http_code}" 'https://httpbin.org/json'); } 2>&1; printf '\0%s' "$out" "$?")

Then the results can be found in variables:

echo out $out
echo http_code $http_code
echo status $status

Results:

out { "slideshow": { "author": "Yours Truly", "date": "date of publication", "slides": [ { "title": "Wake up to WonderWidgets!", "type": "all" }, { "items": [ "Why <em>WonderWidgets</em> are great", "Who <em>buys</em> WonderWidgets" ], "title": "Overview", "type": "all" } ], "title": "Sample Slide Show" } }
http_code 200
status 0

The script works by multiplexing the output, HTTP response code and curl exit status separated by null characters, then reading these back into the current shell/script. It can be tested with curl requests that would return a >=400 response code but also produce output.

Note that without the -f flag, curl won't return non-zero exit codes when the server returns an abnormal HTTP response code (i.e. >=400), and with the -f flag, server output is suppressed on error, which makes that flag unattractive for error detection and processing.
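
Side note: if your curl is new enough (7.76.0 or later), --fail-with-body exits non-zero on HTTP errors while still delivering the body, which removes part of that trade-off. A sketch (response.txt is just an example file name):

# exits with code 22 on HTTP >= 400, but the error body is still
# written to response.txt
curl -sS --fail-with-body -o response.txt 'https://httpbin.org/status/418'
echo "curl exit code: $?"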

Credits for the generic read with IFS processing go to this answer: https://unix.stackexchange.com/a/430182/45479 .

3
votes

A clear one to read, using pipes:

function cg(){
    curl -I --silent www.google.com | head -n 1 | awk -F' ' '{print $2}'
}
cg
# 200

You're welcome to use the script from my dotfiles.

Explanation

  • --silent: don't show the progress bar when writing to a pipe
  • head -n 1: only show the first line
  • -F' ': split the line into columns using a space as the separator
  • '{print $2}': print the second column (the status code)
2
votes

In my experience we usually use curl this way

curl -f http://localhost:1234/foo || exit 1

curl: (22) The requested URL returned error: 400 Bad Request

This way the script can bail out when curl fails, and the error message also shows the status code.
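
For example, a minimal sketch (response.txt is just an example file name) that captures curl's exit code for more specific handling:

# with -f, curl exits non-zero (22) on HTTP errors >= 400
curl -fsS http://localhost:1234/foo -o response.txt
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "request failed (curl exit code $rc)" >&2
    exit "$rc"
fi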

1
votes

Append a line like "http_code:200" at the end of the output, then grep for the keyword "http_code:" and extract the response code.

result=$(curl -w "\nhttp_code:%{http_code}" http://localhost)

echo "result: ${result}"   #the curl result with "http_code:" at the end

http_code=$(echo "${result}" | grep 'http_code:' | sed 's/http_code://g') 

echo "HTTP_CODE: ${http_code}"  #the http response code

In this case, you can still use the non-silent mode / verbose mode to get more information about the request such as the curl response body.
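
To also recover the body from the same ${result}, a minimal sketch that drops the appended "http_code:" line:

body=$(echo "${result}" | sed '$d')   # everything except the last line
echo "BODY: ${body}"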

0
votes

Wow, so many answers; the cURL devs definitely left this to us as a homework exercise :) OK, here is my take: a script that makes cURL work the way it's supposed to, i.e.:

  • show the output as cURL would.
  • exit with non-zero code in case of HTTP response code not in 2XX range

Save it as curl-wrapper.sh:


#!/bin/bash

output=$(curl -w "\n%{http_code}" "$@")
res=$?

if [[ "$res" != "0" ]]; then
  echo -e "$output"
  exit $res
fi

if [[ $output =~ [^0-9]([0-9]+)$ ]]; then
    httpCode=${BASH_REMATCH[1]}
    body=${output:0:-${#httpCode}}

    echo -e "$body"

    if (($httpCode < 200 || $httpCode >= 300)); then
        # Remove this if you want to have pure output even in
        # case of failure:
        echo
        echo "Failure HTTP response code: ${httpCode}"
        exit 1
    fi
else
    echo -e "$output"
    echo
    echo "Cannot get the HTTP return code"
    exit 1
fi

So then it's just business as usual, but instead of curl do ./curl-wrapper.sh:

So when the result falls in the 200-299 range:

./curl-wrapper.sh www.google.com 
# ...the same output as pure curl would return...
echo $?
# 0

And when the result is outside the 200-299 range:

./curl-wrapper.sh www.google.com/no-such-page
# ...the same output as pure curl would return - plus the line
#    below with the failed HTTP code, this line can be removed if needed:
#
# Failure HTTP response code: 404
echo $?
# 1

Just do not pass the "-w|--write-out" argument, since the script adds that itself.

0
votes

I used the following way of getting both the return code and the response body in the console.

NOTE: tee appends the output to a file as well as printing it to the console, which solved my purpose.

Sample CURL call for reference:

curl -s -i -k --location --request POST "${HOST}:${PORT}/api/14/project/${PROJECT_NAME}/jobs/import" \
--header 'Content-Type: application/yaml' \
--header "X-Rundeck-Auth-Token: ${JOB_IMPORT_TOKEN}" \
--data "$(cat "$yaml_file")" &>/dev/stdout | tee -a "$response_file"

return_code=$(cat "$response_file" | head -3 | tail -1 | awk '{print $2}')

if [ "$return_code" != "200" ]; then
  echo -e "\nJob import api call failed with rc: $return_code, please rerun or change the pipeline script."
  exit $return_code
else
  echo "Job import api call completed successfully with rc: $return_code"
fi

Hope this helps a few people.

-2
votes
# loop forever, printing the status code and effective URL of each request
while : ; do curl -sL -w "%{http_code} %{url_effective}\\n" http://host -o /dev/null; done