I also need to simply list the contents of a bucket. Ideally I would like something similar to what tf.gfile provides; tf.gfile has support for determining whether an entry is a file or a directory.
I tried the various links provided by @jterrace above, but my results were not optimal. With that said, it's worth showing the results. Given a bucket that has a mix of "directories" and "files", it's hard to navigate the "filesystem" to find items of interest. I've added some comments in the code below on how the referenced code behaves.
In either case, I am using a Datalab notebook with credentials included by the notebook. Given the results, I will need to use string parsing to determine which files are in a particular directory. If anyone knows how to expand these methods, or knows of an alternate method to traverse the directories similar to tf.gfile, please reply.
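For reference, this is roughly the tf.gfile behavior I'm after (a sketch, assuming a TensorFlow 1.x build with GCS support; the path is illustrative):
import tensorflow as tf

GCS_DIR = 'gs://bucket-sounds/UrbanSound/data'

# tf.gfile treats a GCS path like a filesystem: ListDirectory returns the
# entries directly under a path, and IsDirectory tells files and
# "directories" apart.
for entry in tf.gfile.ListDirectory(GCS_DIR):
    full_path = GCS_DIR + '/' + entry
    kind = 'dir' if tf.gfile.IsDirectory(full_path) else 'file'
    print(kind, full_path)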
Method One
import json
import googleapiclient.discovery
BUCKET = 'bucket-sounds' 
def create_service():
    return googleapiclient.discovery.build('storage', 'v1')
def list_bucket(bucket):
    """Returns a list of metadata of the objects within the given bucket."""
    service = create_service()
    # Create a request to objects.list to retrieve a list of objects.
    fields_to_return = 'nextPageToken,items(name,size,contentType,metadata(my-key))'
    #req = service.objects().list(bucket=bucket, fields=fields_to_return)  # returns everything
    #req = service.objects().list(bucket=bucket, fields=fields_to_return, prefix='UrbanSound')  # returns everything. UrbanSound is top dir in bucket
    #req = service.objects().list(bucket=bucket, fields=fields_to_return, prefix='UrbanSound/FREE') # returns the file FREESOUNDCREDITS.TXT
    #req = service.objects().list(bucket=bucket, fields=fields_to_return, prefix='UrbanSound/FREESOUNDCREDITS.txt', delimiter='/') # same as above
    #req = service.objects().list(bucket=bucket, fields=fields_to_return, prefix='UrbanSound/data/dog_bark', delimiter='/') # returns nothing
    req = service.objects().list(bucket=bucket, fields=fields_to_return, prefix='UrbanSound/data/dog_bark/', delimiter='/') # returns files in dog_bark dir
    all_objects = []
    # If you have too many items to list in one request, list_next() will
    # automatically handle paging with the pageToken.
    while req:
        resp = req.execute()
        all_objects.extend(resp.get('items', []))
        req = service.objects().list_next(req, resp)
    return all_objects
# usage
print(json.dumps(list_bucket(BUCKET), indent=2))
This generates results like this:
[
  {
    "contentType": "text/csv", 
    "name": "UrbanSound/data/dog_bark/100032.csv", 
    "size": "29"
  }, 
  {
    "contentType": "application/json", 
    "name": "UrbanSound/data/dog_bark/100032.json", 
    "size": "1858"
  },
  ...remaining items snipped...
]
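As a possible refinement (a sketch that reuses create_service() and BUCKET from above): when delimiter='/' is set, the objects.list response also carries a documented prefixes array listing the "subdirectories" under the prefix, which would remove the need for string parsing:
def list_dir(bucket, prefix):
    """Returns the file names and "subdirectory" prefixes directly under prefix."""
    service = create_service()
    req = service.objects().list(bucket=bucket,
                                 fields='nextPageToken,items(name),prefixes',
                                 prefix=prefix, delimiter='/')
    files, dirs = [], []
    while req:
        resp = req.execute()
        files.extend(obj['name'] for obj in resp.get('items', []))
        dirs.extend(resp.get('prefixes', []))  # e.g. 'UrbanSound/data/dog_bark/'
        req = service.objects().list_next(req, resp)
    return files, dirs

print(json.dumps(list_dir(BUCKET, 'UrbanSound/data/'), indent=2))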
Method Two
import sys
from google.cloud import storage
BUCKET = 'bucket-sounds'
# Create a Cloud Storage client.
gcs = storage.Client()
def my_list_bucket(bucket_name, limit=sys.maxsize):
  """Prints the names of up to `limit` objects in the given bucket."""
  a_bucket = gcs.lookup_bucket(bucket_name)
  bucket_iterator = a_bucket.list_blobs()
  for resource in bucket_iterator:
    print(resource.name)
    limit = limit - 1
    if limit <= 0:
      break
my_list_bucket(BUCKET, limit=5)
This generates output like this:
UrbanSound/FREESOUNDCREDITS.txt
UrbanSound/UrbanSound_README.txt
UrbanSound/data/air_conditioner/100852.csv
UrbanSound/data/air_conditioner/100852.json
UrbanSound/data/air_conditioner/100852.mp3
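The google.cloud.storage client can make the same split (a sketch using the gcs client created above): list_blobs() accepts prefix and delimiter arguments, and once the iterator has been consumed, its prefixes attribute holds the "subdirectories":
def my_list_dir(bucket_name, prefix):
  """Prints the files directly under prefix, then its "subdirectories"."""
  a_bucket = gcs.lookup_bucket(bucket_name)
  iterator = a_bucket.list_blobs(prefix=prefix, delimiter='/')
  for blob in iterator:               # only blobs directly under prefix
    print('file:', blob.name)
  for subdir in iterator.prefixes:    # populated once iteration finishes
    print('dir: ', subdir)

my_list_dir(BUCKET, 'UrbanSound/data/')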