0
votes

From what I understand, there are quite a few ways to add an application to a project in the OpenShift Container Platform.

  • Build a Docker image from source (S2I) and then deploy the image to OpenShift
  • Build a Docker image from a Dockerfile and then deploy the image to OpenShift
  • Build a Docker image separately (outside OpenShift) and then deploy the image to OpenShift
  • Build a binary of the application separately (outside OpenShift) and then deploy the binary to OpenShift

I understand there's no one size that fits all, but I'm after answers that highlight the circumstances around each option, e.g. why would I choose one particular option over the others?

Update: Perhaps another way of asking the question is: what approach does your company use, and why?

2
You can also run the s2i program outside of OpenShift to create an image that you then deploy to OpenShift. That doesn't necessarily explain why one way would be used over another, but if you haven't already, you might read openshift.com/deploying-to-openshift, which covers each. - Graham Dumpleton

2 Answers

2
votes

I think you are referring to the different build strategy options, and in your question you have probably missed the Pipeline build strategy.

Source to Image

Suppose you have developed a basic NodeJS back-end application. Your aim is to get it up and running in an OpenShift cluster quickly; by quickly, I mean without having to pick up Docker knowledge first. In such a scenario, you would use S2I.

  1. Include a start script in package.json that points to node index.js.
  2. Commit code to GitHub (or any other Git repository).
  3. oc new-app .
  4. Done.
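The steps above can be sketched with the oc CLI; the repository URL and application name here are placeholders, and the commands assume you are already logged in to a project:

```shell
# S2I: build and deploy straight from a Git repository.
# OpenShift detects the NodeJS source and picks a builder image.
oc new-app https://github.com/your-org/your-node-app.git --name=my-node-app

# Follow the build log, then expose the service once it is running.
oc logs -f bc/my-node-app
oc expose svc/my-node-app
```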

Docker

Continuing with the basic NodeJS back-end example, you may realise that the base NodeJS image used by OpenShift is not suitable for you (for any number of reasons). Maybe you want to use the official Docker Hub NodeJS image. Of course, there may be other, non-NodeJS reasons to build a container for your application, e.g. special environment variables. In that case, you would use a Dockerfile.
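A minimal sketch of such a Dockerfile, assuming the index.js entry point from the example above and a hypothetical environment variable:

```dockerfile
# Hypothetical Dockerfile for the basic NodeJS back-end above.
FROM node:18

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install

COPY . .

# Example of a special environment variable baked into the image.
ENV NODE_ENV=production

CMD ["node", "index.js"]
```

OpenShift can build this with the Docker build strategy, e.g. `oc new-app . --strategy=docker` from the directory containing the Dockerfile.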

Docker image from Docker Hub

You could also deploy a pre-built Docker image. Note that, by default, OpenShift's security constraints do not allow you to run containers as root. The linked article calls this out and describes a workaround for the constraint.
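Deploying a pre-built image can be sketched as follows; the image name is a placeholder, and granting the anyuid security context constraint is only one possible workaround (it requires cluster-admin rights):

```shell
# Deploy a pre-built image from Docker Hub.
oc new-app docker.io/your-org/your-image:latest --name=my-app

# If the image insists on running as root, one workaround is to grant
# the anyuid SCC to the deployment's service account.
oc adm policy add-scc-to-user anyuid -z default
```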

RedHat Registry

Same as the previous case; however, this deployment needs secrets, as described in this article (requires login).
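As a sketch, creating and linking a pull secret for the Red Hat registry looks roughly like this; the secret name and credentials are placeholders:

```shell
# Create a pull secret for registry.redhat.io (placeholder credentials).
oc create secret docker-registry redhat-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username=<your-username> \
  --docker-password=<your-password>

# Allow the default service account to use it when pulling images.
oc secrets link default redhat-pull-secret --for=pull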

So, as long as your container registry can be accessed in an OpenShift 'blessed' manner, you can definitely build the image outside and have it pulled in for deployment.

Binary

The closest I can quote as a 'scenario' is probably this question. Otherwise, perhaps the build approach is so niche that it is better done in a 'controlled' environment before the result is deployed.
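A binary build can be sketched as follows; the build name is a placeholder, and the local directory is assumed to contain a Dockerfile alongside the pre-built artifact:

```shell
# Create a binary build that accepts locally built artifacts.
oc new-build --name=my-binary-app --binary --strategy=docker

# Upload the local directory and start the build from it.
oc start-build my-binary-app --from-dir=. --follow
```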

My preference

Speaking for myself, I would follow this path.

  1. The developers (including myself) need to focus on getting the functionality done - so, use S2I.
  2. The operations team (including myself) needs to focus on packaging - so, take the functionality developed in step 1 and convert it into a Dockerfile, or even a Jenkinsfile.
  3. The management team (including myself) is sensitive about putting images on the internet - so, push the Docker images to a private registry.
  4. If all of the above is to be done for something proprietary, use the binary build option.

Hope this helps.

1
votes

My personal thoughts on each option are as follows: