Cloudifying your legacy applications

In this article we will be dealing with OpenShift and Kubernetes. You can find explanations of the terms used at the end of the article.

If you want to take the necessary steps to upgrade your own application, the first thing to do is to turn your pile of code into a container image.

First step: containerization

The Source-to-Image (s2i) method:
Source-to-Image is an OpenShift tool for building reproducible Docker images by injecting source code into a builder image. The new image includes the base (builder) image plus the application built from the injected code.
What are the advantages?

  • There will be no Dockerfile, so there won’t be any container-specific junk in the code
  • The commands running inside the container won’t run as root (so it’s safe for the enterprise)
  • Operations teams can inspect and control s2i builders for security
  • You can build your own custom builder image on top of an existing stock builder image (by adding layers to the base image)

How does it work?
You point s2i at your source code and a builder image, and it assembles a new application image:
$ s2i build <source> <builder-image> <output-image-tag>
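For example (the repository, builder image and output tag below are only illustrative; any supported builder image works the same way):
$ s2i build https://github.com/sclorg/django-ex registry.access.redhat.com/ubi8/python-39 myapp:latest
$ docker run -p 8080:8080 myapp:latest
The same command also accepts a local directory as the source instead of a Git URL.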
Advantages of building a custom builder image:

  • Reduced build time: you don’t need to download the dependencies on every build, you can simply pre-package them into the builder (see the sketch after this list)
  • But beware! Leave some flexibility for the developers, so they can add new modules at runtime when possible
  • It can be done within a week (depending on the application)
  • You can consider using more than one container, but it’s an app-by-app decision
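A minimal sketch of how such a custom builder can be scaffolded with the s2i CLI (all names here are illustrative, not from a real project):
$ s2i create my-python-builder ./my-python-builder   # generates a Dockerfile plus the s2i scripts: assemble, run, save-artifacts, usage
$ cd my-python-builder
# edit the Dockerfile and the assemble script to pre-package your dependencies
$ docker build -t my-python-builder .                # build the builder image itself
$ s2i build . my-python-builder myapp:latest         # then use it exactly like a stock builder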

 

Second step: configuration

When it comes to configuration there are three options:

  1. Baked-in config (config-and-run): the configuration is baked into the image at build time, so the container simply starts and runs with it
  2. Self-configuration (run-and-config): the application starts as PID 1 inside the container, looks for its own config and generates one if it doesn’t exist
  3. ConfigMap method (only on Kubernetes/OpenShift): a ConfigMap is a Kubernetes object with a name and a set of key-value pairs, which can be handed to the application as environment variables or mounted as files (see the example after this list)
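A minimal sketch of the ConfigMap approach (the names, keys and image tag are illustrative); the same object can also be created from the command line with oc create configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  DATABASE_HOST: db.example.com
  LOG_LEVEL: info
---
# consume the key-value pairs as environment variables in the pod spec
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest
      envFrom:
        - configMapRef:
            name: myapp-config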

Third step: cluster deployments

Using OpenShift: $ oc cluster up

  1. Take a laptop with Docker running on it.
  2. Download the oc binary.
  3. Run oc cluster up.
  4. It stands up an entire cluster and pulls down a registry and a router for running that cluster.

This is the fastest way to deploy an OpenShift cluster. You can go from zero to OpenShift in under 20 seconds – but on the first run it will pull down some Docker images.
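On a laptop that already has Docker running, the whole flow looks roughly like this (the status and whoami calls are just illustrative sanity checks; oc cluster up normally logs you in as the developer user):
$ oc cluster up   # stands up the cluster with its registry and router
$ oc status       # see what is running in the current project
$ oc whoami       # confirm which user you are logged in as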
Useful to keep in mind

  • Run it once: $ oc new-app … (see the example after this list)
  • Make it repeatable by building a reusable template that can be used in any namespace in an OpenShift cluster: $ oc export bc,dc,svc,is --as-template=myapp
  • Make it resilient through liveness and readiness probes
  • Make it stateful with persistent volumes and persistent volume claims
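A sketch of that workflow with illustrative names (the source repository, builder image and template name are only examples):
$ oc new-app python~https://github.com/sclorg/django-ex --name=myapp   # build and deploy straight from source
$ oc export bc,dc,svc,is --as-template=myapp > myapp-template.yaml     # capture the objects as a reusable template
$ oc new-app -f myapp-template.yaml                                    # re-create the app in any other project from the template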

Some explanation

Liveness check: for example, try to load the homepage of your application; if it doesn’t load, the pod (or container) is killed and a new one is started.
Readiness check: it checks whether the app is capable of doing the job it’s supposed to do. Can it handle requests? Until the check passes, no traffic is routed to the pod.
Pod: It is a group of one or more containers in Kubernetes. A pod also contains the shared storage for those containers and options about how to run the containers.
Persistent volume: It is a piece of networked storage in the cluster that has been provisioned by an administrator.
Persistent volume claim: It is a request for storage by a user.
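The snippet below ties these terms together: an illustrative pod spec with a liveness probe, a readiness probe, and a persistent volume claim mounted into the container (all names, ports and paths are examples, not from a real application):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest
      livenessProbe:            # if this fails, the container is killed and restarted
        httpGet:
          path: /
          port: 8080
        initialDelaySeconds: 30
      readinessProbe:           # if this fails, the pod stops receiving traffic
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
      volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data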

Have some fun at the end!

If you like role-playing games, or web applications that generate random data about fictional planets and characters, you will enjoy this.
http://swn.emichron.com/
