XenonStack

A Stack Innovator

Showing posts with label kubernetes. Show all posts

Saturday, 14 December 2019


Microservices Architecture Design and Best Practices 


What is Microservices Architecture?

The microservices architecture style is an approach to developing an application as a suite of small services, each running in its own process. It enables the continuous delivery and deployment of large, complex applications, and it allows an organization to evolve its technology stack over time.

Why Microservices Architecture?

Microservices came into the picture for building systems that had grown too big. The idea behind microservices is that some applications are easier to build and maintain when they are broken down into smaller applications that work together. Each component is developed and managed separately, and the application is then simply the sum of its constituent parts. In a traditional “monolithic” application, by contrast, everything is developed and deployed as one piece.

Microservices Architecture Design

Distributed architecture
All the services communicate with the API gateway through REST or RPC. These services can be deployed as multiple instances, and requests can be distributed across those instances.
Separately deployed components
Each component is deployed separately. If one component changes, the others don’t have to be redeployed.
Service components
Service components communicate with each other via service discovery.
Bounded by contexts
Each service encapsulates the details of a single domain and defines its integration with other domains. It is about implementing a single business capability.

Benefits of Adopting Microservices Architecture Design

  • Asynchronicity.
  • Easier integration and disintegration of components.
  • Simpler, decoupled deployments.
  • Evolutionary architecture.
  • Components are deployed independently.
  • Features are released independently.
  • Applications are composed through routing between services.
  • Easier to understand the code — It is easier to follow one small service and its flow than one big codebase.
  • Fast software delivery — Each service can be developed by different developers, and in different languages.
  • Efficient debugging — You don’t have to jump through multiple layers of an application; in essence, fault isolation is better.
  • Reusable — Since each is an independent service, it can be used in other projects as well.
  • Scalability:
  • Horizontal scaling.
  • Workload partitioning.
  • You don’t have to scale the whole project; only the component that needs it is scaled up.
  • Deployment — Only the service that has changed needs to be redeployed, not the whole project.
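The scaling benefit above can be sketched with a Kubernetes HorizontalPodAutoscaler that scales just one service while the rest of the application is untouched. This is an illustrative fragment only; the deployment name `cart-service` and the thresholds are hypothetical, not from this article.

```yaml
# Sketch: autoscale a single microservice independently of the others.
# "cart-service" is a hypothetical deployment name.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: cart-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cart-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```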

Characteristics of Microservices Architecture Design

  • Small in size
  • Messaging enabled
  • Bounded by contexts
  • Autonomously developed
  • Independently deployable
  • Decentralized
  • Built and released with automated processes

Continue Reading: XenonStack/Insights

Monday, 18 September 2017


Overview

In this blog, we will cover how to build a Slack-like online chat using Rocket.Chat and deploy it on containers using Docker and Kubernetes.

Before this, we were running the Rocket.Chat application on OpenStack instances in an on-premises deployment.

So we migrated our existing on-premises cloud infrastructure to containers based on Docker and Kubernetes.

As per the official Docker documentation, Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.

Kubernetes is a container orchestration layer on top of the container runtime/engine that manages and deploys containers effectively.


Prerequisites for Rocket Chat Deployment on Kubernetes

For deployment you need:
Kubernetes automates the deployment, scaling, management, and orchestration of containerized applications. We can use a Kubernetes cluster or, for testing purposes, Minikube.

For Shared Persistent Storage, we are using GlusterFS. GlusterFS is a scalable network file system.

Rocket.Chat is a Web-based Chat Server, developed in JavaScript, using the Meteor full stack framework.

A Dockerfile is a text document that contains all the commands needed to configure an application in its container.

The registry is an online store for container images and lets you distribute them.

We can use any of the following container registries for storing images, such as Docker Hub, AWS ECR, Google Container Registry, or Azure Container Registry.

Kubectl is a command line tool to manage a Kubernetes cluster remotely; you can also configure it on your machine by following this link.

Note - If you are using the official images of Rocket Chat and MongoDB, then you can skip Steps 1, 2, 3, 4, 5, 6 and move forward to Storage Volume (Step 7).


Step 1 - Create a Rocket Chat Container Custom Image

Create a file named “Dockerfile” for the Rocket Chat container image.

$ touch Dockerfile

Now add the following content to the Dockerfile of the Rocket Chat application -

FROM node:4-slim
MAINTAINER XenonStack
COPY bundle/ /app/
RUN cd /app/programs/server && npm install
ENV PORT=3000 \
ROOT_URL=http://localhost:3000
EXPOSE 3000
CMD ["node", "/app/main.js"]

The Rocket Chat application is based on NodeJS, so we use the NodeJS docker image from Docker Hub as the base image.

After that, we copy the custom code of the Rocket Chat application into the container and install all of its required dependencies.


Step 2 - Build Rocket Chat Docker Custom Image

$ docker build -t rocketchat:v1.0 .


Step 3 - Create a MongoDB Container Custom Image

Create a file named “Dockerfile” for the MongoDB container image in a new folder named mongodb.

$ mkdir mongodb && cd mongodb

$ touch Dockerfile

Now add the following content to the Dockerfile of MongoDB -
FROM ubuntu
MAINTAINER XenonStack
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.0.list && \
apt-get update && \
apt-get install -y mongodb-org
VOLUME ["/data/db"]
WORKDIR /data
EXPOSE 27017
CMD ["mongod"]

This MongoDB image uses Ubuntu as its base image, but we could also use the official docker image of MongoDB. We created this Dockerfile for MongoDB version 3.0 for compatibility reasons with the Rocket Chat application.

Next, we mount the volume “/data/db” for persistent storage of the container.

Then we expose port 27017 for incoming requests to the MongoDB server, and start the MongoDB server in foreground mode so that we can see its logs on the container’s “stdout”.


Step 4 - Building a MongoDB Docker Custom Image

$ docker build -t mongo:v3.0 .


Step 5 - Adding Container Registry to Docker Daemon

If you are using a docker registry other than Docker Hub to store images, then you need to add that container registry to your local docker daemon and to the Kubernetes docker nodes as well.

There are many ways to add a container registry to the docker daemon, depending on the operating system.

I will explain the one I use on a daily basis.
$ docker version
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:14:09 2017
OS/Arch: linux/amd64 (Ubuntu 16.04)

Now we need to create a “daemon.json” file in the below-mentioned location

$ sudo nano /etc/docker/daemon.json

And add the following content to it.
{

"insecure-registries": ["<name of your private registry>"]

}

Now Run the following commands to reload systemctl and restart docker daemon.

$ sudo systemctl daemon-reload

$ sudo service docker restart

To verify that your container registry is added to local docker daemon, use the below mentioned steps.

$ docker info

In the output of the above command, you will see your container registry listed like this

Insecure Registries:

<your container registry name>

127.0.0.0/8


Step 6 - Pushing Custom Container Images to the Container Registry

Let's start uploading our custom images to the container registry.

If you have authentication enabled on the container registry, then you need to log in first before you can upload or download images.

To log in, use the below-mentioned command

$ docker login <name of your container registry>

Username : xxxx

Password: xxxxx

For AWS ECR, you will get the registry URL, username, and password from the cloud provider when you launch the container registry.

Here is shell script that will add your aws credentials for Amazon ECR.

#!/bin/bash
pip install --upgrade --user awscli
mkdir -p ~/.aws && chmod 755 ~/.aws
cat << EOF > ~/.aws/credentials
[default]
aws_access_key_id = XXXXXX
aws_secret_access_key = XXXXXX
EOF
cat << EOF > ~/.aws/config
[default]
output = json
region = XXXXX
EOF
chmod 600 ~/.aws/credentials
ecr_login=$(aws ecr get-login --region XXXXX)
$ecr_login

Now we need to tag the rocketchat and mongo images and push them to any of the above-mentioned container registries.

To Tag images

$ docker tag rocketchat:v1.0 <name of your registry>/rocketchat:v1.0

$ docker tag mongo:v3.0 <name of your registry>/mongo:v3.0

To Push Images

$ docker push <name of your registry>/rocketchat:v1.0

$ docker push <name of your registry>/mongo:v3.0

Similarly, we can push images to any of the above-mentioned container registries like AWS ECR, Google Container Registry, or Azure Container Registry.


Step 7 - Create a Storage Volume (Using GlusterFS)

Using the below-mentioned commands, we create a volume in the GlusterFS cluster for MongoDB. As we are using GlusterFS as the persistent volume for the MongoDB container, we need to create the volume in GlusterFS first. Replace k8-master and k8-1 with the IP addresses or DNS names of the nodes you specified when installing GlusterFS.

$ gluster volume create mongodb-disk replica 2 transport tcp k8-master:/mnt/brick1/mongodb-disk k8-1:/mnt/brick1/mongodb-disk
$ gluster volume start mongodb-disk
$ gluster volume info mongodb-disk


Figure - Information of Gluster Volume


Step 8 - Deploy MongoDB on Kubernetes

Deploying MongoDB single node on Kubernetes has the following prerequisites -
  • Docker Image: We created a Docker image for MongoDB in Step 4 and pushed it to Docker Hub or a private docker registry.
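The full manifests are in the linked article. As a hedged sketch only (not the article's own manifest), a single-node MongoDB Deployment following the same GlusterFS pattern as the PostgreSQL example elsewhere on this page might look like this; the namespace and endpoints names are assumptions.

```yaml
# Sketch only - single-node MongoDB reusing the image and Gluster volume
# created above; namespace and endpoints names are hypothetical.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodb
  namespace: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: mongodb
    spec:
      containers:
      - name: mongodb
        image: <name of your registry>/mongo:v3.0
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb-disk
      volumes:
      - name: mongodb-disk
        glusterfs:
          endpoints: glusterfs-cluster
          path: mongodb-disk
          readOnly: false
```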

Continue Reading The Full Article at - XenonStack.com/Blog

Thursday, 7 September 2017


Deploying Python Application on Docker & Kubernetes


Overview


In this post, we'll share the process of developing and deploying a Python application using Docker and Kubernetes, and of adopting DevOps in existing Python applications.

Prerequisites are mentioned below


To follow this guide you need

Kubernetes is an open source platform that automates container operations, and Minikube is best for testing Kubernetes in a local environment.

Kubectl is the command line interface to manage a Kubernetes cluster either remotely or locally. To configure kubectl on your machine, follow this link.

Shared Persistent Storage is permanent storage that we attach to Kubernetes containers. We will be using CephFS as a persistent data store for Kubernetes container applications.
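As a sketch of how such storage is attached (the names, sizes, and monitor address below are illustrative assumptions, not from this article), a CephFS-backed PersistentVolume and its claim might look like:

```yaml
# Sketch only - CephFS PersistentVolume plus claim; the monitor address,
# secret name, and sizes are hypothetical placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - <ceph-monitor-ip>:6789
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```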

Application Source Code is the source code that we want to run inside a Kubernetes container.

The Dockerfile contains all the actions that are performed to build the Python application.

The Registry is an online image store for container images.

The below-mentioned options are a few of the most popular registries.

2. AWS ECR

Dockerfile


The below-mentioned code is a sample Dockerfile for Python applications, in which we are using a Python 2.7 development environment.


FROM python:2.7
MAINTAINER XenonStack

# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app

# Setting Home Directory for containers
WORKDIR /usr/src/app

# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt

# Copying src code to Container
COPY . /usr/src/app

# Application Environment variables
ENV APP_ENV development

# Exposing Ports
EXPOSE 5035

# Setting Persistent data
VOLUME ["/app-data"]

# Running Python Application
CMD ["python", "wsgi.py"]
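The Dockerfile above assumes a `wsgi.py` entry point; a minimal hypothetical one, using only the standard library so the container's CMD has something to run, could look like the following. A real application would use a framework such as Flask or Django instead.

```python
# Minimal hypothetical wsgi.py matching the Dockerfile's CMD and EXPOSE 5035.
from wsgiref.simple_server import make_server


def application(environ, start_response):
    """Tiny WSGI app that answers every request with a greeting."""
    body = b"Hello from the Python container!\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]


if __name__ == "__main__":
    # Bind to all interfaces so the exposed container port is reachable.
    make_server("0.0.0.0", 5035, application).serve_forever()
```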

Building Python Docker Image

The Below mentioned command will build your application container image.

$ docker build -t <name of your python application>:<version of application> .

Publishing Container Image


To publish the Python container image, we can use different private/public registries like Docker Hub, AWS ECR, Google Container Registry, or a private Docker registry.

  • Adding Container Registry to Docker Daemon
If you are using a docker registry other than Docker Hub to store images, then we need to add that container registry to our local docker daemon and the Kubernetes docker daemons.

You must have the following things in place to follow the next steps.

$ docker version
Client:
 Version:   17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:     Mon Mar 27 17:14:09 2017
 OS/Arch:   linux/amd64 (Ubuntu 16.04)


Now we need to create a “daemon.json” file in the below-mentioned location

$ sudo nano /etc/docker/daemon.json


And add the following content to it.

{
 "insecure-registries": ["<name of your private registry>"]
}


Now Run the following commands to reload systemctl and restart docker daemon.

$ sudo systemctl daemon-reload
$ sudo service docker restart


To verify that your container registry is added to local docker daemon, use the below-mentioned steps.

$ docker info


In the output of the above command, you will see your container registry listed like this

Insecure Registries:
 <your container registry name>
 127.0.0.0/8

  • Pushing container Images to Registry

I'm using AWS ECR for publishing container images.

You must have an AWS account with Amazon ECR permissions. Create an AWS ECR repository using the below-mentioned link.

After creation, you will get the registry URL, username, and password from your AWS cloud account.

Here is a shell script that will add your AWS credentials for Amazon ECR in your local system so that you can push images to AWS ECR.

#!/bin/bash
pip install --upgrade --user awscli

mkdir -p ~/.aws && chmod 755 ~/.aws

cat << EOF > ~/.aws/credentials
[default]
aws_access_key_id = XXXXXX
aws_secret_access_key = XXXXXX
EOF

cat << EOF > ~/.aws/config
[default]
output = json
region = XXXXX
EOF

chmod 600 ~/.aws/credentials

ecr_login=$(aws ecr get-login --region XXXXX)
$ecr_login

Now we need to retag the Python application image and push it to the container registry (AWS ECR in this example).

To retag the application container image

$ docker tag <name of your application>:<version of your application> <aws ecr repository link>/<name of your application >:<version of your application>

Continue Reading The Full Article At - XenonStack.com/Blog

Saturday, 10 June 2017


Deploying .NET Application on Docker & Kubernetes

Overview

 
In this post, we'll share the process of developing and deploying a .NET application using Docker and Kubernetes, and of adopting DevOps in existing .NET applications.

Prerequisites  

 

To follow this guide you need

  • Kubernetes - Kubernetes is an open source platform that automates container operations, and Minikube is best for testing Kubernetes.

  • Kubectl - Kubectl is the command line interface to manage a Kubernetes cluster either remotely or locally. To configure kubectl on your machine, follow this link.

  • Shared Persistent Storage - Shared Persistent Storage is permanent storage that we can attach to the Kubernetes container so that we don't lose our data even if the container dies. We will be using GlusterFS as a persistent data store for Kubernetes container applications.

  • .NET Application Source Code - Application Source Code is the source code that we want to run inside a Kubernetes container.

  • Dockerfile - The Dockerfile contains a bunch of commands to build the .NET application.

  • Container-Registry - The Container Registry is an online image store for container images.

Below mentioned options are few most popular registries.

Create a Dockerfile

 
The below-mentioned code is a sample Dockerfile for .NET applications, in which we are using the Microsoft .NET Core 1.1 SDK.

FROM microsoft/dotnet:1.1-sdk
# Setting Home Directory for application 
WORKDIR /app 

# copy csproj and restore as distinct layers
COPY dotnetapp.csproj .
RUN dotnet restore

# copy and build everything else
COPY . .
RUN dotnet publish -c Release -o out

EXPOSE 2223

ENTRYPOINT ["dotnet", "out/main.dll"]
 

Building .NET Application Image


The below-mentioned command will build your application container image.

$ docker build -t <name of your application>:<version of application> .
 

Publishing Container Image

 

Now we publish our .NET application container images to a container registry like Docker Hub, AWS ECR, Google Container Registry, or a private Docker registry.

We are using the Azure Container Registry for publishing container images.

You also need to sign up on the Azure cloud platform and then create a Container Registry using this link.

Now click the link to pull and push to the Azure Container Registry.

Similarly, we can push or pull any container image to any of the below-mentioned container registries like Docker Hub, AWS ECR, a private Docker registry, Google Container Registry etc.

Creating Deployment Files for Kubernetes


Deploying an application on Kubernetes is easy using deployment and service files, written in either JSON or YAML format.

  • Deployment File

The following content is for the “<name of application>.deployment.yml” file of the .NET container application.

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <name of application>
  namespace: <namespace of Kubernetes>
spec:
  replicas: <number of application pods>
  template:
    metadata:
      labels:
        k8s-app: <name of application>
    spec:
      containers:
      - name: <name of application>
        image: <image name>:<version tag>
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 2223
 

  • Service File

The following content is for the “<name of application>.service.yml” file of the .NET container application.

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: <name of application>
  name: <name of application>
  namespace: <namespace of Kubernetes>
spec:
  type: NodePort
  ports:
  - port: 2223
  selector:
    k8s-app: <name of application>
 
 

Running .NET Application on Kubernetes


The .NET application container can be deployed either through the Kubernetes Dashboard or kubectl (command line).

I'm explaining the command line approach, which you can use in a production Kubernetes cluster.

$ kubectl create -f <name of application>.deployment.yml
$ kubectl create -f <name of application>.service.yml
 
Now we have successfully deployed .NET Application on Kubernetes.

Verification


We can verify application deployment either by using Kubectl or Kubernetes Dashboard.

The below-mentioned command will show you the running pods of your application with status running/terminated/stopped/created.

$ kubectl get po --namespace=<namespace of kubernetes> | grep <application name>
 
 
Result of the above command:

Tuesday, 6 June 2017


Top 10 Things To Know in DevOps

Introduction To DevOps


DevOps is a modern software engineering culture and set of practices for developing software in which the development and operations teams work hand in hand as one unit, unlike traditional ways of working where they worked individually to develop software or provide the required services.

The traditional methods before DevOps were time-consuming and lacked understanding between the different departments of software development, which led to more time for updates and bug fixes, ultimately leading to customer dissatisfaction. Even to make a small change, the developer had to rework the software from the beginning.

That’s why we are adopting such a culture, that allows fast, efficient, reliable software delivery through production.


DevOps Features


  • Maximize speed of delivery of the product.
  • Enhanced customer experience.
  • Faster time to value.
  • Enables fast flow of planned work into production.
  • Use Automated tools at each level.
  • More stable operating environments.
  • Improved communication and collaboration.
  • More time to innovate.


DevOps Consists of 5 C’s


DevOps practices lead to high productivity, fewer bugs, improved communication, enhanced quality, faster resolution of problems, more reliability, and better and more timely delivery of software.

  • Continuous Integration
  • Continuous Testing
  • Continuous Delivery
  • Continuous Deployment
  • Continuous Monitoring



 

1. Continuous Integration 


Continuous integration means isolated changes are tested and reported when they are added to a larger code base. The goal of continuous integration is to give rapid feedback so that any defect can be identified and corrected as soon as possible.

Jenkins is used for continuous integration, following the 3-step rule: build, test, and deploy. Here the developer commits frequent changes to the source code in a shared repository, several times a day.

Along with Jenkins, there are more tools too, e.g. BuildBot, Travis etc. Jenkins is widely used because it provides plugins for testing, reporting, notification, deployment etc.

2. Continuous Testing 


Continuous testing is done to obtain immediate feedback on the business risk associated with a software release. It is a difficult but essential part of delivering software, and the quality of software depends on it. Testing helps the developer balance quality and speed. Automated tools are used, as it is easier to test continuously than to test the whole software at the end. A widely used tool for testing is Selenium.

3. Continuous Delivery 


Continuous delivery is the ability to move changes (new features, configuration changes, bug fixes, and experiments) into production. Our motive for doing continuous delivery is continuous daily improvement. If there is any kind of error in the production code, we can quickly fix it at that time. So here we are developing and deploying our application rapidly, reliably, and repeatedly with minimum overhead.

4. Continuous Deployment 


The code is automatically deployed to the production environment once it passes all the test cases. Continuous versioning ensures that multiple versions of the code are available in the proper places. Here every change that passes is put into production, automatically resulting in many deployments to the production environment every day.


5. Continuous Monitoring 


Continuous monitoring is a reporting practice through which developers and testers understand the performance and availability of their application, even before it is deployed to operations. The feedback provided by continuous monitoring is essential for lowering the cost of errors and of change. The Nagios tool is commonly used for continuous monitoring.

Learn How XenonStack DevOps Solutions can help you Enable Continuous Delivery Pipeline Across Cloud Platforms for Increased Efficiency and Reduced Cost Or Talk With Our Experts

Key Technologies and Terminologies In DevOps

 

6. Microservices


Microservices is an architectural style of developing a complex application by dividing it into smaller modules/microservices. These microservices are loosely coupled, deployed independently, and each is owned by a small, focused team.

With microservices, developers can decide how to use and design each service, which language to choose, and which platform to run, deploy, and scale it on.


Advantages Of Microservices


  • Microservices can be developed in different programming languages.
  • Errors in any module or microservice can easily be found, which saves time.
  • Smaller modules or microservices are easier to manage.
  • Whenever an update is required, it can be pushed immediately to that particular microservice; otherwise, the whole application would need to be updated.
  • According to client need, we can scale a particular microservice up and down without affecting the other microservices.
  • It also leads to an increase in productivity.
  • If any one module goes down, the application remains largely unaffected.

Disadvantages Of Microservices


  • If an application involves a large number of microservices, then managing them becomes a little difficult.
  • Microservices lead to more memory consumption.
  • In some cases, testing microservices becomes difficult.
  • In production, it also leads to the complexity of deploying and managing a system comprised of different types of services.


7. Containers

 

 

Containers create a virtualized environment that allows us to run multiple applications or operating systems without interfering with each other.

With containers, we can quickly, reliably, and consistently deploy our applications, because each container gets its own share of CPU, memory, network resources, and block I/O while sharing the kernel of the host operating system.

Containers are lightweight because they don't need the extra load of a hypervisor; they run directly within the host machine.

Before containers, we faced the problem that code could run easily in the developer's environment, but dependency issues occurred when executing it in the production environment.

Then virtual machines came, but they were heavyweight, which led to wasted RAM, and the processor was also not utilized completely. If we need more than 50 microservices to run, VMs are not the best option.

Docker is a lightweight container runtime whose images occupy comparatively little space. To run Docker natively, we need a Linux distribution such as Ubuntu as the host machine.


Terms used in Docker:

Docker Hub - A cloud-hosted service provided by Docker. Here we can upload our own images or pull images from public repositories.

Docker Registry - The storage component for docker images. We can store images in either a public or a private repository. We use this to integrate image storage with our in-house development workflow and to control where images are stored.

Docker Images - Read-only templates that are used to create containers. Built by a docker user and stored on Docker Hub or a local registry.

Docker Containers - The runtime instances of a Docker image, built from one or more images.

Hence Docker helps in resolving dependency issues, providing application isolation, and enabling faster development.


Advantages Of Containers


  • Wastage of resources like RAM, processor, and disk space is controlled, as there is no need to pre-allocate these resources; they are met according to application requirements.
  • It's easy to share a container.
  • Docker provides a platform to manage the lifecycle of containers.
  • Containers provide a consistent computation environment.
  • Containers can run separate applications within a single shared operating system.

8. Container Orchestration


Container orchestration is the automated arrangement, coordination, and management of containers and the resources they consume during the deployment of a multi-container packaged application.

Various features of container orchestration include

  • Cluster Management - The developer's task is limited to launching a cluster of container instances and specifying the tasks that need to run. Management of all containers is done by the orchestrator.

  • Task Definitions - Allows the developer to define tasks, specifying the number of containers required for the task and their dependencies. Many tasks can be launched through a single task definition.

  • Programmatic Control - With simple API calls one can register and deregister tasks, and launch and stop Docker containers.

  • Scheduling - Container scheduling places containers onto the cluster according to the resources they need and the availability of those resources.

  • Load Balancing - Helps in distributing traffic across the containers/deployment.

  • Monitoring - One can monitor CPU and memory utilization of running tasks and also get alerted if containers need scaling.
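Scheduling and monitoring, as described above, surface in a Kubernetes container spec as resource requests and probes. A hedged fragment follows; the names, paths, and values are illustrative assumptions.

```yaml
# Illustrative container-spec fragment: resource requests drive scheduling
# decisions; the liveness probe feeds monitoring and automatic restarts.
containers:
- name: web
  image: example/web:v1.0
  resources:
    requests:
      cpu: 250m        # the scheduler places the pod on a node with this much free CPU
      memory: 128Mi
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
```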

Tools used for Container Orchestration


For container orchestration, different tools are used. A few are open source tools like Kubernetes and Docker Swarm, which can be used privately; there are also paid, managed offerings such as AWS ECS from Amazon, Google Container Engine, and Azure Container Service from Microsoft.

Some of these tools are briefly explained below:


 


  • Amazon ECS - Amazon ECS is a product from Amazon Web Services that provides a runtime environment for Docker containers and provides orchestration. It allows running Dockerized applications on top of Amazon's infrastructure.


  • Docker Swarm - An open source tool, part of Docker's landscape. With this tool, we can run multiple docker engines as a single virtual Docker. It is Docker's own container orchestration tool, consisting of manager and worker nodes: managers distribute tasks across the cluster, and worker nodes run the containers assigned by the managers.

  • Google Container Engine - Google Container Engine allows us to run Docker containers on the Google Cloud Platform. It schedules containers into the cluster and manages them according to the given requirements. It is built on top of Kubernetes, an open source container orchestration tool.

    Continue Reading About Latest DevOps Trends At: XenonStack.com/Blog

Wednesday, 22 March 2017


How To Deploy PostgreSQL on Kubernetes


What is PostgreSQL?


PostgreSQL is a powerful, open source Relational Database Management System.

PostgreSQL is not controlled by any organization or any individual. Its source code is available free of charge. It is pronounced as "post-gress-Q-L".

PostgreSQL has earned a strong reputation for its reliability, data integrity, and correctness.
  • It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, MacOS, Solaris, Tru64), and Windows.
  • It is fully ACID compliant, and has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages).
  • It includes most SQL:2008 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP.
  • It also supports storage of binary large objects, including pictures, sounds, or video.
  • It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.


Prerequisites


To follow this guide you need -


Step 1 - Create a PostgreSQL Container Image

Create a file named “Dockerfile” for PostgreSQL. This image contains our custom configuration; the Dockerfile will look like -

FROM ubuntu:latest
MAINTAINER XenonStack

RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8

RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" > /etc/apt/sources.list.d/pgdg.list

RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6

RUN /etc/init.d/postgresql start &&\
 psql --command "CREATE USER root WITH SUPERUSER PASSWORD 'xenonstack';" &&\
 createdb -O root xenonstack

RUN echo "host all  all 0.0.0.0/0  md5" >> /etc/postgresql/9.6/main/pg_hba.conf

RUN echo "listen_addresses='*'" >> /etc/postgresql/9.6/main/postgresql.conf

# Expose the PostgreSQL port
EXPOSE 5432

# Add VOLUMEs to allow backup of databases
VOLUME  ["/var/lib/postgresql"]

# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.6/bin/postgres", "-D", "/var/lib/postgresql", "-c", "config_file=/etc/postgresql/9.6/main/postgresql.conf"]

This Postgres image uses Ubuntu Xenial as its base image. After that, we create a superuser and the default database. Exposing port 5432 helps external systems connect to the PostgreSQL server.

Step 2 - Build PostgreSQL Docker Image


$ docker build -t dr.xenonstack.com:5050/postgres:v9.6 .

Step 3 - Create a Storage Volume (Using GlusterFS)

Using the below-mentioned commands, create a volume in GlusterFS for PostgreSQL and start it.

As we don't want to lose our PostgreSQL database data just because a Gluster server dies in the cluster, we use replica 2 or more for higher availability of the data.


$ gluster volume create postgres-disk replica 2 transport tcp k8-master:/mnt/brick1/postgres-disk  k8-1:/mnt/brick1/postgres-disk
$ gluster volume start postgres-disk
$ gluster volume info postgres-disk





Step 4 - Deploy PostgreSQL on Kubernetes

Deploying PostgreSQL on Kubernetes has the following prerequisites -
  • Docker Image: We have created a Docker image for Postgres in Step 2
  • Persistent Shared Storage Volume: We have created a persistent shared storage volume in Step 3
  • Deployment & Service Files: Next, we will create the deployment & service files

Create a file named “deployment.yml” for PostgreSQL. This deployment file will look like -

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
  namespace: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: postgres
    spec:
      containers:
      - name: postgres
        image: dr.xenonstack.com:5050/postgres:v9.6
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: superpostgres
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        glusterfs:
          endpoints: glusterfs-cluster
          path: postgres-disk
          readOnly: false

Continue Reading The Full Article At - XenonStack.com/Blog