XenonStack

A Stack Innovator

Tuesday 27 March 2018

Deploying Microservices Based Java Application on Docker & Kubernetes

Overview

Running containers at any real-world scale requires a container orchestration and scheduling platform such as Docker Swarm, Apache Mesos, or AWS ECS, but the most popular of these is Kubernetes. Kubernetes is an open source system for automating the deployment and management of containerized applications.
In this post, we'll share how you can develop and deploy a microservices-based Java application on a container environment - Docker and Kubernetes - and adopt DevOps in an existing Java application.

Prerequisites

To follow this guide you need -
  • Kubernetes - an open source platform that automates container operations; Minikube is best for testing Kubernetes locally.
  • Kubectl - the command line interface to manage a Kubernetes cluster, either remotely or locally. To configure kubectl on your machine, follow this link.
  • Shared Persistent Storage - permanent storage that we can attach to a Kubernetes container so that we don't lose our data even if the container dies. We will be using GlusterFS as the persistent data store for Kubernetes container applications.
  • Application Source Code - the source code that we want to run inside a Kubernetes container.
  • Dockerfile - contains the commands used to build the Java application.
  • Registry - an online image store for container images. A few of the most popular registries are mentioned below.

Creating a Dockerfile

The code below is a sample Dockerfile for Java applications, in which we use Maven 3 as the builder and OpenJDK 8 as the Java development environment on Alpine Linux, chosen for its very compact size.
FROM maven:3-alpine

MAINTAINER XenonStack

# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app

# Setting Home Directory for containers
WORKDIR /usr/src/app

# Copying src code to Container
COPY . /usr/src/app

# Building From Source Code
RUN mvn clean package

# Setting Persistent drive
VOLUME ["/data"]

# Exposing Port
EXPOSE 7102

# Running Java Application
CMD ["java", "-jar", "target/<name of your jar>.jar"]

Building Java Application Image

$ docker build -t <name of your java application>:<version of application> .
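Once the image builds, it can be run locally to verify the application before it is pushed to a registry for Kubernetes to pull. The container name, host path, and registry URL below are placeholders for illustration, not values from a real project -

```shell
# Run the container in the background, publishing the exposed port 7102
# and mounting a host directory at the /data volume
docker run -d --name java-app -p 7102:7102 \
  -v /srv/java-app/data:/data \
  <name of your java application>:<version of application>

# Tag and push the image so that a Kubernetes cluster can pull it
docker tag <name of your java application>:<version of application> \
  registry.example.com/java-app:1.0
docker push registry.example.com/java-app:1.0
```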


Monday 19 March 2018

Test Driven & Behaviour Driven Development in Scala

Overview

With the evolutionary growth of the software development process, Test Driven Development has emerged as an approach that consolidates a test-first methodology with refactoring.
Test Driven Development is a key practice of extreme programming; it suggests that code is developed or changed exclusively on the basis of unit testing.

TDD (Test Driven Development)

It is a practice of writing a (failing) test prior to writing the code of a feature. Feature code is refined until it passes the unit test.
Steps for the same are given below -
  • Firstly, add a test.
  • Run all the tests and see if any new test fails.
  • Update the code to make it pass the new tests.
  • Run the tests again; if they fail, refactor and repeat.

BDD (Behaviour Driven Development)

Behaviour Driven Development (BDD) is similar to Test Driven Development; in other words, Behaviour Driven Development is an extended version of Test Driven Development.
The process is much the same: in Behaviour Driven Development, too, the test is written before the production code. The main difference is that in Behaviour Driven Development the test is written in plain, descriptive, English-like grammar, as opposed to Test Driven Development.
This type of development -
  • Explains the behaviour of the software/program.
  • Is user-friendly.
The major benefit of Behaviour Driven Development is that it can easily be understood by non-technical people as well.

What is Unit Testing?

Unit tests operate at a low level and are tightly coupled to the source code. In the best of worlds, one unit test always correlates to one minimal piece of functionality, and these functionalities are independent of the other components of the code.

What is Integration Testing?

Integration tests validate that software components work correctly when combined together. Only the interfaces between components are tested and it is assumed that each component has been individually unit tested.

Test Driven Development Workflow

Test Driven Development promotes the idea of each test case testing one piece of functionality at a time. The workflow of Test Driven Development is as follows - 
  • Write a concise, minimal test for a new piece of functionality. This test will fail since the functionality isn’t implemented yet (if it passes one knows that the functionality either already exists or that the test isn’t designed correctly).
  • Implement the new functionality and run all tests, both the new and pre-existing ones. Repeat until all tests pass.
  • Clean up the code and ensure that all tests still pass, then return to step 1.
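The workflow above can be sketched in a few lines. The post targets Scala (typically with ScalaTest or specs2), but the red-green-refactor loop is shown here in plain Python, with an invented fizzbuzz function as the feature under test -

```python
# Step 1 (red): write the test first. It fails initially because
# fizzbuzz does not exist yet; the function and its behaviour are
# invented purely for illustration.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): implement just enough functionality to pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3 (refactor): clean the code up, then re-run the suite and
# confirm it is still green before starting the next feature.
test_fizzbuzz()
```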

Test First Development in Agile

  • In this programming approach, we write the tests before the code, and all of these test cases are run whenever we test the functionality.
  • In test-first programming, the automated test cases are written before the functionality, and the developer then writes code to satisfy those test cases.

Test Driven Development

It is the practice of first writing a (failing) test prior to writing the code for a feature. Feature code is refined until it passes the test(s).
  • First, when we set out to design or implement a new feature, we check whether the existing design is the best one to support the new functionality.
  • If it is, we follow the TFD approach to implement the new functionality.
If it is not, we change the portion of the design affected by the new functionality, which lets us add new features very easily.


Tuesday 13 March 2018

Building Decentralized Applications on BlockChain and Overview of BlockChain Technology

What is a decentralized application?

Decentralized applications (or DApps) are serverless applications that run partly on the client side and partly within a blockchain-based distributed network, such as Ethereum.
The client device handles the front end and user credentials, while the back end runs within a distributed network of computers that provides the processing and storage requirements.

Features of Decentralized applications

  • Open Source: The application's code must be mostly open source, available for public scrutiny.
  • Decentralized: The application’s data should be stored on a public and decentralized blockchain platform.
  • Incentive: The application must utilize tokens/digital resources to reward its network supporters.
  • Protocol: The application must generate tokens using a cryptographic consensus algorithm to demonstrate proof of value.

Building Decentralized Applications

Publish White Paper on the DApp

Publishing a white paper defining the blueprint, features, and technicalities of the DApp is crucial, and is also the very first step. Your whitepaper should address a problem you wish to solve. It should state the intentions and goals of the Dapp.

Gather Community

Gain community involvement by stating the plan and discussing the proportion that will go to the development budget and other essential allocations. Being transparent about the distribution of tokens is critical.

Begin Development

When all is said and done, after securing the budget and refining the idea, it is time to start development. Once development has begun, it becomes crucial to share weekly or monthly updates, which helps build an ecosystem of community members.

Launch the Product

Launch the product with release notes stating the maintenance plans so that the community is involved.

Overview

There are different kinds of people in the world, who have different languages, different cultures, different eating styles but one of the things that brings them together is money, which is everybody's necessity.
In today’s fast moving and developing era, we need a safe and secure platform to be able to make online transactions, for whatever purpose they might be. In whatever transactions we do, banks are the third party involved in it, and they have all records of our transaction and of one’s account holdings.
Blockchain Technology aims at removing this third-party involvement and keeping the transactions only limited to the sender and the receiver. A broad idea behind this technology was to build a secure platform where safe transactions could happen in a very transparent manner.
With the help of Blockchain technology, we can save ourselves from any kind of overheads and third-party involvement, and sending or receiving money could become as hassle-free as sending emails. BlockChain Technology is a peer-to-peer software technology that protects a digital piece of information.

What is BlockChain Technology?

The origin of Blockchain technology is a little uncertain, and experts are still to find out for sure. It is said to have been invented by a person or a group of people known by the name of Satoshi Nakamoto in the year 2009.
Initially, it was developed to enable digital transactions between two parties in an anonymous fashion, without having the need of a third party to perform verification of the transaction.
The main inspiration behind the development of such a great technology at the time was to facilitate the transfer of Bitcoins, but it later caught on and is today being used for many other important things.
Blockchain technology is an open system for transaction processing that follows a distributed-ledger approach; its goals are to automate processes, reduce data storage costs, provide data security, and eliminate duplicates.
We can also say that BlockChain is a method of recording data, i.e., transactions, contracts, agreements that need to be recorded independently and verified as they are happening.
A very good way to understand the concept of Blockchain technology is the Google Docs analogy: just as a Google Doc shared between two people lets both of them make changes to the document at the same time, with each change visible to the other party, blockchain offers similar transparency.
BlockChain sounds like a revolution and is the underlying Technology behind Bitcoin. Truly, BlockChain is a mechanism that brings everyone to the highest degree of accountability, i.e., no more missed transaction, no more third party involvement in the transaction.
Blockchain guarantees the validity of a transaction by recording it not only on the main register but also by connecting distributed system of registers, and all of which are connected through a secure validation mechanism. It may have been invented to create the alternative currency Bitcoin but can be used for other purposes like online signature services, voting systems, and many other applications.
Rather than sending our payment information through central servers, in BlockChain Technology all transactions are copied and cross-checked between every computer in the system, which becomes very safe at scale.
BlockChain Technology is a type of distributed ledger, meaning a database that is consensually shared and synchronized across a network spread over multiple sites and institutions. Blockchain provides an unalterable, public record of digital transactions in packages called blocks. These digitally recorded "blocks" of data are stored in a linear chain, and each block contains cryptographically hashed data.

Key Applications of BlockChain

  • Capable of transforming
  • Making transactions faster
  • Reducing costs
  • More security
  • Transparency
  • Seamless and simultaneous integration of transactions
  • Settlements and ledger updates directly between multiple parties
  • Creates a secure way to share information
  • Conducts transactions without the need for a single, central party to approve them.
  • Only authorized network members can see details of their transactions, providing confidentiality and privacy.
  • All updates to the shared ledger are validated and recorded on all participants' ledgers, which drives security and accuracy.
  • All updates to the ledger are unchangeable and auditable, so network members can accurately trace their past activity.

How Does BlockChain Work?

The Blockchain technology enables direct transactions between two parties without any intermediary such as a bank or a governing body. It is essentially a database of all transactions happening in the network.
The database is public and therefore, not owned by any one party, it is distributed that is, it is not stored on a single computer. Instead, it is stored on many computers across the world. The database is constantly synchronized to keep the transactions up to date and is secured by the art of cryptography making it hacker proof.
The basic framework on which the whole blockchain technology works is actually two-fold: first, gathering data (transaction records); second, putting these blocks together securely in a chain with the help of cryptography.
Say a transaction happens; this transaction information is shared with everybody on the blockchain network. These transactions are individually timestamped, and once they are put together in a block, they are timestamped again as a whole block.
Now, this complete block is appended to the chain in the blockchain network. Other participants might also be adding to the network at the same time, but the timestamps added to each block takes care of the order in which the blocks are appended to the network.
The timestamps also take care of any duplicity issues hence, everybody on the network has the recent version of the chain available to them. The main cryptographic element that makes this whole system tamper-free is the hash function.
Each block's information is taken and a hash function is applied to it. The value computed from this is then stored in the next block and so on. So in this way, each block’s hash function value is being carried by the next block in the chain which makes tampering with the contents of the block very difficult.
Even if some changes are made to the block, one could easily find out because that block’s hash value would not be the same as the already calculated value of the hash function that was stored in the next block of the chain.
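The hash-linking described above can be sketched in a few lines of Python. The block structure here is a toy model invented for illustration, not any real blockchain's wire format -

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A tiny two-block chain: each block stores the hash of its predecessor
genesis = {"transactions": ["A pays B 5"], "prev_hash": "0" * 64}
block1 = {"transactions": ["B pays C 2"], "prev_hash": block_hash(genesis)}

# Tampering with the first block changes its hash, so it no longer
# matches the value recorded in the next block of the chain
tampered = dict(genesis, transactions=["A pays B 500"])
print(block_hash(tampered) == block1["prev_hash"])  # False
```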
Blockchain works as a network of computers. Cryptography is used to keep transactions secure, and they are shared among those in the network; after a transaction is validated, the details of the transfer are recorded on a public ledger that anyone on the network can see. In the existing financial system, the ledger is maintained by the institution that acts as the custodian of the information.
But on a blockchain the information is held transparently in a shared database, and no one party controls it, thus increasing the trust among parties.

How Does a Bitcoin Transaction Work?

For anyone, it is easy to download a simple piece of wallet software and install it on a computer. Because Bitcoin is decentralized, we do not need to register an account with any particular company or hand over any personal details; once you have a wallet, you can create addresses, which effectively become your identity within the network.
Suppose party A wants to send money to party B in the form of Bitcoins. The transaction is collected into a block; a block records some or all of the most recent Bitcoin transactions that have not yet entered any prior block. The new block is then broadcast to all the parties, the so-called nodes, in the network, and the parties in the network confirm that the transaction is valid through a process called mining.

Building Blocks With BlockChain

A very simple definition of blockchain is that "the blockchain is a distributed, digital ledger." One of the key features of the blockchain is that it is a 'distributed database'; that is to say, the database exists in multiple copies across multiple computers.

Concept of BlockChain Technology

  • Shared View - One of the most powerful features of BlockChain Technology is its shared view of data for all participants in a peer-to-peer network. Transaction records can be shared but cannot be altered. Shared views have two approaches: the Traditional Approach, where each party maintains its own independent ledger, and the Blockchain Approach, where all parties share and maintain the same ledger.
  • Cryptography - Cryptography is used to establish identity and protect the integrity of the underlying data. One of its concepts, hashing, is used in blockchain technology. Hashing is an effective means of determining whether a piece of data has been changed. It generates a fingerprint for a piece of data by applying a cryptographic function to it; changing one character in the original string results in a completely different hash value, and the original string cannot be reverse-engineered from the hash value.

CryptoCurrency - Everything You Need to Know

Cryptocurrency is a virtual currency which uses cryptography for security. The main defining features of this currency are its decentralized nature, its transparent ledgers, and its security, which makes it resistant to any kind of malicious manipulation. Bitcoin is the first and the most famous cryptocurrency. It was invented in 2009 and has seen an enormous rise in value ever since.
The cryptographic technique that it follows is SHA-256, a hash function that is used to secure every transaction before it is added to the blockchain. The underlying principle on which cryptocurrency works is that all transactions happen in a transparent, decentralized ledger for everyone to see and verify if need be. One can go back to the very first transaction of a user to verify their credibility by tracking back through the ledger.

Benefits of CryptoCurrencies

The advantages of cryptocurrency are manifold. To name a few:
  • No fraudulent activities - Transactions happening in this process cannot be reversed or tampered with in any other way without at least letting everybody know. The whole architecture is such that any discrepancies cannot go unnoticed and will be detected.
  • Faster transactions - The traditional way of settling payments involves a trusted third party, mainly banks, which makes the process take more time. In cryptocurrency, since payments are peer-to-peer without any involvement of a third party, the payment happens immediately without any delay.
  • No overheads - With the elimination of third-party involvement, the overheads in the form of service fees that these institutions charge are also eliminated. This makes cryptocurrencies more cost-effective.
  • Integrity of identity - Unlike credit cards, which always carry a risk of misuse since all the information regarding them is given to the vendor, this mode of payment does not divulge any more information than required and only pushes the amount of money that needs to be paid.

How is Blockchain Changing Money and Business?

Blockchain technology is likely to have a great impact over the next few decades. Currently, Blockchain is not the most thundering concept in the world, but it is believed that it will be the next generation of the internet. For the past few decades, we've had the internet for information.
The crucial difference between the Internet and BlockChain is that while the Internet enables the exchange of data, blockchain could enable the exchange of value, i.e., it could allow users to carry out trade and commerce across the globe without the need for payment processors, custodians, and settlement and clearing entities.
Trust is a very crucial thing, and blockchain is one of the first technologies in which trust is established peer to peer: not by some big institution, but by collaboration, by cryptography, and by some smart code. And because trust is native to the technology, we can call it "The Trusted Protocol."
Tuesday 6 March 2018

Docker Overview - A Complete Guide

Docker Ecosystem

Docker is an open platform tool that makes it easier to create, deploy, and execute applications by using containers. Docker containers allow us to separate applications from the infrastructure, so we can deploy software faster.
Docker's main components include Docker Swarm, Docker Compose, Docker Images, the Docker Daemon, and the Docker Engine.
We can manage our infrastructure in the same ways we manage our applications. A Docker container is like a virtual machine, but instead of creating a whole new virtual machine, it allows us to share the same Linux kernel.
The advantage of the Docker platform is shipping, testing, and deploying code quicker, so that we can reduce the time between writing code and running it in production.
And the most important thing about Docker is that it is open source, i.e., anyone can use it and can contribute to Docker to make it easier and to add features that aren't yet available.

Docker Platform

The advantage of Docker is the ability to build, package, and run an application in a sandboxed environment called a container.
The Docker container system utilizes operating-system-level virtualization to combine the components of an application system, and it works on every standard Linux machine.
The isolation and security factors allow us to execute many containers in parallel on a given system.
Containers are lightweight because they don't need the extra overhead of a hypervisor such as Hyper-V or VMware, but run directly within the machine's kernel. We can even run Docker containers within machines that are themselves virtual machines.

Docker Container Components

The core of Docker consists of the Docker Engine, Docker Containers, Docker Images, the Docker Client, the Docker Daemon, etc. Let's discuss the components of Docker.

Docker Engine

The Docker Engine is the part of Docker which creates and runs Docker containers. A Docker container is a live running instance of a Docker image. Docker Engine is a client-server based application with the following components -
  • A server, which is a continuously running service called a daemon process.
  • A REST API, which programs use to talk with the daemon and instruct it what to do.
  • A command line interface client.
The command line interface client uses the Docker REST API to interact with the Docker daemon through CLI commands. Many other Docker applications also use the API and CLI. The daemon process creates and manages Docker images, containers, networks, and volumes.

Docker Daemon

The docker daemon process is used to control and manage the containers. The Docker daemon listens to only Docker API requests and handles Docker images, containers, networks, and volumes. It also communicates with other daemons to manage Docker services.

Docker Client

The Docker client is the primary means by which Docker users communicate with Docker. When we use a command such as "docker run", the client sends the command to dockerd, which carries it out.
The commands the Docker client uses depend on the Docker API. In Docker, a client can interact with more than one daemon process.

Docker Images

Docker images are the building blocks of Docker; a Docker image is a read-only template with instructions for creating a Docker container. Docker images are the "build" part of the Docker life cycle.
Mostly, an image is based on another image, with some additional customization.
For example, we can build an image based on the centos image that installs the Nginx web server along with the application and configuration details needed to make the application run.
We can create our own images or use only those created by others and published in a registry. Building our own image is very simple: we need to create a Dockerfile, whose syntax describes the steps needed to create the image and run it.
Each instruction in a Dockerfile creates a new layer in the image. If we need to modify the Dockerfile, we can do so and rebuild the image; only the layers that have changed are rebuilt.
This is why images are so lightweight, small, and fast when compared to other virtualization technologies.

Docker Registries

A Docker registry stores Docker images. We can also run our own private registry.
When we execute the docker pull and docker run commands, the required images are pulled from our configured registry.
Using the docker push command, an image can be uploaded to our configured registry.

Docker Containers

A container is a runnable instance of an image. We can create, run, stop, or delete a container using the Docker CLI. We can connect a container to more than one network, or even create a new image based on its current state.
By default, a container is well isolated from other containers and from its host machine. A container is defined by its image and the configuration options that we provide when creating or running it.
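The lifecycle operations described above map directly onto Docker CLI commands. The container, network, and image names below are placeholders chosen for illustration -

```shell
# Create a container from an image without starting it
docker create --name web nginx:alpine

# Start the container
docker start web

# Connect the running container to an additional network
docker network create backend
docker network connect backend web

# Snapshot the container's current state as a new image
docker commit web web-snapshot:v1

# Stop and delete the container
docker stop web
docker rm web
```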

Namespaces

Docker uses a Linux kernel feature called namespaces to provide the isolated environment known as a container. When we run a container, Docker creates a set of namespaces for that particular container. The namespaces provide a layer of isolation. Some of the namespace layers are -
  • The PID namespace provides isolation for process allocation and process listings. A new namespace is isolated from processes in other namespaces, while the "parent" namespace can still see all processes in its child namespaces.
  • The network namespace isolates the network interface controllers, IP tables firewall rules, routing tables, etc. Network namespaces can be connected to each other using virtual Ethernet devices.

Control Groups

Docker Engine on Linux relies on a technology called control groups (cgroups). A control group limits an application to a predefined set of resources.
Control groups are used by Docker Engine to share the available hardware resources among containers.
Using control groups, we can, for example, define the memory available to a particular container.
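Resource limits are applied through ordinary docker run flags; under the hood, Docker translates them into cgroup settings. The image, container name, and limit values below are placeholders for illustration -

```shell
# Limit the container to 512 MB of RAM and one CPU
docker run -d --name limited --memory 512m --cpus 1 nginx:alpine

# Inspect the memory limit that was applied (reported in bytes)
docker inspect --format '{{.HostConfig.Memory}}' limited
```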

Union File Systems

A union file system is a file system that operates by creating layers, making it lightweight and fast. Docker Engine uses union file systems to provide the building blocks for containers.
Docker Engine supports many UnionFS variants, including AUFS, btrfs, vfs, and Device Mapper.

Container Format

Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer.

Docker File

A Dockerfile is a text file that consists of all the commands a user could call on the command line to build an image: choosing a base Docker image, adding and copying files, running commands, and exposing ports.
The Dockerfile can be considered the source code, and the image the compiled artifact for our running container. Dockerfiles are portable files which can be shared, stored, and updated as required. Some of the Dockerfile instructions are -
  • FROM - Sets the base image for subsequent instructions. It is very important that this be the first instruction of a Dockerfile.
  • MAINTAINER - Indicates the author of the Dockerfile; it is non-executable.
  • RUN - Executes a command on top of the existing layer and creates a new layer with the result of the command's execution.
  • CMD - Doesn't perform anything during the building of the image; it just specifies the default command to run in the container.
  • LABEL - Assigns metadata in the form of key-value pairs. It is best to use as few LABEL instructions as possible.
  • EXPOSE - Documents the specific ports the application listens on, as required by application servers.
  • ENV - Sets environment variables in the Dockerfile for the container.
  • COPY - Copies files and directories from a source folder to a destination folder in the image.
  • WORKDIR - Sets the current working directory for the other instructions, i.e., RUN, CMD, COPY, etc.
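The instructions above can be seen together in a short, annotated Dockerfile. The base image, port, and command here are hypothetical choices for illustration only -

```dockerfile
# Base image for all subsequent instructions
FROM alpine:3.18

# Metadata as key-value pairs
LABEL maintainer="you@example.com" version="1.0"

# Environment variable available at build time and in the container
ENV APP_HOME=/usr/src/app

# Working directory for RUN, CMD, and COPY below
WORKDIR $APP_HOME

# Copy the application source into the image
COPY . $APP_HOME

# Executed at build time; creates a new layer
RUN apk add --no-cache python3

# Documents the port the application listens on
EXPOSE 8080

# Default command, run only when the container starts
CMD ["python3", "-m", "http.server", "8080"]
```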

Docker Architecture

Docker uses a client-server based architecture model. The Docker client communicates with the Docker daemon, which does the heavy lifting of building, running, and distributing Docker containers.
We can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API over a UNIX socket or a network interface.

Key Features Of Docker

  • Docker allows us to assemble applications faster from components and eliminates the errors which can arise when we ship code. For example, we can have two Docker containers running two different versions of the same app on the same system.
  • Docker helps us test the code before we deploy it to production as soon as possible.
  • Docker is simple to use. We can get started with Docker on a minimal Linux, Mac, or Windows system running a compatible Linux kernel directly or in a Virtual Machine with a Docker binary.
  • We can "dockerize" our application in a few hours. Most Docker containers can be launched within a minute.
  • Docker containers run everywhere. We can deploy containers on desktops, physical servers, virtual machines, in data centers, and in public and private clouds. And we can run the same containers everywhere.

Docker Security

Docker security should be considered before deploying and running it. Some areas which need to be considered while checking Docker security, including the security level of the kernel and how it supports namespaces and control groups, are -
  • The Docker daemon attack surface.
  • The container configuration file, which can have loopholes by default or after user customization.
  • The hardening security policy for the kernel and how it interacts with containers.

Overview of Docker Compose

Docker Compose is a tool used to define and run multi-container Docker applications. With Docker Compose, we create a Compose file to configure the application's services. Then, with a single command, we set up and start all the services from our configuration.
Docker Compose is a beneficial tool for development, testing, and staging environments.
Using Docker Compose is a three-step process -
  • Define the app's environment with a Dockerfile so that it can be reproduced anytime and anywhere.
  • Define the services that make up the app in docker-compose.yml so that they can be run together in an isolated environment.
  • After that, run docker-compose up, and Compose will start and run the entire app.
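A minimal docker-compose.yml might look like the sketch below: a web service built from the local Dockerfile plus a cache it depends on. The service names, images, ports, and volume are placeholders for illustration -

```yaml
# docker-compose.yml - two-service sketch
version: "3"
services:
  web:
    build: .            # built from the Dockerfile in this directory
    ports:
      - "8080:8080"
    depends_on:
      - redis
  redis:
    image: redis:alpine
    volumes:
      - redis-data:/data   # named volume preserves data across recreation
volumes:
  redis-data:
```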

Features of Docker Compose

The features of Docker Compose that make it unique are -
  • Multiple isolated environments can be run on a single host.
  • Volume data is preserved when containers are recreated.
  • Only containers whose configuration has changed are recreated.

Getting Started With Swarm Mode

Recent Docker Engine versions also include swarm mode for managing a cluster of Docker Engines, called a swarm. With the help of the Docker CLI, we can create a swarm, deploy application services to a swarm, and manage swarm behaviour.
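The basic swarm workflow looks like the commands below; the service name, image, and replica counts are placeholders for illustration -

```shell
# Initialize a swarm on the current node (it becomes a manager)
docker swarm init

# On other machines, join as workers using the token printed above:
#   docker swarm join --token <token> <manager-ip>:2377

# Deploy a replicated service and scale it up
docker service create --name web --replicas 3 -p 80:80 nginx:alpine
docker service scale web=5

# List services and their desired/running replica counts
docker service ls
```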

Features of Swarm Mode

  • Cluster management integrated with Docker Engine - Using the Docker Engine CLI, we create a swarm of Docker Engines where we can easily deploy application services. We don't need any additional software to create or manage a swarm.
  • Decentralized design - We can deploy both kinds of node, manager and worker, using the Docker Engine. This means we can build an entire swarm from a single disk image.
  • Service model - Docker Engine uses a declarative approach, so we can define the desired state of the various services in our application stack.
  • Scaling - For every service, we declare the number of tasks we want to run. When we scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
  • Multi-host networking - We can use an overlay network for our services or applications. The swarm manager assigns addresses to the containers on the overlay network when it starts the application.
  • Service discovery - Swarm manager nodes assign each service in the swarm a DNS name and load-balance running containers. We can query any container running in the swarm through a DNS server embedded in the swarm.
  • Load balancing - We can expose the ports for services to an external load balancer. Internally, the swarm lets us specify how to distribute service containers between nodes.
  • TLS certificates - Each node in swarm mode enforces TLS mutual authentication and encryption to secure communications with all other nodes. We have the option to use self-signed root certificates or certificates from a custom root CA.
  • Rolling updates - We can apply service updates to nodes incrementally. The swarm manager controls the delay between service deployments to different sets of hosts. If something goes wrong, we can roll back a task to a previous version of the service.
