XenonStack

A Stack Innovator


Thursday 16 February 2017


BlockChain App Deployment Using Microservices With Kubernetes


What is a BlockChain?


BlockChain is a distributed database that maintains a continuously growing list of ordered records called blocks. This is the technology underlying Bitcoin and other cryptocurrencies.

It is a public ledger of all Bitcoin transactions, and blocks are added to it in chronological order. In order to deploy a blockchain application, you need a distributed Hyperledger blockchain on your choice of infrastructure (on-premise or cloud).
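To make the "chain of blocks" idea concrete, here is a minimal Python sketch of a hash-linked list of records. It illustrates only the general concept of blocks linked by hashes in order; it is not Fabric's actual data structure.

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Build a block whose hash covers its position, payload, and parent."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("index", "data", "prev_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    return block

# Blocks are appended in chronological order, each linking to its predecessor.
genesis = make_block(0, "genesis", "0" * 64)
block1 = make_block(1, "alice pays bob 5", genesis["hash"])
```

Because each block embeds its predecessor's hash, tampering with any earlier record breaks every later link, which is what makes the ledger tamper-evident.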

In this article, we will deploy a Hyperledger Fabric cluster using Kubernetes.

Prerequisites


To follow this guide, you need a system with a working Kubernetes cluster. We will use Fabric, an implementation of blockchain technology written in Golang, so Go version go1.6.2 or above is required.

Before proceeding further, let's have a look at Hyperledger Fabric.

The Hyperledger Project


Hyperledger is an open source, collaborative project created to advance blockchain technology.

It enables cross-industry distributed ledgers that support transaction systems, property transactions, and other services.


Hyperledger Fabric


Fabric is an implementation of blockchain technology. It provides a modular architecture allowing pluggable implementations of various functions.


Setting Up a Hyperledger Cluster on Kubernetes


Hyperledger Kubernetes Replication Controller


We will launch Hyperledger on Kubernetes as a Replication Controller, which ensures high availability of the Hyperledger pods.

Create a file named membersrvc-rc.yml.
 
apiVersion: v1
kind: ReplicationController
metadata:
  creationTimestamp: null
  labels:
    service: membersrvc
  name: membersrvc
  namespace: default
spec:
  replicas: 1
  selector:
    service: membersrvc
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: membersrvc
    spec:
      containers:
      - command:
        - membersrvc
        image: hyperledger/fabric-membersrvc
        imagePullPolicy: ""
        name: membersrvc
        ports:
        - containerPort: 7054
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status:
  replicas: 0  

 
In the same way, create another file vp0-rc.yml


apiVersion: v1
kind: ReplicationController
metadata:
  creationTimestamp: null
  labels:
    service: vp0
  name: vp0
  namespace: ${NAMESPACE}
spec:
  replicas: 1
  selector:
    service: vp0
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: vp0
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 5; peer node start --peer-chaincodedev
        env:
        - name: CORE_PEER_ADDRESSAUTODETECT
          value: "true"
        - name: CORE_VM_ENDPOINT
          value: unix:///var/run/docker.sock
        - name: CORE_LOGGING_LEVEL
          value: DEBUG
        - name: CORE_PEER_ID
          value: vp0
        - name: CORE_PEER_PKI_ECA_PADDR
          value: membersrvc:7054
        - name: CORE_PEER_PKI_TCA_PADDR
          value: membersrvc:7054
        - name: CORE_PEER_PKI_TLSCA_PADDR
          value: membersrvc:7054
        - name: CORE_SECURITY_ENABLED
          value: "false"
        - name: CORE_SECURITY_ENROLLID
          value: test_vp0
        - name: CORE_SECURITY_ENROLLSECRET
          value: MwYpmSRjupbT
        image: hyperledger/fabric-peer
        imagePullPolicy: ""
        name: vp0
        ports:
        - containerPort: 7050
        - containerPort: 7051
        - containerPort: 7053
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status:
  replicas: 0


That covers the Replication Controllers. Our next target is to deploy Services for them.

Create a file called membersrvc-srv.yml


apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: membersrvc
  namespace: default
spec:
  ports:
  - name: ""
    nodePort: 0
    port: 7054
    protocol: ""
    targetPort: 0
  selector:
    service: membersrvc
status:
  loadBalancer: {}


Create another file vp0-srv.yml


apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: vp0
  namespace: default
spec:
  type: NodePort
  ports:
  - name: "port1"
    port: 7050
    protocol: ""
    targetPort: 0
  - name: "port2"
    nodePort: 0
    port: 7051
    protocol: ""
    targetPort: 0
  - name: "port3"
    nodePort: 0
    port: 7053
    protocol: ""
    targetPort: 0
  selector:
    service: vp0
status:
  loadBalancer: {}

Running Hyperledger Pods


After creating all the necessary files, the next step is to start these Replication Controller pods:

$ kubectl create -f membersrvc-rc.yml

$ kubectl create -f vp0-rc.yml











Continue Reading the Full Article At - XenonStack.com/Blog

Friday 10 February 2017


Building Serverless Microservices With Python


Serverless Computing is Exploding


As we move to different models of production, distribution, and management of applications, it only makes sense that the behind-the-scenes processes are abstracted away and handled by third parties, in a move towards further decentralization.

And that’s exactly what serverless computing does – and startups and big companies are adopting this new way of running applications.

In this post, we will discover the answer to one question:

What is Serverless all about, and how does this new trend affect the way people write and deploy applications?

Serverless Computing


“Serverless” denotes a special kind of software architecture in which application logic is executed in an environment without visible processes, operating systems, servers, or virtual machines.

It’s worth mentioning that such an environment is actually running on top of an operating system and uses physical servers or virtual machines, but the responsibility for provisioning and managing the infrastructure belongs entirely to the service provider.

Therefore, a software developer can focus more on writing code.

Serverless Computing Advances the way Applications are Developed


Serverless applications will change the way we develop applications. Traditionally, a lot of business rules, boundary conditions, and complex integrations are built into applications. This prolongs completion of the system, introduces a lot of defects, and in effect hard-wires the system to a certain set of functional requirements.

The serverless application concept moves us away from dealing with complex system requirements and lets the application evolve over time. It also makes it easy to deploy these microservices without disturbing the rest of the system.

The progression below shows how the way applications are developed has changed with time.

Monolith - A monolith application puts all its functionality into a single process and scales by replicating the monolith on multiple servers.

Microservice - A microservice architecture puts each piece of functionality into a separate service and scales by distributing these services across servers, replicating as needed.

FaaS - Distributes microservices further into functions which are triggered based on events.

Monolith => Microservice => FaaS
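The FaaS end of that progression can be sketched as a single function reacting to a single event. The (event, context) handler signature below is illustrative only, not tied to any particular platform:

```python
import json

def handle_order_created(event, context=None):
    """A FaaS-style handler: one small function, triggered per event."""
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# The platform would invoke the handler once per event; locally we can
# simulate a single incoming event:
event = {"body": json.dumps({"items": [{"price": 5.0, "qty": 2}]})}
response = handle_order_created(event)
```

Because the function holds no state between invocations, the platform is free to scale it from zero to many instances purely in response to event volume.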




Let’s get started with the deployment of a Serverless Application on NexaStack. To create a function, you first package your code and dependencies in a deployment package.

Then, you upload the deployment package to our environment to create your function.
  • Creating a Deployment Package
  • Uploading a Deployment Package
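As a rough sketch of the first step, packaging a handler into a zip archive might look like the following; the file names are illustrative, and the actual upload mechanism depends on the NexaStack environment:

```python
import os
import tempfile
import zipfile

# Write a toy handler file, then bundle it into a deployment package (zip).
workdir = tempfile.mkdtemp()
handler_path = os.path.join(workdir, "handler.py")
with open(handler_path, "w") as f:
    f.write("def handle(event):\n    return {'ok': True}\n")

package_path = os.path.join(workdir, "deployment_package.zip")
with zipfile.ZipFile(package_path, "w", zipfile.ZIP_DEFLATED) as zf:
    # Add the function code; a real package would also add dependency dirs.
    zf.write(handler_path, arcname="handler.py")
# The archive at package_path is now ready to upload.
```

The same pattern extends to dependencies: vendored packages are added to the archive alongside the handler so the function is self-contained at run time.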

You May also Like: Building Serverless Microservices With Java

Database Integration For Your Application


  • Install MongoDB and configure it to get started.
  • Create a database EmployeeDB.
  • Create a table Employee.
  • Insert some records into the table for the demo.
  • Write a file “config.py” to set up configuration on the serverless architecture, as shown below.
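As a rough illustration, a config.py along these lines keeps the connection details in one place; the host, port, and names are placeholders for your own MongoDB deployment, not values mandated by NexaStack:

```python
# config.py - a minimal sketch; adjust for your own MongoDB deployment.
MONGO_HOST = "localhost"
MONGO_PORT = 27017
MONGO_DB = "EmployeeDB"
MONGO_COLLECTION = "Employee"

def mongo_uri():
    """Connection string a driver such as pymongo would accept."""
    return f"mongodb://{MONGO_HOST}:{MONGO_PORT}/{MONGO_DB}"
```

The function code can then import these values instead of hard-coding connection details inside each handler.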



 

Continue Reading the full Article at: XenonStack.com/Blog

Monday 6 February 2017


Building Serverless Microservices With JAVA


Serverless Architecture


The phrase “serverless” doesn’t mean servers are no longer required. It simply means that developers no longer have to think that much about them.

Going serverless lets developers shift their focus from the server level to the task level, which is writing code.




 

What does it mean to have servers?


First, let’s talk about what it means to have servers (virtual servers) providing the computing power required by your application. Owning servers comes with responsibilities:
  • Managing how the application primitives (functions in the case of applications, or objects when it comes to storage) map to server primitives (CPU, memory, disk, etc.).
  • Provisioning (and therefore paying) for the capacity to handle your application’s projected traffic, independent of whether there’s actual traffic or not.
  • Managing reliability and availability constructs like redundancy, failover, retries, etc.

Advantages of going Serverless


Why one should move to a serverless architecture is adequately described by its benefits.

  • PaaS and Serverless - A user of a traditional PaaS has to specify the amount of resources (such as dynos for Heroku or gears for OpenShift) for the application. A Serverless platform takes care of finding a server where the code will run and of scaling up when necessary.
  • Lower operational and development costs - The containers used to run these functions are decommissioned as soon as execution ends, and execution is metered in units of 100 ms, so you don't pay anything when your code isn't running.
  • Fits with microservices, which can be implemented as functions.

Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service, or "BaaS") or on custom code that's run in ephemeral containers (Function as a Service, or "FaaS").

But there are cons related to moving your application to FaaS, which are discussed in our next post: Building Serverless Microservices with Python.

The simplest way of thinking about FaaS is that it changes thinking from "build a framework to sit on a server to react to multiple events" to "build/use micro-functionality to react to a single event."

How to migrate to a Microservices Architecture?


In a simple definition, Microservices are independently scalable, independently deployable systems that communicate over protocols such as HTTP (XML, JSON), Thrift, or Protocol Buffers.

Microservices are the Single Responsibility Principle applied at the codebase level.

Below are some of the factors to follow when building Microservices:
  • One codebase per app/service: There is always a one-to-one correlation between the codebase and the service.
  • Explicitly declare and isolate dependencies: This can be done by using packaging systems.
  • Use environment variables to store configuration.
  • Strictly separate the build, release, and run stages.
  • Treat logs as event streams: route the log event stream to an analysis system such as Splunk for log analysis.
  • Keep development, staging, and production as similar as possible.
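The environment-variable rule above can be sketched as follows; the variable names here are illustrative, not a fixed convention:

```python
import os

def load_config():
    """Read configuration from the environment with explicit defaults,
    so the same codebase runs unchanged in dev, staging, and production."""
    return {
        "db_url": os.environ.get("DB_URL", "mongodb://localhost:27017"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

os.environ["LOG_LEVEL"] = "DEBUG"   # normally set by the deployment environment
config = load_config()
```

Because configuration lives outside the codebase, promoting a build from staging to production changes only the environment, never the code.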


Microservices Architecture: Benefits


Microservices Architectures have lots of very real and significant benefits:
  • Systems built in this way are inherently loosely coupled
  • The services themselves are very simple, focusing on doing one thing well
  • Multiple developers and teams can deliver independently under this model
  • They are a great enabler for continuous delivery, allowing frequent releases whilst keeping the rest of the system available and stable
In this post, we will implement a NexaStack function which integrates with a database (MongoDB is used here).

We are going to implement this new function in Java using the Spring Framework. So, let’s get started.

Employee Service


We are going to build an Employee Service consisting of a function that shows Employee information from the database.

For demo purposes, we are implementing one function, “GetEmployee”.

1. Setting up MongoDB Instance 


  • Install MongoDB and configure it to get started.
  • Create a database EmployeeDB.
  • Create a table Employee.
  • Insert some records into the table for the demo.
  • Write a file “config.properties” to set up configuration on the serverless architecture.




 Continue Reading the full article at: XenonStack.com/Blog

Wednesday 1 February 2017


BlockChain Apps Deployment Using Microservices With Dockers


BlockChain on Docker


What is a BlockChain?


Blockchain is a distributed database that maintains a continuously growing list of ordered records called blocks. This is the technology underlying Bitcoin and other cryptocurrencies.

It is a public ledger of all Bitcoin transactions, and blocks are added to it in chronological order.

In order to deploy a Blockchain application, you need a distributed Hyperledger Blockchain on your choice of infrastructure (on-premise or cloud).








In this article, we will deploy a Hyperledger Fabric cluster using Docker.


Prerequisites


To follow this guide, you need a system with a working Docker Engine and docker-compose. We will use Fabric, an implementation of Blockchain technology written in Golang, so Go version go1.6.2 or above is required. Before proceeding further, let’s have a look at Hyperledger Fabric.


The HyperLedger Project


Hyperledger is an open source, collaborative project created to advance Blockchain technology.

It enables cross-industry distributed ledgers that support transaction systems, property transactions, and other services.


HyperLedger Fabric

Fabric is an implementation of blockchain technology. It provides a modular architecture allowing pluggable implementations of various functions.


Setting Up a HyperLedger Cluster

Pulling Images


First, pull the latest images published by the Hyperledger fabric project from DockerHub.

docker pull hyperledger/fabric-peer:latest

docker pull hyperledger/fabric-membersrvc:latest

Now, in order to run these images, create a docker-compose file which will launch both of these services.


membersrvc:
  image: hyperledger/fabric-membersrvc
  ports:
    - "7054:7054"
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  ports:
    - "7050:7050"
    - "7051:7051"
    - "7053:7053"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_PEER_PKI_ECA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TCA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TLSCA_PADDR=membersrvc:7054
    - CORE_SECURITY_ENABLED=false
    - CORE_SECURITY_ENROLLID=test_vp0
    - CORE_SECURITY_ENROLLSECRET=MwYpmSRjupbT
  links:
    - membersrvc
  command: sh -c "sleep 5; peer node start --peer-chaincodedev"


That’s it. Now we are ready to launch these services by simply running docker-compose up.


 










Running the ChainCode


Before running the chaincode, you need to set your $GOPATH and then make a directory to download the sample chaincode into the src directory.


mkdir -p $GOPATH/src/github.com/chaincode_example02/
cd $GOPATH/src/github.com/chaincode_example02
curl --request GET https://raw.githubusercontent.com/hyperledger/fabric/master/examples/chaincode/go/chaincode_example02/chaincode_example02.go > chaincode_example02.go


Next, you’ll need to download the Hyperledger Fabric source to your local $GOPATH; after that, you have to build the chaincode.


mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
git clone http://gerrit.hyperledger.org/r/fabric


Go to the chaincode_example02 directory and build the code:


cd $GOPATH/src/github.com/chaincode_example02
go build

Starting And Registering The ChainCode


Run the following command to start the chaincode.


CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=0.0.0.0:7051 ./chaincode_example02


After that, the chaincode console will display the message “Received REGISTERED, ready for invocations”, which shows that the chaincode is ready for use.








Running the REST API

To log in with the help of the REST API, send a POST request to the /registrar endpoint with the enrollment ID and enrollment secret. These parameters are listed in the eca.users section of the membersrvc.yaml file.

REST Request:


POST localhost:7050/registrar

{
  "enrollId": "jim",
  "enrollSecret": "6avZQLwcUe9b"
}

REST Response:

200 OK
{
  "OK": "Login successful for user 'jim'."
}
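The same login request can also be issued from code. This sketch uses only Python's standard library and assumes a peer listening on localhost:7050; constructing the request does not send it, so uncomment the last lines to POST against a running peer:

```python
import json
import urllib.request

def registrar_request(enroll_id, enroll_secret, host="localhost", port=7050):
    """Build (but do not send) the POST /registrar login request."""
    payload = json.dumps({"enrollId": enroll_id,
                          "enrollSecret": enroll_secret}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/registrar",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = registrar_request("jim", "6avZQLwcUe9b")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

On success, the peer replies with the 200 OK body shown above.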

Read The Full Article At: XenonStack.com/blog