
Monday, 16 December 2019


What is DevSecOps and Benefits of Adopting DevSecOps

What is DevSecOps?

DevSecOps is an approach that builds security into applications and infrastructure using the DevOps methodology, making sure the application is less vulnerable and ready for users. Everything is automated, and security checks start from the beginning of the application’s pipeline.
Selecting the right tools for Continuous Integration security helps achieve security goals, but tools alone are not enough: security teams must work alongside the right tools to meet the required level of security.

How Does DevSecOps Work?

The fundamental goal of DevSecOps is to secure the application by having security and operations team members practice and cooperate with development from the very beginning of a project. Below is an overview of how it works:
Analyze infrastructure and environments to understand the challenges involved:
  • Applications and APIs.
  • Libraries and Frameworks.
  • Container and Cloud.
  • Network.
  • Secure: after analyzing, secure the system and choose the right approach according to the culture.
  • Automate security testing and verify it.
  • Detect attacks and prevent exploits, i.e. defend the system.

How to Adopt DevSecOps?

Nowadays the greatest obstacle to DevSecOps is culture, not technology. Traditionally, security teams and development teams work separately. To successfully move to a DevSecOps methodology, both security and development teams must make application security an integrated strategy and continue to encourage security awareness.
Effective ways to adopt it:
  • Automate the process as much as possible.
  • Follow the DevOps methodology.
  • Train developers to code securely.
  • Evaluate current security measures and decide what is needed to overcome problems.
  • Integrate security into the DevOps pipeline.
  • Adopt the right DevSecOps tools.
  • Monitor Continuous Integration and Continuous Delivery.
  • Analyze code and do vulnerability assessments.
  • Make security mandatory at every stage.
Define a model that the organization can adopt to implement DevSecOps, for example by deciding which of the models below fits best (a sketch of how such checks could sit in a CI pipeline follows the list):
  • Static Analysis Security Testing (SAST).
  • Dynamic Analysis Security Testing (DAST).
  • Software Composition Analysis (SCA).
  • Container security.
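As an illustration, here is a minimal, hypothetical CI pipeline (in GitLab-CI-style YAML) showing where such checks could sit; the stage names and scan commands are placeholders for whichever SAST, SCA, and container-scanning tools the organization selects:

# Hypothetical pipeline: each scan command stands in for a real tool.
stages:
  - build
  - security
  - deploy

build-app:
  stage: build
  script:
    - ./build.sh                   # compile and package the application

sast-scan:
  stage: security
  script:
    - run-sast-scan src/           # placeholder: static analysis of source code (SAST)

sca-scan:
  stage: security
  script:
    - run-sca-scan .               # placeholder: check dependencies for known CVEs (SCA)

container-scan:
  stage: security
  script:
    - run-image-scan myapp:latest  # placeholder: scan the built container image

deploy-app:
  stage: deploy
  script:
    - ./deploy.sh                  # deploy only if all security gates pass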
How do you know whether the adoption of DevSecOps is successful?
Successful adoption of DevSecOps depends upon -
  • Detection of threats, security defects, and flaws.
  • Deployment frequency.
  • Mean time to repair and recovery.
  • Lead time.
  • Test coverage.

Benefits of DevSecOps

Some of the benefits of adopting DevSecOps are -
  • Reduced expenses and an increased delivery rate.
  • Security, monitoring, deployment checks, and notification systems from the beginning.
  • It supports openness and transparency right from the start of development.
  • Secure by design and the ability to measure.
  • Faster recovery in the case of a security incident.
  • Improved overall security by enabling immutable infrastructure, which in turn involves security automation.

Why Does DevSecOps Matter?

Because development, security, and operations teams work jointly, DevSecOps matters in several ways:
  • It focuses on the application’s security from the beginning.
  • It finds vulnerabilities and encourages practitioners to build security processes.
  • It seeks to provide better results at greater speed, just as DevOps does.
  • It reduces vulnerabilities and increases code coverage and automation.

Best Practices of DevSecOps

  • Integrate security throughout the dev process.
  • Train on secure coding.
  • Automate the whole pipeline from Continuous Integration to Continuous Deployment.
  • Choose the appropriate tools for the security check.
  • Move to Git as a single source of truth.
  • Know code dependencies.
  • Use an analytics-driven SIEM platform.

Top DevSecOps Integration Tools

A number of tools can be integrated throughout the DevOps pipeline.

Understanding Holistic Approach

DevSecOps, as discussed above, is an approach to implementing protection for applications and infrastructure based on the DevOps methodology, making sure the application is less exposed and ready for users. Everything is automated, and security checks start from the beginning of the application’s pipeline.

Saturday, 14 December 2019


Microservices Architecture Design and Best Practices 


What is Microservices Architecture?

The microservices architecture style is an approach for developing an application as a set of small services, each running in its own process. It enables the continuous delivery/deployment of large, complex applications and allows an organization to evolve its technology stack.

Why Microservices Architecture?

Microservices came into the picture for building systems that were too big to manage as a single unit. The idea behind microservices is that some applications are easier to build and maintain when they are broken down into smaller applications that work together. Each component is continuously developed and separately managed, and the application is then simply the sum of its constituent components. In contrast, a traditional “monolithic” application is developed all in one piece.

Microservices Architecture Design

Distributed architecture
All the services communicate with the API gateway through REST or RPC. These services can be deployed as multiple instances, and requests can be distributed across those instances.
Separately deployed components
Each component is deployed separately. If one component needs changes, the others don’t have to be redeployed.
Service components
Service components communicate with each other via service discovery.
Bounded contexts
Each service encapsulates the details of a single domain and defines the integration with other domains. It is about implementing a business capability.
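As a rough illustration of separately deployed service components behind an API gateway, here is a minimal, hypothetical docker-compose.yml; the service names and images are invented for the example:

# Hypothetical layout: one gateway routing to two independently deployed services.
version: "3"
services:
  gateway:                          # API gateway; single entry point for clients
    image: example/api-gateway:1.0
    ports:
      - "8080:8080"
  orders:                           # one business capability, deployable on its own
    image: example/orders-service:1.0
  payments:                         # another capability; changes and redeploys independently
    image: example/payments-service:1.0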

Benefits of Adopting Microservices Architecture Design

  • Asynchronicity.
  • Easier integration and disintegration of components.
  • Decoupled deployments.
  • Evolutionary architecture.
  • Components are deployed independently.
  • Features are released independently.
  • Applications are composed through routing between services.
  • Easier to understand the code - it is easier to follow one small service and its flow than one big codebase.
  • Fast software delivery - each service can be developed by different developers and in different languages.
  • Efficient debugging - no need to jump through multiple layers of an application; in essence, better fault isolation.
  • Reusable - since each is an independent service, it can be used in other projects as well.
  • Scalability - through horizontal scaling and workload partitioning: there is no need to scale the whole project, only the component that needs to scale up.
  • Deployment - only the service that has changed needs to be redeployed, not the whole project.

Characteristics of Microservices Architecture Design

  • Small in size
  • Messaging enabled
  • Bounded by contexts
  • Autonomously developed
  • Independently deployable
  • Decentralized
  • Built and released with automated processes

Continue Reading: XenonStack/Insights

Thursday, 12 December 2019


DevOps for Machine Learning, TensorFlow and PyTorch

Why Continuous Integration and Deployment?

Like modern web applications, machine learning systems need to be agile because of the ever-changing requirements of clients and consumers. In machine learning, the challenge is to build a system that works well with the real world, and real-world scenarios change continuously. The system needs continuous learning and training from the real world. The solution is DevOps for machine learning and deep learning: continuously retrain the model on new data, then validate and test the model's accuracy to make sure it works well with current real-world scenarios.

DevOps for Machine Learning, TensorFlow and PyTorch

TensorFlow and PyTorch are open-source tools that help with DevOps for machine learning. TensorFlow is developed by Google and based on Theano, whereas PyTorch is developed by Facebook and based on Torch. Both frameworks define computational graphs. In TensorFlow, the entire computational graph must be defined before running the ML algorithm; PyTorch uses dynamic graphs and creates them on the go.
TensorFlow has TensorBoard for visualization, which runs directly in the browser. PyTorch doesn’t have a comparable built-in tool, but Matplotlib can be used with it. TensorFlow has more community support and online solutions than PyTorch. Whichever framework is used to build the machine learning model, CI/CD is a much-needed thing. Developers and data scientists spend most of their time managing and deploying their models to production manually, which invites human error. This needs to be an automated process with a well-defined pipeline and model versioning.
The skills needed by a data scientist are changing: the role is less visualization- and statistics-based and is moving closer to engineering. Continuous Integration and Deployment of machine learning models is the real challenge in the data science world; productionizing models requires a proper integration and deployment pipeline. As the real world changes continuously, the system should have the capability to learn over time, and a Continuous Integration and Deployment pipeline makes this happen.
Currently, when a modern application needs a continuous pipeline, tools like Git, Bitbucket, GitLab CI, and Jenkins are used for versioning and managing the code. In a modern application only the codebase needs to be managed and versioned, but in machine learning and AI applications things are more iterative and complex: the data is another artifact to manage. A system is required that can version and manage the data, the models, and the intermediate data.

Continuous Development life cycle

Git is a source code management and version control tool. Versioning the code is critical for product releases, and there is a history for every file for exploring changes and reviewing the code.

Git for versioning and managing code

In machine learning and AI systems, the model code needs to be managed for releases and change tracking. Git is the most widely used source code management tool. In Continuous Integration and Continuous Deployment, Git manages versions by tagging branches, and git-flow can be used for feature branches.
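For example, a minimal sketch of release tagging and feature branching (assuming the git-flow extension is installed; the tag and branch names are invented):

$ git tag -a v1.2.0 -m "model code release 1.2.0"   # tag a release of the model code
$ git push origin v1.2.0                            # publish the tag
$ git flow feature start tune-hyperparams           # open a feature branch for an experiment
$ git flow feature finish tune-hyperparams          # merge it back when done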

Dvc for versioning models and data

Unlike source code, models and data are much larger, and Git is not suitable for cases where the data and model files are large. DVC (Data Version Control) is a data science version control system that provides end-to-end support for managing training data, intermediate data, and model data.

Version control

DVC provides Git-like commands to add, commit, and push models and data to remotes such as S3, Azure, GCP, Minio, or SSH. It also includes data provenance for tracking the evolution of machine learning models, and it helps with reproducibility when you need to get back to a particular experiment.
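A minimal sketch of that flow (assuming DVC is installed in an existing Git repository and an S3 bucket is available; the file and remote names are invented):

$ dvc init                                       # set up DVC inside the Git repo
$ dvc remote add -d storage s3://my-bucket/dvc   # configure a default remote
$ dvc add data/train.csv                         # start tracking a large data file
$ git add data/train.csv.dvc .gitignore          # commit the small metafile to Git
$ git commit -m "Track training data with DVC"
$ dvc push                                       # upload the data itself to the remote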

Experiment management

Metric tracking is easy with DVC. It provides a metric-tracking feature that lists all branches along with their metric values, so the best version of an experiment can be picked.

Deployment and Collaboration

DVC's push and pull commands are available to move changes to production or staging. DVC also has a built-in way to create a DAG of ML steps; the dvc run command is used to create the deployment pipeline. It streamlines the work into a single, reproducible environment and makes that environment easy to share.
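A rough sketch of defining and reproducing a pipeline stage this way (the script, data, and output names are invented, and exact flags vary between DVC versions):

$ dvc run -d src/train.py -d data/train.csv \
          -o model.pkl -M metrics.json \
          python src/train.py                    # record dependencies, outputs, and the command
$ dvc repro                                      # re-run the pipeline when inputs change
$ dvc metrics show -a                            # compare metric values across branches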

Packaging Models

There are many ways to package models, but the most convenient and automated is using Docker on Kubernetes. Docker is not only for packaging; it also serves as a development environment and handles version dependency management. It provides more reliability than running Flask on a virtual machine.
In this approach, Nginx, Gunicorn, and Docker Compose are used to create a scalable, repeatable template that is easy to run with continuous integration and deployment.

Directory Structure

├── README.md
├── nginx/
├ ├── Dockerfile
├ └── nginx.conf
├── api/
├ ├── Dockerfile
├ ├── app.py
├ ├── __init__.py
├ └── models/
├── docker-compose.yml
└── run_docker.sh
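Given that layout, a minimal docker-compose.yml might look like the sketch below; the port numbers and build contexts are assumptions for illustration:

# Two services: Nginx in front, the Gunicorn-served model API behind it.
version: "3"
services:
  api:
    build: ./api                 # Dockerfile installs dependencies and starts Gunicorn
    expose:
      - "5000"                   # assumed internal port for the model API
  nginx:
    build: ./nginx               # Dockerfile copies in nginx.conf
    ports:
      - "80:80"                  # public entry point, proxying to the api service
    depends_on:
      - api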

How to Perform Continuous Model Testing for PyTorch and TensorFlow?

Feature Test

  • Check that the values of features lie between the threshold values.
  • Check whether feature importance has changed with respect to the previous run.
  • Check each feature's relationship with the outcome variable in terms of correlation coefficients.
  • Check feature suitability by testing RAM usage, inference latency, etc.
  • Check whether a generated feature violates data-compliance requirements.
  • Check the code coverage of the feature-generating functions.
  • Check the static code analysis results for the feature-generating code.
A minimal sketch of one such check appears below.
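Here is a minimal sketch of a feature threshold check as a test in Python (the dataset path, feature name, and bounds are invented; any runner such as pytest would execute it):

# test_features.py - hypothetical feature range check
import pandas as pd

LOWER, UPPER = 0.0, 1.0                     # assumed valid range for the feature

def test_feature_within_threshold():
    df = pd.read_csv("data/train.csv")      # assumed dataset path
    values = df["normalized_age"]           # hypothetical feature column
    assert values.between(LOWER, UPPER).all(), \
        "feature values fall outside the expected range"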


Continue Reading: 
XenonStack/Blogs


Saturday, 10 June 2017


Deploying .NET Application on Docker & Kubernetes

Overview

 
In this post, we’ll share the process of developing and deploying a .NET application using Docker and Kubernetes, and of adopting DevOps in existing .NET applications.

Prerequisites  

 

To follow this guide you need:

  • Kubernetes - Kubernetes is an open-source platform that automates container operations; Minikube is best for testing Kubernetes locally.

  • Kubectl - Kubectl is the command-line interface for managing a Kubernetes cluster either remotely or locally. Configure kubectl on your machine by following the Kubernetes documentation.

  • Shared Persistent Storage - shared persistent storage is permanent storage attached to a Kubernetes container so that data is not lost even if the container dies. We will use GlusterFS as the persistent data store for Kubernetes container applications.

  • .NET Application Source Code - the source code we want to run inside a Kubernetes container.

  • Dockerfile - the Dockerfile contains the commands needed to build the .NET application image.

  • Container Registry - the container registry is an online store for container images.

Popular registries include Docker Hub, AWS ECR, Google Container Registry, and Azure Container Registry.

Create a Dockerfile

 
The code below is a sample Dockerfile for .NET applications, using the Microsoft .NET 1.1 SDK image.

FROM microsoft/dotnet:1.1-sdk
# Set the working directory for the application
WORKDIR /app

# Copy the csproj and restore dependencies as a distinct layer (better caching)
COPY dotnetapp.csproj .
RUN dotnet restore

# Copy and build everything else
COPY . .
RUN dotnet publish -c Release -o out

EXPOSE 2223

# Run the published assembly; its name follows the project file (dotnetapp.csproj)
ENTRYPOINT ["dotnet", "out/dotnetapp.dll"]
 

Building .NET Application Image


The below-mentioned command will build your application container image.

$ docker build -t <name of your application>:<version of application> .
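For example, with an invented image name and tag:

$ docker build -t dotnetapp:1.0 .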
 

Publishing Container Image

 

Now we publish our .NET application container image to a container registry such as Docker Hub, AWS ECR, Google Container Registry, or a private Docker registry.

We are using Azure Container Registry for publishing container images.

You also need to sign up on the Azure cloud platform and then create a container registry from the Azure portal.

Then push the image to the Azure Container Registry, as sketched below.

Similarly, we can push or pull any container image with any of the registries mentioned above - Docker Hub, AWS ECR, a private Docker registry, Google Container Registry, etc.
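A minimal sketch of that push (the registry name myregistry is invented; the Azure CLI is assumed to be installed):

$ az acr login --name myregistry                            # authenticate Docker to ACR
$ docker tag dotnetapp:1.0 myregistry.azurecr.io/dotnetapp:1.0
$ docker push myregistry.azurecr.io/dotnetapp:1.0           # upload the image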

Creating Deployment Files for Kubernetes


Applications are deployed on Kubernetes using deployment and service files, written in either JSON or YAML format.

  • Deployment File

The following content is for the “<name of application>.deployment.yml” file of the .NET container application.

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <name of application>
  namespace: <namespace of Kubernetes>
spec:
  replicas: <number of application pods>
  template:
    metadata:
      labels:
        k8s-app: <name of application>
    spec:
      containers:
      - name: <name of application>
        image: <image name>:<version tag>
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 2223
 

  • Service File

The following content is for the “<name of application>.service.yml” file of the .NET container application.

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: <name of application>
  name: <name of application>
  namespace: <namespace of Kubernetes>
spec:
  type: NodePort
  ports:
  - port: 2223
  selector:
    k8s-app: <name of application>
 
 

Running .NET Application on Kubernetes


The .NET application container can be deployed either through the Kubernetes Dashboard or with Kubectl (the command line).

Below is the command-line approach, which you can use in a production Kubernetes cluster.

$ kubectl create -f <name of application>.deployment.yml
$ kubectl create -f <name of application>.service.yml
 
Now we have successfully deployed the .NET application on Kubernetes.

Verification


We can verify the application deployment either by using Kubectl or the Kubernetes Dashboard.

The command below shows the pods of your application along with their status (running/terminated/stopped/created).

$ kubectl get po --namespace=<namespace of kubernetes> | grep <application name>
 
 


Tuesday, 6 June 2017


Top 10 Things To Know in DevOps

Introduction To DevOps


DevOps is a modern software engineering culture and set of practices for developing software in which the development and operations teams work hand in hand as one unit, unlike traditional ways of working, e.g. under Agile methodology, where they worked individually to develop software or provide required services.

The traditional methods before DevOps were time-consuming and lacked understanding between the different departments of software development, which led to longer times for updates and bug fixes, ultimately leading to customer dissatisfaction. Even to make a small change, the developer had to rework the software from the beginning.

That’s why such a culture is being adopted: it allows fast, efficient, reliable software delivery through to production.


DevOps Features


  • Maximize speed of delivery of the product.
  • Enhanced customer experience.
  • Faster time to value.
  • Enables fast flow of planned work into production.
  • Use Automated tools at each level.
  • More stable operating environments.
  • Improved communication and collaboration.
  • More time to innovate.


DevOps Consists of 5 C’s


DevOps practices lead to high productivity, fewer bugs, improved communication, enhanced quality, faster resolution of problems, more reliability, and better, timelier delivery of software.

  • Continuous Integration
  • Continuous Testing
  • Continuous Delivery
  • Continuous Deployment
  • Continuous Monitoring



 

1. Continuous Integration 


Continuous integration means isolated changes are tested and reported as they are added to a larger code base. The goal of continuous integration is to give rapid feedback so that any defect can be identified and corrected as soon as possible.

Jenkins is commonly used for continuous integration, following the three-step flow of build, test, and deploy. Developers make frequent changes to the source code in a shared repository several times a day.

Along with Jenkins there are more tools, e.g. BuildBot, Travis, etc. Jenkins is widely used because it provides plugins for testing, reporting, notification, deployment, and more. A minimal pipeline sketch follows.
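As an illustration, a minimal declarative Jenkinsfile; the stage contents are placeholders standing in for the project's real build, test, and deploy scripts:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }      // compile and package
        }
        stage('Test') {
            steps { sh './test.sh' }       // run the automated test suite
        }
        stage('Deploy') {
            steps { sh './deploy.sh' }     // push the artifact to the target environment
        }
    }
}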

2. Continuous Testing 


Continuous testing is done to obtain immediate feedback on the business risk associated with a software release. It is a difficult but essential part of delivering software, since the quality of the software depends on testing. The test function helps the developer balance quality and speed. Automated tools are used, as it is easier to test continuously than to test the whole piece of software at once. A commonly used tool for testing software is Selenium.

3. Continuous Delivery 


Continuous delivery is the ability to push changes, such as new features, configuration changes, bug fixes, and experiments, into production. The motive for doing continuous delivery is continuous daily improvement. If there is any kind of error in the production code, it can be fixed quickly. In this way, the application is developed and deployed rapidly, reliably, and repeatedly with minimum overhead.

4. Continuous Deployment 


The code is automatically deployed to the production environment once it passes all the test cases. Continuous versioning ensures that multiple versions of the code are available at the proper places. Every change that passes the tests is put into production automatically, resulting in many deployments to the production environment every day.


5. Continuous Monitoring 


Continuous monitoring is a reporting practice through which developers and testers understand the performance and availability of their application, even before it is deployed to operations. The feedback provided by continuous monitoring is essential for lowering the cost of errors and change. Nagios is a commonly used tool for continuous monitoring.


Key Technologies and Terminologies In DevOps

 

6. Microservices


Microservices is an architectural style of developing a complex application by dividing it into smaller modules, or microservices. These microservices are loosely coupled, deployed independently, and each owned by a small, focused team.

With microservices, developers can decide how each service is designed, which language to use, and which platform to run, deploy, and scale it on.


Advantages Of Microservices


  • Microservices can be developed in different programming languages.
  • Errors in any module or microservice can easily be found, which saves time.
  • Smaller modules or microservices are easier to manage.
  • When an update is required, it can be pushed to that particular microservice alone; otherwise the whole application would need to be updated.
  • According to client needs, a particular microservice can be scaled up or down without affecting the other microservices.
  • It also leads to an increase in productivity.
  • If one module goes down, the application remains largely unaffected.

Disadvantages Of Microservices


  • If an application involves a large number of microservices, managing them becomes somewhat difficult.
  • Microservices lead to more memory consumption.
  • In some cases, testing microservices becomes difficult.
  • In production, there is also the complexity of deploying and managing a system composed of different types of services.


7. Containers

 

 

Containers create a virtualized environment that allows us to run multiple applications or operating systems without them interfering with each other.

With containers, we can deploy our applications quickly, reliably, and consistently, because each container has its own CPU, memory, network, and block I/O allocation while sharing the kernel of the host operating system.

Containers are lightweight because they don’t need the extra load of a hypervisor; they run directly within the host machine.

Previously, a common problem was that code ran easily in the developer's environment but hit dependency issues when executed in the production environment.

Then virtual machines came, but they were heavyweight: they wasted RAM, and the processor was not utilized completely. If more than 50 microservices need to run, VMs are not the best option.

Docker is a lightweight container platform with prebuilt images that occupies comparatively little space. Running Docker requires a Linux-based host machine, such as Ubuntu.


Terms used in Docker:

Docker Hub - a cloud-hosted service provided by Docker. Here we can upload our own images or pull images from public repositories.

Docker Registry - the storage component for Docker images. Images can be stored in either a public or a private repository. A registry is used to integrate image storage with an in-house development workflow and to control where images are stored.

Docker Images - read-only templates used to create containers, built by Docker users and stored on Docker Hub or a local registry.

Docker Containers - a container is a runtime instance of a Docker image, built from one or more images.

Hence, Docker helps resolve application dependency issues and provides application isolation and faster development. A few basic commands are sketched below.
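A minimal sketch of these pieces in action (the image and container name are examples):

$ docker pull nginx:latest               # fetch an image from Docker Hub
$ docker run -d --name web nginx:latest  # start a container from the image
$ docker ps                              # list running containers
$ docker stop web                        # stop the container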


Advantages Of Containers


  • Wastage of resources like RAM, processor, and disk space is controlled, as there is no longer a need to pre-allocate these resources; they are allocated according to application requirements.
  • It’s easy to share a container.
  • Docker provides a platform to manage the lifecycle of containers.
  • Containers provide consistent computation environment.
  • Containers can run separate applications within a single shared operating system.

8. Container Orchestration


Container orchestration is the automated arrangement, coordination, and management of containers and the resources they consume during the deployment of a multi-container packaged application.

Various features of container orchestration include:

  • Cluster Management - the developer’s task is limited to launching a cluster of container instances and specifying the tasks to run; the orchestrator manages all the containers.

  • Task Definitions - these allow the developer to define a task by specifying the number of containers it requires and their dependencies. Many tasks can be launched through a single task definition.

  • Programmatic Control - with simple API calls one can register and deregister tasks, and launch and stop Docker containers.

  • Scheduling - container scheduling places containers onto the cluster according to the resources they need and the availability of those resources.

  • Load Balancing - helps distribute traffic across the containers/deployment.

  • Monitoring - one can monitor the CPU and memory utilization of running tasks and be alerted if the containers need scaling.

Tools used for Container Orchestration


Different tools are used for container orchestration. A few are open-source tools, like Kubernetes and Docker Swarm, which can be used privately; there are also paid, managed offerings such as AWS ECS from Amazon, Google Container Engine, and Microsoft’s container services.

Some of these tools are briefly explained below:


 


  • Amazon ECS - Amazon ECS is a product from Amazon Web Services that provides a runtime environment for Docker containers and provides orchestration. It allows running Dockerized applications on top of Amazon’s infrastructure.


  • Docker Swarm - an open-source tool, part of Docker’s landscape. With this tool we can run multiple Docker engines as a single virtual Docker. This is Docker's own container orchestration tool. It consists of manager and worker nodes that run different services for orchestration: managers distribute tasks across the cluster, and worker nodes run the containers assigned by the managers. A brief Swarm sketch appears after this list.

  • Google Container Engine - Google Container Engine allows us to run Docker containers on the Google Cloud Platform. It schedules containers into the cluster and manages them according to the given requirements. It is built on top of Kubernetes, an open-source container orchestration tool.
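As referenced above, a minimal Docker Swarm sketch (the service name and image are examples):

$ docker swarm init                                       # make this engine a swarm manager
$ docker service create --name web --replicas 3 nginx     # run a replicated service across the cluster
$ docker service scale web=5                              # scale the service up
$ docker service ls                                       # list services and their state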

Continue Reading About Latest DevOps Trends At: XenonStack.com/Blog