XenonStack

A Stack Innovator


Thursday, 12 December 2019


DevOps for Machine Learning, TensorFlow and PyTorch

Why Continuous Integration and Deployment?

Like modern web applications, machine learning systems need agile processes because client and consumer requirements change constantly. The particular challenge in machine learning is building a system that works well in the real world, and real-world scenarios change continuously, so the system needs continuous learning and training. The solution is DevOps for machine learning and deep learning: the model is periodically retrained on new data, then validated and tested for accuracy to make sure it still performs well against current real-world scenarios.


TensorFlow and PyTorch are open-source frameworks that support DevOps for machine learning. TensorFlow is developed by Google and was influenced by Theano, whereas PyTorch is developed by Facebook and is based on Torch. Both frameworks define computational graphs. In TensorFlow, the entire computational graph must be defined before the ML algorithm runs; PyTorch uses dynamic graphs and builds them on the fly.
TensorFlow has TensorBoard for visualization, which runs directly in the browser; PyTorch has no equivalent tool, but Matplotlib can be used with it. TensorFlow also has more community support and online solutions than PyTorch. Whichever framework is used to build the machine learning model, CI/CD is essential. Developers and data scientists spend most of their time manually managing and deploying models to production, which introduces many human errors. This should be an automated process with a well-defined pipeline and model versioning.
The skills needed by a data scientist are changing: the role is less about visualization and statistics and is moving closer to engineering. Continuous integration and deployment of machine learning models is the real challenge in the data science world; productionizing models requires a proper integration and deployment pipeline. Since the real world changes continuously, the system should be able to keep learning over time, and a continuous integration and deployment pipeline makes this happen.
Today, when a modern application needs a continuous pipeline, tools like Git, Bitbucket, GitLab CI and Jenkins are used for versioning and managing the code. For a modern application only the codebase has to be managed and versioned, but machine learning and AI applications are more iterative and complex: the data must be handled as well. A system is required that can version and manage the data, the models and the intermediate artifacts.

Continuous Development life cycle

Git is a source code management and version control tool. Versioning the code is critical for product releases, and every file has a history for exploring changes and reviewing the code.

Git for versioning and managing code

In machine learning and AI systems, the model code needs to be managed for releases and change tracking. Git is the most widely used source code management tool. In continuous integration and continuous deployment, Git manages versions through tags and branches, and git-flow can be used for feature branches.

DVC for versioning models and data

Unlike source code, models and data are much larger, and Git is not suitable for cases where data and model files are large. DVC (Data Version Control) is a data science version control system that provides end-to-end support for managing training data, intermediate data and model files.

Version control

DVC provides Git-like commands to add, commit and push models and data to remote storage such as S3, Azure, GCP, Minio or SSH. It also includes data provenance for tracking the evolution of machine learning models, and it helps with reproducibility when you need to return to a particular experiment.
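As a sketch, the Git-plus-DVC flow for versioning a dataset and a model might look like the following; the file names, remote name and bucket are placeholders, not taken from the article.

```shell
# Hypothetical example: file names and the S3 bucket are placeholders.
git init && dvc init

# Track a large dataset and a trained model with DVC
dvc add data/train.csv
dvc add models/model.pt

# Git tracks only the small .dvc pointer files, not the artifacts
git add data/train.csv.dvc models/model.pt.dvc .gitignore
git commit -m "Track dataset and model with DVC"

# Configure an S3 bucket as the default remote and push the artifacts
dvc remote add -d storage s3://my-bucket/dvc-store
dvc push
```

A teammate can then `git pull` the pointers and `dvc pull` the matching data, which is what makes experiments reproducible across machines.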

Experiment management

Metric tracking is easy with DVC: its metrics feature lists all branches along with their metric values, making it simple to pick the best version of an experiment.

Deployment and Collaboration

DVC push and pull commands are available to move changes to production or staging. DVC also has a built-in way to create a DAG of ML steps: the DVC run command is used to create the deployment pipeline. It streamlines the work into a single, reproducible environment and makes that environment easy to share.
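The DAG idea can be sketched with `dvc run`, which records each stage's dependencies and outputs so DVC can rebuild the pipeline end to end; the scripts and artifact names here are hypothetical, and the exact flags vary between DVC versions.

```shell
# Hypothetical pipeline; script and artifact names are placeholders.
# Each stage declares dependencies (-d) and outputs (-o), and DVC links
# the stages into a DAG it can reproduce.
dvc run -f prepare.dvc -d src/prepare.py -d data/raw.csv -o data/train.csv \
    python src/prepare.py

dvc run -f train.dvc -d src/train.py -d data/train.csv -o models/model.pt \
    python src/train.py

# Re-run only the stages whose inputs have changed
dvc repro train.dvc

# Share the result: teammates pull code via Git and artifacts via DVC
dvc push
```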

Packaging Models

There are many ways to package models, but the most convenient and automated is using Docker on Kubernetes. Docker is not only used for packaging but also as a development environment, and it handles version and dependency management. It provides more reliability than running Flask on a virtual machine.
In this approach, Nginx, Gunicorn and Docker Compose are used to create a scalable, repeatable template that is easy to run with continuous integration and deployment.

Directory Structure

├── README.md
├── nginx/
│   ├── Dockerfile
│   └── nginx.conf
├── api/
│   ├── Dockerfile
│   ├── app.py
│   ├── __init__.py
│   └── models/
├── docker-compose.yml
└── run_docker.sh
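The contents of run_docker.sh are not shown in the article; a minimal sketch, assuming the nginx and API services are defined in docker-compose.yml, might be:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of run_docker.sh; service names and flags are
# assumptions, not taken from the original article.
set -euo pipefail

# Build the nginx and api images defined in docker-compose.yml
docker-compose build

# Start the stack (nginx reverse proxy in front of the Gunicorn API)
docker-compose up -d

# Show the running containers so CI logs capture the resulting state
docker-compose ps
```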

How to Perform Continuous Model Testing for PyTorch and TensorFlow?

Feature Test

  • Check that feature values lie between the threshold values.
  • Check whether feature importance has changed with respect to the previous run.
  • Test each feature's relationship with the outcome variable in terms of correlation coefficients.
  • Test for feature unsuitability by measuring RAM usage, inference latency, etc.
  • Check whether a generated feature violates data-compliance requirements.
  • Measure code coverage of the feature-generating functions.
  • Run static code analysis on the code that generates the features.
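The first two checks above can be sketched in plain Python; the feature names, thresholds and baseline importances are invented for illustration and would come from a validation config in a real CI job.

```python
# Hypothetical feature tests; thresholds and importance values are made up
# for illustration and are not from the original article.

FEATURE_THRESHOLDS = {"age": (0, 120), "income": (0, 1_000_000)}

def check_feature_ranges(batch):
    """Return the names of features whose values fall outside their thresholds."""
    failures = []
    for name, (low, high) in FEATURE_THRESHOLDS.items():
        values = batch.get(name, [])
        if any(v < low or v > high for v in values):
            failures.append(name)
    return failures

def importance_drifted(current, previous, tolerance=0.1):
    """Flag features whose importance moved more than `tolerance` since the last run."""
    return [name for name in previous
            if abs(current.get(name, 0.0) - previous[name]) > tolerance]

# Example run: age 130 breaks its range, and age importance drifted 0.3
batch = {"age": [25, 40, 130], "income": [50_000, 80_000]}
print(check_feature_ranges(batch))
print(importance_drifted({"age": 0.5, "income": 0.3},
                         {"age": 0.2, "income": 0.35}))
```

In CI these would be plain assertions so a drifting feature fails the build before the model is promoted.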


Continue Reading: 
XenonStack/Blogs


Friday, 10 February 2017


Building Serverless Microservices With Python


Serverless Computing is Exploding


As we move to different models of production, distribution and management of applications, it makes sense that the behind-the-scenes processes should be abstracted out and handled by third parties, in a move towards further decentralization.

And that’s exactly what serverless computing does, and startups and big companies alike are adopting this new way of running applications.

In this post, we will answer the question: what is serverless all about, and how does this new trend affect the way people write and deploy applications?

Serverless Computing


"Serverless" denotes a special kind of software architecture in which application logic is executed in an environment without visible processes, operating systems, servers or virtual machines.

It’s worth mentioning that such an environment actually runs on top of an operating system and uses physical servers or virtual machines, but the responsibility for provisioning and managing the infrastructure belongs entirely to the service provider.

Therefore, a software developer can focus more on writing code.

Serverless Computing Advances the way Applications are Developed


Serverless applications will change the way we develop software. Traditionally, many business rules, boundary conditions and complex integrations are built into applications; this prolongs the completion of the system, introduces many defects and, in effect, hard-wires the system to a certain set of functional requirements.

The serverless application concept moves us away from dealing with complex system requirements and lets the application evolve over time. It is also easy to deploy these microservices without disrupting the rest of the system.

The progression below shows how the way applications are developed has changed over time.

Monolith - A monolith application puts all of its functionality into a single process and scales by replicating the monolith across multiple servers.

Microservice - A microservice architecture puts each piece of functionality into a separate service and scales by distributing these services across servers, replicating as needed.

FaaS - Distributes microservices further into functions which are triggered based on events.

Monolith => Microservice => FaaS




Let’s get started with deploying a serverless application on NexaStack. To create a function, you first package your code and dependencies in a deployment package.

Then, you upload the deployment package on our environment to create your function.
  • Creating a Deployment Package
  • Uploading a Deployment Package
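As a sketch of the kind of function that goes into such a deployment package, here is a minimal handler in the AWS Lambda style; the event shape and response format are assumptions, since the article does not show the function code.

```python
import json

def handler(event, context=None):
    """Minimal function-as-a-service handler (AWS Lambda style).

    The event shape and the response format are illustrative assumptions;
    the target platform's actual contract may differ.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler can be exercised like any Python function:
print(handler({"name": "XenonStack"}))
```

Zipping this file with its dependencies yields the deployment package that is then uploaded to the environment.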

You May also Like: Building Serverless Microservices With Java

Database Integration For Your Application


  • Install MongoDB and configure it to get started.
  • Create a database, EmployeeDB.
  • Create a table, Employee.
  • Insert some records into the table for the demo.
  • Write a "config.py" file to set up the configuration for the serverless architecture.
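The original config.py appears only as an image, so the following is a hedged sketch of what it might contain; the database and collection names follow the EmployeeDB/Employee example above, while the host and port are placeholders.

```python
# config.py -- hypothetical MongoDB settings for the demo application.
# Host and port are placeholders; real deployments would read them from
# environment variables or a secrets store.
import os

MONGO_HOST = os.environ.get("MONGO_HOST", "localhost")
MONGO_PORT = int(os.environ.get("MONGO_PORT", "27017"))
MONGO_DB = "EmployeeDB"
MONGO_COLLECTION = "Employee"

def mongo_uri():
    """Build the connection URI the application hands to its MongoDB client."""
    return f"mongodb://{MONGO_HOST}:{MONGO_PORT}/{MONGO_DB}"
```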



Continue Reading the full Article at: XenonStack.com/Blog

Friday, 27 January 2017


How To Adopt DevOps in your Organization




While scaling up the business and working with remote teams with different skill sets and cultures, I realized the need for processes and automation to improve productivity and collaboration.

At the growth stage, with 3+ years of experience delivering more than 55 projects in various domains for startups and enterprises, including:

  • Mobility
  • BigData
  • Internet of Things
  • Private Cloud and Hybrid Cloud

Problems Faced By Developers & Operations Team


  • Ownership issues during deployment
  • Fewer and slower releases
  • Flat access control and security
  • Revision control
  • Scaling up resources for the application stack
  • Manual processes in the delivery pipeline
  • Isolated declaration of dependencies
  • A single configuration for multiple deployments
  • Manual testing resulting in slower releases
  • Shared backing services

Lean Start To Adopt DevOps


We started the transformation towards a DevOps strategy by integrating DevOps tools, processes and data into our work culture.

In parallel, we started adopting different infrastructure architectures: building a private cloud, Docker, Apache Mesos and Kubernetes.

Steps we took to adopt DevOps

  • Enforcing rules with the help of the right tools - Agile board integration with SCM, build tool and deployment tool
  • Collaboration Tools - Rocket Chat Integration with Taiga, GitLab, Jenkins
  • Continuous Integration and Delivery
  • Explicit Dependency Management
  • Automated Testing
  • Hands on Training

We started by creating two separate teams from the existing pool of developers to adopt the DevOps culture for new projects in Big Data and mobile applications.

After initial hurdles in adapting to the collaboration tools and the new delivery pipeline, the results were extraordinary.

Results After Initial Phase


  • Improved performance and productivity
  • Less manual work
  • Better collaboration and communication
  • Developers becoming more empowered and involved in delivery
  • Proper dependency and configuration management

Challenges In First Phase


  • Cultural Shift in the way Things were being developed
  • Changing Mindset for Adaptation
  • Support for Legacy Environments
  • Integrating Security and Compliance on new Setup
  • No support for Overlay Networks

Overall Results


Deployed Solution in Healthcare Startup


We then implemented this approach in our healthcare startup RayCare, which had multiple workflows and Big Data loads.

The technology stack for the startup was modern and cutting-edge, well suited to leveraging microservices.

  • Stood up development, staging and production environments with near-complete parity between them.
  • Jenkins jobs for Android, iOS, Angular and the backend.
  • A highly available, distributed Cassandra cluster.
  • Ansible playbooks for bringing the environment up and down with one command.
  • Docker for the development and staging environments.

Deployed Solution in Analytics Startup


Introducing DevOps practices in an Analytics Startup with well experienced Database Administrators was altogether a great experience.

The aim was to speed up data loading and database provisioning, and to isolate environments for a team working across three distinct locations.

The main highlights of the implementation are:

  • Bridging the gap between three different development locations by using relevant tools to improve collaboration.
  • Writing scripts to automate data operations processes as much as possible.
  • An application delivery pipeline for an app with multiple versions running for different clients at the same time.

Deployed and collaborated with an Oracle Partner for ServiceNow Integration & DevOps


The aim was to integrate the complex Oracle Enterprise Manager (OEM) with ServiceNow, a popular IT service management solution, to ease the provisioning of Oracle-based resources through ServiceNow. The solution provided was capable of:

  • Automating Oracle DB binaries provisioning.
  • Running Chef recipes from Oracle Enterprise Manager.
  • A RESTful service to trigger the DB provisioning recipe.
  • Item listing in ServiceNow to trigger the RESTful service on successful transaction of an item.
  • An admin approval process for infrastructure provisioning.

Deployed Solution in a WebRTC-based Calling Platform - Web & Mobile App


The company is fully dedicated to a free calling app for the iOS and Android mobile platforms, with the ability to call phone numbers all around the globe.

The technologies used include VMware, Couchbase, PostgreSQL, CentOS 7, Zabbix and many more.

Hardware: OVH dedicated servers

Virtualization: VMware bare metal

All services: on VMware virtual machines

The system had more than 10 Node.js apps and a highly available, distributed CouchDB cluster, which was changed to MongoDB in later stages.

We made the application delivery pipeline fully automated using Ansible and Jenkins. The infrastructure was also made mutable using Ansible, so it could scale up and down according to load.

Deployed Solution For Microsoft Technology Stack in HealthCare Wellness


Migrating applications built on the .NET ecosystem to a DevOps-powered application lifecycle pipeline is a challenging job. The application was deployed on a private cloud powered by OpenStack with the following features:

  • OpenStack images for the .NET ecosystem.
  • Jenkins jobs for continuous integration.
  • Integration of OpenStack with object storage powered by Ceph.
  • Visual Studio integration with popular DevOps tools.

Summary


During the transformation towards Agile and DevOps, we realised that DevOps needs a platform where we can define workflows with different integrations.