1 - Kubernetes

Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications.

Introduction to Containers, Docker and Kubernetes

Container technologies such as Docker and Kubernetes are essential in modern cloud infrastructure, but what are they and how do they work? This article will present a quick introduction to the key concepts. To help you understand the concepts in a more practical manner, the introduction will be followed by a tutorial on setting up a local development copy of Kubernetes. We will then deploy a MySQL database and the Joget application platform to provide a ready environment for visual, rapid application development.

Containers

Containers are a way of packaging software so that application code, libraries, and dependencies are packed together in a repeatable way. Containers share the underlying operating system but run in isolated processes.

At this point, you might be asking how a container is different from a virtual machine (VM) running on a VM platform (called a hypervisor) such as VMware or VirtualBox. Virtual machines include the entire operating system (OS) running on virtual hardware and are good for isolating the whole environment. For example, you could run an entire Windows Server installation on top of a Mac computer running macOS. Containers, on the other hand, sit above the OS and can share libraries, so they are more lightweight and thus more suitable for deployment at a larger, more efficient scale. The diagram below illustrates the difference visually.

Difference between virtual machines and containers

Docker

Docker is an open source tool to create, deploy and run containers. In Docker, you essentially define a Dockerfile that is like a snapshot of an application that can be deployed and run wherever a Docker runtime is available, whether in the cloud, on your PC, or even within a VM. Docker also supports repositories such as Docker Hub where container images are stored to be distributed.

While Docker is not the only container technology available (alternatives include CoreOS rkt, Mesos, and LXC), it is dominant and the de facto industry standard right now.

Kubernetes

If Kubernetes sounds Greek to you, it’s because it literally is. Kubernetes is the Greek word for “captain” or “helmsman of a ship”. Kubernetes, shortened to K8s (the middle eight letters replaced by the number 8), is an open source container orchestration platform. What does orchestration mean in this case? While containers make it easier to package software, they do not help in many operational areas, for example:

  • How do you deploy containers across different machines? What happens when a machine fails?
  • How do you manage load? How can containers be automatically started or stopped according to the load on the system?
  • How do you handle persistent storage? Where do containers store and share files?
  • How do you deal with failures? What happens when a container crashes?

An orchestration platform helps to manage containers in these areas and more.
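
As a concrete, hedged illustration, a minimal Kubernetes Deployment addresses several of these questions declaratively: it keeps a fixed number of replicas running and reschedules containers when they crash or their machine fails. The image name and replica count below are placeholders, not taken from this article:

```yaml
# Hypothetical Deployment manifest. Kubernetes keeps spec.replicas copies
# of the container running, restarting or rescheduling them on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

Applying this manifest with `kubectl apply` asks the cluster to converge on three running replicas; scaling, rescheduling, and restarts are then handled by Kubernetes rather than by hand.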

Originally created by Google based on their need to support massive scale, Kubernetes is now under the purview of Cloud Native Computing Foundation (CNCF), a vendor-neutral foundation managing popular open source projects.

There are alternatives to Kubernetes (such as Docker Swarm, Mesos, and Nomad), but Kubernetes has seemingly won the container orchestration war, having been adopted by almost all the big vendors, including Google, Amazon, Microsoft, IBM, Oracle, Red Hat, and many more.

Setting up a highly available Kubernetes cluster with kubeadm

Implementation

Dependencies:

  1. Load Balancer: HAProxy

  2. Keepalived: to run HAProxy reliably, we also need keepalived

    # /etc/haproxy/haproxy.cfg on load balancer 1 & load balancer 2
    global
       log /dev/log local0
       log /dev/log local1 notice
       chroot /var/lib/haproxy
       stats timeout 30s
       user haproxy
       group haproxy
       daemon
    
    defaults
       log global
       option tcplog
       mode tcp
       option dontlognull
       timeout connect 5s
       timeout client 30s
       timeout server 30s
    
    listen lets-encrypt-http-resolver
        bind *:80
        mode http
        maxconn 8
        stats uri /haproxy?stats
        balance roundrobin
        server k8s-nginx-ingress-01 192.168.0.111:80 check
        server k8s-nginx-ingress-02 192.168.0.112:80 check
        server k8s-nginx-ingress-07 192.168.0.107:80 check
    
    listen k8s-nginx-ingress
        bind *:443
        mode tcp
        maxconn 128
        balance roundrobin
        option tcp-check
        server k8s-nginx-ingress-01 192.168.0.111:443 check fall 3 rise 2 
        server k8s-nginx-ingress-02 192.168.0.112:443 check fall 3 rise 2
        server k8s-nginx-ingress-07 192.168.0.107:443 check fall 3 rise 2
    
    listen k8s-api-server
        bind *:6443
        mode tcp
        maxconn 128
        timeout connect 5s
        timeout client 24h
        timeout server 24h
        server k8s-master-01 192.168.0.111:6443 check fall 3 rise 2
        server k8s-master-02 192.168.0.112:6443 check fall 3 rise 2
        server k8s-master-07 192.168.0.107:6443 check fall 3 rise 2

    # /etc/keepalived/keepalived.conf on load balancer 1
    global_defs {
      enable_script_security
      script_user root root
      router_id lb01                            
    }
    vrrp_script chk_haproxy {
      script "/usr/bin/killall -0 haproxy"
      interval 2
      weight 2
    }
    vrrp_instance VI_1 {
      virtual_router_id 51
      advert_int 1
      priority 100
      state MASTER
      interface virbr0
      #track_interface {
      #  p4p2
      #  virbr0
      #}
      unicast_src_ip 192.168.0.101
      unicast_peer {
        192.168.0.102
      }
      virtual_ipaddress {
        192.168.0.203 dev virbr0
      }
      authentication {
         auth_type PASS
         auth_pass 1111
      }
      track_script {
        chk_haproxy
      }
    }
    # /etc/keepalived/keepalived.conf on load balancer 2
    global_defs {
      enable_script_security
      script_user root root
      router_id lb02
    }
    vrrp_script chk_haproxy {
      script "/usr/bin/killall -0 haproxy"
      interval 2
      weight 2
    }
    vrrp_instance VI_1 {
      virtual_router_id 51
      advert_int 1
      priority 99
      state BACKUP
      interface virbr0
      #track_interface {
      #  p4p2
      #  virbr0
      #}
      unicast_src_ip 192.168.0.102
      unicast_peer {
        192.168.0.101
      }
      virtual_ipaddress {
        192.168.0.203 dev virbr0
      }
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      track_script {
        chk_haproxy
      }
    }

2 - Effective Docker

Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.

Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines.



Docker in Production

Docker can be used in production to create an application “package”, avoiding problems when working with multiple environments.



Docker in Development

Docker can be used to run a test environment on all your team members’ devices. Usually, your application also runs alongside a database, a proxy, and so on; in those cases, Docker Compose is a good way to go.

# docker-compose.yaml (https://gitlab.com/nghinhut/comstar/blob/master/docker-compose.yaml)
version: "3.7"

services:
  ## DASHBOARD
  dashboard:
    image: bitnami/node:10
    command: sh -c './node_modules/.bin/ng serve --host 0.0.0.0 --port 4200' #--disableHostCheck'
    environment:
      - PORT=4200
    volumes:
      - ./dashboard:/app

  ## CORE
  core:
    image: bitnami/node:10
    command: sh -c 'npm start'
    env_file:
      - .env
    environment:
      - DATABASE_URL=mongodb://root:password123@mongodb-primary:27017/admin
    volumes:
      - ./core:/app

  test-core:
    image: bitnami/node:10
    command: sh -c './node_modules/.bin/jest --watchAll'
    env_file:
      - .env
    environment:
      - DATABASE_URL=mongodb://root:password123@mongodb-primary:27017/admin
    volumes:
      - ./core:/app

  ## Envoy Proxy (require for dashboard)
  envoy:
    image: envoyproxy/envoy-alpine:v1.11.1
    command: sh -c '/usr/local/bin/envoy -c /etc/envoy/envoy.yaml'
    ports:
      - 9901:9901
      - 10000:10000
    volumes:
      - ./core/envoy/envoy.yaml:/etc/envoy/envoy.yaml
    depends_on:
      - core
      - dashboard

  ## gRPC Gateway (optional)
  grpc-gateway:
    build:
      context: ./core/grpc-gateway
      dockerfile: Dockerfile
    command: /app/grpc_gateway --backend=core:5000
    depends_on:
      - core

  ## MongoDB cluster (https://github.com/bitnami/bitnami-docker-mongodb/blob/master/docker-compose-replicaset.yml)
  mongodb-primary:
    image: 'bitnami/mongodb:4.2'
    ports:
      - 27017:27017
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
    volumes:
      - 'mongodb_master_data:/bitnami'

  mongodb-secondary:
    image: 'bitnami/mongodb:4.2'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_PRIMARY_HOST=mongodb-primary
      - MONGODB_PRIMARY_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123

  mongodb-arbiter:
    image: 'bitnami/mongodb:4.2'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_PRIMARY_HOST=mongodb-primary
      - MONGODB_PRIMARY_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123


volumes:
  redis_data:
    driver: local
  mongodb_master_data:
    driver: local

3 - Microservices Architecture

Microservices are a software development technique —a variant of the service-oriented architecture (SOA) structural style— that arranges an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight.






Definition



Goals



Architecture’s Properties



Design Patterns for Microservices



References

  1. https://en.wikipedia.org/wiki/Microservices

4 - Identity Management Service

Identity management (IdM), also known as identity and access management (IAM or IdAM), is a framework of policies and technologies for ensuring that the proper people in an enterprise have the appropriate access to technology resources.

IdM systems fall under the overarching umbrella of IT security and data management. Identity and access management systems not only identify, authenticate, and authorize the individuals who will be utilizing IT resources, but also the hardware and applications employees need to access. Identity and access management solutions have become more prevalent and critical in recent years as regulatory compliance requirements have become increasingly rigorous and complex. IdM addresses the need to ensure appropriate access to resources across increasingly heterogeneous technology environments and to meet those compliance requirements.







Key Features

  1. Secure
  2. Highly available
  3. Lightweight
  4. Scalable
  5. High-performance connections, with streaming ability




Context of Use

  1. The service should be deployed in a private network, and must be able to make connections to an OAuth2 Authorization Server and a UMA2 Authorization Server.




Business Features

Create User




Searching problem

Searching on encrypted data requires extra steps. Choosing the wrong approach may lead you to spend more resources encrypting and decrypting data while still not maintaining good performance.

Blind indexing

This is the best approach I could find, for now.

Create blind index table

Query blind index table
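
Both steps can be sketched with standard-library primitives. The following is an illustrative Python sketch, not the article's implementation: the schema, the `INDEX_KEY`, and `pretend_encrypt` are made up, and a real system would encrypt values with a proper cipher or Vault's transit engine (referenced below).

```python
# Blind indexing sketch: store HMAC(index_key, normalized_value) next to the
# encrypted value, then search by recomputing the HMAC of the query term.
# Equal plaintexts yield equal index values, so exact-match lookups work
# without decrypting any rows.
import hashlib
import hmac
import sqlite3

INDEX_KEY = b"example-index-key"  # in practice: a secret from your KMS/Vault

def blind_index(value: str) -> str:
    # Normalize first so "Alice@Example.com" and "alice@example.com" match.
    normalized = value.strip().lower().encode("utf-8")
    return hmac.new(INDEX_KEY, normalized, hashlib.sha256).hexdigest()

def pretend_encrypt(value: str) -> bytes:
    return value.encode("utf-8")[::-1]  # placeholder only, NOT real encryption

db = sqlite3.connect(":memory:")

# Create blind index table: insert the ciphertext plus its blind index.
db.execute("CREATE TABLE users (email_encrypted BLOB, email_bidx TEXT)")
for email in ["alice@example.com", "bob@example.com"]:
    db.execute(
        "INSERT INTO users VALUES (?, ?)",
        (pretend_encrypt(email), blind_index(email)),
    )

# Query blind index table: hash the search term, match on the index column.
rows = db.execute(
    "SELECT email_encrypted FROM users WHERE email_bidx = ?",
    (blind_index(" Alice@Example.COM "),),  # normalization makes this match
).fetchall()
print(len(rows))  # 1
```

Because equal plaintexts map to equal index values, only exact-match lookups are possible, and the index key must be kept as secret as the encryption key itself.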




References

  1. https://en.wikipedia.org/wiki/Identity_management
  2. https://www.sitepoint.com/how-to-search-on-securely-encrypted-database-fields/
  3. https://www.vaultproject.io/docs/secrets/transit/index.html

5 - GitLab Auto DevOps

Deployment is time consuming




GitLab Auto DevOps comes to the rescue

Auto DevOps helps you save a lot of time, avoid human errors, and improve delivery time to production, especially when you are doing agile.


So, let's hop on!
GitLab provides a Helm chart called auto-deploy-app that already integrates with many of GitLab's features. It makes deploying to Kubernetes with GitLab much easier.

If you’re not familiar with Kubernetes or Helm, don’t worry: you don’t need to be. You only have to follow a few steps:

  1. First, write a Dockerfile for your application.
  2. Second, configure your app to serve its APIs on port 5000 (the default port specified in auto-deploy-app). Also create an endpoint at path / (the default health-check path in auto-deploy-app).

Your endpoint needs to respond with status code 200 in order to keep your application running. Otherwise, Kubernetes will keep destroying your container and replacing it with another after a few failed attempts.

If your application uses a database, you should also check whether the database connection is still good; if things go wrong, just return 500 and Kubernetes will restart your application. (Caution: use replication to minimize downtime.)
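
The health-check contract above can be sketched as follows. This is an illustrative Python version (the article's stack is Node.js); `check_database` is a made-up placeholder, and a real deployment would bind to port 5000 as auto-deploy-app expects, rather than the ephemeral port used in this self-contained demo:

```python
# Minimal health-check endpoint: return 200 at "/" while the service is
# healthy, 500 otherwise, so Kubernetes knows when to restart the pod.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database() -> bool:
    # Placeholder: a real service would ping its MongoDB/MySQL here.
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/" and check_database():
            status, body = 200, b"ok"         # healthy: Kubernetes is satisfied
        else:
            status, body = 500, b"unhealthy"  # unhealthy: the pod gets restarted
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging in the example
        pass

# Demo: serve on an ephemeral port and probe it once, as Kubernetes would.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
print(status)  # 200
```

The same contract applies whatever the language: probe path `/`, answer 200 when healthy, anything else when not.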


Example

Your Dockerfile will look something like this:

    # Example Dockerfile for Node.js
    FROM node:12-alpine
    
    ENV PORT 5000
    WORKDIR /app
    
    COPY package*.json ./
    RUN npm install
    COPY . .
    
    EXPOSE 5000
    CMD [ "npm", "start" ]



More…

For more info, please read the GitLab Auto DevOps documentation.

6 - UMA2

(Published 2019-11-01 by [email protected] (Nghi L. M. Nhựt); categories: OAuth; tags: oauth, uma2)

User-Managed Access (UMA) is an award-winning OAuth-based protocol designed to give an individual a unified control point for authorizing who and what can get access to their digital data, content, and services, no matter where all those things live.

UMA2 Grant Flow
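
The UMA2 grant flow ends with the client exchanging a permission ticket for a requesting party token (RPT) at the authorization server's token endpoint. The sketch below only builds the request body; the ticket value is a placeholder, while the `grant_type` URN and `ticket` parameter come from the UMA 2.0 Grant specification:

```python
# Build the form-encoded body of the UMA2 token request. The client POSTs
# this to the authorization server's token endpoint to obtain an RPT.
from urllib.parse import urlencode

def build_rpt_request(ticket: str) -> str:
    return urlencode({
        # Grant type URN defined by the UMA 2.0 Grant specification.
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        # Permission ticket previously obtained from the resource server.
        "ticket": ticket,
    })

body = build_rpt_request("example-permission-ticket")
print(body)
```

If the authorization server's policies are satisfied, the token response carries the RPT, which the client then presents to the resource server.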

References

  1. https://kantarainitiative.org/confluence/display/uma/Home

7 - Workflow Technology

Workflow management system (WfMS or WFMS)

Open-source WfMS

More Reading