Monday, June 13, 2022

ChatOps: Managing Kubernetes Deployments in Webex


This is the third post in a series about writing ChatOps services on top of the Webex API.  In the first post, we built a Webex Bot that received message events from a group room and printed the event JSON out to the console.  In the second, we added security to that Bot, first adding an encrypted authentication header to Webex events, and subsequently adding a simple list of authorized users to the event handler.  We also added user feedback by posting messages back to the room where the event was raised.

In this post, we'll build on what was done in the first two posts and start to apply real-world use cases to our Bot.  The goal here will be to manage Deployments in a Kubernetes cluster using commands entered into a Webex room.  Not only is this a fun problem to solve, but it also provides wider visibility into the goings-on of an ops team, as they can scale a Deployment or push out a new container version in the public view of a Webex room.  You can find the completed code for this post on GitHub.

This post assumes that you've completed the steps listed in the first two blog posts.  You can find the code from the second post here.  Also, most important, be sure to read the first post to learn how to make your local development environment publicly accessible so that Webex Webhook events can reach your API.  Make sure your tunnel is up and running and Webhook events can flow through to your API successfully before proceeding to the next section.  In this case, I've set up a new Bot called Kubernetes Deployment Manager, but you can use your existing Bot if you like.  From here on out, this post assumes that you've taken these steps and have a successful end-to-end data flow.

Architecture

Let's take a look at what we're going to build:

Architecture Diagram

Building on top of our existing Bot, we're going to create two new services: MessageIngestion and Kubernetes.  The latter will take a configuration object that gives it access to our Kubernetes cluster and will be responsible for sending requests to the K8s control plane.  Our Index Router will continue to act as a controller, orchestrating data flows between services.  And our WebexNotification service, which we built in the second post, will continue to be responsible for sending messages back to the user in Webex.

Our Kubernetes Resources

In this section, we'll set up a simple Deployment in Kubernetes, as well as a Service Account that we can leverage to communicate with the Kubernetes API using the NodeJS SDK.  Feel free to skip this part if you already have these resources created.

This section also assumes that you have a Kubernetes cluster up and running, and that both you and your Bot have network access to interact with its API.  There are plenty of resources online for getting a Kubernetes cluster set up and getting kubectl installed, both of which are beyond the scope of this blog post.

Our Test Deployment

To keep things simple, I'm going to use Nginx as my deployment container – an easily accessible image that doesn't have any dependencies to get up and running.  If you have a Deployment of your own that you'd like to use instead, feel free to replace what I've listed here with that.

# in resources/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
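
If you apply this and then check on the Deployment, you should see the two Pods come up.  The commands below assume your kubectl context already points at the target cluster:

$ kubectl apply -f resources/nginx-deployment.yaml
$ kubectl get deployment nginx-deployment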

Our Service Account and Role

The next step is to make sure our Bot code has a way of interacting with the Kubernetes API.  We can do that by creating a Service Account (SA) that our Bot will assume as its identity when calling the Kubernetes API, and ensuring it has the proper access with a Kubernetes Role.

First, let's set up an SA that can interact with the Kubernetes API:

# in resources/sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chatops-bot

Now we'll create a Role in our Kubernetes cluster that has access to just about everything in the default Namespace.  In a real-world application, you'll likely want to take a more restrictive approach, providing only the permissions that allow your Bot to do what you intend; but wide-open access will work for a simple demo:

# in resources/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: chatops-admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Finally, we'll bind the Role to our SA using a RoleBinding resource:

# in resources/rb.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: chatops-admin-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: chatops-bot
  apiGroup: ""
roleRef:
  kind: Role
  name: chatops-admin
  apiGroup: "rbac.authorization.k8s.io"

Apply these using kubectl:

$ kubectl apply -f resources/sa.yaml
$ kubectl apply -f resources/role.yaml
$ kubectl apply -f resources/rb.yaml
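
A quick way to confirm all three resources exist, using the names created above:

$ kubectl get serviceaccount chatops-bot
$ kubectl get role chatops-admin
$ kubectl get rolebinding chatops-admin-binding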

Once your SA is created, fetching its information will show you the name of the Secret in which its Token is stored.

Screenshot of the Service Account's describe output

Fetching information about that Secret will print out the Token string in the console.  Be careful with this Token, as it's your SA's secret, used to access the Kubernetes API!

The secret token value
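
If you prefer the command line to screenshots, here's a minimal sketch of the same lookup.  The Secret name comes from the describe output, so treat it as a placeholder.  Note that clusters running Kubernetes 1.24 or later no longer auto-create token Secrets for Service Accounts; on those, kubectl create token chatops-bot is one alternative for minting a token:

$ kubectl describe serviceaccount chatops-bot
$ kubectl describe secret <token-secret-name-from-describe-output>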

Configuring the Kubernetes SDK

Since we're writing a NodeJS Bot in this blog post, we'll use the JavaScript Kubernetes SDK for calling our Kubernetes API.  You'll notice, if you look at the examples in the Readme, that the SDK expects to be able to pull from a local kubectl configuration file (which, for example, is stored on a Mac at ~/.kube/config).  While that might work for local development, it's not ideal for Twelve-Factor development, where we typically pass in our configurations as environment variables.  To get around this, we can pass in a pair of configuration objects that mimic the contents of our local Kubernetes config file, and use those configuration objects to assume the identity of our newly created Service Account.

Let's add some environment variables to the AppConfig class that we created in the previous post:

// in config/AppConfig.js
// inside the constructor block
// after the existing environment variables

// whatever you'd like to name this cluster; any string will do
this.clusterName = process.env['CLUSTER_NAME'];
// the base URL of your cluster, where the API can be reached
this.clusterUrl = process.env['CLUSTER_URL'];
// the CA cert set up for your cluster, if applicable
this.clusterCert = process.env['CLUSTER_CERT'];
// the SA name from above - chatops-bot
this.kubernetesUsername = process.env['KUBERNETES_USERNAME'];
// the token value referenced in the screenshot above
this.kubernetesToken = process.env['KUBERNETES_TOKEN'];

// the rest of the file is unchanged…
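
For local testing, you might export these before starting the app.  Here's a hypothetical example with placeholder values; CLUSTER_CERT holds the base64-encoded CA data, since that's what the SDK's caData field expects:

export CLUSTER_NAME="demo-cluster"
export CLUSTER_URL="https://203.0.113.10:6443"
export CLUSTER_CERT="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t..."
export KUBERNETES_USERNAME="chatops-bot"
export KUBERNETES_TOKEN="<token-from-the-previous-step>"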

These five lines will allow us to pass configuration values into our Kubernetes SDK and configure a local client.  To do that, we'll create a new service called KubernetesService, which we'll use to communicate with our K8s cluster:

// in services/Kubernetes.js

import {KubeConfig, AppsV1Api, PatchUtils} from '@kubernetes/client-node';

export class KubernetesService {
    constructor(appConfig) {
        this.appClient = this._initAppClient(appConfig);
        this.requestOptions = { "headers": { "Content-type": PatchUtils.PATCH_FORMAT_JSON_PATCH } };
    }

    _initAppClient(appConfig) { /* we'll fill this in soon */ }

    async takeAction(k8sCommand) { /* we'll fill this in later */ }
}

This set of imports at the top gives us the objects and methods that we'll need from the Kubernetes SDK to get up and running.  The requestOptions property set in this constructor will be used when we send updates to the K8s API; PATCH_FORMAT_JSON_PATCH resolves to the application/json-patch+json content type, which tells the API server how to interpret our PATCH bodies.

Now, let's populate the contents of the _initAppClient method so that we have an instance of the SDK ready to use in our class:

// inside the KubernetesService class
_initAppClient(appConfig) {
    // build objects from the env vars we pulled in
    const cluster = {
        name: appConfig.clusterName,
        server: appConfig.clusterUrl,
        caData: appConfig.clusterCert
    };
    const user = {
        name: appConfig.kubernetesUsername,
        token: appConfig.kubernetesToken,
    };
    // create a new config factory object
    const kc = new KubeConfig();
    // pass in our cluster and user objects
    kc.loadFromClusterAndUser(cluster, user);
    // return the client created by the factory object
    return kc.makeApiClient(AppsV1Api);
}
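
As a quick sanity check that the client and credentials work, you could list the Deployments in the default Namespace.  This is a hypothetical snippet for debugging only, not part of the final service:

// hypothetical smoke test - not part of the final code
async _listDeployments() {
    const res = await this.appClient.listNamespacedDeployment("default");
    console.log(res.body.items.map(d => d.metadata.name));
}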

Simple enough.  At this point, we have a Kubernetes API client ready to use, stored in a class property so that public methods can leverage it in their internal logic.  Let's move on to wiring this into our route handler.

Message Ingestion and Validation

In a previous post, we took a look at the full JSON payload that Webex sends to our Bot when a new message event is raised.  It's worth taking a look again, since it indicates what we need to do in our next step:

Message event body

If you look through this JSON, you'll notice that nowhere does it list the actual content of the message that was sent; it simply provides event data.  However, we can use the data.id field to call the Webex API and fetch that content so that we can take action on it.  To do so, we'll create a new service called MessageIngestion, which will be responsible for pulling in messages and validating their content.

Fetching Message Content

We'll start with a very simple constructor that pulls in the AppConfig to build out its properties, and one simple public method that calls a couple of stubbed-out private methods:

// in services/MessageIngestion.js
export class MessageIngestion {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    async determineCommand(event) {
        const message = await this._fetchMessage(event);
        return this._interpret(message);
    }

    async _fetchMessage(event) { /* we'll fill this in next */ }

    _interpret(rawMessageText) { /* we'll talk about this below */ }
}

We're off to a good start, so now it's time to write the code for fetching the raw message text.  We'll call the same /messages endpoint that we used to create messages in the previous blog post, but in this case, we'll fetch a specific message by its ID:

// in services/MessageIngestion.js
// inside the MessageIngestion class

// notice we're using fetch, which requires NodeJS 17.5 or higher, plus a runtime flag
// see the previous post for more info
async _fetchMessage(event) {
    const res = await fetch("https://webexapis.com/v1/messages/" + event.data.id, {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${this.botToken}`
        },
        method: "GET"
    });
    const messageData = await res.json();
    if(!messageData.text) {
        throw new Error("Could not fetch message content.");
    }
    return messageData.text;
}

If you console.log the messageData output from this fetch request, it will look something like this:

The messageData object

As you can see, the message content takes two forms – first in plain text (identified with a red arrow), and second in an HTML block.  For our purposes, as you can see from the code block above, we'll use the plain-text content that doesn't include any formatting.
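
If the screenshot is hard to read, the response is shaped roughly like this – an abbreviated sketch with placeholder IDs and values, not the exact payload:

{
  "id": "<message-id>",
  "roomId": "<room-id>",
  "roomType": "group",
  "text": "Kubernetes Deployment Manager scale nginx-deployment to 3",
  "html": "<p><spark-mention ...>Kubernetes Deployment Manager</spark-mention> scale nginx-deployment to 3</p>",
  "personEmail": "user@example.com",
  "created": "2022-06-13T00:00:00.000Z"
}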

Message Evaluation and Validation

This is a complex topic, to say the least, and its complexities are beyond the scope of this blog post.  There are many ways to analyze the content of a message to determine user intent.  You could explore natural language processing (NLP), for which Cisco offers an open-source Python library called MindMeld.  Or you could leverage off-the-shelf (OTS) software like Amazon Lex.

In my code, I took the simple approach of static string analysis, with some rigid rules around the expected format of the message, e.g.:

<tagged-bot-name> scale <name-of-deployment> to <number-of-instances>

It's not the most user-friendly approach, but it gets the job done for a blog post.
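
For example, tagging the Bot from this post, a message matching that format would look like:

@Kubernetes Deployment Manager scale nginx-deployment to 3

(The leading mention is the tagged-bot-name placeholder; Webex renders it once you select the Bot from the @ autocomplete.)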

I have two intents available in my codebase – scaling a Deployment and updating a Deployment with a new image tag.  A switch statement runs analysis on the message text to determine which of the actions is intended, and a default case throws an error that will be handled in the index route handler.  Both have their own validation logic, which adds up to over sixty lines of string manipulation, so I won't list it all here.  If you're interested in reading through or leveraging my string manipulation code, it can be found on GitHub.

Analysis Output

The happy-path output of the _interpret method is a new data transfer object (DTO), created in a new file:

// in dto/KubernetesCommand.js
export class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}

This standardizes the expected format of the analysis output, which can be anticipated by the various command handlers that we'll add to our Kubernetes service.
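
To make the flow concrete, here's a heavily simplified sketch of what the _interpret switch might look like for just the scale intent.  The real version on GitHub validates each token; this sketch assumes the Bot mention has already been stripped from the text, and that KubernetesCommand is imported from ../dto/KubernetesCommand.js:

// a simplified sketch of _interpret - see GitHub for the full validation logic
_interpret(rawMessageText) {
    // expected shape: "scale <name-of-deployment> to <number-of-instances>"
    const tokens = rawMessageText.trim().split(/\s+/);
    switch (tokens[0]) {
        case "scale":
            return new KubernetesCommand({
                type: "scale",
                deploymentName: tokens[1],
                scaleTarget: parseInt(tokens[3], 10)
            });
        default:
            throw new Error(`Unrecognized command: ${tokens[0]}`);
    }
}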

Sending Commands to Kubernetes

For simplicity's sake, we'll focus on just the scaling workflow of the two I've got coded.  Suffice it to say, this is in no way scratching the surface of what's possible in your Bot's interactions with the Kubernetes API.

Creating a Webex Notification DTO

The first thing we'll do is craft the shared DTO that will contain the output of our Kubernetes command methods.  This will be passed into the WebexNotification service that we built in our last blog post, and it standardizes the expected fields for the methods in that service.  It's a very simple class:

// in dto/Notification.js
export class Notification {
    constructor(props = {}) {
        this.success = props.success;
        this.message = props.message;
    }
}

This is the object we'll build when we return the results of our interactions with the Kubernetes SDK.

Handling Commands

Earlier in this post, we stubbed out the public takeAction method in the KubernetesService.  This is where we'll determine what action is being requested, and then pass the command along to internal private methods.  Since we're only looking at the scale approach in this post, we'll have two paths in this implementation.  The code on GitHub has more.

// in services/Kubernetes.js
// inside the KubernetesService class
async takeAction(k8sCommand) {
    let result;
    switch (k8sCommand.type) {
        case "scale":
            result = await this._updateDeploymentScale(k8sCommand);
            break;
        default:
            throw new Error(`The action type ${k8sCommand.type} that was determined by the system is not supported.`);
    }
    return result;
}

Very straightforward – if a recognized command type is identified (in this case, just "scale"), an internal method is called and the results are returned.  If not, an error is thrown.

Implementing our internal _updateDeploymentScale method requires very little code.  However, it leverages the K8s SDK, which, to say the least, isn't very intuitive.  The data payload that we create includes an operation (op) that we'll perform on a Deployment configuration property (path), with a new value (value).  The SDK's patchNamespacedDeployment method is documented in the Typedocs linked from the SDK repo.  Here's my implementation:

// in services/Kubernetes.js
// inside the KubernetesService class
async _updateDeploymentScale(k8sCommand) {
    // craft a PATCH body with an updated replica count
    const patch = [
        {
            "op": "replace",
            "path": "/spec/replicas",
            "value": k8sCommand.scaleTarget
        }
    ];
    // call the K8s API with a PATCH request; the four undefined arguments
    // are optional parameters (pretty, dryRun, fieldManager, force)
    const res = await this.appClient.patchNamespacedDeployment(
        k8sCommand.deploymentName,
        "default",
        patch,
        undefined, undefined, undefined, undefined,
        this.requestOptions
    );
    // validate the response and return a Notification object to the caller
    return this._validateScaleResponse(k8sCommand, res.body);
}

The method on the last line of that code block is responsible for crafting our response output.

// in services/Kubernetes.js
// inside the KubernetesService class
// (assumes Notification is imported from ../dto/Notification.js at the top of the file)
_validateScaleResponse(k8sCommand, template) {
    if (template.spec.replicas === k8sCommand.scaleTarget) {
        return new Notification({
            success: true,
            message: `Successfully scaled to ${k8sCommand.scaleTarget} instances on the ${k8sCommand.deploymentName} deployment`
        });
    } else {
        return new Notification({
            success: false,
            message: `The Kubernetes API returned a replica count of ${template.spec.replicas}, which does not match the desired ${k8sCommand.scaleTarget}`
        });
    }
}
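
For intuition about what patchNamespacedDeployment is doing on the wire, the call above is roughly equivalent to this raw API request – a sketch in which the host, token, and replica count are placeholders:

$ curl -X PATCH \
  -H "Authorization: Bearer <sa-token>" \
  -H "Content-Type: application/json-patch+json" \
  -d '[{"op": "replace", "path": "/spec/replicas", "value": 3}]' \
  https://<cluster-url>/apis/apps/v1/namespaces/default/deployments/nginx-deployment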

Updating the Webex Notification Service

We're almost at the end!  We still have one service that needs to be updated.  In our last blog post, we created a very simple method that sent a message to the Webex room where the Bot was called, based on a simple success or failure flag.  Now that we've built a more complex Bot, we need more complex user feedback.

There are only two methods that we need to cover here.  They could easily be compacted into one, but I prefer to keep them separate for granularity.

The public method that our route handler will call is sendNotification, which we'll refactor as follows:

// in services/WebexNotifications.js
// inside the WebexNotifications class
// notice that we're now taking the original event
// and the Notification object
async sendNotification(event, notification) {
    let message = `<@personEmail:${event.data.personEmail}>`;
    if (!notification.success) {
        message += ` Oh no! Something went wrong! ${notification.message}`;
    } else {
        message += ` Well done! ${notification.message}`;
    }
    const req = this._buildRequest(event, message); // a new private method, defined below
    const res = await fetch(req);
    return res.json();
}

Finally, we'll build the private _buildRequest method, which returns a Request object that can be passed to the fetch call in the method above:

// in services/WebexNotifications.js
// inside the WebexNotifications class
_buildRequest(event, message) {
    return new Request("https://webexapis.com/v1/messages/", {
        headers: this._setHeaders(),
        method: "POST",
        body: JSON.stringify({
            roomId: event.data.roomId,
            markdown: message
        })
    });
}
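
The _setHeaders helper came from the previous post; if you're following along without that code, here's a minimal sketch of what it needs to return, assuming botToken is set in the constructor as it is in MessageIngestion:

// minimal sketch of the helper built in the previous post
_setHeaders() {
    return {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${this.botToken}`
    };
}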

Tying Everything Together in the Route Handler

In previous posts, we used simple route handler logic in routes/index.js that first logged out the event data and then went on to respond to a Webex user depending on their access.  We'll now take a different approach, which is to wire in our services.  We'll start by pulling in the services we've created so far, keeping in mind that this will all take place after the auth/authz middleware checks are run.  Here is the full code of the refactored route handler, with changes taking place in the import statements, initializations, and handler logic.

// revised routes/index.js
import express from 'express';
import {AppConfig} from '../config/AppConfig.js';
import {WebexNotifications} from '../services/WebexNotifications.js';
// ADD OUR NEW SERVICES AND TYPES
import {MessageIngestion} from "../services/MessageIngestion.js";
import {KubernetesService} from '../services/Kubernetes.js';
import {Notification} from "../dto/Notification.js";

const router = express.Router();
const config = new AppConfig();
const webex = new WebexNotifications(config);
// INSTANTIATE THE NEW SERVICES
const ingestion = new MessageIngestion(config);
const k8s = new KubernetesService(config);

// Our refactored route handler
router.post('/', async function(req, res) {
  const event = req.body;
  try {
    // message ingestion and analysis
    const command = await ingestion.determineCommand(event);
    // take action based on the command
    const notification = await k8s.takeAction(command);
    // respond to the user
    const wbxOutput = await webex.sendNotification(event, notification);
    res.statusCode = 200;
    res.send(wbxOutput);
  } catch (e) {
    // respond to the user
    await webex.sendNotification(event, new Notification({success: false, message: e}));
    res.statusCode = 500;
    res.end('Something went terribly wrong!');
  }
});
export default router;

Testing It Out!

If your service is publicly accessible, or if it's running locally and your tunnel is exposing it to the internet, go ahead and send a message to your Bot to try it out.  Remember that our test Deployment was called nginx-deployment, and we started with two instances.  Let's scale to three:

Successful scale to 3 instances

That takes care of the happy path.  Now let's see what happens if our command fails validation:

Failing validation

Success!  From here, the possibilities are endless.  Feel free to share all of your experiences leveraging ChatOps for managing your Kubernetes deployments in the comments section below.

Follow Cisco Learning & Certifications

Twitter, Facebook, LinkedIn, and Instagram.
