ByteKraft.no/Automatika is a lightweight automation platform that integrates seamlessly with your existing infrastructure to handle the remediation and diagnostics associated with alerts (requests) generated by your monitoring or ticketing systems. It has a modular design: each module performs a single task (such as deduplication or automation matching) and can be modified to accommodate more complex logic or improved performance when needed. Decisions during event processing are made by a Business Rules Management System (BRMS), Drools, and the platform ships with default configurations that cover the majority of common use cases. Automations are written in TypeScript (or JavaScript).
A 16GB machine can accommodate the entire platform with all dependencies, as well as each module running with a minimum of 2 instances. This setup is capable of executing approximately 1000 automations per minute.
The platform can be deployed in either the cloud or on-premise, using Kubernetes or as stand-alone applications (services). It's even possible to deploy a combination of cloud and on-premise services; for example, automations may need to be executed on-premise for security reasons, while the rest of the platform is deployed in the cloud. The most straightforward way to deploy the platform and all its dependencies is using Helm charts, and with all the necessary components in place, it shouldn't take more than 15 minutes to deploy the platform with its default settings.
Developers can write automations in TypeScript or JavaScript using the Integrated Development Environment (IDE) of their choice; any IDE with good TypeScript/JavaScript support will work, with IntelliJ IDEA and Visual Studio Code being popular options. A few code conventions should be followed when writing automations.
The platform binds JavaScript functions to native code that performs automation commands. For instance, when command(`ls -lat /home`) is called from the automation code, the platform accesses the end device, runs the command there, and returns the listing of the "/home" folder.
Example of an automation that restarts a service on an Ubuntu machine ("RestartService.ts"):
import {command, note, wait} from "../automatika/core";
import {Automation, Priority} from "../automatika/automation";

export class RestartService extends Automation {
    constructor() {
        let tags = ['service', 'os:ubuntu', 'status:down'];
        let priority = Priority.MEDIUM;
        let delay = 0;
        super(tags, priority, delay);
    }

    run(task): number {
        note(`Executing RestartService`);
        let service = task.tags['service'];
        note(`Checking service ${service}`);
        let [status, error] = command(`
            #!/bin/bash
            stat=$(systemctl is-active --quiet ${service} && echo -n 'running');
            echo -n "$stat";
        `);
        if (error) {
            // if we cannot check the service, fail the execution
            note(`Automation failed with error: ${error}`);
            return 1;
        }
        if (status === 'running') {
            note(`${service} is already active`);
        }
        else {
            command(`
                #!/bin/bash
                sudo systemctl start ${service}
            `);
            // wait 5 seconds and check again
            wait(5);
            [status, error] = command(`
                #!/bin/bash
                stat=$(systemctl is-active --quiet ${service} && echo -n 'running');
                echo -n "$stat";
            `);
            if (error) {
                // if we cannot check the service, fail the execution
                note(`Automation failed with error: ${error}`);
                return 1;
            }
            else if (status !== 'running') {
                note(`${service} is still down`);
                return 1;
            }
        }
        return 0;
    }
}
Automation project structure

Each module has pre-defined settings for optimal performance; however, depending on the desired outcome, these default settings may need to be adjusted. This can include modifying templates in the Interface module, or altering correlation and de-duplication rules.
In the "Configuration project", each module has its own folder containing a Dockerfile and related settings. As part of the build process, these settings are applied to the base images, and the module is prepared for release.
Configuration project structure

Our platform offers GitHub Actions workflow files to package and distribute automations and module configurations. While Git and GitHub are available options, developers and architects are free to select any source control and hosting provider that works for them.
Workers are the modules that execute automations in a production environment, while the Runner (the CLI version of the Worker) executes them locally in an IDE or terminal. The platform uses Docker to package, compile, and execute automations locally. The compiled automations are minified automation scripts bundled with all of their dependencies.
Interface serves as a gateway for incoming events and requests, exposing a REST API and a message queue listener (RabbitMQ). Incoming events are in JSON format and consist of any number of fields; using a template engine and mapping files, Interface translates these events into a task with the fields and values expected by the platform.
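As a sketch of the translation idea only (the Task shape, field names, and fieldMap below are assumptions for illustration; the real Interface uses a template engine and mapping files):

```typescript
// Illustrative sketch of Interface's mapping step; not the platform's
// actual template-engine or mapping-file format.
type Task = { device: string; summary: string; tags: Record<string, string> };

// Hypothetical mapping from monitoring-event field names to task fields.
const fieldMap: Record<string, string> = {
    host: "device",
    alert_text: "summary",
};

function translateEvent(event: Record<string, string>): Task {
    const task: Task = { device: "", summary: "", tags: {} };
    for (const [from, value] of Object.entries(event)) {
        const to = fieldMap[from];
        if (to === "device") task.device = value;
        else if (to === "summary") task.summary = value;
        else task.tags[from] = value; // unmapped fields become tags
    }
    return task;
}
```

The same translation happens declaratively in the real module, so new event sources only require a new mapping file, not code changes.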
Classifier is an optional module that assigns tags to incoming tasks. This allows the platform to automatically match tasks with the right automations and route events to the right workers for execution. Its rule engine, Drools, extracts and calculates tags from the task fields, which would otherwise be difficult to do in Interface using the template engine. The rules are written in a Java-like syntax.
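To illustrate what a classification rule computes (the actual rules are Drools, and the field and tag names below are assumed for the example), here is the equivalent logic in TypeScript:

```typescript
// Sketch of tag extraction as a Classifier rule might perform it;
// field names (osName, state) and tags are illustrative assumptions.
type Task = { fields: Record<string, string>; tags: string[] };

function classify(task: Task): Task {
    const os = task.fields["osName"] ?? "";
    // e.g. a rule tagging Ubuntu hosts so they match Ubuntu-specific automations
    if (os.toLowerCase().includes("ubuntu")) task.tags.push("os:ubuntu");
    if (task.fields["state"] === "DOWN") task.tags.push("status:down");
    return task;
}
```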
Deduplication is a module designed to filter out duplicate tasks and prevent unnecessary execution of requests. It leverages a rule engine, Drools, to compare incoming tasks to those stored in memory, allowing for efficient and accurate identification of duplicates.
If the task's 'uniqueIdentifier' property matches that of an existing task, the default deduplication rule will reject the task and set the rejection reason.
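The default rule's logic can be sketched in TypeScript as follows (the real implementation is a Drools rule working against tasks stored in memory; the property names here mirror the description above, the rejection text is an assumption):

```typescript
// Sketch of the default deduplication rule; not the Drools implementation.
type Task = { uniqueIdentifier: string; rejectionReason?: string };

function deduplicate(incoming: Task, existing: Task[]): Task {
    // reject when an in-memory task already carries the same identifier
    if (existing.some(t => t.uniqueIdentifier === incoming.uniqueIdentifier)) {
        incoming.rejectionReason = "duplicate task";
    }
    return incoming;
}
```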
Matcher is a module that uses a rule engine, Drools, to evaluate incoming tasks against available automations. It compares task and automation properties to find the match best suited to the given criteria.
The matching rule will evaluate the tags associated with a task and the available automations, and select the automation with the most compatible tags. If no automation is found to match the task, a rejection rule will be applied.
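A TypeScript sketch of this tag-overlap selection (the actual Matcher is Drools-based; the simple overlap count below is an assumed, simplified stand-in for its scoring):

```typescript
// Sketch of selecting the automation with the most compatible tags.
type AutomationRef = { name: string; tags: string[] };

function matchAutomation(taskTags: string[], automations: AutomationRef[]): AutomationRef | null {
    let best: AutomationRef | null = null;
    let bestScore = 0;
    for (const a of automations) {
        // count how many of the automation's tags the task also carries
        const score = a.tags.filter(t => taskTags.includes(t)).length;
        if (score > bestScore) {
            best = a;
            bestScore = score;
        }
    }
    return best; // null means no match, so the rejection rule applies
}
```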
The Aggregator module acts as a central hub for task collection, automation execution updates, event scheduling, prioritization, and correlation. It coordinates these activities and sends updates to outgoing integration modules.
The Loader module facilitates the synchronization of compiled automations into Hazelcast's distributed memory, and regularly refreshes the memory to prevent potential data corruption.
Correlation is an optional module that leverages the power of Esper and Drools to identify similar tasks and select the master task. By default, tasks are correlated based on the device (hostname) and the oldest task is selected as the master, while any other correlated tasks are rejected.
Scheduler is an optional module that enables automation execution to be delayed for a later time. Using a rule engine, it calculates the delay for the task specified by first looking for the delay set in the task, then in the automation, and finally for the module-level configured delay, which is 0 (no delay) by default.
Adjusting the delay is essential for achieving accurate correlation. The longer the delay, the more opportunity the platform has to accurately correlate the data.
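The documented precedence (task delay, then automation delay, then the module default of 0) boils down to a simple fallback chain; a minimal sketch, with the function name assumed:

```typescript
// Sketch of the Scheduler's delay resolution order described above.
const MODULE_DEFAULT_DELAY = 0; // module-level default: no delay

function resolveDelay(taskDelay?: number, automationDelay?: number): number {
    // task-level delay wins, then automation-level, then the module default
    return taskDelay ?? automationDelay ?? MODULE_DEFAULT_DELAY;
}
```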
Cron Scheduler allows users to set up tasks to run automatically at a predetermined date and time. The list of tasks, along with the associated cron expressions, must be entered into the configuration file.
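The configuration format shown below is purely illustrative (the field names are assumptions, not the platform's actual schema); it conveys the idea of pairing each scheduled task with a cron expression:

```json
{
  "cronTasks": [
    { "task": "NightlyLogCleanup", "cron": "0 3 * * *" },
    { "task": "WeeklyCertCheck",   "cron": "0 6 * * 1" }
  ]
}
```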
Prioritization is an optional module that uses Drools rule engine to calculate the priority of a task. Tasks with a higher priority are given precedence over those with lower priority during further processing. The default implementation uses the priority set on a task and automation to determine the task's priority.
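As an illustration only (the actual combination rule lives in Drools and may differ; the precedence chosen here is an assumption), one plausible reading of the default is:

```typescript
// Sketch: derive a task's effective priority from the priorities set on the
// task and on its matched automation. The exact rule is configurable.
enum Priority { LOW = 0, MEDIUM = 1, HIGH = 2 }

function effectivePriority(taskPriority?: Priority, automationPriority?: Priority): Priority {
    // assumed precedence: an explicit task priority wins, then the automation's
    return taskPriority ?? automationPriority ?? Priority.MEDIUM;
}
```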
The Router module uses a rule engine, Drools, to calculate the correct routing key and automate the forwarding of tasks to the correct environment (the booker and workers designated to execute automations for a certain group of end devices). The routing key is a RabbitMQ feature that allows message listeners to subscribe to an exchange and receive messages with the specified routing key. By default, the task's environment tag is used to determine the routing key.
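The default rule reduces to reading one tag; sketched in TypeScript (the tag name and fallback value are assumptions):

```typescript
// Sketch of the default routing decision: the task's environment tag
// becomes the RabbitMQ routing key. Names are illustrative.
function routingKey(tags: Record<string, string>, fallback = "default"): string {
    return tags["environment"] ?? fallback;
}
```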
Booker is a module responsible for distributing automation tasks to a pool of available workers. It offers a TCP socket for workers to connect to upon startup, so that they can register themselves as potential executors. Once a task arrives with an automation attached, Booker selects a worker from its pool and sends the automation to it for execution. Booker also collects updates on the automation's progress from the workers, and forwards them to the aggregator module.
Currently, one worker is selected at random from the pool, but we aim to add more sophisticated strategies, such as selecting the worker with the lowest workload.
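The planned least-loaded strategy could look like this sketch (the Worker fields are assumptions; the current implementation simply picks at random):

```typescript
// Sketch of selecting the registered worker with the fewest running
// automations; not the current random implementation.
type WorkerRef = { id: string; runningAutomations: number };

function selectWorker(pool: WorkerRef[]): WorkerRef | null {
    if (pool.length === 0) return null;
    return pool.reduce((a, b) => (b.runningAutomations < a.runningAutomations ? b : a));
}
```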
The Worker module is the main component of the platform, written in Go and compiled to native code for greater performance. It embeds V8, a high-performance JavaScript engine, to execute automations in separate threads, with no limit on how many can run in parallel on a single worker. At start-up, the worker connects to a configured booker and registers itself as an automation executor. A group of workers, together with the booker they register with, forms a unit that executes automations for a certain group of end devices (e.g., prod devices, Unix devices, devices accessible from certain locations in the infrastructure). To acquire login credentials, a command-line tool (packaged in the same Docker image if the worker runs in Kubernetes) is used. Delivering the command-line tool is part of the integration work, with an out-of-the-box option to read credentials from environment variables or use password-less login. Alternatively, credentials can be delivered in the event integration, though this may not be considered secure depending on the platform installation.
Worker is equipped with support for a variety of common actions, such as executing shell commands and making HTTP requests. However, users may need to create and execute their own task-specific actions, which can be done by writing extensions in Go and loading them when the Worker starts. This makes custom actions available within the automation JavaScript context.
For example, a user can enable automated SMS sending from their platform by creating an extension in their worker-extension project. This extension can then be utilized within their automations as follows:
let success = sendSms('+4712345678', 'Test SMS from Automatika extension');
If the event/request integration does not provide enough CI data, the CMDB Proxy module can be used to acquire additional data. This module is optional, and the platform ships with a dummy implementation that users/clients can customize for their third-party CMDB. The module is written in Go and exposes a gRPC endpoint for the platform to access. Users/clients can also implement the module in any other language that supports gRPC.
Goran Dzinovic