The documentation uses a sample API key. Your own API key is displayed on the settings page.
Sending data replaces the concept of a metric composed of numbers with a report composed of report instances. A single report instance usually contains multiple numbers and strings and can be directly created from several input formats, like JSON, SQL results, command output, single numbers and strings.

The input data of a report instance should be sent as POST data to the API URL ending with /<report-name>. An API key can be specified as the key query parameter or as the HTTP username. For example, sending data about disk space usage, coming from the df command, can be accomplished in the following way using a Unix shell:

df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
{
  "result": {
    "tags": [],
    "rows": [
      ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"],
      ...
    ],
    "header": [0],
    ...
  }
}
The output of the df command is used by the curl command as POST data. The API URL includes diskfree as the chosen report name.

The API call returned a JSON document describing the created report instance. The rows key holds a tabular representation of the report instance. The representation is used for creating dashboard tiles which display data from selected table cells. A Javascript alarm definition has access to both the tabular representation and the original input.

After making the call, a dashboard tile showing data from the diskfree report can be created by clicking Add Report in the DASHBOARD view. The REPORTS view will also show the report. Clicking the Edit Alarm button will open an alarm editor for the report.

Sending data periodically

In most cases a report instance should be sent periodically. It's what makes most dashboard tiles work (data is displayed from a time range of report instances) and how Javascript alarm checks are run (they are defined for a report, and executed for each report instance). The usual way to achieve this is using the Unix cron daemon. For example, to check disk space every 15 minutes, the following line could be added to the /etc/crontab file:

*/15 * * * *     df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

If you want to send reports from multiple server instances, you need to manage multiple crontab files. The service leaves this part to tools tailored to your deployment strategy (it could be a configuration management system like Chef, Puppet, Salt or Ansible, a script building a Docker container, or a dedicated crontab manager like minicron). Of course, you can also use any alternative to cron.

Furthermore, the service doesn't depend on the regularity of report instance submission, making possible uses that don't involve a periodic run, like:

  • sending a report instance when requested by a user from a UI
  • sending an error report when an error happens
  • sending statistics once per 1000 HTTP requests
  • sending a report when a git commit is pushed (using hooks)
Input format considerations

If you are constructing a report programmatically, the most convenient format is usually JSON. If your report has a header, you can represent each report row as a JSON object mapping column names to column values, and the whole report as an array of these objects, for example:

[
    {"country": "us", "users": 120, "newIds": [345, 349, 350]},
    {"country": "uk", "users": 34, "newIds": [362]},
    {"country": "de", "users": 27, "newIds": []}
]

If a header is not needed, table rows can be represented directly:

[
    ["us", 120, [345, 349, 350]],
    ["uk", 34, [362]],
    ["de", 27, []]
]

An array of objects and an array of arrays can be thought of as normalized JSON representations of a report instance. However, the service will parse any JSON document into a tabular representation by applying flattening and unnesting algorithms. For example, a single object {"country": "us", "users": 120} will be parsed into a one-row table (with a header), and a nested object {"us": {"users": 120}} will be represented as a table with the us.users column. An object with a nested array:

    {"checks": [ ... ]}

will be converted to a table:

For simple cases, constructing a JSON document can be skipped and a free-form string can be sent directly:

us 120
uk 34
de 27

The service uses fairly sophisticated algorithms for parsing free-form inputs, which auto-detect a field delimiter and a header.
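As an illustration only (the real parser is more sophisticated and also handles headers and other delimiters), a whitespace-delimited input like the one above could be split into table rows roughly like this:

```javascript
// Illustrative sketch, not the service's actual parsing algorithm:
// split a free-form text into rows, using whitespace as the
// (here hard-coded, normally auto-detected) field delimiter.
function parseFreeForm(text) {
    return text.trim().split('\n').map(function(line) {
        return line.trim().split(/\s+/);
    });
}

var rows = parseFreeForm('us 120\nuk 34\nde 27');
// rows is [['us', '120'], ['uk', '34'], ['de', '27']]
```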

Since there's support for multiple formats that are not strictly defined, a question about the correctness of the parsing arises. How can we be sure that charts and alarms won't stop working? The following remarks can be made:

  • the safest format is JSON. When the "schema" of a document (types, object nesting) and the object keys defining column names won't change, we can be sure the parsing will work correctly. It's safe to add new column names and reorder existing columns.
  • ASCII tables (e.g. output of SQL utilities) are also a safe format, assuming that the names of existing columns are preserved
  • for free-form inputs, the safety can be increased by specifying the format and/or explicitly passing the delimiter value
  • the worst that can happen is that a chart will not be updated (which you will probably notice) or an alarm execution will fail (which results in an email notification). If you can't afford that, the data should be submitted as JSON.
Sending single values
Single numbers or strings can be used as an input and will be parsed into a single-cell table. This can be viewed as mapping the "metric" concept used in traditional monitoring systems to the tabular model the service uses (a metric name becomes a report name, and a metric value becomes the cell's content).

There is some additional support for boolean values often used in health checks. If the input is one of the strings: true, ok, success, yes for the true value or false, fail, failure, no, not, notok for the false value, then dashboard tiles displaying text will use green and red colors for displaying them and tiles displaying graphs will interpret the values as 0 and 1.
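As a sketch of this interpretation (the exact mapping of true to 1 and false to 0 for graph tiles is an assumption here, and healthValue is a hypothetical name), the listed strings could be normalized like this:

```javascript
// The boolean-like strings recognized in health checks, mapped to the
// numeric values graph tiles could use (assumed here: true -> 1, false -> 0).
var TRUE_WORDS = ['true', 'ok', 'success', 'yes'];
var FALSE_WORDS = ['false', 'fail', 'failure', 'no', 'not', 'notok'];

function healthValue(s) {
    s = s.toLowerCase();
    if (TRUE_WORDS.indexOf(s) !== -1) return 1;
    if (FALSE_WORDS.indexOf(s) !== -1) return 0;
    return null; // not a recognized boolean string
}
```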

Sending blocks of text
If you need to display a larger block of text, without parsing it into individual words, you can specify the format query parameter as single. The resulting report instance will have only one cell containing the sent text. A dashboard tile can display either the newest text or a range of texts sent in a specified time period.

To enhance the formatting of blocks of text, the following format values can be used:

  • markdown — the text will be parsed using Markdown syntax. The option allows customizing font colors and styles and allows including clickable links
  • singletable — the text will be displayed as a sortable table

For example, the following command sends a Markdown-formatted description of running a build script:

Build status: **OK**. [Latest artifacts](

cat | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ''
A dashboard tile displaying a range of markdown-formatted report instances.
Creating dashboard tiles
Clicking Add Report in the DASHBOARD view shows a dialog for adding a dashboard tile. After selecting a report name, the most recent report instance is displayed as a table. The data to visualize is defined by clicking table cells containing values that should be shown. For example, for the report instance:
it's sufficient to click the numbers 54 and 37. The labels /usr and /var will be automatically associated with the values.

Your selection serves as a template for selecting data from other report instances. In many cases, this is when the configuration of the tile's data source ends — clicking Add will draw the tile containing values selected from a range of report instances, or a newest instance if that was chosen. Initially, the visualization type — chart or text — is selected automatically. You can configure it by clicking the icon in the top-left corner of the tile (the cursor must be placed on the tile to make the icon visible).

Tweaking data definitions

If the auto-computed definitions selecting data need tweaking, clicking "Show definitions" will make them visible. Each definition has the form:

select column data-column where column filtering-column equals|contains filtering-value using name name
The data-column is the column containing the selected value. The filtering-column is the column containing labels and the filtering-value defines which label from the column identifies the wanted row. The name is displayed as a chart series name.
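A hypothetical helper illustrating how such a definition picks a value out of a report instance's rows (this is not the service's implementation, and only the equals mode is shown):

```javascript
// Illustrative only: evaluate "select column <dataColumn> where column
// <filteringColumn> equals <filteringValue>" against a table of rows.
function selectValue(rows, dataColumn, filteringColumn, filteringValue) {
    for (var i = 0; i < rows.length; i++) {
        if (rows[i][filteringColumn] === filteringValue) {
            return rows[i][dataColumn];
        }
    }
    return undefined; // no row matched the filtering value
}
```

For example, with rows [['/usr', '54'], ['/var', '37']], selecting data column 1 where column 0 equals '/var' yields '37'.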

The filtering-column can be set to a virtual column containing row numbers. The column is used when a report instance doesn't contain textual labels that could be used for the clicked values. It's also useful when a textual value shouldn't be treated as a label. For example, for the report instance:

clicking the value 12 will auto-create a definition that uses the virtual column to select the second row of the table — regardless of the value present in the metric column. For the example report instance it's probably not a proper configuration, because all submitted metric values will be captured under a single data definition. To fix the configuration, the filtering-column of the data definition should be changed to the metric column.

To automatically create definitions for each value present in the metric column, the "create new definitions ..." option should be checked (see below).

Creating new definitions automatically
Sometimes all rows of the current and future report instances should be visualized. If report instances contain the top three countries ordered by the number of users, the selection of countries can be different for each day. If a definition for a new country is not created explicitly, its data will normally not be visualized. The service allows creating tiles that automatically include data for all rows. When all values from a single column are selected, the column becomes highlighted and the option, shown after the definitions, becomes activated:

create new definitions for column data-column by column filtering-column
When a new value in filtering-column is present in a report instance, a new definition selecting data-column will be automatically created.
Using tags
Tags are custom labels attached to report instances, for example:
  • the tag ip:<ip-address> identifies a server / container by an IP address
  • the tags microservice:search hostname:inst01 pid:2145 identify a microservice instance
  • the tag table:users specifies a database table
  • the tag important can be used to mark report instances which should be shown on a dashboard

Tags allow using one report to capture data coming from multiple servers and other entities. Without tags, an entity's ID, like an IP address or a process PID, would need to be included in a report name, resulting in a large number of similarly named reports. Additionally, using tags allows sharing a single alarm definition.

Tag names employ a convention of using the : character to separate a property name from a property value. For example, the tag ip:<ip-address> defines the property ip with the given IP address as its value. The convention is not enforced and is required only for auto-creating tiles (described later).

Tags attached to a report instance should be passed as the tags query parameter, with the , (comma) character used as a separator. For example, to indicate that a disk space report comes from a server placed in a specific data center, we could append a datacenter:<name> tag to the tags parameter of the API URL.
Computing tag values in practice
When report instances are sent from your own code, you have full control over specifying tag values. IP addresses, PID values, container IDs can be usually retrieved using library calls. For example, the following Node.js code sends a status report with a PID value in a tag:

var restler = require('restler');

var tags = ['microservice:search', 'pid:' + process.pid];
restler.postJson('', {
        status: 'ok',
        memory: process.memoryUsage()
    }, { query: {
        tags: tags.join(','),
        key: '9fxvMi8aR3CZ5BsNj0rt0odW'
    }});

When cron is used for sending reports, commands are run through a Unix shell, allowing computing tag values dynamically and sharing a single crontab definition among multiple servers, for example:

*/30 * * * *    df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-$(hostname)
The tags=host:$(hostname) part will be expanded by the shell when cron runs the command: $(hostname) will be replaced with the output of the hostname command.
Automatically creating ip tag
If the autotags query parameter is set to ip, a tag ip:<ip-address> will be attached to the report instance, where <ip-address> is the public IP address of the calling host. The API URL from the previous example could be replaced with to group the report instances by IP addresses instead of host names.
Specifying tags for a dashboard tile
Choosing tags for a tile

If report instances with tags attached exist for the selected report, you will be able to select tags that will be used as a filter — only instances having the tags will be used as the tile's input. This allows creating tiles displaying data coming from specific sources, like a selected IP address, or marked with tags like important.
Automatically creating dashboard tiles for similar tags
The main feature of tags is the ability to automatically create dashboard tiles for tags similar to the tags selected for a given tile. The "similarity" means sharing the same prefix — the part until the : character, and differing in the remaining part.

For example, if the tag ip:<address-1> was selected during tile creation, a new tile for a different tag ip:<address-2> could be automatically created. The tags share the ip: prefix, while the remaining parts, containing IP addresses, differ.
Sample auto-created dashboard

By default, a tile for which tags were specified during its creation is already set up as a template for creating tiles for similar tags. The tile will have a bolder border and the option "Create dashboard tiles for similar tags" will be checked.

Autocreating new tiles

The auto-creation feature can be tested by submitting a report instance with tags different from the tags associated with the template tile. For example, if the template tile was assigned an ip:<address> tag, submitting a report instance with a different ip:<address> tag will auto-create a new tile.

The auto-created tiles are synchronized with the template tile. For example, if you change the title, the definitions selecting data or the size of the template tile, the auto-created tiles will be updated accordingly.

Customizing the auto-creation mechanism
If multiple tags are being attached to report instances, you might want to customize how the auto-creation mechanism works. For example, if the two tags are present — the first containing an IP address and the second a data center name — you might want to create a tile for each IP address belonging to a specific data center only. The default configuration will auto-create a tile for each IP address belonging to any data center.

The option "Create dashboard tiles for similar tags" in the tile's settings dialog allows choosing how a tag value should be treated when deciding if a new tile should be created:

  • <property-name>:* — the default option — a tag having the same <property-name> prefix, but distinct property value (the * part) will cause the creation of a new tile. For example, choosing ip:* will cause the creation of a tile for each IP address.
  • <property-name>:<property-value> — a report instance must have the exact tag in order to be considered for the creation of a new tile. This is useful when more than one tag is being attached to report instances. If tags ip:<address> and datacenter:<name> are attached to each report instance, then choosing datacenter:dc-west (the exact match) and ip:* (the property name match) will cause the creation of a tile for each distinct IP address, but only if the specific tag datacenter:dc-west is present.
  • * — the tag is treated as not conforming to the convention of having a property name separated from a property value by the : character. Any different tag value will cause the creation of a new tile. This is useful for tags like important — attaching any different tag, like critical, will cause the creation of a new tile.
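The three matching modes can be sketched as a small predicate (illustrative only; matchesTemplate is a hypothetical name, not part of the service):

```javascript
// Illustrative sketch of the three matching modes described above:
//   '*'                  -> any tag value matches
//   '<property>:*'       -> same property name, any property value
//   '<property>:<value>' -> exact tag match
function matchesTemplate(pattern, tag) {
    if (pattern === '*') return true;
    if (pattern.endsWith(':*')) {
        var prefix = pattern.slice(0, -1); // e.g. 'ip:*' -> 'ip:'
        return tag.indexOf(prefix) === 0;
    }
    return tag === pattern;
}
```

For example, matchesTemplate('ip:*', 'ip:10.0.0.1') matches, while matchesTemplate('datacenter:dc-west', 'datacenter:dc-east') does not.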
Multiple template tiles on a single dashboard
If you need to configure multiple template tiles, you can either create a new dashboard for each template tile, or place them on the same dashboard. In the latter case, the tiles created from the same template tile will be grouped together and will be sorted by tag values.

You can also mix template tiles with regular tiles on a single dashboard. The automatically created tiles will alter the layout of the dashboard, but the content and the sizes of the regular tiles will be preserved.

Dashboard settings
Dashboard settings, opened by clicking the Dashboard Settings button, contain options useful for managing auto-created tiles.

Dashboard settings

The tiles can be automatically deleted by enabling the option "Delete tiles having no new data". The setting is useful when tags are used to identify ephemeral services, like dynamically created and deleted server instances.

The options "Synchronize X/Y axes of auto-created tiles" tell whether the lengths and the labels of the X/Y axes should be shared among the auto-created tiles. Enabling the settings allows quickly spotting a tile with abnormal values. On the other hand, if the ranges of values differ by a large margin, the charts can be hard to read.

Using tags for non-system reports
Most of the examples of using tags involved IP addresses, container IDs or PIDs. However, the tags mechanism can be used for creating dashboards displaying data obtained from higher-level sources, like databases and logs, for example:
  • when extracting processing times from web server logs, the data could be grouped by URLs and average times for each URL could be shown in separate dashboard tiles (by using a URL as a tag value). The Heroku add-on automatically creates such a dashboard.
  • for each blog post, a dashboard tile could be created showing the number of views and comments (this would require executing a database query for each blog post and setting a tag value to a post's title/id).
  • stock market prices can be presented with a dashboard tile created for each stock symbol.
Defining alarms
An alarm definition is a Javascript code that is associated with a report and run for each received report instance. The instance's data is available to the code as global variables.

The alarm definition editor can be opened by clicking the Edit Alarm button in the REPORTS view.

Alarm definition editor

The upper window shows report instance data as global variables that can be accessed from the alarm definition displayed below. Clicking the Dry Run button (or pressing ⌘+RETURN / CTRL+ENTER) simulates an actual alarm run — the alarm definition is executed against the report instance shown in the upper window with alarm events printed in the "Output" window and not actually triggered. The calendar icon and the "Previous" and the "Next" buttons allow choosing a report instance for the dry run.

Clicking the Save button (or pressing ⌘+S / CTRL+S) associates the alarm definition with the report. The code will be executed in background for each received report instance.

Triggering alarms with alarm() calls
An alarm event is triggered with the alarm(alarmKey) call, where alarmKey is a string identifying an alarm condition. Multiple alarm calls with the same alarmKey will not trigger multiple notifications and will be grouped under a single active alarm.

The active alarms can be viewed in the REPORTS view by clicking the button Active Alarms under the Edit Alarm button. An active alarm can be resolved when the reason it was triggered is no longer present and it's not expected to be triggered again. If an alarm with the same alarmKey is triggered again for the same report, a new active alarm is created and the configured notifications are sent once again.

Assuming the column with index 4 of the diskfree report contains the disk usage percentage, the following alarm definition triggers an alarm if the used space exceeds 90% for the first partition:

if (parseFloat(rows[1][4]) > 90) {
    alarm('High disk usage for /');
}
(the parseFloat call is needed to convert a string like "85%" to the number 85).
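The conversion can be checked quickly:

```javascript
// parseFloat reads the leading numeric part of the string and ignores
// the trailing "%", so a df-style value becomes a plain number.
var usage = parseFloat('85%');
// usage === 85
```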

If the usage exceeds 90% for multiple subsequent alarm runs, only the first call will trigger alarm notifications, because the alarmKey argument has the constant value "High disk usage for /".
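The grouping behaviour can be sketched as follows (activeAlarms and notifications are hypothetical names used only for illustration, not the service's internals):

```javascript
// Illustrative sketch: alarm() calls sharing an alarmKey are collected
// under one active alarm, and only the first call sends a notification.
var activeAlarms = {};
var notifications = 0;

function alarm(alarmKey, details) {
    if (!activeAlarms[alarmKey]) {
        activeAlarms[alarmKey] = [];
        notifications++; // notify only when the active alarm is created
    }
    activeAlarms[alarmKey].push(details || null);
}

alarm('High disk usage for /');
alarm('High disk usage for /'); // grouped under the existing active alarm
```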

A single alarm definition can trigger multiple alarms and alarm keys can be constructed dynamically. A sample alarm definition checking the free disk space for each partition could look as follows:

rows.slice(1).forEach(function(row) {
    if (parseFloat(row[4]) > 90) {
        alarm('High disk usage on ' + row[0]);
    }
});
The code will trigger a new alarm event only for a partition for which an active alarm doesn't exist yet.
Attaching details to an alarm

It's sometimes useful to attach additional details to an alarm event, for example the current date. The creation date of a report instance is available as the created global variable, so we could write:

if (parseFloat(rows[1][4]) > 90) {
    alarm('High disk usage for / on ' + created);
}

This code will, however, cause the creation of a new active alarm (and alarm notifications) for each report instance it is run against (because the alarmKey will have a unique value for each run). This is rarely the desired behaviour. An alternative is to use the two-argument call alarm(alarmKey, details):

if (parseFloat(rows[1][4]) > 90) {
    alarm('High disk usage for /', 'This happened on ' + created);
}
The alarm event will be triggered only once (because the alarmKey has a constant value), and the details argument — a custom message — will be included in alarm notifications.

All invocations of the alarm() function grouped under a single active alarm can be reviewed by clicking the icon in the "Active Alarms" window. The shown data will contain all passed details arguments.

Alarms triggered in case of runtime errors
Alarms with the following alarm keys are automatically triggered in case of runtime errors:
  • Javascript error — triggered if an execution of an alarm definition failed because of a programming error. The details will contain an error message and a stack trace.
  • Failed to execute alarm — triggered if an execution of an alarm definition didn't complete in 5 seconds and was terminated. This could be caused by one of the following reasons:
    • the execution of the alarm used a lot of CPU or a lot of memory (at least 100MB of memory is available for each alarm execution). Note that the code must process very large amounts of data to exceed the limit - for example, an array of 100000 numbers takes only a few megabytes of memory and can be sorted in about 0.2 seconds.
    • a lot of time was spent on waiting for HTTP/API calls to finish. The get(), post(), put(), delete_() functions are executed synchronously, while the functions supporting integrations, like slack() or pagerduty(), are run asynchronously and don't consume the 5-second limit.
    • the system is not operating correctly and couldn't allocate resources for the alarm execution. Please check the operational status page to check if there are any known problems we are working on. You can also contact support to get help with identifying the issue.
If the total running time of alarms exceeds the plan's limit, the execution of alarms is halted for one minute. A team owner is notified about the event by email.

The average time consumed by alarm runs per minute is displayed on the SETTINGS page.

Using print() calls for debugging
Using print() function in alarm definitions

The print function can be used for displaying messages in the output window during a "dry run", to help debugging and investigating the Javascript code.

The print statements can be left in the code that is executed in the "real run" mode — the calls will have no effect.

Calling the API from alarm definitions
An alarm definition can access not only the data of the current report instance (through global variables), it can also make API calls. This feature allows accessing historic and other reports' data, submitting new report instances and using the storage API.

The get, post, put, delete_ functions are available for making HTTPS calls with the appropriate HTTP method. The first argument is a path component of the API URL (the host part should be omitted). For example, the following code fetches the last three report instances and checks if the disk usage exceeds 90% for all of them:

var exceeded = get('/reports/diskfree/instances', {order: 'desc', limit: 3}).json.result.every(function(instance) {
    return parseFloat(instance.rows[1][4]) > 90;
});
if (exceeded) {
    alarm('High disk usage for 3 consecutive checks');
}
The /reports/diskfree/instances is the API path for accessing historic report instances, and the object {order: 'desc', limit: 3} specifies the query parameters for fetching the last three instances. The get call returns the HTTP response as an object, with the json attribute holding the response content converted from JSON. The result attribute of the converted response contains the actual API call result — as described in the API usage guidelines. Finally, the every function checks if the condition holds for each report instance (it's a built-in method of Javascript arrays).

Since the call post('/reports/new_report', data) creates a new report instance belonging to a specified report, an alarm definition can be used for postprocessing report instances and/or combining data from different reports and submitting the result as a new report instance. The following example creates a Markdown-formatted report from the diskfree report:

var usage = parseFloat(rows[1][4]);
var s = 'Disk usage as of *'+created+'*: **'+usage+'%**';
if (usage > 80) {
    s += ' <font color=red>WARNING</font>';
}
post('/reports/diskfree_formatted', s, {'format': 'markdown'});

Some other ways the API can be used in alarm definitions:

  • accessing other report's data by accessing the URL /reports/<report-name>/instances (see description)
  • checking instances from a specific date range, for example the last 3 days (using the from and the to query parameters — English phrases are accepted as date specifications, so the from parameter could be specified as the string 3 days ago)
  • checking if an alarm is already active for a report
  • using the storage API for preserving state between alarm runs
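For example, the storage API could keep a failure counter between runs, so that an alarm fires only after several consecutive failures. In the sketch below, get() and put() — the functions available in alarm definitions — are stubbed with an in-memory object so the example is self-contained; the storage path and the threshold of 3 are arbitrary choices:

```javascript
// Stub the alarm-definition API functions with an in-memory store,
// so the sketch runs standalone (in a real alarm definition the
// built-in get()/put() would talk to the storage API instead).
var store = {};
function get(path) { return { json: { result: store[path] } }; }
function put(path, value) { store[path] = value; }

// Return true (i.e. trigger an alarm) only after 3 consecutive failures.
function checkWithCounter(usage) {
    var failures = get('/storage/diskfree-failures').json.result || 0;
    failures = usage > 90 ? failures + 1 : 0;   // reset on success
    put('/storage/diskfree-failures', failures);
    return failures >= 3;
}
```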

See the API reference for the documentation of the API endpoints and the Javascript alarms reference for the documentation of the Javascript functions available in alarm definitions.

Making external HTTP calls (webhooks)
The functions get, post, put, delete_ that are used to call the API, work also for making external HTTP/HTTPS calls. If the first argument is not a path, but a full URL starting with http:// or https://, then an HTTP call will be made to the host and the path pointed by the URL, for example:

if (parseFloat(rows[1][4]) > 90) {
    alarm("High disk usage for /");
    get("https://example.com/notify-disk-usage");
}

(the example.com URL is a placeholder for an external endpoint of your choice).
Note that while the example call of the alarm() function will trigger only one active alarm (the subsequent calls will be grouped under the alarm key High disk usage for /), the URL will be called every time the if statement succeeds.

If an URL should be called for newly triggered alarms only, the best solution is to set up a meta report that will receive new active alarms as report instances. The alarm for a meta report will be executed once for each active alarm triggered for any report. For example, if you would like to submit every active alarm to an external API, the alarm for the meta report moniqueio.active_alarm could look as follows:

post('https://example.com/active-alarms', undefined,
     {alarmKey: input.alarmKey, details: input.details});

(the example.com URL is a placeholder for the external API being called).

The undefined argument tells that the request has no data, and the third argument defines query parameters. The input variable holds the report instance automatically created for the meta report.

The alarm definition for a meta report can check a report name and an alarm key for which an event is triggered, allowing customizing the sent notifications, for example:

if (input.alarmKey.startsWith("Heartbeat check failed")) {
    slack("Check our cron jobs");
}
if (input.alarmKey.includes("CRITICAL")) {
    pagerduty();
}
PagerDuty, Slack integrations
If you need advanced incident management features (like on-call schedules or phone notifications), the simplest solution is to set up the PagerDuty integration. Alternatively, you can call an external API of your choice by setting up a meta report (as described above).

Slack can be also set up as a destination for alarm notifications. The slack() function can be used to send arbitrary messages from an alarm definition.

Triggering alarms with API calls
Sometimes you don't need to send a report instance (because the data doesn't need to be visualized on a dashboard or processed by Javascript code) and just want to trigger an alarm directly from your own code. This is possible with an API call. Note that the report for which the call is being made must exist before the call (it must be created by submitting a sample, non-empty report instance).
Heartbeat checks
Heartbeat checks assure that data is being received at specified intervals. The checks allow detecting problems like a dead server or a cron job that wasn't executed.

The checks associated with a report can be edited by clicking the heart icon in the REPORTS view.
Defining heartbeat checks

A single row in the dialog defines a check for report instances having the particular tags. If the tags are not specified, then any report instance of the report will match the row (the label ANY marks such rows).

The maximum age of a newest report instance specifies when the check should fail. If a newest report instance is older than the specified interval, an alarm with a name starting with Heartbeat check failed is triggered.

In the example above, the heartbeat check will fail if a report instance having any tags is not created every minute. The check will also fail if a report instance having the tag env:production is not received within 1 minute, 30 seconds. Each failure detected for a list of tags will trigger a different alarm. For example, if no report instance is created for 2 minutes, the alarms Heartbeat check failed and Heartbeat check failed for tags [env:production] will be triggered (the first for ANY tags, the second for the env:production tag).

Note that usually a margin should be added to the expected maximum age. For example, if a report instance is sent every 5 minutes, specifying the maximum age as 5 minutes may result in false positives — the actual interval between creation times of the instances may vary, because sending data can take a different amount of time for each run. It's safer to specify the maximum age as 6 or 7 minutes. The service ensures that heartbeat failures will be detected if the age of the newest report instance exceeds the defined maximum age by 1 minute or more. If the age is exceeded by a shorter amount of time, the failure might not be detected.

The heartbeat checks can be set programmatically using the API.

Annotations
Annotations are visual indicators associated with specific report instances. They can be used to record important events, for example:
  • deploying new version of code
  • restarting a service
  • changing system configuration
Additionally, the service automatically creates annotations for alarms.

In the example chart the red dots indicate alarms issued for the given report instances. Hovering over a dot will reveal the associated alarmKey.

The grey dot indicates an annotation submitted with the API call. The call allows specifying a custom message that will be displayed in a chart tooltip and associating the annotation only with the given tags.

All annotations are also visible in the dashboard tiles displaying the data as "text" or "text table", as well as in the report instance viewer (accessible by clicking the icon in the REPORTS view). They are rendered as icons, and the associated messages are revealed by hovering over them.

Storage API
A simple key-value storage API is available for storing arbitrary data. The API can be used for multiple purposes, for example to preserve state between alarm runs or to dynamically control the execution of alarms.

The URL of a stored item is /storage/<key>, where <key> is an arbitrary string identifying the item. The HTTP PUT method sets the item's value, and the GET method retrieves its content. The item's data can be UTF-8-encoded text or arbitrary binary data.

Example: controlling execution of alarms
The API could be used to store a flag indicating that system maintenance takes place. The flag can be set with the following call:

echo -n 1 | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @-

To retrieve the saved content, a GET request must be issued:

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:

An alarm definition can use the flag's value to alter the execution logic:

if (get('/storage/maintenance').json.result == 1) {
    print('maintenance period, not issuing alarm');
} else {
    alarm('Check failed');
}
Integrations
Integrations allow sending alarm notifications to third-party services, synchronizing the state of active alarms, and sending data (like custom messages) directly from a Javascript alarm definition. The special integration Meta Reports allows receiving internal events as reports.

Available integrations:

The default configuration enables receiving alarm notifications by email. Each team member can control whether the notifications are sent to the email address used when signing up. The option is available on the SETTINGS / PREFERENCES page:
The ability to receive alarm notifications by email can be disabled globally (for all team members) by a team owner. To access the option, navigate to the SETTINGS / INTEGRATIONS page, and click the "email" box. The option is called "Enable receiving alarm notifications by email".
A Slack channel can be configured as a destination for alarm notifications. Additionally, the slack(message) function can be used in alarms to directly send a Slack message.
To set up the Slack integration, you must log in as the team owner, navigate to the SETTINGS / INTEGRATIONS page, and click the "slack" box.
Add to Slack button
An "Add to Slack" button will appear; click it and you will be redirected to the Slack website. If you are not logged in to Slack, you will have to sign in and select a team. Otherwise, a configuration screen will appear. Messages will be posted to the channel selected as the "Post to" option. After clicking "Authorize" you will be redirected back, and you should receive a message confirming that the Slack integration is set up.
Slack configuration screen
Options controlling whether the Slack integration is enabled are available below the "Add to Slack" button. The option "Receive alarm notifications in Slack" controls whether a message is sent to Slack for each new active alarm. A button embedded in the notification enables marking the alarm as resolved directly from Slack.
Slack alarm notification

When the option "Receive Slack messages from slack(message) alarm calls" is checked, each Javascript call of the slack() function, like slack('We are in trouble'), will send a message to Slack. The message is composed of the call's argument and the name of the report associated with the alarm definition containing the call.
Slack message from Javascript call
The PagerDuty integration allows triggering a PagerDuty incident for each active alarm and synchronizing the state of the incidents. Additionally, the pagerduty() function can be used in Javascript alarms to trigger a PagerDuty incident directly. The function can be used as a replacement for the alarm() function, bypassing the built-in handling of alarm events and ceding it to PagerDuty.
To access the PagerDuty integration configuration, you must log in as the team owner, navigate to the SETTINGS / INTEGRATIONS page, and click the "pagerduty" box.
PagerDuty integration options
After clicking the "Alert with PagerDuty" button you will be redirected to the PagerDuty domain. You will need to log in to your PagerDuty account.
PagerDuty integration options
After clicking "Authorize Integration" (or "Sign In Using Your Identity Provider") you will see the configuration of the integration (in PagerDuty terms, a service).
PagerDuty integration options
After clicking "Finish Integration", the basic setup is finished. An active alarm will create a corresponding PagerDuty incident. However, to set up bi-directional synchronization of the incidents' state, you will need to configure a "webhook". The instructions are provided on the PagerDuty website. Use the following configuration:
  • service: ""
  • Extension Type: "Generic Webhook V1"
  • webhook URL:
The configuration of the webhook is the last step. The integration offers the following features:
  • creation of a PagerDuty incident for each active alarm
  • bi-directional synchronization of the incidents' state. Resolving an active alarm will resolve the corresponding incident in PagerDuty, and vice versa. Resolving an incident using a different integration, for example Slack, will also resolve it in PagerDuty.
  • navigating back from a PagerDuty incident through the embedded links "Edit Alarm", "View Active Alarms"
  • triggering PagerDuty incidents directly with the pagerduty() function, without issuing a new active alarm
  • enabling and disabling the integration by using the options "Create PagerDuty incidents from active alarms", "Create PagerDuty incidents from pagerduty() alarm calls" (see the first screenshot)
Heroku
A Heroku add-on is available; it automatically creates dynamically updated dashboards for dynos and URLs. Refer to the add-on documentation for details.
Meta Reports
Meta reports are reports sent automatically for specific internal events, like a newly triggered alarm. The reports provide a history of events, can be visualized on a dashboard, and allow defining custom actions in an alarm definition. For example, a given HTTP URL could be called for all triggered alarms, without a need to edit the alarm definitions of all reports.

To enable receiving meta reports, log in as a team owner, navigate to the SETTINGS / INTEGRATIONS page and click the "Meta Reports" box. The available reports are:

  • moniqueio.active_alarm — the report receives a report instance whenever a new active alarm is triggered
  • moniqueio.resolved_alarm — the report receives a report instance whenever an active alarm is resolved
Report instances of the meta reports are sent as JSON objects with the following keys:
  • reportName — the name of the report for which the event took place
  • alarmKey — the first argument of the alarm() function identifying an alarm event
  • details — the second argument of the alarm() function providing extra details. In case of the moniqueio.resolved_alarm report it contains details of the most recent invocation of the alarm() function.
  • triggered — a date in ISO format telling when the active alarm event was triggered
  • alarmsCountReport — the number of active alarms associated with the report for which the event is triggered
  • alarmsCountTotal — the total number of active alarms
  • resolvedBy — (available only for the moniqueio.resolved_alarm report) an email address of a user who resolved the active alarm. It can be null if the alarm was resolved through an integration, like Slack or PagerDuty.

A sample report instance of the moniqueio.resolved_alarm report looks as follows:

{
  "reportName": "diskfree",
  "alarmKey": "High disk usage for /",
  "details": "This happened on 2017-10-19T16:27:06.164523",
  "triggered": "2017-10-19T16:27:06.164523",
  "resolvedBy": "",
  "alarmsCountReport": 3,
  "alarmsCountTotal": 9
}

The attributes of the report instance can be accessed from an alarm definition through the input global variable. A sample alarm definition sending a custom Slack message could be defined in the following way:

if (input.alarmKey.startsWith("High disk usage")) {
    slack("We have problems with disk space. We should run");
}

A dashboard tile can show selected attributes of the report instances. For example, a tile could display a count of active alarms. Each instance of a meta report has the tag report:<report-name>, allowing auto-creating a tile for each report for which the alarm events take place.

What to monitor?
It's easy to monitor "custom metrics" — anything that an automated system is unable to collect, because it requires application-level knowledge. The "custom metrics" will usually come from database query results, health-check scripts, API responses, or custom scripts outputting JSON or text. But what exactly is worth monitoring in a typical web application? The following points give some hints and examples.

Some of the examples use the moniqueio command-line tool. While the tool is not required for regular usage (the API itself is sufficient for many use cases), it contains helpers for specific tasks like collecting CPU usage or monitoring cron jobs.

Database query results
Data gathered by executing a database query repeatedly is a valuable source for graphs and alarms. The actual queries worth monitoring depend heavily on what your application is doing. Taking an e-commerce website as an example, this could be:
  • counting the total number of registered accounts, grouped by country or state (an alarm could check if there were no new registrations in a day — indicating a possible problem)
  • counting the total number of products, grouped by availability status (with an alarm checking if the number of immediately available products exceeds a given threshold)
  • computing the average price of a product, grouped by category.
How to send query results? Since many free-form formats are parsed automatically, the command-line tools for executing queries, like psql or mysql, can be used directly for creating a report instance. A sample crontab line could look like this:

*/30 * * * *      psql -U postgres -h -c "SELECT category, avg(price) FROM product GROUP BY category" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

What if, instead of plain SQL, you want to use a framework abstracting database access (like Rails, Django or Hibernate), or postprocess SQL results using your own code? You will need to create a stand-alone executable (script) that can be called from cron. You also have to decide whether the HTTP call creating a report instance will be implemented in your code, or whether the script will output data as JSON or text and the output will be piped to a curl invocation. The front page shows examples of sending Rails/Django query results from Ruby/Python.
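As a sketch of the second approach (a script outputs JSON that is piped to curl), the snippet below runs a query through Python's sqlite3 module and prints the rows as a JSON document suitable as a report instance's input. The table and query are illustrative only:

```python
import json
import sqlite3

def query_to_report_input(conn, sql):
    """Execute a query and return its rows as a JSON string in the
    'array of objects' representation accepted by the json input format."""
    cur = conn.execute(sql)
    columns = [d[0] for d in cur.description]
    return json.dumps([dict(zip(columns, row)) for row in cur.fetchall()])

# Illustrative data standing in for a real product table:
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE product (category TEXT, price REAL)')
conn.executemany('INSERT INTO product VALUES (?, ?)',
                 [('books', 10.0), ('books', 20.0), ('toys', 5.0)])
print(query_to_report_input(
    conn,
    'SELECT category, AVG(price) AS avg_price FROM product'
    ' GROUP BY category ORDER BY category'))
```

The printed JSON can then be piped to a curl invocation creating a report instance, as in the crontab example above.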

API responses
Monitoring your own API service is important — whether it's for internal use by other software components or by external users, ensuring correctness and quality is crucial. It's also a good idea to set up monitoring of third-party APIs, since modern software is often tightly integrated with them and depends on their correct functioning.

When it's sufficient to monitor only the content of API responses, the curl command can be used for both fetching the response body and submitting it to the API:

 curl "" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ""

If you need to monitor not only the response content, but also meta-data like HTTP status codes, HTTP headers, or timing data, you can use the moniqueio curl command. The helper command wraps the real curl invocation and prints a report containing meta-data about the executed HTTP request. The invocation is as simple as the previous one:

 moniqueio curl "" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ""

If multiple API endpoints are monitored, tags can be used to automatically create dashboard tiles for each monitored endpoint (a tag would need to contain an endpoint name or an ID).

Health-check and unit test run results
Health checks perform a test on a software component and tell if the component is working properly, for example:
  • if a website's HTML content contains an expected text
  • if an HTTP API call returns with a 200 status code and has the expected content
  • if a microservice responds to a "ping" request and uses less memory than a given threshold

Health checks can be implemented as stand-alone scripts that send an ok / fail value as a report instance's input. A chart added to a dashboard will show ok values as 1 and fail values as 0.

A single report instance can also group results of multiple checks. For example, the following JSON document describes checks done for multiple URLs:

{
    "urls": [
        { "path": "/", "status": "OK", "elapsed": 0.53 },
        { "path": "/profile", "status": "FAIL", "elapsed": 1.2 }
    ]
}
In that case it's easy to create a chart that will display the status and the elapsed values for all included URLs using the auto-creation of definitions.
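A script producing such a document could be sketched as follows. The check function is a stub standing in for a real HTTP request, so only the shape of the output is shown:

```python
import json
import time

def run_checks(paths, check):
    """Run check(path) for each path, timing it, and return a JSON
    document grouping all results under the 'urls' key."""
    results = []
    for path in paths:
        start = time.monotonic()
        ok = check(path)
        elapsed = round(time.monotonic() - start, 2)
        results.append({'path': path,
                        'status': 'OK' if ok else 'FAIL',
                        'elapsed': elapsed})
    return json.dumps({'urls': results})

# A stub check standing in for a real HTTP request:
doc = json.loads(run_checks(['/', '/profile'], lambda path: path == '/'))
assert [u['status'] for u in doc['urls']] == ['OK', 'FAIL']
```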

An alternative to creating stand-alone scripts is writing health checks as unit tests (or you may already have unit tests that can serve as health checks). How to send unit test run results? The moniqueio command-line tool provides a helper command unittest_summarize that parses the output of popular unit test runners into JSON, suitable for submitting as a report instance's input.

Tags are useful when health-check results are being reported for multiple instances of a service (a microservice, a container etc.). An automatically created dashboard can show the health of all currently running instances (either the current health or historic data, using charts or text).
Health-check statuses for all instances of a service

If you want to trigger an active alarm for each received fail value, the alarm definition will be simple:

if (rows[0][0] != 'ok') {
    alarm('Health check failed');
}

If an active alarm should be triggered for each tag values combination, the tags should be included in the alarm key:

if (rows[0][0] != 'ok') {
    alarm('Health check failed for ' + tags.join(','));
}

If you want to check historical health check values before triggering an alarm, for example to check if all checks failed for the last hour, you will need to make an API call that fetches report instances. To specify a date and time from which the instances should be returned, you can create a specific Javascript Date object, or directly use one of the query parameters of the API call supporting human-readable date specifications:

var allNotOk = get('/reports/health_check/instances', {from: '1 hour ago', limit: 20}).
               json.result.every(instance => instance.rows[0][0] != 'ok');
if (allNotOk) {
    alarm('Health checks failed for the last hour');
}

Heartbeat checks are a very useful complement, ensuring that the health-check reports are actually being sent.

System commands
Even when a system command produces output meant for humans, the automatic parsing algorithms convert it into a tabular representation that is often sufficient as a source for charts and alarms. This makes it possible to create report instances just by executing the command and sending its output using curl.

Since multiple input formats are parsed automatically (e.g. ASCII tables, whitespace-aligned tables, CSV files), many types of commands can be used to produce a report instance's input, for example:

  • commands inspecting system state, like ps, free or lsof, possibly used together with grep to filter sent lines
  • commands supporting deployment tools, like aws, docker or supervisorctl
  • your own commands that output some summary of your data.

What about the correctness of the automated parsing? Definitely, there are cases when a free-form input is too irregular to guess a "correct" tabular representation, or when different runs produce outputs too different to define the wanted chart series and alarms. In that case you can try to manually specify the input format or a field delimiter. If that is not sufficient, you will have to parse the input into a more robust format, like JSON or CSV.

Logfiles
Logfiles often contain interesting data that, after filtering and summarizing, can be submitted as a report, for example:
  • a number of times a given event was logged
  • an average processing time (computed by extracting processing times from logfile lines)
  • all lines containing the ERROR string (submitted using the format=single query parameter)

When processing logfiles repeatedly, using cron, usually only the lines that appeared in a logfile since the previous invocation should be taken into consideration. The newcontent command of the moniqueio tool is a helper that implements this behaviour.
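The idea can be sketched in Python: remember the byte offset reached on the previous run in a state file and read only what was appended since. This is an illustration of the behaviour, not the moniqueio implementation, and it ignores log rotation:

```python
import os

def new_content(logfile, statefile):
    """Return the lines appended to logfile since the previous call.

    The byte offset reached so far is persisted in statefile, so the
    function can be called from repeated cron runs.
    """
    offset = 0
    if os.path.exists(statefile):
        with open(statefile) as f:
            offset = int(f.read() or 0)
    with open(logfile) as f:
        f.seek(offset)
        lines = f.readlines()
        new_offset = f.tell()
    with open(statefile, 'w') as f:
        f.write(str(new_offset))
    return lines
```

The first run returns the whole file; each subsequent run returns only the newly appended lines (an empty list if nothing was written).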

Cron jobs
A simple notification indicating a successful run of a command can be sent using the && shell operator (which executes the right-hand side if the left-hand side completes with a return code of 0):

<command> && (echo ok | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- "")

A more robust way is to use the run command of the moniqueio tool, which captures return codes, stdout/stderr samples and the elapsed time.

Both ways can be used in crontab files, allowing creating dashboard tiles displaying a history of job runs and defining alarms checking return statuses (or stdout/stderr contents). A heartbeat check should be set up to ensure the jobs are actually being executed.

Tags are useful when dealing with the same cron job run on multiple servers. By including an IP address or a host name in a tag, the source server can be easily identified. Additionally, a tag could contain the name of a job, allowing automatic creation of a dashboard tile for each added job, for example:

10 2 * * *    moniqueio run | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ""
System resources usage (a replacement for automated monitoring systems)
The moniqueio tool comes with the sysreports command that sends reports summarizing the resource usage of a Linux host: CPU usage, disk usage, free disk space, network throughput etc. The command allows adding tags, supporting auto-created dashboards displaying system-level data of multiple, possibly ephemeral, server instances.

When a specific system metric should be monitored, it's usually very easy to push it: the output of many system commands can be sent directly, as well as the content of system files (e.g. inside the /proc directory).

Javascript alarms
An alarm definition is Javascript code that is run for a report instance. The code can access global variables and call alarm-specific functions, as well as Javascript built-ins. The Javascript engine supports the ECMAScript 6 standard.
Global variables
An alarm definition is executed in the context of the current report instance, whose data is available as the following global variables:
  • reportName (a string) — the current report's name
  • id (a string) — a hex string identifying the current instance
  • tags (an array of strings) — tags attached to the current instance
  • created (a Date object) — the creation date of the current instance
  • rows (an array of arrays of JSON values) — the current instance's tabular representation: an array of rows, where each row is an array of cells. Each cell is a JSON-serializable object. If the instance's input was textual, cells will be strings. If it was JSON, the cell types will match the source types.
  • header (an array of integers) — indexes of the rows array that identify rows that are regarded as header rows (not containing regular data). This might not be accurate, because it's based on automatic detection (if not set manually).
  • input (a JSON value) — the original input from which the tabular representation was created. It's especially useful when the original input was a JSON document, because it's usually easier to access it directly than to deal with a tabular representation.
Alarm-specific functions
alarm(alarmKey [, details])

Trigger an active alarm with the alarm key alarmKey (a string) and the optional message details (a string). See the user guide for a description.


print(message)

Print a message (a string) to the output window when running an alarm in a "dry run" mode.

get(pathOrUrl [, params])

post(pathOrUrl, data [, params])

put(pathOrUrl, data [, params])

delete_(pathOrUrl [, params])

Perform an HTTP/HTTPS call to either the API, or an external URL.

The pathOrUrl argument can have two forms:

  • if it's not a full URL but a path, it's interpreted as a path component of an API URL and appended to the API base URL. An API key doesn't need to be set explicitly.
  • if it's a full URL starting with http:// or https://, it is interpreted as a full external URL

params is an optional argument - an object specifying query parameters (object keys are parameter names and object values are parameter values).

In the case of the post and put functions, data is a string sent as the request body.

Result: an object representing an HTTP response, having the following properties:

  • code (a number) — the HTTP status code
  • content (a string) — the textual content
  • json (a JSON value) — the content, assumed to be JSON, converted to Javascript data types (null if the content is not JSON)

Example. A call creating a report instance that sets some query parameters and checks a response:

var r = post('/reports/numbers', '10__20__30', {delimiter: '__', tags: 'important,ip:'});
if (!r.json.success) {
    alarm('API call failed');
} else {
    print('Parsed nums: ' + r.json.result.rows);
}

Convert a string input into a Date object. The input can have multiple formats, and the conversion works correctly for inputs like today, 3 days ago, 2016-02-01, 02/01/16, Mon Feb 01 2016 08:15:00 GMT+0200. The function is useful when a Javascript Date must be constructed from a datetime string that is not straightforward to parse.


asNumber(input)

Convert a string input into a number. The function does the following conversions:

  • percent values are converted to a fractional representation — asNumber('32%') gives 0.32
  • file size units like KB, MB, GB are converted to a byte value — asNumber('4 kB') gives 4096
  • if a string mixes words and numbers, the first number is extracted — asNumber('items: 4 or more') gives 4
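The conversions can be approximated in Python. This is a rough re-implementation for illustration, not the exact parsing logic — it covers only the documented examples, using 1024-based units as the '4 kB' example implies:

```python
import re

def as_number(s):
    """Approximate the documented asNumber() conversions."""
    units = {'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3}
    # Find the first number, optionally followed by a percent or size unit.
    m = re.search(r'(-?\d+(?:\.\d+)?)\s*(%|kb|mb|gb)?', s, re.IGNORECASE)
    if not m:
        return None
    value = float(m.group(1))
    suffix = (m.group(2) or '').lower()
    if suffix == '%':
        return value / 100.0
    if suffix in units:
        return value * units[suffix]
    return value

assert as_number('32%') == 0.32
assert as_number('4 kB') == 4096
assert as_number('items: 4 or more') == 4
```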

slack(message)

Directly send a text message to the configured Slack channel. Requires setting up the Slack integration on the SETTINGS / INTEGRATIONS page and checking the "Receive Slack messages from slack(message) alarm calls" option.

Alarm notifications can be received as Slack messages automatically when the option "Receive alarm notifications in Slack" is checked. The slack() function can be used to send a Slack message directly, without triggering an active alarm. Note that while multiple alarm() calls can be grouped into a single active alarm and result in a single notification, each slack() call sends a separate Slack message, possibly resulting in a large number of received messages.


var maxNumber = Math.max.apply(null, rows[0]);
slack('The max number for today is ' + maxNumber);
pagerduty(incident_key [, description [, details]])

Directly trigger a PagerDuty incident. Requires setting up the PagerDuty integration on the SETTINGS / INTEGRATIONS page and checking the "Create PagerDuty incidents from pagerduty() alarm calls" option.

The function can be called with 1, 2 or 3 arguments. The arguments have the same meaning as described in the PagerDuty documentation:

  • incident_key — a globally unique identifier of the event. All events triggered with the same incident_key will be grouped as a single PagerDuty incident.
  • description — text describing the incident. If only the incident_key argument is passed, the description has the same value as the incident_key.
  • details — an arbitrary JSON document included in the incident.

The pagerduty() function can be used to bypass the built-in handling of alarms by replacing the alarm() calls. The two functions have very similar semantics: both rely on the "key" argument to uniquely identify an event. The difference is that each report has its own namespace of alarmKeys, while PagerDuty incident_keys are global.


// Trigger an incident with both 'incident_key' and 'description' equal to "api1 fail"
pagerduty('api1 fail');

// Use a dynamically constructed 'incident_key' and 'description'
pagerduty('web: wrong status code: ' + input.status_code, 'response: ' + input.content.slice(100));

// Pass the 'details' argument
pagerduty(tags[0] + ' out of diskspace', rows[1][4], {datetime: created + '', tags: tags});
Javascript built-ins
All functions built into the Javascript language are available, as described in books and online references. Note that the objects and the functions present when Javascript is run by a web browser, such as the window object or the alert function, are not available.

Some of the useful functions and operators are:

  • parseFloat and parseInt for converting strings into numbers
  • some comparison operators allow mixing strings and numbers, for example, the comparison 23 == "23" yields true
  • regular expressions for matching and searching for substrings
  • JSON.parse and JSON.stringify functions allow reading / serializing JSON format
  • the most concise way to iterate over items in an array is using the for...of statement, for example:
    for (var row of rows) {
        var firstColumn = row[0];
        // ...
    }

    Other iteration methods of arrays are useful to shorten the code by replacing explicit loops.
API usage guidelines
The API is JSON-based and follows REST principles. The HTTPS protocol is required to access the API (using plain HTTP will result in a Bad Request error).
Passing an API key
An API key can be passed using two methods:
  • HTTP Basic Authentication — an API key should be specified as either a username or a password
  • URL query parameter key
The security of the authentication mechanism is based on using the encrypted HTTPS protocol, as well as keeping the API key secret. If your API key has been exposed, you can reissue it on the SETTINGS page.
Response format
Except when the response format is explicitly set with the output query parameter, all API calls return a JSON object with the following properties:
  • success (a boolean) — true if the call was successful and false if not. Checking this value can replace checking the HTTP status code.
  • result (a JSON value) — the actual result of an API call, usually an object or an array. Might be omitted from the response if an API call returns no result.
  • details (an object) — meta-data about the result. For example, it can contain a count of all returned items. When an error happens, it has the following properties:
    • message (a string) — a human-readable error message
    • errorCode (a string) — a short string identifying an error

The HTTP response code is set to signal a possible error:

  • 200 (OK) — a successful call
  • 202 (Accepted) — the call was successful but the result is not immediately available
  • 400 (Bad Request) — invalid format of input parameters/data
  • 401 (Unauthorized) — invalid API key
  • 404 (Not Found) — wrong URL or a non-existing report
  • 422 (Unprocessable Entity) — invalid input format (e.g. an empty input used as a report instance's input)
  • 429 (Too Many Requests) — too many requests performed in the last minute (the default limit is 1200 API calls per minute)
  • 500 (Internal Server Error) — internal error - the error was signaled to admins
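Given the per-minute limit behind the 429 status, a client can treat it as a signal to back off and retry. A minimal sketch, with the HTTP call injected as a function so the logic is shown without the network (the helper is illustrative, not part of any official client):

```python
import time

def call_with_retry(do_request, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call do_request() and retry with exponential backoff on HTTP 429.

    do_request is a function returning (status_code, body); any status
    other than 429, or the result of the last attempt, is returned as-is.
    """
    for attempt in range(retries):
        status, body = do_request()
        if status != 429 or attempt == retries - 1:
            return status, body
        sleep(base_delay * (2 ** attempt))

# Simulated responses: rate-limited twice, then a success.
responses = iter([(429, ''), (429, ''), (200, '{"success": true}')])
status, body = call_with_retry(lambda: next(responses), sleep=lambda s: None)
assert status == 200
```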
Date, boolean input formats
The query parameters of API URLs can specify a date with time or a boolean. The API accepts the following input formats for them:
  • date with time — the canonical format is ISO 8601 with the UTC timezone, for example 2016-02-01T08:15:30Z. For convenience, other formats will also be parsed, for example 1/2/16, 2016-02-01, as well as some English phrases, like 3 days ago, yesterday.
  • boolean:
    • inputs representing false: 0, false, f, no
    • inputs representing true: 1, true, t, yes
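A client preparing query parameters can normalize Python values into these accepted representations, for example:

```python
from datetime import datetime, timezone

def bool_param(value):
    """Render a Python boolean as one of the accepted boolean inputs."""
    return '1' if value else '0'

def datetime_param(dt):
    """Render a datetime in the canonical ISO 8601 / UTC format."""
    return dt.astimezone(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')

assert bool_param(True) == '1'
assert datetime_param(datetime(2016, 2, 1, 8, 15, 30,
                               tzinfo=timezone.utc)) == '2016-02-01T08:15:30Z'
```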
Paging results
If the number of results returned by an API call is large, the results are broken into pages that need to be fetched with separate API calls. The default size of a page is 20 and can be increased up to 1000 by using the limit query parameter.

The URL for fetching the next page is set in the details object under the next key. If no further results are available, the value is set to null. For example, the following Python code implements fetching all paged results:

import requests

def fetch_all(url, key):
    results = []
    while url:
        r = requests.get(url, params={'key': key})
        if not r.json()['success']:
            raise Exception('Invalid API response %s' % r.text)
        results += r.json()['result']
        url = r.json()['details']['next']
    return results

# Sample invocation:
# fetch_all('', '9fxvMi8aR3CZ5BsNj0rt0odW')
A JSON object returned from an API call can contain the href key with the value of an API URL containing details of the object.

For example, a call to /reports/diskfree/instances can return an array of objects containing the href key. Accessing the returned URL with the GET method will retrieve all details of the report instance.

POST /reports/<name>
Create a report instance belonging to a report <name>.

Input: The input can be of multiple formats (see the format query parameter documentation below) and will be parsed into a table with optional header rows. It must be passed in one of the two ways:

  • as POST binary data
  • as a POST form value stored under the key specified with the formKey query parameter (the request data must be encoded with the content type application/x-www-form-urlencoded)

Query parameters (optional):

  • tags — a comma-separated list of tags attached to the report instance. Up to three tags are supported.
  • format — an input format type, one of:
    • any — the input is of unspecified type - the format will be guessed (the default)
    • json — the input is a JSON document. There are two normalized representations of an instance's data:
      • an array of rows, where each row is an array of JSON values
      • an array of rows, where each row is an object mapping column names (strings) to column values (JSON values)
      However, even if the input does not conform to a normalized representation, but it's a valid JSON document, it will be converted to a valid representation by filling the missing parts with defaults, for example:
      • if the number of cells in each row is not equal, the shorter rows will be filled with null values
      • a single object or number will be treated as a single-row table
      Additionally, the input document is flattened to better match the tabular representation:
      • nested objects are flattened, with keys joined with the . (dot) character. For example, {"x": {"y": 8, "z": true}} is converted to {"x.y": 8, "x.z": true}
      • arrays of objects contained in outer objects are unnested. For example, {"x": [{"y": 1}, {"y": 2}]} is converted to [{"x.y": 1}, {"x.y": 2}]
      • the flattening rules are applied recursively
      For some types of inputs, the recursive unnesting of arrays might generate a document that is very big. If the number of generated rows exceeds 10000, the flattening is skipped — the input is processed using the jsonraw format.
    • jsonraw — the same as json, but the flattening is not applied. This is needed when a single table cell should be a nested object, or if the flattening generates too many rows. This is usually not very useful for drawing charts, but will be handled correctly by dashboard tiles displaying textual content.
    • csv — CSV format - each input line is a sequence of fields separated with a delimiter. The delimiter value is guessed or can be explicitly passed as the delimiter parameter.
    • ascii — the input is a table with cells separated with either whitespace or the |, =, +, - characters. This is what the output of commands like ps, free or psql looks like.
    • asciitable — a subset of the ascii format, requiring usage of |, =, +, - characters for "drawing" table borders.
    • asciispace — a subset of the ascii format, requiring usage of spaces for aligning table columns.
    • props — each input line is treated as a key/value pair, possibly separated with a delimiter. Each row of the parsed table will have two cells. Sample inputs conforming to this format: name=value lines, Java's *.properties files.
    • tokens — each token (a string separated with whitespace) is converted to a table cell. If the number of tokens in each input line is not constant, the shorter lines are filled with an empty string.
    • markdown — the input is converted into a single-cell table. Dashboard tiles will render it as Markdown.
    • single — the input is converted into a single-cell table. Dashboard tiles will render it as ASCII text.
    • singletable — the input is converted into a single-cell table. Dashboard tiles will render it as a sortable table.
  • header — a comma-separated list of integers - indexes of the report instance's rows that are header rows (the first index is 0). The default behaviour is to auto-detect the header. It's not crucial to have a header correctly specified - it's mostly used for displaying chart and table labels.
  • delimiter — a string that delimits fields (table cells) on a single input line. Specifying this option assumes the input has the format csv or tokens. This parameter is useful when the guessed delimiter value is wrong.
  • autotags — a comma-separated list of automatically computed tags. Currently the only supported value is:
    • ip — attach a tag ip:<ip-address>, where <ip-address> is the public IP address of the calling host.
  • link — a URL associated with the report instance. Clicking the title of a dashboard tile displaying the report instance will open this URL.
  • formKey — a POST form's key that stores a report instance's input to parse (this overrides the default behaviour of using the request's data as the direct input).
  • created — the timestamp set as the instance's creation date. This can't be a future date. The default value is equal to the datetime of the API call. This parameter is useful when importing historical data.
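The flattening rules described for the json format can be sketched in Javascript. This is an illustrative sketch mirroring the documented examples, not the service's actual implementation, and it omits the 10000-row limit:

```javascript
// Sketch of the documented flattening rules: nested object keys are
// joined with '.', and arrays of objects nested in an outer object are
// unnested into multiple rows. Returns an array of flat objects (rows).
function flatten(value, prefix) {
  prefix = prefix || '';
  if (Array.isArray(value)) {
    // An array is unnested: each element contributes its own rows.
    return value.reduce(function (rows, item) {
      return rows.concat(flatten(item, prefix));
    }, []);
  }
  if (value !== null && typeof value === 'object') {
    // Flatten each key, then cross-join the per-key rows.
    var rows = [{}];
    Object.keys(value).forEach(function (key) {
      var sub = flatten(value[key], prefix ? prefix + '.' + key : key);
      var next = [];
      rows.forEach(function (row) {
        sub.forEach(function (subRow) {
          next.push(Object.assign({}, row, subRow));
        });
      });
      rows = next;
    });
    return rows;
  }
  // A scalar becomes a single cell.
  var leaf = {};
  leaf[prefix] = value;
  return [leaf];
}

// flatten({x: {y: 8, z: true}})  -> [{'x.y': 8, 'x.z': true}]
// flatten({x: [{y: 1}, {y: 2}]}) -> [{'x.y': 1}, {'x.y': 2}]
```

The cross-join step is what makes the rules apply recursively: each nested key's rows are merged with the rows produced by the keys before it.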

# Create a report instance from 'free' command output. We are not specifying a format or other options explicitly.

free | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ''

# Send a CSV file, explicitly specifying a format and a header.

cat data.csv | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ',1'

# Import a historic report by specifying a 'created' date and tags.

cat report_old.json | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ',important'

# Send data encoded as POST form.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data 'mydata=1,2,3,4,5' ''

GET /reports
Fetch report names.

Query parameters (optional):

  • prefix — only fetch reports whose names start with the given prefix
  • lastName and limit — used for paging results

Result: an array of objects with the keys:

  • name — report name
  • href — an API link to the report

# Fetch all report names (up to the default limit of 20)

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "name": "diskfree", 
    "href": ""
  }, {
    "name": "j3", 
    "href": ""
  }, {
    "name": "mydata", 
    "href": ""
  }]
}

GET /reports/<name>
Fetch information about the report <name>.

Result: an object with the following keys:

  • created — the creation datetime of the report
  • reportInstanceCount — the number of report instances created for the report in the current billing period
  • storageSpace — storage space (as a number of bytes) consumed by the report instances of the report in the current billing period

# Fetch basic information about the report.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": {
    "reportInstanceCount": 4, 
    "storageSpace": 2824
  }
}

PUT /reports/<name>
Ensure the report <name> exists.

Normally, a report is automatically created when the first report instance is submitted with the call POST /reports/<name>. This API endpoint allows creating a report without submitting a report instance, making it possible to configure a report before it starts receiving data.

# Ensure the report exists.

curl --request PUT --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": {}
}

DELETE /reports/<name>
Delete the report <name> and all associated data, including all report instances belonging to the report, the alarm definition, active alarms and dashboard tiles displaying data from the report.

The data is not immediately deleted, which is indicated by the 202 Accepted status code returned on success. The amount of time needed to complete the deletion operation depends on the number of report instances created for the report and can take from a fraction of a second up to several minutes.

# Delete a report and all associated data (including report instances)

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request DELETE
{
  "success": true
}

GET /reports/<name>/instances
Fetch report instances of the report <name>.

Query parameters (optional):

  • from — fetch instances created on the specified date or later
  • to — fetch instances created on the specified date or earlier
  • tags — a comma-separated list of tags - fetch instances having the specified tags
  • expand — 0 or 1 (default) - whether the returned instances should contain the rows and header attributes containing the instance's tabular representation
  • expandInput — 0 (default) or 1 - whether the returned instances should contain the input attribute with the original instance's input from which the tabular representation was created
  • order — asc (default) or desc - the direction of ordering of the returned instances by the creation datetime (ascending / descending)
  • fromId — fetch instances starting from (and including) the given report instance id (preserving the specified order)
  • lastId — the same as fromId, but excludes the given report instance id
  • limit — limit the number of returned results to the specified number (default: 20)

Result: an array of objects with the keys:

  • id (a string) — the ID of the report instance
  • tags (an array of strings) — the tags attached to the report instance
  • created (a string) — the creation date and time of the report instance
  • rows (an array of arrays of JSON values) — present if the expand parameter is true - a tabular representation of the report instance
  • header (an array of numbers) — present if the expand parameter is true - the indexes of the rows from the rows array that are header rows
  • input (a JSON value) — present if the expandInput parameter is true - the original instance's input from which the tabular representation was created. If the input was JSON, it's the original JSON value. If it was textual, it's a string.
  • href — the API link to the report instance
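The paging parameters combine into a simple loop: fetch a page, remember the id of the last returned instance and pass it as lastId in the next request. A sketch in Javascript, where fetchPage is a hypothetical stand-in for an HTTP GET of this endpoint that returns the parsed result array:

```javascript
// Page through all report instances, 'limit' at a time. fetchPage is a
// stand-in for an HTTP GET of /reports/<name>/instances with the given
// query parameters; it must return the parsed "result" array.
function fetchAllInstances(fetchPage, limit) {
  var all = [];
  var lastId = null;
  for (;;) {
    var params = {limit: limit};
    if (lastId !== null) {
      params.lastId = lastId;  // excludes the id itself, unlike fromId
    }
    var page = fetchPage(params);
    all = all.concat(page);
    if (page.length < limit) {
      break;  // a short page means there are no more instances
    }
    lastId = page[page.length - 1].id;
  }
  return all;
}
```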

# Fetch instances for the last two hours, without printing rows and headers.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "id": "bfd7b0deacf611e6a4dba4bf0107c608", 
    "tags": [], 
    "created": "2016-11-17T18:50:46.189488", 
    "href": ""
  }]
}

# Fetch the newest instance sent from a given IP address.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": ""
  }, 
  "result": [{
    "id": "bfd7b0deacf611e6a4dba4bf0107c608", 
    "tags": ["ip:"], 
    "created": "2016-11-17T18:50:46.189488", 
    "rows": [
      ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"], 
      ["udev", "8030116", "4", "8030112", "1%", "/dev"], 
      ["tmpfs", "1608264", "22256", "1586008", "2%", "/run"], 
      ["/dev/sda1", "22841212", "18064028", "3593852", "84%", "/"], 
      ["none", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
      ["none", "5120", "0", "5120", "0%", "/run/lock"], 
      ["none", "8041316", "493000", "7548316", "7%", "/run/shm"], 
      ["none", "102400", "20", "102380", "1%", "/run/user"]
    ], 
    "header": [0], 
    "href": ""
  }]
}

# Fetch the first instance created after an instance with a given id.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "id": "ac47b0dea45311e6a4dba4bf0107c612", 
    "tags": [], 
    "created": "2016-11-17T18:52:48.5833", 
    "href": ""
  }]
}

GET /reports/<name>/instances/<id>
Fetch details of a single report instance belonging to the report <name> and having the ID <id>.

Result: an object with the same keys as described above and as if the expand and expandInput parameters were true.

# Fetch data of a specific instance.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": {
    "id": "bfd7b0deacf611e6a4dba4bf0107c608", 
    "tags": [], 
    "created": "2016-11-17T18:50:46.189488", 
    "rows": [
      ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"], 
      ["udev", "8030116", "4", "8030112", "1%", "/dev"], 
      ["tmpfs", "1608264", "22256", "1586008", "2%", "/run"], 
      ["/dev/sda1", "22841212", "18064028", "3593852", "84%", "/"], 
      ["none", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
      ["none", "5120", "0", "5120", "0%", "/run/lock"], 
      ["none", "8041316", "493000", "7548316", "7%", "/run/shm"], 
      ["none", "102400", "20", "102380", "1%", "/run/user"]
    ], 
    "header": [0], 
    "input": "Filesystem     1K-blocks     Used Available Use% Mounted on\nudev             8030116        4   8030112   1% /dev\ntmpfs            1608264    22256   1586008   2% /run\n/dev/sda1       22841212 18064028   3593852  84% /\nnone                   4        0         4   0% /sys/fs/cgroup\nnone                5120        0      5120   0% /run/lock\nnone             8041316   493000   7548316   7% /run/shm\nnone              102400       20    102380   1% /run/user\n", 
    "href": ""
  }
}

DELETE /reports/<name>/instances
Delete a range of report instances belonging to the report <name> and specified by the optional query parameters. If no query parameters are passed, all report instances will be deleted.

The data is not immediately deleted, which is indicated by the 202 Accepted status code returned on success. The amount of time needed to complete the deletion operation depends on the number of deleted report instances and can take from a fraction of a second up to several minutes.

Query parameters (optional):

  • from — delete instances created on the specified date or later
  • to — delete instances created on the specified date or earlier
  • tags — a comma-separated list of tags - delete instances having the specified tags

# Delete all report instances of the report.

curl --request DELETE --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true
}

# Delete report instances having the specified tags and created after the given date.

curl --request DELETE --user 9fxvMi8aR3CZ5BsNj0rt0odW: ',dc:01'
{
  "success": true
}

DELETE /reports/<name>/instances/<id>
Delete a single report instance belonging to the report <name> and having the ID <id>.

# Delete a single report instance

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request DELETE
{
  "success": true
}

GET /reports/<name>/tags
Fetch a list of tags attached to report instances of the report <name>.

Query parameters (optional):

  • prefix — only fetch tags whose names start with the given prefix
  • lastName and limit — used for paging results

Result: an array of objects with the key tag holding the tag's name.

# Fetch all tags used for the report instances of the report having the prefix 'ip:'

curl --request GET --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "tag": "ip:"
  }, {
    "tag": "ip:"
  }, {
    "tag": "ip:"
  }, {
    "tag": "ip:"
  }, {
    "tag": "ip:"
  }]
}

GET /reports/<name>/alarmdefinition
Fetch content of the Javascript alarm definition for the report <name>.

Result: an object with the key:

  • source (a string) — the alarm definition as the Javascript source code. It will be an empty string if there's no alarm defined for the report.

# Fetch Javascript alarm definition for a report

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": {
    "source": "if (parseFloat(rows[1][4]) > 90)\n{\nalarm('High disk usage for /');\n}"
  }
}

PUT /reports/<name>/alarmdefinition
Set the content of the Javascript alarm definition for the report <name>. The content must be specified as PUT binary data.

# Set Javascript alarm definition from a shell prompt.

$ curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @- << EOF
if (rows.length > 10) {
  alarm('Too many rows');
}
EOF
{
  "success": true
}

GET /reports/<name>/alarms/active
Fetch active alarms for the report <name>.

Result: an array of objects with the keys:

  • alarmKey (a string) — the string identifying the alarm - the first argument of the alarm function
  • details (a string) — the detailed message - the second argument of the alarm function that triggered the active alarm. It's null when the argument was omitted.
  • triggered (a string) — the date and time when the active alarm was triggered
  • count (an integer) — the number of times the alarm was issued (multiple alarm calls with the same alarmKey are grouped under a single active alarm)

# Fetch active alarms for a report.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "result": [{
    "alarmKey": "High disk usage", 
    "details": "94%", 
    "triggered": "2016-11-17T18:50:46.189488", 
    "count": 4
  }, {
    "alarmKey": "Out of disk space", 
    "details": null, 
    "triggered": "2016-11-18T19:53:44.483132", 
    "count": 1
  }]
}

POST /reports/<name>/alarms
Trigger an alarm for the report <name>. It has the same effect as calling the alarm function from an alarm definition, including sending the configured notifications.

Query parameters:

  • alarmKey and details — the same meaning as the arguments of the Javascript function alarm. The details parameter can be skipped.

# Trigger an active alarm from a shell prompt.

$ curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST ''
{
  "success": true
}

POST /reports/<name>/instances/<id>/alarmdryrun
Run the Javascript alarm definition, submitted as POST data, in "dry run" mode, against a report instance with ID <id> belonging to the report <name>.

Result: an object with the keys:

  • errorEncountered (a boolean) — true if an error happened during execution of the alarm definition
  • exceptionMessage (a string) — if errorEncountered is true, it will contain an error message
  • alarm (an array, optional) — an array of objects with the keys alarmKey, details for each alarm function call
  • print (an array, optional) — an array of string messages issued using the print function calls

# Dry-run an alarm definition that makes an 'alarm' and 'print' call.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @- '' << EOF
print('Number of rows: ' + rows.length);
alarm('Always triggered');
EOF
{
  "success": true, 
  "result": {
    "errorEncountered": false, 
    "exceptionMessage": "", 
    "print": ["Number of rows: 11"], 
    "alarm": [{
      "details": null, 
      "alarmKey": "Always triggered"
    }]
  }
}

# Dry-run an invalid alarm definition.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @- '' << EOF
invalid code
EOF
{
  "success": true, 
  "result": {
    "errorEncountered": true, 
    "exceptionMessage": "SyntaxError: Unexpected identifier\n    at <alarm_definition>:1:8\n    invalid code"
  }
}

GET /reports/<name>/instances/<id>/alarmglobals
Fetch the Javascript source containing Javascript globals used for an alarm run for a specific report instance with ID <id> belonging to the report <name>.

Query parameters (optional):

  • output — if set to raw, the result of the API call will not be a JSON document, but the Javascript source as text

Result: the Javascript source as a string.

# Get a report instance's data as Javascript global variables. 

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''

reportName = "diskfree";
id = "bfd7b0deacf611e6a4dba4bf0107c608";
tags = [];
created = new Date("2016-11-17T18:50:46.189488");
rows = [
  ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"], 
  ["udev", "8030116", "4", "8030112", "1%", "/dev"], 
  ["tmpfs", "1608264", "22256", "1586008", "2%", "/run"], 
  ["/dev/sda1", "22841212", "18064028", "3593852", "84%", "/"], 
  ["none", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
  ["none", "5120", "0", "5120", "0%", "/run/lock"], 
  ["none", "8041316", "493000", "7548316", "7%", "/run/shm"], 
  ["none", "102400", "20", "102380", "1%", "/run/user"], 
header = [0];
input = "Filesystem     1K-blocks     Used Available Use% Mounted on\n" + 
    "udev             8030116        4   8030112   1% /dev\n" + 
    "tmpfs            1608264    22256   1586008   2% /run\n" + 
    "/dev/sda1       22841212 18064028   3593852  84% /\n" + 
    "none                   4        0         4   0% /sys/fs/cgroup\n" + 
    "none                5120        0      5120   0% /run/lock\n" + 
    "none             8041316   493000   7548316   7% /run/shm\n" + 
    "none              102400       20    102380   1% /run/user\n"; 

GET /reports/<name>/heartbeatchecks
Fetch a list of heartbeat checks associated with the report <name>.

Result: a JSON array containing objects with the following keys:

  • tags — a list of tags (strings) for which the check is specified. It's an empty array if the check matches any report instance of the report.
  • maxAgeSeconds — an integer defining the maximum allowed age of the newest report instance, as a number of seconds. Note that while the web interface displays the maximum age broken into days, hours, minutes and seconds fields, maxAgeSeconds expresses the age as a single total number of seconds (for example, one day is represented as 86400).
If no heartbeat check is defined for the report, the result will be an empty array.

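A maximum age entered as days, hours, minutes and seconds therefore has to be folded into a single total before being sent. The conversion is plain arithmetic:

```javascript
// Fold a days/hours/minutes/seconds maximum age into the single total
// expected by the maxAgeSeconds key.
function toMaxAgeSeconds(days, hours, minutes, seconds) {
  return ((days * 24 + hours) * 60 + minutes) * 60 + seconds;
}

// toMaxAgeSeconds(1, 0, 0, 0) === 86400 (one day, as in the note above)
```
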
# Get heartbeat checks specified for a report. 

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": [{
    "tags": [], 
    "maxAgeSeconds": 3600
  }, {
    "tags": ["ip:"], 
    "maxAgeSeconds": 300
  }]
}

PUT /reports/<name>/heartbeatchecks
Assign a list of heartbeat checks to the report <name>, replacing the currently set checks.

Input: the input should be a JSON array sent as request data. Each element of the array must be an object with the keys tags, maxAgeSeconds described for the GET endpoint. The key tags can be omitted, which will be interpreted as if the value was an empty array.

Result: an array of assigned checks - objects with the keys tags, maxAgeSeconds. The result is a normalized form of the input data — duplicated definitions for the same tags are removed (only the first occurrence is present) and an absent tags key is set to an empty array.
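The normalization described above can be sketched as follows. This is an illustrative sketch; the service's actual duplicate-detection rules may differ in details such as tag ordering:

```javascript
// Normalize a list of heartbeat checks: a missing "tags" key becomes
// an empty array, and only the first check for a given tags value is
// kept.
function normalizeChecks(checks) {
  var seen = {};
  var result = [];
  checks.forEach(function (check) {
    var tags = check.tags || [];
    var key = JSON.stringify(tags);
    if (!seen[key]) {
      seen[key] = true;
      result.push({tags: tags, maxAgeSeconds: check.maxAgeSeconds});
    }
  });
  return result;
}
```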

# Set a list of heartbeat checks performed for a report.

echo '[{"tags": ["ip:"], "maxAgeSeconds": 300}, {"maxAgeSeconds": 3600}]' | curl --request PUT --data-binary @- --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": [{
    "tags": [], 
    "maxAgeSeconds": 3600
  }, {
    "tags": ["ip:"], 
    "maxAgeSeconds": 300
  }]
}

DELETE /reports/<name>/heartbeatchecks
Remove all heartbeat checks possibly assigned to the report <name>.

# Delete all heartbeat checks assigned to a report.

curl --request DELETE --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true
}

GET /storage/<key>
Fetch arbitrary content stored under the key <key>.

Query parameters (optional):

  • output — if set to raw, the result of the API call will not be a JSON document, but the content itself

Result: the content stored under the key, set under the result attribute, or returned directly as the HTTP response if output=raw is set.

The call results in a 200 response code even for non-existing keys; in such cases an empty string is returned.

If the content is binary data that cannot be decoded as UTF-8, the output=raw option must be set; otherwise the call will result in a 400 (Bad Request) error.

# Retrieve stored content.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": "working"
}

# Retrieve the stored content directly, skipping creating a JSON response document.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
working

PUT /storage/<key>
Set arbitrary content, sent as binary PUT data, under the key <key>.

There is no endpoint for deleting a key, but the Storage API interprets an empty string as a non-existing key. The PUT endpoint can be used to clear a key by assigning an empty string to it.

# Store a string value.

echo -n working | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @-
{
  "success": true
}

# Clear an existing value.

echo -n '' | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @-
{
  "success": true
}

POST /reports/<name>/annotations
Add a custom annotation to dashboard tiles displaying the report <name>. The annotation will be associated with the latest report instance of the report at the time of the API call.

Query parameters:

  • message (required) — a custom message associated with the annotation
  • tags (optional) — a comma-separated list of tags to associate the annotation with

# Post a custom annotation.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-
{
  "success": true
}

# Post a custom annotation for a specific tag.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-
{
  "success": true
}

POST /reports/<name>/instances/<id>/annotations
Add a custom annotation to dashboard tiles displaying the report instance <id> of the report <name>. Compared to the API call described above, this call enables targeting a specific report instance (for example, one created some time ago).

Query parameters:

  • message (required) — a custom message associated with the annotation

# Post a custom annotation.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-
{
  "success": true
}