The documentation uses a random API key. Your own API key is displayed on the settings page.

If you want to review specific use cases like monitoring API responses, SQL results etc., you can jump to the "What to monitor?" guide.
Sending data

The data that is used for drawing graphs and checking for alarms comes from report instances grouped into reports. A report instance's data is represented as a table, but the actual input you send can be in one of many formats — a JSON document, a CSV file, command output and a single number are among the supported inputs.

The input data of a report instance should be sent as POST data to <report-name>. An API key can be specified as the key query parameter or as the HTTP username. For example, sending data about disk space usage coming from the df command can be accomplished in the following way using a Unix shell:

df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

{
  "result": {
    "tags": [],
    "rows": [
      ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"],
      ...
    ],
    "header": [0],
    ...
  }
}
The output of the df command is used by the curl command as POST data, and diskfree is the report name. You don't need to configure a report before you start sending report instances.

The API returns a JSON document. Under the result key a parsed report instance is printed, with the rows key holding the instance's tabular representation and the header key being an array of header row indexes.
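To illustrate the response shape described above, here is a small sketch of how a client could split a parsed report instance into header rows and data rows. The sample values are made up for illustration; only the result/rows/header structure comes from the API response shown above.

```javascript
// Split a parsed report instance into header rows and data rows,
// using the "header" array of row indexes from the API response.
function splitInstance(result) {
    var headerIdx = new Set(result.header);
    var header = result.rows.filter((_, i) => headerIdx.has(i));
    var data = result.rows.filter((_, i) => !headerIdx.has(i));
    return { header: header, data: data };
}

// A sample parsed instance (values are illustrative):
var sample = {
    tags: [],
    header: [0],
    rows: [
        ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"],
        ["/dev/sda1", "41251136", "10078484", "29057056", "26%", "/"]
    ]
};
var parts = splitInstance(sample);
// parts.header -> one header row, parts.data -> one data row
```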

More examples of calling the API are available on the front page.

Sending data periodically

In most cases a report instance should be sent periodically. It's what makes most dashboard tiles work (data is displayed from a time range of report instances) and how Javascript alarm checks are run (they are defined for a report and executed for each report instance). The usual way to achieve this is the Unix cron daemon. For example, to check disk space every 15 minutes, the following line could be added to the /etc/crontab file:

*/15 * * * *     df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

If you want to send reports from multiple server instances, you need to manage multiple crontab files. This part is left to tools tailored to your deployment strategy (it could be a configuration management system like Chef, Puppet, Salt or Ansible, a script building a Docker container, or a dedicated crontab manager like minicron). Of course, you can also use any alternative to cron.

Furthermore, nothing depends on the regularity of report instance submission, making possible uses that don't involve a periodic run, like:

  • sending a report instance when requested by a user from a UI
  • sending an error report when an error happens
  • sending statistics once per 1000 HTTP requests
  • sending a report when a git commit is pushed (using hooks)
Input format considerations

If you are constructing a report programmatically, the most convenient format is usually JSON. If your report has a header, you can represent each report row as a JSON object mapping column names to column values, and the whole report as an array of these objects, for example:

[
    {"country": "us", "users": 120, "newIds": [345, 349, 350]},
    {"country": "uk", "users": 34, "newIds": [362]},
    {"country": "de", "users": 27, "newIds": []}
]

If a header is not needed, table rows can be represented directly:

[
     ["us", 120, [345, 349, 350]],
     ["uk", 34, [362]],
     ["de", 27, []]
]

An array of objects and an array of arrays can be thought of as normalized JSON representations of a report instance. However, any JSON document will be parsed into a tabular representation. Rows with different lengths, nested objects, mixed arrays and objects — each of these cases is handled by filling missing cells with null values or by "flattening" algorithms.
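To make the normalization idea concrete, here is a sketch of one plausible way an array of JSON objects could be turned into a header row plus data rows, filling missing keys with null. The server-side algorithm is not documented here; this only illustrates the concept.

```javascript
// Illustrative normalization: array of objects -> header row + data rows,
// with missing keys filled with null.
function toTable(objects) {
    var columns = [];
    objects.forEach(function(obj) {
        Object.keys(obj).forEach(function(k) {
            if (columns.indexOf(k) === -1) columns.push(k);
        });
    });
    var rows = objects.map(function(obj) {
        return columns.map(function(c) {
            return obj.hasOwnProperty(c) ? obj[c] : null;
        });
    });
    return [columns].concat(rows);
}

var table = toTable([
    {country: 'us', users: 120},
    {country: 'uk', users: 34, newIds: [362]}
]);
// table[0] -> ['country', 'users', 'newIds']
// table[1] -> ['us', 120, null]   (missing newIds filled with null)
```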

For simple cases, constructing a JSON document can be skipped and a free-form string can be sent directly:

us 120
uk 34
de 27

When moving from a well-defined format like JSON to free-form inputs and command outputs, the chance that the input will be ambiguous and format detection will fail increases. To overcome the ambiguity, a format query parameter can be specified. A delimiter query parameter is also supported as an explicit definition of the field delimiter for free-form inputs.
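As a rough sketch of how a free-form input like the one above could be split into rows and fields: by default any run of whitespace separates fields, and an explicit delimiter mirrors the delimiter query parameter. This is an illustration, not the service's actual parser.

```javascript
// Illustrative free-form parsing: one table row per line; fields split
// by an explicit delimiter, or by runs of whitespace if none is given.
function parseFreeForm(text, delimiter) {
    return text.trim().split('\n').map(function(line) {
        return delimiter ? line.split(delimiter) : line.trim().split(/\s+/);
    });
}

var rows = parseFreeForm('us 120\nuk 34\nde 27');
// rows -> [['us', '120'], ['uk', '34'], ['de', '27']]
var csv = parseFreeForm('us,120\nuk,34', ',');
// csv -> [['us', '120'], ['uk', '34']]
```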

Sending Markdown and raw text
Text can be displayed directly in dashboard tiles (either the newest instance or a range of instances). If you specify the format query parameter as markdown or single, the input will be treated as a single piece of content (Markdown / raw text) and the resulting report instance will have a single table cell with the content.

The Markdown format supports using basic HTML tags directly and setting font color using the style attribute, for example:

And **now**
<font color=red>
    Something <em>important</em>
</font>

cat | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ''
Sending single values
Single numbers or strings can be used as an input and will be parsed into a single-cell table. This can be viewed as a mapping of the "metric" concept used in traditional monitoring systems to the tabular model (a metric name becomes a report name, and a metric value becomes the cell's content).

There is some additional support for boolean values often used in health checks. If the input is one of the strings: true, ok, success, yes for the true value or false, fail, failure, no, not, notok for the false value, then dashboard tiles displaying text will use green and red colors for displaying them (the colors can be overridden by changing tile settings) and tiles displaying graphs will interpret the values as 0 and 1.
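The boolean interpretation described above can be sketched as a small mapping: the accepted true/false strings (taken from the paragraph above) map to 1 and 0 for graph tiles, and to green/red for text tiles. The function names here are illustrative, not part of the API.

```javascript
// The accepted boolean strings, per the documentation above:
var TRUE_WORDS = ['true', 'ok', 'success', 'yes'];
var FALSE_WORDS = ['false', 'fail', 'failure', 'no', 'not', 'notok'];

// Graph tiles interpret booleans as 1/0:
function asNumber(value) {
    var v = String(value).toLowerCase();
    if (TRUE_WORDS.indexOf(v) !== -1) return 1;
    if (FALSE_WORDS.indexOf(v) !== -1) return 0;
    return null; // not a recognized boolean value
}

// Text tiles use green/red colors (overridable in tile settings):
function asColor(value) {
    var n = asNumber(value);
    return n === 1 ? 'green' : n === 0 ? 'red' : null;
}
// asNumber('ok') -> 1, asColor('fail') -> 'red'
```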

Creating dashboard tiles
Clicking Add Report in the DASHBOARD view shows a dialog for adding a dashboard tile (a single report can be displayed in multiple tiles). After selecting a report name, the most recent report instance is displayed as a table. You can now click table cells containing values that should be shown in a tile (usually some numbers).

Your selection serves as a template for selecting data from other report instances. In many cases, this is where the configuration of the tile's data source ends — clicking Add will draw the tile containing values selected from a range of report instances, or the newest instance if that was chosen. Initially, the visualization type — graph or text — is selected automatically. You can configure it by clicking the icon in the top-left corner of the tile (the cursor must be placed on the tile to make the icon visible).

If you need to manually add or modify a data source definition, clicking "Show definitions" will make them visible. Each definition has the form:

select column data-column where column filtering-column equals|contains filtering-value using name name
The "where" part specifies which table row to select — the one having a value equal to or containing the filtering-value in the column filtering-column — while data-column is the column of the selected row containing the final value. The name is a label associated with the value.
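The selection rule above can be sketched as a small function that evaluates a definition against a report instance's rows. Column indexes and the equals/contains operators mirror the definition form; the matching logic is illustrative.

```javascript
// Evaluate a "select ... where ... equals|contains ..." definition
// against a table: find the first matching row, return its data column.
function applyDefinition(rows, def) {
    for (var i = 0; i < rows.length; i++) {
        var cell = String(rows[i][def.filteringColumn]);
        var matches = def.op === 'equals'
            ? cell === def.filteringValue
            : cell.indexOf(def.filteringValue) !== -1;
        if (matches) return rows[i][def.dataColumn];
    }
    return null; // no row matched
}

var countryRows = [
    ['us', 120],
    ['uk', 34]
];
var value = applyDefinition(countryRows, {
    dataColumn: 1, filteringColumn: 0, op: 'equals', filteringValue: 'uk'
});
// value -> 34
```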
Creating new definitions automatically
Sometimes the full content of a report instance should be visualized. If your report contains the top three countries ordered by the number of users, the selection of countries made for one day might not match the countries present in the next day's report, leaving the new country's data unvisualized.

To prevent this, when all values from a single column are selected, the column becomes highlighted and an option, shown after definitions, becomes activated:

create new definitions for column data-column by column filtering-column
When a new value in filtering-column is present in a report instance, a new definition selecting data-column will be created.
Using tags
In many cases the same report should be sent from multiple sources — servers, devices, processes. Tags are used to identify a source. A tag is a string label attached to a report instance. If a tag contains the : character, the part before the character is treated as a property name and the part after it as a property value. Examples:
  • a tag ip: identifies a server / container by an IP address
  • tags microservice:search hostname:inst01 pid:2145 identify a microservice instance
  • a tag important can be used to mark report instances which should be shown on a dashboard

Tags should be passed as the tags query parameter value, with the , (comma) character used as a separator. Up to three tags can be attached to a single report instance.
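The tag rules above can be sketched as a small parser: split the tags parameter on commas, keep at most three tags, and split each tag on the first : into a property name and value when present. This is illustrative client-side logic, not the service's implementation.

```javascript
// Parse a `tags` query parameter value into tags; tags following the
// `name:value` convention are also split into properties.
function parseTags(tagsParam) {
    return tagsParam.split(',').slice(0, 3).map(function(tag) {
        var i = tag.indexOf(':');
        return i === -1
            ? { tag: tag }
            : { tag: tag, name: tag.slice(0, i), value: tag.slice(i + 1) };
    });
}

var parsed = parseTags('microservice:search,hostname:inst01,important');
// parsed[0] -> { tag: 'microservice:search', name: 'microservice', value: 'search' }
// parsed[2] -> { tag: 'important' }  (no property name/value)
```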

For example, to indicate that a disk space report comes from a specific server, we could use the following API URL:
Computing tag values in practice
When report instances are sent from your own code, you have full control over specifying tag values. Things like IP addresses, PID values, container IDs can be usually retrieved using library calls. For example, the following Node.js code sends a status and memory report with a PID value in a tag:

var restler = require('restler');

setInterval(function() {
    var tags = ['microservice:search', 'pid:' + process.pid];
    restler.postJson('', {
            status: 'ok',
            memory: process.memoryUsage()
        }, { query: {
            tags: tags.join(','),
            key: '9fxvMi8aR3CZ5BsNj0rt0odW'
        }});
}, 60000);

When cron is used for sending reports, commands are run through a Unix shell, allowing computing tag values dynamically and sharing a single crontab definition among multiple servers, for example:

*/30 * * * *    df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-$(hostname)
The tags=host:$(hostname) part will be expanded by the shell when cron runs the command: $(hostname) will be replaced with the output of the hostname command.
Automatically creating ip tag
When the autotags query parameter is set to ip, a tag ip:<ip-address> will be attached to the report instance, where <ip-address> is the public IP address of the calling host. The API URL from the previous example could be replaced with one setting autotags=ip, and the report instances would be organized around IP addresses instead of hostnames.
Specifying tags for a dashboard tile
When report instances with tags attached exist for the selected report, you will be able to select tags that will be used as a filter — only instances having the tags will be used as the tile's input. This allows creating tiles with data coming from specific sources, like a selected IP address, or marked with tags like important.
Automatically creating dashboard tiles for similar tags
The main feature of tags is the ability to automatically create dashboard tiles for similar tags. The "similarity" means sharing the same prefix — the part until the : character, and differing in the remaining part. Thus, if the convention of using the : character to separate a property name from a property value is followed, a dashboard can display a tile for each distinct property value. For the example diskfree report this means a dashboard tile showing disk space usage would be created for each IP address.
Sample auto-created dashboard

To configure automatic creation of dashboard tiles, the following steps are required:

  1. Create and configure a tile that will serve as a template. Create a tile for the wanted report and specific tags (for example, a report diskfree and tag ip: could be selected). Configure the tile's data, visualization type and colors by clicking the settings icon, and set the wanted size.
  2. Check "Create dashboard tiles for similar tags" option in the tile's settings dialog. For each chosen tag you must specify how the tag value should be treated when deciding if a new tile should be created by copying the template tile:
    • <property-name>:* — a tag having the same <property-name> prefix, but distinct property value (a postfix — the * part) will cause creation of a new tile. In the diskfree example, choosing ip:* will cause the creation of a tile for each IP address.
    • <property-name>:<property-value> — a report instance must have the exact tag in order to be considered for the creation of a new tile. This is useful when more than one tag is being attached to report instances. If tags microservice:<name> and hostname:<host> are attached to each report instance, then choosing microservice:search (the exact match) and hostname:* (the property name match) will cause creating a tile for each distinct hostname, but only if the specific tag microservice:search is present.
    • * — the tag is treated as not conforming to the convention of having a property name separated from a property value by the : character. Any different tag value will cause creation of a new tile. This is useful for tags like important — attaching different tags, like critical, will cause the creation of a new tile.

    Autocreating new tiles
  3. Wait until report instances with new tags are submitted. The report instances triggering the creation of new tiles must be received. A browser refresh is not required, nor does a dashboard need to be open in a browser.
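The three matching rules above can be sketched as a function that decides whether an instance's tags satisfy a template tile's tag patterns. The function name and return shape are illustrative; only the pattern semantics come from the rules above.

```javascript
// Decide whether instance tags satisfy template patterns:
//   'name:*'     -> any tag with that property-name prefix
//   'name:value' -> that exact tag must be present
//   '*'          -> any tag without the name:value convention
// Returns the matched tags, or null if any pattern is unsatisfied.
function matchTemplate(patterns, instanceTags) {
    var matched = [];
    for (var i = 0; i < patterns.length; i++) {
        var p = patterns[i];
        var found = null;
        if (p === '*') {
            found = instanceTags.find(function(t) { return t.indexOf(':') === -1; });
        } else if (p.slice(-2) === ':*') {
            var prefix = p.slice(0, -1); // e.g. 'ip:'
            found = instanceTags.find(function(t) { return t.indexOf(prefix) === 0; });
        } else {
            found = instanceTags.indexOf(p) !== -1 ? p : null;
        }
        if (!found) return null;
        matched.push(found);
    }
    return matched;
}

// microservice:search must match exactly; any hostname value qualifies:
var m = matchTemplate(['microservice:search', 'hostname:*'],
                      ['microservice:search', 'hostname:inst02']);
// m -> ['microservice:search', 'hostname:inst02']
```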
Multiple template tiles on a single dashboard
If you need to configure multiple template tiles, you can either create a new dashboard for each template tile, or place them on the same dashboard. In the latter case tiles created from the same template tile will be grouped together and will be sorted by tag values.

You can also mix template tiles with regular tiles on a single dashboard. Automatically created tiles will alter the layout of the dashboard, but the content and the sizes of the regular tiles will be preserved.

Automatically deleting tiles having no new data
When tags are used to identify ephemeral services, like server instances with dynamically assigned IPs, the dashboard tiles created for them stop receiving new data after a service is destroyed. To set up automatic deletion of inactive tiles, use the option in the Dashboard Settings menu that sets the maximum time a tile can live without receiving new data.
Using tags for non-system reports
Most of the examples of using tags involved IP addresses, container IDs or PIDs. However, the tags mechanism can be used for creating dashboards displaying data obtained from high-level sources, like databases and APIs, for example:
  • stock market prices, with a dashboard tile created for each stock symbol. The following shell script creates appropriate report instances:

    for symb in "AAPL" "GOOG" "MSFT" "FB" "AMZN"; do
         # fetch ask, bid prices
         curl "$symb&f=a,b" | \
             curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- "$symb"
    done

  • for each blog post, a dashboard tile could be created showing a number of views and comments (this would require executing a database query for each blog post and setting a tag value to a post's title/id)
  • when extracting processing times from web server logs, the data could be grouped by URLs and average times for each URL could be shown in separate dashboard tiles (by using a URL as a tag value).
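The log-grouping idea from the last bullet can be sketched as follows: given (url, time) pairs extracted from a web server log, compute the average processing time per URL; each URL would then become a tag value on its own report instance. The function and field names are illustrative.

```javascript
// Average processing time per URL; each distinct URL would be sent
// as a tag value with its own report instance.
function averageByUrl(entries) {
    var sums = {};
    entries.forEach(function(e) {
        if (!sums[e.url]) sums[e.url] = { total: 0, count: 0 };
        sums[e.url].total += e.time;
        sums[e.url].count += 1;
    });
    return Object.keys(sums).map(function(url) {
        return { url: url, avg: sums[url].total / sums[url].count };
    });
}

var avgs = averageByUrl([
    { url: '/search', time: 120 },
    { url: '/search', time: 80 },
    { url: '/home', time: 30 }
]);
// avgs -> [{url: '/search', avg: 100}, {url: '/home', avg: 30}]
```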
Defining alarms
An alarm definition is Javascript code that is associated with a report and run for each received report instance. The instance's data is available to the code as global variables.

The alarm definition editor can be opened by clicking the Edit Alarm button in the REPORTS view.
Alarm definition editor

The upper window shows a report instance's data as global variables that can be accessed from the alarm definition. Clicking the Dry Run button simulates an actual alarm run — the alarm definition is executed against a report instance shown in the upper window, with the alarm events printed in the "Output" window and not actually triggered. The calendar icon and the "Previous" and "Next" buttons let you choose a report instance for the dry run.

Triggering alarms with alarm() calls
An alarm event is triggered with the alarm(alarmKey) call, where alarmKey is a string identifying an alarm condition. Multiple alarm calls with the same alarmKey will not trigger multiple notifications and will be grouped under a single active alarm.

The active alarms can be viewed in the REPORTS view by clicking the Active Alarms button under the Edit Alarm button. An active alarm can be resolved when the reason it was triggered is no longer present and it isn't expected to be triggered again. When an alarm with the same alarmKey is triggered again for the same report, a new active alarm is created and the configured notifications are sent once again.

Assuming the column with index 4 of the diskfree report contains the disk space usage as a percentage, the following alarm definition triggers an alarm if the used space exceeds 90% for the first partition:

if (parseFloat(rows[1][4]) > 90) {
    alarm('High disk usage for /');
}
(the parseFloat call is needed to convert a string like "85%" to a number 85).

If the usage exceeds 90% for multiple subsequent alarm runs, only the first call will trigger alarm notifications, because the alarmKey argument has the constant value "High disk usage for /".

A single alarm definition can trigger multiple alarms and alarm keys can be constructed dynamically. A sample alarm definition checking free disk space for each partition:

rows.slice(1).forEach(function(row) {
    if (parseFloat(row[4]) > 90) {
        alarm('High disk usage on ' + row[0]);
    }
});
The code will trigger a new alarm event only for a partition for which an active alarm doesn't exist yet.
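The grouping behaviour of alarm keys can be simulated locally: a set of active alarm keys stands in for the server-side state, and only the first alarm() call with a given key triggers notifications until the alarm is resolved. This is a sketch of the described semantics, not the actual implementation.

```javascript
// Simulate alarmKey grouping: alarm() returns true only when a new
// active alarm is created (i.e. notifications would be sent).
function makeAlarmTracker() {
    var active = {};
    return {
        alarm: function(alarmKey) {
            if (active[alarmKey]) return false; // grouped under existing alarm
            active[alarmKey] = true;
            return true;                        // new active alarm, notify
        },
        resolve: function(alarmKey) { delete active[alarmKey]; }
    };
}

var tracker = makeAlarmTracker();
tracker.alarm('High disk usage for /');   // -> true  (new active alarm)
tracker.alarm('High disk usage for /');   // -> false (already active)
tracker.resolve('High disk usage for /');
tracker.alarm('High disk usage for /');   // -> true  (triggered again)
```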
Attaching details to an alarm

It's sometimes useful to attach additional details to an alarm event, for example the current date. The creation date of a report instance is available as the created global variable, so we could write:

if (parseFloat(rows[1][4]) > 90) {
    alarm('High disk usage for / on ' + created);
}

This code will, however, cause the creation of a new active alarm (and alarm notifications) for each report instance (because the alarmKey will have a unique value for each run). It's rarely the wanted behaviour. An alternative is to use the two-argument call alarm(alarmKey, details):

if (parseFloat(rows[1][4]) > 90) {
    alarm('High disk usage for /', 'This happened on ' + created);
}
The alarm event will be triggered only once (because the alarmKey has a constant value), and the details argument — a custom message — will be included in alarm notifications.
Alarms triggered in case of runtime errors
Alarms with the following alarm keys are automatically triggered in case of runtime errors:
  • Javascript error — triggered when an execution of an alarm definition fails because of a programming error. Details will contain an error message and a stack trace.
  • Failed to execute alarm — triggered when an execution of an alarm definition didn't complete in 5 seconds and was terminated. This could be caused by one of the following reasons:
    • the execution of the alarm used a lot of CPU (infinite loops, complex algorithms) or a lot of memory (at least 100MB of RAM is available for each alarm execution).
    • a lot of time was spent waiting for HTTP/API calls to finish. The get(), post(), put(), delete_() functions are executed synchronously and count against the limit, while the functions supporting integrations, like slack() or pagerduty(), are run asynchronously and don't count against the 5-second limit.
    • the system is not operating correctly and couldn't allocate resources for the alarm execution. Please check the operational status page to see if there are any known problems we are working on.
Using print() calls for debugging
Using print() function in alarm definitions

The print function can be used for displaying messages in the output window during a "dry run", to help debugging and investigating Javascript code.

The print statements can be left in the code that is executed in the "real run" mode — the calls will have no effect.

Calling the API from alarm definitions
An alarm definition can access not only the data of the current report instance (through global variables), it can also make API calls. This is a powerful feature allowing access to historic and other reports' data, using the storage API and even submitting new report instances.

The get, post, put, delete_ functions are available for making HTTPS calls with the appropriate HTTP method. The first argument is a path component of the API URL (the host part should be omitted). For example, the following code fetches the last three report instances and checks if disk usage exceeds 90% for all of them:

var exceeded = get('/reports/diskfree/instances', {order: 'desc', limit: 3}).json.result.every(function(instance) {
    return parseFloat(instance.rows[1][4]) > 90;
});
if (exceeded) {
    alarm('High disk usage for 3 consecutive checks');
}
The /reports/diskfree/instances is an API URL path for accessing historic report instances, and the object {order: 'desc', limit: 3} specifies query parameters for fetching the last three instances. The get call returns an HTTP response as an object, with the json attribute holding the response content converted from JSON. The result attribute of the converted response contains the actual API call result — as described in the API usage guidelines. Finally, the every function checks if the condition holds for each report instance (it's a built-in method of Javascript arrays).

Since a call post('/reports/new_report', data) creates a new report instance belonging to a specified report, an alarm definition can be used for postprocessing report instances and/or combining data from different reports and submitting the result as a new report instance. The following example creates a Markdown-formatted report from the diskfree report:

var usage = parseFloat(rows[1][4]);
var s = 'Disk usage as of *'+created+'*: **'+usage+'%**';
if (usage > 80) {
    s += ' <font color=red>WARNING</font>';
}
post('/reports/diskfree_formatted', s, {'format': 'markdown'});

Some other ways the API can be used in alarm definitions:

  • accessing another report's data via the URL /reports/<report-name>/instances (see description)
  • checking instances from a specific date range, for example last 3 days (using the from and to query parameters — English phrases are accepted as date specifications, so the from parameter could be specified as the string 3 days ago)
  • checking if an alarm is already active for a report
  • using the storage API for preserving state between alarm runs
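The last bullet, preserving state between alarm runs, can be sketched as a check for consecutive failures backed by the storage API. The get/put functions are injected stubs here; in a real alarm definition they are the built-in API call functions, and the key '/storage/fail_count' is a made-up example.

```javascript
// Count consecutive failures across alarm runs using the storage API;
// returns true once the threshold is reached. `api` is an injected
// stand-in for the built-in get()/put() functions.
function checkConsecutiveFailures(isFailing, api, threshold) {
    var resp = api.get('/storage/fail_count');
    var count = resp && resp.result ? parseInt(resp.result, 10) : 0;
    count = isFailing ? count + 1 : 0; // a success resets the counter
    api.put('/storage/fail_count', String(count));
    return count >= threshold;
}

// In-memory stand-in for the storage API, for local experimentation:
function memoryStorage() {
    var data = {};
    return {
        get: function(k) { return { result: data[k] }; },
        put: function(k, v) { data[k] = v; }
    };
}

var api = memoryStorage();
checkConsecutiveFailures(true, api, 3);   // -> false (1 failure)
checkConsecutiveFailures(true, api, 3);   // -> false (2 failures)
checkConsecutiveFailures(true, api, 3);   // -> true  (3 in a row)
```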

See the API reference for documentation of API endpoints and the Javascript alarms reference for documentation of Javascript functions available in alarm definitions.

Making external HTTP calls (webhooks)
The functions get, post, put, delete_ that are used to call the API also work for making external HTTP/HTTPS calls. When the first argument is not a path but a full URL starting with http:// or https://, an HTTP call will be made to the host and path pointed to by the URL, for example:

var out = post('', undefined, {arg1: 10});
if (out.code != 200) {
    alarm('Error when calling');
}
This feature allows integration with third-party APIs, or your in-house services. It will not work on trial accounts due to security reasons.
PagerDuty, Slack integrations
If you need advanced incident management features (like on-call schedules or phone notifications), the simplest solution is to set up the PagerDuty integration.

Slack can also be set up as a destination for alarm notifications. The slack() function can be used to send arbitrary messages from an alarm definition.

Using the Storage API
A simple key-value storage API is available for storing arbitrary data. It's useful when state needs to be preserved between test or alarm runs.

The URL of a stored item is /storage/<key>, where <key> is an arbitrary string identifying an item. The HTTP PUT method is used to set an item's value, and the GET method retrieves its content. An item's data can be UTF-8-encoded text or arbitrary binary data.

For example, to save the number of lines of the access.log file directly from a shell, the following command can be issued:

cat access.log | wc -l | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @-

To retrieve the saved content, a GET request must be issued:

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:

{
   "success": true,
   "result": "3245\n"
}

The standard API response is a JSON document, which might be hard to parse when not using a fully featured programming language. To make the API return the stored value directly, the output query parameter should be set to raw. For example, the following line will set the Bash NUM_LINES variable to the value returned by the API:

NUM_LINES=$(curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: )

Using the output=raw option is also required for retrieving binary data (not encodable as UTF-8).

The Storage API can be easily called from an alarm definition, allowing saving values that will survive between alarm runs:

var sum = rows[0].reduce((a, b) => a + b);
put('/storage/sum', sum);
Integrations
Integrations allow sending alarm notifications to third-party services, synchronizing the state of active alarms and sending data (like custom messages) directly from a Javascript alarm definition.

Available integrations:

The default configuration enables receiving alarm notifications by email. Each team member can control whether notifications are sent to the email address used when signing up. The option is available on the SETTINGS / PREFERENCES page:
The ability to receive alarm notifications by email can be disabled globally (for all team members) by a team owner. To access the option, navigate to the SETTINGS / INTEGRATIONS page, and click the "email" box. The option is called "Enable receiving alarm notifications by email".
A Slack channel can be configured as a destination for alarm notifications. Additionally, the slack(message) function can be used in alarms to directly send a Slack message.
To set up the Slack integration, you must log in as the team owner, navigate to the SETTINGS / INTEGRATIONS page, and click the "slack" box.
Add to Slack button
An "Add To Slack" button will appear; click it. You will be redirected to Slack. If you are not logged in to Slack, you will have to sign in and select a team. Otherwise, a configuration screen will appear. Messages will be posted to the channel selected as the "Post to" option. After clicking "Authorize" you will be redirected back, and you should receive a message confirming that the Slack integration is set up.
Slack configuration screen
Options controlling whether the Slack integration is enabled are available below the "Add to Slack" button. The option "Receive alarm notifications in Slack" controls whether a message is sent to Slack for each new active alarm. A button embedded in the notification enables marking the alarm as resolved directly from Slack.
Slack alarm notification

When the option "Receive Slack messages from slack(message) alarm calls" is checked, each Javascript call of the slack() function, like slack('We are in trouble'), will send a message to Slack. The message will be composed of the call's argument and the name of the report associated with the alarm definition containing the call.
Slack message from Javascript call
The PagerDuty integration allows triggering PagerDuty incidents for each active alarm and synchronizing the state of the incidents. Additionally, the pagerduty() function can be used in Javascript alarms to trigger a PagerDuty incident directly. The function can be used as a replacement for the alarm() function, bypassing the built-in handling of alarm events and ceding it to PagerDuty.
To access the PagerDuty integration configuration, you must log in as the team owner, navigate to the SETTINGS / INTEGRATIONS page, and click the "pagerduty" box.
PagerDuty integration options
What to monitor?
It's easy to monitor "custom metrics" — anything that an automated system is unable to collect because it requires application-level knowledge. The "custom metrics" will usually come from database query results, health-check scripts, API responses, or custom scripts outputting JSON or text. However, what exactly is worth monitoring in a typical web application? The following points give some hints and examples.

Some of the examples use the moniqueio command-line tool. While the tool is not required for regular usage (the API itself is sufficient for many use cases), it contains helpers for some specific tasks like collecting CPU usage or cron job monitoring.

Database query results
Data gathered by executing a database query repeatedly is a valuable source for graphs and alarms. The actual queries worth monitoring depend heavily on what your application is doing. Taking an e-commerce website as an example, this could be:
  • counting the total number of registered accounts, grouped by country or state (an alarm could check if there were no new registrations in a day — indicating a possible problem)
  • counting the total number of products, grouped by availability status (with an alarm checking if the number of immediately available products exceeds a given threshold)
  • computing the average price of a product, grouped by category.
How to send query results? Since many free-form formats are parsed automatically, the command-line tools for executing queries, like psql or mysql, can be used directly for creating a report instance. A sample crontab line could look like this:

*/30 * * * *      psql -U postgres -h -c "SELECT category, avg(price) FROM product GROUP BY category" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

What if, instead of plain SQL, you want to use a framework abstracting database access (like Rails, Django or Hibernate), or postprocess SQL results using your own code? You will need to create a stand-alone executable (script) that can be called from cron. You also have to decide if an HTTP call creating a report instance will be implemented in your code, or if the script will output data as JSON or text and the output will be piped to a curl invocation. The front page shows examples of sending Rails/Django query results from Ruby/Python.

API responses
Monitoring your own API service is important — whether it's for internal use by other software components or by external users, ensuring its correctness and quality is crucial. It's also a good idea to set up monitoring of third-party APIs, since modern software is often tightly integrated with them and depends on their correct functioning.

When it's sufficient to monitor only the content of API responses, the curl command can be used for both fetching the response body and submitting it to the API:

 curl "" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ""

If you need to monitor not only the response content, but also meta-data like HTTP status codes, HTTP headers, or timing data, you can use the moniqueio curl command. The helper command wraps the real curl invocation and prints a report containing meta-data about the executed HTTP request. The invocation is as simple as the previous one:

 moniqueio curl "" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ""

When multiple API endpoints must be monitored, tags can be used to automatically create dashboard tiles for each monitored endpoint (a tag would need to contain an endpoint name or ID).

Health-check and unit test run results
Health checks perform a test on a software component and tell if the component is working properly, for example:
  • if a website's HTML content contains an expected text
  • if an HTTP API call returns with a 200 status code and has expected content
  • if a microservice responds to a "ping" request and uses less memory than a given threshold

Health checks can be implemented as stand-alone scripts that send an ok / fail value as a report instance's input. A chart added to a dashboard will draw ok values as 1 and fail values as 0.

An alternative to creating standalone scripts is writing health checks as unit tests (or you may already have integration tests written in the form of unit tests that can serve as health checks). How to send unit test run results? The moniqueio command-line tool provides a helper command unittest_summarize that parses the output of popular unit test runners into JSON, suitable for submitting as a report instance's input.

Tags are useful when health checks results are reported for multiple instances of a service (a microservice, a container etc.). An automatically created dashboard can show the health of all currently running instances (either the current health or historic data, using charts or text).
Health-check statuses for all instances of a service

If you want to trigger an alarm for each received fail value, the alarm definition will be simple:

if (rows[0][0] != 'ok') {
    alarm('Health check failed');
}

If an alarm should be triggered for each tag values combination, the tags should be included in the alarm key:

if (rows[0][0] != 'ok') {
    alarm('Health check failed for ' + tags.join(','));
}

If you want to check historical health check values before triggering an alarm, for example to check if all checks failed for the last hour, you will need to make an API call that fetches report instances. To specify a date and time from which the instances should be returned, you can create a specific Javascript Date object, or directly use one of the query parameters of the API call supporting human-readable date specifications:

var allNotOk = get('/reports/health_check/instances', {from: '1 hour ago', limit: 20}).
               json.result.every(instance => instance.rows[0][0] != 'ok');
if (allNotOk) {
    alarm('Health checks failed for the last hour');
}

System commands
Even when a system command produces output intended for humans, the service's AI algorithms parse it into a tabular representation that is often sufficient as a source for charts and alarms. This makes it possible to create report instances just by executing the command and sending its output to the API using curl.

Since multiple input formats are parsed automatically (e.g. ASCII tables, whitespace-aligned tables, CSV files), many types of commands can be used to produce a report instance's input, for example:

  • commands inspecting system state, like ps, free or lsof, possibly used together with grep to filter sent lines
  • commands supporting deployment tools, like aws, docker or supervisorctl
  • your own commands that output some summary of your data.

What about the correctness of the automated parsing? There are certainly cases when a free-form input is too irregular to guess a "correct" tabular representation, or when different runs produce outputs too dissimilar to define the wanted chart series and alarms. In that case you can try to manually specify the input format or a field delimiter. If that is not sufficient, you will have to parse the input into a more robust format, like JSON or CSV.
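That last fallback can be a very thin pre-parsing step. A minimal sketch in Python, assuming fields are separated by runs of whitespace (an assumption about your data, not a rule of the service):

```python
import csv
import io
import re

def to_csv(raw_output):
    # Split each non-empty line on runs of whitespace and emit CSV,
    # giving the API an unambiguous format=csv input.
    buf = io.StringIO()
    writer = csv.writer(buf)
    for line in raw_output.splitlines():
        if line.strip():
            writer.writerow(re.split(r"\s+", line.strip()))
    return buf.getvalue()
```

The resulting string can be piped to curl with the format=csv query parameter, removing any guesswork from the parsing.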

Logfiles often contain interesting data that, after filtering and summarizing, can be submitted as a report, for example:
  • a number of times a given event was logged
  • an average processing time (computed by extracting processing times from logfile lines)
  • all lines containing ERROR string (submitted using format=single query parameter)

When processing logfiles repeatedly, using cron, usually only the lines that appeared in a logfile since a previous invocation should be taken into consideration. The newcontent command of the moniqueio tool is a helper that implements this behaviour.
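The behaviour boils down to remembering a byte offset between invocations. A simplified sketch in Python (a hypothetical re-implementation for illustration, not the actual newcontent code):

```python
import os

def new_lines(logfile, statefile):
    # Read the offset reached by the previous invocation.
    offset = 0
    if os.path.exists(statefile):
        with open(statefile) as f:
            offset = int(f.read() or 0)
    # If the logfile shrank, it was rotated or truncated: start over.
    if os.path.getsize(logfile) < offset:
        offset = 0
    with open(logfile) as f:
        f.seek(offset)
        content = f.read()
        end = f.tell()
    # Remember where we stopped for the next invocation.
    with open(statefile, "w") as f:
        f.write(str(end))
    return content
```

Each cron run then submits only the freshly appended lines as a report instance's input.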

Cron jobs
A simple notification indicating a successful run of a command can be sent using the && shell operator (which executes the right-hand side only if the left-hand side exits with return code 0):

<command> && (echo ok | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- "")

A more robust way to do it is by using the run command of the moniqueio tool, which captures the return code, stdout/stderr samples and the elapsed time.

Both ways can be used in crontab files, allowing creating dashboard tiles displaying a history of job runs and defining alarms checking return statuses (or stdout/stderr contents).
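What such a wrapper needs to capture can be sketched in Python (a hypothetical stand-in for illustration, not the actual moniqueio run implementation):

```python
import json
import subprocess
import time

def run_report(command):
    # Execute the command and summarize the run as a JSON document
    # suitable for submitting as a report instance's input.
    start = time.time()
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return json.dumps({
        "command": command,
        "returncode": proc.returncode,
        "ok": proc.returncode == 0,
        "elapsed_seconds": round(time.time() - start, 3),
        "stdout_sample": proc.stdout[:200],
        "stderr_sample": proc.stderr[:200],
    })
```

The JSON document can then be POSTed with the format=json query parameter, making fields like returncode available to charts and alarm definitions.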

Tags are useful when dealing with the same cron job run on multiple servers. Additionally, a tag could contain a name of a job, allowing automatic creation of a dashboard tile for each added job, for example:

10 2 * * *    moniqueio run | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ""
System resources usage (a replacement for automated monitoring systems)
The moniqueio tool comes with the sysreports command that sends reports summarizing resources usage of a Linux host: CPU usage, disk usage, free disk space, network throughput etc. The command allows adding tags, supporting auto-creating dashboards displaying system-level data of multiple, possibly ephemeral, server instances.

When a specific system metric should be monitored, it's usually very easy to push it to — the output of many system commands can be sent directly, as well as the content of system files (e.g. inside the /proc directory).

Javascript alarms
An alarm definition is Javascript code that is run for a report instance. The code can access global variables and call functions specific to the service's functionality, as well as Javascript built-ins. The Javascript engine supports the ECMAScript 6 standard.
Global variables
An alarm definition is executed in a context of a current report instance, whose data is available as the following global variables:
  • reportName (a string) — the current report's name
  • id (a string) — a hex string identifying the current instance
  • tags (an array of strings) — tags attached to the current instance
  • created (a Date object) — creation date of the current instance
  • rows (an array of arrays of JSON values) — the current instance's tabular representation - an array of rows, where each row is an array of cells. Each cell is a JSON-serializable object. If the instance's input was textual, cells will be strings. If it was JSON, the content type will match the source type.
  • header (an array of integers) — indexes of the rows array that identify rows that are regarded as header rows (not containing regular data). This might not be accurate, because it's based on automatic detection (if not set manually).
  • input (a JSON value) — the original input from which the tabular representation was created. It's especially useful when the original input was a JSON document, because it's usually easier to access it directly than to deal with a tabular representation.
Alarm-specific functions
alarm(alarmKey [, details])

Trigger an active alarm with an alarm key alarmKey (a string) and the optional message details (a string). See user guide for a description.


print(message)

Print a message (a string) to the output window when running an alarm in a "dry run" mode.

get(pathOrUrl [, params])

post(pathOrUrl, data [, params])

put(pathOrUrl, data [, params])

delete_(pathOrUrl [, params])

Perform an HTTP/HTTPS call to either the API, or an external URL.

The pathOrUrl argument can have two forms:

  • if it's not a full URL but a path, it's interpreted as a path component of the API URL. An API key doesn't need to be set explicitly.
  • if it's a full URL starting with http:// or https://, it is interpreted as a full external URL

params is an optional argument - an object specifying query parameters (object keys are parameter names and object values are parameter values).

In the case of the post and put functions, data is a string containing data that is sent as request data.

Result: an object representing an HTTP response, having the following properties:

  • code (a number) — the HTTP status code
  • content (a string) — the textual content
  • json (a JSON value) — the content, assumed to be JSON, converted to Javascript datatypes (null if the content is not JSON)

Example. A call creating a report instance that sets some query parameters and checks a response:

var r = post('/reports/numbers', '10__20__30', {delimiter: '__', tags: 'important,ip:'});
if (!r.json.success) {
    alarm('API call failed');
} else {
    print('Parsed nums: ' + r.json.result.rows);
}

Convert a string input into a Date object. The input can have multiple formats and the conversion works correctly for inputs like today, 3 days ago, 2016-02-01, 02/01/16, Mon Feb 01 2016 08:15:00 GMT+0200. The function is useful when a Javascript Date must be constructed from a datetime string that is not straightforward to parse.


asNumber(input)

Convert a string input into a number. The function does the following conversions:

  • percent values are converted to a fractional representation — asNumber('32%') gives 0.32
  • file size units like KB, MB, GB are converted to a byte value — asNumber('4 kB') gives 4096
  • if a string mixes words and numbers, the first number is extracted — asNumber('items: 4 or more') gives 4
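For illustration, the documented conversions could be reproduced in Python roughly as follows (an approximation; the real function's parsing rules may differ in edge cases):

```python
import re

UNITS = {"kb": 1024, "mb": 1024**2, "gb": 1024**3}

def as_number(text):
    # Find the first number, optionally followed by a percent sign
    # or a file-size unit.
    m = re.search(r"(-?\d+(?:\.\d+)?)\s*(%|[kmg]b)?", text, re.IGNORECASE)
    if not m:
        return None
    value = float(m.group(1))
    unit = (m.group(2) or "").lower()
    if unit == "%":
        return value / 100          # percents become fractions
    if unit in UNITS:
        return value * UNITS[unit]  # file-size units become byte counts
    return value                    # otherwise, the first number found
```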

slack(message)

Directly send a text message to the configured Slack channel. Requires setting up Slack integration on the SETTINGS / INTEGRATIONS page and checking the "Receive Slack messages from slack(message) alarm calls" option.

Alarm notifications can be received as Slack messages automatically when the option "Receive alarm notification in Slack" is checked. This function can be used to send a Slack message directly, without triggering an active alarm. Please note that while multiple alarm() calls can be grouped as a single active alarm and result in a single notification, each slack call sends a separate Slack message, possibly resulting in a large number of received messages.


var maxNumber = Math.max.apply(null, rows[0]);
slack('The max number for today is ' + maxNumber);
pagerduty(incident_key, [description, [details]])

Directly trigger a PagerDuty incident. Requires setting up PagerDuty integration on the SETTINGS / INTEGRATIONS page and checking the "Create PagerDuty incidents from pagerduty() alarm calls" option.

The function can be called with 1, 2 or 3 arguments. The arguments have the same meaning as described in PagerDuty documentation:

  • incident_key — a globally unique identifier of the event. All events triggered with the same incident_key will be grouped as a single PagerDuty incident.
  • description — text describing the incident. If only the incident_key argument is passed, the description has the same value as the incident_key.
  • details — arbitrary JSON document included in the incident.

The pagerduty() function can be used to bypass the service's handling of alarms when it replaces alarm() calls. The two functions have very similar semantics: both rely on the "key" argument to uniquely identify an event. The difference is that each report has its own namespace of alarmKeys, while PagerDuty incident_keys are global.


// Trigger an incident with both 'incident_key' and 'description' equal to "api1 fail"
pagerduty('api1 fail');

// Use a dynamically constructed 'incident_key' and 'description'
pagerduty('web: wrong status code: ' + input.status_code, 'response: ' + input.content.slice(0, 100));

// Pass the 'details' argument
pagerduty(tags[0] + ' out of diskspace', rows[1][4], {datetime: created + '', tags: tags});
Javascript built-ins
All functions built into the Javascript language are available, as described in books and online references. Please note that the objects and the functions present when Javascript is run by a web browser, such as the window object or the alert function, are not available.

Some of the useful built-ins include parseFloat and parseInt, Math functions like Math.max, JSON.parse and JSON.stringify, regular expressions, and array methods like map, filter and every.

API usage guidelines
The API is JSON-based and uses REST principles. HTTPS protocol is required to access the API (using HTTP will give a Bad Request error).
Passing an API key
An API key can be passed using two methods:
  • HTTP Basic Authentication — an API key should be specified as either a username or a password
  • URL query parameter key
The security of the authentication mechanism is based on using the encrypted HTTPS protocol, as well as keeping the API key secret. If your API key has been exposed, you can reissue it on the SETTINGS page.
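Both methods can be sketched in Python; the URL below is a placeholder for the real API URL, and the key is the documentation's sample key:

```python
import base64
import urllib.parse

API_KEY = "9fxvMi8aR3CZ5BsNj0rt0odW"     # the documentation's sample key
url = "https://example.invalid/reports"  # placeholder: substitute the real API URL

# Method 1: HTTP Basic Authentication, with the API key as the username
# and an empty password.
token = base64.b64encode((API_KEY + ":").encode()).decode()
auth_header = {"Authorization": "Basic " + token}

# Method 2: the key query parameter appended to the URL.
url_with_key = url + "?" + urllib.parse.urlencode({"key": API_KEY})
```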
Response format
Except when the response format is explicitly set with an output query parameter, all API calls return a JSON object with the following properties:
  • success (a boolean) — true if the call was successful and false if not. Checking this value can replace checking an HTTP status code.
  • result (a JSON value) — the actual result of an API call, usually an object or an array. Might be omitted from the response if an API call returns no result.
  • details (an object) — meta-data about the result. For example, it can contain a count of all returned items. When an error happens, it has the following properties:
    • message (a string) — a human-readable error message
    • errorCode (a string) — a short string identifying an error

An HTTP response code is set to signal a possible error:

  • 200 (OK) — a successful call
  • 400 (Bad Request) — invalid format of input parameters/data
  • 401 (Unauthorized) — invalid API key
  • 404 (Not Found) — wrong URL
  • 422 (Unprocessable Entity) — invalid input format (e.g. an empty input used as a report instance's input)
  • 429 (Too Many Requests) — too many requests performed in the last minute (the default limit is 1200 API calls per minute)
  • 500 (Internal Server Error) — internal error - the error was signaled to admins
Date, boolean input formats
The query parameters of API URLs can specify a date with time or a boolean. The API accepts the following input formats for them:
  • date with time — the canonical format is ISO 8601 (the format used in HTTP headers) with UTC timezone, for example 2016-02-01T08:15:30Z. For convenience, other formats will be also parsed, for example: 1/2/16, 2016-02-01 , as well as some English phrases, like 3 days ago, yesterday .
  • boolean:
    • inputs representing false — 0, false, f, no
    • inputs representing true — 1, true, t, yes
Paging results
When the API returns an array of results and the number of results is greater than 20, then only the first 20 results are returned and the next slices of results must be retrieved using separate API calls. The URL for fetching the next slice is set in the details object under a next key. If no further results are available, the value is set to null.

For example, the following Python code implements fetching all paged results:

import requests

def fetch_all(url, key):
    results = []
    while url:
        r = requests.get(url, params={'key': key})
        if not r.json()['success']:
            raise Exception('Invalid API response %s' % r.text)
        results += r.json()['result']
        url = r.json()['details']['next']
    return results

# Sample invocation:
# fetch_all('', '9fxvMi8aR3CZ5BsNj0rt0odW')

The paging is implemented with the lastId parameter that specifies the ID of the last item from the previous slice. The limit query parameter can be also specified to change the default size of 20 items of a single slice (the value must be between 1 and 50).

A JSON object returned from an API call can contain the href key with a value of an API URL containing details of the object.

For example, a call to /reports/diskfree/instances can return an array of objects containing the href key. Accessing the returned URL with the GET method will retrieve all details of the report instance.

POST /reports/<name>
Create a report instance belonging to a report <name>.

Input: The input can be of multiple formats (see format query parameter documentation below) and will be parsed into a table with optional header rows. It must be passed in one of the two ways:

  • as POST binary data
  • as a POST form value stored under a key specified with the formKey query parameter (the request data must be encoded with the content type application/x-www-form-urlencoded)

Query parameters (optional):

  • tags — a comma-separated list of tags attached to a report instance. Up to three tags are supported.
  • format — an input format type, one of:
    • any — the input is of unspecified type - the format will be guessed (the default)
    • json — the input is a JSON document. There are two normalized representations of an instance's data:
      • an array of rows, where each row is an array of JSON values
      • an array of rows, where each row is an object mapping column names (strings) to column values (JSON values)
      However, even if the input does not conform to a normalized representation but is a valid JSON document, it will be converted to a valid representation by filling in the missing parts with defaults, for example:
      • if the number of cells in each row is not equal, the shorter rows will be filled with null values
      • a single object or number will be treated as a single-row table
      Additionally, the input document is flattened to better match the tabular representation:
      • nested objects are flattened, with keys joined with a . (dot) character. For example, {"x": {"y": 8, "z": true}} is converted to {"x.y": 8, "x.z": true}
      • arrays of objects contained in outer objects are unnested. For example, {"x": [{"y": 1}, {"y": 2}]} is converted to [{"x.y": 1}, {"x.y": 2}]
      • the flattening rules are applied recursively
      For some types of inputs, the recursive unnesting of arrays might generate a document that is very big. If the number of generated rows exceeds 10000, the flattening is skipped — the input is processed using the jsonraw format.
    • jsonraw — the same as json, but the flattening is not applied. This is needed when a single table cell should be a nested object, or if the flattening generates too many rows. This is usually not very useful for drawing charts, but will be handled correctly by dashboard tiles displaying textual content.
    • csv — CSV format - each input line is a sequence of fields separated with a delimiter. The delimiter value is guessed or can be explicitly passed as the delimiter parameter.
    • ascii — the input is a table with cells separated with either whitespace or |, =, +, - characters. This is what the output of commands like ps, free or psql looks like.
    • asciitable — a subset of the ascii format, requiring usage of |, =, +, - characters for "drawing" table borders.
    • asciispace — a subset of the ascii format, requiring usage of spaces for aligning table columns.
    • props — each input line is treated as containing a key, value pair, possibly separated with a delimiter. Each row of the parsed table will have the two elements. Sample inputs conforming to this format: name=value lines, Java's *.properties files.
    • tokens — each token (a string separated with whitespace) is converted to a table cell. If the number of tokens in each input line is not constant, the shorter lines are filled with an empty string.
    • markdown — the input is stored in a single-cell table. Dashboard tiles displaying textual content will render it as Markdown.
    • single — like markdown, this converts the input into a single-cell table and is meant for displaying textual content, but the input will be rendered as raw ascii text.
  • header — a comma-separated list of integers - indexes of the report instance's rows that are header rows (the first index is 0). By default, a header is auto-detected. Having the header correctly specified isn't crucial - it's mostly used for displaying chart and table labels.
  • delimiter — a string that delimits fields (table cells) on a single input line. Specifying this option assumes the input has the format csv or tokens. This parameter is useful when the guessed delimiter value is wrong.
  • autotags — a comma-separated list of automatically computed tags. Currently the only supported value is:
    • ip — attach a tag ip:<ip-address>, where <ip-address> is the public IP address of the calling host.
  • link — a URL associated with the report instance. Clicking the title of a dashboard tile displaying the report instance will open this URL.
  • formKey — a POST form's key that stores a report instance's input to parse (this overrides the default behaviour of using request's data as direct input).
  • created — a timestamp set as the instance's creation date. This can't be a future date. By default, the value is equal to the datetime of the API call. This parameter is useful when importing historical data.
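The flattening rules described above for the json format can be sketched in Python (a simplified re-implementation for illustration; the actual parser's edge-case handling may differ):

```python
def flatten(value, path=""):
    # Nested objects: flatten keys, joining them with a '.' character.
    if isinstance(value, dict):
        rows = [{}]
        for key, sub in value.items():
            subpath = path + "." + key if path else key
            subrows = flatten(sub, subpath)
            # Combine with rows produced so far (unnested arrays multiply rows).
            rows = [dict(r, **s) for r in rows for s in subrows]
        return rows
    # Arrays of objects: unnest into multiple rows.
    if isinstance(value, list) and value and all(isinstance(x, dict) for x in value):
        rows = []
        for item in value:
            rows.extend(flatten(item, path))
        return rows
    # Scalars (and other values) become a single cell.
    return [{path: value}]
```

Applied to the documented examples, the sketch reproduces both the key-joining and the array-unnesting behaviour.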

# Create a report instance from 'free' command output. We are not specifying a format or other options explicitly.

free | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ''

# Send a CSV file, explicitly specifying a format and a header.

cat data.csv | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ',1'

# Import a historic report by specifying a 'created' date and tags.

cat report_old.json | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ',important'

# Send data encoded as POST form.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data 'mydata=1,2,3,4,5' ''

GET /reports
Fetch report names.

Query parameters (optional):

  • prefix — only fetch the reports with the name starting with the prefix
  • lastName and limit — used for paging results

Result: an array of objects with the keys:

  • name — report name
  • href — an API link to the report

# Fetch all report names (up to the default limit of 20)

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "name": "diskfree", 
    "href": ""
  }, {
    "name": "j3", 
    "href": ""
  }, {
    "name": "mydata", 
    "href": ""
  }]
}

GET /reports/<name>/instances
Fetch report instances of the report <name>.

Query parameters (optional):

  • from — fetch instances created on the specified date or later
  • to — fetch instances created on the specified date or earlier
  • tags — a comma-separated list of tags - fetch instances having the specified tags
  • expand — 0 or 1 (default) - whether the returned instances should contain the rows and header attributes containing the instance's tabular representation
  • expandInput — 0 (default) or 1 - whether the returned instances should contain the input attribute with the original instance's input from which the tabular representation was created
  • order — asc (default) or desc - the direction of ordering of the returned instances by a creation datetime (ascending / descending)
  • fromId — fetch instances starting from (and including) the given report instance id (preserving the specified order)
  • lastId — the same as fromId, but excludes the given report instance id
  • limit — limit the number of returned results to the specified number (default: 20)
Result: an array of objects with the keys:
  • id (a string) — an ID of the report instance
  • tags (an array of strings) — tags attached to the report instance
  • created (a string) — a creation date and time of the report instance
  • rows (an array of arrays of JSON values) — present if the expand parameter is true - a tabular representation of the report instance
  • header (an array of numbers) — present if the expand parameter is true - indexes of rows from the rows array that are header rows
  • input (a JSON value) — present if the expandInput parameter is true - the original instance's input from which a tabular representation was created. If the input was JSON, it's the original JSON value. If it was text, it's a string.
  • href — an API link to the report instance

# Fetch instances for the last two hours, without printing rows and headers.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "id": "bfd7b0deacf611e6a4dba4bf0107c608", 
    "tags": [], 
    "created": "2016-11-17T18:50:46.189488", 
    "href": ""
  }]
}

# Fetch the newest instance sent from a given IP address.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": ""
  }, 
  "result": [{
    "id": "bfd7b0deacf611e6a4dba4bf0107c608", 
    "tags": ["ip:"], 
    "created": "2016-11-17T18:50:46.189488", 
    "rows": [
      ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"], 
      ["udev", "8030116", "4", "8030112", "1%", "/dev"], 
      ["tmpfs", "1608264", "22256", "1586008", "2%", "/run"], 
      ["/dev/sda1", "22841212", "18064028", "3593852", "84%", "/"], 
      ["none", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
      ["none", "5120", "0", "5120", "0%", "/run/lock"], 
      ["none", "8041316", "493000", "7548316", "7%", "/run/shm"], 
      ["none", "102400", "20", "102380", "1%", "/run/user"]
    ], 
    "header": [0], 
    "href": ""
  }]
}

# Fetch the first instance created after an instance with a given id.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "details": {
    "next": null
  }, 
  "result": [{
    "id": "ac47b0dea45311e6a4dba4bf0107c612", 
    "tags": [], 
    "created": "2016-11-17T18:52:48.5833", 
    "href": ""
  }]
}
GET /reports/<name>/instances/<id>
Fetch details of a single report instance belonging to the report <name> and having the ID <id>.

Result: an object with the same keys as described above and as if the expand and expandInput parameters were true.

# Fetch data of a specific instance.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": {
    "id": "bfd7b0deacf611e6a4dba4bf0107c608", 
    "tags": [], 
    "created": "2016-11-17T18:50:46.189488", 
    "rows": [
      ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"], 
      ["udev", "8030116", "4", "8030112", "1%", "/dev"], 
      ["tmpfs", "1608264", "22256", "1586008", "2%", "/run"], 
      ["/dev/sda1", "22841212", "18064028", "3593852", "84%", "/"], 
      ["none", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
      ["none", "5120", "0", "5120", "0%", "/run/lock"], 
      ["none", "8041316", "493000", "7548316", "7%", "/run/shm"], 
      ["none", "102400", "20", "102380", "1%", "/run/user"]
    ], 
    "header": [0], 
    "input": "Filesystem     1K-blocks     Used Available Use% Mounted on\nudev             8030116        4   8030112   1% /dev\ntmpfs            1608264    22256   1586008   2% /run\n/dev/sda1       22841212 18064028   3593852  84% /\nnone                   4        0         4   0% /sys/fs/cgroup\nnone                5120        0      5120   0% /run/lock\nnone             8041316   493000   7548316   7% /run/shm\nnone              102400       20    102380   1% /run/user\n", 
    "href": ""
  }
}

GET /reports/<name>/alarmdefinition
Fetch content of the Javascript alarm definition for the report <name>.

Result: an object with the key:

  • source (a string) — the alarm definition as Javascript source code. It will be an empty string if there's no alarm defined for the report.

# Fetch Javascript alarm definition for a report

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": {
    "source": "if (parseFloat(rows[1][4]) > 90)\n{\nalarm('High disk usage for /');\n}"
  }
}

PUT /reports/<name>/alarmdefinition
Set content of the Javascript alarm definition for the report <name>. The content must be specified as PUT binary data.

# Set Javascript alarm definition from a shell prompt.

$ curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @- << EOF
if (rows.length > 10) {
  alarm('Too many rows');
}
EOF
{
  "success": true
}

GET /reports/<name>/alarms/active
Fetch active alarms for the report <name>.

Result: an array of objects with the keys:

  • alarmKey (a string) — a string identifying the alarm - the first argument of the alarm function
  • details (a string) — detailed message - the second argument of the alarm function that triggered the active alarm. It's null when the argument was omitted.
  • triggered (a string) — a date and time when the active alarm was triggered
  • count (an integer) — the number of times the alarm was issued (multiple alarm calls with the same alarmKey are grouped under a single active alarm)

# Fetch active alarms for a report.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''
{
  "success": true, 
  "result": [{
    "alarmKey": "High disk usage", 
    "details": "94%", 
    "triggered": "2016-11-17T18:50:46.189488", 
    "count": 4
  }, {
    "alarmKey": "Out of disk space", 
    "details": null, 
    "triggered": "2016-11-18T19:53:44.483132", 
    "count": 1
  }]
}

POST /reports/<name>/alarms
Trigger an alarm for the report <name>. It has the same effect as calling the alarm function from an alarm definition, including sending configured notifications.

Query parameters:

  • alarmKey and details — the same meaning as the arguments of the Javascript function alarm. The details parameter can be skipped.
  • forId — an ID of a report instance for which the alarm is triggered. If left empty, the latest report instance is used.

# Trigger an active alarm from a shell prompt.

$ curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST ''
{
  "success": true
}

POST /reports/<name>/instances/<id>/alarmdryrun
Run a Javascript alarm definition, submitted as POST data, in "dry run" mode, against a report instance with ID <id> belonging to the report <name>.

Result: an object with the keys:

  • errorEncountered (a boolean) — true if an error happened during execution of the alarm definition
  • exceptionMessage (a string) — if errorEncountered is true, it will contain an error message
  • alarm (an array, optional) — an array of objects with the keys alarmKey, details for each alarm function call
  • print (an array, optional) — an array of string messages issued using the print function calls

# Dry-run an alarm definition that makes an 'alarm' and 'print' call.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @- '' << EOF
print('Number of rows: ' + rows.length);
alarm('Always triggered');
EOF
{
  "success": true, 
  "result": {
    "errorEncountered": false, 
    "exceptionMessage": "", 
    "print": ["Number of rows: 11"], 
    "alarm": [{
      "details": null, 
      "alarmKey": "Always triggered"
    }]
  }
}

# Dry-run an invalid alarm definition.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @- '' << EOF
invalid code
EOF
{
  "success": true, 
  "result": {
    "errorEncountered": true, 
    "exceptionMessage": "SyntaxError: Unexpected identifier\n    at <alarm_definition>:1:8\n    invalid code"
  }
}
GET /reports/<name>/instances/<id>/alarmglobals
Fetch Javascript source containing Javascript globals used for an alarm run for a specific report instance with ID <id> belonging to the report <name>.

Query parameters (optional):

  • output — if set to raw, the result of the API call will not be a JSON document, but the Javascript source as text

Result: a string with the Javascript source.

# Get a report instance's data as Javascript global variables. 

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: ''

reportName = "diskfree";
id = "bfd7b0deacf611e6a4dba4bf0107c608";
tags = [];
created = new Date("2016-11-17T18:50:46.189488");
rows = [
  ["Filesystem", "1K-blocks", "Used", "Available", "Use%", "Mounted on"], 
  ["udev", "8030116", "4", "8030112", "1%", "/dev"], 
  ["tmpfs", "1608264", "22256", "1586008", "2%", "/run"], 
  ["/dev/sda1", "22841212", "18064028", "3593852", "84%", "/"], 
  ["none", "4", "0", "4", "0%", "/sys/fs/cgroup"], 
  ["none", "5120", "0", "5120", "0%", "/run/lock"], 
  ["none", "8041316", "493000", "7548316", "7%", "/run/shm"], 
  ["none", "102400", "20", "102380", "1%", "/run/user"]
];
header = [0];
input = "Filesystem     1K-blocks     Used Available Use% Mounted on\n" + 
    "udev             8030116        4   8030112   1% /dev\n" + 
    "tmpfs            1608264    22256   1586008   2% /run\n" + 
    "/dev/sda1       22841212 18064028   3593852  84% /\n" + 
    "none                   4        0         4   0% /sys/fs/cgroup\n" + 
    "none                5120        0      5120   0% /run/lock\n" + 
    "none             8041316   493000   7548316   7% /run/shm\n" + 
    "none              102400       20    102380   1% /run/user\n"; 
GET /storage/<key>
Fetch arbitrary content stored under the key <key>.

Query parameters (optional):

  • output — if set to raw, the result of the API call will not be a JSON document, but directly the content.

Result: the content stored under the key (set under the result attribute in the standard API response object, or returned directly as an HTTP response when output=raw is set). When the content is binary data, non-encodable as UTF8, the output=raw option must be set - otherwise the call will result in a 400 (Bad Request) error.

# Retrieve stored content.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:
{
  "success": true, 
  "result": "working"
}

# Retrieve the stored content directly, skipping creating a JSON response document.

curl --user 9fxvMi8aR3CZ5BsNj0rt0odW:

PUT /storage/<key>
Set content, sent as binary PUT data, under the key <key>.

# Store a string value.

echo -n working | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPUT --data-binary @- ''
{
  "success": true, 
  "result": {
    "href": ""
  }
}