Monitor what matters to you and your users. Monique.io makes it easy to set up dashboards, alerts and heartbeat checks for JSON data, SQL results or health checks.
Create account
How to monitor SQL query results? (1 line)
# directly send PostgreSQL query results

$ psql -c "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
Traditional monitoring system (15 lines + TODO)
import requests
import psycopg2

# Connect to the DB and execute the query
db = psycopg2.connect("dbname='' user='' host='localhost' password=''")
cursor = db.cursor()
cursor.execute("""SELECT country, COUNT(*) FROM user GROUP BY country""")
rows = cursor.fetchall()

# For each row, create a proper metric name and make an API call
for row in rows:
    country = row[0]
    count = row[1]
    metric_name = 'users_by_country.%s' % country
    requests.post('',
                  data={'metric': metric_name,
                        'value': count},
                  params={'api_key': '9fxvMi8aR3CZ5BsNj0rt0odW'})

    # TODO: we need to manually add metrics for new countries to a dashboard :(
    # TODO: decide on a metric naming scheme to handle more complex queries
    # TODO: rewrite the code to an installable plugin due to limitations of the API
A vast increase in productivity and happiness.

Setting up dashboards using a traditional monitoring system or a dashboard framework requires a lot of tedious work. For each SQL query or JSON endpoint, the work of creating proper metric names or a format accepted by a dashboard widget must be repeated. Monique.io requires vastly less code and is simply more enjoyable to use.

More knowledge about what is really happening.

If you set up a traditional monitoring system or an APM platform, some lower-level parts of your system will be monitored. However, this is just the tip of the iceberg: the most meaningful things happen in the application layer. APIs, microservices, databases and logs require custom monitoring.

Since it's so easy to push data from these sources into Monique.io, you will end up with a lot more meaningful information than the traditional tools alone can provide. It's good to know what's really going on with your product.

More (and better) sleep.

Traditional monitoring systems have good support for alerting on CPU usage and other system metrics. But if you want to set up checks on SQL results or API responses, you will find it either hard or impossible: these tools are not meant for such use cases. Monique.io's Javascript alarms let you define any alerting logic with ease. The checks that matter can actually be implemented; you shouldn't rely on your users to report errors when a monitoring system can handle it.

Making your monitoring setup more pleasant.

The limitations of traditional monitoring systems are often worked around by developing various in-house scripts that parse some data and check the health of services. However, the plethora of scripts, each doing its job in a different way and saving state to ad-hoc files, leads to a setup that is very hard to manage. Monique.io brings structure to custom monitoring. Health-check results are grouped into reports that can be automatically visualized on a dashboard and queried through the API, which also supports storing intermediate state needed by the checks.

Sample dashboard
See a live, read-only view of the dashboard (obtained using the dashboard sharing function)
The dashboard was created by putting three lines in the /etc/crontab file and making a few clicks to select the numbers to graph (no agent installation, no configuration):
# directly send 'ps aux' output
*/30 * * * *      ps aux | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# directly send 'mysql' output
*/30 * * * *      mysql -e "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# directly send health check result
*/30 * * * *      if curl -s http://web | grep -q 'Welcome!'; then echo OK; else echo FAIL; fi | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
It's incredibly easy to monitor various things using Monique.io. More examples:
Rails query results
API response
Microservice health check
Cron job
Unit test results
CPU, disk, memory
Django query results
SQL results
Ad-hoc data (Python)
Ad-hoc data (Unix)
require 'rest_client'

articles = Article.all

# Send a JSON representation of the results
RestClient.post '', articles.to_json, :params => {:key => '9fxvMi8aR3CZ5BsNj0rt0odW'}
# Directly send JSON API response.
curl | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# If you need to monitor not only the content of an API response, but also meta-data like status codes or headers, use the 'moniqueio curl' command. 'moniqueio' is a set of extra command line tools available at
moniqueio curl -XPOST --data 'key1=val1' | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-
var restler = require('restler');

setInterval(function() {
    // Tags identify a microservice instance. They allow auto-creating dashboards
    // with a tile created for each microservice instance.
    var tags = ['microservice:search', 'pid:' + process.pid];

    // Send a status report containing memory usage data.
    restler.postJson('', {
            status: 'ok',
            // Arbitrary JSON-serializable data can be included in the sent data.
            memory: process.memoryUsage()
        }, { query: {
            tags: tags.join(','),
            key: '9fxvMi8aR3CZ5BsNj0rt0odW'
        }});
}, 60000);
# 'moniqueio run' runs a command and outputs a JSON report summarizing the run, which can be submitted to the API.
10 0 * * *    moniqueio run | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Tags can be used to automatically create dashboard tiles for each job name (see sample dashboard below).
10 0 * * *    moniqueio run | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# Run unit tests, parse the results using the moniqueio tool, and send them to the API.

*/30 * * * * 2>&1 | moniqueio unittest_summarize | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# Send log lines containing WARN or ERROR. The 'newcontent' command outputs lines that appeared since the previous invocation. The 'format=single' query parameter assures the sent data is treated as a single piece of content.

*/5 * * * *    moniqueio newcontent /home/ubuntu/app.log | grep -E 'WARN|ERROR' | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ''
# Send system-level reports. The '--tag-ip' option tags sent reports with an IP address, making possible auto-creating dashboards for multiple IPs.

*/15 * * * * moniqueio --api-key 9fxvMi8aR3CZ5BsNj0rt0odW sysreports --tag-ip
from django.core import serializers
import requests

qs = models.Choice.objects.filter(question_id=1).all()

# Send results serialized as JSON, using the standard serializer
requests.post('', params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'},
              data=serializers.serialize('json', qs))
# Send PostgreSQL results
$ psql -c "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Send MySQL results
$ mysql -e "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Send Oracle results
$ echo "SELECT * FROM EMPLOYEE;" | sqlplus -s HR/oracle | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# Send MongoDB results
$ mongo --quiet --eval 'JSON.stringify( db.restaurants.find().limit(10).toArray() )' | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# Send Cassandra results
$ cqlsh -e "SELECT count(*) FROM user" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
import requests
import json

# send a single number
num = 86
requests.post('', params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'}, data=str(num))

# send a list of numbers
num_list = [86, 43, 32]
requests.post('', params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'}, data=json.dumps(num_list))

# send a dict
d = {'x': 86, 'y': 43, 'z': 32}
requests.post('', params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'}, data=json.dumps(d))

# send a nested dict
nd = {'x': {'nums': [1, 2], 'message': 'OK'}, 'y': 3.42}
requests.post('', params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'}, data=json.dumps(nd))

# send data from a string
cdata = """
us 120
uk 34
de 27
it -
"""
requests.post('', params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'}, data=cdata)
# Send kernel version (input doesn't need to be numeric - in general, parsed content can be any JSON value)
$ uname -r | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Send memory usage data
$ cat /proc/meminfo | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Directly send 'docker' command output
$ docker stats | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# Directly send 'aws' command output
$ aws ec2 describe-instances | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# Send disk status coming from 'smartctl' tabular output
$ sudo smartctl -A /dev/sda | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Send last 10 lines of logs, without parsing the input into a table
$ dmesg | tail -10 | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- ''

# A single dashboard tile can show a newest Markdown document sent, or a range of documents for the given time period.

$ curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @- '' << EOF
> Quick **code summary**:
>   * Javascript lines: **$(wc -l **/*.js | tail -1)**
>   * CSS lines: **$(wc -l **/*.css | tail -1)**
>   * HTML lines: **$(wc -l **/*.html | tail -1)**
> EOF
Examples of sending data to Monique.io are boring: whether it's a nested JSON document, an ASCII table containing SQL results or a text file, the same HTTP POST call handles the processing, even without specifying an input format. The data is automatically parsed into a tabular format and is immediately available for graphing and alerting.
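As a rough illustration of the idea (a naive sketch with hypothetical names, not the service's actual parser), whitespace-delimited input like the examples above could be split into rows like this:

```python
def parse_table(text):
    """Naively split whitespace-delimited text into rows of cells.

    A simplified illustration only; the real parsing also infers
    headers and value types automatically.
    """
    rows = []
    for line in text.strip().splitlines():
        cells = line.split()
        if cells:
            rows.append(cells)
    return rows

data = """
us 120
uk 34
de 27
"""
print(parse_table(data))  # [['us', '120'], ['uk', '34'], ['de', '27']]
```

Once the data is in this tabular shape, any cell or column can be selected for a chart or an alarm check.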

Adding charts to a dashboard is very easy: just click the values you want to graph.

(8-second video)
The sent data is available to Javascript code that can perform tests and trigger alarms. The "dry run" mode and print calls make the code easy to investigate and debug.

Other features include sending a custom message to a Slack channel, querying historical data, making HTTP requests, submitting post-processed data as a source for charts and other alarms, and synchronizing incidents with Slack and PagerDuty.

The extra features make the alarms a valuable tool for DevOps and ChatOps. The alarms are run in a robust environment with notifications sent for failures like Javascript errors or timeouts.
Defining alarms in Javascript
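The alarm code itself is written in Javascript inside the service; as a language-neutral sketch of the kind of check such an alarm might perform (all names here are hypothetical, shown in Python for brevity), consider an alarm over parsed SQL result rows:

```python
def check_user_counts(rows, min_total=100):
    """Alarm when the total user count across countries drops too low.

    `rows` are parsed report rows like [['us', '120'], ['uk', '34']].
    This mirrors the shape of an alarm check, not the real alarm API.
    """
    total = sum(int(count) for _, count in rows)
    if total < min_total:
        return {'alarm': True, 'message': 'only %d users total' % total}
    return {'alarm': False, 'message': 'OK'}

print(check_user_counts([['us', '120'], ['uk', '34']]))  # no alarm: total is 154
print(check_user_counts([['us', '50']], min_total=100))  # alarm: total is 50
```

The point is that the check operates directly on the parsed report data, so no metric-naming scheme is needed.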
It's very easy to monitor multiple instances of servers, containers or microservices. Attaching a tag, such as an ip: tag, to submitted data allows a dashboard tile to be created automatically for each tag value by copying a template tile.

The mechanism is very lightweight and isn't limited to selected technologies. It can be used to automatically create a dashboard tile for each monitored entity — whether it comes from a system-level area (an HTTP endpoint, a cron job, a process) or a higher-level source (customer's data, stock price, an offered product).

Extra features include setting a maximum time a tile can live without receiving new data (useful for ephemeral entities that should disappear from a dashboard after deletion) and controlling whether the X and Y axes of charts should be synchronized.
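The tag-to-tile mechanism can be sketched roughly as follows (a hypothetical illustration, not the service's actual code): incoming reports are grouped by tag value, and each distinct value gets its own tile copied from the template:

```python
def tiles_for_reports(reports, tag_prefix='ip:'):
    """Return the sorted tile identifiers implied by tagged reports.

    Each distinct tag value matching the prefix corresponds to one
    auto-created dashboard tile.
    """
    tiles = set()
    for report in reports:
        for tag in report.get('tags', []):
            if tag.startswith(tag_prefix):
                tiles.add(tag)
    return sorted(tiles)

reports = [
    {'tags': ['ip:10.0.0.1'], 'data': {'cpu': 12}},
    {'tags': ['ip:10.0.0.2'], 'data': {'cpu': 48}},
    {'tags': ['ip:10.0.0.1'], 'data': {'cpu': 15}},
]
print(tiles_for_reports(reports))  # ['ip:10.0.0.1', 'ip:10.0.0.2']
```

A new server that starts reporting with a fresh ip: tag simply shows up as a new tile; nothing needs to be configured by hand.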
Sample dashboard from microservice health checks
If a server sending a health-check report dies, or a cron job is misconfigured, data stops arriving. A heartbeat check ensures that an alarm will be triggered in such cases.

The checks can be configured for specific tags, which can contain a source IP address, a cron job name or a process PID.
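Conceptually, a heartbeat check boils down to comparing the time of the last received report against an allowed silence interval; a minimal sketch (hypothetical names, not the actual implementation):

```python
import time

def heartbeat_status(last_report_ts, max_silence_seconds, now=None):
    """Return 'ok' if a report arrived recently enough, else 'alarm'."""
    now = time.time() if now is None else now
    return 'ok' if now - last_report_ts <= max_silence_seconds else 'alarm'

# A cron job reporting every 30 minutes, checked with a 40-minute allowance:
now = 1_000_000
assert heartbeat_status(now - 25 * 60, 40 * 60, now) == 'ok'
assert heartbeat_status(now - 55 * 60, 40 * 60, now) == 'alarm'
```

Running such a comparison per tag is what makes it possible to pinpoint exactly which server, cron job or process went silent.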
A heartbeat check
A GitHub project contains extra tools that turn many monitoring tasks into one-liners (monitoring unit test results, processing log files, monitoring cron jobs, collecting system-level data and more).
The API lets you do more than just send a report: retrieve parsed report instances, store arbitrary data, directly trigger alarms.
For some reason, many monitoring systems have a poor user experience. Monique.io's UI stays clean and simple, even when what happens behind the scenes is complex.
24 / 7
A monitoring system should be something you can rely on 24/7/365. Monique.io runs on a highly available cluster, and over the last 12 months the availability of the Web, API and Alarms services exceeded 99.99%.
Keeping your data secure is our highest priority. HTTPS everywhere, fine-grained access checking and periodic security reviews are some of the measures we take.
The 10-day trial lets you try all features without entering credit card data. We are happy to answer all questions submitted to the email address
Click for an explanation.
Traditional monitoring system
Dashboard framework
Monitoring CPU usage
Monitoring SQL results
Monitoring custom microservices
Setting up dashboards for specific services (like Google Analytics or Stripe)
Monitoring custom data (JSON, health-checks, text and numeric values)
Make setting up monitoring a joyful experience
(and save a lot of time and money)
Sign up for a trial

(or just see plan options)
Do you wonder how Monique.io could actually be used in your projects,
or whether it can replace other monitoring systems? Ask us anything.