Monique.io is shutting down on October 15th, 2018
Monitor what matters to you and your users
Traditional monitoring systems are designed for low-level system metrics. Monique.io makes it easy to monitor high-level metrics coming from JSON data, health checks or SQL queries.
How to monitor SQL query results? (1 line)
# directly send PostgreSQL query results

$ psql -c "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
Traditional monitoring system (14 lines + TODO)
import requests
import psycopg2

# Let's connect to the DB and execute the query
db = psycopg2.connect("dbname='' user='' host='localhost' password=''")
cursor = db.cursor()
cursor.execute("""SELECT country, COUNT(*) FROM user GROUP BY country""")
rows = cursor.fetchall()

# For each row, create a proper metric name and make an API call
# (the endpoint URL below is a placeholder)
for row in rows:
    country = row[0]
    count = row[1]
    metric_name = 'users_by_country.%s' % country
    requests.post('https://monitoring.example.com/api/metric',
                  data={'metric': metric_name, 'value': count},
                  params={'api_key': '9fxvMi8aR3CZ5BsNj0rt0odW'})

    # TODO: decide on a metric naming scheme to handle more complex queries
    # TODO: set up a custom alerting script - builtin alerting rules are too limited
    # TODO: remember about adding new countries to the dashboard
An increase in productivity.

Setting up dashboards using a traditional monitoring system or a dashboard framework requires a lot of tedious work. For each SQL query or JSON endpoint, the work of creating proper metric names or a format accepted by a dashboard widget must be repeated. Monique.io requires vastly less code by allowing multiple input types to be submitted directly.

More knowledge about what is really happening.

Traditional monitoring systems collect low-level metrics, like CPU or disk usage. However, this is just the tip of the iceberg: the most meaningful things happen in the application layer. APIs, microservices, database contents and backend services require custom monitoring.

Since it's easy to push data from these sources into Monique.io, you will end up with more meaningful information than you would get from the traditional tools alone.

More (and better) sleep.

Traditional monitoring systems support alerting on CPU usage and other system metrics. But if you want to set up checks on JSON data or SQL results, you will find it either hard or impossible: these tools are not meant for such use cases. Monique.io's JavaScript alarms allow defining any alerting logic. The checks that matter can actually be implemented; you shouldn't rely on your users to report errors when a monitoring system can handle it.

Sample dashboard
See a live, read-only view of the dashboard (obtained using the dashboard sharing function)
The dashboard was created by putting three lines in the /etc/crontab file and making a few clicks to select the numbers to graph (no agent installation, no configuration):
# directly send 'ps aux' output
*/30 * * * *      ps aux | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# directly send 'mysql' output
*/30 * * * *      mysql -e "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# directly send health check result
*/30 * * * *      if curl -s http://web | grep -q 'Welcome!'; then echo OK; else echo FAIL; fi | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
It's very easy to monitor various things using Monique.io. More examples:
Rails query results
Django query results
Custom data (Python)
Microservice health check
API response
SQL results
Custom text
Cron job
CPU, disk, memory
require 'rest_client'

articles = Article.all

# Send a JSON representation of the results
# (the endpoint URL below is a placeholder)
RestClient.post 'https://monitoring.example.com/reports', articles.to_json, :params => {:key => '9fxvMi8aR3CZ5BsNj0rt0odW'}
from django.core import serializers
import requests

qs = models.Choice.objects.filter(question_id=1).all()

# Send the results serialized as JSON, using the standard serializer
# (the endpoint URL below is a placeholder)
requests.post('https://monitoring.example.com/reports',
              params={'key': '9fxvMi8aR3CZ5BsNj0rt0odW'},
              data=serializers.serialize('json', qs))
import requests
import json

# The endpoint URL below is a placeholder.
URL = 'https://monitoring.example.com/reports'
KEY = '9fxvMi8aR3CZ5BsNj0rt0odW'

# send a single number
num = 86
requests.post(URL, data=str(num), params={'key': KEY})

# send a list of numbers
num_list = [86, 43, 32]
requests.post(URL, data=json.dumps(num_list), params={'key': KEY})

# send a dict
d = {'requests': 86, 'status': 'OK', 'flag': True}
requests.post(URL, data=json.dumps(d), params={'key': KEY})

# send a nested dict
nd = {'x': {'nums': [1, 2], 'message': 'OK'}, 'y': 3.42}
requests.post(URL, data=json.dumps(nd), params={'key': KEY})

# send data from a string
cdata = """
us 120
uk 34
de 27
it -
"""
requests.post(URL, data=cdata, params={'key': KEY})
var restler = require('restler');

setInterval(function() {
    // Tags identify a microservice instance. They allow auto-creating dashboards
    // with a tile created for each microservice instance.
    var tags = ['microservice:search', 'pid:' + process.pid];

    // Send a status report containing memory usage data
    // (the endpoint URL below is a placeholder).
    restler.postJson('https://monitoring.example.com/reports', {
            status: 'ok',
            // Arbitrary JSON-serializable data can be included in the sent data.
            memory: process.memoryUsage()
        }, { query: {
            tags: tags.join(','),
            key: '9fxvMi8aR3CZ5BsNj0rt0odW'
        }});
}, 60000);
# A JSON API response, like:

{
  "creator": {
    "name": "app184563"
  },
  "id": "43473",
  "likes": {
    "data": [ 1, 34, 35 ],
    "summary": {
      "total_count": 1022,
      "can_like": true,
      "has_liked": false
    }
  }
}

# can be sent directly (API_URL stands for the queried API's endpoint):

curl "$API_URL" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-
# Send PostgreSQL results
$ psql -c "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Send MySQL results
$ mysql -e "SELECT country, COUNT(*) FROM user GROUP BY country" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# Send Oracle results
$ echo "SELECT * FROM EMPLOYEE;" | sqlplus -s HR/oracle | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# Send MongoDB results
$ mongo --quiet --eval 'JSON.stringify( db.restaurants.find().limit(10).toArray() )' | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: -XPOST --data-binary @-

# Send Cassandra results
$ cqlsh -e "SELECT count(*) FROM user" | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# Send lines containing ERROR string as one block of text.

cat app.log | grep ERROR | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# supports heartbeat checks, which can be used to ensure that cron jobs 
# are run at specified intervals.
# ('do_backups' below stands for an arbitrary cron job command)
20 3 * * *    do_backups && echo run_ok | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-

# To capture return codes and the time elapsed during a command run,
# the 'moniqueio run' command can be used. moniqueio is a set of extra
# command line tools.
10 0 * * *    moniqueio run | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# Collecting system metrics is not a primary use case for Monique.io, but they can still be easily submitted.
# Command output and the content of system files is usually parsed correctly without specifying a format:
$ cat /proc/meminfo | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# A tag containing an IP address can be added automatically, enabling auto-creation of
# a dashboard tile for each server:
$ df | curl --user 9fxvMi8aR3CZ5BsNj0rt0odW: --request POST --data-binary @-
# To automatically collect a more comprehensive set of system metrics, the extra 'moniqueio' tool can be used:
$ moniqueio --api-key 9fxvMi8aR3CZ5BsNj0rt0odW sysreports --tag-ip
Examples of sending data into Monique.io are boring: whether it's a nested JSON document, an ASCII table containing SQL results or a text file, the same HTTP POST call handles the processing, even without specifying an input format. The data is automatically parsed into a tabular format and is immediately available for graphing and alerting.
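As a rough sketch of what such automatic parsing involves (an illustration only, not Monique.io's actual parser), whitespace-separated text can be turned into rows of typed cells:

```python
def parse_table(text):
    """Parse whitespace-separated text into a list of rows,
    converting numeric cells to numbers and '-' to None."""
    rows = []
    for line in text.strip().splitlines():
        row = []
        for cell in line.split():
            if cell == '-':
                row.append(None)
            else:
                try:
                    row.append(int(cell))
                except ValueError:
                    row.append(cell)
        rows.append(row)
    return rows

print(parse_table("""
us 120
uk 34
it -
"""))
# [['us', 120], ['uk', 34], ['it', None]]
```

Once the text is in this tabular form, each cell can be addressed for graphing or for alerting checks.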

Adding charts to a dashboard is very easy: just click the values you want to graph.

(8-second video)
The sent data, in both the original and the tabular format, is available to the JavaScript code that performs tests and triggers alarms. The "dry run" mode and print calls make the code easy to investigate and debug.

Other features include sending custom Slack messages, making HTTP requests, and synchronizing incidents with Slack, PagerDuty and custom third-party services.

The alarms are run in a robust environment with notifications sent for failures like Javascript errors or timeouts.
Defining alarms in Javascript
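To make the idea concrete, here is a conceptual model of an alarm as a function over the latest parsed rows, with a dry-run mode that prints instead of alerting. This sketch is written in Python purely for illustration; the real checks are written in JavaScript against Monique.io's alarm API, which is not reproduced here, and the threshold and row shape are assumptions:

```python
def check_user_counts(rows, dry_run=False):
    """Alarm logic: report every country whose user count drops below a
    threshold. Each row is a (country, count) pair, as parsed from the
    submitted SQL results."""
    alarms = []
    for country, count in rows:
        if count is not None and count < 10:
            alarms.append('low user count for %s: %d' % (country, count))
    if dry_run:
        # In dry-run mode, just print what would be triggered.
        for message in alarms:
            print('WOULD ALERT:', message)
        return []
    return alarms

print(check_user_counts([('us', 120), ('it', 4)]))
# ['low user count for it: 4']
```

The value of expressing checks as plain code is that any logic fits: comparisons across rows, thresholds per tag, or suppressing alerts during known maintenance windows.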
Multiple instances of servers, containers or microservices can be monitored by attaching a tag to the submitted data, for example an ip: tag containing the server's address. A dashboard tile can be created automatically for each tag value.

The mechanism is very lightweight and isn't limited to selected technologies. It can be used to automatically create a dashboard tile for each monitored entity, whether it comes from the system level (an HTTP endpoint, a cron job, a database table) or a higher-level source (a customer's data, a stock price, an offered product).
Sample dashboard from microservice health checks
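A minimal sketch of the grouping idea (illustrative only; the report structure and tag syntax below are assumptions, not Monique.io's actual data model): given reports tagged with values like ip:10.0.0.1, one dashboard tile is created per distinct tag value:

```python
from collections import OrderedDict

def tiles_for_tag(reports, tag_prefix):
    """Group submitted reports by a tag value (e.g. 'ip:...'),
    yielding one dashboard tile per distinct value."""
    tiles = OrderedDict()
    for report in reports:
        for tag in report['tags']:
            if tag.startswith(tag_prefix + ':'):
                tiles.setdefault(tag, []).append(report['data'])
    return tiles

reports = [
    {'tags': ['ip:10.0.0.1'], 'data': {'disk_used': 71}},
    {'tags': ['ip:10.0.0.2'], 'data': {'disk_used': 48}},
    {'tags': ['ip:10.0.0.1'], 'data': {'disk_used': 72}},
]
print(list(tiles_for_tag(reports, 'ip')))
# ['ip:10.0.0.1', 'ip:10.0.0.2']
```

When a new server starts sending tagged data, a new group simply appears; no dashboard reconfiguration is needed.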
If a server sending a health-check report dies or a cron job is misconfigured, the data stops being sent. A heartbeat check ensures that an alarm will be triggered in such cases.

The checks can be configured for specific tags, which can contain a source IP address, a cron job name or a process PID.
A heartbeat check
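The core of a heartbeat check can be sketched in a few lines (a conceptual model with assumed parameter names, not Monique.io's implementation): the monitored source is considered healthy only if it has sent data within the allowed silence window.

```python
import time

def heartbeat_ok(last_report_time, max_silence_seconds, now=None):
    """Return True if the source has reported recently enough.
    All times are Unix timestamps in seconds."""
    if now is None:
        now = time.time()
    return (now - last_report_time) <= max_silence_seconds

# A cron job expected every 30 minutes; allow 35 minutes of silence.
now = 10000
print(heartbeat_ok(last_report_time=now - 600, max_silence_seconds=2100, now=now))   # recent report
print(heartbeat_ok(last_report_time=now - 3600, max_silence_seconds=2100, now=now))  # silent too long
```

Choosing the silence window slightly longer than the reporting interval (35 minutes for a 30-minute job here) avoids false alarms from ordinary scheduling jitter.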
The API lets you do more than just submit data: retrieve parsed report instances, store arbitrary data, directly trigger alarms, and programmatically set heartbeat checks.
24 / 7
A monitoring system should be something you can rely on 24/7/365. Monique.io runs on a highly available cluster, and over the last 12 months the availability of the Web, API and Alarms services exceeded 99.99%.
Keeping your data secure is our highest priority. HTTPS everywhere, fine-grained access checks and periodic security reviews are some of the measures we take.
For some reason, many monitoring systems have a poor user experience. Monique.io's UI stays clean and simple, even when what happens behind the scenes is complex.
The 10-day trial allows you to check all features without entering credit card or personal data.
All plans include a 30-day guarantee period. If Monique.io didn't work for you, you will receive a refund.
Traditional monitoring system
Dashboard framework
Monitoring CPU usage
Monitoring SQL results
Monitoring custom microservices
Setting up dashboards for specific services (like Google Analytics or Stripe)
Monitoring custom data (JSON, health-checks, text and numeric values)
Heartbeat checks
Do you wonder how Monique.io could actually be used in your projects,
or whether it can replace other monitoring systems? Ask us anything.