What is it?
Prometheus is a free software ecosystem for monitoring and alerting, with a focus on reliability and simplicity. See also the prometheus overview and prometheus FAQ.
There are a few interesting features missing from what we have now, among others:
multi-dimensional data model
Metrics have a name and several key=value pairs to better model what the metric is about. For example, to measure Varnish requests in the upload cache in eqiad we would have a metric like http_requests_total{cache="upload",site="eqiad"}.
a powerful query language
Makes it possible to ask complex questions, e.g. when debugging problems or drilling down for root cause during outages. From the example above, the query topk(3, sum(http_requests_total{status=~"^5"}) by (cache)) would return the top 3 caches (text/upload/misc) with the most errors (status matching the regexp "^5").
pull metrics from targets
Prometheus is primarily based on a pull model, in which the prometheus server has a list of targets it should scrape metrics from. The pull protocol is HTTP based: simply put, the target returns a list of "<metric> <value>" lines. Pushing metrics is supported too, see the Pushgateway section below.
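For illustration, the body returned by a scraped target is plain text, one sample per line (the names and values below are made up):

# HELP http_requests_total Total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{cache="upload",site="eqiad"} 208734
http_requests_total{cache="text",site="eqiad"} 457612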
After the Prometheus POC (as per User:Filippo_Giunchedi/Prometheus_POC) had been running in Labs for some time, during FQ1 2016-2017 the Prometheus deployment was extended to production, as outlined in the Technical Operations goals.
Each prometheus server is configured to scrape a list of targets (i.e. HTTP endpoints) at a certain frequency, in our case starting at 60s. All metrics are stored on the local disk with a per-server retention period (minimum of 4 months for the initial goal).
All targets to be scraped are grouped into jobs, depending on the purpose that those targets serve. For example the job to scrape all host-level data for a given location using node-exporter will be called node and each target will be listed as hostname:9100. Similarly there could be jobs for varnish, mysql, etc.
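As a sketch, the corresponding scrape configuration in prometheus.yml would look roughly like the following (the hostnames and the mysql job are illustrative, not our actual configuration):

scrape_configs:
  - job_name: 'node'                     # host-level metrics via node-exporter
    static_configs:
      - targets:
        - 'host1001.eqiad.wmnet:9100'
        - 'host1002.eqiad.wmnet:9100'
  - job_name: 'mysql'                    # one job per purpose/exporter
    static_configs:
      - targets:
        - 'db1001.eqiad.wmnet:9104'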
Each prometheus server is meant to be stand-alone, polling targets in the same failure domain as the server itself as appropriate (e.g. the same datacenter, the same VLAN and so on). For example, this allows keeping the monitoring local to the datacenter and avoids spotty metrics upon cross-datacenter connectivity blips. (See also Federation)
The endpoint being polled by the prometheus server and answering the GET requests is typically called an exporter, e.g. the host-level metrics exporter is node-exporter.
Each exporter serves the current snapshot of metrics when polled by the prometheus server; there is no metric history kept by the exporter itself. Further, the exporter usually runs on the same host as the service or host it is monitoring.
Why just stand-alone prometheus servers with local storage and not clustered storage? The idea behind a single prometheus server is one of reliability: a monitoring system must be more reliable than the systems it is monitoring. It is certainly easier to get local storage right and reliable than clustered storage, which is especially important when collecting operational metrics.
See also prometheus storage documentation for a more in-depth explanation and storage space requirements.
High availability
With local storage being the basic building block we can still achieve high-availability by running more than one server in parallel, each configured the same and polling the same set of targets. Queries for data can be routed via LVS in an active/standby fashion.
Backups
For efficiency reasons, prometheus spools chunks of datapoints in memory for each metric before flushing them to disk. This makes it harder to perform backups online by simply copying the files on disk. The issue of having consistent backups is also discussed in prometheus #651.
Notwithstanding the above, it should be possible to back up the prometheus local storage files as-is by archiving the storage directory with tar before regular (bacula) backups. Since the backup is done online it will result in some inconsistencies; upon restoring the backup, Prometheus will run crash recovery on its storage at startup.
To perform backups of a consistent/clean state, at the moment prometheus needs to be shut down gracefully. Therefore, when running an active/standby configuration, the backup can be taken on the standby prometheus to minimize impact. Note that the shutdown will result in gaps in the standby prometheus server's data for the duration of the shutdown.
Failure recovery
In the event of a prometheus server having unusable local storage (failed disk, failed filesystem, corruption, etc.), failure recovery can take the form of:
start with empty storage: this is of course a complete loss of metric history for the local server; the gap fully ages out once the metric retention period has passed.
recover from backups: restore the storage directory from the last good backup.
copy data from a similar server: when deployed in pairs it is possible to copy/rsync the storage directory onto the failed server; this will likely result in gaps in the recent history though (see also Backups).
Federation and multiple DCs
Each prometheus server is able to act as a target to another prometheus server by means of Prometheus federation. Our use case for this feature is primarily hierarchical federation, namely to have a 'global' prometheus that aggregates datacenter-level metrics from prometheus in each datacenter.
The global instance is what we would normally use in grafana as the "datasource" for dashboards to get an overview of all sites and aggregated metrics. To drill down further and get more details it is possible to use the datacenter-local datasource and dashboard.
Server location
The various Prometheus servers are logically separated, though physically they can share one or multiple machines. As of Nov 2016 Prometheus dc-local runs in two VMs for each of eqiad/codfw (instance named "ops") and we're in the process of provisioning real hardware.
An open question at this time is where to host the dc-local Prometheus servers for caching centers, essentially two options:
  1. Local to the site
  2. Remote, e.g. codfw polling ulsfo and eqiad polling esams
The local option offers some advantages since all sites are logically the same, all polling for monitoring purposes is kept local to the site, and it reflects our current Ganglia deployment. Only the global instance would reach out to remote sites and thus could be affected by cross-DC network unavailability.
This is significant especially during outages: the global instance would show a drop in global aggregates while the dc-local instance can keep collecting high-resolution data from site-local machines.
Disadvantages of the local option include (as of Nov 2016) running Prometheus on the bastion for sites where we lack internal dedicated machines (e.g. ulsfo), alongside other services like tftp/installserver. Another disadvantage is that running Prometheus on a single bastion provides no redundancy when the bastion is down.
Service Discovery
Prometheus supports different kinds of discovery through its configuration. For example, role::prometheus::labs_project implements auto-discovery of all instances for a given labs project: file_sd_config is used to continuously monitor a set of configuration files for changes, and the script prometheus-labs-targets is run periodically to write the list of instances to the relevant configuration file. The file_sd files are reloaded automatically by prometheus, so new instances will be auto-discovered and have their instance-level metrics collected.
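The files consumed by file_sd_config are lists of target groups, in YAML or JSON; a minimal sketch with made-up instance names:

- targets:
    - 'instance-01.myproject.eqiad.wmflabs:9100'
    - 'instance-02.myproject.eqiad.wmflabs:9100'
  labels:
    project: 'myproject'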
While file-based service discovery works, Prometheus also supports higher-level discovery, for example for Kubernetes (see also role::prometheus::tools).
Adding new metrics
In general Prometheus' model is pull-based. In practical terms that means that once metrics are available over HTTP somewhere on the network with the methods described below, Prometheus itself should be instructed to poll for metrics via its configuration (more specifically, via a scrape job). Within WMF's Puppet the Prometheus configuration lives inside the respective instance profile; for example modules/profile/manifests/prometheus/ops.pp is often the right place to add new jobs.
Direct service instrumentation
The most benefit from service metrics is obtained when services are directly instrumented with one of the Prometheus client libraries, e.g. the Python client. Metrics are then exposed via HTTP or HTTPS, commonly at /metrics, on the service's existing HTTP(S) port (in the common case) or on a separate port if the service isn't HTTP to begin with.
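A minimal sketch with the Python client (prometheus_client); the service, metric name and port below are made up:

from prometheus_client import Counter, start_http_server
import time

# A counter is monotonically increasing, suitable for rate()/increase() queries.
REQUESTS = Counter('myservice_requests_total', 'Requests handled', ['method'])

if __name__ == '__main__':
    start_http_server(9200)                    # serves /metrics on port 9200
    while True:
        REQUESTS.labels(method='GET').inc()    # instrument the request handling path
        time.sleep(1)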
Service exporters
For cases where services can't be directly instrumented (i.e. whitebox monitoring isn't an option), a sidekick exporter application can be run alongside the service; it queries the service using whatever mechanism is available and exposes prometheus metrics via the client library. This is the case for example for varnish_exporter parsing varnishstat -j output, or apache_exporter parsing Apache's mod_status page.
Machine-level metrics
Another class of metrics is all those related to the machine itself rather than a particular service. Collecting those often involves calling a subprocess and parsing the result, typically from a cronjob. In these cases the simplest thing to do is drop plaintext files on the machine's filesystem for node-exporter to pick up and expose over HTTP. This mechanism is named textfile and the python client has support for it (see the sketch below). This is most likely the mechanism we could use to replace most of the custom collectors we have for Diamond.
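A sketch of the textfile mechanism using the Python client; the metric and filename are made up (node-exporter reads *.prom files from /var/lib/prometheus/node.d/ in our setup, see the stale file section below):

from prometheus_client import CollectorRegistry, Gauge, write_to_textfile

registry = CollectorRegistry()
last_run = Gauge('mybatch_last_success_timestamp_seconds',
                 'Unixtime of the last successful run', registry=registry)
last_run.set_to_current_time()
# write_to_textfile() writes to a temporary file and renames it, so
# node-exporter never sees a partially written file.
write_to_textfile('/var/lib/prometheus/node.d/mybatch.prom', registry)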
Ephemeral jobs (Pushgateway)
Yet another case involves service-level ephemeral jobs that are not quite long-lived enough to be queried via HTTP. For those jobs there's a push mechanism to be used: metrics are pushed to Prometheus' pushgateway via HTTP and subsequently scraped by Prometheus from the gateway itself.
This method appears similar to statsd in its simplicity, but it should be used with care; see also the best practices on when to use the pushgateway. Good use cases are for example mediawiki's maintenance jobs: tracking how long the job took and when it last succeeded. If the job isn't tied to a particular machine it is usually a good candidate.
In WMF's deployment the pushgateway address to use is http://prometheus-pushgateway.discovery.wmnet
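For illustration, a single metric can be pushed with curl; the job and metric names below are made up, and the request body uses the same text format shown earlier:

echo 'mybatch_duration_seconds 42' | curl --data-binary @- \
    http://prometheus-pushgateway.discovery.wmnet/metrics/job/mybatch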
When using TLS for metric scraping, make sure the host on the certificate and the one configured match, or you will get a TLS handshake error. By default, puppet sets just the hostname as the target of monitoring; you are likely to want to add the option hosts_only => false to use the fully qualified domain name as the target.
Global view (Thanos) web interface
As of Jul 2020 the Thanos web interface is available. This interface offers a global view over Prometheus data and should be preferred for new use cases. Please consult the Thanos page to find out more, including the interface's address.
Access Prometheus web interface
Use the Thanos web interface to run Prometheus queries across all Prometheus instances in all sites. The old method of SSH port forwarding still works but has been deprecated and replaced by the Thanos web interface. In short, for example for the 'ops' instance (port 9900) in prometheus codfw:
ssh -L9900:localhost:9900 prometheus2003.codfw.wmnet
then browse http://localhost:9900
To access the prometheus web interface in beta (deployment-prep) you use
To access the prometheus web interface for Cloud Services hardware that uses the cloudmetrics monitoring setup, please follow the instructions at Portal:Cloud_VPS/Admin/Monitoring#Accessing_"labs"_prometheus
List metrics with curl
One easy way to check what metrics are being collected by prometheus on a given machine is to request the metrics via HTTP as the prometheus server does at scrape time, e.g. for node-exporter on port 9100:
curl -s localhost:9100/metrics
Query cheatsheet
Filter for a specific instance
Given values such as
and a template variable called $server, containing the server hostname, one can filter for the selected instance as follows:
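For example (a sketch: node_load1 and the hostname:port form of the instance label are assumptions, not taken from the dashboard in question):

node_load1{instance=~"$server(:[0-9]+)?"}    # matches both "host1001" and "host1001:9100"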
Filter by label using multi-values template variables
Given the following two metrics:
varnish_version{job="varnish-upload", ...}
node_uname_info{cluster="cache_upload", ...}
and a multi-value template variable called $cache_type, with the following values: text, upload, misc, canary, it is possible to write a prometheus query filtering the selected cache_types:
node_uname_info{cluster=~"cache_($cache_type)"}
varnish_version{job=~"varnish-($cache_type)"}
Dynamic, query-based template variables
Grafana's templating allows defining template variables based on Prometheus queries.
Given the following metric:
node_uname_info{release="4.9.0-0.bpo.4-amd64", ...}
node_uname_info{release="4.9.0-0.bpo.3-amd64", ...}
Choose Query as the variable Type, the desired Data Source, and specify a query such as the following to extract the values:
label_values(node_uname_info, release)
Aggregate metrics from multiple sites
Sometimes it is useful to have an overall view of all sites from where metrics are collected. That's the use case for our 'global' instance of Prometheus, namely to pull metrics from site-local Prometheus instances.
Prometheus' name for this feature is federation, as described in the upstream federation documentation.
Adding new aggregated metrics to the global instance is composed of two parts:
  1. Instruct the site-local Prometheus to calculate new aggregated metrics, for example the ops instance uses modules/role/files/prometheus/rules_ops.conf in Puppet. The format of the file and the related best practices are described in the upstream recording rules documentation (a sketch follows after this list).
  2. Instruct the global instance to pick up the newly-created aggregated metrics, via the global instance configuration at modules/role/manifests/prometheus/global.pp
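As an illustration (the rule and names below are made up, not the actual contents of rules_ops.conf), a site-local recording rule in the Prometheus 1.x rule format pre-aggregates a metric per cluster:

cluster:http_requests_total:rate5m = sum(rate(http_requests_total[5m])) by (cluster)

The global instance then scrapes only such aggregated series from each site-local server's /federate endpoint, roughly along these lines (a sketch with a hypothetical target):

scrape_configs:
  - job_name: 'federate-eqiad'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{__name__=~"cluster:.*"}'
    static_configs:
      - targets:
        - 'prometheus-eqiad.example.wmnet:9900'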
Sync data from an existing Prometheus host
When replacing existing Prometheus hosts it is possible to keep existing data by rsync'ing the metrics directory from the old host onto the new one. It is important to make sure first that the new host has run puppet successfully (thus Prometheus is configured) and that Prometheus can reach its targets successfully (i.e. the new host is part of prometheus_nodes for its site). Once all of that is done the rsync can happen, on the new host:
puppet agent --disable "copying prometheus data"
export old_host=<hostname>
export instance_name=ops
systemctl stop prometheus@${instance_name}
su -s /bin/bash prometheus
rsync -vd ${old_host}::prometheus-${instance_name}/ /srv/prometheus/${instance_name}/metrics/
# do a first rsync pass in parallel for each subdirectory
/usr/bin/time parallel -j10 -i rsync -a ${old_host}::prometheus-${instance_name}/{}/ {}/ -- /srv/prometheus/${instance_name}/metrics/*
# once this is completed stop puppet and prometheus on $old_host as well, and repeat the rsync for a final pass.
rsync -vd ${old_host}::prometheus-${instance_name}/ /srv/prometheus/${instance_name}/metrics/
/usr/bin/time parallel -j10 -i rsync -a ${old_host}::prometheus-${instance_name}/{}/ {}/ -- /srv/prometheus/${instance_name}/metrics/*
# once this is completed you can restart prometheus and puppet on both hosts
Prometheus host running out of space
It might happen that Prometheus hosts get close to running out of space on one of their per-instance filesystems. Assuming the underlying volume group has space available (lvs to check what LVs are present and on which VGs, then vgs to check the VGs themselves), it is possible to extend the filesystem online with the command below (e.g. +25G to the prometheus-foo LV on the vg-hdd VG; remove --test once happy).
lvextend --test --resizefs --size +25G vg-hdd/prometheus-foo
Make sure to:
No space available on the volume group
At some point the space on the volume group might be fully allocated (e.g. like on bastions). In this case the emergency remedy is to decrease the Prometheus retention time via prometheus::server::storage_retention in Puppet, and restart Prometheus with the new settings.
In the unfortunate case that the filesystem is 100% utilized it is also possible to manually remove storage "blocks" (i.e. directories) from the metrics directory under /srv/prometheus/INSTANCE. The directory names are sortable, with each directory representing at most 24h of data.
Add filesystems for a new instance
Until bug T163692 is fully resolved, new Prometheus instances require adding LVs to the Prometheus hosts in eqiad/codfw. There are two volume groups (vg-ssd and vg-hdd) depending on the type of storage.
Set the instance and vg variables, then the following commands can be used as-is:
instance=prometheus-NAME
vg=vg-hdd
mp=/srv/${instance/-//}
lvcreate -L 50G -n $instance $vg
mkfs.ext4 /dev/${vg}/${instance}
install -d -o prometheus -m 750 $mp
echo "/dev/${vg}/${instance} $mp ext4 defaults 0 0" >> /etc/fstab
mount $mp
Add metrics from a new service
Most services which export metrics to Prometheus do so via an HTTP endpoint, running on its own port. This HTTP endpoint can be served by the daemon itself, or by a separate "exporter" process.
Prometheus needs to be told to scrape the HTTP endpoint, which it calls a "target." (A logical grouping of targets is called a "job.") In addition to adding the new job to the Prometheus server, you will need to add a firewall rule exposing the HTTP endpoint.
For examples of Puppet changes that add new jobs, see the existing job definitions in modules/profile/manifests/prometheus/ops.pp mentioned above.
How long are metrics stored in Prometheus?
As of June 2020, we have deployed Thanos for long term storage of metrics. The target retention period for all one-minute metrics is three years.
What are the semantics of rate/irate/increase?
These functions generally take a counter metric (i.e. non-decreasing) and return a "value over time". rate and irate return per-second rates, while increase returns the change over the given interval. See also the in-depth explanation in the upstream documentation.
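For example, using the counter from the beginning of this page:

rate(http_requests_total{cache="upload"}[5m])       # average per-second rate over 5 minutes
irate(http_requests_total{cache="upload"}[5m])      # per-second rate based on the last two samples only
increase(http_requests_total{cache="upload"}[1h])   # total increase over the last hour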
Use cases
MySQL monitoring is performed by running prometheus-mysqld-exporter on the database machine to be monitored. Metrics are exported via HTTP on port 9104 and fetched by the prometheus server(s). To preview what metrics are being collected, a fetch can be simulated with:
curl -s localhost:9104/metrics | grep -v '^#'
Per group / shard / role overview
Per server drilldown
Per cluster overview
Replacing Graphite
Another use case imaginable for Prometheus is to replace the current Graphite deployment. This task is less "standalone" than replacing Ganglia and therefore more difficult: Graphite is more powerful and used by more people/services/dashboards. Nevertheless it should be possible to keep Prometheus and Graphite alongside each other and progressively put more data into Prometheus without affecting Graphite users. The top contributors to data that flows into Graphite as of Aug 2016 are Diamond, Statsd and Cassandra.
Statsd traffic for the most part flows from machines to statsd.eqiad.wmnet over UDP on port 8125 for aggregation. There are some exceptions (e.g. swift) where statsd aggregation is performed on localhost and then pushed via graphite line-oriented protocol.
Prometheus provides statsd_exporter to receive statsd metrics and turn those into key => value prometheus metrics according to a user-supplied mapping. The resulting metrics are then exposed via HTTP for prometheus server to scrape.
One idea to integrate statsd_exporter into our statsd traffic is to put it "inline" between the application and statsd.eqiad.wmnet. In other words we would need to:
  1. Modify statsd_exporter to mirror received udp packets to statsd.eqiad.wmnet and install it on end hosts
  2. Opt-in applications by changing their statsd host from statsd.eqiad.wmnet to localhost
  3. Extend the statsd_exporter mapping file to include mappings for our statsd metrics (a sketch follows below).
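A sketch of a mapping entry in statsd_exporter's YAML mapping format (the statsd metric name and labels are made up):

mappings:
  - match: "mediawiki.requests.*.*"
    name: "mediawiki_requests_total"
    labels:
      handler: "$1"
      status: "$2"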
This method works well for applications/languages that are request-scoped (e.g. php) since there isn't necessarily a server process to keep and aggregate metrics in. For services that qualify, the recommended way is to switch to Prometheus client for instrumentation.
If you are migrating your service that uses statsd to k8s, see also Prometheus/statsd_k8s
Cassandra is hosted on separate Graphite machines due to the number and size of metrics it pushes, particularly in conjunction with Restbase. It should be evaluated separately whether e.g. a separate prometheus instance makes sense. With respect to implementation there are two viable options:
Prometheus jmx_exporter can be used to collect metrics through JMX.
A few notes:
This implies that an overly broad blacklist query can still have a non trivial cost.
List/inspect existing mbeans
Scenario: you want to check the available JMX MBeans or generic JVM data in production from your laptop:
ssh -ND 9099 $some_hostname$
jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=9099
Then JConsole will open and you'll need to select Remote Process, entering the following: $hostname$:port (don't use localhost, it will not work!)
Grafana dashboards will need porting from Graphite to Prometheus metrics; this is likely to be the most labor-intensive part since most (all?) dashboards are hand-curated. While it should be possible to programmatically change statsd metric names into prometheus metric names, the query language is different enough to make this impractical except for very basic cases.
Stop queries on problematic instances
If a single Prometheus instance is misbehaving (e.g. overloaded) it is possible to temporarily stop queries from reaching that instance by stopping Puppet, commenting the relevant ProxyPass entry in /etc/apache2/prometheus.d/ and issuing apache2ctl graceful. See also bug T217715.
Prometheus was restarted
The alert on Prometheus uptime exists to notify opsen of the possibility of strange monitoring artifacts occurring, as has happened in the past. If it was just a single restart, and not a crashloop, no action is strictly necessary (but investigating what happened isn't a bad idea; Prometheus isn't supposed to crash or restart).
If this alert is firing for a 'global' Prometheus, it can mean that either the global instance restarted, or that one of the Prometheis scraped by the global instance restarted.
Configuration reload failure
Check for recent changes in Puppet, particularly modifications to monitoring::check_prometheus invocations or to the underlying module/prometheus templates themselves. Hopefully the error message from Prometheus gives you some idea.
k8s cache not updating
As discovered in bug T227478 the Prometheus kubernetes cache can stop updating (reasons TBD). In this case systemctl restart prometheus@k8s "fixes" the issue.
Prometheus job unavailable
As part of bug T187708 there's alerting in place for unavailable Prometheus jobs. This means that Prometheus was unable to fetch metrics from most of the job's targets, e.g. because the targets are down, unreachable, or fetching metrics timed out. See the relevant dashboard and logs for more details.
Prometheus exporters "up" metrics unavailable
Some services don't have native Prometheus metrics support, thus an "exporter" is used that runs alongside the service and converts metrics from the service into Prometheus metrics. It might happen that the exporter itself is up (thus the job is available, see above) but the exporter is unable to contact the service for some reason. Such conditions are reported in metrics such as mysql_up, for example by the mysql exporter. See the relevant dashboard and logs for more details.
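Both conditions can be checked directly with a Prometheus query (the mysql job name below is illustrative):

up{job="mysql"} == 0       # the exporter itself is unreachable (job unavailable)
mysql_up == 0              # the exporter is reachable but cannot talk to MySQL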
Failover Prometheus Pushgateway
The Prometheus Pushgateway needs to run as a singleton to properly track pushed metrics. For this reason the prometheus-pushgateway is active on one host at a time.
To fail over, the steps involved are the following:
Stale file for node-exporter textfile
Certain metrics are periodically generated by dumping Prometheus-formatted plaintext files (extension .prom) into /var/lib/prometheus/node.d/. The processes that generate the files run asynchronously to node-exporter, normally via systemd timers, and such processes can fail to update the files. The alert fires whenever such metric files have failed to be updated; the Icinga alert description will be something like the following:
cluster=analytics file=debian_version.prom instance=an-worker1101 job=node site=eqiad
Meaning that an-worker1101 has failed to update debian_version.prom. Debugging such failures usually involves finding out which systemd timer is responsible for generating the file, usually by looking at puppet, and debugging further from there.