Scrape Configs API for Prometheus
The HostedMetrics API allows you to manage the scrape configs that Prometheus uses to pull in metrics data.
- Discovery Mechanisms
- Configuration Documentation
- API Provided by HostedMetrics
Scrape targets can be configured as a pre-specified list or can be determined through the various discovery mechanisms Prometheus provides. These include:
• Azure - retrieve scrape targets from Azure VMs.
• Consul - retrieve scrape targets from Consul's Catalog API.
• DigitalOcean - retrieve scrape targets from DigitalOcean's Droplets API.
• Docker Swarm - retrieve scrape targets from the Docker Swarm engine.
• DNS - specify a set of DNS domain names which are periodically queried to discover a list of targets.
• File-based - provides a more generic way to configure static targets. Not supported yet.
• Google Compute Engine - retrieve scrape targets from GCP GCE instances.
• Hetzner - retrieve scrape targets from Hetzner Cloud API and Robot API.
• Kubernetes - retrieve scrape targets from Kubernetes' REST API and always stay synchronized with the cluster state.
• Marathon - retrieve scrape targets using the Marathon REST API.
• Nerve - retrieve scrape targets from Airbnb's Nerve, which are stored in Zookeeper.
• Serverset - retrieve scrape targets from Serversets, which are stored in Zookeeper. Serversets are commonly used by Finagle and Aurora.
• Triton - retrieve scrape targets from Container Monitor discovery endpoints.
• Eureka - retrieve scrape targets using the Eureka REST API.
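As an illustration of the two approaches, here is a sketch of a scrape config that combines a pre-specified static target list with DNS-based discovery. The job names and target hostnames are hypothetical placeholders:

```yaml
scrape_configs:
  - job_name: 'static-example'
    static_configs:
      - targets: ['app-1:9100', 'app-2:9100']   # pre-specified list of targets
  - job_name: 'dns-example'
    dns_sd_configs:
      - names: ['metrics.example.internal']      # DNS name queried periodically
        type: 'A'
        port: 9100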
The various types of scrape configs and service discovery mechanisms are documented in detail on the Prometheus website.
API requests are authenticated using basic auth. Please refer to your dashboard for the credentials.
The current scrape configs can be retrieved with a GET request to the scrape configs endpoint. The retrieved data is equivalent to the content shown on this webpage.
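As a sketch, the GET request with basic auth might look like this in Python, using only the standard library. The endpoint URL and credentials below are placeholders; substitute the values from your dashboard:

```python
import base64
import urllib.request

# Placeholder endpoint and credentials - replace with the values
# shown in your HostedMetrics dashboard.
ENDPOINT = "https://example-hostedmetrics.invalid/api/prom/scrape-configs"
USER, PASSWORD = "instance-id", "api-key"


def build_request(method="GET", body=None, content_type=None):
    """Build a basic-auth request for the scrape configs endpoint."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    headers = {"Authorization": f"Basic {token}"}
    if content_type:
        headers["Content-Type"] = content_type
    return urllib.request.Request(ENDPOINT, data=body, headers=headers, method=method)


# Fetch the current scrape configs:
# with urllib.request.urlopen(build_request()) as resp:
#     print(resp.read().decode())
```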
Scrape configs can be defined with a PUT request to the same endpoint, with the Content-Type header set to match the body format. The request body must contain the JSON or YAML that defines the new configuration for the scrape configs. The response contains the IP address from which scraping will occur, in case it needs to be whitelisted.
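A minimal sketch of the PUT request, again using Python's standard library. The endpoint URL, credentials, and the target in the uploaded YAML are all placeholders, not real values:

```python
import base64
import urllib.request

# Placeholder endpoint and credentials - replace with the values
# shown in your HostedMetrics dashboard.
ENDPOINT = "https://example-hostedmetrics.invalid/api/prom/scrape-configs"
USER, PASSWORD = "instance-id", "api-key"

# Illustrative YAML body defining the new scrape configs.
NEW_CONFIG = """\
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['node-exporter:9100']
"""

token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req = urllib.request.Request(
    ENDPOINT,
    data=NEW_CONFIG.encode(),
    headers={
        "Authorization": f"Basic {token}",
        "Content-Type": "application/yaml",  # or a JSON content type for a JSON body
    },
    method="PUT",
)

# Submit the new configuration; the response body includes the IP
# address that scraping will originate from:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```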