In the previous parts [1][2][3], I described the development process for our microservices. Initially, I intentionally omitted monitoring capabilities.
Monitoring ensures system reliability, aids in debugging, and provides operational insights, making it crucial as our system scales.
Now, I’ll address that by integrating monitoring into our system.
Prometheus is a powerful tool for collecting, querying, and visualizing metrics, making it an excellent choice for microservices monitoring.
Flask has a module that creates a monitoring endpoint for Prometheus: prometheus_flask_exporter (the project is on GitHub and PyPI). This integration automatically tracks request durations, response statuses, and more, providing valuable insight into application performance.
Following the documentation, let's modify our applications:
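Here is a minimal sketch of this wiring in the scraper service, assuming an application-factory layout (the file name and create_app are illustrative, not the exact project code):

# scrapers/app.py
from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

# Create the exporter once, outside the factory, so it can be shared
metrics = PrometheusMetrics.for_app_factory()

def create_app():
    app = Flask(__name__)
    # ... register routes, load configuration, etc. ...

    # Attach the exporter to the app; this exposes the /metrics endpoint
    metrics.init_app(app)
    return app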
So, first I import prometheus_flask_exporter into my main scraper application. Then I use the PrometheusMetrics.for_app_factory() method to connect the exporter to the project, and finally I initialize the metrics inside the application factory. Now my application has metrics on the /metrics endpoint!
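A quick way to verify it (assuming the scraper service runs on Flask's default port 5000, which is also the port targeted in the Prometheus config below):

curl http://localhost:5000/metrics

You should see the exporter's default series, such as flask_http_request_total and flask_http_request_duration_seconds.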
Let's modify another service, databus:
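A minimal sketch of the refactored layout, with illustrative file, blueprint, and route names:

# databus/routes.py
from flask import Blueprint, jsonify

bp = Blueprint("databus", __name__)

@bp.route("/health")
def health():
    return jsonify(status="ok")

# databus/app.py
from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

from routes import bp

metrics = PrometheusMetrics.for_app_factory()

def create_app():
    app = Flask(__name__)
    app.register_blueprint(bp)  # routes now live in routes.py
    metrics.init_app(app)       # exposes /metrics for this service too
    return app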
As you can see, I refactored the databus project: all routes now live in a separate file, routes.py, and are registered via blueprints. As in the scrapers microservice, I made the changes needed to bring metrics into the project.
Now all our Flask endpoints are monitored.
But what about our internal processes that don't belong to the Flask ecosystem: the scheduler, Redis, RabbitMQ, and the Telegram bot?
Let's do a little refactoring:
# metrics.py
from prometheus_flask_exporter import PrometheusMetrics

metrics = PrometheusMetrics.for_app_factory()
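For example, to count how many times a non-Flask function is called, a plain prometheus_client Counter works well: it registers in the same default registry that prometheus_flask_exporter serves, so it shows up on the existing /metrics endpoint. The module and function names below are placeholders:

# scheduler.py
from prometheus_client import Counter

# Incremented every time the job runs
SCHEDULER_RUNS = Counter(
    "scheduler_runs_total",
    "How many times the scheduler job was called",
)

def run_scheduled_job():
    SCHEDULER_RUNS.inc()
    # ... actual job logic ...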
That is how we can export a counter to Prometheus that answers "how many times was this function called?". But more interesting, and more useful: how many times did a function fail with an error? Let's implement it:
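A sketch of that error counter, again using prometheus_client directly; the way scrapers are enumerated here is a placeholder for whatever the project actually uses:

# scrapers/configure.py
from prometheus_client import Counter

SCRAPER_ERRORS = Counter(
    "scraper_errors_total",
    "Number of errors raised while configuring scrapers",
    ["scraper"],  # label to tell the scrapers apart
)

def configure_all(scrapers):
    for scraper in scrapers:
        try:
            scraper.configure()
        except Exception:
            # Count the failure, labelled with the scraper's name,
            # and move on to configuring the remaining scrapers
            SCRAPER_ERRORS.labels(scraper=scraper.name).inc()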
In this snippet I create a counter for errors in the configure_all function. Additionally, I add a scraper label to identify which scraper failed, for more detailed investigation in the future.
Now I have custom counters for errors. They will help me identify the problematic module faster in the future.
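For example, once Prometheus scrapes these counters, a query like the following (using the scraper_errors_total name from the sketch above) shows which scraper has been failing over the last hour:

sum by (scraper) (increase(scraper_errors_total[1h]))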
Time to bring Prometheus into the stack and make all of this work.
Let's start by creating a monitoring directory:
mkdir monitoring
cd monitoring
Next, I need to write a configuration file with scrape targets for Prometheus:
# monitoring/prometheus.yaml
global:
  scrape_interval: 15s      # Global scrape interval
  evaluation_interval: 15s  # Rule evaluation interval

# Rule files for custom alerts
rule_files:
  # - "alerts.rules.yml"

# Scrape configurations
scrape_configs:
  # Prometheus monitoring itself
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Monitoring Flask API
  - job_name: 'flask-api'
    scrape_interval: 15s    # Align with the global scrape interval unless high frequency is needed
    metrics_path: /metrics  # Specify metrics path if not default
    honor_labels: true
    static_configs:
      - targets: ['localhost:5000']

  - job_name: 'flask-databus'
    scrape_interval: 15s    # Align with the global scrape interval unless high frequency is needed
    metrics_path: /metrics  # Specify metrics path if not default
    honor_labels: true
    static_configs:
      - targets: ['localhost:5001']
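If you have promtool on hand (it ships with Prometheus), you can optionally validate the configuration before wiring anything up:

promtool check config monitoring/prometheus.yaml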
Step one directory up and create a Docker Compose file:
# docker-compose.yaml
version: "3.5"

services:
  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    container_name: prometheus
    ports:
      - 9090:9090
    volumes:
      - ./monitoring/prometheus.yaml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
Let’s bring it up:
docker-compose up
In a separate terminal, run your services.
The provided configuration is for local testing and would require modifications for production (e.g., security, scaling).
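Once everything is up, you can open http://localhost:9090/targets in a browser and check that the flask-api and flask-databus jobs are listed and reported as UP. One caveat: since Prometheus runs inside a container, localhost in the scrape targets refers to the container itself, so if the Flask services run directly on the host you may need to point the targets at host.docker.internal instead (available out of the box on Docker Desktop, or via extra_hosts: ["host.docker.internal:host-gateway"] on Linux).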
In this article, we added Prometheus monitoring to our Flask microservices using prometheus_flask_exporter. We also set up custom counters for internal processes and deployed Prometheus using Docker Compose. With these metrics, we can now gain valuable insights into service performance and errors.
Next time, I will combine all the services into one docker-compose file and bring everything alive.