Cortex Architecture
The content below is adapted from the official GitHub documentation; reference: https://github.com/cortexproject/cortex/blob/master/docs/architecture.md
Cortex consists of multiple horizontally scalable microservices. Each microservice uses the most appropriate technique for horizontal scaling; most are stateless and can handle requests for any user, while some (namely the ingesters) are semi-stateful and depend on consistent hashing. This document provides a basic overview of Cortex's architecture.

The role of Prometheus
Prometheus instances scrape samples from various targets and then push them to Cortex (using Prometheus' remote write API). That remote write API emits batched Snappy-compressed Protocol Buffer messages inside the body of an HTTP POST request.
Cortex requires that each HTTP request bear a header specifying a tenant ID for the request. Request authentication and authorization are handled by an external reverse proxy.
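As a rough illustration, the following Go sketch pushes one remote-write sample to Cortex with a per-request tenant header. The endpoint URL and tenant ID are placeholders, and details such as the exact push path and the prompb package version can vary by deployment; treat this as a sketch rather than a definitive client.

```go
package remotewrite

import (
	"bytes"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func pushSample(client *http.Client) error {
	// Build a remote-write payload: one series, one sample.
	req := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels:  []prompb.Label{{Name: "__name__", Value: "up"}, {Name: "job", Value: "node"}},
			Samples: []prompb.Sample{{Value: 1, Timestamp: 1700000000000}},
		}},
	}

	raw, err := proto.Marshal(req)
	if err != nil {
		return err
	}
	// Remote-write bodies are Snappy-compressed Protocol Buffers.
	compressed := snappy.Encode(nil, raw)

	// Placeholder URL and tenant ID; adjust for your deployment.
	httpReq, err := http.NewRequest(http.MethodPost, "http://cortex:9009/api/v1/push", bytes.NewReader(compressed))
	if err != nil {
		return err
	}
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	httpReq.Header.Set("X-Scope-OrgID", "tenant-1") // tenant ID header expected by Cortex

	resp, err := client.Do(httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}
```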
Incoming samples (writes from Prometheus) are handled by the distributor while incoming reads (PromQL queries) are handled by the query frontend.
Services
Cortex has a service-based architecture, in which the overall system is split up into a variety of components that perform specific tasks and run separately (and potentially in parallel).
Cortex is, for the most part, a shared-nothing system. Each layer of the system can run multiple instances of each component and they don't coordinate or communicate with each other within that layer.
Distributor
The distributor service is responsible for handling samples written by Prometheus. It's essentially the "first stop" in the write path for Prometheus samples. Once the distributor receives samples from Prometheus, it splits them into batches and then sends them to multiple ingesters in parallel.
Distributors communicate with ingesters via gRPC. They are stateless and can be scaled up and down as needed.
If the HA Tracker is enabled, the Distributor will deduplicate incoming samples that contain both a cluster and replica label. It talks to a KVStore to store state about which replica per cluster it's accepting samples from for a given user ID. Samples with one or neither of these labels will be accepted by default.
Hashing
Distributors use consistent hashing, in conjunction with the (configurable) replication factor, to determine which instances of the ingester service receive each sample.
The hash itself is based on one of two schemes:
- The metric name and tenant ID
- All the series labels and tenant ID
The trade-off associated with the latter is that writes are more balanced across ingesters, but each query must involve every ingester.
The former scheme (metric name and tenant ID) was originally chosen to reduce the number of ingesters required on the query path. The trade-off, however, is that the write load on the ingesters is less even.
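A minimal sketch of what goes into the two hash keys; the hash function and token encoding Cortex actually uses are internal details, so fnv32a here is purely illustrative:

```go
package hashing

import (
	"hash/fnv"
	"sort"
)

// seriesToken hashes either (tenant ID, metric name) or (tenant ID, all labels),
// mirroring the two schemes described above. The real implementation's hash
// function and encoding differ; fnv32a is used only for illustration.
func seriesToken(tenantID, metricName string, labels map[string]string, useAllLabels bool) uint32 {
	h := fnv.New32a()
	h.Write([]byte(tenantID))
	if !useAllLabels {
		h.Write([]byte(metricName))
		return h.Sum32()
	}
	// Hash every label pair in a deterministic order.
	names := make([]string, 0, len(labels))
	for name := range labels {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		h.Write([]byte(name))
		h.Write([]byte(labels[name]))
	}
	return h.Sum32()
}
```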
The hash ring
A consistent hash ring is stored in Consul as a single key-value pair, with the ring data structure encoded as a Protobuf message. The consistent hash ring consists of a list of tokens and ingesters. Hashed values are looked up in the ring, and the replication set is built from the closest unique ingesters by token. One of the benefits of this scheme is that adding or removing ingesters results in only 1/N of the series being moved (where N is the number of ingesters).
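A simplified ring lookup, assuming a sorted token list and walking clockwise from the hashed value; the real ring assigns many tokens per ingester and also tracks ingester state:

```go
package ring

import "sort"

type tokenDesc struct {
	Token    uint32
	Ingester string
}

// lookup walks the ring clockwise from the hashed value and returns the
// first replicationFactor distinct ingesters. Tokens must be sorted ascending.
func lookup(tokens []tokenDesc, hash uint32, replicationFactor int) []string {
	start := sort.Search(len(tokens), func(i int) bool { return tokens[i].Token >= hash })
	seen := map[string]bool{}
	var replicas []string
	for i := 0; i < len(tokens) && len(replicas) < replicationFactor; i++ {
		t := tokens[(start+i)%len(tokens)] // wrap around the ring
		if !seen[t.Ingester] {
			seen[t.Ingester] = true
			replicas = append(replicas, t.Ingester)
		}
	}
	return replicas
}
```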
Quorum consistency
All distributors share access to the same hash ring, which means that write requests can be sent to any distributor.
To ensure consistent query results, Cortex uses Dynamo-style quorum consistency on reads and writes. This means that the distributor waits for a positive response from at least one half plus one of the ingesters to which it sent the sample before responding to the user.
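A sketch of the write-side quorum wait, assuming the sample has already been pushed to the full replication set in parallel and each push reports its result on a channel:

```go
package distributor

import "errors"

// waitQuorum returns once at least one half plus one of the pushes succeed,
// or fails as soon as too many pushes have errored for quorum to be reachable.
func waitQuorum(results <-chan error, replicas int) error {
	quorum := replicas/2 + 1
	succeeded, failed := 0, 0
	for i := 0; i < replicas; i++ {
		if err := <-results; err != nil {
			failed++
			if failed > replicas-quorum {
				return errors.New("too many ingester failures, quorum not reached")
			}
		} else {
			succeeded++
			if succeeded >= quorum {
				return nil
			}
		}
	}
	return errors.New("quorum not reached")
}
```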
Load balancing across distributors
We recommend randomly load balancing write requests across distributor instances, ideally by running the distributors as a Kubernetes Service.
Ingester
The ingester service is responsible for writing sample data to long-term storage backends (DynamoDB, S3, Cassandra, etc.).
Samples from each timeseries are built up in "chunks" in memory inside each ingester, then flushed to the chunk store. By default each chunk is up to 12 hours long.
If an ingester process crashes or exits abruptly, all the data that has not yet been flushed will be lost. Cortex is usually configured to hold multiple (typically 3) replicas of each timeseries to mitigate this risk.
A hand-over process manages the state when ingesters are added, removed or replaced.
Write de-amplification
Ingesters store the last 12 hours worth of samples in order to perform write de-amplification, i.e. batching and compressing samples for the same series and flushing them out to the chunk store. Under normal operations, there should be many orders of magnitude fewer operations per second (OPS) worth of writes to the chunk store than to the ingesters.
Write de-amplification is the main source of Cortex's low total cost of ownership (TCO).
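A toy illustration of the idea: samples are appended to an in-memory chunk per series, and only the completed chunk is written to the store, so the store sees one write per chunk rather than one per sample. Chunk encoding, limits, and flush triggers in the real ingester are considerably more involved.

```go
package ingester

import "time"

type sample struct {
	TimestampMs int64
	Value       float64
}

type chunk struct {
	FirstTimestampMs int64
	Samples          []sample
}

type chunkStore interface {
	Put(seriesID string, c chunk) error
}

type memorySeries struct {
	head chunk
}

// append buffers the sample in memory and writes the chunk out only once it
// spans the maximum chunk age (12 hours here, for illustration).
func (s *memorySeries) append(seriesID string, smp sample, store chunkStore) error {
	if len(s.head.Samples) == 0 {
		s.head.FirstTimestampMs = smp.TimestampMs
	}
	s.head.Samples = append(s.head.Samples, smp)

	maxAge := (12 * time.Hour).Milliseconds()
	if smp.TimestampMs-s.head.FirstTimestampMs >= maxAge {
		if err := store.Put(seriesID, s.head); err != nil {
			return err
		}
		s.head = chunk{} // start a new chunk
	}
	return nil
}
```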
Ruler
Ruler executes PromQL queries for Recording Rules and Alerts. Ruler is configured from a database, so that different rules can be set for each tenant.
All the rules for one instance are executed as a group, then rescheduled to be executed again 15 seconds later. Execution is done by a 'worker' running on a goroutine; if you don't have enough workers, the ruler will lag behind.
Ruler can be scaled horizontally.
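A rough sketch of that worker model: rule groups are placed on a queue and a fixed pool of goroutines evaluates them, so if groups arrive faster than the workers can drain them, evaluation lags. The types and the evaluate stub below are illustrative, not the ruler's actual API.

```go
package ruler

import "sync"

type ruleGroup struct {
	Tenant string
	Name   string
}

// evaluate stands in for running the group's PromQL expressions.
func evaluate(g ruleGroup) { /* ... */ }

// runWorkers drains the queue with a fixed number of goroutines and returns
// once the queue is closed and fully drained.
func runWorkers(queue <-chan ruleGroup, workers int) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for g := range queue {
				evaluate(g)
			}
		}()
	}
	wg.Wait()
}
```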
AlertManager
AlertManager is responsible for accepting alert notifications from the Ruler, grouping them, and passing them on to a notification channel such as email, PagerDuty, etc.
Like the Ruler, AlertManager is configured per-tenant in a database.
Query frontend
The query frontend is an optional service that accepts HTTP requests, queues them by tenant ID, and retries in case of errors.
The query frontend is completely optional; you can use queriers directly. To use the query frontend, direct incoming authenticated reads at them and set the -querier.frontend-address flag on the queriers.
Queueing
Queuing performs a number of functions for the query frontend (a simplified scheduling sketch follows this list):
- It ensures that large queries that cause an out-of-memory (OOM) error in the querier will be retried. This allows administrators to under-provision memory for queries, or optimistically run more small queries in parallel, which helps to reduce TCO.
- It prevents multiple large requests from being convoyed on a single querier by distributing them first-in/first-out (FIFO) across all queriers.
- It prevents a single tenant from denial-of-service-ing (DoSing) other tenants by fairly scheduling queries between tenants.
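A toy version of fair, per-tenant FIFO scheduling: each tenant gets its own FIFO queue, and tenants are served round-robin, so one tenant's backlog cannot starve the others. The real frontend also tracks connected queriers and per-tenant limits, among other things.

```go
package frontend

type request struct {
	Tenant string
	Query  string
}

// queues holds one FIFO queue per tenant plus a round-robin order over tenants.
type queues struct {
	perTenant map[string][]request
	order     []string
	next      int
}

func newQueues() *queues {
	return &queues{perTenant: map[string][]request{}}
}

func (q *queues) enqueue(r request) {
	if _, ok := q.perTenant[r.Tenant]; !ok {
		q.order = append(q.order, r.Tenant)
	}
	q.perTenant[r.Tenant] = append(q.perTenant[r.Tenant], r)
}

// dequeue returns the oldest request of the next tenant in round-robin order.
func (q *queues) dequeue() (request, bool) {
	for i := 0; i < len(q.order); i++ {
		tenant := q.order[(q.next+i)%len(q.order)]
		if pending := q.perTenant[tenant]; len(pending) > 0 {
			q.perTenant[tenant] = pending[1:]
			q.next = (q.next + i + 1) % len(q.order)
			return pending[0], true
		}
	}
	return request{}, false
}
```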
Splitting
The query frontend splits multi-day queries into multiple single-day queries, executing these queries in parallel on downstream queriers and stitching the results back together again. This prevents large, multi-day queries from OOMing a single querier and helps them execute faster.
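A sketch of the day-splitting step, assuming millisecond timestamps and UTC day boundaries; each resulting interval can then be dispatched to a querier in parallel:

```go
package frontend

import "time"

type interval struct {
	StartMs int64
	EndMs   int64
}

// splitByDay cuts [startMs, endMs) into sub-intervals that do not cross a
// UTC day boundary.
func splitByDay(startMs, endMs int64) []interval {
	const dayMs = int64(24 * time.Hour / time.Millisecond)
	var out []interval
	for cur := startMs; cur < endMs; {
		next := (cur/dayMs + 1) * dayMs // start of the next UTC day
		if next > endMs {
			next = endMs
		}
		out = append(out, interval{StartMs: cur, EndMs: next})
		cur = next
	}
	return out
}
```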
Caching
The query frontend caches query results and reuses them on subsequent queries. If the cached results are incomplete, the query frontend calculates the required subqueries and executes them in parallel on downstream queriers. The query frontend can optionally align queries with their step parameter to improve the cacheability of the query results.
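Step alignment amounts to rounding the query range to multiples of the step, so repeated dashboard refreshes map to identical, cacheable sub-queries. The sketch below shows one plausible alignment; the real middleware's exact rounding may differ.

```go
package frontend

// alignRange rounds the start and end timestamps down to a multiple of the
// query step (all values in milliseconds).
func alignRange(startMs, endMs, stepMs int64) (alignedStart, alignedEnd int64) {
	return startMs - startMs%stepMs, endMs - endMs%stepMs
}
```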
Parallelism
The query frontend job accepts gRPC streaming requests from the queriers, which then "pull" requests from the frontend. For high availability it's recommended that you run multiple frontends; the queriers will connect to—and pull requests from—all of them. To reap the benefit of fair scheduling, it is recommended that you run fewer frontends than queriers. Two should suffice in most cases.
Querier
The querier service handles the actual PromQL evaluation of samples stored in long-term storage.
It embeds the chunk store client code for fetching data from long-term storage and communicates with ingesters for more recent data.
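Conceptually, a query fetches recent samples from the ingesters and older samples from long-term storage, then merges and de-duplicates them before PromQL evaluation. A simplified merge of two timestamp-sorted streams, purely as a sketch of that step:

```go
package querier

type sample struct {
	TimestampMs int64
	Value       float64
}

// mergeSamples merges two timestamp-sorted sample slices for the same series,
// keeping a single copy when both sources have a sample at the same timestamp.
func mergeSamples(fromIngesters, fromStore []sample) []sample {
	out := make([]sample, 0, len(fromIngesters)+len(fromStore))
	i, j := 0, 0
	for i < len(fromIngesters) && j < len(fromStore) {
		switch {
		case fromIngesters[i].TimestampMs < fromStore[j].TimestampMs:
			out = append(out, fromIngesters[i])
			i++
		case fromIngesters[i].TimestampMs > fromStore[j].TimestampMs:
			out = append(out, fromStore[j])
			j++
		default: // duplicate timestamp: keep one copy
			out = append(out, fromIngesters[i])
			i++
			j++
		}
	}
	out = append(out, fromIngesters[i:]...)
	return append(out, fromStore[j:]...)
}
```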
Chunk store
The chunk store is Cortex's long-term data store, designed to support interactive querying and sustained writing without the need for background maintenance tasks. It consists of:
- An index for the chunks. This index can be backed by DynamoDB from Amazon Web Services, Bigtable from Google Cloud Platform, or Apache Cassandra.
- A key-value (KV) store for the chunk data itself, which can be DynamoDB, Bigtable, Cassandra again, or an object store such as Amazon S3.
Unlike the other core components of Cortex, the chunk store is not a separate service, job, or process, but rather a library embedded in the three services that need to access Cortex data: the ingester, querier, and ruler.
The chunk store relies on a unified interface to the "NoSQL" stores—DynamoDB, Bigtable, and Cassandra—that can be used to back the chunk store index. This interface assumes that the index is a collection of entries keyed by:
- A hash key. This is required for all reads and writes.
- A range key. This is required for writes and can be omitted for reads, which can be queried by prefix or range.
The interface works somewhat differently across the supported databases (a toy sketch of the entry model follows this list):
- DynamoDB supports range and hash keys natively. Index entries are thus modelled directly as DynamoDB entries, with the hash key as the distribution key and the range as the range key.
- For Bigtable and Cassandra, index entries are modelled as individual column values. The hash key becomes the row key and the range key becomes the column key.
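The entries themselves can be pictured like this; the type and method names are illustrative, not Cortex's actual Go interfaces:

```go
package chunkstore

import (
	"sort"
	"strings"
)

// indexEntry is one row of the chunk index: the hash key picks the partition,
// the range key orders entries within it, and the value typically points at a chunk.
type indexEntry struct {
	HashKey  string
	RangeKey string
	Value    []byte
}

// memoryIndex is a toy backend that mimics the hash/range interface: writes
// need both keys, reads need the hash key plus an optional range-key prefix.
type memoryIndex struct {
	rows map[string][]indexEntry // hash key -> entries sorted by range key
}

func newMemoryIndex() *memoryIndex {
	return &memoryIndex{rows: map[string][]indexEntry{}}
}

func (m *memoryIndex) write(e indexEntry) {
	rows := append(m.rows[e.HashKey], e)
	sort.Slice(rows, func(i, j int) bool { return rows[i].RangeKey < rows[j].RangeKey })
	m.rows[e.HashKey] = rows
}

func (m *memoryIndex) queryPrefix(hashKey, rangePrefix string) []indexEntry {
	var out []indexEntry
	for _, e := range m.rows[hashKey] {
		if strings.HasPrefix(e.RangeKey, rangePrefix) {
			out = append(out, e)
		}
	}
	return out
}
```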
A set of schemas is used to map the matchers and label sets used on reads and writes to the chunk store into appropriate operations on the index. Schemas have been added as Cortex has evolved, mainly in an attempt to better balance writes and improve query performance.
The current schema recommendation is the v10 schema.