Scaling to 1 million active GraphQL subscriptions (live queries)

Hasura is a GraphQL engine on Postgres that provides instant GraphQL APIs with authorization. Read more at hasura.io and on github.com/hasura/graphql-engine.

Hasura allows 'live queries' for clients (over GraphQL subscriptions). For example, a food ordering app can use a live query to show the live-status of an order for a particular user.

This document describes Hasura's architecture which lets you scale to handle a million active live queries.

TL;DR:

The setup: Each client (a web/mobile app) subscribes to data or a live result with an auth token. The data is in Postgres. 1 million rows are updated in Postgres every second, ensuring a new result is pushed to each client. Hasura is the GraphQL API provider (with authorization).

The test: How many concurrent live subscriptions (clients) can Hasura handle? Does Hasura scale vertically and/or horizontally?

Single instance configuration    No. of active live queries    CPU load average
1xCPU, 2GB RAM                   5000                          60%
2xCPU, 4GB RAM                   10000                         73%
4xCPU, 8GB RAM                   20000                         90%

At 1 million live queries, Postgres is under about 28% load, with the peak number of connections at around 850.

Notes on configuration:

  • AWS RDS Postgres, Fargate, and ELB were used with their default configurations and without any tuning.
  • RDS Postgres: 16xCPU, 64GB RAM, Postgres 11
  • Hasura running on Fargate (4xCPU, 8GB RAM per instance) with default configurations

GraphQL and subscriptions

GraphQL makes it easy for app developers to query for precisely the data they want from their API.

For example, let’s say we’re building a food delivery app. Here’s what the schema might look like on Postgres:
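A minimal sketch, assuming orders, users, and agents tables (all table and column names here are illustrative):

```sql
-- Illustrative Postgres schema for the food delivery app
-- (names are assumptions, not the exact schema from the post)
CREATE TABLE users (
  id   serial PRIMARY KEY,
  name text NOT NULL
);

CREATE TABLE agents (
  id               serial PRIMARY KEY,
  name             text NOT NULL,
  current_location point              -- last known delivery location
);

CREATE TABLE orders (
  id         serial PRIMARY KEY,
  user_id    integer NOT NULL REFERENCES users (id),
  agent_id   integer REFERENCES agents (id),
  status     text NOT NULL,           -- e.g. 'placed', 'cooking', 'delivering'
  created_at timestamptz NOT NULL DEFAULT now()
);
```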

For an app screen showing the “order status” for the current user's order, a GraphQL query would fetch the latest order status and the location of the agent delivering the order.
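Such a query might look like this (a sketch in Hasura-style GraphQL, using the illustrative schema above; the agent relationship name is an assumption):

```graphql
query {
  orders(where: { id: { _eq: 123 } }) {
    id
    status
    agent {
      name
      current_location
    }
  }
}
```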

Underneath, this query is sent as a string to a webserver that parses it, applies authorization rules, and makes the appropriate calls (to databases and other services) to fetch the data for the app. It sends this data back, in the exact shape that was requested, as JSON.

Enter live queries: a live query lets a client subscribe to the latest result of a particular query. As the underlying data changes, the server should push the latest result to the client.

On the surface this is a perfect fit with GraphQL because GraphQL clients support subscriptions that take care of dealing with messy websocket connections. Converting a query to a live query might look as simple as replacing the word query with subscription for the client. That is, if the GraphQL server can implement it.
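For the order-status query sketched above, the live-query version would be:

```graphql
subscription {
  orders(where: { id: { _eq: 123 } }) {
    id
    status
    agent {
      name
      current_location
    }
  }
}
```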

Implementing GraphQL live-queries

Implementing live-queries is painful. Once you have a database query that has all the authorization rules, it might be possible to incrementally compute the result of the query as events happen. But this is practically challenging to do at the web-service layer. For databases like Postgres, it is equivalent to the hard problem of keeping a materialized view up to date as underlying tables change. An alternative approach is to refetch all the data for a particular query (with the appropriate authorization rules for the specific client). This is the approach we currently take.

Secondly, building a webserver that handles websockets in a scalable way can also be a little hairy, but certain frameworks and languages do make the required concurrent programming a little more tractable.

Refetching results for a GraphQL query

To understand why refetching a GraphQL query is hard, let’s look at how a GraphQL query is typically processed:

The authorization + data-fetching logic has to run for each “node” in the GraphQL query. This is scary, because even a slightly large query fetching a collection could bring down the database quite easily. The N+1 query problem, also common with badly implemented ORMs, is bad for your database and makes it hard to query Postgres optimally. Data-loader type patterns can alleviate the problem, but will still query the underlying Postgres database multiple times (reducing the number of queries to the number of nodes in the GraphQL query, rather than the number of items in the response).
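As a sketch of the N+1 pattern, a naive resolver-per-node execution of the order query (using the illustrative schema above) would hit Postgres once for the parent and once per returned row for each nested node:

```sql
-- Parent node: one query for the orders collection
SELECT id, status, agent_id FROM orders WHERE user_id = 11;

-- Child node: one additional query per returned order row (the N in N+1)
SELECT name, current_location FROM agents WHERE id = 57;
SELECT name, current_location FROM agents WHERE id = 64;
-- ... repeated once per order
```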

For live queries, this problem becomes worse, because each client's query translates into an independent refetch. Even though the queries are “the same”, the authorization rules create different session variables, so an independent refetch is required for each client.

Hasura approach

Is it possible to do better? What if a declarative mapping from the data models to the GraphQL schema could be used to create a single SQL query to the database? This would avoid multiple hits to the database, whether the response contains a large number of items or the GraphQL query has a large number of nodes.

Idea #1: “Compile” a GraphQL query to a single SQL query

Part of Hasura is a transpiler that uses metadata describing the mapping from the data models to the GraphQL schema to “compile” GraphQL queries to SQL queries that fetch data from the database.

GraphQL query → GraphQL AST → SQL AST → SQL

This gets rid of the N+1 query problem and allows the database to optimise data-fetching now that it can see the entire query.
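For the order-status query, the compiled SQL might look roughly like this single statement (a sketch under the illustrative schema; Hasura's actual generated SQL differs in detail):

```sql
SELECT
  json_agg(
    json_build_object(
      'id',     o.id,
      'status', o.status,
      'agent',  json_build_object(
                  'name',             a.name,
                  'current_location', a.current_location
                )
    )
  ) AS result
FROM orders o
LEFT JOIN agents a ON a.id = o.agent_id
WHERE o.id = 123;
```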

But this in itself isn't enough, as resolvers also enforce authorization rules by fetching only the data that is allowed. We will therefore need to embed these authorization rules into the generated SQL.

Idea #2: Make authorization declarative

Authorization, when it comes to accessing data, is essentially a constraint that depends on the values of the data (or rows) being fetched, combined with application-user-specific “session variables” that are provided dynamically. For example, in the most trivial case, a row might contain a user_id that denotes data ownership. Or documents that are viewable by a user might be represented in a related table, document_viewers. In other scenarios the session variable itself might contain the data-ownership information pertinent to a row: e.g., an account manager has access to accounts [1,2,3…], where that information is not present in the current database but is present in the session variable (probably provided by some other data system).

To model this, we implemented an authorization layer similar to Postgres RLS at the API layer to provide a declarative framework to configure access control. If you’re familiar with RLS, the analogy is that the “current session” variables in a SQL query are now HTTP “session-variables” coming from cookies or JWTs or HTTP headers.
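A minimal sketch of the analogy in actual Postgres RLS terms, for the ownership rule above (the policy and setting names are illustrative; Hasura applies the equivalent check at the API layer using HTTP session variables rather than a SQL setting):

```sql
-- Ownership rule as a Postgres RLS policy: a user sees only their own orders.
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

CREATE POLICY user_owns_order ON orders
  FOR SELECT
  USING (user_id = current_setting('session.user_id')::integer);
```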

Incidentally, because we had started the engineering work behind Hasura many years ago, we ended up implementing Postgres RLS features at the application layer before they landed in Postgres. We even had the same bug in our equivalent of the insert returning clause that Postgres RLS fixed.
