Reposted from: http://blog.fanout.io/2017/11/15/high-scalability-fanout-fastly/

Fanout Cloud is for high scale data push. Fastly is for high scale data pull. Many realtime applications need to work with data that is both pushed and pulled, and thus can benefit from using both of these systems in the same application. Fanout and Fastly can even be connected together!

Using Fanout and Fastly in the same application, independently, is pretty straightforward. For example, at initialization time, past content could be retrieved from Fastly, and Fanout Cloud could provide future pushed updates. What does it mean to connect the two systems together though? Read on to find out.

Proxy chaining

Since Fanout and Fastly both work as reverse proxies, it is possible to have Fanout proxy traffic through Fastly rather than sending it directly to your origin server. This provides some unique benefits:

  1. Cached initial data. Fanout lets you build API endpoints that serve both historical and future content, for example an HTTP streaming connection that returns some initial data before switching into push mode. Fastly can provide that initial data, reducing load on your origin server.

  2. Cached Fanout instructions. Fanout’s behavior (e.g. transport mode, channels to subscribe to, etc.) is determined by instructions provided in origin server responses, usually in the form of special headers such as Grip-Hold and Grip-Channel. Fastly can cache these instructions/headers, again reducing load on your origin server.

  3. High availability. If your origin server goes down, Fastly can serve cached data and instructions to Fanout. This means clients could connect to your API endpoint, receive historical data, and activate a streaming connection, all without needing access to the origin server.

Network flow

Suppose there’s an API endpoint /stream that returns some initial data and then stays open until there is an update to push. With Fanout, this can be implemented by having the origin server respond with instructions:

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 25
Grip-Hold: stream
Grip-Channel: updates

{"data": "current value"}
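
For illustration, here's a minimal sketch of such an origin endpoint as a Django view (Django is what the demo described later uses; the view name is an assumption):

from django.http import HttpResponse

def stream(request):
    # Return the current value as initial data, along with GRIP
    # instruction headers telling Fanout to hold the connection open
    # as a stream subscribed to the "updates" channel.
    resp = HttpResponse('{"data": "current value"}',
                        content_type='text/plain')
    resp['Grip-Hold'] = 'stream'
    resp['Grip-Channel'] = 'updates'
    return resp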

When Fanout Cloud receives this response from the origin server, it converts it into a streaming response to the client:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: Transfer-Encoding

{"data": "current value"}

The request between Fanout Cloud and the origin server is now finished, but the request between the client and Fanout Cloud remains open. Here’s a sequence diagram of the process:

Since the request to the origin server is just a normal short-lived request/response interaction, it can alternatively be served through a caching server such as Fastly. Here’s what the process looks like with Fastly in the mix:

Now, guess what happens when the next client makes a request to the /stream endpoint?

That’s right, the origin server isn’t involved at all! Fastly serves the same response to Fanout Cloud, with those special HTTP headers and initial data, and Fanout Cloud sets up a streaming connection with the client.

Of course, this is only the connection setup. To send updates to connected clients, the data must be published to Fanout Cloud.
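
One way to do that is with Fanout's gripcontrol library for Python. Here's a minimal sketch; the realm ID and key are placeholders for your own Fanout Cloud credentials:

from base64 import b64decode
from gripcontrol import GripPubControl

# Placeholder credentials from the Fanout Cloud dashboard.
pub = GripPubControl({
    'control_uri': 'https://api.fanout.io/realm/your-realm',
    'control_iss': 'your-realm',
    'key': b64decode('your-realm-key')
})

# Push new data to every client currently holding a streaming
# connection subscribed to the "updates" channel.
pub.publish_http_stream('updates', '{"data": "new value"}\n')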

We may also need to purge the Fastly cache if an event that triggers a publish changes the origin server response as well. For example, suppose the “value” that the /stream endpoint serves has changed. The new value could be published to all current connections, but we’d also want any new connections that arrive afterwards to receive the latest value rather than the older cached value. This can be solved by purging from Fastly and publishing to Fanout Cloud at the same time.
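
In code, the combined purge and publish might look like this sketch (reusing the pub object from above; the endpoint URL and API key are placeholders, and this uses Fastly's purge-by-URL mechanism, where an HTTP PURGE request invalidates the cached object):

import requests

FASTLY_API_KEY = 'your-fastly-api-key'  # placeholder

def purge_and_publish(new_value):
    # Invalidate the cached /stream response so that new connections
    # receive the latest value from the origin instead of a stale copy.
    # The Fastly-Key header applies when purging is restricted.
    requests.request('PURGE', 'https://example.com/stream',
                     headers={'Fastly-Key': FASTLY_API_KEY})

    # Then push the new value to clients that are already connected.
    pub.publish_http_stream('updates', '{"data": "%s"}\n' % new_value)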

Here’s a (long) sequence diagram of a client connecting, receiving an update, and then another client connecting:

At the end of this sequence, the first and second clients have both received the latest data.

Rate-limiting

One gotcha with purging at the same time as publishing: if your data rate is high, the frequent purges can negate the caching benefit of using Fastly.

The sweet spot is data that is accessed frequently (many new visitors per second), changes infrequently (on the order of minutes), and whose changes should be delivered instantly (sub-second). A live blog is a good example. In that case, most requests can be served from cache.

If your data changes multiple times per second (or has the potential to change that fast during peak moments), and you expect frequent access, you really don’t want to be purging your cache multiple times per second. The workaround is to rate-limit your purges. For example, during periods of high throughput, you might purge and publish at a maximum rate of once per second or so. This way the majority of new visitors can be served from cache, and the data will be updated shortly after.
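
One way to implement this is to coalesce bursts into a single deferred purge and publish. Here's a sketch building on the purge_and_publish function above; the one-second interval and the threading approach are illustrative choices:

import threading
import time

class RateLimitedPurger:
    # Coalesces calls so that purge_and_publish runs at most once per
    # interval. Bursts collapse into one deferred call that carries
    # the most recent value.
    def __init__(self, interval=1.0):
        self.interval = interval
        self.lock = threading.Lock()
        self.last_run = 0.0
        self.pending = None
        self.timer = None

    def update(self, value):
        with self.lock:
            now = time.time()
            if self.timer is None and now - self.last_run >= self.interval:
                # Enough time has passed: purge and publish right away.
                self.last_run = now
                purge_and_publish(value)
            else:
                # Too soon, or a flush is already scheduled: remember
                # the latest value and make sure one flush is pending.
                self.pending = value
                if self.timer is None:
                    delay = self.interval - (now - self.last_run)
                    self.timer = threading.Timer(delay, self._flush)
                    self.timer.start()

    def _flush(self):
        with self.lock:
            self.timer = None
            self.last_run = time.time()
            value, self.pending = self.pending, None
        purge_and_publish(value)

On every data change, the application would call update() with the latest value, and the limiter takes care of collapsing rapid changes into at most one purge and publish per interval.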

An example

We created a Live Counter Demo to show off this combined Fanout + Fastly architecture. Requests first go to Fanout Cloud, then to Fastly, then to a Django backend server that manages the counter API logic. Whenever a counter is incremented, the Fastly cache is purged and the data is published through Fanout Cloud. The purge and publish process is also rate-limited to maximize the caching benefit.

The code for the demo is on GitHub.
