A Comparison Between Misultin, Mochiweb, Cowboy, NodeJS and Tornadoweb
http://www.oschina.net/translate/a-comparison-between-misultin-mochiweb-cowboy-nodejs-and-tornadoweb
Original English article: A comparison between Misultin, Mochiweb, Cowboy, NodeJS and Tornadoweb
As some of you already know, I'm the author of Misultin, a lightweight Erlang HTTP server library. I'm interested in HTTP servers; I spend quite some time trying them out, and I am always interested in comparing them from different perspectives. Today I wanted to run the same benchmark against various HTTP server libraries:
I've chosen these libraries because they are the ones which currently interest me the most. Misultin, obviously, since I wrote it; Mochiweb, since it's a very solid library widely used in production (afaik it has been used, or is still used, to power the Facebook Chat, amongst other things); Cowboy, a newly born lib whose programmer is very active in the Erlang community; NodeJS, since bringing JavaScript to the backend has opened up a whole new world of possibilities (code reusable in the frontend, ease of access for various programmers, ...); and finally Tornadoweb, since Python still remains one of my favourite languages out there, and Tornadoweb has been excelling in loads of benchmarks and in production, powering FriendFeed.
Two main ideas are behind this benchmark. First, I did not want to do a "Hello World" kind of test: we have static servers such as Nginx that perform wonderfully in such tasks. This benchmark needed to address dynamic servers. Second, I wanted sockets to get periodically closed down, since having all the load on a few sockets scarcely corresponds to real-life situations.
For the latter reason, I decided to use a patched version of HttPerf. It's a widely known and used benchmarking tool from HP, which basically tries to send a desired number of requests out to a server and reports how many of these actually got replied to, and how many errors were experienced in the process (together with a variety of other pieces of information). A great thing about HttPerf is that you can set a parameter, called --num-calls, which sets the number of calls per session (i.e. socket connection) before the socket gets closed by the client. The command issued in these tests was:
httperf --timeout=5 --client=0/1 --server= --port=8080 --uri=/?value=benchmarks --rate= --send-buffer=4096

The value of rate has been set incrementally between 100 and 1,200. Since the number of requests/sec = rate * num-calls, the tests were conducted for a desired number of responses/sec incrementing from 1,000 to 12,000. The total number of requests = num-conns * num-calls, which has therefore been kept at a fixed value of 50,000 across every test iteration.
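To make that arithmetic concrete, here is a small sketch of my own (not from the post): num_calls = 10 is an inference from the stated ranges (rate 100-1,200 mapping onto 1,000-12,000 requests/sec), not a value given explicitly in the text.

```python
def requests_per_sec(rate, num_calls=10):
    # rate = new connections opened per second by httperf;
    # each connection carries num_calls requests before the
    # client closes the socket.
    return rate * num_calls

# the rate sweep described in the post, from 100 up to 1,200
for rate in (100, 600, 1200):
    print(rate, "->", requests_per_sec(rate), "requests/sec")
```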
The test basically asks servers to:
Therefore, what is being tested is:
The server is a virtualized, up-to-date Ubuntu 10.04 LTS with 2 CPUs and 1.5 GB of RAM. Its /etc/sysctl.conf file has been tuned with these parameters:

# Maximum TCP Receive Window
The /etc/security/limits.conf file has been tuned so that ulimit -n is set to 65535 for both hard and soft limits.
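A limits.conf fragment matching that description would look like the following. The exact entries used on the test machine are not shown in the post; this is a sketch based on the stated 65535 hard and soft limits:

```
# /etc/security/limits.conf
# Raise the per-process open-file-descriptor limit so the server can
# hold tens of thousands of concurrent sockets (ulimit -n -> 65535).
*    soft    nofile    65535
*    hard    nofile    65535
```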
Here is the code for the different servers.

Misultin

-module(misultin_bench).

Mochiweb

-module(mochi_bench).

Note: I'm using the misultin_utility:get_key_value/2 function inside this code since proplists:get_value/2 is much slower.

Cowboy

-module(cowboy_bench).
-module(cowboy_bench_handler).

NodeJS

var http = require('http'),
    url = require('url');

Tornadoweb

import tornado.ioloop
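The full listings are truncated above. As an illustration only (this is not the author's code, and the response format is hypothetical), the dynamic work each benchmarked server performs is roughly the same: parse the value parameter out of the request's query string and build a small response around it. A stdlib-only Python sketch of that per-request logic:

```python
from urllib.parse import urlparse, parse_qs

def handle(uri):
    """Sketch of the per-request work: extract the 'value' query
    parameter from a URI like /?value=benchmarks and echo it back."""
    query = urlparse(uri).query          # e.g. "value=benchmarks"
    params = parse_qs(query)             # e.g. {"value": ["benchmarks"]}
    value = params.get("value", ["unknown"])[0]
    return "bench: " + value             # hypothetical response body

print(handle("/?value=benchmarks"))
```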
I took this code and ran it against:
All the libraries have been run with the standard settings. Erlang was launched with Kernel Polling enabled, and with SMP disabled so that a single CPU was used by all the libraries.

Test results

The raw printout of the HttPerf results I got can be downloaded from here.

[Graphs omitted: expected vs. achieved responses/sec, timeout errors, response times, and total responses. Note: the response-times graph has a logarithmic Y scale.]

According to these, we see that Tornadoweb tops out at around 1,500 responses/second, NodeJS at 3,000, Mochiweb at 4,850, Cowboy at 8,600 and Misultin at 9,700. While Misultin and Cowboy experience very little or no error at all, the other servers seem to buckle under the load. Please note that "Errors" are timeout errors (over 5 seconds without a reply). The total responses and response times speak for themselves.
I have to say that I'm surprised by these results, to the point that I'd like to have feedback on the code and methodology, along with alternate tests that can be performed. Any input is welcome, and I'm available to update this post and correct any errors I've made, as an ongoing discussion with whoever wants to contribute.
However, please do refrain from flame wars, which are not welcome here. I have published this post exactly because I was surprised by the results I got. What is your opinion on all this?
—————————————————–
UPDATE (May 16th, 2011)

Due to the success of these benchmarks, I want to stress an important point to keep in mind when you read any of them (including mine). Benchmarks are often misleadingly interpreted as "the higher you are on a graph, the better *lib-of-the-moment-name-here* is at doing everything". This is absolutely the wrong way to look at them. I cannot stress this point enough.
'Fast' is only one of the 'n' features you desire from a web server library: you definitely want to consider stability, features, ease of maintenance, low standard deviation, code usability, community, development speed, and many other factors when choosing the library best suited to your own application. There is no such thing as a generic benchmark. These results relate to a very specific situation: fast application computation times, loads of connections, and small data transfers.
Therefore, please take these results with a grain of salt and do not jump to generic conclusions regarding any of the cited libraries, all of which, as I clearly stated at the beginning of my post, I find interesting and valuable. And I am still very open to criticism of the described methodology, or of other things I might have missed. Thank you, r.