It is well known that MySQL query performance drops badly once a table gets large. If you ask for something like OFFSET 100000 LIMIT 5, the database has to skip over the first 100,000 rows before it can hand back the 5 rows you actually want. Because rows on disk are not all the same length, there is no way to optimize that skip: the database has to read row after row, find where each one ends, and move on to the next, which kills performance. The fix is the "seek" method (keyset pagination); see this article for the background.
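To make the contrast concrete, here is a minimal jOOQ sketch of the two approaches. It reuses the placeholder table `T`, its `ID` and `VALUE` columns, and the `configuration` object from the examples later in this post; the seek variant is what the rest of the article explains.

```java
// OFFSET pagination: the database still reads and discards the first 100,000 rows.
DSL.using(configuration)
   .select(T.ID, T.VALUE)
   .from(T)
   .orderBy(T.VALUE, T.ID)
   .limit(5)
   .offset(100000)
   .fetch();

// Seek pagination: start right after the last row of the previous page,
// so the skipped rows are never touched.
DSL.using(configuration)
   .select(T.ID, T.VALUE)
   .from(T)
   .orderBy(T.VALUE, T.ID)
   .seek(2, 533)   // (value, id) of the last row on the previous page
   .limit(5)
   .fetch();
```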

Suppose you have data like this:

| ID  | VALUE | PAGE_BOUNDARY              |
|-----|-------|----------------------------|
| ... | ...   | ...                        |
| 474 | 2     | 0                          |
| 533 | 2     | 1 <-- last row on page 5   |
| 640 | 2     | 0                          |
| 776 | 2     | 0                          |
| 815 | 2     | 0                          |
| 947 | 2     | 0                          |
| 37  | 3     | 1 <-- last row on page 6   |
| 287 | 3     | 0                          |
| 450 | 3     | 0                          |
| ... | ...   | ...                        |

Say you need to return page 6 to the user. Instead of OFFSET and LIMIT, you do it this way: take the identifying values of the last row on page 5 (here, its value and id) and pass them into the SQL statement to mark where page 6 should start. Even if there are 100,000 rows before that point, the database does not have to care about them; a single predicate like `WHERE (value, id) > (?, ?)` skips straight past them to the 5 rows you need.

In SQL:

```sql
SELECT id, value
FROM t
WHERE (value, id) > (2, 533)
ORDER BY value, id
LIMIT 5
```

In jOOQ:

```java
DSL.using(configuration)
   .select(T.ID, T.VALUE)
   .from(T)
   .orderBy(T.VALUE, T.ID)
   .seek(2, 533)
   .limit(5)
   .fetch();
```
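For reference, the row value predicate `(value, id) > (2, 533)` is equivalent to an expanded `OR` form, which is roughly what jOOQ renders on dialects that do not handle row value comparisons well. A minimal sketch of writing that predicate by hand, using the same placeholder table `T`:

```java
// Hand-written equivalent of (value, id) > (2, 533):
// either value is strictly greater, or value ties and id is greater.
Condition afterLastRowOnPage5 = T.VALUE.gt(2)
    .or(T.VALUE.eq(2).and(T.ID.gt(533)));

DSL.using(configuration)
   .select(T.ID, T.VALUE)
   .from(T)
   .where(afterLastRowOnPage5)
   .orderBy(T.VALUE, T.ID)
   .limit(5)
   .fetch();
```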

The result is:

| ID  | VALUE |
|-----|-------|
| 640 | 2     |
| 776 | 2     |
| 815 | 2     |
| 947 | 2     |
| 37  | 3     |

This is exactly what the OFFSET/LIMIT query would return, but the performance is clearly better, since the database never touches the skipped rows.

In practice, seek is usually combined with jOOQ's dynamic SQL, both to assemble the various filter conditions and to distinguish a first-page query from a query for data further back. The code looks like this:

```java
// Build the WHERE condition dynamically
public Condition whereCondition(List<Integer> ids, List<String> wordList, String time, boolean own) {
    Condition condition = trueCondition();
    if (ids != null) {
        if (own)
            condition = condition.and(SELF_SITE.URL_ID.in(ids));
        else
            condition = condition.and(SELF_SITE.URL_ID.notIn(ids));
    }
    // Add the keyword conditions (any of the words may match)
    Condition wordCondition = falseCondition();
    for (String word : wordList) {
        wordCondition = wordCondition.or(SELF_SITE.NAME.contains(word));
    }
    condition = condition.and(wordCondition);
    // Add the time condition
    if (time != null) {
        Instant instant = Instant.ofEpochMilli(Long.parseLong(time));
        LocalDateTime localDateTime = LocalDateTime.ofInstant(instant, ZoneOffset.ofHours(8));
        condition = condition.and(SELF_SITE.FETCH_TIME.greaterOrEqual(localDateTime));
    }
    return condition;
}

@Override
public List<SelfSite> findData(List<Integer> ids, int self_site_id, List<String> wordList, String time, boolean own) {
    SelectQuery<Record> query = dsl.selectQuery();
    query.addFrom(SELF_SITE);
    query.addConditions(whereCondition(ids, wordList, time, own));
    query.addOrderBy(SELF_SITE.ID.desc());
    // First page: self_site_id == -1, no seek value yet.
    // Later pages: seek past the last id returned on the previous page.
    if (self_site_id != -1)
        query.addSeekAfter(DSL.val(self_site_id));
    query.addLimit(NUM_PER_PAGE);
    return query.fetch().into(SelfSite.class);
}
```
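A minimal usage sketch of the repository above; the `repository` reference and the `getId()` accessor on the generated `SelfSite` POJO are assumptions, not shown in the original. The caller passes `-1` for the first page, then feeds the last ID of each page back in for the next one; because the query orders by `ID` descending, that is the smallest ID on the page.

```java
// First page: no seek value yet, so pass -1 and addSeekAfter() is skipped.
List<SelfSite> firstPage = repository.findData(ids, -1, wordList, time, true);

if (!firstPage.isEmpty()) {
    // Next page: seek past the last id of the previous page. Since the query
    // orders by ID DESC, this is the smallest id on that page.
    int lastId = firstPage.get(firstPage.size() - 1).getId();
    List<SelfSite> nextPage = repository.findData(ids, lastId, wordList, time, true);
}
```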

MySQL can handle a decent amount of traffic as the business expands; however, once the database reaches maximum capacity, the e-commerce website will not work as efficiently as you want. This is because MySQL has difficulties and limitations in handling a few things, such as the following:

- Increased Availability: MySQL has a single point of failure at the master server. If the master goes down, there is downtime: customers cannot buy anything from your e-commerce site, and if the outage lasts too long they get frustrated and eventually take their business elsewhere, in some cases permanently. That translates directly into lost revenue.
- Increasing Reads and Writes: MySQL has capacity limits; as more and more customers complete transactions on your site, it does not take long before the database stalls. MySQL can scale reads through read slaves, but the application then has to be aware that those reads are not synchronized with the write master. For example, if a customer updates the products in their cart, that cart had better be read from the write master, or you risk showing the wrong available-to-promise quantities. That creates a bottleneck at checkout, which leads to abandoned carts, unmanaged and unsold inventory, selling inventory you don't have, refunds, negative social media exposure or, worse, unhappy customers who may never come back.
- Flexing Up and Down: MySQL writes do not scale via slaves, so the only way to absorb a traffic increase is to pay a premium and scale up. MySQL cannot flex up and down to match what your e-commerce business actually needs.

Tackling When it Reaches its Limit

There will come a time when MySQL reaches its limit and your e-commerce website stalls. Once your write master has been scaled as far as it will go, you can consider the following techniques:

- Re-platforming: moving the database from one platform to another, for example from Magento/MySQL to an Oracle-based platform.
- Sharding: partitioning the database across multiple servers. This helps improve MySQL's performance, even though it brings shortcomings of its own.

These techniques do enhance MySQL's performance, but they are complex and costly to implement, and they are not your only options.

Maintaining the Database
