Preface

    Today I'm going to deploy Consul in my environment to discover the MySQL database services and check whether master-slave replication is working normally.

Introduction

    Consul is a tool similar to ZooKeeper that can discover the services we've registered on it. It ships as a single binary and runs with a simple configuration file (JSON format).

Official website: https://www.consul.io/
 
Framework
 
| Hostname | IP             | Port | OS Version | MySQL Version | MySQL Role | Consul Version | Consul Role |
|----------|----------------|------|------------|---------------|------------|----------------|-------------|
| zlm1     | 192.168.56.100 | 3306 | CentOS 7.0 | 5.7.21        | master     | Consul v1.2.2  | server      |
| zlm2     | 192.168.56.101 | 3306 | CentOS 7.0 | 5.7.21        | slave      | Consul v1.2.2  | client      |
 
Procedure
 
Download and install Consul.
  [root@zlm1 :: /vagrant]
  #wget https://releases.hashicorp.com/consul/1.2.2/consul_1.2.2_linux_amd64.zip
  ---- ::-- https://releases.hashicorp.com/consul/1.2.2/consul_1.2.2_linux_amd64.zip
  Resolving releases.hashicorp.com (releases.hashicorp.com)... 151.101.229.183, 2a04:4e42:::
  Connecting to releases.hashicorp.com (releases.hashicorp.com)|151.101.229.183|:... connected.
  HTTP request sent, awaiting response... OK
  Length: (18M) [application/zip]
  Saving to: consul_1.2.2_linux_amd64.zip

  %[===========================================================================================================>] ,, .8KB/s in 6m 12s

  -- :: (48.3 KB/s) - consul_1.2.2_linux_amd64.zip saved [/]

  [root@zlm1 :: /vagrant/consul_1.2.2_linux_amd64]
  #ls -l
  total
  -rwxrwxrwx vagrant vagrant Jul : consul // The unzipped directory contains only the consul binary.

  [root@zlm1 :: /vagrant/consul_1.2.2_linux_amd64]
  #mkdir /etc/consul.d

  [root@zlm1 :: /vagrant/consul_1.2.2_linux_amd64]
  #mkdir /data/consul

  [root@zlm1 :: /vagrant/consul_1.2.2_linux_amd64]
  #cp consul ~

  [root@zlm1 :: ~]
  #cd /usr/local/bin

  [root@zlm1 :: /usr/local/bin]
  #ls -l|grep consul
  lrwxrwxrwx root root Aug : consul -> /root/consul

  [root@zlm1 :: /usr/local/bin]
  #consul --help
  Usage: consul [--version] [--help] <command> [<args>]

  Available commands are:
  agent          Runs a Consul agent
  catalog        Interact with the catalog
  connect        Interact with Consul Connect
  event          Fire a new event
  exec           Executes a command on Consul nodes
  force-leave    Forces a member of the cluster to enter the "left" state
  info           Provides debugging information for operators
  intention      Interact with Connect service intentions
  join           Tell Consul agent to join cluster
  keygen         Generates a new encryption key
  keyring        Manages gossip layer encryption keys
  kv             Interact with the key-value store
  leave          Gracefully leaves the Consul cluster and shuts down
  lock           Execute a command holding a lock
  maint          Controls node or service maintenance mode
  members        Lists the members of a Consul cluster
  monitor        Stream logs from a Consul agent
  operator       Provides cluster-level tools for Consul operators
  reload         Triggers the agent to reload configuration files
  rtt            Estimates network round trip time between nodes
  snapshot       Saves, restores and inspects snapshots of Consul server state
  validate       Validate config files/directories
  version        Prints the Consul version
  watch          Watch for changes in Consul
Start the Consul agent in "dev" mode.
  [root@zlm1 :: /usr/local/bin]
  #consul agent -dev
  ==> Starting Consul agent...
  ==> Consul agent running!
  Version: 'v1.2.2'
  Node ID: '7c839914-8a47-ab36-8920-1a9da54fc6c3'
  Node name: 'zlm1'
  Datacenter: 'dc1' (Segment: '<all>')
  Server: true (Bootstrap: false)
  Client Addr: [127.0.0.1] (HTTP: , HTTPS: -, DNS: )
  Cluster Addr: 127.0.0.1 (LAN: , WAN: )
  Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

  ==> Log data will now stream in as it occurs:

  // :: [DEBUG] agent: Using random ID "7c839914-8a47-ab36-8920-1a9da54fc6c3" as node ID
  // :: [INFO] raft: Initial configuration (index=): [{Suffrage:Voter ID:7c839914-8a47-ab36--1a9da54fc6c3 Address:127.0.0.1:}]
  // :: [INFO] serf: EventMemberJoin: zlm1.dc1 127.0.0.1
  // :: [INFO] serf: EventMemberJoin: zlm1 127.0.0.1
  // :: [INFO] raft: Node at 127.0.0.1: [Follower] entering Follower state (Leader: "")
  // :: [INFO] consul: Adding LAN server zlm1 (Addr: tcp/127.0.0.1:) (DC: dc1)
  // :: [INFO] consul: Handled member-join event for server "zlm1.dc1" in area "wan"
  // :: [DEBUG] agent/proxy: managed Connect proxy manager started
  // :: [WARN] agent/proxy: running as root, will not start managed proxies
  // :: [INFO] agent: Started DNS server 127.0.0.1: (tcp)
  // :: [INFO] agent: Started DNS server 127.0.0.1: (udp)
  // :: [INFO] agent: Started HTTP server on 127.0.0.1: (tcp)
  // :: [INFO] agent: started state syncer
  // :: [WARN] raft: Heartbeat timeout from "" reached, starting election
  // :: [INFO] raft: Node at 127.0.0.1: [Candidate] entering Candidate state in term
  // :: [DEBUG] raft: Votes needed:
  // :: [DEBUG] raft: Vote granted from 7c839914-8a47-ab36--1a9da54fc6c3 in term . Tally:
  // :: [INFO] raft: Election won. Tally:
  // :: [INFO] raft: Node at 127.0.0.1: [Leader] entering Leader state
  // :: [INFO] consul: cluster leadership acquired
  // :: [INFO] consul: New leader elected: zlm1
  // :: [INFO] connect: initialized CA with provider "consul"
  // :: [DEBUG] consul: Skipping self join check for "zlm1" since the cluster is too small
  // :: [INFO] consul: member 'zlm1' joined, marking health alive
  // :: [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
  // :: [INFO] agent: Synced node info
  // :: [DEBUG] agent: Node info in sync
  // :: [DEBUG] manager: Rebalanced servers, next active server is zlm1.dc1 (Addr: tcp/127.0.0.1:) (DC: dc1)

  // Now the Consul cluster has only one node, "zlm1".

  [root@zlm1 :: ~]
  #consul members
  Node Address Status Type Build Protocol DC Segment
  zlm1 127.0.0.1: alive server 1.2.2 dc1 <all>

  // Press "Ctrl+C" to exit Consul gracefully.
  ^C // :: [INFO] agent: Caught signal: interrupt
  // :: [INFO] agent: Graceful shutdown disabled. Exiting
  // :: [INFO] agent: Requesting shutdown
  // :: [WARN] agent: dev mode disabled persistence, killing all proxies since we can't recover them
  // :: [DEBUG] agent/proxy: Stopping managed Connect proxy manager
  // :: [INFO] consul: shutting down server
  // :: [WARN] serf: Shutdown without a Leave
  // :: [INFO] manager: shutting down
  // :: [INFO] agent: consul server down
  // :: [INFO] agent: shutdown complete
  // :: [INFO] agent: Stopping DNS server 127.0.0.1: (tcp)
  // :: [INFO] agent: Stopping DNS server 127.0.0.1: (udp)
  // :: [INFO] agent: Stopping HTTP server 127.0.0.1: (tcp)
  // :: [INFO] agent: Waiting for endpoints to shut down
  // :: [INFO] agent: Endpoints down
  // :: [INFO] agent: Exit code:

  [root@zlm1 :: /usr/local/bin]
  #ps aux|grep consul
  root 0.0 0.0 pts/ R+ : : grep --color=auto consul

  [root@zlm1 :: /usr/local/bin]
  #consul members
  Error retrieving members: Get http://127.0.0.1:8500/v1/agent/members?segment=_all: dial tcp 127.0.0.1:8500: connect: connection refused
Prepare the server configuration file and start it again.
  [root@zlm1 :: /etc/consul.d]
  #cat server.json
  {
      "data_dir": "/data/consul",
      "datacenter": "dc1",
      "log_level": "INFO",
      "server": true,
      "bootstrap_expect": 1,
      "bind_addr": "192.168.56.100",
      "client_addr": "192.168.56.100",
      "ports": {
      },
      "ui": true,
      "retry_join": [],
      "retry_interval": "3s",
      "raft_protocol": 3,
      "rejoin_after_leave": true
  }

  [root@zlm1 :: /etc/consul.d]
  #consul agent --config-dir=/etc/consul.d/ > /data/consul/consul.log 2>&1 &
  []

  [root@zlm1 :: /etc/consul.d]
  #ps aux|grep consul
  root 1.3 2.0 pts/ Sl : : consul agent --config-dir=/etc/consul.d/
  root 0.0 0.0 pts/ R+ : : grep --color=auto consul

  [root@zlm1 :: /etc/consul.d]
  #tail -f /data/consul/consul.log
  // :: [INFO] agent: Started HTTP server on 192.168.56.100: (tcp)
  // :: [INFO] agent: started state syncer
  // :: [WARN] raft: Heartbeat timeout from "" reached, starting election
  // :: [INFO] raft: Node at 192.168.56.100: [Candidate] entering Candidate state in term
  // :: [INFO] raft: Election won. Tally:
  // :: [INFO] raft: Node at 192.168.56.100: [Leader] entering Leader state
  // :: [INFO] consul: cluster leadership acquired
  // :: [INFO] consul: New leader elected: zlm1
  // :: [INFO] consul: member 'zlm1' joined, marking health alive
  // :: [INFO] agent: Synced node info

  // Because client_addr is 192.168.56.100, the CLI can no longer reach the default 127.0.0.1:8500 and must be pointed at the agent with --http-addr.
  [root@zlm1 :: ~]
  #consul members
  Error retrieving members: Get http://127.0.0.1:8500/v1/agent/members?segment=_all: dial tcp 127.0.0.1:8500: connect: connection refused

  [root@zlm1 :: ~]
  #consul members --http-addr=192.168.56.100:8500
  Node Address Status Type Build Protocol DC Segment
  zlm1 192.168.56.100: alive server 1.2.2 dc1 <all>
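Since the CLI prints members as a whitespace-separated table, a small Python 3 helper (hypothetical, not part of Consul) can parse `consul members` output so a script can act on node status:

```python
def parse_members(output):
    """Parse the tabular output of `consul members` into a list of dicts."""
    lines = [l for l in output.strip().splitlines() if l.strip()]
    headers = lines[0].split()          # first line is the column header row
    members = []
    for line in lines[1:]:
        members.append(dict(zip(headers, line.split())))
    return members

# Sample output shaped like the listing above (port shown is the serf LAN port).
sample = """Node  Address              Status  Type    Build  Protocol  DC   Segment
zlm1  192.168.56.100:8301  alive   server  1.2.2  2         dc1  <all>"""

for m in parse_members(sample):
    print(m["Node"], m["Status"], m["Type"])
```

This assumes no column value contains whitespace, which holds for the columns `consul members` emits.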
Add a client on zlm2.
  [root@zlm1 :: /etc/consul.d]
  #scp /root/consul zlm2:~
  consul % 80MB .3MB/s :

  [root@zlm2 :: ~]
  #mkdir /etc/consul.d

  [root@zlm2 :: ~]
  #mkdir /data/consul

  [root@zlm2 :: ~]
  #cd /etc/consul.d/

  [root@zlm2 :: /etc/consul.d]
  #ls -l
  total
  -rw-r--r-- root root Aug : client.json

  [root@zlm2 :: /etc/consul.d]
  #cat client.json
  {
      "data_dir": "/data/consul",
      "enable_script_checks": true,
      "bind_addr": "192.168.56.101",
      "retry_join": ["192.168.56.100"],
      "retry_interval": "30s",
      "rejoin_after_leave": true,
      "start_join": ["192.168.56.100"]
  }

  [root@zlm2 :: /etc/consul.d]
  #consul agent -client 192.168.56.101 -bind 192.168.56.101 --config-dir=/etc/consul.d
  ==> Starting Consul agent...
  ==> Joining cluster...
  Join completed. Synced with initial agents
  ==> Consul agent running!
  Version: 'v1.2.2'
  Node ID: 'a69eae21-4e31-7edf-1f1a-3ec285a8fb3b'
  Node name: 'zlm2'
  Datacenter: 'dc1' (Segment: '')
  Server: false (Bootstrap: false)
  Client Addr: [192.168.56.101] (HTTP: , HTTPS: -, DNS: )
  Cluster Addr: 192.168.56.101 (LAN: , WAN: )
  Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

  ==> Log data will now stream in as it occurs:

  // :: [INFO] serf: EventMemberJoin: zlm2 192.168.56.101
  // :: [INFO] agent: Started DNS server 192.168.56.101: (udp)
  // :: [WARN] agent/proxy: running as root, will not start managed proxies
  // :: [INFO] agent: Started DNS server 192.168.56.101: (tcp)
  // :: [INFO] agent: Started HTTP server on 192.168.56.101: (tcp)
  // :: [INFO] agent: (LAN) joining: [192.168.56.100]
  // :: [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce os packet scaleway softlayer triton vsphere
  // :: [INFO] agent: Joining LAN cluster...
  // :: [INFO] agent: (LAN) joining: [192.168.56.100]
  // :: [INFO] serf: EventMemberJoin: zlm1 192.168.56.100
  // :: [INFO] agent: (LAN) joined: Err: <nil>
  // :: [INFO] agent: started state syncer
  // :: [INFO] consul: adding server zlm1 (Addr: tcp/192.168.56.100:) (DC: dc1)
  // :: [INFO] agent: Join LAN completed. Synced with initial agents
  // :: [INFO] agent: Synced node info

  [root@zlm2 :: /etc/consul.d]
  #consul members --http-addr=192.168.56.100:8500
  Node Address Status Type Build Protocol DC Segment
  zlm1 192.168.56.100: alive server 1.2.2 dc1 <all>
  zlm2 192.168.56.101: alive client 1.2.2 dc1 <default>
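If several clients need the same settings, the client.json above can be templated per node. This is a Python 3 sketch of such a generator (a convenience helper, not anything Consul requires); the keys mirror the hand-written file:

```python
import json

def client_config(bind_addr, server_addrs, data_dir="/data/consul"):
    """Build a Consul client agent configuration dict matching client.json."""
    return {
        "data_dir": data_dir,
        "enable_script_checks": True,   # needed later for the MySQL script checks
        "bind_addr": bind_addr,
        "retry_join": server_addrs,
        "retry_interval": "30s",
        "rejoin_after_leave": True,
        "start_join": server_addrs,
    }

# Render the config for zlm2 joining the server on zlm1.
print(json.dumps(client_config("192.168.56.101", ["192.168.56.100"]), indent=4))
```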
Add two services on node zlm2.
  [root@zlm2 :: /etc/consul.d]
  #cat service_master_check.json
  {
      "service": {
          "name": "w_db3306",
          "tags": [
              "Master Status Check"
          ],
          "address": "192.168.56.100",
          "port": 3306,
          "check": {
              "args": [
                  "/data/consul/script/CheckMaster.py",
                  "3306"
              ],
              "interval": "15s"
          }
      }
  }

  [root@zlm2 :: /etc/consul.d]
  #cat service_slave_check.json
  {
      "service": {
          "name": "r_db3306",
          "tags": [
              "Slave Status Check"
          ],
          "address": "192.168.56.101",
          "port": 3306,
          "check": {
              "args": [
                  "/data/consul/script/CheckSlave.py",
                  "3306"
              ],
              "interval": "15s"
          }
      }
  }

  [root@zlm2 :: /etc/consul.d]
  #ls -l
  total
  -rw-r--r-- root root Aug : client.json
  -rw-r--r-- root root Aug : service_master_check.json
  -rw-r--r-- root root Aug : service_slave_check.json
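The two definitions differ only in name, tag, address, and script, so they can be generated from one Python 3 function. This is an illustrative sketch (the field names follow Consul's service-definition format, the values match the files above):

```python
import json

def mysql_service(name, tag, address, port, script, interval="15s"):
    """Build a Consul service definition with a script health check."""
    return {
        "service": {
            "name": name,
            "tags": [tag],
            "address": address,
            "port": port,
            "check": {
                # args: the check script plus the MySQL port it should probe
                "args": [script, str(port)],
                "interval": interval,
            },
        }
    }

master = mysql_service("w_db3306", "Master Status Check",
                       "192.168.56.100", 3306,
                       "/data/consul/script/CheckMaster.py")
slave = mysql_service("r_db3306", "Slave Status Check",
                      "192.168.56.101", 3306,
                      "/data/consul/script/CheckSlave.py")
print(json.dumps(master, indent=4))
```

Dumping each dict with `json.dumps(..., indent=4)` reproduces the JSON files shown above.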
Check the CheckMaster.py script.
  [root@zlm2 :: /data/consul/script]
  #cat CheckMaster.py
  #!/usr/bin/python
  import sys
  import pymysql

  port = int(sys.argv[1])
  var = {}
  conn = pymysql.connect(host='192.168.56.100', port=port, user='zlm', passwd='zlmzlm')
  cur = conn.cursor()

  cur.execute("show global variables like '%read_only%'")
  rows = cur.fetchall()

  for r in rows:
      var[r[0]] = r[1]

  # Consul script checks treat exit code 0 as passing.
  if var['read_only'] == 'OFF' and var['super_read_only'] == 'OFF':
      print "MySQL %d is a master instance." % port
  else:
      print "This is a read-only instance."

  cur.close()
  conn.close()
  sys.exit(0)
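The decision CheckMaster.py makes can be pulled out into a pure function and tested without a live MySQL connection. A Python 3 sketch, where `rows` has the shape `cursor.fetchall()` returns for SHOW GLOBAL VARIABLES (pairs of variable name and value):

```python
def is_master(rows):
    """Return True when neither read_only nor super_read_only is enabled."""
    var = {name: value for name, value in rows}
    return var.get('read_only') == 'OFF' and var.get('super_read_only') == 'OFF'

# Rows as a writable master would report them.
master_rows = (('innodb_read_only', 'OFF'),
               ('read_only', 'OFF'),
               ('super_read_only', 'OFF'))
print(is_master(master_rows))  # → True
```

Using `.get()` keeps the function from raising if a variable is missing; it then safely reports "not a master".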
Check the CheckSlave.py script.
  [root@zlm2 :: /data/consul/script]
  #cat CheckSlave.py
  #!/usr/bin/python
  import sys
  import pymysql

  port = int(sys.argv[1])
  var = {}
  conn = pymysql.connect(host='192.168.56.101', port=port, user='zlm', passwd='zlmzlm')
  cur = conn.cursor()
  cur.execute("show global variables like '%read_only%'")
  rows = cur.fetchall()

  for r in rows:
      var[r[0]] = r[1]

  if var['read_only'] == 'OFF' and var['super_read_only'] == 'OFF':
      print "MySQL %d is a master instance." % port
      sys.exit(0)
  else:
      print "MySQL %d is a read-only instance." % port

  cur = conn.cursor(pymysql.cursors.DictCursor)
  cur.execute("show slave status")
  slave_status = cur.fetchone()

  # fetchone() returns None when the instance is not configured as a slave.
  if not slave_status:
      print "Slave replication setup error."
      sys.exit(1)

  if slave_status['Slave_IO_Running'] != 'Yes' or slave_status['Slave_SQL_Running'] != 'Yes':
      print "Replication error: replication from host=%s, port=%s, io_thread=%s, sql_thread=%s, error info %s %s" % (
          slave_status['Master_Host'], slave_status['Master_Port'],
          slave_status['Slave_IO_Running'], slave_status['Slave_SQL_Running'],
          slave_status['Last_IO_Error'], slave_status['Last_SQL_Error'])
      sys.exit(1)

  print slave_status
  cur.close()
  conn.close()
  sys.exit(0)
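Likewise, the replication test in CheckSlave.py can be isolated as a pure function over the SHOW SLAVE STATUS row (a dict, as returned by a DictCursor). A Python 3 sketch:

```python
def replication_ok(slave_status):
    """Evaluate a SHOW SLAVE STATUS row; return (healthy, message)."""
    if not slave_status:
        # fetchone() yields None when the instance is not a configured slave.
        return False, "Slave replication setup error."
    if (slave_status.get('Slave_IO_Running') != 'Yes'
            or slave_status.get('Slave_SQL_Running') != 'Yes'):
        return False, "Replication error: io_thread=%s, sql_thread=%s" % (
            slave_status.get('Slave_IO_Running'),
            slave_status.get('Slave_SQL_Running'))
    return True, "Replication threads running."

ok, msg = replication_ok({'Slave_IO_Running': 'Yes', 'Slave_SQL_Running': 'Yes'})
print(ok, msg)
```

The boolean maps directly onto the script's exit code: healthy exits 0 (check passing), anything else exits non-zero (check failing).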
Restart Consul (or reload).
  [root@zlm2 :: /etc/consul.d]
  #consul members --http-addr=192.168.56.100:8500
  Node Address Status Type Build Protocol DC Segment
  zlm1 192.168.56.100: alive server 1.2.2 dc1 <all>
  zlm2 192.168.56.101: alive client 1.2.2 dc1 <default>

  [root@zlm2 :: /etc/consul.d]
  #consul leave --http-addr=192.168.56.101:8500
  Graceful leave complete

  [root@zlm2 :: /etc/consul.d]
  #consul members --http-addr=192.168.56.100:8500
  Node Address Status Type Build Protocol DC Segment
  zlm1 192.168.56.100: alive server 1.2.2 dc1 <all>
  zlm2 192.168.56.101: left client 1.2.2 dc1 <default>

  [root@zlm2 :: /etc/consul.d]
  #consul agent --config-dir=/etc/consul.d/ -client 192.168.56.101 > /data/consul/consul.log 2>&1 &
  []

  [root@zlm2 :: /etc/consul.d]
  #!ps
  ps aux|grep consul
  root 2.3 2.0 pts/ Sl : : consul agent --config-dir=/etc/consul.d/ -client 192.168.56.101
  root 0.0 0.0 pts/ R+ : : grep --color=auto consul

  [root@zlm2 :: /etc/consul.d]
  #consul reload -http-addr=192.168.56.101:8500
  Configuration reload triggered
Well, now we can log in to the Consul dashboard (web UI) to check the MySQL working status.
 
 
 
 
 
 
