1. Scenario

I use Windows with WSL2 for day-to-day development and testing, but WSL2 regularly runs into network problems. For example, I was recently testing a project whose core job is to sync Postgres data into ClickHouse using the open-source component synch.

Components required for the test:

  1. postgres
  2. kafka
  3. zookeeper
  4. redis
  5. synch container

For the initial tests, the plan was to orchestrate the five services above with docker-compose, using network_mode: host. Given Kafka's listener mechanics, host networking sidesteps having to expose and map ports for each service individually.

The docker-compose.yaml file is as follows:

version: "3"

services:
postgres:
image: failymao/postgres:12.7
container_name: postgres
restart: unless-stopped
privileged: true # 设置docker-compose env 文件
command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
volumes:
- ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
- ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
environment:
POSTGRES_PASSWORD: abc123
POSTGRES_USER: postgres
POSTGRES_PORT: 15432
POSTGRES_HOST: 127.0.0.1
healthcheck:
test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
interval: 30s
timeout: 10s
retries: 3
network_mode: "host" zookeeper:
image: failymao/zookeeper:1.4.0
container_name: zookeeper
restart: always
network_mode: "host" kafka:
image: failymao/kafka:1.4.0
container_name: kafka
restart: always
depends_on:
- zookeeper
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: localhost:2181
KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
KAFKA_BROKER_ID: 1
KAFKA_LOG_RETENTION_HOURS: 24
KAFKA_LOG_DIRS: /data/kafka-data #数据挂载
network_mode: "host" producer:
depends_on:
- redis
- kafka
- zookeeper
image: long2ice/synch
container_name: producer
command: sh -c "
sleep 30 &&
synch --alias pg2ch_test produce"
volumes:
- ./synch.yaml:/synch/synch.yaml
network_mode: "host" # 一个消费者消费一个数据库
consumer:
tty: true
depends_on:
- redis
- kafka
- zookeeper
image: long2ice/synch
container_name: consumer
command: sh -c
"sleep 30 &&
synch --alias pg2ch_test consume --schema pg2ch_test"
volumes:
- ./synch.yaml:/synch/synch.yaml
network_mode: "host" redis:
hostname: redis
container_name: redis
image: redis:latest
volumes:
- redis:/data
network_mode: "host" volumes:
redis:
kafka:
zookeeper:

During testing I needed the wal2json plugin for Postgres. Installing extra components inside the container is a hassle, and several attempts ended in failure, so I switched to installing Postgres directly on the host and pointed the synch containers at the host's IP and port.
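Since the synch config below assumes Postgres on port 5433, a minimal sketch of the host-side setup follows (assuming a Debian/Ubuntu host with the PGDG PostgreSQL 12 packages; the package name and port are assumptions, adjust to your install):

# Install the wal2json logical-decoding plugin on the host (package name assumed)
sudo apt-get install -y postgresql-12-wal2json

# wal2json requires logical decoding; in postgresql.conf set:
#   wal_level = logical
#   max_replication_slots = 10
#   max_wal_senders = 10
sudo systemctl restart postgresql

# Smoke-test the plugin by creating and dropping a throwaway replication slot
psql -U postgres -p 5433 -c "SELECT 'init' FROM pg_create_logical_replication_slot('wal2json_test', 'wal2json');"
psql -U postgres -p 5433 -c "SELECT pg_drop_replication_slot('wal2json_test');"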

But after restarting the stack, the synch service refused to come up; its logs showed that Postgres could not be connected to. The synch configuration file is as follows:

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit, recommend set 20000 when production
  insert_interval: 60 # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # current support redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

This was puzzling: Postgres was confirmed to be running and listening on its port (5433 here), yet the connection failed whether I used localhost or the host's eth0 address.
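To narrow this kind of failure down, it helps to compare what the host is listening on with what a container on the same host network can actually reach. A quick diagnostic sketch (the port is this setup's; adjust as needed):

# On the WSL2 host: confirm postgres is listening on 5433
ss -lntp | grep 5433

# From a throwaway container on the host network: can it reach the port?
docker run --rm --network host postgres:12 pg_isready -h 127.0.0.1 -p 5433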

2. Solution

Googling turned up a highly upvoted Stack Overflow answer that solved the problem. The original answer:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the original post for more details.
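For the Linux host-gateway variant mentioned above, the invocation looks roughly like this (a minimal sketch; the alpine image is just a placeholder):

docker run --rm --add-host host.docker.internal:host-gateway alpine \
  ping -c 1 host.docker.internal

The docker-compose equivalent (Docker Engine 20.10+) is an extra_hosts entry:

services:
  app:
    image: alpine
    extra_hosts:
      - "host.docker.internal:host-gateway"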

Accessing host services from inside a container in host network mode

Changing the Postgres address in the synch config to host.docker.internal resolved the error. The host's /etc/hosts file looks like this:


root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
10.111.130.24 host.docker.internal

You can see the mapping between the host's IP and the host.docker.internal name: the container resolves the name to the host IP and can thus reach services on the host.
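A quick way to confirm the mapping works from inside a host-network container (an illustrative check, reusing this setup's Postgres port):

docker run --rm --network host postgres:12 pg_isready -h host.docker.internal -p 5433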

The final synch configuration that starts successfully:

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit, recommend set 20000 when production
  insert_interval: 60 # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # current support redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
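With that change the stack comes up cleanly. A sketch of how to bring it up and verify (service names as in the compose file above):

docker-compose up -d
docker-compose logs -f producer   # should show synch connecting to postgres and producing to kafka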

3. Summary
  1. When a container is started with --network="host" and you want to reach a service running on the host from inside the container, use `host.docker.internal` instead of 127.0.0.1.

4. References

  1. https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
