Accessing Host Services from Inside a Docker Container
1. Scenario
I do my day-to-day development and testing on Windows with WSL2, but WSL2 regularly runs into networking problems. For example, today I was testing a project whose core job is to sync Postgres data to ClickHouse using the open-source component synch.
Components needed for the test:
- postgres
- kafka
- zookeeper
- redis
- synch container
For the first attempt I orchestrated the five services above with docker-compose, setting network_mode to host. Given Kafka's listener security mechanics, this network mode avoids having to expose ports individually.
The docker-compose.yaml file is as follows:
version: "3"
services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true  # set docker-compose env file
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"

  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"

  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data  # data mount
    network_mode: "host"

  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  # one consumer consumes one database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"

volumes:
  redis:
  kafka:
  zookeeper:
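With the file above in place, the stack can be brought up with the usual Compose commands (a minimal sketch; the service names come from the file above):

    docker compose up -d                        # `docker-compose up -d` on the v1 CLI
    docker compose ps                           # postgres should report healthy
    docker compose logs -f producer consumer    # follow the two synch containers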
The test required the wal2json plugin for Postgres. Installing extra components inside the container proved troublesome, and several attempts all ended in failure, so I instead installed Postgres on the host and pointed the synch service inside the container at the host's IP and port.
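For reference, a minimal sketch of the host-side wal2json setup the Postgres source needs; the package name assumes Debian/Ubuntu with PostgreSQL 12, so adjust for your distribution:

    # install the wal2json logical-decoding plugin on the host
    sudo apt-get install -y postgresql-12-wal2json

    # postgresql.conf must allow logical decoding, e.g.:
    #   wal_level = logical
    #   max_replication_slots = 10

    # verify the plugin loads by creating and dropping a throwaway slot
    psql -h 127.0.0.1 -p 5433 -U postgres -c \
      "SELECT pg_create_logical_replication_slot('wal2json_smoke', 'wal2json');"
    psql -h 127.0.0.1 -p 5433 -U postgres -c \
      "SELECT pg_drop_replication_slot('wal2json_smoke');"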
But after restarting the services, synch would not come up: the logs showed it could not connect to Postgres. The synch configuration file was as follows:
core:
  debug: true  # when set True, will display sql information.
  insert_num: 20000  # how many num to submit, recommend set 20000 when production
  insert_interval: 60  # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # current support redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all  # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
This was puzzling. Postgres was confirmed to be up and listening on its port (5433 here), yet connecting with both localhost and the host's eth0 address failed.
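A couple of quick checks narrow this down (a sketch; it assumes nc is available in the synch image, and uses the container and port names from the files above):

    # on the WSL2 host: confirm postgres is really bound to 5433
    ss -tlnp | grep 5433

    # from inside the synch container: can the host's port be reached at all?
    docker exec producer sh -c "nc -zv 127.0.0.1 5433"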
2. Solution
Googling turned up a highly upvoted Stack Overflow answer that solved the problem. The original answer:
If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option. Otherwise, read below.

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.
See the original post for more details.
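Applied to this setup, the answer suggests starting the synch container with the extra host mapping (a sketch of the --add-host variant from the answer; it needs Docker 20.10+ on Linux):

    docker run --rm \
      --add-host host.docker.internal:host-gateway \
      -v "$PWD/synch.yaml:/synch/synch.yaml" \
      long2ice/synch \
      sh -c "synch --alias pg2ch_test produce"

In docker-compose, the same mapping is an extra_hosts entry on the service, e.g. extra_hosts: ["host.docker.internal:host-gateway"].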
After changing the Postgres host in the synch config to host.docker.internal, the error went away. The host's /etc/hosts file looks like this:
root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
10.111.130.24 host.docker.internal
You can see the mapping from the name to the host IP: host.docker.internal resolves to the host's address, which reaches the services running there.
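A quick way to confirm the name resolves and the port is reachable from a container (a sketch using the busybox image; host networking shares the host's /etc/hosts, so the WSL-generated mapping is visible inside):

    docker run --rm --network=host busybox sh -c \
      "ping -c 1 host.docker.internal && nc -zv host.docker.internal 5433"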
The final synch configuration that started successfully is as follows:
core:
  debug: true  # when set True, will display sql information.
  insert_num: 20000  # how many num to submit, recommend set 20000 when production
  insert_interval: 60  # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # current support redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all  # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
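Since synch.yaml is bind-mounted into the containers, a restart is enough for them to pick up the edited config (names from the compose file above):

    docker compose restart producer consumer
    docker compose logs -f producer   # should now connect to postgres without errors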
3. Summary
1. When a container is started with --network="host" and you want to reach a service running on the host from inside the container, use host.docker.internal as the address instead of an IP.
4. References