Istio's components are fairly complex, and a number of options used to be adjusted separately through ConfigMaps and istioctl. With the redesigned Helm chart, installation options are now managed in one place, either in values.yaml or on the helm command line.
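For example, instead of repeating --set flags, the same switches can live in a custom values file (a sketch; the file name my-values.yaml is arbitrary, and the keys mirror the --set flags used later in this post):

# my-values.yaml (hypothetical) -- pass it with: helm template ... -f my-values.yaml
ingress:
  enabled: true
sidecarInjectorWebhook:
  enabled: true
tracing:
  enabled: true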

Before installing Istio, make sure your Kubernetes cluster is up (only v1.9 and later is supported) and that the local kubectl client is configured against it.
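A quick sanity check (not part of the original steps) is to confirm the server version and that the nodes are Ready:

$ kubectl version --short
$ kubectl get nodes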

1. Download Istio

$ wget https://github.com/istio/istio/releases/download/1.0.2/istio-1.0.2-linux.tar.gz
$ tar zxf istio-1.0.2-linux.tar.gz
$ cp istio-1.0.2/bin/istioctl /usr/local/bin/
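With the binary on your PATH, istioctl can report its own version, which should match the 1.0.2 tarball:

$ istioctl version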

2. Deploy the Istio services with Helm

git clone https://github.com/istio/istio.git
cd istio
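Since the binaries above are 1.0.2, it is safest to check out the matching release tag rather than using charts from master (the 1.0.2 tag exists in the istio/istio repository):

$ git checkout 1.0.2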

The install/kubernetes/helm directory in the repository contains the Istio chart, and the official documentation offers two ways to install it:

  • Render istio.yaml with helm template and apply it yourself.
  • Install directly with Tiller (helm install).

Both methods render exactly the same manifests; the only difference is whether Tiller applies them for you. Here we take the first approach, which keeps Tiller out of the cluster (the Tiller-based equivalent is sketched after the apply step below).

$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set ingress.enabled=true \
    --set sidecarInjectorWebhook.enabled=true \
    --set ingress.service.type=NodePort \
    --set gateways.istio-ingressgateway.type=NodePort \
    --set gateways.istio-egressgateway.type=NodePort \
    --set tracing.enabled=true \
    --set servicegraph.enabled=true \
    --set prometheus.enabled=true \
    --set tracing.jaeger.enabled=true \
    --set grafana.enabled=true \
    --set kiali.enabled=true > istio.yaml

$ kubectl create namespace istio-system
$ kubectl create -f istio.yaml
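For reference, the Tiller route collapses rendering and applying into a single command (a sketch, assuming Tiller is already installed in the cluster via helm init; it takes the same --set flags as the template command above):

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set ingress.enabled=true

Whichever route you take, you can watch the control-plane pods come up once the resources are created:

$ kubectl get pods -n istio-system

The generated istio.yaml is reproduced below.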
---
# Source: istio/charts/kiali/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
type: Opaque
data:
  username: "YWRtaW4="
  passphrase: "YWRtaW4="
---
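# Note: both fields in the Secret above are base64("admin"), so Kiali's default login is admin / admin.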
# Source: istio/charts/galley/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-galley-configuration
namespace: istio-system
labels:
app: istio-galley
chart: galley-1.0.
release: istio
heritage: Tiller
istio: mixer
data:
validatingwebhookconfiguration.yaml: |-
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: istio-galley
namespace: istio-system
labels:
app: istio-galley
chart: galley-1.0.
release: istio
heritage: Tiller
webhooks:
- name: pilot.validation.istio.io
clientConfig:
service:
name: istio-galley
namespace: istio-system
path: "/admitpilot"
caBundle: ""
rules:
- operations:
- CREATE
- UPDATE
apiGroups:
- config.istio.io
apiVersions:
- v1alpha2
resources:
- httpapispecs
- httpapispecbindings
- quotaspecs
- quotaspecbindings
- operations:
- CREATE
- UPDATE
apiGroups:
- rbac.istio.io
apiVersions:
- "*"
resources:
- "*"
- operations:
- CREATE
- UPDATE
apiGroups:
- authentication.istio.io
apiVersions:
- "*"
resources:
- "*"
- operations:
- CREATE
- UPDATE
apiGroups:
- networking.istio.io
apiVersions:
- "*"
resources:
- destinationrules
- envoyfilters
- gateways
# disabled per @costinm's request
# - serviceentries
- virtualservices
failurePolicy: Fail
- name: mixer.validation.istio.io
clientConfig:
service:
name: istio-galley
namespace: istio-system
path: "/admitmixer"
caBundle: ""
rules:
- operations:
- CREATE
- UPDATE
apiGroups:
- config.istio.io
apiVersions:
- v1alpha2
resources:
- rules
- attributemanifests
- circonuses
- deniers
- fluentds
- kubernetesenvs
- listcheckers
- memquotas
- noops
- opas
- prometheuses
- rbacs
- servicecontrols
- solarwindses
- stackdrivers
- statsds
- stdios
- apikeys
- authorizations
- checknothings
# - kuberneteses
- listentries
- logentries
- metrics
- quotas
- reportnothings
- servicecontrolreports
- tracespans
failurePolicy: Fail
---
# Source: istio/charts/grafana/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-grafana-custom-resources
  namespace: istio-system
  labels:
    app: istio-grafana
    chart: grafana-1.0.2
    release: istio
    heritage: Tiller
    istio: grafana
data:
  custom-resources.yaml: |-
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: grafana-ports-mtls-disabled
      namespace: istio-system
    spec:
      targets:
      - name: grafana
        ports:
        - number: 3000
  run.sh: |-
    #!/bin/sh

    set -x

    if [ "$#" -ne "1" ]; then
        echo "first argument should be path to custom resource yaml"
        exit 1
    fi

    pathToResourceYAML=${1}

    /kubectl get validatingwebhookconfiguration istio-galley >/dev/null
    if [ "$?" -eq 0 ]; then
        echo "istio-galley validatingwebhookconfiguration found - waiting for istio-galley deployment to be ready"
        while true; do
            /kubectl -n istio-system get deployment istio-galley >/dev/null
            if [ "$?" -eq 0 ]; then
                break
            fi
            sleep 1
        done
        /kubectl -n istio-system rollout status deployment istio-galley
        if [ "$?" -ne 0 ]; then
            echo "istio-galley deployment rollout status check failed"
            exit 1
        fi
        echo "istio-galley deployment ready for configuration validation"
    fi

    sleep 5
    /kubectl apply -f ${pathToResourceYAML}
---
# Source: istio/charts/kiali/templates/configmap.yaml
#apiVersion: v1
#kind: ConfigMap
#metadata:
#  name: kiali
#  namespace: istio-system
#  labels:
#    app: kiali
#data:
#  config.yaml: |
#    server:
#      port: 20001
#      static_content_root_directory: /opt/kiali/console
apiVersion: v1
kind: ConfigMap
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
    version: "v0.10.0"
data:
  config.yaml: |
    istio_namespace: istio-system
    server:
      port: 20001
      static_content_root_directory: /opt/kiali/console
    external_services:
      jaeger:
        url: "http://jaeger-query:16686/jaeger-query"
      grafana:
        url: "http://grafana:3000/grafana"
---
# Source: istio/charts/mixer/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-statsd-prom-bridge
  namespace: istio-system
  labels:
    app: istio-statsd-prom-bridge
    chart: mixer-1.0.2
    release: istio
    heritage: Tiller
    istio: mixer
data:
  mapping.conf: |-
---
# Source: istio/charts/prometheus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus
    chart: prometheus-1.0.2
    release: istio
    heritage: Tiller
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:

    - job_name: 'istio-mesh'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;prometheus

    - job_name: 'envoy'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-statsd-prom-bridge;statsd-prom

    - job_name: 'istio-policy'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-policy;http-monitoring

    - job_name: 'istio-telemetry'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;http-monitoring

    - job_name: 'pilot'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-pilot;http-monitoring

    - job_name: 'galley'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-galley;http-monitoring

    # scrape config for API servers
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: kubernetes;https

    # scrape config for nodes (kubelet)
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    # Scrape config for Kubelet cAdvisor.
    #
    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
    # (those whose names begin with 'container_') have been removed from the
    # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
    # retrieve those metrics.
    #
    # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
    # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
    # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
    # the --cadvisor-port=0 Kubelet flag).
    #
    # This job is not necessary and should be removed in Kubernetes 1.6 and
    # earlier versions, or it will cause the metrics to be scraped twice.
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    # scrape config for service endpoints.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    # Example scrape config for pods
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
---
# Source: istio/charts/security/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-security-custom-resources
  namespace: istio-system
  labels:
    app: istio-security
    chart: security-1.0.2
    release: istio
    heritage: Tiller
    istio: security
data:
  custom-resources.yaml: |-
  run.sh: |-
    #!/bin/sh

    set -x

    if [ "$#" -ne "1" ]; then
        echo "first argument should be path to custom resource yaml"
        exit 1
    fi

    pathToResourceYAML=${1}

    /kubectl get validatingwebhookconfiguration istio-galley >/dev/null
    if [ "$?" -eq 0 ]; then
        echo "istio-galley validatingwebhookconfiguration found - waiting for istio-galley deployment to be ready"
        while true; do
            /kubectl -n istio-system get deployment istio-galley >/dev/null
            if [ "$?" -eq 0 ]; then
                break
            fi
            sleep 1
        done
        /kubectl -n istio-system rollout status deployment istio-galley
        if [ "$?" -ne 0 ]; then
            echo "istio-galley deployment rollout status check failed"
            exit 1
        fi
        echo "istio-galley deployment ready for configuration validation"
    fi

    sleep 5
    /kubectl apply -f ${pathToResourceYAML}
---
# Source: istio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
  labels:
    app: istio
    chart: istio-1.0.2
    release: istio
    heritage: Tiller
data:
  mesh: |-
    # Set the following variable to true to disable policy checks by the Mixer.
    # Note that metrics will still be reported to the Mixer.
    disablePolicyChecks: false

    # Set enableTracing to false to disable request tracing.
    enableTracing: true

    # Set accessLogFile to empty string to disable access log.
    accessLogFile: "/dev/stdout"
    #
    # Deprecated: mixer is using EDS
    mixerCheckServer: istio-policy.istio-system.svc.cluster.local:9091
    mixerReportServer: istio-telemetry.istio-system.svc.cluster.local:9091

    # This is the k8s ingress service name, update if you used a different name
    ingressService: istio-ingress

    # Unix Domain Socket through which envoy communicates with NodeAgent SDS to get
    # key/cert for mTLS. Use secret-mount files instead of SDS if set to empty.
    sdsUdsPath: ""

    # How frequently should Envoy fetch key/cert from NodeAgent.
    sdsRefreshDelay: 15s

    #
    defaultConfig:
      #
      # TCP connection timeout between Envoy & the application, and between Envoys.
      connectTimeout: 10s
      #
      ### ADVANCED SETTINGS #############
      # Where should envoy's configuration be stored in the istio-proxy container
      configPath: "/etc/istio/proxy"
      binaryPath: "/usr/local/bin/envoy"
      # The pseudo service name used for Envoy.
      serviceCluster: istio-proxy
      # These settings that determine how long an old Envoy
      # process should be kept alive after an occasional reload.
      drainDuration: 45s
      parentShutdownDuration: 1m0s
      #
      # The mode used to redirect inbound connections to Envoy. This setting
      # has no effect on outbound traffic: iptables REDIRECT is always used for
      # outbound connections.
      # If "REDIRECT", use iptables REDIRECT to NAT and redirect to Envoy.
      # The "REDIRECT" mode loses source addresses during redirection.
      # If "TPROXY", use iptables TPROXY to redirect to Envoy.
      # The "TPROXY" mode preserves both the source and destination IP
      # addresses and ports, so that they can be used for advanced filtering
      # and manipulation.
      # The "TPROXY" mode also configures the sidecar to run with the
      # CAP_NET_ADMIN capability, which is required to use TPROXY.
      #interceptionMode: REDIRECT
      #
      # Port where Envoy listens (on local host) for admin commands
      # You can exec into the istio-proxy container in a pod and
      # curl the admin port (curl http://localhost:15000/) to obtain
      # diagnostic information from Envoy. See
      # https://lyft.github.io/envoy/docs/operations/admin.html
      # for more details
      proxyAdminPort: 15000
      #
      # Set concurrency to a specific number to control the number of Proxy worker threads.
      # If set to 0 (default), then start worker thread for each CPU thread/core.
      concurrency: 0
      #
      # Zipkin trace collector
      zipkinAddress: zipkin.istio-system:9411
      #
      # Statsd metrics collector converts statsd metrics into Prometheus metrics.
      statsdUdpAddress: istio-statsd-prom-bridge.istio-system:9125
      #
      # Mutual TLS authentication between sidecars and istio control plane.
      controlPlaneAuthPolicy: NONE
      #
      # Address where istio Pilot service is running
      discoveryAddress: istio-pilot.istio-system:15007
---
# Source: istio/templates/sidecar-injector-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-sidecar-injector
  namespace: istio-system
  labels:
    app: istio
    chart: istio-1.0.2
    release: istio
    heritage: Tiller
    istio: sidecar-injector
data:
  config: |-
    policy: enabled
    template: |-
      initContainers:
      - name: istio-init
        image: "192.168.200.10/istio-release/proxy_init:1.0.2"
        args:
        - "-p"
        - [[ .MeshConfig.ProxyListenPort ]]
        - "-u"
        - 1337
        - "-m"
        - [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
        - "-i"
        [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges") -]]
        - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges" ]]"
        [[ else -]]
        - "*"
        [[ end -]]
        - "-x"
        [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges") -]]
        - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges" ]]"
        [[ else -]]
        - ""
        [[ end -]]
        - "-b"
        [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts") -]]
        - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts" ]]"
        [[ else -]]
        - [[ range .Spec.Containers -]][[ range .Ports -]][[ .ContainerPort -]], [[ end -]][[ end -]][[ end]]
        - "-d"
        [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts") -]]
        - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts" ]]"
        [[ else -]]
        - ""
        [[ end -]]
        imagePullPolicy: IfNotPresent
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        restartPolicy: Always
      containers:
      - name: istio-proxy
        image: [[ if (isset .ObjectMeta.Annotations "sidecar.istio.io/proxyImage") -]]
        "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyImage" ]]"
        [[ else -]]
        192.168.200.10/istio-release/proxyv2:1.0.2
        [[ end -]]
        args:
        - proxy
        - sidecar
        - --configPath
        - [[ .ProxyConfig.ConfigPath ]]
        - --binaryPath
        - [[ .ProxyConfig.BinaryPath ]]
        - --serviceCluster
        [[ if ne "" (index .ObjectMeta.Labels "app") -]]
        - [[ index .ObjectMeta.Labels "app" ]]
        [[ else -]]
        - "istio-proxy"
        [[ end -]]
        - --drainDuration
        - [[ formatDuration .ProxyConfig.DrainDuration ]]
        - --parentShutdownDuration
        - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]]
        - --discoveryAddress
        - [[ .ProxyConfig.DiscoveryAddress ]]
        - --discoveryRefreshDelay
        - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]]
        - --zipkinAddress
        - [[ .ProxyConfig.ZipkinAddress ]]
        - --connectTimeout
        - [[ formatDuration .ProxyConfig.ConnectTimeout ]]
        - --statsdUdpAddress
        - [[ .ProxyConfig.StatsdUdpAddress ]]
        - --proxyAdminPort
        - [[ .ProxyConfig.ProxyAdminPort ]]
        [[ if gt .ProxyConfig.Concurrency 0 -]]
        - --concurrency
        - [[ .ProxyConfig.Concurrency ]]
        [[ end -]]
        - --controlPlaneAuthPolicy
        - [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/controlPlaneAuthPolicy") .ProxyConfig.ControlPlaneAuthPolicy ]]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_INTERCEPTION_MODE
          value: [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
        imagePullPolicy: IfNotPresent
        securityContext:
          readOnlyRootFilesystem: true
          [[ if eq (or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String) "TPROXY" -]]
          capabilities:
            add:
            - NET_ADMIN
          runAsGroup: 1337
          [[ else -]]
          runAsUser: 1337
          [[ end -]]
        restartPolicy: Always
        resources:
          [[ if (isset .ObjectMeta.Annotations "sidecar.istio.io/proxyCPU") -]]
          requests:
            cpu: "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyCPU" ]]"
            memory: "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyMemory" ]]"
          [[ else -]]
          requests:
            cpu: 10m
          [[ end -]]
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          [[ if eq .Spec.ServiceAccountName "" -]]
          secretName: istio.default
          [[ else -]]
          secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]]
          [[ end -]]
---
# Source: istio/charts/galley/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-galley-service-account
namespace: istio-system
labels:
app: istio-galley
chart: galley-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/gateways/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-egressgateway-service-account
namespace: istio-system
labels:
app: egressgateway
chart: gateways-1.0.
heritage: Tiller
release: istio
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-ingressgateway-service-account
namespace: istio-system
labels:
app: ingressgateway
chart: gateways-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/grafana/templates/create-custom-resources-job.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-grafana-post-install-account
namespace: istio-system
labels:
app: istio-grafana
chart: grafana-1.0.
heritage: Tiller
release: istio
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-grafana-post-install-istio-system
labels:
app: istio-grafana
chart: grafana-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: ["authentication.istio.io"] # needed to create default authn policy
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-grafana-post-install-role-binding-istio-system
labels:
app: istio-grafana
chart: grafana-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-grafana-post-install-istio-system
subjects:
- kind: ServiceAccount
name: istio-grafana-post-install-account
namespace: istio-system
---
apiVersion: batch/v1
kind: Job
metadata:
name: istio-grafana-post-install
namespace: istio-system
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
labels:
app: istio-grafana
chart: grafana-1.0.
release: istio
heritage: Tiller
spec:
template:
metadata:
name: istio-grafana-post-install
labels:
app: istio-grafana
release: istio
spec:
serviceAccountName: istio-grafana-post-install-account
containers:
- name: hyperkube
image: "192.168.200.10/istio-release/hyperkube:v1.7.6_coreos.0"
command: [ "/bin/bash", "/tmp/grafana/run.sh", "/tmp/grafana/custom-resources.yaml" ]
volumeMounts:
- mountPath: "/tmp/grafana"
name: tmp-configmap-grafana
volumes:
- name: tmp-configmap-grafana
configMap:
name: istio-grafana-custom-resources
restartPolicy: OnFailure
---
# Source: istio/charts/ingress/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-ingress-service-account
namespace: istio-system
labels:
app: ingress
chart: ingress-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/kiali/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kiali-service-account
namespace: istio-system
labels:
app: kiali
chart: kiali-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/mixer/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-mixer-service-account
namespace: istio-system
labels:
app: mixer
chart: mixer-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/pilot/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-pilot-service-account
namespace: istio-system
labels:
app: istio-pilot
chart: pilot-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/prometheus/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: istio-system
---
# Source: istio/charts/security/templates/cleanup-secrets.yaml
# The reason for creating a ServiceAccount and ClusterRole specifically for this
# post-delete hooked job is because the citadel ServiceAccount is being deleted
# before this hook is launched. On the other hand, running this hook before the
# deletion of the citadel (e.g. pre-delete) won't delete the secrets because they
# will be re-created immediately by the to-be-deleted citadel.
#
# It's also important that the ServiceAccount, ClusterRole and ClusterRoleBinding
# will be ready before running the hooked Job therefore the hook weights.
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-cleanup-secrets-service-account
namespace: istio-system
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": ""
labels:
app: security
chart: security-1.0.
heritage: Tiller
release: istio
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-cleanup-secrets-istio-system
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": ""
labels:
app: security
chart: security-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-cleanup-secrets-istio-system
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": ""
labels:
app: security
chart: security-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-cleanup-secrets-istio-system
subjects:
- kind: ServiceAccount
name: istio-cleanup-secrets-service-account
namespace: istio-system
---
apiVersion: batch/v1
kind: Job
metadata:
name: istio-cleanup-secrets
namespace: istio-system
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": ""
labels:
app: security
chart: security-1.0.
release: istio
heritage: Tiller
spec:
template:
metadata:
name: istio-cleanup-secrets
labels:
app: security
release: istio
spec:
serviceAccountName: istio-cleanup-secrets-service-account
containers:
- name: hyperkube
image: "192.168.200.10/istio-release/hyperkube:v1.7.6_coreos.0"
command:
- /bin/bash
- -c
- >
kubectl get secret --all-namespaces | grep "istio.io/key-and-cert" | while read -r entry; do
ns=$(echo $entry | awk '{print $1}');
name=$(echo $entry | awk '{print $2}');
kubectl delete secret $name -n $ns;
done
restartPolicy: OnFailure
---
# Source: istio/charts/security/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-citadel-service-account
namespace: istio-system
labels:
app: security
chart: security-1.0.
heritage: Tiller
release: istio
---
# Source: istio/charts/sidecarInjectorWebhook/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-sidecar-injector-service-account
namespace: istio-system
labels:
app: istio-sidecar-injector
chart: sidecarInjectorWebhook-1.0.
heritage: Tiller
release: istio
---
# Source: istio/templates/crds.yaml
#
# these CRDs only make sense when pilot is enabled
#
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: virtualservices.networking.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: istio-pilot
spec:
group: networking.istio.io
names:
kind: VirtualService
listKind: VirtualServiceList
plural: virtualservices
singular: virtualservice
categories:
- istio-io
- networking-istio-io
scope: Namespaced
version: v1alpha3
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: destinationrules.networking.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: istio-pilot
spec:
group: networking.istio.io
names:
kind: DestinationRule
listKind: DestinationRuleList
plural: destinationrules
singular: destinationrule
categories:
- istio-io
- networking-istio-io
scope: Namespaced
version: v1alpha3
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: serviceentries.networking.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: istio-pilot
spec:
group: networking.istio.io
names:
kind: ServiceEntry
listKind: ServiceEntryList
plural: serviceentries
singular: serviceentry
categories:
- istio-io
- networking-istio-io
scope: Namespaced
version: v1alpha3
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: gateways.networking.istio.io
annotations:
"helm.sh/hook": crd-install
"helm.sh/hook-weight": "-5"
labels:
app: istio-pilot
spec:
group: networking.istio.io
names:
kind: Gateway
plural: gateways
singular: gateway
categories:
- istio-io
- networking-istio-io
scope: Namespaced
version: v1alpha3
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: envoyfilters.networking.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: istio-pilot
spec:
group: networking.istio.io
names:
kind: EnvoyFilter
plural: envoyfilters
singular: envoyfilter
categories:
- istio-io
- networking-istio-io
scope: Namespaced
version: v1alpha3
---
#
# these CRDs only make sense when security is enabled
#
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
annotations:
"helm.sh/hook": crd-install
name: httpapispecbindings.config.istio.io
spec:
group: config.istio.io
names:
kind: HTTPAPISpecBinding
plural: httpapispecbindings
singular: httpapispecbinding
categories:
- istio-io
- apim-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
annotations:
"helm.sh/hook": crd-install
name: httpapispecs.config.istio.io
spec:
group: config.istio.io
names:
kind: HTTPAPISpec
plural: httpapispecs
singular: httpapispec
categories:
- istio-io
- apim-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
annotations:
"helm.sh/hook": crd-install
name: quotaspecbindings.config.istio.io
spec:
group: config.istio.io
names:
kind: QuotaSpecBinding
plural: quotaspecbindings
singular: quotaspecbinding
categories:
- istio-io
- apim-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
annotations:
"helm.sh/hook": crd-install
name: quotaspecs.config.istio.io
spec:
group: config.istio.io
names:
kind: QuotaSpec
plural: quotaspecs
singular: quotaspec
categories:
- istio-io
- apim-istio-io
scope: Namespaced
version: v1alpha2
---
# Mixer CRDs
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: rules.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: istio.io.mixer
istio: core
spec:
group: config.istio.io
names:
kind: rule
plural: rules
singular: rule
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: attributemanifests.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: istio.io.mixer
istio: core
spec:
group: config.istio.io
names:
kind: attributemanifest
plural: attributemanifests
singular: attributemanifest
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: bypasses.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: bypass
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: bypass
plural: bypasses
singular: bypass
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: circonuses.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: circonus
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: circonus
plural: circonuses
singular: circonus
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: deniers.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: denier
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: denier
plural: deniers
singular: denier
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: fluentds.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: fluentd
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: fluentd
plural: fluentds
singular: fluentd
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: kubernetesenvs.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: kubernetesenv
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: kubernetesenv
plural: kubernetesenvs
singular: kubernetesenv
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: listcheckers.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: listchecker
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: listchecker
plural: listcheckers
singular: listchecker
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: memquotas.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: memquota
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: memquota
plural: memquotas
singular: memquota
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: noops.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: noop
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: noop
plural: noops
singular: noop
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: opas.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: opa
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: opa
plural: opas
singular: opa
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: prometheuses.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: prometheus
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: prometheus
plural: prometheuses
singular: prometheus
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: rbacs.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: rbac
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: rbac
plural: rbacs
singular: rbac
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: redisquotas.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
package: redisquota
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: redisquota
plural: redisquotas
singular: redisquota
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: servicecontrols.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: servicecontrol
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: servicecontrol
plural: servicecontrols
singular: servicecontrol
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: signalfxs.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: signalfx
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: signalfx
plural: signalfxs
singular: signalfx
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: solarwindses.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: solarwinds
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: solarwinds
plural: solarwindses
singular: solarwinds
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: stackdrivers.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: stackdriver
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: stackdriver
plural: stackdrivers
singular: stackdriver
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: statsds.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: statsd
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: statsd
plural: statsds
singular: statsd
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: stdios.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: stdio
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: stdio
plural: stdios
singular: stdio
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: apikeys.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: apikey
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: apikey
plural: apikeys
singular: apikey
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: authorizations.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: authorization
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: authorization
plural: authorizations
singular: authorization
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: checknothings.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: checknothing
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: checknothing
plural: checknothings
singular: checknothing
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: kuberneteses.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: adapter.template.kubernetes
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: kubernetes
plural: kuberneteses
singular: kubernetes
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: listentries.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: listentry
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: listentry
plural: listentries
singular: listentry
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: logentries.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: logentry
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: logentry
plural: logentries
singular: logentry
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: edges.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: edge
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: edge
plural: edges
singular: edge
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: metrics.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: metric
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: metric
plural: metrics
singular: metric
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: quotas.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: quota
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: quota
plural: quotas
singular: quota
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: reportnothings.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: reportnothing
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: reportnothing
plural: reportnothings
singular: reportnothing
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: servicecontrolreports.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: servicecontrolreport
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: servicecontrolreport
plural: servicecontrolreports
singular: servicecontrolreport
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: tracespans.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: tracespan
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: tracespan
plural: tracespans
singular: tracespan
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: rbacconfigs.rbac.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: istio.io.mixer
istio: rbac
spec:
group: rbac.istio.io
names:
kind: RbacConfig
plural: rbacconfigs
singular: rbacconfig
categories:
- istio-io
- rbac-istio-io
scope: Namespaced
version: v1alpha1
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: serviceroles.rbac.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: istio.io.mixer
istio: rbac
spec:
group: rbac.istio.io
names:
kind: ServiceRole
plural: serviceroles
singular: servicerole
categories:
- istio-io
- rbac-istio-io
scope: Namespaced
version: v1alpha1
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: servicerolebindings.rbac.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: istio.io.mixer
istio: rbac
spec:
group: rbac.istio.io
names:
kind: ServiceRoleBinding
plural: servicerolebindings
singular: servicerolebinding
categories:
- istio-io
- rbac-istio-io
scope: Namespaced
version: v1alpha1
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: adapters.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: adapter
istio: mixer-adapter
spec:
group: config.istio.io
names:
kind: adapter
plural: adapters
singular: adapter
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: instances.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: instance
istio: mixer-instance
spec:
group: config.istio.io
names:
kind: instance
plural: instances
singular: instance
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: templates.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: template
istio: mixer-template
spec:
group: config.istio.io
names:
kind: template
plural: templates
singular: template
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: handlers.config.istio.io
annotations:
"helm.sh/hook": crd-install
labels:
app: mixer
package: handler
istio: mixer-handler
spec:
group: config.istio.io
names:
kind: handler
plural: handlers
singular: handler
categories:
- istio-io
- policy-istio-io
scope: Namespaced
version: v1alpha2
---
#
#
---
# Source: istio/charts/galley/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-galley-istio-system
labels:
app: istio-galley
chart: galley-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations"]
verbs: ["*"]
- apiGroups: ["config.istio.io"] # istio mixer CRD watcher
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
resources: ["deployments"]
resourceNames: ["istio-galley"]
verbs: ["get"]
- apiGroups: ["*"]
resources: ["endpoints"]
resourceNames: ["istio-galley"]
verbs: ["get"] ---
# Source: istio/charts/gateways/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app: gateways
chart: gateways-1.0.
heritage: Tiller
release: istio
name: istio-egressgateway-istio-system
rules:
- apiGroups: ["extensions"]
resources: ["thirdpartyresources", "virtualservices", "destinationrules", "gateways"]
verbs: ["get", "watch", "list", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app: gateways
chart: gateways-1.0.
heritage: Tiller
release: istio
name: istio-ingressgateway-istio-system
rules:
- apiGroups: ["extensions"]
resources: ["thirdpartyresources", "virtualservices", "destinationrules", "gateways"]
verbs: ["get", "watch", "list", "update"]
---
# Source: istio/charts/ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app: ingress
chart: ingress-1.0.
heritage: Tiller
release: istio
name: istio-ingress-istio-system
rules:
- apiGroups: ["extensions"]
resources: ["thirdpartyresources", "ingresses"]
verbs: ["get", "watch", "list", "update"]
- apiGroups: [""]
resources: ["configmaps", "pods", "endpoints", "services"]
verbs: ["get", "watch", "list"] ---
# Source: istio/charts/kiali/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kiali
labels:
app: kiali
version: master
rules:
- apiGroups: ["","apps", "autoscaling"]
resources:
- configmaps
- namespaces
- nodes
- pods
- projects
- services
- endpoints
- deployments
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups: ["config.istio.io"]
resources:
- rules
- circonuses
- deniers
- fluentds
- kubernetesenvs
- listcheckers
- memquotas
- opas
- prometheuses
- rbacs
- servicecontrols
- solarwindses
- stackdrivers
- statsds
- stdios
- apikeys
- authorizations
- checknothings
- kuberneteses
- listentries
- logentries
- metrics
- quotas
- reportnothings
- servicecontrolreports
- quotaspecs
- quotaspecbindings
verbs:
- get
- list
- watch
- apiGroups: ["networking.istio.io"]
resources:
- virtualservices
- destinationrules
- serviceentries
- gateways
verbs:
- get
- list
- watch
---
# Source: istio/charts/mixer/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-mixer-istio-system
labels:
app: mixer
chart: mixer-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: ["config.istio.io"] # istio CRD watcher
resources: ["*"]
verbs: ["create", "get", "list", "watch", "patch"]
- apiGroups: ["rbac.istio.io"] # istio RBAC watcher
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["configmaps", "endpoints", "pods", "services", "namespaces", "secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"] ---
# Source: istio/charts/pilot/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-pilot-istio-system
labels:
app: istio-pilot
chart: pilot-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: ["config.istio.io"]
resources: ["*"]
verbs: ["*"]
- apiGroups: ["rbac.istio.io"]
resources: ["*"]
verbs: ["get", "watch", "list"]
- apiGroups: ["networking.istio.io"]
resources: ["*"]
verbs: ["*"]
- apiGroups: ["authentication.istio.io"]
resources: ["*"]
verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["*"]
- apiGroups: ["extensions"]
resources: ["thirdpartyresources", "thirdpartyresources.extensions", "ingresses", "ingresses/status"]
verbs: ["*"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create", "get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["endpoints", "pods", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["namespaces", "nodes", "secrets"]
verbs: ["get", "list", "watch"] ---
# Source: istio/charts/prometheus/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus-istio-system
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"] ---
# Source: istio/charts/security/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-citadel-istio-system
labels:
app: security
chart: security-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "watch", "list", "update", "delete"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "watch", "list"] ---
# Source: istio/charts/sidecarInjectorWebhook/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: istio-sidecar-injector-istio-system
labels:
app: istio-sidecar-injector
chart: sidecarInjectorWebhook-1.0.
heritage: Tiller
release: istio
rules:
- apiGroups: ["*"]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["mutatingwebhookconfigurations"]
verbs: ["get", "list", "watch", "patch"] ---
# Source: istio/charts/galley/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-galley-admin-role-binding-istio-system
labels:
app: istio-galley
chart: galley-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-galley-istio-system
subjects:
- kind: ServiceAccount
name: istio-galley-service-account
namespace: istio-system
---
# Source: istio/charts/gateways/templates/clusterrolebindings.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-egressgateway-istio-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-egressgateway-istio-system
subjects:
- kind: ServiceAccount
name: istio-egressgateway-service-account
namespace: istio-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-ingressgateway-istio-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-ingressgateway-istio-system
subjects:
- kind: ServiceAccount
name: istio-ingressgateway-service-account
namespace: istio-system
---
# Source: istio/charts/ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-ingress-istio-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-pilot-istio-system
subjects:
- kind: ServiceAccount
name: istio-ingress-service-account
namespace: istio-system
---
# Source: istio/charts/kiali/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-kiali-admin-role-binding-istio-system
labels:
app: kiali
chart: kiali-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kiali
subjects:
- kind: ServiceAccount
name: kiali-service-account
namespace: istio-system
---
# Source: istio/charts/mixer/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-mixer-admin-role-binding-istio-system
labels:
app: mixer
chart: mixer-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-mixer-istio-system
subjects:
- kind: ServiceAccount
name: istio-mixer-service-account
namespace: istio-system
---
# Source: istio/charts/pilot/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-pilot-istio-system
labels:
app: istio-pilot
chart: pilot-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-pilot-istio-system
subjects:
- kind: ServiceAccount
name: istio-pilot-service-account
namespace: istio-system
---
# Source: istio/charts/prometheus/templates/clusterrolebindings.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus-istio-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-istio-system
subjects:
- kind: ServiceAccount
name: prometheus
namespace: istio-system
---
# Source: istio/charts/security/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-citadel-istio-system
labels:
app: security
chart: security-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-citadel-istio-system
subjects:
- kind: ServiceAccount
name: istio-citadel-service-account
namespace: istio-system
---
# Source: istio/charts/sidecarInjectorWebhook/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: istio-sidecar-injector-admin-role-binding-istio-system
labels:
app: istio-sidecar-injector
chart: sidecarInjectorWebhook-1.0.
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-sidecar-injector-istio-system
subjects:
- kind: ServiceAccount
name: istio-sidecar-injector-service-account
namespace: istio-system
---
# Source: istio/charts/galley/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-galley
  namespace: istio-system
  labels:
    istio: galley
spec:
  ports:
  - port: 443
    name: https-validation
  - port: 9093
    name: http-monitoring
  selector:
    istio: galley
---
# Source: istio/charts/gateways/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-egressgateway
  namespace: istio-system
  annotations:
  labels:
    chart: gateways-1.0.2
    release: istio
    heritage: Tiller
    app: istio-egressgateway
    istio: egressgateway
spec:
  type: NodePort
  selector:
    app: istio-egressgateway
    istio: egressgateway
  ports:
    -
      name: http2
      port: 80
    -
      name: https
      port: 443
---
apiVersion: v1
kind: Service
metadata:
name: istio-ingressgateway
namespace: istio-system
annotations:
labels:
chart: gateways-1.0.
release: istio
heritage: Tiller
app: istio-ingressgateway
istio: ingressgateway
spec:
type: NodePort
selector:
app: istio-ingressgateway
istio: ingressgateway
ports:
-
name: http2
nodePort:
port:
targetPort:
-
name: https
nodePort:
port:
-
name: tcp
nodePort:
port:
-
name: tcp-pilot-grpc-tls
port:
targetPort:
-
name: tcp-citadel-grpc-tls
port:
targetPort:
-
name: tcp-dns-tls
port:
targetPort:
-
name: http2-prometheus
port:
targetPort:
-
name: http2-grafana
port:
targetPort:
---
# Source: istio/charts/grafana/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: istio-system
annotations:
labels:
app: grafana
chart: grafana-1.0.2
release: istio
heritage: Tiller
spec:
type: ClusterIP
ports:
- port:
targetPort:
protocol: TCP
name: http
selector:
app: grafana
---
# Source: istio/charts/ingress/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: istio-ingress
namespace: istio-system
labels:
chart: ingress-1.0.2
release: istio
heritage: Tiller
istio: ingress
annotations:
spec:
type: NodePort
selector:
istio: ingress
ports:
-
name: http
nodePort:
port:
-
name: https
port:
---
# Source: istio/charts/kiali/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: kiali
namespace: istio-system
labels:
app: kiali
spec:
type: NodePort
ports:
- name: tcp
protocol: TCP
port:
name: http-kiali
selector:
app: kiali
---
# Source: istio/charts/mixer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: istio-policy
namespace: istio-system
labels:
chart: mixer-1.0.2
release: istio
istio: mixer
spec:
ports:
- name: grpc-mixer
port:
- name: grpc-mixer-mtls
port:
- name: http-monitoring
port:
selector:
istio: mixer
istio-mixer-type: policy
---
apiVersion: v1
kind: Service
metadata:
name: istio-telemetry
namespace: istio-system
labels:
chart: mixer-1.0.2
release: istio
istio: mixer
spec:
ports:
- name: grpc-mixer
port:
- name: grpc-mixer-mtls
port:
- name: http-monitoring
port:
- name: prometheus
port:
selector:
istio: mixer
istio-mixer-type: telemetry
---
# Source: istio/charts/mixer/templates/statsdtoprom.yaml
---
apiVersion: v1
kind: Service
metadata:
name: istio-statsd-prom-bridge
namespace: istio-system
labels:
chart: mixer-1.0.2
release: istio
istio: statsd-prom-bridge
spec:
ports:
- name: statsd-prom
port:
- name: statsd-udp
port:
protocol: UDP
selector:
istio: statsd-prom-bridge
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-statsd-prom-bridge
namespace: istio-system
labels:
chart: mixer-1.0.2
release: istio
istio: mixer
spec:
template:
metadata:
labels:
istio: statsd-prom-bridge
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-mixer-service-account
volumes:
- name: config-volume
configMap:
name: istio-statsd-prom-bridge
containers:
- name: statsd-prom-bridge
image: "192.168.200.10/istio-release/statsd-exporter:v0.6.0"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
protocol: UDP
args:
- '-statsd.mapping-config=/etc/statsd/mapping.conf'
resources:
requests:
cpu: 10m
volumeMounts:
- name: config-volume
mountPath: /etc/statsd
---
# Source: istio/charts/pilot/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: istio-pilot
namespace: istio-system
labels:
app: istio-pilot
chart: pilot-1.0.2
release: istio
heritage: Tiller
spec:
ports:
- port:
name: grpc-xds # direct
- port:
name: https-xds # mTLS
- port:
name: http-legacy-discovery # direct
- port:
name: http-monitoring
selector:
istio: pilot
---
# Source: istio/charts/prometheus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: istio-system
annotations:
prometheus.io/scrape: 'true'
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http-prometheus
protocol: TCP
port:
---
# Source: istio/charts/security/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
# we use the normal name here (e.g. 'prometheus')
# as grafana is configured to use this as a data source
name: istio-citadel
namespace: istio-system
labels:
app: istio-citadel
spec:
ports:
- name: grpc-citadel
port:
targetPort:
protocol: TCP
- name: http-monitoring
port:
selector:
istio: citadel
---
# Source: istio/charts/servicegraph/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: servicegraph
namespace: istio-system
annotations:
labels:
app: servicegraph
chart: servicegraph-1.0.2
release: istio
heritage: Tiller
spec:
type: ClusterIP
ports:
- port:
targetPort:
protocol: TCP
name: http
selector:
app: servicegraph
---
# Source: istio/charts/sidecarInjectorWebhook/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: istio-sidecar-injector
namespace: istio-system
labels:
istio: sidecar-injector
spec:
ports:
- port:
selector:
istio: sidecar-injector
---
# Source: istio/charts/galley/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-galley
namespace: istio-system
labels:
app: galley
chart: galley-1.0.2
release: istio
heritage: Tiller
istio: galley
spec:
replicas:
strategy:
rollingUpdate:
maxSurge:
maxUnavailable:
template:
metadata:
labels:
istio: galley
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-galley-service-account
containers:
- name: validator
image: "192.168.200.10/istio-release/galley:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
command:
- /usr/local/bin/galley
- validator
- --deployment-namespace=istio-system
- --caCertFile=/etc/istio/certs/root-cert.pem
- --tlsCertFile=/etc/istio/certs/cert-chain.pem
- --tlsKeyFile=/etc/istio/certs/key.pem
- --healthCheckInterval=1s
- --healthCheckFile=/health
- --webhook-config-file
- /etc/istio/config/validatingwebhookconfiguration.yaml
volumeMounts:
- name: certs
mountPath: /etc/istio/certs
readOnly: true
- name: config
mountPath: /etc/istio/config
readOnly: true
livenessProbe:
exec:
command:
- /usr/local/bin/galley
- probe
- --probe-path=/health
- --interval=10s
initialDelaySeconds:
periodSeconds:
readinessProbe:
exec:
command:
- /usr/local/bin/galley
- probe
- --probe-path=/health
- --interval=10s
initialDelaySeconds:
periodSeconds:
resources:
requests:
cpu: 10m
volumes:
- name: certs
secret:
secretName: istio.istio-galley-service-account
- name: config
configMap:
name: istio-galley-configuration
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/gateways/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-egressgateway
namespace: istio-system
labels:
chart: gateways-1.0.2
release: istio
heritage: Tiller
app: istio-egressgateway
istio: egressgateway
spec:
replicas:
template:
metadata:
labels:
app: istio-egressgateway
istio: egressgateway
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-egressgateway-service-account
containers:
- name: istio-proxy
image: "192.168.200.10/istio-release/proxyv2:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
args:
- proxy
- router
- -v
- ""
- --discoveryRefreshDelay
- '1s' #discoveryRefreshDelay
- --drainDuration
- '45s' #drainDuration
- --parentShutdownDuration
- '1m0s' #parentShutdownDuration
- --connectTimeout
- '10s' #connectTimeout
- --serviceCluster
- istio-egressgateway
- --zipkinAddress
- zipkin:9411
- --statsdUdpAddress
- istio-statsd-prom-bridge:
- --proxyAdminPort
- ""
- --controlPlaneAuthPolicy
- NONE
- --discoveryAddress
- istio-pilot:
resources:
requests:
cpu: 10m
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: egressgateway-certs
mountPath: "/etc/istio/egressgateway-certs"
readOnly: true
- name: egressgateway-ca-certs
mountPath: "/etc/istio/egressgateway-ca-certs"
readOnly: true
volumes:
- name: istio-certs
secret:
secretName: istio.istio-egressgateway-service-account
optional: true
- name: egressgateway-certs
secret:
secretName: "istio-egressgateway-certs"
optional: true
- name: egressgateway-ca-certs
secret:
secretName: "istio-egressgateway-ca-certs"
optional: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-ingressgateway
namespace: istio-system
labels:
chart: gateways-1.0.2
release: istio
heritage: Tiller
app: istio-ingressgateway
istio: ingressgateway
spec:
replicas:
template:
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-ingressgateway-service-account
containers:
- name: istio-proxy
image: "192.168.200.10/istio-release/proxyv2:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
- containerPort:
- containerPort:
- containerPort:
- containerPort:
- containerPort:
- containerPort:
args:
- proxy
- router
- -v
- ""
- --discoveryRefreshDelay
- '1s' #discoveryRefreshDelay
- --drainDuration
- '45s' #drainDuration
- --parentShutdownDuration
- '1m0s' #parentShutdownDuration
- --connectTimeout
- '10s' #connectTimeout
- --serviceCluster
- istio-ingressgateway
- --zipkinAddress
- zipkin:9411
- --statsdUdpAddress
- istio-statsd-prom-bridge:
- --proxyAdminPort
- ""
- --controlPlaneAuthPolicy
- NONE
- --discoveryAddress
- istio-pilot:
resources:
requests:
cpu: 10m
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: ingressgateway-certs
mountPath: "/etc/istio/ingressgateway-certs"
readOnly: true
- name: ingressgateway-ca-certs
mountPath: "/etc/istio/ingressgateway-ca-certs"
readOnly: true
volumes:
- name: istio-certs
secret:
secretName: istio.istio-ingressgateway-service-account
optional: true
- name: ingressgateway-certs
secret:
secretName: "istio-ingressgateway-certs"
optional: true
- name: ingressgateway-ca-certs
secret:
secretName: "istio-ingressgateway-ca-certs"
optional: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/grafana/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grafana
namespace: istio-system
labels:
app: grafana
chart: grafana-1.0.2
release: istio
heritage: Tiller
spec:
replicas:
template:
metadata:
labels:
app: grafana
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
containers:
- name: grafana
image: "192.168.200.10/istio-release/grafana:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
readinessProbe:
httpGet:
path: /login
port:
env:
- name: GRAFANA_PORT
value: ""
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
- name: GF_PATHS_DATA
value: /data/grafana
resources:
requests:
cpu: 10m
volumeMounts:
- name: data
mountPath: /data/grafana
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
volumes:
- name: data
emptyDir: {}
---
# Source: istio/charts/ingress/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-ingress
namespace: istio-system
labels:
app: ingress
chart: ingress-1.0.2
release: istio
heritage: Tiller
istio: ingress
spec:
replicas:
template:
metadata:
labels:
istio: ingress
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-ingress-service-account
containers:
- name: ingress
image: "192.168.200.10/istio-release/proxyv2:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
args:
- proxy
- ingress
- -v
- ""
- --discoveryRefreshDelay
- '1s' #discoveryRefreshDelay
- --drainDuration
- '45s' #drainDuration
- --parentShutdownDuration
- '1m0s' #parentShutdownDuration
- --connectTimeout
- '10s' #connectTimeout
- --serviceCluster
- istio-ingress
- --zipkinAddress
- zipkin:9411
- --statsdUdpAddress
- istio-statsd-prom-bridge:
- --proxyAdminPort
- ""
- --controlPlaneAuthPolicy
- NONE
- --discoveryAddress
- istio-pilot:
resources:
requests:
cpu: 10m
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: ingress-certs
mountPath: /etc/istio/ingress-certs
readOnly: true
volumes:
- name: istio-certs
secret:
secretName: istio.istio-ingress-service-account
optional: true
- name: ingress-certs
secret:
secretName: istio-ingress-certs
optional: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/kiali/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kiali
namespace: istio-system
labels:
app: kiali
chart: kiali-1.0.2
release: istio
heritage: Tiller
spec:
replicas:
selector:
matchLabels:
app: kiali
template:
metadata:
name: kiali
labels:
app: kiali
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: kiali-service-account
containers:
- image: "192.168.200.10/istio-release/kiali:istio-release-1.0"
name: kiali
command:
- "/opt/kiali/kiali"
- "-config"
- "/kiali-configuration/config.yaml"
- "-v"
- ""
env:
- name: ACTIVE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVER_CREDENTIALS_USERNAME
valueFrom:
secretKeyRef:
name: kiali
key: username
- name: SERVER_CREDENTIALS_PASSWORD
valueFrom:
secretKeyRef:
name: kiali
key: passphrase
- name: PROMETHEUS_SERVICE_URL
value: http://prometheus:9090
- name: GRAFANA_DASHBOARD
value: istio-service-dashboard
- name: GRAFANA_VAR_SERVICE_SOURCE
value: var-service
- name: GRAFANA_VAR_SERVICE_DEST
value: var-service
volumeMounts:
- name: kiali-configuration
mountPath: "/kiali-configuration"
resources:
requests:
cpu: 10m
volumes:
- name: kiali-configuration
configMap:
name: kiali
---
# Source: istio/charts/mixer/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-policy
namespace: istio-system
labels:
chart: mixer-1.0.2
release: istio
istio: mixer
spec:
replicas:
template:
metadata:
labels:
app: policy
istio: mixer
istio-mixer-type: policy
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-mixer-service-account
volumes:
- name: istio-certs
secret:
secretName: istio.istio-mixer-service-account
optional: true
- name: uds-socket
emptyDir: {}
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
containers:
- name: mixer
image: "192.168.200.10/istio-release/mixer:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
args:
- --address
- unix:///sock/mixer.socket
- --configStoreURL=k8s://
- --configDefaultNamespace=istio-system
- --trace_zipkin_url=http://zipkin:9411/api/v1/spans
resources:
requests:
cpu: 10m
volumeMounts:
- name: uds-socket
mountPath: /sock
livenessProbe:
httpGet:
path: /version
port:
initialDelaySeconds:
periodSeconds:
- name: istio-proxy
image: "192.168.200.10/istio-release/proxyv2:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
args:
- proxy
- --serviceCluster
- istio-policy
- --templateFile
- /etc/istio/proxy/envoy_policy.yaml.tmpl
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
resources:
requests:
cpu: 10m
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: uds-socket
mountPath: /sock
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-telemetry
namespace: istio-system
labels:
chart: mixer-1.0.2
release: istio
istio: mixer
spec:
replicas:
template:
metadata:
labels:
app: telemetry
istio: mixer
istio-mixer-type: telemetry
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-mixer-service-account
volumes:
- name: istio-certs
secret:
secretName: istio.istio-mixer-service-account
optional: true
- name: uds-socket
emptyDir: {}
containers:
- name: mixer
image: "192.168.200.10/istio-release/mixer:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
args:
- --address
- unix:///sock/mixer.socket
- --configStoreURL=k8s://
- --configDefaultNamespace=istio-system
- --trace_zipkin_url=http://zipkin:9411/api/v1/spans
resources:
requests:
cpu: 10m
volumeMounts:
- name: uds-socket
mountPath: /sock
livenessProbe:
httpGet:
path: /version
port:
initialDelaySeconds:
periodSeconds:
- name: istio-proxy
image: "192.168.200.10/istio-release/proxyv2:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
args:
- proxy
- --serviceCluster
- istio-telemetry
- --templateFile
- /etc/istio/proxy/envoy_telemetry.yaml.tmpl
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
resources:
requests:
cpu: 10m
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: uds-socket
mountPath: /sock
---
# Source: istio/charts/pilot/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-pilot
namespace: istio-system
# TODO: default template doesn't have this, which one is right ?
labels:
app: istio-pilot
chart: pilot-1.0.2
release: istio
heritage: Tiller
istio: pilot
annotations:
checksum/config-volume: f8da08b6b8c170dde721efd680270b2901e750d4aa186ebb6c22bef5b78a43f9
spec:
replicas:
template:
metadata:
labels:
istio: pilot
app: pilot
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-pilot-service-account
containers:
- name: discovery
image: "192.168.200.10/istio-release/pilot:1.0.2"
imagePullPolicy: IfNotPresent
args:
- "discovery"
ports:
- containerPort:
- containerPort:
readinessProbe:
httpGet:
path: /ready
port:
initialDelaySeconds:
periodSeconds:
timeoutSeconds:
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: PILOT_CACHE_SQUASH
value: ""
- name: GODEBUG
value: "gctrace=2"
- name: PILOT_PUSH_THROTTLE_COUNT
value: ""
- name: PILOT_TRACE_SAMPLING
value: ""
resources:
requests:
cpu: 500m
memory: 2048Mi
volumeMounts:
- name: config-volume
mountPath: /etc/istio/config
- name: istio-certs
mountPath: /etc/certs
readOnly: true
- name: istio-proxy
image: "192.168.200.10/istio-release/proxyv2:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
- containerPort:
- containerPort:
args:
- proxy
- --serviceCluster
- istio-pilot
- --templateFile
- /etc/istio/proxy/envoy_pilot.yaml.tmpl
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
resources:
requests:
cpu: 10m
volumeMounts:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
volumes:
- name: config-volume
configMap:
name: istio
- name: istio-certs
secret:
secretName: istio.istio-pilot-service-account
optional: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/prometheus/templates/deployment.yaml
# TODO: the original template has service account, roles, etc
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.2
release: istio
heritage: Tiller
spec:
replicas:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "192.168.200.10/istio-release/prometheus:v2.3.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort:
name: http
livenessProbe:
httpGet:
path: /-/healthy
port:
readinessProbe:
httpGet:
path: /-/ready
port:
resources:
requests:
cpu: 10m
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
volumes:
- name: config-volume
configMap:
name: prometheus
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/security/templates/deployment.yaml
# istio CA watching all namespaces
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-citadel
namespace: istio-system
labels:
app: security
chart: security-1.0.2
release: istio
heritage: Tiller
istio: citadel
spec:
replicas:
template:
metadata:
labels:
istio: citadel
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-citadel-service-account
containers:
- name: citadel
image: "192.168.200.10/istio-release/citadel:1.0.2"
imagePullPolicy: IfNotPresent
args:
- --append-dns-names=true
- --grpc-port=
- --grpc-hostname=citadel
- --citadel-storage-namespace=istio-system
- --custom-dns-names=istio-pilot-service-account.istio-system:istio-pilot.istio-system,istio-ingressgateway-service-account.istio-system:istio-ingressgateway.istio-system
- --self-signed-ca=true
resources:
requests:
cpu: 10m
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/servicegraph/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: servicegraph
namespace: istio-system
labels:
app: servicegraph
chart: servicegraph-1.0.2
release: istio
heritage: Tiller
spec:
replicas:
template:
metadata:
labels:
app: servicegraph
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
containers:
- name: servicegraph
image: "192.168.200.10/istio-release/servicegraph:1.0.2"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
args:
- --prometheusAddr=http://prometheus:9090
livenessProbe:
httpGet:
path: /graph
port:
readinessProbe:
httpGet:
path: /graph
port:
resources:
requests:
cpu: 10m
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/sidecarInjectorWebhook/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-sidecar-injector
namespace: istio-system
labels:
app: sidecarInjectorWebhook
chart: sidecarInjectorWebhook-1.0.2
release: istio
heritage: Tiller
istio: sidecar-injector
spec:
replicas:
template:
metadata:
labels:
istio: sidecar-injector
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: istio-sidecar-injector-service-account
containers:
- name: sidecar-injector-webhook
image: "192.168.200.10/istio-release/sidecar_injector:1.0.2"
imagePullPolicy: IfNotPresent
args:
- --caCertFile=/etc/istio/certs/root-cert.pem
- --tlsCertFile=/etc/istio/certs/cert-chain.pem
- --tlsKeyFile=/etc/istio/certs/key.pem
- --injectConfig=/etc/istio/inject/config
- --meshConfig=/etc/istio/config/mesh
- --healthCheckInterval=2s
- --healthCheckFile=/health
volumeMounts:
- name: config-volume
mountPath: /etc/istio/config
readOnly: true
- name: certs
mountPath: /etc/istio/certs
readOnly: true
- name: inject-config
mountPath: /etc/istio/inject
readOnly: true
livenessProbe:
exec:
command:
- /usr/local/bin/sidecar-injector
- probe
- --probe-path=/health
- --interval=4s
initialDelaySeconds:
periodSeconds:
readinessProbe:
exec:
command:
- /usr/local/bin/sidecar-injector
- probe
- --probe-path=/health
- --interval=4s
initialDelaySeconds:
periodSeconds:
resources:
requests:
cpu: 10m
volumes:
- name: config-volume
configMap:
name: istio
- name: certs
secret:
secretName: istio.istio-sidecar-injector-service-account
- name: inject-config
configMap:
name: istio-sidecar-injector
items:
- key: config
path: config
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/tracing/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-tracing
namespace: istio-system
labels:
app: istio-tracing
chart: tracing-1.0.2
release: istio
heritage: Tiller
spec:
replicas:
template:
metadata:
labels:
app: jaeger
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
containers:
- name: jaeger
image: "docker.io/jaegertracing/all-in-one:1.5"
imagePullPolicy: IfNotPresent
ports:
- containerPort:
- containerPort:
- containerPort:
protocol: UDP
- containerPort:
protocol: UDP
- containerPort:
protocol: UDP
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: COLLECTOR_ZIPKIN_HTTP_PORT
value: ""
- name: MEMORY_MAX_TRACES
value: ""
livenessProbe:
httpGet:
path: /
port:
readinessProbe:
httpGet:
path: /
port:
resources:
requests:
cpu: 10m
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight:
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
# Source: istio/charts/pilot/templates/gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-autogenerated-k8s-ingress
namespace: istio-system
spec:
selector:
istio: ingress
servers:
- port:
number:
protocol: HTTP2
name: http
hosts:
- "*"
---
# Source: istio/charts/gateways/templates/autoscale.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-egressgateway
namespace: istio-system
spec:
maxReplicas:
minReplicas:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-egressgateway
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization:
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-ingressgateway
namespace: istio-system
spec:
maxReplicas:
minReplicas:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-ingressgateway
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization:
---
# Source: istio/charts/ingress/templates/autoscale.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-ingress
namespace: istio-system
spec:
maxReplicas:
minReplicas:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-ingress
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization:
---
# Source: istio/charts/mixer/templates/autoscale.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-policy
namespace: istio-system
spec:
maxReplicas:
minReplicas:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-policy
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization:
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-telemetry
namespace: istio-system
spec:
maxReplicas:
minReplicas:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-telemetry
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization:
---
# Source: istio/charts/pilot/templates/autoscale.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-pilot
spec:
maxReplicas:
minReplicas:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-pilot
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization:
---
# Source: istio/charts/tracing/templates/service-jaeger.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: jaeger-query
namespace: istio-system
annotations:
labels:
app: jaeger
jaeger-infra: jaeger-service
chart: tracing-1.0.2
release: istio
heritage: Tiller
spec:
ports:
- name: query-http
port:
protocol: TCP
targetPort:
selector:
app: jaeger
- apiVersion: v1
kind: Service
metadata:
name: jaeger-collector
namespace: istio-system
labels:
app: jaeger
jaeger-infra: collector-service
chart: tracing-1.0.2
release: istio
heritage: Tiller
spec:
ports:
- name: jaeger-collector-tchannel
port:
protocol: TCP
targetPort:
- name: jaeger-collector-http
port:
targetPort:
protocol: TCP
selector:
app: jaeger
type: ClusterIP
- apiVersion: v1
kind: Service
metadata:
name: jaeger-agent
namespace: istio-system
labels:
app: jaeger
jaeger-infra: agent-service
chart: tracing-1.0.2
release: istio
heritage: Tiller
spec:
ports:
- name: agent-zipkin-thrift
port:
protocol: UDP
targetPort:
- name: agent-compact
port:
protocol: UDP
targetPort:
- name: agent-binary
port:
protocol: UDP
targetPort:
clusterIP: None
selector:
app: jaeger
---
# Source: istio/charts/tracing/templates/service.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: zipkin
namespace: istio-system
labels:
app: jaeger
chart: tracing-1.0.2
release: istio
heritage: Tiller
spec:
type: ClusterIP
ports:
- port:
targetPort:
protocol: TCP
name: http
selector:
app: jaeger
- apiVersion: v1
kind: Service
metadata:
name: tracing
namespace: istio-system
annotations:
labels:
app: jaeger
chart: tracing-1.0.2
release: istio
heritage: Tiller
spec:
ports:
- name: http-query
port:
protocol: TCP
targetPort:
selector:
app: jaeger
---
# Source: istio/charts/sidecarInjectorWebhook/templates/mutatingwebhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
name: istio-sidecar-injector
namespace: istio-system
labels:
app: istio-sidecar-injector
chart: sidecarInjectorWebhook-1.0.2
release: istio
heritage: Tiller
webhooks:
- name: sidecar-injector.istio.io
clientConfig:
service:
name: istio-sidecar-injector
namespace: istio-system
path: "/inject"
caBundle: ""
rules:
- operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods"]
failurePolicy: Fail
namespaceSelector:
matchLabels:
istio-injection: enabled
---
# Source: istio/charts/galley/templates/validatingwehookconfiguration.yaml.tpl
---
# Source: istio/charts/grafana/templates/grafana-ports-mtls.yaml
---
# Source: istio/charts/grafana/templates/pvc.yaml
---
# Source: istio/charts/grafana/templates/secret.yaml
---
# Source: istio/charts/kiali/templates/ingress.yaml
---
# Source: istio/charts/pilot/templates/meshexpansion.yaml
---
# Source: istio/charts/security/templates/create-custom-resources-job.yaml
---
# Source: istio/charts/security/templates/enable-mesh-mtls.yaml
---
# Source: istio/charts/security/templates/meshexpansion.yaml
---
# Source: istio/charts/servicegraph/templates/ingress.yaml
---
# Source: istio/charts/telemetry-gateway/templates/gateway.yaml
---
# Source: istio/charts/tracing/templates/ingress-jaeger.yaml
---
# Source: istio/charts/tracing/templates/ingress.yaml
---
# Source: istio/templates/install-custom-resources.sh.tpl
---
# Source: istio/charts/mixer/templates/config.yaml
apiVersion: "config.istio.io/v1alpha2"
kind: attributemanifest
metadata:
name: istioproxy
namespace: istio-system
spec:
attributes:
origin.ip:
valueType: IP_ADDRESS
origin.uid:
valueType: STRING
origin.user:
valueType: STRING
request.headers:
valueType: STRING_MAP
request.id:
valueType: STRING
request.host:
valueType: STRING
request.method:
valueType: STRING
request.path:
valueType: STRING
request.reason:
valueType: STRING
request.referer:
valueType: STRING
request.scheme:
valueType: STRING
request.total_size:
valueType: INT64
request.size:
valueType: INT64
request.time:
valueType: TIMESTAMP
request.useragent:
valueType: STRING
response.code:
valueType: INT64
response.duration:
valueType: DURATION
response.headers:
valueType: STRING_MAP
response.total_size:
valueType: INT64
response.size:
valueType: INT64
response.time:
valueType: TIMESTAMP
source.uid:
valueType: STRING
source.user: # DEPRECATED
valueType: STRING
source.principal:
valueType: STRING
destination.uid:
valueType: STRING
destination.principal:
valueType: STRING
destination.port:
valueType: INT64
connection.event:
valueType: STRING
connection.id:
valueType: STRING
connection.received.bytes:
valueType: INT64
connection.received.bytes_total:
valueType: INT64
connection.sent.bytes:
valueType: INT64
connection.sent.bytes_total:
valueType: INT64
connection.duration:
valueType: DURATION
connection.mtls:
valueType: BOOL
connection.requested_server_name:
valueType: STRING
context.protocol:
valueType: STRING
context.timestamp:
valueType: TIMESTAMP
context.time:
valueType: TIMESTAMP
# Deprecated, kept for compatibility
context.reporter.local:
valueType: BOOL
context.reporter.kind:
valueType: STRING
context.reporter.uid:
valueType: STRING
api.service:
valueType: STRING
api.version:
valueType: STRING
api.operation:
valueType: STRING
api.protocol:
valueType: STRING
request.auth.principal:
valueType: STRING
request.auth.audiences:
valueType: STRING
request.auth.presenter:
valueType: STRING
request.auth.claims:
valueType: STRING_MAP
request.auth.raw_claims:
valueType: STRING
request.api_key:
valueType: STRING
---
apiVersion: "config.istio.io/v1alpha2"
kind: attributemanifest
metadata:
name: kubernetes
namespace: istio-system
spec:
attributes:
source.ip:
valueType: IP_ADDRESS
source.labels:
valueType: STRING_MAP
source.metadata:
valueType: STRING_MAP
source.name:
valueType: STRING
source.namespace:
valueType: STRING
source.owner:
valueType: STRING
source.service: # DEPRECATED
valueType: STRING
source.serviceAccount:
valueType: STRING
source.services:
valueType: STRING
source.workload.uid:
valueType: STRING
source.workload.name:
valueType: STRING
source.workload.namespace:
valueType: STRING
destination.ip:
valueType: IP_ADDRESS
destination.labels:
valueType: STRING_MAP
destination.metadata:
valueType: STRING_MAP
destination.owner:
valueType: STRING
destination.name:
valueType: STRING
destination.container.name:
valueType: STRING
destination.namespace:
valueType: STRING
destination.service: # DEPRECATED
valueType: STRING
destination.service.uid:
valueType: STRING
destination.service.name:
valueType: STRING
destination.service.namespace:
valueType: STRING
destination.service.host:
valueType: STRING
destination.serviceAccount:
valueType: STRING
destination.workload.uid:
valueType: STRING
destination.workload.name:
valueType: STRING
destination.workload.namespace:
valueType: STRING
---
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
name: handler
namespace: istio-system
spec:
outputAsJson: true
---
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
name: accesslog
namespace: istio-system
spec:
severity: '"Info"'
timestamp: request.time
variables:
sourceIp: source.ip | ip("0.0.0.0")
sourceApp: source.labels["app"] | ""
sourcePrincipal: source.principal | ""
sourceName: source.name | ""
sourceWorkload: source.workload.name | ""
sourceNamespace: source.namespace | ""
sourceOwner: source.owner | ""
destinationApp: destination.labels["app"] | ""
destinationIp: destination.ip | ip("0.0.0.0")
destinationServiceHost: destination.service.host | ""
destinationWorkload: destination.workload.name | ""
destinationName: destination.name | ""
destinationNamespace: destination.namespace | ""
destinationOwner: destination.owner | ""
destinationPrincipal: destination.principal | ""
apiClaims: request.auth.raw_claims | ""
apiKey: request.api_key | request.headers["x-api-key"] | ""
protocol: request.scheme | context.protocol | "http"
method: request.method | ""
url: request.path | ""
responseCode: response.code |
responseSize: response.size |
requestSize: request.size |
requestId: request.headers["x-request-id"] | ""
clientTraceId: request.headers["x-client-trace-id"] | ""
latency: response.duration | "0ms"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
requestedServerName: connection.requested_server_name | ""
userAgent: request.useragent | ""
responseTimestamp: response.time
receivedBytes: request.total_size |
sentBytes: response.total_size |
referer: request.referer | ""
httpAuthority: request.headers[":authority"] | request.host | ""
xForwardedFor: request.headers["x-forwarded-for"] | "0.0.0.0"
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
monitored_resource_type: '"global"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
name: tcpaccesslog
namespace: istio-system
spec:
severity: '"Info"'
timestamp: context.time | timestamp("2017-01-01T00:00:00Z")
variables:
connectionEvent: connection.event | ""
sourceIp: source.ip | ip("0.0.0.0")
sourceApp: source.labels["app"] | ""
sourcePrincipal: source.principal | ""
sourceName: source.name | ""
sourceWorkload: source.workload.name | ""
sourceNamespace: source.namespace | ""
sourceOwner: source.owner | ""
destinationApp: destination.labels["app"] | ""
destinationIp: destination.ip | ip("0.0.0.0")
destinationServiceHost: destination.service.host | ""
destinationWorkload: destination.workload.name | ""
destinationName: destination.name | ""
destinationNamespace: destination.namespace | ""
destinationOwner: destination.owner | ""
destinationPrincipal: destination.principal | ""
protocol: context.protocol | "tcp"
connectionDuration: connection.duration | "0ms"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
requestedServerName: connection.requested_server_name | ""
receivedBytes: connection.received.bytes |
sentBytes: connection.sent.bytes |
totalReceivedBytes: connection.received.bytes_total |
totalSentBytes: connection.sent.bytes_total |
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
monitored_resource_type: '"global"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: stdio
namespace: istio-system
spec:
match: context.protocol == "http" || context.protocol == "grpc"
actions:
- handler: handler.stdio
instances:
- accesslog.logentry
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: stdiotcp
namespace: istio-system
spec:
match: context.protocol == "tcp"
actions:
- handler: handler.stdio
instances:
- tcpaccesslog.logentry
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestcount
namespace: istio-system
spec:
value: ""
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code |
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestduration
namespace: istio-system
spec:
value: response.duration | "0ms"
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code |
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestsize
namespace: istio-system
spec:
value: request.size |
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code |
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: responsesize
namespace: istio-system
spec:
value: response.size |
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code |
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: tcpbytesent
namespace: istio-system
spec:
value: connection.sent.bytes |
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.name | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: tcpbytereceived
namespace: istio-system
spec:
value: connection.received.bytes |
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.name | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
name: handler
namespace: istio-system
spec:
metrics:
- name: requests_total
instance_name: requestcount.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
- name: request_duration_seconds
instance_name: requestduration.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
explicit_buckets:
bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, , 2.5, , ]
- name: request_bytes
instance_name: requestsize.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
exponentialBuckets:
numFiniteBuckets:
scale:
growthFactor:
- name: response_bytes
instance_name: responsesize.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
exponentialBuckets:
numFiniteBuckets:
scale:
growthFactor:
- name: tcp_sent_bytes_total
instance_name: tcpbytesent.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- connection_security_policy
- name: tcp_received_bytes_total
instance_name: tcpbytereceived.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- connection_security_policy
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: promhttp
namespace: istio-system
spec:
match: context.protocol == "http" || context.protocol == "grpc"
actions:
- handler: handler.prometheus
instances:
- requestcount.metric
- requestduration.metric
- requestsize.metric
- responsesize.metric
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: promtcp
namespace: istio-system
spec:
match: context.protocol == "tcp"
actions:
- handler: handler.prometheus
instances:
- tcpbytesent.metric
- tcpbytereceived.metric
---
apiVersion: "config.istio.io/v1alpha2"
kind: kubernetesenv
metadata:
name: handler
namespace: istio-system
spec:
# when running from mixer root, use the following config after adding a
# symbolic link to a kubernetes config file via:
#
# $ ln -s ~/.kube/config mixer/adapter/kubernetes/kubeconfig
#
# kubeconfig_path: "mixer/adapter/kubernetes/kubeconfig"
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: kubeattrgenrulerule
namespace: istio-system
spec:
actions:
- handler: handler.kubernetesenv
instances:
- attributes.kubernetes
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: tcpkubeattrgenrulerule
namespace: istio-system
spec:
match: context.protocol == "tcp"
actions:
- handler: handler.kubernetesenv
instances:
- attributes.kubernetes
---
apiVersion: "config.istio.io/v1alpha2"
kind: kubernetes
metadata:
name: attributes
namespace: istio-system
spec:
# Pass the required attribute data to the adapter
source_uid: source.uid | ""
source_ip: source.ip | ip("0.0.0.0") # default to unspecified ip addr
destination_uid: destination.uid | ""
destination_port: destination.port | 0
attribute_bindings:
# Fill the new attributes from the adapter produced output.
# $out refers to an instance of OutputTemplate message
source.ip: $out.source_pod_ip | ip("0.0.0.0")
source.uid: $out.source_pod_uid | "unknown"
source.labels: $out.source_labels | emptyStringMap()
source.name: $out.source_pod_name | "unknown"
source.namespace: $out.source_namespace | "default"
source.owner: $out.source_owner | "unknown"
source.serviceAccount: $out.source_service_account_name | "unknown"
source.workload.uid: $out.source_workload_uid | "unknown"
source.workload.name: $out.source_workload_name | "unknown"
source.workload.namespace: $out.source_workload_namespace | "unknown"
destination.ip: $out.destination_pod_ip | ip("0.0.0.0")
destination.uid: $out.destination_pod_uid | "unknown"
destination.labels: $out.destination_labels | emptyStringMap()
destination.name: $out.destination_pod_name | "unknown"
destination.container.name: $out.destination_container_name | "unknown"
destination.namespace: $out.destination_namespace | "default"
destination.owner: $out.destination_owner | "unknown"
destination.serviceAccount: $out.destination_service_account_name | "unknown"
destination.workload.uid: $out.destination_workload_uid | "unknown"
destination.workload.name: $out.destination_workload_name | "unknown"
destination.workload.namespace: $out.destination_workload_namespace | "unknown"
---
# Configuration needed by Mixer.
# Mixer cluster is delivered via CDS
# Specify mixer cluster settings
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: istio-policy
namespace: istio-system
spec:
host: istio-policy.istio-system.svc.cluster.local
trafficPolicy:
connectionPool:
http:
http2MaxRequests: 10000
maxRequestsPerConnection: 10000
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: istio-telemetry
namespace: istio-system
spec:
host: istio-telemetry.istio-system.svc.cluster.local
trafficPolicy:
connectionPool:
http:
http2MaxRequests: 10000
maxRequestsPerConnection: 10000
---

This renders the Chart in the install/kubernetes/helm/istio directory and saves the generated manifests to ./istio.yaml. Setting sidecarInjectorWebhook.enabled to true turns on automatic sidecar injection.
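When the list of --set overrides grows this long, the same options can be kept in a values file instead. A minimal sketch, assuming a hypothetical my-values.yaml whose keys mirror the --set flags used above:

$ cat my-values.yaml
sidecarInjectorWebhook:
  enabled: true
tracing:
  enabled: true
grafana:
  enabled: true
kiali:
  enabled: true

$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system -f my-values.yaml > istio.yaml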

After the deployment completes, check that the services in the istio-system namespace are running properly:

[root@master1 kubernetes]# /root/show.sh  | grep istio-system
istio-system grafana-676b67689b-wzsrg / Running 1h 10.254.98.12 node3
istio-system istio-citadel-5f7664d4f5-szqtx / Running 1h 10.254.98.14 node3
istio-system istio-cleanup-secrets-ppbg7 / Completed 1h 10.254.87.7 node2
istio-system istio-egressgateway-7c56f748c7-2lw4z / Running 1h 10.254.98.10 node3
istio-system istio-egressgateway-7c56f748c7-57m6v / Running 1h 10.254.104.6 node5
istio-system istio-egressgateway-7c56f748c7-q9hsj / Running 1h 10.254.87.13 node2
istio-system istio-egressgateway-7c56f748c7-vmrwh / Running 1h 10.254.102.6 node1
istio-system istio-egressgateway-7c56f748c7-wxjfs / Running 1h 10.254.95.3 node4
istio-system istio-galley-7cfd5974fd-hwkz4 / Running 1h 10.254.87.11 node2
istio-system istio-grafana-post-install-p8hfg / Completed 1h 10.254.98.8 node3
istio-system istio-ingressgateway-679574c9c6-6hnnk / Running 1h 10.254.102.7 node1
istio-system istio-ingressgateway-679574c9c6-d456h / Running 1h 10.254.87.9 node2
istio-system istio-ingressgateway-679574c9c6-gd9p7 / Running 1h 10.254.98.17 node3
istio-system istio-ingressgateway-679574c9c6-gzff8 / Running 1h 10.254.104.7 node5
istio-system istio-ingressgateway-679574c9c6-sf75q / Running 1h 10.254.95.5 node4
istio-system istio-pilot-6897c9df47-gpqmg / Running 1h 10.254.87.7 node2
istio-system istio-policy-5459cb554f-6nbsk / Running 1h 10.254.104.8 node5
istio-system istio-policy-5459cb554f-74v9w / Running 1h 10.254.98.18 node3
istio-system istio-policy-5459cb554f-bqxt8 / Running 1h 10.254.95.4 node4
istio-system istio-policy-5459cb554f-f4x6j / Running 1h 10.254.102.8 node1
istio-system istio-policy-5459cb554f-qj74h / Running 1h 10.254.87.10 node2
istio-system istio-sidecar-injector-5c5fb8f6b9-224tw / Running 1h 10.254.98.8 node3
istio-system istio-statsd-prom-bridge-d44479954-pbb8d / Running 1h 10.254.98.9 node3
istio-system istio-telemetry-8694d7f76-9g8s2 / Running 1h 10.254.104.5 node5
istio-system istio-telemetry-8694d7f76-kh7bf / Running 1h 10.254.87.12 node2
istio-system istio-telemetry-8694d7f76-mt5rg / Running 1h 10.254.95.6 node4
istio-system istio-telemetry-8694d7f76-t5p8l / Running 1h 10.254.102.5 node1
istio-system istio-telemetry-8694d7f76-vfdbl / Running 1h 10.254.98.11 node3
istio-system istio-tracing-7c9b8969f7-wpt8p / Running 1h 10.254.98.16 node3
istio-system prometheus-6b945b75b6-vng9q / Running 1h 10.254.98.13 node3
  1. The former istio-ca has been renamed istio-citadel.
  2. istio-cleanup-secrets is a Job that cleans up the CA deployment left behind by earlier Istio versions (the sa, deploy, and svc objects).
  3. There are now an egressgateway and an ingressgateway; the edge components have clearly changed a great deal, which will be covered in a separate article.

Installing directly with kubectl

kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

kubectl apply -f install/kubernetes/istio-demo.yaml

3. Prometheus, Grafana, Servicegraph, and Jaeger

Once all the Pods are up, these services can be accessed through NodePort, Ingress, or kubectl proxy. Here, for example, we use Ingress.
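For a quick look without creating any Ingress, kubectl port-forward also works. A sketch for Grafana, assuming the default app=grafana label from the chart:

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &

Grafana is then reachable at http://localhost:3000.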

First, create Ingress resources for the Prometheus, Grafana, Servicegraph, and Jaeger services:

$ cat ingress.yaml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus
  namespace: istio-system
spec:
  rules:
  - host: prometheus.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: istio-system
spec:
  rules:
  - host: grafana.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: servicegraph
  namespace: istio-system
spec:
  rules:
  - host: servicegraph.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: servicegraph
          servicePort: 8088
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tracing
  namespace: istio-system
spec:
  rules:
  - host: tracing.istio.io
    http:
      paths:
      - path: /
        backend:
          serviceName: tracing
          servicePort: 80
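Apply the manifest and point the four hostnames at the Ingress controller; <ingress-node-ip> below is a placeholder for your environment:

$ kubectl apply -f ingress.yaml
$ echo '<ingress-node-ip> prometheus.istio.io grafana.istio.io servicegraph.istio.io tracing.istio.io' >> /etc/hosts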

Manually injecting the Sidecar

Both manual and automatic injection read their configuration from the istio-sidecar-injector and istio ConfigMaps in the istio-system namespace.

Automatic injection happens while the Pod is being created, so the controller's own configuration is never modified. Deleting Pods by hand or performing a rolling update lets you selectively update their Sidecars.
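Manual injection can also name those ConfigMaps explicitly instead of relying on the defaults; a sketch (check istioctl kube-inject --help on your version for the exact flags):

$ istioctl kube-inject --injectConfigMapName istio-sidecar-injector --meshConfigMapName istio -f gateway-deployment.yaml -o gateway-deployment-injected.yaml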

Injecting the Sidecar into a Deployment using the in-cluster configuration:

[root@master1 ~]# istioctl kube-inject -f gateway-deployment.yaml | kubectl apply -f -
deployment.apps/gateway created
service/gateway created

Inspect the injected proxy:

[root@master1 ~]# kubectl get deployment gateway -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: ""
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"name":"gateway","namespace":"default"},"spec":{"replicas":,"selector":{"matchLabels":{"app":"gateway"}},"strategy":{},"template":{"metadata":{"annotations":{"sidecar.istio.io/status":"{\"version\":\"9f116c4689c03bb21330a7b2baa7a88c26d7c5adb08b1deda5fca9032de8a474\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}"},"creationTimestamp":null,"labels":{"app":"gateway"}},"spec":{"containers":[{"image":"192.168.200.10/testsubject/gateway:28","name":"gateway","ports":[{"containerPort":}],"resources":{"limits":{"cpu":"400m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"512Mi"}}},{"args":["proxy","sidecar","--configPath","/etc/istio/proxy","--binaryPath","/usr/local/bin/envoy","--serviceCluster","gateway","--drainDuration","45s","--parentShutdownDuration","1m0s","--discoveryAddress","istio-pilot.istio-system:15007","--discoveryRefreshDelay","1s","--zipkinAddress","zipkin.istio-system:9411","--connectTimeout","10s","--statsdUdpAddress","istio-statsd-prom-bridge.istio-system:9125","--proxyAdminPort","","--controlPlaneAuthPolicy","NONE"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"INSTANCE_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}},{"name":"ISTIO_META_POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"ISTIO_META_INTERCEPTION_MODE","value":"REDIRECT"}],"image":"docker.io/istio/proxyv2:1.0.0","imagePullPolicy":"IfNotPresent","name":"istio-proxy","resources":{"requests":{"cpu":"10m"}},"securityContext":{"privileged":false,"readOnlyRootFilesystem":true,"runAsUser":},"volumeMounts":[{"mountPath":"/etc/istio/proxy","name":"istio-envoy"},{"mountPath":"/etc/certs/","name":"istio-certs","readOnly":true}]}],"initContainers":[{"args":["-p","","-u","","-m","REDIRECT","-i","*","-x","","-b","80,","-d",""],"image":"192.168.200.10/istio/proxy_init:1.0.0","imagePullPolicy":"IfNotPresent","name":"istio-init","resources":{},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":true}}],"volumes":[{"emptyDir":{"medium":"Memory"},"name":"istio-envoy"},{"name":"istio-certs","secret":{"optional":true,"secretName":"istio.default"}}]}}},"status":{}}
creationTimestamp: --15T06::17Z
generation:
name: gateway
namespace: default
resourceVersion: ""
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/gateway
uid: 6af616cf-a053-11e8-b03c-005056845c62
spec:
progressDeadlineSeconds:
replicas:
revisionHistoryLimit:
selector:
matchLabels:
app: gateway
strategy:
rollingUpdate:
maxSurge: %
maxUnavailable: %
type: RollingUpdate
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"9f116c4689c03bb21330a7b2baa7a88c26d7c5adb08b1deda5fca9032de8a474","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: gateway
spec:
containers:
- image: 192.168.200.10/testsubject/gateway:
imagePullPolicy: IfNotPresent
name: gateway
ports:
- containerPort:
protocol: TCP
resources:
limits:
cpu: 400m
memory: 1Gi
requests:
cpu: 100m
memory: 512Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- gateway
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:
- --discoveryRefreshDelay
- 1s
- --zipkinAddress
- zipkin.istio-system:
- --connectTimeout
- 10s
- --statsdUdpAddress
- istio-statsd-prom-bridge.istio-system:
- --proxyAdminPort
- ""
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
image: docker.io/istio/proxyv2:1.0.
imagePullPolicy: IfNotPresent
name: istio-proxy
resources:
requests:
cpu: 10m
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsUser:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- args:
- -p
- ""
- -u
- ""
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- ,
- -d
- ""
image: 192.168.200.10/istio/proxy_init:1.0.
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds:
volumes:
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
defaultMode:
optional: true
secretName: istio.default
status:
availableReplicas:
conditions:
- lastTransitionTime: --15T06::28Z
lastUpdateTime: --15T06::28Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: --15T06::17Z
lastUpdateTime: --15T06::28Z
message: ReplicaSet "gateway-78ddd84d8d" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration:
readyReplicas:
replicas:
updatedReplicas:

Automatic Sidecar Injection

Sidecars can be injected automatically using Kubernetes' mutating webhook admission controller, a capability available only in Kubernetes 1.9 and later. Before using it, verify that the kube-apiserver process has an admission-control flag whose value includes both MutatingAdmissionWebhook and ValidatingAdmissionWebhook, loaded in the correct order, and that the admissionregistration API is enabled:

kubectl api-versions | grep admissionregistration
admissionregistration.k8s.io/v1alpha1
admissionregistration.k8s.io/v1beta1
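If those API versions are missing, inspect the kube-apiserver flags. A sketch assuming a process-based control plane (newer Kubernetes versions use --enable-admission-plugins instead of --admission-control):

$ ps -ef | grep kube-apiserver | tr ' ' '\n' | grep admission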

Unlike manual injection, automatic injection happens at the Pod level, so the Deployment itself shows no change. Instead, inspect an individual Pod with kubectl describe, where the injected Sidecar is visible.
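A quick way to confirm the proxy container is present is to list a Pod's containers; the Pod name below is illustrative:

$ kubectl get pod sleep-776b7bcdcd-bhn9m -o jsonpath='{.spec.containers[*].name}'
sleep istio-proxy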

[root@master1 helm]# kubectl get configmap istio-sidecar-injector -n istio-system -o yaml > istio-sidecar-injector.yaml
[root@master1 helm]# vim istio-sidecar-injector.yaml    # change the injected proxyv2 image source, e.g. from docker.io to a private registry
[root@master1 helm]# kubectl apply -f istio-sidecar-injector.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/istio-sidecar-injector configured

Disabling or updating the webhook

The webhook used for Sidecar injection is enabled by default. To disable it, use Helm to generate a new istio.yaml with the sidecarInjectorWebhook.enabled parameter set to false, then apply the update. That is:

helm template --namespace=istio-system --set sidecarInjectorWebhook.enabled=false install/kubernetes/helm/istio > istio.yaml
kubectl create ns istio-system
kubectl apply -n istio-system -f istio.yaml

Deploying an application

Label the default namespace with istio-injection=enabled:

kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
default Active 1h enabled
istio-system Active 1h
kube-public Active 1h
kube-system Active 1h

With that in place, Sidecar injection is triggered whenever a Pod is created. Deleting a running Pod produces a new Pod, and the new Pod gets a Sidecar injected. The original Pod has a single container, while the injected Pod has two:

kubectl delete pod sleep-776b7bcdcd-7hpnk
kubectl get pod
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-7hpnk 1/1 Terminating 0 1m
sleep-776b7bcdcd-bhn9m 2/2 Running 0 7s

Examine the injected Pod in detail. The extra istio-proxy container and its corresponding volumes are easy to spot. Be sure to run the command below with the correct Pod name:

 kubectl describe pod sleep-776b7bcdcd-bhn9m

Disable automatic injection for the default namespace, then verify that newly created Pods no longer carry a Sidecar container:

kubectl label namespace default istio-injection-
kubectl delete pod sleep-776b7bcdcd-bhn9m
kubectl get pod
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-bhn9m 2/2 Terminating 0 2m
sleep-776b7bcdcd-gmvnr 1/1 Running 0 2s

How it works

When Kubernetes invokes the webhook, the admissionregistration.k8s.io/v1beta1#MutatingWebhookConfiguration applies. The default configuration shipped with Istio selects Pods in namespaces carrying the istio-injection=enabled label. The set of target namespaces can be edited with kubectl edit mutatingwebhookconfiguration istio-sidecar-injector.

After modifying the mutatingwebhookconfiguration, restart any Pods that already had a Sidecar injected.

The istio-sidecar-injector ConfigMap in the istio-system namespace contains the default injection policy as well as the Sidecar injection template.
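The fragment of the default webhook configuration that ties injection to the namespace label looks roughly like this (a sketch; inspect the live object for the authoritative version):

namespaceSelector:
  matchLabels:
    istio-injection: enabled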

Policy

disabled - The Sidecar injector does not inject Pods by default. Add the sidecar.istio.io/inject annotation with the value true to the Pod template to enable injection.

enabled - The Sidecar injector injects Pods by default. Add the sidecar.istio.io/inject annotation with the value false to the Pod template to prevent injection of that Pod.

The following example uses the sidecar.istio.io/inject annotation to disable Sidecar injection:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ignored
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: ignored
        image: tutum/curl
        command: ["/bin/sleep","infinity"]

Template

The Sidecar injection template is a golang template which, when parsed and executed, is decoded into the following structure containing the list of containers and volumes that will be injected into the Pod:

type SidecarInjectionSpec struct {
    InitContainers   []v1.Container                `yaml:"initContainers"`
    Containers       []v1.Container                `yaml:"containers"`
    Volumes          []v1.Volume                   `yaml:"volumes"`
    ImagePullSecrets []corev1.LocalObjectReference `yaml:"imagePullSecrets"`
}

At runtime, the template is evaluated against the following data structure:

type SidecarTemplateData struct {
    ObjectMeta  *metav1.ObjectMeta
    Spec        *v1.PodSpec
    ProxyConfig *meshconfig.ProxyConfig // defined at https://istio.io/docs/reference/config/service-mesh.html#proxyconfig
    MeshConfig  *meshconfig.MeshConfig  // defined at https://istio.io/docs/reference/config/service-mesh.html#meshconfig
}

ObjectMeta and Spec both come from the Pod. ProxyConfig and MeshConfig come from the istio ConfigMap in the istio-system namespace. Templates can use this data to conditionally define the containers and volumes to be injected.

For example, the following template snippet comes from install/kubernetes/istio-sidecar-injector-configmap-release.yaml:

containers:
- name: istio-proxy
  image: istio.io/proxy:0.5.
  args:
  - proxy
  - sidecar
  - --configPath
  - {{ .ProxyConfig.ConfigPath }}
  - --binaryPath
  - {{ .ProxyConfig.BinaryPath }}
  - --serviceCluster
  {{ if ne "" (index .ObjectMeta.Labels "app") -}}
  - {{ index .ObjectMeta.Labels "app" }}
  {{ else -}}
  - "istio-proxy"
  {{ end -}}

Applied to a Pod when the Sleep application is deployed, it expands to:

containers:
- name: istio-proxy
  image: istio.io/proxy:0.5.
  args:
  - proxy
  - sidecar
  - --configPath
  - /etc/istio/proxy
  - --binaryPath
  - /usr/local/bin/envoy
  - --serviceCluster
  - sleep
To remove the sidecar injector webhook and its supporting resources:

kubectl delete mutatingwebhookconfiguration istio-sidecar-injector
kubectl -n istio-system delete service istio-sidecar-injector
kubectl -n istio-system delete deployment istio-sidecar-injector
kubectl -n istio-system delete serviceaccount istio-sidecar-injector-service-account
kubectl delete clusterrole istio-sidecar-injector-istio-system
kubectl delete clusterrolebinding istio-sidecar-injector-admin-role-binding-istio-system

The commands above do not remove the Sidecars already injected into running Pods. To remove them, perform a rolling update, or simply delete the old Pods and force the Deployment to recreate them.
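One sketch of forcing such a re-creation without editing the manifest is to patch a throwaway label into the pod template (the label name here is arbitrary):

kubectl patch deployment sleep -p '{"spec":{"template":{"metadata":{"labels":{"force-redeploy":"'$(date +%s)'"}}}}}'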

Beyond that, the other changes made in this task can be reverted as well:

 kubectl label namespace default istio-injection-

Istio's requirements for Pods and Services

To become part of the service mesh, the Pods and Services in a Kubernetes cluster must satisfy the following requirements:

  1. Name ports correctly: Service ports must be named, and the name may only follow the <protocol>[-<suffix>] pattern, where <protocol> is one of http, http2, grpc, mongo, or redis; Istio relies on these names to provide protocol-aware routing. For example, name: http2-foo and name: http are valid port names, but name: http2foo is not. If a port is unnamed, or its name does not use one of these prefixes, its traffic is treated as plain TCP (unless the port explicitly declares Protocol: UDP). See the example Service after this list.

  2. Associate Pods with Services: a Pod must belong to at least one Kubernetes Service; if it belongs to multiple Services, those Services must not use the same port number for different protocols, for example HTTP and TCP.

  3. Give Deployments an app label: when deploying Pods with a Kubernetes Deployment, it is recommended to add an explicit, meaningful app label to each Deployment. The app label is used to add contextual information during distributed tracing.
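A sketch of a Service that satisfies the port-naming rule; the name and port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: gateway
  labels:
    app: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http          # parsed as HTTP by Istio
    port: 80
  - name: tcp-metrics   # treated as opaque TCP
    port: 9100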

Uninstalling

  • For option 1, uninstall with kubectl:

    $ kubectl delete -f istio.yaml

  • For option 2, uninstall with Helm:

    $ helm delete --purge istio

Istio's traffic management model essentially decouples traffic from infrastructure scaling: operators use Pilot to specify the rules that traffic should follow, rather than which pods/VMs should receive it; Pilot and the intelligent Envoy proxies take care of the rest. For example, you can tell Pilot that 5% of a service's traffic should go to a canary version regardless of the size of the canary deployment, or that traffic should be routed to a particular version based on the content of the request.

Pilot and Envoy

The core component of Istio traffic management is Pilot, which manages and configures every Envoy proxy instance deployed in a given Istio service mesh. It lets you specify the rules for routing traffic between Envoy proxies and configure failure-recovery features such as timeouts, retries, and circuit breakers. It also maintains a canonical model of all the services in the mesh and uses it, through its discovery services, to keep each Envoy informed about the other instances in the mesh.

Each Envoy instance maintains load-balancing information based on the information it gets from Pilot and on periodic health checks of the other instances in its load-balancing pool, allowing it to distribute traffic intelligently between target instances while following its specified routing rules.

Pilot manages the lifecycle of the Envoy instances deployed across the Istio service mesh.

Pilot maintains a canonical representation of the services in the mesh that is independent of the underlying platform. Platform-specific adapters in Pilot populate this canonical model appropriately. For example, the Kubernetes adapter in Pilot implements the controllers needed to watch the Kubernetes API server for changes to pod registrations, ingress resources, and resources storing traffic management rules. This data is translated into the canonical representation, from which Envoy-specific configuration is then generated.

Pilot exposes APIs for service discovery and for dynamic updates of load-balancing pools and routing tables.

Operators specify high-level traffic management rules through Pilot's Rules API. These rules are translated into low-level configuration and distributed to the Envoy instances via the discovery API.

Request Routing

Ingress and Egress

All traffic entering and leaving the Istio service mesh transits through Envoy proxies. Routing traffic through Envoy in front of a service lets operators run A/B tests, deploy canaries, and so on for user-facing services. Similarly, routing traffic to external web services through Envoy (for example, a maps API or a video-service API) lets operators add timeouts, retries, and circuit breakers for those calls while obtaining detailed metrics for the connections.

Service Discovery and Load Balancing

Pilot consumes information from the service registry and provides a platform-independent service-discovery interface. Envoy instances in the mesh perform service discovery and dynamically update their load-balancing pools accordingly.

Services in the mesh reach each other via their DNS names. All HTTP traffic bound for a service is automatically re-routed through Envoy, which distributes the traffic across the instances in its load-balancing pool. While Envoy supports several sophisticated load-balancing algorithms, Istio currently allows only three modes: round robin, random, and weighted least request.
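Selecting one of these modes is done through a DestinationRule trafficPolicy; a sketch choosing weighted least request (the host name is illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN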

In addition to load balancing, Envoy periodically checks the health of each instance in the pool. Envoy follows a circuit-breaker pattern, classifying instances as unhealthy or healthy based on their failure rate for the health-check API call. In other words, when the number of failed health checks for a given instance exceeds a preset threshold, it is ejected from the load-balancing pool; when the number of passing health checks exceeds a preset threshold, it is added back. You can learn more about Envoy's failure-handling features under Failure Handling below.

A service can actively shed load by responding to health checks with HTTP 503. In that case the service instance is removed from the caller's load-balancing pool immediately.

Failure Handling

Envoy provides a set of out-of-the-box failure-recovery features, including:

  1. Timeouts

  2. Bounded retries, with a retry budget and variable jitter (intervals) between attempts

  3. Limits on the number of concurrent connections and requests to upstream services

  4. Active (periodic) health checks on every member of the load-balancing pool

  5. Fine-grained circuit breakers (passive health checks), applied per instance in the load-balancing pool

Do applications running in Istio still need to handle failures?

Yes. Istio improves the reliability and availability of the services in the mesh, but applications still need to handle failures (errors) and take appropriate fallback actions. For example, when all the instances in a load-balancing pool have failed, Envoy returns HTTP 503. It is the application's responsibility to implement the logic needed to respond appropriately to such an HTTP 503 from an upstream service.

Fault Injection

While the Envoy sidecar/proxy provides a host of failure-recovery mechanisms to services running on Istio, it is still imperative to test the end-to-end failure-recovery capability of the application as a whole. Misconfigured failure-recovery policies (for example, incompatible or overly restrictive timeouts across service calls) could leave critical services in the application continuously unavailable, breaking the user experience.

Istio can inject protocol-specific faults into the network without killing Pods, rather than delaying or corrupting packets at the TCP layer. The rationale is that the failures observed by the application layer are the same regardless of the network-level failure, and that more meaningful failures (for example, HTTP error codes) can be injected at the application layer to exercise and improve an application's resilience.

Operators can configure faults for requests that match specific criteria, and can further restrict the percentage of requests subjected to them. Two kinds of fault can be injected: delays and aborts. Delays are timing failures that simulate increased network latency or an overloaded upstream service. Aborts are crash failures that simulate an upstream service failing; they usually manifest as HTTP error codes or TCP connection failures.

Rule Configuration

Istio provides a simple configuration model to control API calls and layer-4 traffic between the services of an application deployment. Operators use this model to configure service-level properties such as circuit breakers, timeouts, and retries, as well as common continuous-deployment tasks such as canary rollouts, A/B testing, and percentage-based traffic splits for gradually rolling out an application.

Istio has four traffic-management configuration resources: VirtualService, DestinationRule, ServiceEntry, and Gateway. The highlights of each are described below; see the networking reference for more detail.

  • A VirtualService defines routing rules in the Istio service mesh, controlling how requests are routed to a service.

  • A DestinationRule configures the set of policies applied to a request after VirtualService routing has taken effect.

  • A ServiceEntry is commonly used to enable requests to services outside of the Istio service mesh.

  • A Gateway configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.

For example, the requirement to send 100% of incoming traffic for the reviews service to version v1 can be expressed with the following rule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
This configuration says that traffic sent to the reviews service (identified in the host field) should be routed to the v1 subset of reviews instances. The subset field in the route names a predefined subset, whose definition comes from a destination rule configuration.

A subset specifies one or more labels identifying a particular set of version instances. For example, in a Kubernetes deployment of Istio, "version: v1" means that only pods carrying the "version: v1" label receive traffic.

In a DestinationRule you can add further policies; for example, the following definition selects the random load-balancing mode:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Rules can be configured with the kubectl command. The configuring request routing task contains configuration examples.

Virtual Service

A routing rule corresponds to one or more request destination hosts specified in a VirtualService configuration. These hosts may or may not be actual destination workloads, and may not even be routable services inside the same mesh. For example, to define routing rules for requests to the reviews service, you could use the internal name reviews or a domain name such as bookinfo.com; the VirtualService host field could be defined like this:
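The snippet itself is elided in this text; per the upstream documentation it would look roughly like:

hosts:
- reviews
- bookinfo.com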

Splitting Traffic Between Services

Each routing rule identifies one or more weighted backends to call when the rule is activated. Each backend corresponds to a specific version of the destination service, where versions are distinguished by labels. If a service version has multiple registered instances, traffic is routed among them according to the load-balancing policy defined for that service, round robin by default.

For example, the following rule sends 25% of the traffic for the reviews service to instances with the v2 label and the remaining 75% to v1:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25

Timeouts and Retries

By default, the timeout for HTTP requests is 15 seconds, but it can be overridden with a route rule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    timeout: 10s

Route rules can also specify the number of retries for an HTTP request. The following sets the maximum number of attempts; the timeout for each attempt can likewise be overridden:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s

Injecting Faults

One or more faults can be injected while forwarding HTTP requests to a rule's chosen destination. A fault can be either a delay or an abort.

The following example injects a 5-second delay into 10% of the traffic bound for the ratings:v1 service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 10
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
        subset: v1

The other fault type, abort, terminates the request early, for example to simulate a failure.

The next example returns an HTTP 400 error for 10% of the traffic bound for the ratings:v1 service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        percent: 10
        httpStatus: 400
    route:
    - destination:
        host: ratings
        subset: v1

Delays and aborts are sometimes used together. For example, the following rule applies to traffic from reviews:v2 to ratings:v1, delaying all requests by 5 seconds and then aborting 10% of them:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
    fault:
      delay:
        fixedDelay: 5s
      abort:
        percent: 10
        httpStatus: 400
    route:
    - destination:
        host: ratings
        subset: v1

Conditional Rules

A rule can optionally be qualified to apply only to requests that match certain criteria:

1. Restrict to specific client workloads using workload labels. For example, a rule can indicate that it only applies to calls from workload instances (pods) implementing the reviews service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
    ...

The value of sourceLabels depends on the implementation of the service. In Kubernetes, for example, it would probably be the same labels that are used in the pod selector of the corresponding Kubernetes service.

The above example can be further refined to apply only to calls from instances of version v2 of the reviews service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
    ...

2. Select rules based on HTTP headers. The following rule only applies to incoming requests that carry an end-user header with the value jason:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason

If more than one header is specified in the rule, all the corresponding headers must match for the rule to apply.

3. Select rules based on the request URI. For example, the following rule only applies to a request if its URI path starts with /api/v1:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - match:
    - uri:
        prefix: /api/v1
    ...

Multiple Match Conditions

Multiple match conditions can be set at the same time. In that case, AND or OR semantics apply depending on the nesting.

If multiple conditions are nested in a single match clause, the conditions are ANDed. For example, the following rule applies only if the client workload is reviews:v2 and the request carries a custom end-user header with the value jason:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
      headers:
        end-user:
          exact: jason
    ...

If instead the conditions appear in separate match clauses, only one of them needs to apply (OR semantics):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:
        app: reviews
        version: v2
    - headers:
        end-user:
          exact: jason

This rule applies if the client workload is reviews:v2, or if the request carries a custom end-user header with the value jason.

Precedence

When there are multiple rules for the same destination, they are applied in the order they appear in the VirtualService; in other words, the first rule in the list has the highest precedence.

Why precedence matters: whenever routing for a service is purely weight-based, it can be specified in a single rule. When other criteria (such as requests from a specific user) are also used to route traffic, more than one rule is needed, and precedence becomes important to ensure the rules are evaluated in the right order.

A common routing pattern is to provide one or more higher-priority rules that match on the source service and headers, followed by a single weight-based rule with no match criteria, which splits all remaining traffic by weight alone.

For example, the following VirtualService contains two rules: every request for the reviews service that includes a header Foo with the value bar is routed to the v2 instances, while all other requests go to v1:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        Foo:
          exact: bar
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

The header-based rule has the higher precedence. If it were lower, the rule would not work: the unrestricted weight-based rule would be evaluated first, so all requests would be routed to v1, even those carrying a matching Foo header. Rule selection stops as soon as a request's traffic characteristics match a rule, which is why precedence needs careful thought whenever more than one rule exists.

Destination Rules

After a request has been routed by a VirtualService, the set of policies configured by a DestinationRule takes effect. These policies are written by service owners and cover circuit breakers, load balancing, TLS, and more.

A DestinationRule also defines the routable subsets (that is, named versions) of the corresponding destination host. A VirtualService refers to these subsets when sending requests to a specific service version.

Here is a DestinationRule for the reviews service configuring policies and subsets:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3

  

A single DestinationRule configuration can contain multiple policies (here, the default policy and a v2-specific one).

Circuit Breakers

Simple circuit breakers can be defined based on criteria such as connection and request limits.

For example, the following DestinationRule limits the v1 version of the reviews service to 100 connections:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100

Rule Evaluation

Similar to route rules, the policies defined in a DestinationRule are associated with a particular host; if a subset is specified, which subset actually takes effect is decided by the route rules.

The first step in rule evaluation is determining the route rule, if any, in the VirtualService corresponding to the requested host; that step decides which subset (that is, which specific version) of the destination service the request is sent to. Next, if the selected subset defines policies, they are evaluated for applicability.

Note: one subtlety of this algorithm is that policies defined for a specific subset take effect only when that subset is explicitly routed to. For example, consider the following configuration, which defines only a rule for the reviews service (with no corresponding VirtualService route rule).
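That configuration is elided in this text; per the upstream documentation it would look roughly like the rule below, whose connection-pool policy never takes effect because no route rule sends traffic to the v1 subset explicitly:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100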
