Kafka proxy is based on the idea of the Cloud SQL Proxy. It allows a service to connect to Kafka brokers without having to deal with SASL/PLAIN authentication and SSL certificates.
It works by opening TCP sockets on the local machine and proxying connections to the associated Kafka brokers when the sockets are used. The host and port in the Metadata and FindCoordinator responses received from the brokers are replaced by the local counterparts. For discovered brokers (not configured as bootstrap servers), local listeners are started on random ports. The dynamic local listeners feature can be disabled, and an additional list of external server mappings can be provided instead.
The proxy can terminate TLS traffic and authenticate users with SASL/PLAIN. The credential verification method is configurable and uses the Go plugin system over RPC.
The proxies can also authenticate each other with a pluggable method that is transparent to other Kafka servers and clients. Currently, Google ID tokens for service accounts are implemented: the proxy client requests and sends a service account JWT, and the proxy server receives it and validates it against Google JWKS.
Kafka API calls can be restricted to prevent certain operations, e.g. topic deletion or produce requests.
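For orientation, a minimal sketch of such a mapping (the broker address and local port below are placeholders, not taken from this document) could look like this:
# Expose one bootstrap broker on a fixed local port; listeners for the
# remaining brokers discovered via metadata are created on random ports.
kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,127.0.0.1:32400"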
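As an illustration, assuming the standard Kafka API key numbering (0 = Produce, 20 = DeleteTopics) and a placeholder broker address, such a restriction could be configured like this:
# Reject Produce (0) and DeleteTopics (20) requests at the proxy.
kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,127.0.0.1:32400" --forbidden-api-keys 0,20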
See:
Kafka proxy with Amazon MSK
Kafka protocol guide
The table below provides an overview of the supported Kafka versions (the specified version and all previous Kafka versions). Not every Kafka release adds new messages/versions relevant to the Kafka proxy, so newer Kafka versions may also work.
| kafka-proxy version | Kafka versions |
|---|---|
| | from 0.11.0 |
| 0.2.9 | up to 2.8.0 |
| 0.3.1 | up to 3.4.0 |
| 0.3.11 | up to 3.7.0 |
| 0.3.12 | up to 3.9.0 |
Download the latest release
Linux
curl -Ls https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.12/kafka-proxy-v0.3.12-linux-amd64.tar.gz | tar xz
macOS
curl -Ls https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.12/kafka-proxy-v0.3.12-darwin-amd64.tar.gz | tar xz
Move the binary into your PATH.
sudo mv ./kafka-proxy /usr/local/bin/kafka-proxy
Alternatively, build the binary from source:
make clean build
Docker images are available on Docker Hub.
You can launch a kafka-proxy container to try it out with:
docker run --rm -p 30001-30003:30001-30003 grepplabs/kafka-proxy:0.3.12 server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003" --dial-address-mapping "localhost:19092,172.17.0.1:19092" --dial-address-mapping "localhost:29092,172.17.0.1:29092" --dial-address-mapping "localhost:39092,172.17.0.1:39092" --debug-enable
kafka-proxy is now reachable on localhost:30001, localhost:30002 and localhost:30003, connecting to the Kafka brokers running in Docker (network bridge gateway 172.17.0.1) that advertise plaintext listeners on localhost:19092, localhost:29092 and localhost:39092.
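Any Kafka client can now be pointed at the local listeners. As a quick check (assuming kcat is installed; it is not part of kafka-proxy), listing the cluster metadata should return the mapped localhost addresses:
# Print broker and topic metadata as seen through the proxy.
kcat -b localhost:30001,localhost:30002,localhost:30003 -L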
The Docker images with the pre-compiled plugins located in /opt/kafka-proxy/bin/ are tagged with <release>-all.
You can launch a kafka-proxy container with the auth-ldap plugin to try it out with:
docker run --rm -p 30001-30003:30001-30003 grepplabs/kafka-proxy:0.3.12-all server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003" --dial-address-mapping "localhost:19092,172.17.0.1:19092" --dial-address-mapping "localhost:29092,172.17.0.1:29092" --dial-address-mapping "localhost:39092,172.17.0.1:39092" --debug-enable --auth-local-enable --auth-local-command=/opt/kafka-proxy/bin/auth-ldap --auth-local-param=--url=ldap://172.17.0.1:389 --auth-local-param=--start-tls=false --auth-local-param=--bind-dn=cn=admin,dc=example,dc=org --auth-local-param=--bind-passwd=admin --auth-local-param=--user-search-base=ou=people,dc=example,dc=org --auth-local-param=--user-filter="(&(objectClass=person)(uid=%u)(memberOf=cn=kafka-users,ou=realm-roles,dc=example,dc=org))"
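Clients connecting to this listener must then authenticate with SASL/PLAIN using credentials that the auth-ldap plugin accepts. A minimal client configuration sketch (user name, password and topic below are placeholders) could look like this:
# Hypothetical client.properties for SASL/PLAIN against the plaintext proxy listener.
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="my-ldap-user" password="my-ldap-password";
EOF
kafka-console-producer --bootstrap-server localhost:30001 --topic test --producer.config client.properties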
Run the kafka-proxy server
Usage:
kafka-proxy server [flags]
Flags:
--auth-gateway-client-command string Path to authentication plugin binary
--auth-gateway-client-enable Enable gateway client authentication
--auth-gateway-client-log-level string Log level of the auth plugin (default "trace")
--auth-gateway-client-magic uint Magic bytes sent in the handshake
--auth-gateway-client-method string Authentication method
--auth-gateway-client-param stringArray Authentication plugin parameter
--auth-gateway-client-timeout duration Authentication timeout (default 10s)
--auth-gateway-server-command string Path to authentication plugin binary
--auth-gateway-server-enable Enable proxy server authentication
--auth-gateway-server-log-level string Log level of the auth plugin (default "trace")
--auth-gateway-server-magic uint Magic bytes sent in the handshake
--auth-gateway-server-method string Authentication method
--auth-gateway-server-param stringArray Authentication plugin parameter
--auth-gateway-server-timeout duration Authentication timeout (default 10s)
--auth-local-command string Path to authentication plugin binary
--auth-local-enable Enable local SASL/PLAIN authentication performed by listener - SASL handshake will not be passed to kafka brokers
--auth-local-log-level string Log level of the auth plugin (default "trace")
--auth-local-mechanism string SASL mechanism used for local authentication: PLAIN or OAUTHBEARER (default "PLAIN")
--auth-local-param stringArray Authentication plugin parameter
--auth-local-timeout duration Authentication timeout (default 10s)
--bootstrap-server-mapping stringArray Mapping of Kafka bootstrap server address to local address (host:port,host:port(,advhost:advport))
--debug-enable Enable Debug endpoint
--debug-listen-address string Debug listen address (default "0.0.0.0:6060")
--default-listener-ip string Default listener IP (default "0.0.0.0")
--dial-address-mapping stringArray Mapping of target broker address to new one (host:port,host:port). The mapping is performed during connection establishment
--dynamic-advertised-listener string Advertised address for dynamic listeners. If empty, default-listener-ip is used
--dynamic-listeners-disable Disable dynamic listeners.
--dynamic-sequential-min-port int If set to non-zero, makes the dynamic listener use a sequential port starting with this value rather than a random port every time.
--external-server-mapping stringArray Mapping of Kafka server address to external address (host:port,host:port). A listener for the external address is not started
--forbidden-api-keys ints Forbidden Kafka request types. The restriction should prevent some Kafka operations e.g. 20 - DeleteTopics
--forward-proxy string URL of the forward proxy. Supported schemas are socks5 and http
--gssapi-auth-type string GSSAPI auth type: KEYTAB or USER (default "KEYTAB")
--gssapi-disable-pa-fx-fast Used to configure the client to not use PA_FX_FAST.
--gssapi-keytab string krb5.keytab file location
--gssapi-krb5 string krb5.conf file path, default: /etc/krb5.conf (default "/etc/krb5.conf")
--gssapi-password string Password for auth type USER
--gssapi-realm string Realm
--gssapi-servicename string ServiceName (default "kafka")
--gssapi-spn-host-mapping stringToString Mapping of Kafka servers address to SPN hosts (default [])
--gssapi-username string Username (default "kafka")
-h, --help help for server
--http-disable Disable HTTP endpoints
--http-health-path string Path on which to health endpoint (default "/health")
--http-listen-address string Address that kafka-proxy is listening on (default "0.0.0.0:9080")
--http-metrics-path string Path on which to expose metrics (default "/metrics")
--kafka-client-id string An optional identifier to track the source of requests (default "kafka-proxy")
--kafka-connection-read-buffer-size int Size of the operating system's receive buffer associated with the connection. If zero, system default is used
--kafka-connection-write-buffer-size int Sets the size of the operating system's transmit buffer associated with the connection. If zero, system default is used
--kafka-dial-timeout duration How long to wait for the initial connection (default 15s)
--kafka-keep-alive duration Keep alive period for an active network connection. If zero, keep-alives are disabled (default 1m0s)
--kafka-max-open-requests int Maximal number of open requests pro tcp connection before sending on it blocks (default 256)
--kafka-read-timeout duration How long to wait for a response (default 30s)
--kafka-write-timeout duration How long to wait for a transmit (default 30s)
--log-format string Log format text or json (default "text")
--log-level string Log level debug, info, warning, error, fatal or panic (default "info")
--log-level-fieldname string Log level fieldname for json format (default "@level")
--log-msg-fieldname string Message fieldname for json format (default "@message")
--log-time-fieldname string Time fieldname for json format (default "@timestamp")
--producer-acks-0-disabled Assume fire-and-forget is never sent by the producer. Enabling this parameter will increase performance
--proxy-listener-ca-chain-cert-file string PEM encoded CA's certificate file. If provided, client certificate is required and verified
--proxy-listener-cert-file string PEM encoded file with server certificate
--proxy-listener-cipher-suites strings List of supported cipher suites
--proxy-listener-curve-preferences strings List of curve preferences
--proxy-listener-keep-alive duration Keep alive period for an active network connection. If zero, keep-alives are disabled (default 1m0s)
--proxy-listener-key-file string PEM encoded file with private key for the server certificate
--proxy-listener-key-password string Password to decrypt rsa private key
--proxy-listener-read-buffer-size int Size of the operating system's receive buffer associated with the connection. If zero, system default is used
--proxy-listener-tls-enable Whether or not to use TLS listener
--proxy-listener-tls-required-client-subject strings Required client certificate subject common name; example; s:/CN=[value]/C=[state]/C=[DE,PL] or r:/CN=[^val.{2}$]/C=[state]/C=[DE,PL]; check manual for more details
--proxy-listener-write-buffer-size int Sets the size of the operating system's transmit buffer associated with the connection. If zero, system default is used
--proxy-request-buffer-size int Request buffer size pro tcp connection (default 4096)
--proxy-response-buffer-size int Response buffer size pro tcp connection (default 4096)
--sasl-aws-profile string AWS profile
--sasl-aws-region string Region for AWS IAM Auth
--sasl-enable Connect using SASL
--sasl-jaas-config-file string Location of JAAS config file with SASL username and password
--sasl-method string SASL method to use (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI, AWS_MSK_IAM) (default "PLAIN")
--sasl-password string SASL user password
--sasl-plugin-command string Path to authentication plugin binary
--sasl-plugin-enable Use plugin for SASL authentication
--sasl-plugin-log-level string Log level of the auth plugin (default "trace")
--sasl-plugin-mechanism string SASL mechanism used for proxy authentication: PLAIN or OAUTHBEARER (default "OAUTHBEARER")
--sasl-plugin-param stringArray Authentication plugin parameter
--sasl-plugin-timeout duration Authentication timeout (default 10s)
--sasl-username string SASL user name
--tls-ca-chain-cert-file string PEM encoded CA's certificate file
--tls-client-cert-file string PEM encoded file with client certificate
--tls-client-key-file string PEM encoded file with private key for the client certificate
--tls-client-key-password string Password to decrypt rsa private key
--tls-enable Whether or not to use TLS when connecting to the broker
--tls-insecure-skip-verify It controls whether a client verifies the server's certificate chain and host name
--tls-same-client-cert-enable Use only when mutual TLS is enabled on proxy and broker. It controls whether a proxy validates if proxy client certificate exactly matches brokers client cert (tls-client-cert-file)
kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,0.0.0.0:32399"
kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --bootstrap-server-mapping "192.168.99.100:32401,127.0.0.1:32401" --bootstrap-server-mapping "192.168.99.100:32402,127.0.0.1:32402" --dynamic-listeners-disable
kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,0.0.0.0:32401,kafka-0.grepplabs.com:9092" --bootstrap-server-mapping "kafka-1.example.com:9092,0.0.0.0:32402,kafka-1.grepplabs.com:9092" --bootstrap-server-mapping "kafka-2.example.com:9092,0.0.0.0:32403,kafka-2.grepplabs.com:9092" --dynamic-listeners-disable
kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --external-server-mapping "192.168.99.100:32401,127.0.0.1:32402" --external-server-mapping "192.168.99.100:32402,127.0.0.1:32403" --forbidden-api-keys 20
export BOOTSTRAP_SERVER_MAPPING="192.168.99.100:32401,0.0.0.0:32402 192.168.99.100:32402,0.0.0.0:32403" && kafka-proxy server
kafka-proxy server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001,localhost:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002,localhost:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003,localhost:30003" --proxy-listener-cert-file "tls/ca-cert.pem" --proxy-listener-key-file "tls/ca-key.pem" --proxy-listener-tls-enable --proxy-listener-cipher-suites TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_AES_128_GCM_SHA256
SASL authentication is initiated by the proxy. SASL authentication is disabled on the clients and enabled on the Kafka brokers.
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9093,0.0.0.0:32399" --tls-enable --tls-insecure-skip-verify --sasl-enable --sasl-username myuser --sasl-password mysecret kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,0.0.0.0:30001" --bootstrap-server-mapping "kafka-1.example.com:9092,0.0.0.0:30002" --bootstrap-server-mapping "kafka-1.example.com:9093,0.0.0.0:30003" --sasl-enable --sasl-username "alice" --sasl-password "alice-secret" --sasl-method "SCRAM-SHA-512" --log-level debug make clean build plugin.unsecured-jwt-provider && build/kafka-proxy server --sasl-enable --sasl-plugin-enable --sasl-plugin-mechanism "OAUTHBEARER" --sasl-plugin-command build/unsecured-jwt-provider --sasl-plugin-param "--claim-sub=alice" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
GSSAPI / Kerberos authentication
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --sasl-enable --sasl-method "GSSAPI" --gssapi-servicename kafka --gssapi-username kafkaclient1 --gssapi-realm EXAMPLE.COM --gssapi-krb5 /etc/krb5.conf --gssapi-keytab /etc/security/keytabs/kafka.keytab
AWS MSK IAM
kafka-proxy server --bootstrap-server-mapping "b-1-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30001" --bootstrap-server-mapping "b-2-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30002" --bootstrap-server-mapping "b-3-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30003" --tls-enable --tls-insecure-skip-verify --sasl-enable --sasl-method "AWS_MSK_IAM" --sasl-aws-region "eu-central-1" --log-level debug
SASL authentication is performed by the proxy. SASL authentication is enabled on the clients and disabled on the Kafka brokers.
make clean build plugin.auth-user && build/kafka-proxy server --proxy-listener-key-file "server-key.pem" --proxy-listener-cert-file "server-cert.pem" --proxy-listener-ca-chain-cert-file "ca.pem" --proxy-listener-tls-enable --auth-local-enable --auth-local-command build/auth-user --auth-local-param "--username=my-test-user" --auth-local-param "--password=my-test-password"
make clean build plugin.auth-ldap && build/kafka-proxy server --auth-local-enable --auth-local-command build/auth-ldap --auth-local-param "--url=ldaps://ldap.example.com:636" --auth-local-param "--user-dn=cn=users,dc=exemple,dc=com" --auth-local-param "--user-attr=uid" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
make clean build plugin.unsecured-jwt-info && build/kafka-proxy server --auth-local-enable --auth-local-command build/unsecured-jwt-info --auth-local-mechanism "OAUTHBEARER" --auth-local-param "--claim-sub=alice" --auth-local-param "--claim-sub=bob" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
Verify that the client certificate used by the proxy client is exactly the same as the client certificate used for the authentication initiated by the proxy
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9093,0.0.0.0:32399" --tls-enable --tls-client-cert-file client.crt --tls-client-key-file client.pem --tls-client-key-password changeit --proxy-listener-tls-enable --proxy-listener-key-file server.pem --proxy-listener-cert-file server.crt --proxy-listener-key-password changeit --proxy-listener-ca-chain-cert-file ca.crt --tls-same-client-cert-enable
Authentication between the kafka-proxy client and the kafka-proxy server using Google-ID (service account JWT)
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --dynamic-listeners-disable --http-disable --proxy-listener-tls-enable --proxy-listener-cert-file=/var/run/secret/server.cert.pem --proxy-listener-key-file=/var/run/secret/server.key.pem --auth-gateway-server-enable --auth-gateway-server-method google-id --auth-gateway-server-magic 3285573610483682037 --auth-gateway-server-command google-id-info --auth-gateway-server-param "--timeout=10" --auth-gateway-server-param "--audience=tcp://kafka-gateway.grepplabs.com" --auth-gateway-server-param "--email-regex=^[email protected]$" kafka-proxy server --bootstrap-server-mapping "127.0.0.1:32500,127.0.0.1:32400" --bootstrap-server-mapping "127.0.0.1:32501,127.0.0.1:32401" --bootstrap-server-mapping "127.0.0.1:32502,127.0.0.1:32402" --dynamic-listeners-disable --http-disable --tls-enable --tls-ca-chain-cert-file /var/run/secret/client/ca-chain.cert.pem --auth-gateway-client-enable --auth-gateway-client-method google-id --auth-gateway-client-magic 3285573610483682037 --auth-gateway-client-command google-id-provider --auth-gateway-client-param "--credentials-file=/var/run/secret/client/service-account.json" --auth-gateway-client-param "--target-audience=tcp://kafka-gateway.grepplabs.com" --auth-gateway-client-param "--timeout=10"
Connect through a test SOCKS5 proxy server
kafka-proxy tools socks5-proxy --addr localhost:1080
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy socks5://localhost:1080
kafka-proxy tools socks5-proxy --addr localhost:1080 --username my-proxy-user --password my-proxy-password
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy socks5://my-proxy-user:my-proxy-password@localhost:1080
Connect through a test HTTP proxy server using the CONNECT method
kafka-proxy tools http-proxy --addr localhost:3128
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy http://localhost:3128
kafka-proxy tools http-proxy --addr localhost:3128 --username my-proxy-user --password my-proxy-password
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy http://my-proxy-user:my-proxy-password@localhost:3128
Sometimes it may be necessary not only to verify that the client certificate is valid, but also to pin the client certificate DN for a concrete use case. This can be achieved with the following parameters:
--proxy-listener-tls-client-cert-validate-subject bool Whether to validate client certificate subject (default false)
--proxy-listener-tls-required-client-subject-common-name string Required client certificate subject common name
--proxy-listener-tls-required-client-subject-country stringArray Required client certificate subject country
--proxy-listener-tls-required-client-subject-province stringArray Required client certificate subject province
--proxy-listener-tls-required-client-subject-locality stringArray Required client certificate subject locality
--proxy-listener-tls-required-client-subject-organization stringArray Required client certificate subject organization
--proxy-listener-tls-required-client-subject-organizational-unit stringArray Required client certificate subject organizational unit
By setting --proxy-listener-tls-client-cert-validate-subject true, kafka-proxy checks the client certificate DN fields against the values configured with the --proxy-listener-tls-required-client-* parameters. Matching is always exact, and all non-empty values are combined. For example, to allow only valid certificates with country=DE and organization=grepplabs, configure kafka-proxy as follows:
kafka-proxy server --proxy-listener-tls-client-cert-validate-subject true --proxy-listener-tls-required-client-subject-country DE --proxy-listener-tls-required-client-subject-organization grepplabs
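For illustration only, a client certificate carrying a matching subject could be created with openssl as sketched below; in a real setup it would have to be signed by the CA configured via --proxy-listener-ca-chain-cert-file rather than self-signed:
# Self-signed example certificate with C=DE and O=grepplabs in the subject DN.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout client.key -out client.crt -subj "/C=DE/O=grepplabs/CN=my-client"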
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=kafka-0:9093,127.0.0.1:32400'
            - '--bootstrap-server-mapping=kafka-1:9093,127.0.0.1:32401'
            - '--bootstrap-server-mapping=kafka-2:9093,127.0.0.1:32402'
            - '--tls-enable'
            - '--tls-ca-chain-cert-file=/var/run/secret/kafka-ca-chain-certificate/ca-chain.cert.pem'
            - '--tls-client-cert-file=/var/run/secret/kafka-client-certificate/client.cert.pem'
            - '--tls-client-key-file=/var/run/secret/kafka-client-key/client.key.pem'
            - '--tls-client-key-password=$(TLS_CLIENT_KEY_PASSWORD)'
            - '--sasl-enable'
            - '--sasl-jaas-config-file=/var/run/secret/kafka-client-jaas/jaas.config'
          env:
            - name: TLS_CLIENT_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: tls-client-key-password
                  key: password
          volumeMounts:
            - name: "sasl-jaas-config-file"
              mountPath: "/var/run/secret/kafka-client-jaas"
            - name: "tls-ca-chain-certificate"
              mountPath: "/var/run/secret/kafka-ca-chain-certificate"
            - name: "tls-client-cert-file"
              mountPath: "/var/run/secret/kafka-client-certificate"
            - name: "tls-client-key-file"
              mountPath: "/var/run/secret/kafka-client-key"
          ports:
            - name: metrics
              containerPort: 9080
          livenessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
        - name: myapp
          image: myapp:latest
          ports:
            - name: metrics
              containerPort: 8080
          env:
            - name: BOOTSTRAP_SERVERS
              value: "127.0.0.1:32400,127.0.0.1:32401,127.0.0.1:32402"
      volumes:
        - name: sasl-jaas-config-file
          secret:
            secretName: sasl-jaas-config-file
        - name: tls-ca-chain-certificate
          secret:
            secretName: tls-ca-chain-certificate
        - name: tls-client-cert-file
          secret:
            secretName: tls-client-cert-file
        - name: tls-client-key-file
          secret:
            secretName: tls-client-key-file
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-proxy
spec:
  selector:
    matchLabels:
      app: kafka-proxy
  replicas: 1
  serviceName: kafka-proxy
  template:
    metadata:
      labels:
        app: kafka-proxy
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=kafka-0:9093,127.0.0.1:32400'
            - '--bootstrap-server-mapping=kafka-1:9093,127.0.0.1:32401'
            - '--bootstrap-server-mapping=kafka-2:9093,127.0.0.1:32402'
            - '--tls-enable'
            - '--tls-ca-chain-cert-file=/var/run/secret/kafka-ca-chain-certificate/ca-chain.cert.pem'
            - '--tls-client-cert-file=/var/run/secret/kafka-client-certificate/client.cert.pem'
            - '--tls-client-key-file=/var/run/secret/kafka-client-key/client.key.pem'
            - '--tls-client-key-password=$(TLS_CLIENT_KEY_PASSWORD)'
            - '--sasl-enable'
            - '--sasl-jaas-config-file=/var/run/secret/kafka-client-jaas/jaas.config'
            - '--proxy-request-buffer-size=32768'
            - '--proxy-response-buffer-size=32768'
            - '--proxy-listener-read-buffer-size=32768'
            - '--proxy-listener-write-buffer-size=131072'
            - '--kafka-connection-read-buffer-size=131072'
            - '--kafka-connection-write-buffer-size=32768'
          env:
            - name: TLS_CLIENT_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: tls-client-key-password
                  key: password
          volumeMounts:
            - name: "sasl-jaas-config-file"
              mountPath: "/var/run/secret/kafka-client-jaas"
            - name: "tls-ca-chain-certificate"
              mountPath: "/var/run/secret/kafka-ca-chain-certificate"
            - name: "tls-client-cert-file"
              mountPath: "/var/run/secret/kafka-client-certificate"
            - name: "tls-client-key-file"
              mountPath: "/var/run/secret/kafka-client-key"
          ports:
            - name: metrics
              containerPort: 9080
            - name: kafka-0
              containerPort: 32400
            - name: kafka-1
              containerPort: 32401
            - name: kafka-2
              containerPort: 32402
          livenessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
          resources:
            requests:
              memory: 128Mi
              cpu: 1000m
      restartPolicy: Always
      volumes:
        - name: sasl-jaas-config-file
          secret:
            secretName: sasl-jaas-config-file
        - name: tls-ca-chain-certificate
          secret:
            secretName: tls-ca-chain-certificate
        - name: tls-client-cert-file
          secret:
            secretName: tls-client-cert-file
        - name: tls-client-key-file
          secret:
            secretName: tls-client-key-file
kubectl port-forward kafka-proxy-0 32400:32400 32401:32401 32402:32402
Use localhost:32400, localhost:32401 and localhost:32402 as bootstrap servers.
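For example, the standard Kafka CLI tools (assumed to be installed; not part of kafka-proxy) can list topics through the forwarded proxy listeners:
# List topics via the port-forwarded proxy listeners.
kafka-topics --bootstrap-server localhost:32400,localhost:32401,localhost:32402 --list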
kafka.properties
broker.id=0 advertised.listeners=PLAINTEXT://kafka-0.kafka-headless.kafka:9092 ...
kubectl port-forward -n kafka kafka-0 9092:9092
kafka-proxy server --bootstrap-server-mapping "127.0.0.1:9092,0.0.0.0:19092" --dial-address-mapping "kafka-0.kafka-headless.kafka:9092,0.0.0.0:9092"
Use localhost:19092 as the bootstrap server.
Strimzi 0.13.0 CRD
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: test-cluster
  namespace: kafka
spec:
  kafka:
    version: 2.3.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      num.partitions: 60
      default.replication.factor: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 20Gi
          deleteClaim: true
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
  entityOperator:
    topicOperator: {}
    userOperator: {}
kubectl port-forward -n kafka test-cluster-kafka-0 9092:9092
kubectl port-forward -n kafka test-cluster-kafka-1 9093:9092
kubectl port-forward -n kafka test-cluster-kafka-2 9094:9092
kafka-proxy server --log-level debug \
    --bootstrap-server-mapping "127.0.0.1:9092,0.0.0.0:19092" \
    --bootstrap-server-mapping "127.0.0.1:9093,0.0.0.0:19093" \
    --bootstrap-server-mapping "127.0.0.1:9094,0.0.0.0:19094" \
    --dial-address-mapping "test-cluster-kafka-0.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9092" \
    --dial-address-mapping "test-cluster-kafka-1.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9093" \
    --dial-address-mapping "test-cluster-kafka-2.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9094"
Use localhost:19092 as the bootstrap server.
Cloud SQL Proxy
Sarama