Kafka proxy is based on the idea of Cloud SQL Proxy. It allows a service to connect to Kafka brokers without having to deal with SASL/PLAIN authentication and SSL certificates.
It works by opening TCP sockets on the local machine and proxying connections to the associated Kafka brokers when the sockets are used. The host and port in the Metadata and FindCoordinator responses received from the brokers are replaced by local counterparts. For discovered brokers (not configured as bootstrap servers), local listeners are started on random ports. The dynamic local listener feature can be disabled, and an additional list of external server mappings can be provided.
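For example, assuming kcat is installed, the address rewriting can be observed by listing cluster metadata through a proxied bootstrap listener (a sketch; the broker address is illustrative). The metadata returned to the client should point at local proxy listeners rather than the real broker addresses:

kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
kcat -b 127.0.0.1:32400 -L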
The proxy can terminate TLS traffic and authenticate users with SASL/PLAIN. The credential verification method is configurable and uses the Go plugin system over RPC.
The proxies can also authenticate each other using a pluggable method that is transparent to other Kafka servers and clients. Currently, the Google ID token for service accounts is implemented, i.e. the proxy client requests and sends a service account JWT, and the proxy server receives it and validates it against Google JWKs.
Kafka API calls can be restricted to prevent some operations, e.g. topic deletion or produce requests.
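For instance, a minimal sketch that blocks topic deletion (DeleteTopics is API key 20, as noted for --forbidden-api-keys below); the broker address is illustrative:

kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --forbidden-api-keys 20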
See:
Kafka proxy with Amazon MSK
A Guide To The Kafka Protocol
Kafka protocol guide
The following table provides an overview of supported Kafka versions (the specified one and all previous Kafka versions). As not every Kafka release adds new messages/versions relevant to the Kafka proxy, newer Kafka versions may also work.
| Kafka proxy version | Kafka version |
|---|---|
| 0.2.9 | from 0.11.0 up to 2.8.0 |
| 0.3.1 | up to 3.4.0 |
| 0.3.11 | up to 3.7.0 |
| 0.3.12 | up to 3.9.0 |
Download the latest release
Linux
curl -Ls https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.12/kafka-proxy-v0.3.12-linux-amd64.tar.gz | tar xz
macOS
curl -Ls https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.12/kafka-proxy-v0.3.12-darwin-amd64.tar.gz | tar xz
Move the binary into your PATH:
sudo mv ./kafka-proxy /usr/local/bin/kafka-proxy
From source
make clean build
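Either way, you can verify the resulting binary and list the available server flags (shown in full further below):

kafka-proxy server --help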
Docker images are available on Docker Hub.
You can try it out by running a kafka-proxy container:
docker run --rm -p 30001-30003:30001-30003 grepplabs/kafka-proxy:0.3.12 server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003" --dial-address-mapping "localhost:19092,172.17.0.1:19092" --dial-address-mapping "localhost:29092,172.17.0.1:29092" --dial-address-mapping "localhost:39092,172.17.0.1:39092" --debug-enable
kafka-proxy is now reachable on localhost:30001, localhost:30002 and localhost:30003, connecting to the Kafka brokers running in Docker (network bridge gateway 172.17.0.1) advertised as localhost:19092, localhost:29092 and localhost:39092.
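Assuming kcat is installed, a quick check that the cluster is reachable through the proxy (any Kafka client pointed at the three local ports works the same way):

kcat -b localhost:30001,localhost:30002,localhost:30003 -L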
Docker images with pre-compiled plugins located in /opt/kafka-proxy/bin/ are tagged with <release>-all.
You can try them out by starting a kafka-proxy container with the auth-ldap plugin:
docker run --rm -p 30001-30003:30001-30003 grepplabs/kafka-proxy:0.3.12-all server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003" --dial-address-mapping "localhost:19092,172.17.0.1:19092" --dial-address-mapping "localhost:29092,172.17.0.1:29092" --dial-address-mapping "localhost:39092,172.17.0.1:39092" --debug-enable --auth-local-enable --auth-local-command=/opt/kafka-proxy/bin/auth-ldap --auth-local-param=--url=ldap://172.17.0.1:389 --auth-local-param=--start-tls=false --auth-local-param=--bind-dn=cn=admin,dc=example,dc=org --auth-local-param=--bind-passwd=admin --auth-local-param=--user-search-base=ou=people,dc=example,dc=org --auth-local-param=--user-filter="(&(objectClass=person)(uid=%u)(memberOf=cn=kafka-users,ou=realm-roles,dc=example,dc=org))"
Run the kafka-proxy server
Usage:
kafka-proxy server [flags]
Flags:
--auth-gateway-client-command string Path to authentication plugin binary
--auth-gateway-client-enable Enable gateway client authentication
--auth-gateway-client-log-level string Log level of the auth plugin (default "trace")
--auth-gateway-client-magic uint Magic bytes sent in the handshake
--auth-gateway-client-method string Authentication method
--auth-gateway-client-param stringArray Authentication plugin parameter
--auth-gateway-client-timeout duration Authentication timeout (default 10s)
--auth-gateway-server-command string Path to authentication plugin binary
--auth-gateway-server-enable Enable proxy server authentication
--auth-gateway-server-log-level string Log level of the auth plugin (default "trace")
--auth-gateway-server-magic uint Magic bytes sent in the handshake
--auth-gateway-server-method string Authentication method
--auth-gateway-server-param stringArray Authentication plugin parameter
--auth-gateway-server-timeout duration Authentication timeout (default 10s)
--auth-local-command string Path to authentication plugin binary
--auth-local-enable Enable local SASL/PLAIN authentication performed by listener - SASL handshake will not be passed to kafka brokers
--auth-local-log-level string Log level of the auth plugin (default "trace")
--auth-local-mechanism string SASL mechanism used for local authentication: PLAIN or OAUTHBEARER (default "PLAIN")
--auth-local-param stringArray Authentication plugin parameter
--auth-local-timeout duration Authentication timeout (default 10s)
--bootstrap-server-mapping stringArray Mapping of Kafka bootstrap server address to local address (host:port,host:port(,advhost:advport))
--debug-enable Enable Debug endpoint
--debug-listen-address string Debug listen address (default "0.0.0.0:6060")
--default-listener-ip string Default listener IP (default "0.0.0.0")
--dial-address-mapping stringArray Mapping of target broker address to new one (host:port,host:port). The mapping is performed during connection establishment
--dynamic-advertised-listener string Advertised address for dynamic listeners. If empty, default-listener-ip is used
--dynamic-listeners-disable Disable dynamic listeners.
--dynamic-sequential-min-port int If set to non-zero, makes the dynamic listener use a sequential port starting with this value rather than a random port every time.
--external-server-mapping stringArray Mapping of Kafka server address to external address (host:port,host:port). A listener for the external address is not started
--forbidden-api-keys ints Forbidden Kafka request types. The restriction should prevent some Kafka operations e.g. 20 - DeleteTopics
--forward-proxy string URL of the forward proxy. Supported schemas are socks5 and http
--gssapi-auth-type string GSSAPI auth type: KEYTAB or USER (default "KEYTAB")
--gssapi-disable-pa-fx-fast Used to configure the client to not use PA_FX_FAST.
--gssapi-keytab string krb5.keytab file location
--gssapi-krb5 string krb5.conf file path, default: /etc/krb5.conf (default "/etc/krb5.conf")
--gssapi-password string Password for auth type USER
--gssapi-realm string Realm
--gssapi-servicename string ServiceName (default "kafka")
--gssapi-spn-host-mapping stringToString Mapping of Kafka servers address to SPN hosts (default [])
--gssapi-username string Username (default "kafka")
-h, --help help for server
--http-disable Disable HTTP endpoints
--http-health-path string Path on which to health endpoint (default "/health")
--http-listen-address string Address that kafka-proxy is listening on (default "0.0.0.0:9080")
--http-metrics-path string Path on which to expose metrics (default "/metrics")
--kafka-client-id string An optional identifier to track the source of requests (default "kafka-proxy")
--kafka-connection-read-buffer-size int Size of the operating system's receive buffer associated with the connection. If zero, system default is used
--kafka-connection-write-buffer-size int Sets the size of the operating system's transmit buffer associated with the connection. If zero, system default is used
--kafka-dial-timeout duration How long to wait for the initial connection (default 15s)
--kafka-keep-alive duration Keep alive period for an active network connection. If zero, keep-alives are disabled (default 1m0s)
--kafka-max-open-requests int Maximal number of open requests pro tcp connection before sending on it blocks (default 256)
--kafka-read-timeout duration How long to wait for a response (default 30s)
--kafka-write-timeout duration How long to wait for a transmit (default 30s)
--log-format string Log format text or json (default "text")
--log-level string Log level debug, info, warning, error, fatal or panic (default "info")
--log-level-fieldname string Log level fieldname for json format (default "@level")
--log-msg-fieldname string Message fieldname for json format (default "@message")
--log-time-fieldname string Time fieldname for json format (default "@timestamp")
--producer-acks-0-disabled Assume fire-and-forget is never sent by the producer. Enabling this parameter will increase performance
--proxy-listener-ca-chain-cert-file string PEM encoded CA's certificate file. If provided, client certificate is required and verified
--proxy-listener-cert-file string PEM encoded file with server certificate
--proxy-listener-cipher-suites strings List of supported cipher suites
--proxy-listener-curve-preferences strings List of curve preferences
--proxy-listener-keep-alive duration Keep alive period for an active network connection. If zero, keep-alives are disabled (default 1m0s)
--proxy-listener-key-file string PEM encoded file with private key for the server certificate
--proxy-listener-key-password string Password to decrypt rsa private key
--proxy-listener-read-buffer-size int Size of the operating system's receive buffer associated with the connection. If zero, system default is used
--proxy-listener-tls-enable Whether or not to use TLS listener
--proxy-listener-tls-required-client-subject strings Required client certificate subject common name; example; s:/CN=[value]/C=[state]/C=[DE,PL] or r:/CN=[^val.{2}$]/C=[state]/C=[DE,PL]; check manual for more details
--proxy-listener-write-buffer-size int Sets the size of the operating system's transmit buffer associated with the connection. If zero, system default is used
--proxy-request-buffer-size int Request buffer size pro tcp connection (default 4096)
--proxy-response-buffer-size int Response buffer size pro tcp connection (default 4096)
--sasl-aws-profile string AWS profile
--sasl-aws-region string Region for AWS IAM Auth
--sasl-enable Connect using SASL
--sasl-jaas-config-file string Location of JAAS config file with SASL username and password
--sasl-method string SASL method to use (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI, AWS_MSK_IAM (default "PLAIN")
--sasl-password string SASL user password
--sasl-plugin-command string Path to authentication plugin binary
--sasl-plugin-enable Use plugin for SASL authentication
--sasl-plugin-log-level string Log level of the auth plugin (default "trace")
--sasl-plugin-mechanism string SASL mechanism used for proxy authentication: PLAIN or OAUTHBEARER (default "OAUTHBEARER")
--sasl-plugin-param stringArray Authentication plugin parameter
--sasl-plugin-timeout duration Authentication timeout (default 10s)
--sasl-username string SASL user name
--tls-ca-chain-cert-file string PEM encoded CA's certificate file
--tls-client-cert-file string PEM encoded file with client certificate
--tls-client-key-file string PEM encoded file with private key for the client certificate
--tls-client-key-password string Password to decrypt rsa private key
--tls-enable Whether or not to use TLS when connecting to the broker
--tls-insecure-skip-verify It controls whether a client verifies the server's certificate chain and host name
--tls-same-client-cert-enable Use only when mutual TLS is enabled on proxy and broker. It controls whether a proxy validates if proxy client certificate exactly matches brokers client cert (tls-client-cert-file)

kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,0.0.0.0:32399"
kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --bootstrap-server-mapping "192.168.99.100:32401,127.0.0.1:32401" --bootstrap-server-mapping "192.168.99.100:32402,127.0.0.1:32402" --dynamic-listeners-disable
kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,0.0.0.0:32401,kafka-0.grepplabs.com:9092" --bootstrap-server-mapping "kafka-1.example.com:9092,0.0.0.0:32402,kafka-1.grepplabs.com:9092" --bootstrap-server-mapping "kafka-2.example.com:9092,0.0.0.0:32403,kafka-2.grepplabs.com:9092" --dynamic-listeners-disable
kafka-proxy server --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400" --external-server-mapping "192.168.99.100:32401,127.0.0.1:32402" --external-server-mapping "192.168.99.100:32402,127.0.0.1:32403" --forbidden-api-keys 20
export BOOTSTRAP_SERVER_MAPPING="192.168.99.100:32401,0.0.0.0:32402 192.168.99.100:32402,0.0.0.0:32403" && kafka-proxy server
kafka-proxy server --bootstrap-server-mapping "localhost:19092,0.0.0.0:30001,localhost:30001" --bootstrap-server-mapping "localhost:29092,0.0.0.0:30002,localhost:30002" --bootstrap-server-mapping "localhost:39092,0.0.0.0:30003,localhost:30003" --proxy-listener-cert-file "tls/ca-cert.pem" --proxy-listener-key-file "tls/ca-key.pem" --proxy-listener-tls-enable --proxy-listener-cipher-suites TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_AES_128_GCM_SHA256
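With the TLS listener enabled as above, clients have to connect over TLS and trust the listener certificate. A minimal sketch with kcat, assuming librdkafka's standard security.protocol and ssl.ca.location settings and the same tls/ca-cert.pem used above:

kcat -b localhost:30001 -X security.protocol=ssl -X ssl.ca.location=tls/ca-cert.pem -L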
SASL authentication is initiated by the proxy: SASL authentication is disabled on the clients and enabled on the Kafka brokers.
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9093,0.0.0.0:32399" --tls-enable --tls-insecure-skip-verify --sasl-enable --sasl-username myuser --sasl-password mysecret
kafka-proxy server --bootstrap-server-mapping "kafka-0.example.com:9092,0.0.0.0:30001" --bootstrap-server-mapping "kafka-1.example.com:9092,0.0.0.0:30002" --bootstrap-server-mapping "kafka-1.example.com:9093,0.0.0.0:30003" --sasl-enable --sasl-username "alice" --sasl-password "alice-secret" --sasl-method "SCRAM-SHA-512" --log-level debug
make clean build plugin.unsecured-jwt-provider && build/kafka-proxy server --sasl-enable --sasl-plugin-enable --sasl-plugin-mechanism "OAUTHBEARER" --sasl-plugin-command build/unsecured-jwt-provider --sasl-plugin-param "--claim-sub=alice" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
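Because the proxy performs the SASL handshake towards the brokers in this mode, a client connects to the local listener without any SASL configuration; for example, producing through the first mapping above with kcat (a sketch; the topic name is illustrative):

echo 'hello' | kcat -b 127.0.0.1:32399 -t test -P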
GSSAPI / Kerberos authentication
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --sasl-enable --sasl-method "GSSAPI" --gssapi-servicename kafka --gssapi-username kafkaclient1 --gssapi-realm EXAMPLE.COM --gssapi-krb5 /etc/krb5.conf --gssapi-keytab /etc/security/keytabs/kafka.keytab
AWS MSK IAM
kafka-proxy server --bootstrap-server-mapping "b-1-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30001" --bootstrap-server-mapping "b-2-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30002" --bootstrap-server-mapping "b-3-public.kafkaproxycluster.uls9ao.c4.kafka.eu-central-1.amazonaws.com:9198,0.0.0.0:30003" --tls-enable --tls-insecure-skip-verify --sasl-enable --sasl-method "AWS_MSK_IAM" --sasl-aws-region "eu-central-1" --log-level debug
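A note on credentials (an assumption, not spelled out above): AWS_MSK_IAM typically resolves credentials through the standard AWS SDK chain, so they can be supplied via the usual environment variables or a named profile together with --sasl-aws-profile; the values below are placeholders:

export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=eu-central-1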
SASL authentication is performed by the proxy: SASL authentication is enabled on the clients and disabled on the Kafka brokers.
make clean build plugin.auth-user && build/kafka-proxy server --proxy-listener-key-file "server-key.pem" --proxy-listener-cert-file "server-cert.pem" --proxy-listener-ca-chain-cert-file "ca.pem" --proxy-listener-tls-enable --auth-local-enable --auth-local-command build/auth-user --auth-local-param "--username=my-test-user" --auth-local-param "--password=my-test-password"
make clean build plugin.auth-ldap && build/kafka-proxy server --auth-local-enable --auth-local-command build/auth-ldap --auth-local-param "--url=ldaps://ldap.example.com:636" --auth-local-param "--user-dn=cn=users,dc=exemple,dc=com" --auth-local-param "--user-attr=uid" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
make clean build plugin.unsecured-jwt-info && build/kafka-proxy server --auth-local-enable --auth-local-command build/unsecured-jwt-info --auth-local-mechanism "OAUTHBEARER" --auth-local-param "--claim-sub=alice" --auth-local-param "--claim-sub=bob" --bootstrap-server-mapping "192.168.99.100:32400,127.0.0.1:32400"
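In this mode the client itself authenticates with SASL/PLAIN against the proxy listener; a minimal sketch with kcat against the auth-ldap example above (the credentials are placeholders for an LDAP user matching the configured filter):

kcat -b 127.0.0.1:32400 -X security.protocol=sasl_plaintext -X sasl.mechanism=PLAIN -X sasl.username=my-ldap-user -X sasl.password=my-ldap-password -L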
프록시 클라이언트가 사용하는 클라이언트 인증서를 확인하십시오.
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9093,0.0.0.0:32399" --tls-enable --tls-client-cert-file client.crt --tls-client-key-file client.pem --tls-client-key-password changeit --proxy-listener-tls-enable --proxy-listener-key-file server.pem --proxy-listener-cert-file server.crt --proxy-listener-key-password changeit --proxy-listener-ca-chain-cert-file ca.crt --tls-same-client-cert-enable
Authentication between the kafka-proxy client and the kafka-proxy server with Google-ID (service account JWT)
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --dynamic-listeners-disable --http-disable --proxy-listener-tls-enable --proxy-listener-cert-file=/var/run/secret/server.cert.pem --proxy-listener-key-file=/var/run/secret/server.key.pem --auth-gateway-server-enable --auth-gateway-server-method google-id --auth-gateway-server-magic 3285573610483682037 --auth-gateway-server-command google-id-info --auth-gateway-server-param "--timeout=10" --auth-gateway-server-param "--audience=tcp://kafka-gateway.grepplabs.com" --auth-gateway-server-param "--email-regex=^[email protected]$"
kafka-proxy server --bootstrap-server-mapping "127.0.0.1:32500,127.0.0.1:32400" --bootstrap-server-mapping "127.0.0.1:32501,127.0.0.1:32401" --bootstrap-server-mapping "127.0.0.1:32502,127.0.0.1:32402" --dynamic-listeners-disable --http-disable --tls-enable --tls-ca-chain-cert-file /var/run/secret/client/ca-chain.cert.pem --auth-gateway-client-enable --auth-gateway-client-method google-id --auth-gateway-client-magic 3285573610483682037 --auth-gateway-client-command google-id-provider --auth-gateway-client-param "--credentials-file=/var/run/secret/client/service-account.json" --auth-gateway-client-param "--target-audience=tcp://kafka-gateway.grepplabs.com" --auth-gateway-client-param "--timeout=10"
Connect through a test SOCKS5 proxy server
kafka-proxy tools socks5-proxy --addr localhost:1080
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy socks5://localhost:1080
kafka-proxy tools socks5-proxy --addr localhost:1080 --username my-proxy-user --password my-proxy-password
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy socks5://my-proxy-user:my-proxy-password@localhost:1080
Connect through a test HTTP proxy server using the CONNECT method
kafka-proxy tools http-proxy --addr localhost:3128
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy http://localhost:3128
kafka-proxy tools http-proxy --addr localhost:3128 --username my-proxy-user --password my-proxy-password
kafka-proxy server --bootstrap-server-mapping "kafka-0.grepplabs.com:9092,127.0.0.1:32500" --bootstrap-server-mapping "kafka-1.grepplabs.com:9092,127.0.0.1:32501" --bootstrap-server-mapping "kafka-2.grepplabs.com:9092,127.0.0.1:32502" --forward-proxy http://my-proxy-user:my-proxy-password@localhost:3128
Sometimes it might be necessary not only to check whether the client certificate is valid, but also to verify that the client certificate DN was issued for a concrete use case. This can be achieved with the following set of arguments:
--proxy-listener-tls-client-cert-validate-subject bool Whether to validate client certificate subject (default false)
--proxy-listener-tls-required-client-subject-common-name string Required client certificate subject common name
--proxy-listener-tls-required-client-subject-country stringArray Required client certificate subject country
--proxy-listener-tls-required-client-subject-province stringArray Required client certificate subject province
--proxy-listener-tls-required-client-subject-locality stringArray Required client certificate subject locality
--proxy-listener-tls-required-client-subject-organization stringArray Required client certificate subject organization
--proxy-listener-tls-required-client-subject-organizational-unit stringArray Required client certificate subject organizational unit
With --proxy-listener-tls-client-cert-validate-subject true, Kafka Proxy checks the client certificate DN fields against the expected values configured with the --proxy-listener-tls-required-client-* arguments. Matching is always exact, and all non-empty values are combined together. For example, to accept only certificates valid for country=DE and organization=grepplabs, configure Kafka Proxy in the following way:
kafka-proxy server --proxy-listener-tls-client-cert-validate-subject true --proxy-listener-tls-required-client-subject-country DE --proxy-listener-tls-required-client-subject-organization grepplabs
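A client certificate satisfying that check would then carry a matching subject; for example, a CSR generated with OpenSSL (a sketch only; the key file and CN are placeholders, and the certificate still has to be signed by the CA configured with --proxy-listener-ca-chain-cert-file):

openssl req -new -key client.key -subj "/C=DE/O=grepplabs/CN=my-client" -out client.csr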
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=kafka-0:9093,127.0.0.1:32400'
            - '--bootstrap-server-mapping=kafka-1:9093,127.0.0.1:32401'
            - '--bootstrap-server-mapping=kafka-2:9093,127.0.0.1:32402'
            - '--tls-enable'
            - '--tls-ca-chain-cert-file=/var/run/secret/kafka-ca-chain-certificate/ca-chain.cert.pem'
            - '--tls-client-cert-file=/var/run/secret/kafka-client-certificate/client.cert.pem'
            - '--tls-client-key-file=/var/run/secret/kafka-client-key/client.key.pem'
            - '--tls-client-key-password=$(TLS_CLIENT_KEY_PASSWORD)'
            - '--sasl-enable'
            - '--sasl-jaas-config-file=/var/run/secret/kafka-client-jaas/jaas.config'
          env:
            - name: TLS_CLIENT_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: tls-client-key-password
                  key: password
          volumeMounts:
            - name: "sasl-jaas-config-file"
              mountPath: "/var/run/secret/kafka-client-jaas"
            - name: "tls-ca-chain-certificate"
              mountPath: "/var/run/secret/kafka-ca-chain-certificate"
            - name: "tls-client-cert-file"
              mountPath: "/var/run/secret/kafka-client-certificate"
            - name: "tls-client-key-file"
              mountPath: "/var/run/secret/kafka-client-key"
          ports:
            - name: metrics
              containerPort: 9080
          livenessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 5
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 8080
              name: metrics
          env:
            - name: BOOTSTRAP_SERVERS
              value: "127.0.0.1:32400,127.0.0.1:32401,127.0.0.1:32402"
      volumes:
        - name: sasl-jaas-config-file
          secret:
            secretName: sasl-jaas-config-file
        - name: tls-ca-chain-certificate
          secret:
            secretName: tls-ca-chain-certificate
        - name: tls-client-cert-file
          secret:
            secretName: tls-client-cert-file
        - name: tls-client-key-file
          secret:
            secretName: tls-client-key-file
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-proxy
spec:
  selector:
    matchLabels:
      app: kafka-proxy
  replicas: 1
  serviceName: kafka-proxy
  template:
    metadata:
      labels:
        app: kafka-proxy
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=kafka-0:9093,127.0.0.1:32400'
            - '--bootstrap-server-mapping=kafka-1:9093,127.0.0.1:32401'
            - '--bootstrap-server-mapping=kafka-2:9093,127.0.0.1:32402'
            - '--tls-enable'
            - '--tls-ca-chain-cert-file=/var/run/secret/kafka-ca-chain-certificate/ca-chain.cert.pem'
            - '--tls-client-cert-file=/var/run/secret/kafka-client-certificate/client.cert.pem'
            - '--tls-client-key-file=/var/run/secret/kafka-client-key/client.key.pem'
            - '--tls-client-key-password=$(TLS_CLIENT_KEY_PASSWORD)'
            - '--sasl-enable'
            - '--sasl-jaas-config-file=/var/run/secret/kafka-client-jaas/jaas.config'
            - '--proxy-request-buffer-size=32768'
            - '--proxy-response-buffer-size=32768'
            - '--proxy-listener-read-buffer-size=32768'
            - '--proxy-listener-write-buffer-size=131072'
            - '--kafka-connection-read-buffer-size=131072'
            - '--kafka-connection-write-buffer-size=32768'
          env:
            - name: TLS_CLIENT_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: tls-client-key-password
                  key: password
          volumeMounts:
            - name: "sasl-jaas-config-file"
              mountPath: "/var/run/secret/kafka-client-jaas"
            - name: "tls-ca-chain-certificate"
              mountPath: "/var/run/secret/kafka-ca-chain-certificate"
            - name: "tls-client-cert-file"
              mountPath: "/var/run/secret/kafka-client-certificate"
            - name: "tls-client-key-file"
              mountPath: "/var/run/secret/kafka-client-key"
          ports:
            - name: metrics
              containerPort: 9080
            - name: kafka-0
              containerPort: 32400
            - name: kafka-1
              containerPort: 32401
            - name: kafka-2
              containerPort: 32402
          livenessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
          resources:
            requests:
              memory: 128Mi
              cpu: 1000m
      restartPolicy: Always
      volumes:
        - name: sasl-jaas-config-file
          secret:
            secretName: sasl-jaas-config-file
        - name: tls-ca-chain-certificate
          secret:
            secretName: tls-ca-chain-certificate
        - name: tls-client-cert-file
          secret:
            secretName: tls-client-cert-file
        - name: tls-client-key-file
          secret:
            secretName: tls-client-key-file

kubectl port-forward kafka-proxy-0 32400:32400 32401:32401 32402:32402
Use localhost:32400, localhost:32401 and localhost:32402 as the bootstrap servers.
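For example, with the port-forward active, a local client can bootstrap through the proxy pod (a sketch using kcat):

kcat -b localhost:32400,localhost:32401,localhost:32402 -L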
kafka.properties
broker.id=0 advertised.listeners=PLAINTEXT://kafka-0.kafka-headless.kafka:9092 ...
kubectl port-forward -n kafka kafka-0 9092:9092
kafka-proxy server --bootstrap-server-mapping "127.0.0.1:9092,0.0.0.0:19092" --dial-address-mapping "kafka-0.kafka-headless.kafka:9092,0.0.0.0:9092"
Use localhost:19092 as the bootstrap server.
Strimzi 0.13.0 CRD
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: test-cluster
  namespace: kafka
spec:
  kafka:
    version: 2.3.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      num.partitions: 60
      default.replication.factor: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 20Gi
          deleteClaim: true
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
  entityOperator:
    topicOperator: {}
    userOperator: {}

kubectl port-forward -n kafka test-cluster-kafka-0 9092:9092
kubectl port-forward -n kafka test-cluster-kafka-1 9093:9092
kubectl port-forward -n kafka test-cluster-kafka-2 9094:9092
kafka-proxy server --log-level debug --bootstrap-server-mapping "127.0.0.1:9092,0.0.0.0:19092" --bootstrap-server-mapping "127.0.0.1:9093,0.0.0.0:19093" --bootstrap-server-mapping "127.0.0.1:9094,0.0.0.0:19094" --dial-address-mapping "test-cluster-kafka-0.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9092" --dial-address-mapping "test-cluster-kafka-1.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9093" --dial-address-mapping "test-cluster-kafka-2.test-cluster-kafka-brokers.kafka.svc.cluster.local:9092,0.0.0.0:9094"
Use localhost:19092 as the bootstrap server.
Cloud SQL Proxy
Sarama