From jakub at scholz.cz Wed Jan 2 08:27:24 2019
From: jakub at scholz.cz (Jakub Scholz)
Date: Wed, 2 Jan 2019 09:27:24 +0100
Subject: [Strimzi] Kafka manager
In-Reply-To: References: Message-ID:

Hi Daniel,

Zookeeper is currently not really accessible to third-party applications - it is secured using TLS and TLS client authentication and is available only to the Strimzi components. So running any application which requires access to ZK might be complicated. You would need to hack the TLS sidecar from one of the Kafka pods or from the Entity Operator and use it with Kafka Manager to give it access to Zookeeper.

In general, all other third-party tools (including many of the UIs) which do not need access to Zookeeper should work fine. Tools which need access to ZK will face the same issue.

We have it in our backlog to integrate additional tools with Strimzi. A UI is certainly part of it. But there are no details yet, such as which UI (existing or our own) and when.

Thanks & Regards
Jakub

On Mon, Dec 31, 2018 at 3:12 PM Daniel Beilin wrote:
> Hello,
>
> I'm trying to get Kafka Manager to work with Strimzi/AMQ Streams in
> OpenShift, but I'm having some difficulties.
> I have put the cluster certificate inside the Kafka Manager pod and
> enabled TLS; furthermore, I've created a Kafka user for Kafka Manager,
> but it still doesn't seem to work when I try to connect to Zookeeper with
> the client service and port 2181.
>
> Is it even possible to access Zookeeper from a pod outside of the
> cluster?
> In addition, is there a UI in the works for the Strimzi or AMQ Streams
> project?
>
> Thank you in advance,
> Daniel
> _______________________________________________
> Strimzi mailing list
> Strimzi at redhat.com
> https://www.redhat.com/mailman/listinfo/strimzi
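[Editor's note] Jakub's point above can be illustrated with a minimal, hedged sketch. The cluster name "my-cluster", the pod/service names, and the assumption that a plaintext ZooKeeper listener is only reachable from inside the Strimzi pods follow common Strimzi 0.9-era conventions and are not confirmed by the thread; adjust to your deployment.

```shell
# Sketch: why third-party tools cannot reach ZooKeeper directly. The client
# service is fronted by a TLS sidecar that requires client authentication, so
# a plain client fails; only from inside a Strimzi pod can you reach the
# plaintext listener. Names and ports here are assumptions, not verified.
if command -v kubectl >/dev/null 2>&1; then
  # Plain (non-TLS) access via the client service fails with a TLS error:
  kubectl exec my-cluster-kafka-0 -c kafka -- \
    bin/zookeeper-shell.sh my-cluster-zookeeper-client:2181 ls / || true
  # Inside a ZooKeeper pod, the local plaintext listener is reachable:
  kubectl exec my-cluster-zookeeper-0 -c zookeeper -- \
    bin/zookeeper-shell.sh localhost:2181 ls / || true
else
  echo "no kubectl here - the commands above are illustrative only"
fi
```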
From M.Schwarz at prosoz.de Mon Jan 7 13:40:12 2019
From: M.Schwarz at prosoz.de (Schwarz, Markus)
Date: Mon, 7 Jan 2019 13:40:12 +0000
Subject: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection
Message-ID:

Hi,

We are currently running Strimzi 0.4.0 (I know, it's old) on our OpenShift Origin 3.9 cluster and everything is working fine.

I am now trying to update to 0.9.0 to catch up and implement some security. So I took all the YAML files from the cluster-operator install folder, made the necessary namespace amendments and gave it a try. The strimzi-cluster-operator pod tries to start but then dies with the following error message:

---
2019-01-07 12:49:29 WARN WatchConnectionManager:185 - Exec Failure: HTTP 404, Status: 404 - 404 page not found

java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) [cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) [cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) [cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [cluster-operator-0.9.0.jar:0.9.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-01-07 12:49:29 INFO WatchConnectionManager:379 - Current reconnect backoff is 1000 milliseconds (T0)
2019-01-07 12:49:29 ERROR Main:141 - Cluster Operator verticle in namespace msw failed to start
io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) ~[cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) ~[cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) ~[cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) ~[cluster-operator-0.9.0.jar:0.9.0]
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[cluster-operator-0.9.0.jar:0.9.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
---

It seems to try to open a websocket connection on a URL which does not exist. I don't know if this is connected to the outdated version of Kubernetes (1.9.1) or if this might be a configuration error of some sort; any hint would be appreciated. I could not find any Kubernetes/OpenShift version requirements for Strimzi.

Thanks!
Markus

From jakub at scholz.cz Mon Jan 7 14:03:56 2019
From: jakub at scholz.cz (Jakub Scholz)
Date: Mon, 7 Jan 2019 15:03:56 +0100
Subject: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection
In-Reply-To: References: Message-ID:

Hi Markus,

0.9.0 should work fine with Kubernetes 1.9 / OpenShift 3.9. Could you share the complete log from the CO? The Kubernetes client normally takes the address of the Kubernetes API from the Kubernetes environment variables and connects there. Maybe in your case there is something strange / wrong with your cluster configuration.
Thanks & Regards Jakub On Mon, Jan 7, 2019 at 2:40 PM Schwarz, Markus wrote: > Hi, > > > > We are currently running strimzi 0.4.0 ( I know, it?s old) on our > Openshift Origin 3.9 cluster and everything is working about fine. > > > > I know try to update to 0.9.0 to catch up to things and implement some > security. So I took all the yaml-files from the cluster-operator install > folder, made the necessary namespace amendments and gave it a try. The > strimzi-cluster-operator pod tries to start but then dies with the > following error message: > > > > --- > > 2019-01-07 12:49:29 WARN WatchConnectionManager:185 - Exec Failure: HTTP > 404, Status: 404 - 404 page not found > > > > java.net.ProtocolException: Expected HTTP 101 response but was '404 Not > Found' > > at > okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) > [cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) > [cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) > [cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) > [cluster-operator-0.9.0.jar:0.9.0] > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [?:1.8.0_191] > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [?:1.8.0_191] > > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > > 2019-01-07 12:49:29 INFO WatchConnectionManager:379 - Current reconnect > backoff is 1000 milliseconds (T0) > > 2019-01-07 12:49:29 ERROR Main:141 - Cluster Operator verticle in > namespace msw failed to start > > io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found > > > > at > io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > 
okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > > --- > > > > It seems to try to open a websocket connection on a url which does not > exists. I don?t know if this is connected to the outdated version of > kubernetes (1.9.1) or if this might be a configuration error of some sort, > any hint will be appreciated. I could not find any kubernetes/openshift > version requirements for strimzi. > > > > Thanks! > > Markus > > > prosoz-herten-footer > _______________________________________________ > Strimzi mailing list > Strimzi at redhat.com > https://www.redhat.com/mailman/listinfo/strimzi > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image500a05.JPG Type: image/jpeg Size: 71629 bytes Desc: not available URL: From M.Schwarz at prosoz.de Mon Jan 7 14:15:21 2019 From: M.Schwarz at prosoz.de (Schwarz, Markus) Date: Mon, 7 Jan 2019 14:15:21 +0000 Subject: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection In-Reply-To: References: Message-ID: <063a0c2994e74b22a8eac2f131107141@prosoz.de> Hi Jakub, here is the complete log from the co: + JAR=/cluster-operator-0.9.0.jar + shift + . 
/bin/dynamic_resources.sh ++ get_heap_size +++ cat /sys/fs/cgroup/memory/memory.limit_in_bytes ++ CONTAINER_MEMORY_IN_BYTES=268435456 ++ DEFAULT_MEMORY_CEILING=1152921504606846975 ++ '[' 268435456 -lt 1152921504606846975 ']' ++ '[' -z ']' ++ CONTAINER_HEAP_PERCENT=0.50 ++ CONTAINER_MEMORY_IN_MB=256 +++ echo '256 0.50' +++ awk '{ printf "%d", $1 * $2 }' ++ CONTAINER_HEAP_MAX=128 ++ echo 128 + MAX_HEAP=128 + '[' -n 128 ']' + JAVA_OPTS='-Xms128m -Xmx128m ' + export MALLOC_ARENA_MAX=2 + MALLOC_ARENA_MAX=2 + JAVA_OPTS='-Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom' + JAVA_OPTS='-Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps' + exec java -Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar /cluster-operator-0.9.0.jar -Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 2019-01-07 14:02:15 INFO Main:70 - ClusterOperator 0.9.0 is starting 2019-01-07T14:02:16.434+0000: [GC (Allocation Failure) 2019-01-07T14:02:16.434+0000: [DefNew: 34944K->4352K(39296K), 0.0154766 secs] 34944K->8729K(126720K), 0.0155884 secs] [Times: user=0.01 sys=0.01, real=0.01 secs] 2019-01-07T14:02:17.040+0000: [Full GC (Metadata GC Threshold) 2019-01-07T14:02:17.040+0000: [Tenured: 4377K->7631K(87424K), 0.0301695 secs] 24907K->7631K(126720K), [Metaspace: 20706K->20706K(1069056K)], 0.0303281 secs] [Times: user=0.03 sys=0.00, real=0.03 secs] 2019-01-07 14:02:17 INFO Main:262 - Using config: PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_53_UDP_PROTO: udp STRIMZI_DEFAULT_KAFKA_MIRRORMAKER_IMAGE: strimzi/kafka-mirror-maker:0.9.0 PROMETHEUS_PORT_9090_TCP: 
tcp://172.30.38.237:9090 STRIMZI_FULL_RECONCILIATION_INTERVAL_MS: 120000 STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE: strimzi/kafka-connect:0.9.0 KUBERNETES_PORT_443_TCP_PROTO: tcp KUBERNETES_PORT_53_TCP_PORT: 53 PROMETHEUS_PORT_9090_TCP_ADDR: 172.30.38.237 KAFKA_METRICS_PORT_9404_TCP_PROTO: tcp STRIMZI_VERSION: 0.9.0 PROMETHEUS_PORT_9090_TCP_PROTO: tcp KUBERNETES_PORT_53_TCP_PROTO: tcp KUBERNETES_PORT_53_TCP_ADDR: 172.30.0.1 HOSTNAME: strimzi-cluster-operator-5d5445c55d-9lb8v HOME: / MALLOC_ARENA_MAX: 2 STRIMZI_DEFAULT_ZOOKEEPER_IMAGE: strimzi/zookeeper:0.9.0 STRIMZI_NAMESPACE: msw KUBERNETES_SERVICE_PORT_HTTPS: 443 SHLVL: 1 JAVA_HOME: /usr/lib/jvm/java KAFKA_METRICS_PORT_9404_TCP_ADDR: 172.30.193.60 STRIMZI_DEFAULT_KAFKA_INIT_IMAGE: strimzi/kafka-init:0.9.0 KAFKA_METRICS_SERVICE_HOST: 172.30.193.60 KUBERNETES_PORT_443_TCP: tcp://172.30.0.1:443 PROMETHEUS_PORT_9090_TCP_PORT: 9090 STRIMZI_OPERATION_TIMEOUT_MS: 300000 PROMETHEUS_SERVICE_PORT: 9090 KAFKA_METRICS_SERVICE_PORT_METRICS: 9404 STRIMZI_DEFAULT_KAFKA_IMAGE: strimzi/kafka:0.9.0 STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE: strimzi/kafka-stunnel:0.9.0 PROMETHEUS_PORT: tcp://172.30.38.237:9090 STRIMZI_LOG_LEVEL: INFO KUBERNETES_PORT: tcp://172.30.0.1:443 PROMETHEUS_SERVICE_HOST: 172.30.38.237 STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE: strimzi/kafka-connect-s2i:0.9.0 KAFKA_METRICS_PORT: tcp://172.30.193.60:9404 KUBERNETES_PORT_53_TCP: tcp://172.30.0.1:53 KUBERNETES_PORT_53_UDP: udp://172.30.0.1:53 KUBERNETES_SERVICE_PORT: 443 KAFKA_METRICS_SERVICE_PORT: 9404 KUBERNETES_PORT_53_UDP_ADDR: 172.30.0.1 STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE: strimzi/topic-operator:0.9.0 PWD: / PROMETHEUS_SERVICE_PORT_PROMETHEUS: 9090 KUBERNETES_PORT_443_TCP_ADDR: 172.30.0.1 STRIMZI_DEFAULT_USER_OPERATOR_IMAGE: strimzi/user-operator:0.9.0 KUBERNETES_SERVICE_PORT_DNS_TCP: 53 STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE: strimzi/entity-operator-stunnel:0.9.0 KUBERNETES_PORT_53_UDP_PORT: 53 KAFKA_METRICS_PORT_9404_TCP: tcp://172.30.193.60:9404 
KUBERNETES_SERVICE_HOST: 172.30.0.1 KUBERNETES_SERVICE_PORT_DNS: 53 KUBERNETES_PORT_443_TCP_PORT: 443 STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE: strimzi/zookeeper-stunnel:0.9.0 2019-01-07 14:02:17 INFO ClusterOperator:58 - Creating ClusterOperator for namespace msw 2019-01-07 14:02:17 INFO ClusterOperator:86 - Starting ClusterOperator for namespace msw 2019-01-07 14:02:17 INFO ClusterOperator:93 - Started operator for Kafka kind 2019-01-07 14:02:17 WARN WatchConnectionManager:185 - Exec Failure: HTTP 404, Status: 404 - 404 page not found java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found' at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] 2019-01-07 14:02:17 INFO WatchConnectionManager:379 - Current reconnect backoff is 1000 milliseconds (T0) 2019-01-07 14:02:17 ERROR Main:141 - Cluster Operator verticle in namespace msw failed to start io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) 
~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] Heap def new generation total 39296K, used 16240K [0x00000000f8000000, 0x00000000faaa0000, 0x00000000faaa0000) eden space 34944K, 46% used [0x00000000f8000000, 0x00000000f8fdc0d8, 0x00000000fa220000) from space 4352K, 0% used [0x00000000fa660000, 0x00000000fa660000, 0x00000000faaa0000) to space 4352K, 0% used [0x00000000fa220000, 0x00000000fa220000, 0x00000000fa660000) tenured generation total 87424K, used 7631K [0x00000000faaa0000, 0x0000000100000000, 0x0000000100000000) the space 87424K, 8% used [0x00000000faaa0000, 0x00000000fb213ed8, 0x00000000fb214000, 0x0000000100000000) Metaspace used 23303K, capacity 23692K, committed 24064K, reserved 1071104K class space used 2656K, capacity 2770K, committed 2816K, reserved 1048576K Thanks & Regards Markus Von: Jakub Scholz [mailto:jakub at scholz.cz] Gesendet: Montag, 7. Januar 2019 15:04 An: Schwarz, Markus Cc: strimzi at redhat.com Betreff: Re: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection Hi Markus, 0.9.0 should work fine with with Kubernetes 1.9 / OpenShift 3.9. Could you share the complete log from the CO? The Kubernetes client normally takes the address of the Kubernetes APi from the Kubernetes environment variables and connects there. Maybe in your case there is something strange / wrong with your cluster configuration. Thanks & Regards Jakub On Mon, Jan 7, 2019 at 2:40 PM Schwarz, Markus > wrote: Hi, We are currently running strimzi 0.4.0 ( I know, it?s old) on our Openshift Origin 3.9 cluster and everything is working about fine. 
I know try to update to 0.9.0 to catch up to things and implement some security. So I took all the yaml-files from the cluster-operator install folder, made the necessary namespace amendments and gave it a try. The strimzi-cluster-operator pod tries to start but then dies with the following error message: --- 2019-01-07 12:49:29 WARN WatchConnectionManager:185 - Exec Failure: HTTP 404, Status: 404 - 404 page not found java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found' at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] 2019-01-07 12:49:29 INFO WatchConnectionManager:379 - Current reconnect backoff is 1000 milliseconds (T0) 2019-01-07 12:49:29 ERROR Main:141 - Cluster Operator verticle in namespace msw failed to start io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[cluster-operator-0.9.0.jar:0.9.0] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
---

It seems to try to open a websocket connection on a URL which does not exist. I don't know if this is connected to the outdated version of Kubernetes (1.9.1) or if this might be a configuration error of some sort; any hint would be appreciated. I could not find any Kubernetes/OpenShift version requirements for Strimzi.

Thanks!
Markus

_______________________________________________
Strimzi mailing list
Strimzi at redhat.com
https://www.redhat.com/mailman/listinfo/strimzi

From jakub at scholz.cz Mon Jan 7 14:38:44 2019
From: jakub at scholz.cz (Jakub Scholz)
Date: Mon, 7 Jan 2019 15:38:44 +0100
Subject: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection
In-Reply-To: <063a0c2994e74b22a8eac2f131107141@prosoz.de>
References: <063a0c2994e74b22a8eac2f131107141@prosoz.de>
Message-ID:

Thanks. I did some tests and it seems I get the same error when the Custom Resource Definitions are not properly installed.
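[Editor's note] Jakub's diagnosis suggests a quick sanity check: compare the CRDs registered on the cluster against the six that the 0.9.0 install files define (the names are taken from the `kubectl get crd` listing later in this thread). A minimal sketch in pure shell - the live `kubectl get crd` output is replaced by a stand-in variable so the snippet runs anywhere:

```shell
# CRDs the 0.9.0 Cluster Operator expects (all six must exist, even if you do
# not plan to deploy Kafka Connect or MirrorMaker).
REQUIRED="kafkas.kafka.strimzi.io
kafkatopics.kafka.strimzi.io
kafkausers.kafka.strimzi.io
kafkaconnects.kafka.strimzi.io
kafkaconnects2is.kafka.strimzi.io
kafkamirrormakers.kafka.strimzi.io"

# On a real cluster you would use:
#   INSTALLED=$(kubectl get crd -o custom-columns=:metadata.name --no-headers)
# Stand-in reproducing the situation in this thread (Connect/S2I/MirrorMaker
# CRDs omitted from the install):
INSTALLED="kafkas.kafka.strimzi.io
kafkatopics.kafka.strimzi.io
kafkausers.kafka.strimzi.io"

missing=""
for crd in $REQUIRED; do
  printf '%s\n' "$INSTALLED" | grep -qx "$crd" || missing="$missing $crd"
done
echo "missing CRDs:$missing"
```

With the stand-in above, the check reports the three CRDs for Kafka Connect, Connect S2I and MirrorMaker as missing, matching the failure mode discussed here.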
Can you do "kubectl get crd" and make sure you get something like this:

$ kubectl get crd
NAME                                 AGE
kafkaconnects.kafka.strimzi.io       5m39s
kafkaconnects2is.kafka.strimzi.io    5m39s
kafkamirrormakers.kafka.strimzi.io   5m39s
kafkas.kafka.strimzi.io              5m39s
kafkatopics.kafka.strimzi.io         5m39s
kafkausers.kafka.strimzi.io          5m39s

Thanks & Regards
Jakub

On Mon, Jan 7, 2019 at 3:15 PM Schwarz, Markus wrote: > Hi Jakub, > > here is the complete log from the co: > > + JAR=/cluster-operator-0.9.0.jar > + shift > + . /bin/dynamic_resources.sh > ++ get_heap_size > +++ cat /sys/fs/cgroup/memory/memory.limit_in_bytes > ++ CONTAINER_MEMORY_IN_BYTES=268435456 > ++ DEFAULT_MEMORY_CEILING=1152921504606846975 > ++ '[' 268435456 -lt 1152921504606846975 ']' > ++ '[' -z ']' > ++ CONTAINER_HEAP_PERCENT=0.50 > ++ CONTAINER_MEMORY_IN_MB=256 > +++ echo '256 0.50' > +++ awk '{ printf "%d", $1 * $2 }' > ++ CONTAINER_HEAP_MAX=128 > ++ echo 128 > + MAX_HEAP=128 > + '[' -n 128 ']' > + JAVA_OPTS='-Xms128m -Xmx128m ' > + export MALLOC_ARENA_MAX=2 > + MALLOC_ARENA_MAX=2 > + JAVA_OPTS='-Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom' > + JAVA_OPTS='-Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps' > + exec java -Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar /cluster-operator-0.9.0.jar -Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps > 2019-01-07 14:02:15 INFO Main:70 - ClusterOperator 0.9.0 is starting > 2019-01-07T14:02:16.434+0000: [GC (Allocation Failure) > 2019-01-07T14:02:16.434+0000: [DefNew: 34944K->4352K(39296K), 0.0154766 >
secs] 34944K->8729K(126720K), 0.0155884 secs] [Times: user=0.01 sys=0.01, > real=0.01 secs] > > 2019-01-07T14:02:17.040+0000: [Full GC (Metadata GC Threshold) > 2019-01-07T14:02:17.040+0000: [Tenured: 4377K->7631K(87424K), 0.0301695 > secs] 24907K->7631K(126720K), [Metaspace: 20706K->20706K(1069056K)], > 0.0303281 secs] [Times: user=0.03 sys=0.00, real=0.03 secs] > > 2019-01-07 14:02:17 INFO Main:262 - Using config: > > PATH: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin > > KUBERNETES_PORT_53_UDP_PROTO: udp > > STRIMZI_DEFAULT_KAFKA_MIRRORMAKER_IMAGE: > strimzi/kafka-mirror-maker:0.9.0 > > PROMETHEUS_PORT_9090_TCP: tcp://172.30.38.237:9090 > > STRIMZI_FULL_RECONCILIATION_INTERVAL_MS: 120000 > > STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE: > strimzi/kafka-connect:0.9.0 > > KUBERNETES_PORT_443_TCP_PROTO: tcp > > KUBERNETES_PORT_53_TCP_PORT: 53 > > PROMETHEUS_PORT_9090_TCP_ADDR: 172.30.38.237 > > KAFKA_METRICS_PORT_9404_TCP_PROTO: tcp > > STRIMZI_VERSION: 0.9.0 > > PROMETHEUS_PORT_9090_TCP_PROTO: tcp > > KUBERNETES_PORT_53_TCP_PROTO: tcp > > KUBERNETES_PORT_53_TCP_ADDR: 172.30.0.1 > > HOSTNAME: strimzi-cluster-operator-5d5445c55d-9lb8v > > HOME: / > > MALLOC_ARENA_MAX: 2 > > STRIMZI_DEFAULT_ZOOKEEPER_IMAGE: strimzi/zookeeper:0.9.0 > > STRIMZI_NAMESPACE: msw > > KUBERNETES_SERVICE_PORT_HTTPS: 443 > > SHLVL: 1 > > JAVA_HOME: /usr/lib/jvm/java > > KAFKA_METRICS_PORT_9404_TCP_ADDR: 172.30.193.60 > > STRIMZI_DEFAULT_KAFKA_INIT_IMAGE: strimzi/kafka-init:0.9.0 > > KAFKA_METRICS_SERVICE_HOST: 172.30.193.60 > > KUBERNETES_PORT_443_TCP: tcp://172.30.0.1:443 > > PROMETHEUS_PORT_9090_TCP_PORT: 9090 > > STRIMZI_OPERATION_TIMEOUT_MS: 300000 > > PROMETHEUS_SERVICE_PORT: 9090 > > KAFKA_METRICS_SERVICE_PORT_METRICS: 9404 > > STRIMZI_DEFAULT_KAFKA_IMAGE: strimzi/kafka:0.9.0 > > STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE: > strimzi/kafka-stunnel:0.9.0 > > PROMETHEUS_PORT: tcp://172.30.38.237:9090 > > STRIMZI_LOG_LEVEL: INFO > > KUBERNETES_PORT: tcp://172.30.0.1:443 > > 
PROMETHEUS_SERVICE_HOST: 172.30.38.237 > > STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE: > strimzi/kafka-connect-s2i:0.9.0 > > KAFKA_METRICS_PORT: tcp://172.30.193.60:9404 > > KUBERNETES_PORT_53_TCP: tcp://172.30.0.1:53 > > KUBERNETES_PORT_53_UDP: udp://172.30.0.1:53 > > KUBERNETES_SERVICE_PORT: 443 > > KAFKA_METRICS_SERVICE_PORT: 9404 > > KUBERNETES_PORT_53_UDP_ADDR: 172.30.0.1 > > STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE: > strimzi/topic-operator:0.9.0 > > PWD: / > > PROMETHEUS_SERVICE_PORT_PROMETHEUS: 9090 > > KUBERNETES_PORT_443_TCP_ADDR: 172.30.0.1 > > STRIMZI_DEFAULT_USER_OPERATOR_IMAGE: > strimzi/user-operator:0.9.0 > > KUBERNETES_SERVICE_PORT_DNS_TCP: 53 > > STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE: > strimzi/entity-operator-stunnel:0.9.0 > > KUBERNETES_PORT_53_UDP_PORT: 53 > > KAFKA_METRICS_PORT_9404_TCP: tcp://172.30.193.60:9404 > > KUBERNETES_SERVICE_HOST: 172.30.0.1 > > KUBERNETES_SERVICE_PORT_DNS: 53 > > KUBERNETES_PORT_443_TCP_PORT: 443 > > STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE: > strimzi/zookeeper-stunnel:0.9.0 > > > > 2019-01-07 14:02:17 INFO ClusterOperator:58 - Creating ClusterOperator > for namespace msw > > 2019-01-07 14:02:17 INFO ClusterOperator:86 - Starting ClusterOperator > for namespace msw > > 2019-01-07 14:02:17 INFO ClusterOperator:93 - Started operator for Kafka > kind > > 2019-01-07 14:02:17 WARN WatchConnectionManager:185 - Exec Failure: HTTP > 404, Status: 404 - 404 page not found > > > > java.net.ProtocolException: Expected HTTP 101 response but was '404 Not > Found' > > at > okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) > [cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) > [cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) > [cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) > [cluster-operator-0.9.0.jar:0.9.0] > > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [?:1.8.0_191] > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [?:1.8.0_191] > > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > > 2019-01-07 14:02:17 INFO WatchConnectionManager:379 - Current reconnect > backoff is 1000 milliseconds (T0) > > 2019-01-07 14:02:17 ERROR Main:141 - Cluster Operator verticle in > namespace msw failed to start > > io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found > > > > at > io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > > Heap > > def new generation total 39296K, used 16240K [0x00000000f8000000, > 0x00000000faaa0000, 0x00000000faaa0000) > > eden space 34944K, 46% used [0x00000000f8000000, 0x00000000f8fdc0d8, > 0x00000000fa220000) > > from space 4352K, 0% used [0x00000000fa660000, 0x00000000fa660000, > 0x00000000faaa0000) > > to space 4352K, 0% used [0x00000000fa220000, 0x00000000fa220000, > 0x00000000fa660000) > > tenured generation total 87424K, used 7631K [0x00000000faaa0000, > 0x0000000100000000, 0x0000000100000000) > > the space 87424K, 8% used [0x00000000faaa0000, 
0x00000000fb213ed8, > 0x00000000fb214000, 0x0000000100000000) > > Metaspace used 23303K, capacity 23692K, committed 24064K, reserved > 1071104K > > class space used 2656K, capacity 2770K, committed 2816K, reserved 1048576K > > > > Thanks & Regards > > Markus > > > > *Von:* Jakub Scholz [mailto:jakub at scholz.cz] > *Gesendet:* Montag, 7. Januar 2019 15:04 > *An:* Schwarz, Markus > *Cc:* strimzi at redhat.com > *Betreff:* Re: [Strimzi] Cluster-Operator 0.9.0 does not start due to > error 404 on websocket connection > > > > Hi Markus, > > > > 0.9.0 should work fine with with Kubernetes 1.9 / OpenShift 3.9. Could you > share the complete log from the CO? The Kubernetes client normally takes > the address of the Kubernetes APi from the Kubernetes environment variables > and connects there. Maybe in your case there is something strange / wrong > with your cluster configuration. > > > > Thanks & Regards > > Jakub > > > > On Mon, Jan 7, 2019 at 2:40 PM Schwarz, Markus > wrote: > > Hi, > > > > We are currently running strimzi 0.4.0 ( I know, it?s old) on our > Openshift Origin 3.9 cluster and everything is working about fine. > > > > I know try to update to 0.9.0 to catch up to things and implement some > security. So I took all the yaml-files from the cluster-operator install > folder, made the necessary namespace amendments and gave it a try. 
The > strimzi-cluster-operator pod tries to start but then dies with the > following error message: > > > > --- > > 2019-01-07 12:49:29 WARN WatchConnectionManager:185 - Exec Failure: HTTP > 404, Status: 404 - 404 page not found > > > > java.net.ProtocolException: Expected HTTP 101 response but was '404 Not > Found' > > at > okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) > [cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) > [cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) > [cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) > [cluster-operator-0.9.0.jar:0.9.0] > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [?:1.8.0_191] > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [?:1.8.0_191] > > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > > 2019-01-07 12:49:29 INFO WatchConnectionManager:379 - Current reconnect > backoff is 1000 milliseconds (T0) > > 2019-01-07 12:49:29 ERROR Main:141 - Cluster Operator verticle in > namespace msw failed to start > > io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found > > > > at > io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) > ~[cluster-operator-0.9.0.jar:0.9.0] > > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > > --- > > > > It seems to try to open a websocket connection on a URL which does not > exist. I don't know if this is connected to the outdated version of > Kubernetes (1.9.1) or if this might be a configuration error of some sort; > any hint will be appreciated. I could not find any Kubernetes/OpenShift > version requirements for Strimzi. > > > > Thanks! > > Markus > > > prosoz-herten-footer > > _______________________________________________ > Strimzi mailing list > Strimzi at redhat.com > https://www.redhat.com/mailman/listinfo/strimzi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 71629 bytes Desc: not available URL: From M.Schwarz at prosoz.de Mon Jan 7 14:44:48 2019 From: M.Schwarz at prosoz.de (Schwarz, Markus) Date: Mon, 7 Jan 2019 14:44:48 +0000 Subject: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection In-Reply-To: References: <063a0c2994e74b22a8eac2f131107141@prosoz.de> Message-ID: <2b937cc896e44d088fd4705c1bf3c5be@prosoz.de> Hi Jakub, that fixed it indeed. The CRDs for kafkaconnect + s2i and kafkamirrormakers were missing; I guess I omitted them because I don't want to use either Kafka Connect or Mirror Maker. Thanks! Markus Von: Jakub Scholz [mailto:jakub at scholz.cz] Gesendet: Montag, 7. Januar 2019 15:39 An: Schwarz, Markus Cc: strimzi at redhat.com Betreff: Re: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection Thanks. I did some tests and it seems I get the same error when the Custom Resource Definitions are not properly installed.
Can you do "kubectl get crd" and make sure you get something like this: $ kubectl get crd NAME AGE kafkaconnects.kafka.strimzi.io 5m39s kafkaconnects2is.kafka.strimzi.io 5m39s kafkamirrormakers.kafka.strimzi.io 5m39s kafkas.kafka.strimzi.io 5m39s kafkatopics.kafka.strimzi.io 5m39s kafkausers.kafka.strimzi.io 5m39s Thanks & Regards Jakub On Mon, Jan 7, 2019 at 3:15 PM Schwarz, Markus > wrote: Hi Jakub, here is the complete log from the co: + JAR=/cluster-operator-0.9.0.jar + shift + . /bin/dynamic_resources.sh ++ get_heap_size +++ cat /sys/fs/cgroup/memory/memory.limit_in_bytes ++ CONTAINER_MEMORY_IN_BYTES=268435456 ++ DEFAULT_MEMORY_CEILING=1152921504606846975 ++ '[' 268435456 -lt 1152921504606846975 ']' ++ '[' -z ']' ++ CONTAINER_HEAP_PERCENT=0.50 ++ CONTAINER_MEMORY_IN_MB=256 +++ echo '256 0.50' +++ awk '{ printf "%d", $1 * $2 }' ++ CONTAINER_HEAP_MAX=128 ++ echo 128 + MAX_HEAP=128 + '[' -n 128 ']' + JAVA_OPTS='-Xms128m -Xmx128m ' + export MALLOC_ARENA_MAX=2 + MALLOC_ARENA_MAX=2 + JAVA_OPTS='-Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom' + JAVA_OPTS='-Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps' + exec java -Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar /cluster-operator-0.9.0.jar -Xms128m -Xmx128m -Dvertx.cacheDirBase=/tmp -Djava.security.egd=file:/dev/./urandom -XX:NativeMemoryTracking=summary -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 2019-01-07 14:02:15 INFO Main:70 - ClusterOperator 0.9.0 is starting 2019-01-07T14:02:16.434+0000: [GC (Allocation Failure) 2019-01-07T14:02:16.434+0000: [DefNew: 34944K->4352K(39296K), 0.0154766 secs] 34944K->8729K(126720K), 0.0155884 secs] [Times: user=0.01 sys=0.01, real=0.01 secs] 2019-01-07T14:02:17.040+0000: [Full GC 
(Metadata GC Threshold) 2019-01-07T14:02:17.040+0000: [Tenured: 4377K->7631K(87424K), 0.0301695 secs] 24907K->7631K(126720K), [Metaspace: 20706K->20706K(1069056K)], 0.0303281 secs] [Times: user=0.03 sys=0.00, real=0.03 secs] 2019-01-07 14:02:17 INFO Main:262 - Using config: PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_53_UDP_PROTO: udp STRIMZI_DEFAULT_KAFKA_MIRRORMAKER_IMAGE: strimzi/kafka-mirror-maker:0.9.0 PROMETHEUS_PORT_9090_TCP: tcp://172.30.38.237:9090 STRIMZI_FULL_RECONCILIATION_INTERVAL_MS: 120000 STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE: strimzi/kafka-connect:0.9.0 KUBERNETES_PORT_443_TCP_PROTO: tcp KUBERNETES_PORT_53_TCP_PORT: 53 PROMETHEUS_PORT_9090_TCP_ADDR: 172.30.38.237 KAFKA_METRICS_PORT_9404_TCP_PROTO: tcp STRIMZI_VERSION: 0.9.0 PROMETHEUS_PORT_9090_TCP_PROTO: tcp KUBERNETES_PORT_53_TCP_PROTO: tcp KUBERNETES_PORT_53_TCP_ADDR: 172.30.0.1 HOSTNAME: strimzi-cluster-operator-5d5445c55d-9lb8v HOME: / MALLOC_ARENA_MAX: 2 STRIMZI_DEFAULT_ZOOKEEPER_IMAGE: strimzi/zookeeper:0.9.0 STRIMZI_NAMESPACE: msw KUBERNETES_SERVICE_PORT_HTTPS: 443 SHLVL: 1 JAVA_HOME: /usr/lib/jvm/java KAFKA_METRICS_PORT_9404_TCP_ADDR: 172.30.193.60 STRIMZI_DEFAULT_KAFKA_INIT_IMAGE: strimzi/kafka-init:0.9.0 KAFKA_METRICS_SERVICE_HOST: 172.30.193.60 KUBERNETES_PORT_443_TCP: tcp://172.30.0.1:443 PROMETHEUS_PORT_9090_TCP_PORT: 9090 STRIMZI_OPERATION_TIMEOUT_MS: 300000 PROMETHEUS_SERVICE_PORT: 9090 KAFKA_METRICS_SERVICE_PORT_METRICS: 9404 STRIMZI_DEFAULT_KAFKA_IMAGE: strimzi/kafka:0.9.0 STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE: strimzi/kafka-stunnel:0.9.0 PROMETHEUS_PORT: tcp://172.30.38.237:9090 STRIMZI_LOG_LEVEL: INFO KUBERNETES_PORT: tcp://172.30.0.1:443 PROMETHEUS_SERVICE_HOST: 172.30.38.237 STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE: strimzi/kafka-connect-s2i:0.9.0 KAFKA_METRICS_PORT: tcp://172.30.193.60:9404 KUBERNETES_PORT_53_TCP: tcp://172.30.0.1:53 KUBERNETES_PORT_53_UDP: udp://172.30.0.1:53 KUBERNETES_SERVICE_PORT: 443 KAFKA_METRICS_SERVICE_PORT: 9404 
KUBERNETES_PORT_53_UDP_ADDR: 172.30.0.1 STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE: strimzi/topic-operator:0.9.0 PWD: / PROMETHEUS_SERVICE_PORT_PROMETHEUS: 9090 KUBERNETES_PORT_443_TCP_ADDR: 172.30.0.1 STRIMZI_DEFAULT_USER_OPERATOR_IMAGE: strimzi/user-operator:0.9.0 KUBERNETES_SERVICE_PORT_DNS_TCP: 53 STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE: strimzi/entity-operator-stunnel:0.9.0 KUBERNETES_PORT_53_UDP_PORT: 53 KAFKA_METRICS_PORT_9404_TCP: tcp://172.30.193.60:9404 KUBERNETES_SERVICE_HOST: 172.30.0.1 KUBERNETES_SERVICE_PORT_DNS: 53 KUBERNETES_PORT_443_TCP_PORT: 443 STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE: strimzi/zookeeper-stunnel:0.9.0 2019-01-07 14:02:17 INFO ClusterOperator:58 - Creating ClusterOperator for namespace msw 2019-01-07 14:02:17 INFO ClusterOperator:86 - Starting ClusterOperator for namespace msw 2019-01-07 14:02:17 INFO ClusterOperator:93 - Started operator for Kafka kind 2019-01-07 14:02:17 WARN WatchConnectionManager:185 - Exec Failure: HTTP 404, Status: 404 - 404 page not found java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found' at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] 2019-01-07 14:02:17 INFO WatchConnectionManager:379 - Current reconnect backoff is 1000 milliseconds (T0) 2019-01-07 14:02:17 ERROR Main:141 - Cluster Operator verticle in namespace msw failed to start 
io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] Heap def new generation total 39296K, used 16240K [0x00000000f8000000, 0x00000000faaa0000, 0x00000000faaa0000) eden space 34944K, 46% used [0x00000000f8000000, 0x00000000f8fdc0d8, 0x00000000fa220000) from space 4352K, 0% used [0x00000000fa660000, 0x00000000fa660000, 0x00000000faaa0000) to space 4352K, 0% used [0x00000000fa220000, 0x00000000fa220000, 0x00000000fa660000) tenured generation total 87424K, used 7631K [0x00000000faaa0000, 0x0000000100000000, 0x0000000100000000) the space 87424K, 8% used [0x00000000faaa0000, 0x00000000fb213ed8, 0x00000000fb214000, 0x0000000100000000) Metaspace used 23303K, capacity 23692K, committed 24064K, reserved 1071104K class space used 2656K, capacity 2770K, committed 2816K, reserved 1048576K Thanks & Regards Markus Von: Jakub Scholz [mailto:jakub at scholz.cz] Gesendet: Montag, 7. Januar 2019 15:04 An: Schwarz, Markus > Cc: strimzi at redhat.com Betreff: Re: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection Hi Markus, 0.9.0 should work fine with Kubernetes 1.9 / OpenShift 3.9.
Could you share the complete log from the CO? The Kubernetes client normally takes the address of the Kubernetes API from the Kubernetes environment variables and connects there. Maybe in your case there is something strange / wrong with your cluster configuration. Thanks & Regards Jakub On Mon, Jan 7, 2019 at 2:40 PM Schwarz, Markus > wrote: Hi, We are currently running Strimzi 0.4.0 (I know, it's old) on our OpenShift Origin 3.9 cluster and everything is working just about fine. I now try to update to 0.9.0 to catch up with things and implement some security. So I took all the yaml-files from the cluster-operator install folder, made the necessary namespace amendments and gave it a try. The strimzi-cluster-operator pod tries to start but then dies with the following error message: --- 2019-01-07 12:49:29 WARN WatchConnectionManager:185 - Exec Failure: HTTP 404, Status: 404 - 404 page not found java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found' at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:219) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:186) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) [cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] 2019-01-07 12:49:29 INFO WatchConnectionManager:379 - Current reconnect backoff is 1000 milliseconds (T0) 2019-01-07 12:49:29 ERROR Main:141 - Cluster Operator verticle in namespace msw failed to start io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found at
io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:189) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:546) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:188) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) ~[cluster-operator-0.9.0.jar:0.9.0] at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[cluster-operator-0.9.0.jar:0.9.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] --- It seems to try to open a websocket connection on a URL which does not exist. I don't know if this is connected to the outdated version of Kubernetes (1.9.1) or if this might be a configuration error of some sort; any hint will be appreciated. I could not find any Kubernetes/OpenShift version requirements for Strimzi. Thanks! Markus prosoz-herten-footer _______________________________________________ Strimzi mailing list Strimzi at redhat.com https://www.redhat.com/mailman/listinfo/strimzi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakub at scholz.cz Mon Jan 7 14:57:38 2019 From: jakub at scholz.cz (Jakub Scholz) Date: Mon, 7 Jan 2019 15:57:38 +0100 Subject: [Strimzi] Cluster-Operator 0.9.0 does not start due to error 404 on websocket connection In-Reply-To: <2b937cc896e44d088fd4705c1bf3c5be@prosoz.de> References: <063a0c2994e74b22a8eac2f131107141@prosoz.de> <2b937cc896e44d088fd4705c1bf3c5be@prosoz.de> Message-ID: Yeah, you have to install all of them. There is currently no way to tell the Cluster Operator to handle only some of the resources.
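The full list of CRDs that 0.9.0 expects appears in the "kubectl get crd" output earlier in the thread. A small POSIX-shell helper (hypothetical, not shipped with Strimzi) can flag missing ones before deploying the operator; the function names and check style here are this sketch's own invention:

```shell
# Hypothetical helper (not part of Strimzi): pass in the output of
# `kubectl get crd -o name` and it prints every CRD required by the
# Strimzi 0.9.0 Cluster Operator that is missing from that output.
check_strimzi_crds() {
  crd_list=$1
  for crd in kafkas kafkaconnects kafkaconnects2is kafkamirrormakers kafkatopics kafkausers; do
    case "$crd_list" in
      *"${crd}.kafka.strimzi.io"*) ;;                  # CRD present, nothing to report
      *) echo "missing: ${crd}.kafka.strimzi.io" ;;    # CRD absent
    esac
  done
}

# On a live cluster (assuming kubectl is configured) you would run:
#   check_strimzi_crds "$(kubectl get crd -o name)"
```

An empty result means all six CRDs are installed; any "missing:" line reproduces the 404 failure mode described above.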
Thanks & Regards Jakub On Mon, Jan 7, 2019 at 3:44 PM Schwarz, Markus wrote: > Hi Jakub, > > that fixed it indeed. The CRDs for kafkaconnect + s2i and > kafkamirrormakers were missing; I guess I omitted them because I don't > want to use either Kafka Connect or Mirror Maker. > > Thanks! > > Markus > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scrobert at msn.com Thu Jan 17 19:02:05 2019 From: scrobert at msn.com (Robert Krawiec) Date: Thu, 17 Jan 2019 19:02:05 +0000 Subject: [Strimzi] Node affinity guidance Message-ID: I don't see any built-in affinity to keep Zookeeper and Kafka pods from deploying to the same cluster node. Is this something that is recommended? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakub at scholz.cz Thu Jan 17 19:57:57 2019 From: jakub at scholz.cz (Jakub Scholz) Date: Thu, 17 Jan 2019 20:57:57 +0100 Subject: [Strimzi] Node affinity guidance In-Reply-To: References: Message-ID: Hi Robert, We support configuring Node and Pod affinity ( https://strimzi.io/docs/latest/full.html#assembly-scheduling-deployment-configuration-kafka). But we do not set any rules by default (unless you enable the Kafka rack awareness feature). My personal view is that setting them by default might not work for every user, every cluster size and every environment (e.g. development versus production). So you have to configure them yourself.
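To make that advice concrete, a rule keeping Kafka brokers off the nodes that already run a ZooKeeper pod of the same cluster could look roughly like the sketch below. This is an illustration only, not an official Strimzi example: it assumes the spec.kafka.affinity field and the strimzi.io/name pod label used by Strimzi releases of this era, so check the scheduling documentation linked in the reply for the exact schema of your version:

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    # Schedule Kafka broker pods only on nodes that do not already host
    # a ZooKeeper pod of this cluster (label value is an assumption).
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                strimzi.io/name: my-cluster-zookeeper
            topologyKey: kubernetes.io/hostname
  zookeeper:
    replicas: 3
```

Using requiredDuringScheduling makes the separation hard (pods stay Pending if it cannot be satisfied); preferredDuringScheduling would express the same intent as a soft preference for smaller clusters.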
If you ask me, having dedicated nodes for your Kafka and Zookeeper pods is always a safe bet. But sometimes it might be a bit wasteful. If you have big hosts, I think Zookeeper and Kafka can easily live next to each other. It always also depends on the size of your cluster and how critical it is for you. But as always, there will be many people with many opinions. Thanks & Regards Jakub On Thu, Jan 17, 2019 at 8:02 PM Robert Krawiec wrote: > I don't see any built-in affinity to keep Zookeeper and Kafka pods from > deploying to the same cluster node. Is this something that is recommended? > _______________________________________________ > Strimzi mailing list > Strimzi at redhat.com > https://www.redhat.com/mailman/listinfo/strimzi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dandaniel97 at gmail.com Mon Jan 21 12:52:07 2019 From: dandaniel97 at gmail.com (Daniel Beilin) Date: Mon, 21 Jan 2019 14:52:07 +0200 Subject: [Strimzi] Environment variables in kafka connect Message-ID: Hello, I've deployed Strimzi and Kafka Connect and I'm trying to create an S3 connector. The problem is that the S3 connector requires AWS credentials. The AWS credentials can sit in the environment variables, but when I try to change them the Cluster Operator reverts back to a configuration without them. Is there a solution for setting new environment variables? Or do you know any other way to supply the AWS credentials with the Strimzi platform? Best regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakub at scholz.cz Mon Jan 21 16:24:54 2019 From: jakub at scholz.cz (Jakub Scholz) Date: Mon, 21 Jan 2019 17:24:54 +0100 Subject: [Strimzi] Environment variables in kafka connect In-Reply-To: References: Message-ID: Hi Daniel, This is currently implemented in master, but not released. It will be released in 0.10.0, hopefully later this week or early next week.
For more info, you can have a look into the master documentation: https://strimzi.io/docs/master/full.html#assembly-kafka-connect-external-configuration-deployment-configuration-kafka-connect ... there is even an example with AWS credentials in environment variables. Hope this helps. Thanks & Regards Jakub On Mon, Jan 21, 2019 at 1:52 PM Daniel Beilin wrote: > Hello, > > I've deployed Strimzi and Kafka Connect and I'm trying to create an S3 > connector. > > The problem is that the S3 connector requires AWS credentials. The AWS > credentials can sit in the environment variables, but when I try to change > them the Cluster Operator reverts back to a configuration without them. > > Is there a solution for setting new environment variables? Or do you know any > other way to supply the AWS credentials with the Strimzi platform? > > Best regards, > Daniel > _______________________________________________ > Strimzi mailing list > Strimzi at redhat.com > https://www.redhat.com/mailman/listinfo/strimzi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haiouxiang at gmail.com Mon Jan 28 15:27:31 2019 From: haiouxiang at gmail.com (Haiou Xiang) Date: Mon, 28 Jan 2019 23:27:31 +0800 Subject: [Strimzi] How to config kafka connect Message-ID: I deployed Strimzi Kafka, Strimzi Zookeeper, a Strimzi Kafka Connect cluster with the Debezium MongoDB connector plugin and a MongoDB replica set on Kubernetes, and all pods were deployed successfully. Next, according to the Debezium documentation, I need to do the two steps below. " To use the connector to produce change events for a particular MongoDB replica set, 1. simply create a configuration file for the MongoDB Connector 2. use the Kafka Connect REST API to add that connector to your Kafka Connect cluster. " But does Strimzi support the Kafka Connect REST API? If yes, how do I invoke it? If not, how can I configure Kafka Connect, the Debezium MongoDB connector and MongoDB in Kubernetes?
thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakub at scholz.cz Mon Jan 28 16:56:02 2019 From: jakub at scholz.cz (Jakub Scholz) Date: Mon, 28 Jan 2019 17:56:02 +0100 Subject: [Strimzi] RC1 for Strimzi 0.10.0 Message-ID: Hi, The first Release Candidate (RC1) for Strimzi 0.10.0 is now available. The main changes include Kafka 2.1.0 and Kafka upgrades, Secrets management for Kafka Connect, Network Policy management and many more. For more details and the upgrade procedure, go to: https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.10.0-rc1 Jakub -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbentley at redhat.com Mon Jan 28 17:21:06 2019 From: tbentley at redhat.com (Tom Bentley) Date: Mon, 28 Jan 2019 17:21:06 +0000 Subject: [Strimzi] How to config kafka connect In-Reply-To: References: Message-ID: Hi Haiou, The Kafka Connect REST API is available on port 8083, as the -connect-api service. Using it is slightly awkward because it is not exposed outside the cluster. You should be able to use something like kubectl exec -ti $CONNECT_POD -- curl -X POST -H "Content-Type: application/json" --data '{"name": "inventory-connector", "config": {"connector.class": "io.debezium.connector.mongodb.MongoDbConnector", "mongodb.hosts": "rs0/192.168.99.100:27017", "mongodb.name": "fullfillment", "collection.whitelist": "inventory[.]*"}}' http://localhost:8083/connectors (We have an issue for making the REST API accessible outside the cluster: https://github.com/strimzi/strimzi-kafka-operator/issues/130) Regards, Tom On Mon, Jan 28, 2019 at 3:27 PM Haiou Xiang wrote: > I deployed Strimzi Kafka, Strimzi Zookeeper, a Strimzi Kafka Connect cluster with > the Debezium MongoDB connector plugin and a MongoDB replica set on Kubernetes, and > all pods were deployed successfully. Next, according to the Debezium documentation, > I need to do the two steps below.
> "To use the connector to produce change events for a particular MongoDB replica set:
> 1. simply create a configuration file for the MongoDB Connector
> 2. use the Kafka Connect REST API to add that connector to your Kafka Connect cluster."
>
> But does Strimzi support the Kafka Connect REST API? If yes, how do I invoke it? If not, how can I configure Kafka Connect, the Debezium MongoDB connector, and MongoDB on Kubernetes?
>
> thanks,
> _______________________________________________
> Strimzi mailing list
> Strimzi at redhat.com
> https://www.redhat.com/mailman/listinfo/strimzi

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dandaniel97 at gmail.com Tue Jan 29 14:07:52 2019
From: dandaniel97 at gmail.com (Daniel Beilin)
Date: Tue, 29 Jan 2019 16:07:52 +0200
Subject: [Strimzi] Multiple namespace AMQ
Message-ID:

Hello,

I want to deploy AMQ Streams in such a way that we have one Cluster Operator sitting inside one project, and other projects use it to deploy their clusters. But the way it seems to work is not very "as a service" and requires cluster admin involvement in several places in order to add a new project.

Firstly, you need to change the env inside the deployment of the Cluster Operator.
Secondly, you need to create the role binding in the new project.
Thirdly, you need to re-deploy the Cluster Operator.

These three steps require high privileges and are not really accessible to someone who is not a cluster admin. Is there a way to make this more accessible to non-cluster admins? Or a way to avoid doing this for every single project?

Thank you in advance,
Daniel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ppatiern at redhat.com Wed Jan 30 08:08:18 2019
From: ppatiern at redhat.com (Paolo Patierno)
Date: Wed, 30 Jan 2019 09:08:18 +0100
Subject: [Strimzi] Multiple namespace AMQ
In-Reply-To:
References:
Message-ID:

Hi Daniel,

The Cluster Operator needs these rights in order to watch/create/update all the Kubernetes/OpenShift resources for deploying and managing one or more Kafka clusters (and Kafka Connect and Mirror Maker instances). It also needs the rights to delegate to the other operators (User and Topic) the permissions for handling the resources for user and topic management.

Granting these rights using a service account and role bindings is not possible without admin rights.

With OpenShift 3.11 and the OLM (Operator Lifecycle Manager) in place, it should be simpler and more transparent for the final user; the OLM will take care of deploying the Cluster Operator so that admin rights aren't needed anymore.

Finally, just remember that you don't need admin rights in order to deploy a Kafka cluster. In that case a "Strimzi admin" role is enough for creating the Kafka-related resources (as you can read here: https://strimzi.io/docs/master/#assembly-getting-started-strimzi-admin-str).

Thanks,
Paolo.

On Tue, Jan 29, 2019 at 3:08 PM Daniel Beilin wrote:
> Hello,
>
> I want to deploy AMQ Streams in such a way that we have one Cluster Operator sitting inside one project, and other projects use it to deploy their clusters. But the way it seems to work is not very "as a service" and requires cluster admin involvement in several places in order to add a new project.
>
> Firstly, you need to change the env inside the deployment of the Cluster Operator.
> Secondly, you need to create the role binding in the new project.
> Thirdly, you need to re-deploy the Cluster Operator.
>
> These three steps require high privileges and are not really accessible to someone who is not a cluster admin. Is there a way to make this more accessible to non-cluster admins? Or a way to avoid doing this for every single project?
>
> Thank you in advance,
> Daniel
> _______________________________________________
> Strimzi mailing list
> Strimzi at redhat.com
> https://www.redhat.com/mailman/listinfo/strimzi

--

PAOLO PATIERNO
PRINCIPAL SOFTWARE ENGINEER, MESSAGING & IOT
Red Hat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jakub at scholz.cz Wed Jan 30 09:34:00 2019
From: jakub at scholz.cz (Jakub Scholz)
Date: Wed, 30 Jan 2019 10:34:00 +0100
Subject: [Strimzi] Multiple namespace AMQ
In-Reply-To:
References:
Message-ID:

I would perhaps just add one more thing ... if you replace the RoleBindings in the installation files with ClusterRoleBindings, you will not need to change the RBAC for every new namespace. You will just need to modify the namespaces in the deployment (support for watching all namespaces automatically is in progress: https://github.com/strimzi/strimzi-kafka-operator/pull/1261) - that should not require cluster-admin rights. But that of course means that you will give the operator access to your whole cluster, so it is a bit of a trade-off between security and user comfort. I'm afraid it is sometimes hard to combine everything - user-friendliness, security, features - into a single package.

Thanks & Regards
Jakub

On Wed, Jan 30, 2019 at 9:08 AM Paolo Patierno wrote:
> Hi Daniel,
>
> The Cluster Operator needs these rights in order to watch/create/update all the Kubernetes/OpenShift resources for deploying and managing one or more Kafka clusters (and Kafka Connect and Mirror Maker instances). It also needs the rights to delegate to the other operators (User and Topic) the permissions for handling the resources for user and topic management.
> Granting these rights using a service account and role bindings is not possible without admin rights.
>
> With OpenShift 3.11 and the OLM (Operator Lifecycle Manager) in place, it should be simpler and more transparent for the final user; the OLM will take care of deploying the Cluster Operator so that admin rights aren't needed anymore.
>
> Finally, just remember that you don't need admin rights in order to deploy a Kafka cluster. In that case a "Strimzi admin" role is enough for creating the Kafka-related resources (as you can read here: https://strimzi.io/docs/master/#assembly-getting-started-strimzi-admin-str).
>
> Thanks,
> Paolo.
>
> On Tue, Jan 29, 2019 at 3:08 PM Daniel Beilin wrote:
>
>> Hello,
>>
>> I want to deploy AMQ Streams in such a way that we have one Cluster Operator sitting inside one project, and other projects use it to deploy their clusters. But the way it seems to work is not very "as a service" and requires cluster admin involvement in several places in order to add a new project.
>>
>> Firstly, you need to change the env inside the deployment of the Cluster Operator.
>> Secondly, you need to create the role binding in the new project.
>> Thirdly, you need to re-deploy the Cluster Operator.
>>
>> These three steps require high privileges and are not really accessible to someone who is not a cluster admin. Is there a way to make this more accessible to non-cluster admins? Or a way to avoid doing this for every single project?
>>
>> Thank you in advance,
>> Daniel
>> _______________________________________________
>> Strimzi mailing list
>> Strimzi at redhat.com
>> https://www.redhat.com/mailman/listinfo/strimzi

--

PAOLO PATIERNO
PRINCIPAL SOFTWARE ENGINEER, MESSAGING & IOT
Red Hat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ppatiern at redhat.com Thu Jan 31 06:44:24 2019
From: ppatiern at redhat.com (Paolo Patierno)
Date: Thu, 31 Jan 2019 07:44:24 +0100
Subject: [Strimzi] Strimzi project ideas for Google Summer of Code
Message-ID:

Hi all,

The Strimzi committers team would like to take part in GSoC 2019 (Google Summer of Code), as we did last year when a student developed a "proof of concept" for HTTP support in the Strimzi-Kafka bridge (which already supported the AMQP 1.0 protocol).

This year we would like feedback from the Strimzi community: if you have any project ideas in the Strimzi/Kafka space that would be useful for you, we could propose them as GSoC projects. You could also help mentor the development.

The deadline for proposing ideas is February 6th, so if you have one (or more), please let us know on the mailing list or on the Strimzi Slack channel!

Thanks,

--

PAOLO PATIERNO
PRINCIPAL SOFTWARE ENGINEER, MESSAGING & IOT
Red Hat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jakub at scholz.cz Thu Jan 31 08:18:14 2019
From: jakub at scholz.cz (Jakub Scholz)
Date: Thu, 31 Jan 2019 09:18:14 +0100
Subject: [Strimzi] RC2 for Strimzi 0.10.0
Message-ID:

Hi,

The Release Candidate 2 for the Strimzi 0.10.0 release is now available. The main change since RC1 is an update of the sundr.io dependency and of some related tests which were causing build issues in some environments.
For more details and the upgrade procedure, go to:
https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.10.0-rc2

Thanks & Regards
Jakub
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
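
[Editor's note] The externalConfiguration mechanism discussed in the first thread above (passing AWS credentials to a Kafka Connect connector via a Secret) might look roughly like this. This is a minimal sketch, not an authoritative example: the Secret name, key names, and cluster name are assumptions, and the apiVersion reflects the Strimzi 0.10.x era.

```yaml
# Sketch: a Secret holding the AWS credentials (names are assumptions)
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
stringData:
  awsAccessKey: MY-ACCESS-KEY
  awsSecretAccessKey: MY-SECRET-KEY
---
# Sketch: exposing the Secret's values as environment variables
# in the Kafka Connect pods via spec.externalConfiguration.env
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ... replicas, bootstrapServers, etc. omitted ...
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
```

Because the env vars come from the custom resource rather than being patched onto the Deployment directly, the Cluster Operator will not revert them during reconciliation.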