[EnMasse] LoadBalancer pending on minikube

Tom Harris tom.harris at ammeon.com
Tue Feb 19 17:06:27 UTC 2019


Hi Ulf,
Thanks for your response. Our first POC will be to integrate legacy JMS
clients with EnMasse over AMQP, so full JMS semantics will be required.
BTW I'm having an issue connecting my Apache Qpid JMS example client to
the broker instance; it seems to hang shortly after the connection is
created, at the line marked below:


        ConnectionFactory connectionFactory = new JmsConnectionFactory("user1", "password",
                "amqps://192.168.42.66:30876?transport.trustAll=true&transport.verifyHost=false");
        Connection connection = connectionFactory.createConnection();

        connection.start();   // <-- hangs here

        // Step 2. Create a session
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
I am using the following URL:
"amqps://192.168.42.66:30876?transport.trustAll=true&transport.verifyHost=false"
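For what it's worth, the URL itself parses cleanly with plain java.net.URI (a minimal JDK-only sanity check; note the transport.* options are interpreted by the Qpid JMS transport layer, not by the JDK):

```java
import java.net.URI;

public class UriCheck {
    public static void main(String[] args) {
        // Same URL passed to JmsConnectionFactory above.
        URI uri = URI.create(
                "amqps://192.168.42.66:30876"
                + "?transport.trustAll=true&transport.verifyHost=false");
        System.out.println(uri.getScheme());                   // amqps
        System.out.println(uri.getHost() + ":" + uri.getPort()); // 192.168.42.66:30876
        System.out.println(uri.getQuery()); // transport.trustAll=true&transport.verifyHost=false
    }
}
```

So the hang looks more like a stalled TLS handshake (consistent with the AMQ224088 timeouts in the Artemis log) than a malformed URL.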

minikube service list
|---------------|-----------------------------------|----------------------------|
|   NAMESPACE   |               NAME                |            URL             |
|---------------|-----------------------------------|----------------------------|
| default       | kubernetes                        | No node port               |
| enmasse-infra | address-space-controller          | No node port               |
| enmasse-infra | api-server                        | No node port               |
| enmasse-infra | broker-cks83h1y5v                 | No node port               |
| enmasse-infra | console-cks83h1y5v                | No node port               |
| enmasse-infra | console-cks83h1y5v-external       | http://192.168.42.66:31862 |
| enmasse-infra | messaging-cks83h1y5v              | No node port               |
| enmasse-infra | messaging-cks83h1y5v-external     | http://192.168.42.66:30876 |
| enmasse-infra | messaging-wss-cks83h1y5v-external | http://192.168.42.66:30768 |
| enmasse-infra | restapi                           | http://192.168.42.66:30583 |
| enmasse-infra | standard-authservice              | No node port               |
| kube-system   | kube-dns                          | No node port               |
| kube-system   | kubernetes-dashboard              | No node port               |
| kube-system   | tiller-deploy                     | No node port               |
|---------------|-----------------------------------|----------------------------|


The Artemis log shows:
2019-02-19T14:16:40.567Z ERROR [server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.
2019-02-19T14:53:29.294Z ERROR [server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.
2019-02-19T14:55:41.728Z ERROR [server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.
2019-02-19T15:28:04.179Z ERROR [server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.


threads

AMQPQueueExample [Java Application]
jms.example.AMQPQueueExample at localhost:35752
Thread [main] (Running)
Thread [nioEventLoopGroup-2-1] (Running)
Daemon Thread [threadDeathWatcher-3-1] (Running)
Thread [QpidJMS Connection Executor: ID:3e775af4-36a3-42ba-a5bd-953e8b7ba27b:1] (Running)
Daemon Thread [AmqpProvider:(1):[amqps://192.168.42.66:30876?transport.verifyHost=false&transport.trustAll=true]] (Running)
/home/jdk1.8.0_91/bin/java (19 Feb 2019, 17:02:33)


/Tom

On Tue, 19 Feb 2019 at 10:15, Ulf Lilleengen <lulf at redhat.com> wrote:

> Hi Tom,
>
> At present, there is no support for HA in the brokered address space. In
> Kubernetes a failing broker would typically be recreated within seconds, so
> the only difference between that mechanism and the traditional backup
> kicking in would be the slightly larger time window for clients to
> reconnect.
>
> We may support HA and Artemis clusters in the future via an Artemis
> 'operator' component (in development by some of the Artemis developers),
> but I don't think this is something we will implement for EnMasse
> specifically.
>
> If you don't require full JMS semantics, the standard address space will
> allow clients to keep their connection to the routers while brokers are
> restarting, which is somewhat more HA :)
>
> Best regards,
>
> Ulf
>
> On Tue, Feb 19, 2019 at 10:48 AM Tom Harris <tom.harris at ammeon.com> wrote:
>
>> Thanks Ulf,
>>
>> That was a simple resolution :-) my bad for not trying the obvious.
>> But now I have another question: is there support for deploying Apache
>> ActiveMQ Artemis in an HA configuration, i.e. live-backup?
>> There is a pre-configured AddressSpacePlan "brokered-single-broker";
>> what about multiple brokers defined in an HA configuration?
>> If not, is there a recognized design pattern to follow?
>>
>> BR
>> Tom.
>>
>>
>>
>> On Tue, 19 Feb 2019 at 05:31, Ulf Lilleengen <lulf at redhat.com> wrote:
>>
>>> Hi Tom,
>>>
>>> On minikube, LoadBalancers will stay in the pending state, as minikube
>>> does not run an actual load balancer, but the service should still get
>>> exposed. I noticed the same thing as you: the minikube service command
>>> does not print anything.
>>>
>>> However, running minikube service list seems to work:
>>>
>>> [lulf at pteppic enmasse]$ minikube service list | grep console-qctoiu26ev-external
>>> | enmasse-infra | console-qctoiu26ev-external       | http://192.168.39.95:31871 |
>>>
>>> Remember to replace 'http' with 'https' in your browser!
>>>
>>> Note that on Kubernetes, you need to create a console admin user in
>>> order to access the console (password 'password'):
>>>
>>> cat<<EOF | kubectl create -f -
>>>  apiVersion: user.enmasse.io/v1beta1
>>>  kind: MessagingUser
>>>  metadata:
>>>    name: myspace.admin
>>>  spec:
>>>    username: admin
>>>    authentication:
>>>      type: password
>>>      password: cGFzc3dvcmQ=
>>>    authorization:
>>>      - operations: ["manage"]
>>> EOF
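>>> The password field above is base64-encoded; a quick JDK-only check
>>> (a minimal sketch, nothing EnMasse-specific) confirms the value:

```java
import java.util.Base64;

public class DecodePassword {
    public static void main(String[] args) {
        // "cGFzc3dvcmQ=" is the base64 encoding of the plain-text password.
        String decoded = new String(Base64.getDecoder().decode("cGFzc3dvcmQ="));
        System.out.println(decoded); // password

        // Encoding a password of your own for the MessagingUser spec:
        System.out.println(Base64.getEncoder()
                .encodeToString("password".getBytes())); // cGFzc3dvcmQ=
    }
}
```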
>>>
>>> I've raised https://github.com/EnMasseProject/enmasse/issues/2345 to
>>> make sure this gets documented.
>>>
>>> Hope this helps,
>>>
>>> Ulf
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Feb 18, 2019 at 6:04 PM Tom Harris <tom.harris at ammeon.com>
>>> wrote:
>>>
>>>> Hi,
>>>> Please bear with me as I'm finding my way around EnMasse and Kubernetes;
>>>> this issue may be something basic on my side.
>>>>
>>>> I was trying to follow the example below for version enmasse-0.26.2:
>>>> http://enmasse.io/documentation/master/kubernetes/
>>>>
>>>> I think I executed all steps, but my LoadBalancers are all <pending>
>>>> and the following commands return nothing:
>>>>
>>>>  minikube service console-cks83h1y5v-external
>>>>
>>>>
>>>> kubectl get addressspace myspace -o
>>>> jsonpath={.status.endpointStatuses[?(@.name==\'messaging\')].externalHost}
>>>>
>>>> Any help appreciated.
>>>>
>>>> Here is some kubectl output:
>>>>
>>>> kubectl get pods
>>>> NAME                                        READY   STATUS    RESTARTS   AGE
>>>> address-space-controller-6659584688-cj6sk   1/1     Running   5          3d
>>>> agent.cks83h1y5v-7fc896dbcb-vvn7p           1/1     Running   2          2h
>>>> api-server-7448cdb9b9-hzbsp                 1/1     Running   3          3d
>>>> broker.cks83h1y5v-84974b75d7-qsmjf          1/1     Running   1          2h
>>>> keycloak-77fc676fd4-76n92                   1/1     Running   2          3d
>>>> keycloak-controller-5f9b878f55-mgsc6        1/1     Running   2          3d
>>>>
>>>>
>>>> kubectl get services
>>>> NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
>>>> address-space-controller            ClusterIP      10.102.64.4      <none>        8080/TCP            3d
>>>> api-server                          ClusterIP      10.104.57.84     <none>        443/TCP,8080/TCP    3d
>>>> broker-cks83h1y5v                   ClusterIP      10.105.203.67    <none>        55671/TCP           2h
>>>> console-cks83h1y5v                  ClusterIP      10.106.49.252    <none>        8081/TCP,8088/TCP   2h
>>>> console-cks83h1y5v-external         LoadBalancer   10.110.174.169   <pending>     8081:31862/TCP      2h
>>>> messaging-cks83h1y5v                ClusterIP      10.96.92.201     <none>        5672/TCP,5671/TCP   2h
>>>> messaging-cks83h1y5v-external       LoadBalancer   10.108.161.106   <pending>     5671:30876/TCP      2h
>>>> messaging-wss-cks83h1y5v-external   LoadBalancer   10.104.145.87    <pending>     5671:30768/TCP      2h
>>>> restapi                             LoadBalancer   10.104.101.118   <pending>     443:30583/TCP       3d
>>>> standard-authservice                ClusterIP      10.101.175.149   <none>        5671/TCP,8443/TCP   3d
>>>>
>>>> /T
>>>>
>>>>
>>>>
>>>> This email and any files transmitted with it are confidential and
>>>> intended solely for the use of the individual or entity to whom they are
>>>> addressed. If you have received this email in error please notify the
>>>> system manager. This message contains confidential information and is
>>>> intended only for the individual named. If you are not the named addressee
>>>> you should not disseminate, distribute or copy this e-mail.
>>>>
>>>> _______________________________________________
>>>> enmasse mailing list
>>>> enmasse at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/enmasse
>>>>
>>>
>>
>>




