<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 27. 06. 2017 10:54, Gordon Sim
wrote:<br>
</div>
<blockquote
cite="mid:260d5f34-2c3a-2e19-4a1d-d2a08d44be9f@redhat.com"
type="cite">On 27/06/17 09:40, Marko Lukša wrote:
<br>
<blockquote type="cite">
<br>
On 27. 06. 2017 10:32, Gordon Sim wrote:
<br>
<blockquote type="cite">I've been experimenting with using the
service broker integration from address-controller and it all
works very nicely. (The first address always takes a while as
it has to start up all the infrastructure which may involve
pulling down latest images.)
<br>
<br>
One thing that I do think is confusing though is the option to
put things into a particular project, as the infrastructure is
*not* actually placed in that chosen project. You even get a
notice at the end:
<br>
<br>
"enmasse-queue-2ps8x has been added to My Project
successfully"
<br>
<br>
</blockquote>
<br>
"enmasse-queue-2ps8x" in this case is the service Instance
object, which is indeed added to "My Project",
<br>
</blockquote>
<br>
Ok, that makes sense. Is there a way to view that service instance
via CLI tools (oc or kubectl)?
<br>
<br>
</blockquote>
<br>
You need kubectl version 1.6 or later, and you need to point it at the
service catalog API server instead of the main OpenShift API server,
for example with an alias like this:<br>
<br>
<code>alias sc="kubectl --server=https://$(oc get route apiserver
-n service-catalog -o jsonpath=\"{.spec.host}\")
--insecure-skip-tls-verify"</code>
<br>
<br>
Then you can use <i>sc</i> to list brokers, serviceClasses,
instances and bindings, as explained here:
<a class="moz-txt-link-freetext" href="https://github.com/EnMasseProject/enmasse/tree/master/documentation/servicecatalog#provisioning-addresses-through-the-cli">https://github.com/EnMasseProject/enmasse/tree/master/documentation/servicecatalog#provisioning-addresses-through-the-cli</a><br>
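<br>
For reference, a session with the alias might look like the sketch below. The resource names follow the service catalog API; the <i>myproject</i> namespace is only an example, not real output:<br>
<br>

```shell
# List the registered brokers and the service classes they offer
sc get brokers
sc get serviceclasses

# List provisioned service instances in a project
# ("myproject" is an example namespace)
sc get instances -n myproject

# List bindings created against those instances
sc get bindings -n myproject
```

<br>
Because the alias passes --insecure-skip-tls-verify, this skips certificate checks against the route; fine for experimenting, but not something to keep in a real setup.<br>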
<br>
<blockquote
cite="mid:260d5f34-2c3a-2e19-4a1d-d2a08d44be9f@redhat.com"
type="cite">
<blockquote type="cite">but the infrastructure created by the
broker is in a different namespace (could also be in a
completely different cluster). The idea of the Service Catalog
is to give users the ability to self-provision black box
services, but a broker could also provision the services in the
same namespace, allowing the user access to the internals of a
service (white box).
<br>
<br>
The namespace in which a service instance is created is now
passed to the broker, so we could create the MaaS infrastructure
in that same namespace. But, do you really want/need that?
<br>
</blockquote>
<br>
Personally I feel it would be a useful option, but that may just
be due to my familiarity with the 'old ways' and a general dislike
of 'black boxes' :-)
<br>
<br>
</blockquote>
<br>
The main problem with this is that users would then be able to
reconfigure the provisioned infrastructure components directly,
bypassing the Service Catalog/Broker mechanism. That shouldn't be
allowed, since the Service Catalog expects to be the only one
modifying the services.<br>
<br>
<blockquote
cite="mid:260d5f34-2c3a-2e19-4a1d-d2a08d44be9f@redhat.com"
type="cite">Let me think on it some more.
<br>
<br>
_______________________________________________
<br>
enmasse mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:enmasse@redhat.com">enmasse@redhat.com</a>
<br>
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/enmasse">https://www.redhat.com/mailman/listinfo/enmasse</a>
<br>
</blockquote>
<br>
</body>
</html>