[Container-tools] Atomic Developer Bundle and OpenShift

Langdon White langdon at redhat.com
Tue Nov 3 17:05:08 UTC 2015



On 11/03/2015 04:21 AM, Maciej Szulik wrote:
>
>
> On 11/03/2015 05:09 AM, Praveen Kumar wrote:
>> On Mon, Nov 2, 2015 at 11:04 PM, Langdon White <langdon at redhat.com> 
>> wrote:
>>> Hi list,
>>>
>>> I think we have a number of moving parts for getting OpenShift
>>> integrated into the ADB vagrant box, and I am nervous that we don't
>>> have all the aspects assigned to anyone (on either the c-t team or
>>> the OpenShift team). I am also concerned that I might be missing some
>>> aspects. Please weigh in if you own one of these pieces and/or if you
>>> think we are missing any.
>>>
>>> * a method for docker images to be pre-loaded onto the vagrant boxes:
>>> As you probably agree, we would really like the v-up experience of
>>> the ADB to be as quick and painless as possible. One of the things
>>> that will make that possible is to "pre-install" the docker images
>>> for OpenShift, AtomicApp, v2c, etc. However, the build tooling (koji)
>>> does not allow a build to access the general internet. As a result,
>>> "docker pull" is not an option (at least from Docker Hub). We have a
>>> couple of options here:
>>>    * stand up a docker registry in the build environment that the
>>> builds can access: While this seems like a good idea, the timeline to
>>> make this happen is probably on the order of months, not days
>>>    * auto-rpm-ify the docker images: Build the images in koji, use
>>> koji to rpm-ify the binary images, pull the rpms as per normal,
>>> extract each rpm, and inject the images into the docker-images
>>> storage (see the sketch below). This is likely the most viable
>>> solution. However, it may run into problems with docker-registry-v2
>>> (which doesn't support import at this time).
>>> Is anyone owning testing and resolving this issue?
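>>>
>>> A minimal sketch of the extract-and-inject step, assuming a
>>> hypothetical rpm name and payload path (this is not the actual koji
>>> tooling):
>>>
>>> import os
>>> import subprocess
>>> import tempfile
>>>
>>> RPM = "openshift-origin-docker-image.rpm"       # hypothetical name
>>> TARBALL = "usr/share/docker-images/origin.tar"  # hypothetical path
>>>
>>> workdir = tempfile.mkdtemp()
>>> # rpm2cpio streams the rpm payload; cpio unpacks it under workdir
>>> rpm2cpio = subprocess.Popen(["rpm2cpio", os.path.abspath(RPM)],
>>>                             stdout=subprocess.PIPE)
>>> subprocess.check_call(["cpio", "-idm"], stdin=rpm2cpio.stdout,
>>>                       cwd=workdir)
>>> rpm2cpio.stdout.close()
>>> rpm2cpio.wait()
>>>
>>> # inject the image tarball into docker's local image storage
>>> subprocess.check_call(["docker", "load", "-i",
>>>                        os.path.join(workdir, TARBALL)])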
>>>
>>> * OpenShift needs DNS to allow a user to access their applications:
>>> For OpenShift to give a good user experience, it needs to manage some
>>> wildcard domain. In other words, when a user sets up an application,
>>> they need to give it a name, and they access the application from
>>> their host web browser at something like "myCoolApp.myADB.lcl".
>>> OpenShift uses host headers to route the browser to the correct app.
>>> However, this means that, if OpenShift is running in a VM, the host
>>> machine needs to know to route *.myADB.lcl to the VM and then to
>>> OpenShift. As the VM will come up on a (likely) unknowable IP, we
>>> planned to use vagrant-landrush, a plugin for vagrant that manages a
>>> DNS server for this type of use case. Currently, this plugin still
>>> has some problems on Windows and has never been tested in this exact
>>> use case. Is someone working on:
>>>    1) testing that this setup will actually work with OpenShift (even
>>> on Mac or Linux where, I believe, v-landrush is known to work)
>>>    2) looking into the issues on Windows?
>>>
>>> What landrush, loosely, does:
>>> on "vagrant up", vagrant launches the box; vagrant calls landrush;
>>> landrush looks at the IP of the vagrant VM; landrush inserts
>>> *.myADB.lcl -> vagrant-vm-ip into its DNS server
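>>>
>>> A quick way to sanity-check the resulting wildcard entry from the
>>> host (a sketch; it assumes the box is up, landrush is installed, and
>>> the example domain above is in use):
>>>
>>> import socket
>>>
>>> # both the bare domain and an arbitrary subdomain should resolve to
>>> # the vagrant VM's IP if the wildcard entry is in place
>>> for name in ("myADB.lcl", "mycoolnewwebsite.myADB.lcl"):
>>>     print(name, "->", socket.gethostbyname(name))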
>>>
>>> full example:
>>> the web browser gets a user request for mycoolnewwebsite.myADB.lcl;
>>> the browser goes to host resolution (where that happens depends on
>>> the OS) and finds *.myADB.lcl -> vagrant-vm-ip; the browser then
>>> navigates to that IP; OpenShift in the VM listens on that address:80,
>>> looks at the host header (mycoolnewwebsite.myADB.lcl), and maps it to
>>> the correct running website in OpenShift
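>>>
>>> To make the host-header step concrete, here is a minimal sketch of
>>> that kind of routing (illustrative only: this is not OpenShift's
>>> router code, and the app names are made up):
>>>
>>> from http.server import BaseHTTPRequestHandler, HTTPServer
>>>
>>> # hypothetical mapping of requested hostname -> running app
>>> ROUTES = {"mycoolnewwebsite.myADB.lcl": "my cool new website"}
>>>
>>> class HostHeaderRouter(BaseHTTPRequestHandler):
>>>     def do_GET(self):
>>>         # strip any :port suffix from the Host header
>>>         host = (self.headers.get("Host") or "").split(":")[0]
>>>         app = ROUTES.get(host)
>>>         if app is None:
>>>             self.send_error(404, "no route for %s" % host)
>>>             return
>>>         self.send_response(200)
>>>         self.end_headers()
>>>         self.wfile.write(("routed to: %s\n" % app).encode())
>>>
>>> # port 8080 instead of 80 so the sketch runs without root
>>> HTTPServer(("", 8080), HostHeaderRouter).serve_forever()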
>>>
>>> * allow for k8s + docker to work independently of OpenShift: In the
>>> plans for ADB we wanted to allow a developer to use k8s+docker
>>> directly as well as OpenShift. However, this is not quite as easy as
>>> it seems, as the k8s version on CentOS and the k8s version in
>>> OpenShift are not the same. As a result, even if they are installed
>>> separately (see installation bullets elsewhere), they need to bind to
>>> different IPs to be able to listen on the same port (see the sketch
>>> below). Does someone own testing and ensuring the setup of these
>>> conflicting services?
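>>>
>>> The underlying trick is just that two services can share a port by
>>> binding different addresses. A minimal sketch of the idea (addresses
>>> and port are illustrative; 127.0.0.2 works on Linux because the
>>> whole 127/8 block is local):
>>>
>>> import socket
>>>
>>> def listener(addr, port):
>>>     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>>     s.bind((addr, port))  # raises EADDRINUSE on a real conflict
>>>     s.listen(1)
>>>     return s
>>>
>>> # e.g. the "stock" k8s on one address, OpenShift's on another
>>> a = listener("127.0.0.1", 8443)
>>> b = listener("127.0.0.2", 8443)
>>> print("both bound to port 8443 without conflict")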
>>
>> I was working on the part where k8s + openshift run independently and
>> tried different methods [0], [1] to isolate those services to avoid
>> conflicts, and discussed it in our technical meeting. The issue is
>> still not resolved; I had a discussion with an openshift dev team
>> member (Maciej Szulik), who said that it is not a good idea and that,
>> ideally, only one service should be running at any given point (k8s or
>> openshift) [2]. I would love to look at it again if we have a
>> suggestion for a way to go about it. I will also check whether bind
>> will work for our requirements.
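>>
>> For reference, the network-namespace approach from [1] boils down to
>> something like this (a sketch; requires root, and the namespace name
>> is made up):
>>
>> import subprocess
>>
>> # create an isolated network stack; a service started inside it can
>> # bind any port without conflicting with the host's services
>> subprocess.check_call(["ip", "netns", "add", "adb-k8s"])
>> subprocess.check_call(["ip", "netns", "exec", "adb-k8s",
>>                        "ip", "link", "set", "lo", "up"])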
>
> My $0.02 on the topic, to clear up the situation and save some time.
> OpenShift is built on top of k8s, so when running OpenShift you are
> actually running k8s with OpenShift. See, for example, this piece of
> startup code [1] and you'll see how we start the k8s controllers,
> which OpenShift heavily relies on. That's why running both OpenShift
> and a separate k8s is not possible without too much hassle.
> Additionally, there's one more point to remember: when two k8s masters
> are running, they will start fighting over the pods they've started
> and the nodes they control, which will result in more of a mess than
> you would want. Given the above arguments, it's perfectly reasonable
> to show and explain that you should have one or the other running at
> any given point in time.
> Besides, when running OpenShift you still have full access to the k8s
> API in every way you would when running plain k8s; in other words,
> both kubectl and the REST API work. This can be seen by looking again
> at the OpenShift source code [2] or at the logs from an OpenShift
> start.
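>
> As a quick illustration (a sketch, not taken from the docs; the
> master address and token are placeholders), the plain k8s REST API
> answers directly on the OpenShift master:
>
> import json
> import ssl
> import urllib.request
>
> # placeholders: your OpenShift master address and a bearer token
> URL = "https://10.1.2.2:8443/api/v1/namespaces/default/pods"
> req = urllib.request.Request(
>     URL, headers={"Authorization": "Bearer <token>"})
> # the master's cert is typically self-signed, so skip verification
> ctx = ssl._create_unverified_context()
> pods = json.load(urllib.request.urlopen(req, context=ctx))
> for item in pods["items"]:
>     print(item["metadata"]["name"])
>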
> I hope that explains more, and if you have more questions don't hesitate
> to ask.
>
> Maciej
>
>
> [1] 
> https://github.com/openshift/origin/blob/master/pkg/cmd/server/start/start_master.go#L528-L557
> [2] 
> https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go#L50-L56
>

So... the problem is that the two versions are different. If you are
going to deploy in prod on RHEL/CentOS (or Atomic), I believe you get
the RHEL/CentOS version of k8s. As a result, it is not a good test to
use the OpenShift k8s. So, I think we all understand that OpenShift is
using k8s; the problem is that it is the wrong version, IIUC.

I also understand that binding to different IPs is painful and that
there are potential conflicts with docker; however, I think the
experimentation is worth it to simplify the experience for the end
user.

If the experiment fails, or proves to be too much effort, then, yes, I 
think turning them each "on and off" is probably the best answer.

langdon

>>
>> [0] 
>> http://post-office.corp.redhat.com/archives/aos-devel/2015-October/msg00936.html
>> [1] 
>> http://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/
>> [2] http://fpaste.org/286287/23068144/
>>
>>>
>>>
>>> Is that it?
>>>
>>> langdon
>>
>>
>>



