[Container-tools] Atomic Developer Bundle and OpenShift

Langdon White langdon at redhat.com
Wed Nov 4 15:51:59 UTC 2015



On 11/04/2015 10:41 AM, Clayton Coleman wrote:
> The one in RHEL is going to match what we ship in 3.1 the next time we rev RHEL.
+1
> On Wed, Nov 4, 2015 at 8:31 AM, Michal Fojtik <mfojtik at redhat.com> wrote:
>> Ideally the version we have in RHEL should match the version we have in
>> OpenShift (at least on the API level).
+1
>> On Wed, Nov 4, 2015 at 1:45 PM, Vaclav Pavlin <vpavlin at redhat.com> wrote:
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:39 PM, Maciej Szulik <maszulik at redhat.com> wrote:
>>>>
>>>>
>>>> On 11/04/2015 05:06 AM, Praveen Kumar wrote:
>>>>> On Tue, Nov 3, 2015 at 10:35 PM, Langdon White <langdon at redhat.com>
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> On 11/03/2015 04:21 AM, Maciej Szulik wrote:
>>>>>>>
>>>>>>> My $0.02 on the topic, to clarify the situation and save some time.
>>>>>>> OpenShift is built on top of k8s, so when running OpenShift you
>>>>>>> actually run k8s with OpenShift. See for example this piece of
>>>>>>> startup code [1] and you'll see how we start the k8s controllers,
>>>>>>> on which OS heavily relies.
>>>>>>> That's why running both OpenShift and standalone k8s is not possible
>>>>>>> without too much hassle. Additionally, there's one more point to
>>>>>>> remember: when two k8s masters are running they will start fighting
>>>>>>> over the pods they've started and the nodes they control, which will
>>>>>>> result in more mess than you would want.
>>>>>>> Given the above, it's perfectly understandable to show and explain
>>>>>>> that you should have only one or the other running at any given
>>>>>>> point in time.
>>>>>>> Besides, when running OS you still have full access to the k8s API
>>>>>>> in every way you would when running plain k8s; in other words, both
>>>>>>> kubectl and the REST API work. This can be seen by looking again at
>>>>>>> the OS source code [2] or at the logs from OS startup.
>>>>>>> I hope that explains things, and if you have more questions don't
>>>>>>> hesitate to ask.
>>>>>>>
>>>>>>> Maciej
>>>>>>>
>>>>>>>
>>>>>>> [1]
>>>>>>>
>>>>>>> https://github.com/openshift/origin/blob/master/pkg/cmd/server/start/start_master.go#L528-L557
>>>>>>> [2]
>>>>>>>
>>>>>>> https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go#L50-L56
>>>>>>>
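(As an aside, the "both kubectl and the REST API work" point is easy to
check against a running master. Below is a rough Python sketch; the master
URL and the token are placeholders, not anything from this thread, and a
real token would come from something like `oc whoami -t`.)

import requests

# Assumptions (placeholders only): an all-in-one OpenShift master on the
# default port 8443, and a bearer token for a user allowed to list pods.
MASTER = "https://localhost:8443"
TOKEN = "REPLACE_WITH_A_REAL_TOKEN"

# The plain Kubernetes REST API is served by the OpenShift master itself.
resp = requests.get(
    MASTER + "/api/v1/namespaces/default/pods",
    headers={"Authorization": "Bearer " + TOKEN},
    verify=False,  # dev boxes typically run with self-signed certs
)
resp.raise_for_status()
for pod in resp.json()["items"]:
    print(pod["metadata"]["name"], pod["status"]["phase"])
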
>>>>>> So.. the problem is that the two versions are different. If you are
>>>>>> going to deploy in prod on RHEL/CentOS (or Atomic), I believe you get
>>>>>> the RHEL/CentOS version of k8s. As a result, the OpenShift k8s is not
>>>>>> a good one to test against. So, I think we all understand that
>>>>>> OpenShift is using k8s; the problem is that it is the wrong version.
>>>>>> IIUC.
>>>>
>>>> Langdon, can you please specify what you mean by OpenShift using the
>>>> wrong version of k8s? We're trying hard to follow upstream; our
>>>> current version is 20 days old [1], with quite a lot of cherry-picks
>>>> that are crucial for our 1.1 release, which is happening soon.
>>> Hi Maciej,
>>>
>>> I think Langdon meant the difference between the version of Kube used
>>> in OS and the one shipped in RHEL Atomic (but that's just my rough
>>> guess :))
yep
>>> Cheers,
>>> Vašek
>>>>
>>>> [1]
>>>> https://github.com/kubernetes/kubernetes/commit/4c8e6f47ec23f390978e651232b375f5f9cde3c7
>>>>
>>>>> Yes, this is what our current issue is (the version mismatch).
>>>>> Yesterday I again had a chat with Maciej and Michal about it.
>>>>> According to them, in the future this will be taken care of by the
>>>>> OpenShift team (they are going to use the same version of k8s with OS
>>>>> as well). I also did some experimenting [0] with the k8s API that
>>>>> OpenShift provides, and there is some network issue with services
>>>>> (I have to follow up with Maciej about it).
>>>>>
>>>>>> I also understand that it is painful to bind to different IPs and
>>>>>> that there are potential conflicts on docker; however, I think the
>>>>>> experimentation is worth it to simplify the experience for the end
>>>>>> user.
>>>>>
>>>>> Yes, it is painful, but I got a suggestion to try the bind()
>>>>> function [1] with a different IP per service, which I will check
>>>>> today to see whether it works out, and I will let you folks know. The
>>>>> k8s and etcd services have options to run on a port other than the
>>>>> default, and when I used this option I was able to run the k8s
>>>>> services alongside OpenShift, but then whatever pods had been running
>>>>> on plain k8s just crashed as soon as the OS service started. So even
>>>>> if we map each service to a
>>>>
>>>> This is the case I was talking about. K8s is very aggressive when it
>>>> comes to managing docker containers. If you watch carefully, every
>>>> container started by k8s has a `k8s_` prefix; with this in mind, you
>>>> might imagine what happens when two k8s masters (the standalone one
>>>> and the OpenShift one) run, each trying to take over all the
>>>> containers with that prefix. And containers are not the only thing
>>>> that k8s manages very aggressively; another that comes to mind is
>>>> networking, which provides access to the internal parts of the
>>>> created infrastructure. The bind problem Praveen is facing now is, I
>>>> imagine, the first on a long list of problems you'll have to deal
>>>> with.
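(Side note: an easy way to see which containers a kubelet has claimed is to
filter on that `k8s_` prefix. Rough sketch only, assuming the docker CLI is
installed and the current user can talk to the docker daemon.)

import subprocess

# The kubelet names its docker containers with a `k8s_` prefix; the name
# filter matches on substrings, so this lists everything a k8s master has
# taken over on this host.
out = subprocess.check_output(
    ["docker", "ps", "--filter", "name=k8s_",
     "--format", "{{.ID}}\t{{.Names}}"]
)
for line in out.decode().splitlines():
    print(line)
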

Definitely. I would not be surprised by any of the issues described.
However, we still have to offer our users a platform where they can
develop containers and test them in an environment *not* running
openshift. Personally, I think the original plan of two completely
separate vagrant boxes would be simpler. I also think we could do the
"on/off" switch in a vagrantfile, either with provisioning or with a
plugin. I was just hoping to provide users with a simpler way to choose
between native kube and openshift. I did not think it would be easy;
however, I would like the *user* experience to be as good as possible,
even if our engineers have to jump through some crazy hoops to make it
work.

>>>>> different IP to make sure no port conflict occurs, this issue will
>>>>> still bug us. (Maciej also ran into the same issue when I discussed
>>>>> it with him, and he then suggested running a single service at any
>>>>> given time.)
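(To make the bind() idea concrete, here is a toy sketch with plain Python
sockets, not k8s or etcd configuration: two listeners can share a port as
long as each binds a distinct local address, but if either one binds
0.0.0.0 the conflict comes right back. The addresses and port below are
just examples.)

import socket

def listen_on(ip, port=8080):
    # Bind to one specific local address instead of the 0.0.0.0 wildcard.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((ip, port))
    s.listen(5)
    return s

# On Linux the whole 127.0.0.0/8 range is routed to loopback, so both of
# these bind without any extra interface setup.
a = listen_on("127.0.0.1")   # stand-in for the "plain k8s" side
b = listen_on("127.0.0.2")   # stand-in for the OpenShift side
print("both listeners bound; no port conflict")
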
>>>>
>>>> I'm in contact with Praveen about solving the problems he's having
>>>> running services on top of the internal OpenShift k8s, but most of
>>>> them are due to the security constraints OpenShift imposes to
>>>> restrict access to the k8s infrastructure.

thanks!

langdon
>>>>>> If the experiment fails, or proves to be too much effort, then, yes,
>>>>>> I think turning each of them "on and off" is probably the best
>>>>>> answer.
>>>>>
>>>>> Yes, this is our last option.
>>>>>
>>>>> [0] http://fpaste.org/286696/60948414/
>>>>> [1] http://linux.die.net/man/2/bind
>>>>>
>>>
>>>
>>>
>>> --
>>> Architect - Senior Software Engineer
>>> Developer Experience
>>> Brno, Czech Republic
>>> Phone: +420 739 666 824
>>>
>>>



