[Freeipa-users] multihome - single interface?

Petr Spacek pspacek at redhat.com
Wed Apr 15 14:50:48 UTC 2015


On 15.4.2015 09:59, Janne Blomqvist wrote:
> On 2015-04-14 10:17, Petr Spacek wrote:
>> On 13.4.2015 16:07, Janne Blomqvist wrote:
>>> On 2015-04-10 12:05, Petr Spacek wrote:
>>>> On 10.4.2015 10:52, Janne Blomqvist wrote:
>>>>> On 2015-04-07 14:29, Martin Kosek wrote:
>>>>>> On 04/05/2015 08:03 PM, Dmitri Pal wrote:
>>>>>>> On 04/05/2015 12:51 PM, Janelle wrote:
>>>>>>>> Hello,
>>>>>>>> 
>>>>>>>> Trying to find a way on a multi-homed server to force IPA and its
>>>>>>>> related apps to listen on a specific interface. I can find all kinds
>>>>>>>> of info saying "the services listen on all interfaces by default" so
>>>>>>>> there must be a way?
>>>>>>>> 
>>>>>>>> Thank you ~J
>>>>>>>> 
>>>>>>> Sounds familiar. I think there is a ticket open for that.
>>>>>> 
>>>>>> This is the RFE:
>>>>>> 
>>>>>> https://fedorahosted.org/freeipa/ticket/3338
>>>>>> 
>>>>>> Just in case anybody would like to help us extend FreeIPA installers :-)
>>>>>> 
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I have a related, or opposite really, problem.
>>>>> 
>>>>> So I have configured IPA for a domain (say, ipa.example.org). Then I
>>>>> have a bunch of client machines that can join the domain etc. Fine so
>>>>> far.
>>>>> 
>>>>> However, I also have another bunch of client machines on an internal
>>>>> network (with NAT access to the outside world). So for these I add
>>>>> another network interface on the ipa servers. So my ipa servers have
>>>>> two IP's and dns names, say, ipa1.ipa.example.org (some public IP) and
>>>>> ipa1.local (10.x.x.x IP). Now it doesn't work so well anymore for these
>>>>> clients, because the krb principals for the IPA server(s) are bound to
>>>>> the public name, so joining the domain fails (ipa1.local !=
>>>>> ipa1.ipa.example.org). I can sort-of make it work by joining via the
>>>>> public interface (manually creating the machine accounts on the ipa
>>>>> server first, since otherwise it doesn't understand clientX.local dns
>>>>> names/IP's), but then obviously all communication goes via the NAT box
>>>>> which is a SPOF.
>>>>> 
>>>>> So is there some reasonable way to make the above work?
>>>> 
>>>> IMHO cleanest solution is to properly configure routing in your network
>>>> to route your public IP range properly to the respective subnet instead
>>>> of going through a NAT.
>>>> 
>>>> Details depend on your network so I do not have exact steps for you,
>>>> sorry.
>>>> 
>>> Thanks. So do you mean something like on each client machine in the NATed
>>> network I add special routes to the ipa servers? And by that the client
>>> machines would know that ipa1.ipa.example.org can be reached via
>>> ipa1.local instead of going via the default route (which is the NAT box)?
>> 
>> Details really depend on your setup. For example:
>> 
>> - IPA servers are in subnet 10.1.1.0/24 and have public addresses in
>>   192.0.2.0/24 subnet.
>> - Clients are in 10.2.2.0/24 subnet behind NAT, subnet gateway is
>>   10.2.2.254.
>> 
>> In this setup you need to add a route for 192.0.2.0/24 on the gateway
>> 10.2.2.254 (and to add 192.0.2.0/24 addresses to the IPA server interfaces
>> if they are not configured yet).
>> 
>> If you have a really small network where all hosts are in a single subnet,
>> then you might need to add the route on multiple hosts to get rid of the
>> SPOF on the gateway.
>> 
>> Here you need to consider whether adding the route to all hosts is worth
>> the effort: what happens if the gateway is down? Is the gateway a separate
>> router, or is it some kind of all-in-one switch+router, as typically seen
>> in really small setups?
>> 
>> I hope this helps.
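
To make the routing example above concrete, here is a minimal sketch of the
route change being described, driving iproute2 from Python. It reuses the
example addresses from the quoted reply; NEXT_HOP is a placeholder for
whatever next hop actually reaches the IPA subnet from the clients' gateway,
so adjust it before running:

    #!/usr/bin/env python3
    # Sketch only: add a direct route for the IPA servers' public subnet so
    # that traffic does not follow the default route through the NAT box.
    # Run as root on the 10.2.2.0/24 gateway (or on each client, if there is
    # no separate gateway you can configure). NEXT_HOP is a placeholder.
    import subprocess

    IPA_PUBLIC_SUBNET = "192.0.2.0/24"  # public addresses of the IPA servers
    NEXT_HOP = "10.1.1.1"               # placeholder: direct path to the IPA subnet

    def ip(*args):
        """Run an iproute2 command and fail loudly if it returns non-zero."""
        cmd = ["ip", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # "ip route replace" creates the route, or updates it if it already exists.
    ip("route", "replace", IPA_PUBLIC_SUBNET, "via", NEXT_HOP)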
> 
> Ok, let's take a few steps back and allow me to explain. So the system I'm
> discussing is an HPC cluster. There is a special "frontend" node with a
> public IP & DNS name where users log in, compile their code, submit batch
> jobs etc. Then there are a bunch of "compute" nodes which execute the batch
> jobs (at the moment about 550 compute nodes, FWIW). These compute nodes are
> on a private 10.x.x.x network, where the frontend node also has an IP and
> DNS name. The frontend node then also functions as a NAT gateway for the
> internal compute network.
> 
> Now, what we want to do is migrate from the existing cluster-specific
> passwd/group databases to a FreeIPA cluster which is also shared by some
> other machines. But the simple solution of adding an extra interface to the
> IPA servers to connect them directly to the cluster-internal 10.x.x.x
> network doesn't work, as then the Kerberos principal names of the IPA
> servers don't match the DNS names on the cluster-internal network.

Okay. Do I understand correctly that the FreeIPA server will be outside the
cluster network, i.e. in the 'public' subnet?

What prevents you from using the 'public' name and IP address of the FreeIPA
server for the cluster nodes inside the NATed network? It should just work as
long as routing on the NAT box is set up properly.

What am I missing? :-)
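
As a quick sanity check of that approach, a NATed compute node can verify,
before enrollment, that the server's public name resolves there and that the
core IPA ports are reachable through the gateway. A small sketch, reusing the
example name ipa1.ipa.example.org from earlier in the thread:

    #!/usr/bin/env python3
    # Reachability check from a NATed client: does the IPA server's public
    # name resolve, and are the Kerberos/LDAP/HTTPS ports reachable?
    # "ipa1.ipa.example.org" is the example name used earlier in the thread.
    import socket

    IPA_SERVER = "ipa1.ipa.example.org"
    PORTS = {"Kerberos": 88, "LDAP": 389, "HTTPS": 443}

    addrs = {info[4][0] for info in socket.getaddrinfo(IPA_SERVER, None)}
    print(f"{IPA_SERVER} resolves to: {', '.join(sorted(addrs))}")

    for service, port in PORTS.items():
        try:
            with socket.create_connection((IPA_SERVER, port), timeout=5):
                print(f"{service} (tcp/{port}): reachable")
        except OSError as exc:
            print(f"{service} (tcp/{port}): NOT reachable ({exc})")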

-- 
Petr^2 Spacek



