[katello-devel] RFC: Strategy for Importing Data from Spacewalk

Cliff Perry cperry at redhat.com
Mon Oct 1 12:58:53 UTC 2012


On 10/01/2012 01:46 PM, Bryan Kearney wrote:
> On 09/28/2012 03:43 PM, Mike McCune wrote:
>> On 09/28/2012 10:54 AM, Hugh Brock wrote:
>>> On Fri, Sep 28, 2012 at 01:06:13PM -0400, Bryan Kearney wrote:
>>>> On 09/28/2012 12:59 PM, James Labocki wrote:
>>>>> Comments inline
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Bryan Kearney"<bkearney at redhat.com>
>>>>>> To: katello-devel at redhat.com, "James Labocki"<jlabocki at redhat.com>,
>>>>>> ejacobs at redhat.com
>>>>>> Sent: Friday, September 28, 2012 12:42:06 PM
>>>>>> Subject: RFC: Strategy for Importing Data from Spacewalk
>>>>>>
>>>>>> All:
>>>>>>
>>>>>> I am looking for comments on an approach for migrating data from
>>>>>> spacewalk into katello. The model I was looking at is one set of
>>>>>> scripts [1] which can export data from a spacewalk server and create
>>>>>> a set of flat files. These flat files could then be loaded into
>>>>>> katello using new import commands [2] in the cli. My thinking was
>>>>>> that the import commands could be re-used for any other initial setup
>>>>>> work which may not be from an existing spacewalk server.
>>>>>
>>>>> Are there any examples of other systems management tools that create
>>>>> flat files for backing up or migrating configuration and data? If
>>>>> there are, and we could align enough with them to be able to support
>>>>> them, then this approach could provide value beyond just migrating
>>>>> Satellite to Katello (Opsware to Katello, OEM to Katello, etc.). I
>>>>> doubt there is a standard way that exists, but it might be worth
>>>>> thinking about.
>>>>
>>>> I don't know; I can look around.
>>>>
>>>>>
>>>>>>
>>>>>> What is working today is the following:
>>>>>>
>>>>>> Export Scripts:
>>>>>> -------------------
>>>>>> - Export orgs, users, activation keys, system groups.
>>>>>> - There are default credentials, or you can pass them in on the
>>>>>> command line
>>>>>> - The export can write data to stdout, or put it into a file.
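
As a point of reference, here is a minimal sketch of what an export pass of
this kind could look like against the Spacewalk XML-RPC API; the host,
credentials and column layout are placeholders, and the real scripts in [1]
may well differ:

#!/usr/bin/env python
# Illustrative only: dump Spacewalk orgs as a flat file via the XML-RPC API.
# Host, credentials and columns are placeholders, not the scripts from [1].
import csv
import sys
import xmlrpclib                     # Python 2, as shipped with Spacewalk

client = xmlrpclib.Server("https://spacewalk.example.com/rpc/api")
key = client.auth.login("admin", "password")

writer = csv.writer(sys.stdout)      # or open() a file instead of stdout
writer.writerow(["org_id", "org_name"])
for org in client.org.listOrgs(key):
    writer.writerow([org["id"], org["name"]])

client.auth.logout(key)
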
>>>>>
>>>>> Is there any way to go directly to Katello instead of creating the
>>>>> file? Are we doing this because we don't yet trust the export/import?
>>>>
>>>> We could. I liked the csv files for a couple of reasons:
>>>>
>>>> 1) If there were custom transforms which needed to occur between
>>>> extract and loading, this allowed for it.
>>>> 2) The load could be run several times.
>>>> 3) The load is not connected to the export, so a user who is setting
>>>> up data could create csv files by some other means.
>>>
>>> If the data is hierarchical, I suppose there might be some merit in
>>> using JSON rather than CSV. But I like the flat file idea in general.
>>>
>>> --H
>>>
>>
>> for the love of all that is holy, please let's use something other than
>> CSV. CSV is garbage when it comes to encoding, spaces, formatting,
>> hierarchy, and all the other things we have support for in more modern
>> text transports (JSON, XML, YAML, etc.)
>>
>> that aside, +1 to a text-based format.
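
To make the encoding/hierarchy point concrete, a small illustration (the
field names are made up) of the same record as JSON versus a flattened CSV
row; the nested list has no natural home in CSV, and the non-ASCII value
has to be encoded by hand:

# -*- coding: utf-8 -*-
# Illustration only: one made-up record serialized as JSON vs. flat CSV.
import csv
import json
import StringIO                                # Python 2

record = {
    "name": u"Büro-Schlüssel",                 # non-ASCII value
    "entitlements": ["provisioning", "virt"],  # nested data
}

print json.dumps(record)                       # structure and escaping handled

buf = StringIO.StringIO()
csv.writer(buf).writerow([record["name"].encode("utf-8"),
                          ";".join(record["entitlements"])])
print buf.getvalue()                           # list collapsed into an ad-hoc delimiter
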
>>
>
> My thought was that folks may want to hand-edit the files. But I can
> easily move away from it.
>

Well, I do like the idea of the three-step (maybe two-step) process:
'export to file', [optionally] munge the files, then read the files and
write/call APIs to create the new records. A single-step process removes
the optional middle bit, but makes the data handling easier.

But I also see Mike's point about how well CSV holds UTF-8 data.
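
As a rough sketch of that third 'read files and call APIs' step, assuming a
UTF-8 CSV input and an organization-creation call on the Katello REST API
(the endpoint path and payload below are assumptions, not the actual import
commands from [2]):

#!/usr/bin/env python
# Illustration only: read a flat file and create each org over the Katello
# API.  The URL and JSON payload are assumptions; the real import commands
# in [2] would own this logic and should stay safely re-runnable.
import base64
import csv
import json
import urllib2

URL = "https://katello.example.com/katello/api/organizations"   # assumed path
AUTH = "Basic " + base64.b64encode("admin:admin")

with open("orgs.csv", "rb") as f:
    for row in csv.DictReader(f):
        name = row["org_name"].decode("utf-8")    # csv hands back bytes in py2
        req = urllib2.Request(URL,
                              data=json.dumps({"name": name}),
                              headers={"Content-Type": "application/json",
                                       "Authorization": AUTH})
        urllib2.urlopen(req)   # a real loader would skip orgs that already exist
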

Cliff



> -- bk
>



