[Container-tools] Nulecule/AtomicApp BoF at OSCon

John Mark Walker jowalker at redhat.com
Mon Aug 3 14:53:51 UTC 2015


----- Original Message -----

> On Mon, Aug 3, 2015 at 10:03 AM, Scott McCarty < smccarty at redhat.com > wrote:

> >> Did somebody say Satellite 6.1 is a Docker Registry server?
> >> http://red.ht/1SA5JF7

> It's not. It's an enterprise content management workflow that supports
> docker-packaged content and the docker protocol for distribution.

> It does *not* support the docker protocol for publishing content and
> therefore on its own cannot support a docker developer workflow.

Uh... that's his sig :) The stuff you should read is down below. 

> > Getting back to this question. I see container image (and Glance image)
> > builds as a software supply chain that roughly follows the paths below [1].
> > For customers, provenance is the key governing factor [3][4]:
> 

> > Custom Development
> 
> > OS (vendor) -> Operations Team Core Build -> Middle-ware Stack (customer) -> Development Team (customer)
> 

> > Commercial Off the Shelf Software
> 
> > OS (vendor) -> ISV Software (vendor) -> Operations Team Core Build (customer) -> Development Team (customer)
> 

> > At each of these stages, a Dockerfile could/should be used.
> 
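> > A minimal sketch of that layering might look like this (image names and
> > registry host are illustrative, not real repositories):
> 
> > # ops-core-build/Dockerfile -- Operations Team Core Build
> > FROM rhel7
> > RUN yum -y update && yum clean all
> 
> > # middleware/Dockerfile -- Middle-ware Stack, layered on the core build
> > FROM registry.example.com/ops/core-build:latest
> > RUN yum -y install java-1.8.0-openjdk && yum clean all
> 
> > # app/Dockerfile -- Development Team code, layered on the middle-ware image
> > FROM registry.example.com/middleware/java:latest
> > COPY target/app.war /deployments/
> 
> > Each team owns its own Dockerfile, and the provenance of the lower layers
> > is inherited through the FROM line.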

> > I think the current configuration management tools (Chef, Puppet, Ansible,
> > Salt, CFEngine), as of August 2015, are nicely called from Dockerfiles
> > during container build. Current configuration management tools are quite
> > good at doing complicated installations, including configuration settings,
> > etc. Interestingly, each team could use whichever CM tool they want
> > without interfering with the others.
> 
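> > As a rough sketch (base image, playbook name, paths, and package
> > availability are assumptions), one team's Dockerfile might drive its
> > build with Ansible:
> 
> > FROM rhel7
> > COPY playbooks/ /opt/playbooks/
> > # Install the CM tool only for the build, run it against the local
> > # filesystem, then remove it so it does not ship in the image
> > RUN yum -y install ansible && \
> >     ansible-playbook -i localhost, -c local /opt/playbooks/middleware.yml && \
> >     yum -y remove ansible && \
> >     yum clean all
> 
> > Another team could swap that RUN line for puppet apply or chef-solo
> > without affecting the layers above or below it.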

> > In my lab, at the OS stage, I have kickstarts for building Glance/RHEV
> > images and Dockerfiles for building "container images." Both the
> > kickstarts and Dockerfiles call the same Puppet modules during build. This
> > works well TODAY, with no technology changes, to create a standard image
> > for both. Security hardening and other things that would traditionally be
> > included in the core build are done at this stage.
> 
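> > Roughly, the container side of that looks like this (module name and
> > paths are illustrative):
> 
> > FROM rhel7
> > RUN yum -y install puppet && yum clean all
> > COPY puppet/modules/ /usr/share/puppet/modules/
> > # Same invocation the kickstart %post section runs when building the
> > # Glance/RHEV image, so both image types get the same core build
> > RUN puppet apply --modulepath=/usr/share/puppet/modules -e 'include core_build'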

> > In my lab, at further stages, I have separate Dockerfiles for middle-ware
> > and code. In real life, each of these Dockerfiles could be owned by a
> > different team as desired: Middle-ware, Developers, Operations, etc. These
> > could be controlled with Satellite (as a registry server [5]), etc.
> 
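> > For instance, the developers' Dockerfile could pull the middle-ware layer
> > straight from the internal registry (hostname, port, and tag are made up):
> 
> > FROM satellite.example.com:5000/ops-middleware-jboss:6.4
> > COPY target/app.war /deployments/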

> > The black hole I currently see is that no configuration management tool
> > (nor Red Hat) is giving the community guidance on how to build containers
> > in a standard way so that they CAN be layered (as above) and orchestrated.
> > Look at the official MySQL container image on Docker Hub [2] and notice
> > how critical configuration options are actually reserved for startup.
> 
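> > The pattern there is roughly this: nothing in the image bakes in the
> > critical settings; an entrypoint script reads them from the environment
> > when the container starts (a sketch, not the real mysql Dockerfile):
> 
> > FROM rhel7
> > RUN yum -y install mariadb-server && yum clean all
> > COPY docker-entrypoint.sh /
> > # Root password and initial schema are NOT set at build time; the
> > # entrypoint reads MYSQL_ROOT_PASSWORD / MYSQL_DATABASE at `docker run`
> > # time and initializes the data directory on first start
> > ENTRYPOINT ["/docker-entrypoint.sh"]
> > CMD ["mysqld"]
> 
> > Which is exactly the kind of convention that ought to be written down
> > somewhere so every image handles it the same way.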

> > [1]: http://crunchtools.com/core-builds-service/
> 
> > [2]: https://registry.hub.docker.com/_/mysql/
> 
> > [3]: attached
> 
> > [4]: attached
> 
> > [5]: attached
> 

> > Best Regards
> 
> > Scott M
> 