[almighty] Build service and providers in Almighty - overview

Tomas Nozicka tnozicka at redhat.com
Fri Oct 21 15:48:31 UTC 2016


Hi Shoubhik,

you have a good point that I haven't described the OAuth specifics in
my post, just mentioned them several times. That's mostly because I
haven't really decided yet which approach to take, and that mail was
already long enough without trying to tackle this issue as well.

Continues inline...

On Thu, 2016-10-20 at 19:02 +0530, Shoubhik Bose wrote:
> Hi Tomas,
> Very nice overview!
> I had a few thoughts on the auth tokens.
> I see it mentioned in a number of sections and wanted to share a generalized understanding:
> 1. ALM would need to be pre-configured with an OAuth application id + application secret for every external system we integrate with for builds. These could be read by alm-core as environment variables and passed on to the SPIs when needed. The environment variables can be set as OpenShift Secrets.
> This allows ALM to identify itself when it talks to the external build system.
Well, regarding the approach you are correct, although I don't
think "application id + application secret" is actually a requirement
for OAuth. It is just the way some services provide you with OAuth
credentials to obtain a token, but there are other ways. For example,
OpenShift does not have those. (It has a concept of service accounts
which by themselves provide you with a special long-term token to use
for authentication. But they need a namespace to be created in, and
permissions set up for other namespaces; generally some non-trivial
management.)

The way I see it:
 - The user enters the URL of an OAuth resource (like a build
environment) in the UI (e.g. openshift.com)
 - They are redirected to openshift.com and asked for their regular
credentials
 - Almighty gets back a token that is stored in the configuration
together with the URL
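The steps above are the standard OAuth2 authorization-code flow. A
minimal sketch of the initial redirect in Go (the endpoint path, client
name, and URLs here are all illustrative assumptions, not the actual
Almighty implementation):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildAuthorizeURL constructs the redirect that sends the user to the
// OAuth server (e.g. openshift.com) to log in with their regular
// credentials. Almighty itself never sees those credentials; it only
// receives a token on the callback.
func buildAuthorizeURL(authServer, clientID, callback, state string) string {
	q := url.Values{}
	q.Set("response_type", "code")
	q.Set("client_id", clientID)
	q.Set("redirect_uri", callback)
	q.Set("state", state) // CSRF protection; must be verified on return
	return authServer + "/oauth/authorize?" + q.Encode()
}

func main() {
	u := buildAuthorizeURL("https://openshift.example.com", "almighty-ui",
		"https://alm.example.com/oauth/callback", "random-state")
	fmt.Println(u)
}
```

The callback handler would then exchange the returned code for the
token and store it alongside the resource URL, as described above.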

The only issue with this approach is that the token might have a
limited lifetime, so when running a build, Almighty shall detect the
expiry and ask the user to refresh the token by redirecting them back
to openshift.com. But we should be prepared to do the same even with
long-term tokens, because they can still expire someday.

There is also a "backup" and more complicated way where we let the
user log in with a username and password and the provider sets up a
long-term service account on their behalf, but I would rather not go
this way; at least for now. (It may be needed in the case of
cross-cluster deployments, but in a limited way.)

And there will be a default build environment, on the same OpenShift
instance as Almighty is running on, for which the above-mentioned
authentication steps will result in a "noop" (when the user is already
logged in to Almighty).

Yes, environment variables and secrets will be passed to the SPI
provider using CRUD operations. In the case of OSBP this will be
mapped to OpenShift secrets and environment variables for pipelines.
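For the OSBP case, that mapping could end up as something like the
following OpenShift Secret (an illustrative sketch only; all names and
values are invented):

```yaml
# Hypothetical Secret created by OSBP from an OAuth token that was
# handed over through the Build SPI; a pipeline build would then
# reference it for cross-cluster authentication.
apiVersion: v1
kind: Secret
metadata:
  name: alm-build-oauth-token
type: Opaque
stringData:
  resource-url: https://openshift.example.com
  token: example-oauth-token
```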

> 2. With the help of the ALM UI, the user gives ALM authorization to manage builds on the external system. This step gives ALM an OAuth access token with which it is able to manage builds on the external system on the user's behalf.
> This access token allows ALM to do privileged stuff with the external system.
True.

> Expiry:
> The expiry of access tokens should be set as large as possible because ALM gets to specify the expiry during the OAuth token generation procedure.
I don't think the client is the one to choose, and we can't influence
the default values in the target platform (OAuth server), like
OpenShift Online. We can do it for the default build environment,
which will be pre-configured, but that's about it.


> We would need to have ways to notify the user in the UI when the token expires, gracefully.
Correct.


Regards,
Tomas

> 
> Let me know if this is what you intended to explain.
> Thanks
> Shoubhik 
> 
> On Oct 13, 2016 8:54 PM, "Tomas Nozicka" <tnozicka at redhat.com> wrote:
> > Hi all,
> > 
> > this is a followup on Andy's initial mail[1] about this topic and
> > presents my view on what the Build Service (and providers) should look
> > like.
> > 
> > Feel free to challenge any design here as you see fit; we are still
> > in the early stages and figuring it out ourselves.
> > 
> > Andy published the generic design in his last email[1] at the highest
> > abstraction level, and that stays the same.[2]
> > 
> > Let's start by defining what the Build Service is, because the name
> > might seem a bit confusing; at least it was for me when I joined this
> > project. The Build Service should provide users with the ability to
> > transform repository code (e.g. a git branch) into artifacts (like
> > docker images) and the ability to *deploy* them to a target
> > environment (e.g. OpenShift). The Build Service should be able to
> > cover the whole CI/*CD* story, not just builds.
> > 
> > There is already a similar concept implemented in OpenShift called
> > *Pipelines* [3]. But, in ALM, we want to be more generic and provide
> > users with a Build Service with pluggable providers. That's why we
> > are creating the Build SPI; to abstract the provider specifics away.
> > Generally speaking, a regular (non-pipeline) build job looks like a
> > pipeline of 0 stages, which is a valid pipeline.
> > 
> > Although there will most probably be different providers in the
> > future, we will start by implementing the "OpenShift Build Provider"
> > (OSBP), which will use OpenShift's pipelines.
> > 
> > There are 3 parts connected by the Build SPI and the Almighty public API:
> >  - ALM Core - Build Service
> >  - OSBP
> >  - ALM UI
> > 
> > The good thing is that after we define the interfaces (mainly the
> > Build SPI), those can be worked on in parallel.
> > 
> > = ALM UI =
> > There is a lot of work done for visualizing pipelines in OpenShift's
> > Console. This is OSS and we could try to reuse those blocks. They
> > will most definitely need modifications because, e.g., they read
> > status directly from OpenShift's API, and that will be abstracted
> > away by the Build SPI and ALM API in our case. Also, I am not sure
> > how well they visualize pipelines without any stages. And I remember
> > Michael mentioning that the Console uses Angular v1 and the ALM UI v2.
> > 
> > The UI will also need to be able to indirectly ask the user for
> > credentials for the build and target/prod environments (more
> > generally, for 1-N environments) using OAuth2. ALM's core and build
> > providers will authenticate only with the token produced by this
> > (OAuth) step. And the rest of those tokens will be given to the
> > build provider as secrets for authentication to other environments
> > (e.g. in the case of cross-cluster deployments).
> > 
> > 
> > = ALM Build Service =
> > This will be the service in ALM core that will issue calls against
> > the Build SPI provider. How it will be represented, mainly what
> > information it needs to hold, will be a strict superset of the
> > provider configuration.
> > 
> > 
> > = Build Providers =
> > The build provider will communicate with the Build Service through
> > the Build SPI, which abstracts the provider specifics away. It will
> > have several functions like:
> >  - CRUD operations
> >  - StartBuild
> >  - CancelBuild
> >  - GetStages
> >  - GetLogs
> >  - and many more (this will be in Build SPI spec)
> > which can all be made to work with both pipelines and regular build
> > jobs, under the assumption that a regular build job is a pipeline of
> > 0 stages.
> > 
> > 
> > = Full picture =
> >  - An ALM admin will add providers (register them) to ALM; or they
> > will be registered the way ALM SPIs are... I haven't looked into it
> > yet :(
> >  - The user will choose to add a Build Service to their project by
> > choosing a provider in the UI
> >  - The user will configure OAuth for their build environment
> > [(URL)->(token)] and optionally other OAuth resources
> > [(URL, name)->(name, token)], which will be passed to the build
> > provider as secrets that can be used e.g. for cross-cluster
> > deployments in that build
> >    Since this will be done through OAuth, ALM shall never see the
> > user's credentials.
> >  - Build Service instance is created in ALM core
> >  - Build Service uses provider to create BuildConfig instance passing
> > it necessary configuration like: token, secrets[(name, token)],
> > repository reference, ...
> >  - followed by many Build SPI calls...
> > 
> > ALM core shall verify that all OAuth tokens are valid (not expired)
> > before calling the Build SPI, and refresh them in cooperation with
> > the ALM UI if needed. (For the main token the Build SPI will return
> > unauthorized, but for the OAuth tokens in secrets the underlying
> > technology does not support that, so the build would otherwise be
> > marked as failed. This might be a common scenario since some tokens
> > may last only about a day or so.)
> > 
> > Also, in case ALM runs on the same OpenShift instance as you set up
> > for your builds, or there is a dedicated OpenShift instance for ALM
> > builds set up with ALM as OpenShift's identity provider, you can be
> > authenticated to your build environment using your ALM token without
> > being asked for credentials. If allowed, this should be the default.
> > 
> > = Next steps =
> > I will work on putting the Build SPI proposal on paper, and also on
> > how we should represent the Build Service in core, which is
> > connected with it.
> > 
> > There is also a user story for the next sprint on GitHub[3].
> > 
> > I would appreciate your feedback!
> > 
> > Regards,
> > Tomas
> > 
> > [1] - https://www.redhat.com/archives/almighty-public/2016-September/ms
> > g00150.html
> > [2] - https://drive.google.com/file/d/0B10zSvDl_cuwMHZtTmR1RVIteGM/view
> >  (RH only, but there is a screenshot attached in [1])
> > [3] - https://github.com/almighty/almighty-core/issues/352
> > 
> > _______________________________________________
> > almighty-public mailing list
> > almighty-public at redhat.com
> > https://www.redhat.com/mailman/listinfo/almighty-public
> > 



