[Pulp-dev] 3.0 Validation update

Austin Macdonald amacdona at redhat.com
Tue Apr 11 19:49:23 UTC 2017


Where should business logic live? As an example, I want to consider the
sync task [0] and the need to ensure that an importer is syncable. For now,
let's say that an importer is syncable if it has a feed_url.

Since the call to sync an importer comes from the API, the earliest we can
check that the configuration is syncable is in a not-yet-written SyncView.
Checking syncability here is ideal because it allows us to immediately
return a meaningful HTTP response instead of happily returning information
about a Task that we already know is doomed to fail.
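
To make this concrete, here is a rough sketch of what that check could look
like. It is illustrative only: SyncView does not exist yet, and Importer,
the feed_url handling, and the task dispatch call are stand-ins for whatever
platform actually ends up providing.

    from rest_framework import status
    from rest_framework.response import Response
    from rest_framework.views import APIView


    class SyncView(APIView):
        def post(self, request, importer_pk):
            # Importer is a stand-in for the platform importer model.
            importer = Importer.objects.get(pk=importer_pk)
            if not importer.feed_url:
                # Fail fast with a meaningful HTTP response instead of
                # dispatching a Task we already know will fail.
                return Response(
                    {'detail': 'Importer is not syncable: no feed_url.'},
                    status=status.HTTP_400_BAD_REQUEST)
            # dispatch_sync_task is a placeholder for however we queue the sync.
            task = dispatch_sync_task(importer)
            return Response({'task_id': task.id},
                            status=status.HTTP_202_ACCEPTED)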

Performing the check in the API layer is not enough. We have discussed edge
cases that lead to an importer's feed_url being removed while the sync task
is in the WAITING state. To guarantee to the plugin that the feed_url
exists, we have to check syncability again when the Task moves into an
active state.

My thinking is that we should put this business logic on the model.

Admittedly, this is not a clean fit with the separation-of-concerns
philosophy, but we have already violated that concept by putting the sync
method on the model. If sync is on the model, it seems like ensure_syncable
should be too.
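
A minimal sketch of what I mean, with made-up names (NonSyncableError and
the exact method signature are not settled):

    from django.db import models


    class NonSyncableError(Exception):
        pass


    class Importer(models.Model):
        feed_url = models.TextField(blank=True)

        def ensure_syncable(self):
            """Raise if this importer is not currently syncable."""
            if not self.feed_url:
                raise NonSyncableError('Importer has no feed_url configured.')

The SyncView would call importer.ensure_syncable() before returning a
response, and the sync task would call it again just before it goes active,
so the plugin's sync() can rely on the feed_url being there.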

If we write the platform API layer to use this kind of business logic, then
plugins can add double-checking logic without modifying the API and Task
layers.
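
For example, a plugin importer could extend the same hook without the API or
Task layers changing at all (FileImporter and its fields are invented here,
building on the sketch above):

    class FileImporter(Importer):
        validate_downloads = models.BooleanField(default=False)
        ssl_ca_certificate = models.TextField(blank=True)

        def ensure_syncable(self):
            super().ensure_syncable()  # platform check: feed_url is present
            if self.validate_downloads and not self.ssl_ca_certificate:
                raise NonSyncableError(
                    'validate_downloads requires an ssl_ca_certificate.')

The API and Task layers just keep calling ensure_syncable() and never need
to know about the plugin-specific rule.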


[0]: https://pulp.plan.io/issues/2399

On Fri, Apr 7, 2017 at 2:14 PM, Sean Myers <sean.myers at redhat.com> wrote:

> On 04/07/2017 12:08 PM, Brian Bouterse wrote:
> > == questions ==
> > * Where should ^ terms be documented?
>
> I'm not really sure, but I recommend the wiki as a good starting point for
> putting information that we should probably "officially" document
> *somewhere*, even though at the moment we aren't quite sure where.
>
> https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Developer_Notes
>
> > * Take the case of a sync which has overrides provided. This isn't in the
> > MVP, but in the future it could be. In that case, does the serializer
> > associated with the importer validate the database data with the
> overrides
> > "added" on top of it?
>
> My question here is "validate against what?". It makes sense to validate
> against database data, but as long as the overrides aren't themselves
> stored
> in the database, what does this really stop?
>
> For example, what prevents two simultaneous syncs of repos using overrides
> that would trigger a constraint violation if they were saved, but don't do
> this because we don't save the overrides?
>
> > * For plugin writers writing a serializer for a subclassed Importer, do
> > they also need to express validations for fields defined on the base
> > Importer?
>
> It depends on the validation. If it's just validating an individual field,
> no. If it's validation of a combination of values in multiple fields, and
> one of those fields in this case was defined in a subclass, the subclass
> will need to add the proper validation support.
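
A hedged example of what that could look like for a plugin serializer. The
field names are invented, and ImporterSerializer stands in for whatever base
serializer platform ships:

    from rest_framework import serializers


    class FileImporterSerializer(ImporterSerializer):
        require_signature = serializers.BooleanField(default=False)
        gpg_keyring = serializers.CharField(allow_blank=True, default='')

        def validate(self, data):
            data = super().validate(data)
            # Cross-field rule involving a subclass-defined field, so the
            # subclass serializer has to express it itself.
            if data.get('require_signature') and not data.get('gpg_keyring'):
                raise serializers.ValidationError(
                    'require_signature is set but no gpg_keyring was given.')
            return data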
>
> > * The database still rejects data that doesn't adhere to the data layer
> > definition right? That occurs even without the DRF serializer correct?
>
> Again, this depends. For example, attempting to store an int in a CharField
> will work, because Django will coerce that int to a string on save.
> Attempting to store a non-numeric string in an IntegerField will fail,
> though, because Django cannot coerce such a str to int prior to saving.
> Generally, though, your understanding is correct. Anything that the
> database can't handle will be rejected.
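
A quick illustration of that asymmetry (the model is invented for the
example and would live in some app's models.py):

    from django.db import models


    class Widget(models.Model):
        name = models.CharField(max_length=64)
        count = models.IntegerField()

    Widget.objects.create(name=42, count=1)       # works: 42 is saved as '42'
    Widget.objects.create(name='x', count='7')    # works: '7' coerces to 7
    Widget.objects.create(name='x', count='abc')  # ValueError before the db is hit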
>
> > * In cases where data is created in the backend, do we need to validate
> > that as a general practice? If we do, do we call the DRF serializer
> > regularly in the backend or just let the database reject "bad" data at
> the
> > db level?
>
> As a general practice, I don't think so. Specifically, though, when we're
> passing data around, like when a bit of platform code is taking incoming
> plugin data and passing it into some standard workflow that the platform
> provides (like running sync on an importer, say), I think it's going to be
> a good idea and an all-around gentlemanly thing to do to validate that data
> in some way that is appropriate to the process/workflow being invoked.
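
In practice the cheapest "appropriate" check is probably to run the data
back through the existing serializer, something like this (with
ImporterSerializer standing in for whatever serializer platform defines):

    serializer = ImporterSerializer(data=incoming_plugin_data)
    serializer.is_valid(raise_exception=True)  # standard DRF validation call
    validated = serializer.validated_data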
>
> I'm concerned about finding the balance between making things user-friendly
> for plugin writers and having our checking code that provides that
> user-friendliness itself become difficult to maintain and end up being
> pulp-developer-unfriendly.
>