[Pulp-dev] Webserver owning the entire url namespace?

Jeremy Audet jaudet at redhat.com
Thu Nov 9 19:12:42 UTC 2017


>
> All of that said, there's no reason why a user couldn't use a web server
>> like httpd to run all three WSGI apps in the same process, multiplied
>> across its normal pool of processes. We should make the apps available as
>> separate WSGI apps, and users can deploy them in whatever combinations meet
>> their needs.
>>
>
> As mentioned above, Pulp should use configuration settings to disable and
> enable the REST API and the individual content APIs. Separate WSGI
> applications make the deployment process more complicated.
>

...

Don't we also want to support having a single WSGI process serve both
> content and the REST API? There are practical reasons why users may want to
> deploy this way too. For example having a smaller memory footprint by
> having fewer process groups, or maybe they just want a simpler deployment.
> If we ship one WSGI application with Pulp then we allow for both deployment
> models (together or separate REST API and content serving). Users who want
> to use separated WSGI processes should configure the webserver to
> instantiate the one WSGI application Pulp would ship twice, and route
> content urls to one WSGIProcessGroup and the REST API calls to another
> WSGIProcessGroup. We could document that in a deployments page which would
> be cool.
>
> So for ^ reasons I think having Pulp ship one WSGI application that could
> handle all Pulp urls would allow for the most deployment models.
>

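If I follow the proposal above, the "one WSGI application, two process
groups" httpd/mod_wsgi setup would look roughly like the following sketch.
The process counts, URL prefixes, and the wsgi.py path here are guesses on
my part, not actual Pulp file names:

WSGIDaemonProcess pulp-api     processes=2 threads=4
WSGIDaemonProcess pulp-content processes=4 threads=8
WSGIScriptAlias /pulp/api /srv/pulp/wsgi.py \
    process-group=pulp-api application-group=%{GLOBAL}
WSGIScriptAlias /pulp/content /srv/pulp/wsgi.py \
    process-group=pulp-content application-group=%{GLOBAL}

Both aliases point at the same application; httpd just routes each URL
prefix to its own daemon process group.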
Can someone write up an nginx configuration to show what these different
deployment options would look like in practice? I ask for this because some
of the assertions made in this thread don't match my practical experience.
For example, if I understand correctly, one proposed deployment model is
for WSGI applications to run directly in a web server's memory space (that
is, as part of the web server's own processes). If my understanding is
correct, then I find
this very strange. Doing so introduces unnecessarily tight coupling between
different services, making it impossible to do simple things like restart
one WSGI service while leaving others up. Furthermore, in my experience,
it's easy to run multiple WSGI applications behind a web server.

Here's a concrete example. Let's say I have two web applications that share
a single SSL certificate, and that are available at different paths. Here's
a snippet of an appropriate nginx configuration:

server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name          app.example.com;
    ssl_certificate      ssl/app.example.com.chained.crt;
    ssl_certificate_key  ssl/app.example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;  # SSLv3 is insecure
    location /user-1/ {
        proxy_set_header  Host               localhost;
        proxy_set_header  X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto  $scheme;
        proxy_set_header  X-Real-IP          $remote_addr;
        proxy_pass http://127.0.0.1:8384/;
    }
    location /user-2/ {
        proxy_set_header  Host               localhost;
        proxy_set_header  X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto  $scheme;
        proxy_set_header  X-Real-IP          $remote_addr;
        proxy_pass http://127.0.0.1:8385/;
    }
}

One especially nice thing to notice about this configuration is that the
proxied-to applications are completely opaque. They could be WSGI
applications (e.g. Gunicorn <http://gunicorn.org/>), or they could be Go
applications, or they could be Java applications, or so on. It's a nice
decoupling of concerns.
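For concreteness, either of the proxied-to applications above could be as
small as this minimal WSGI app (purely illustrative, nothing from Pulp),
served with e.g. "gunicorn --bind 127.0.0.1:8384 hello:app":

```python
# hello.py -- a minimal WSGI application that nginx could proxy to.
# Names and the port are illustrative; any WSGI server (Gunicorn,
# uWSGI, mod_wsgi) can host this callable.

def app(environ, start_response):
    """Echo back the request path the proxy forwarded to us."""
    body = ("Hello from %s" % environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    # WSGI applications return an iterable of byte strings.
    return [body]
```

The point is that nginx neither knows nor cares what is behind port 8384;
swapping this out for a Go or Java service changes nothing in the proxy
config.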