From morgan at mindviz.com Fri Jun 8 15:37:16 2007
From: morgan at mindviz.com (Morgan Demers)
Date: Fri, 8 Jun 2007 11:37:16 -0400
Subject: tux not handling content due to large cookies in header
Message-ID: <00ad01c7a9e2$e50aecf0$6501a8c0@MINDVIZDELL>

Hi Everyone,

This is my first message on the list, so I'd like to introduce myself. My name is Morgan Demers and I run a fairly large network of sites, and I rely quite heavily on tux for serving static content.

I've recently run into what I think is a bug, though it might be by design. I use cookies heavily - for sessions, temporary storage of small sets of data, etc. I've discovered that when a browser sends a fairly large Cookie header for a domain that tux is serving static content for, tux breaks and passes the request on to Apache, even though the file exists and should have been served directly. I've tested this with large sets of cookies and again after clearing my cookies, and the behavior is consistent each time. I have virtual hosting on, with multiple domains on the same server.

The problem is visible even in the tux log: normally a request is logged as --domain.com--/--request path--, but when the headers are too large (due to all the cookies) the request looks like this: /--request path--.

Has anyone encountered this problem before? Is there a workaround other than using a separate domain name for static content (so that no cookies are sent to the server)?

Some debug output from gettuxconfig:

Jun 8 11:22:17 picsfolio kernel: PRINT req d58e0000 , sock d00ebc34
Jun 8 11:22:17 picsfolio kernel: ... idx: 0
Jun 8 11:22:17 picsfolio kernel: ... sock d00ebc34, sk f301c880, sk->state: 1, sk->err: 0
Jun 8 11:22:17 picsfolio kernel: ... write_queue: 0, receive_queue: 1, error_queue: 0, keepalive: 1, status: 0
Jun 8 11:22:17 picsfolio kernel: ...tp->send_head: 00000000
Jun 8 11:22:17 picsfolio kernel: ...tp->snd_una: 20971c91
Jun 8 11:22:17 picsfolio kernel: ...tp->snd_nxt: 20971c91
Jun 8 11:22:17 picsfolio kernel: ...tp->packets_out: 00000000
Jun 8 11:22:17 picsfolio kernel: ... meth:{GET /ps/admin.jpg HTTP/1.1^M
Jun 8 11:22:17 picsfolio kernel: Accept: */*^M
Jun 8 11:22:17 picsfolio kernel: Accept-Language: en-us^M
Jun 8 11:22:17 picsfolio kernel: UA-CPU: x86^M
Jun 8 11:22:17 picsfolio kernel: Accept-Encoding: gzip, deflate^M
Jun 8 11:22:17 picsfolio kernel: If-Modified-Since: Wed, 07 Feb 2007 05:12:33 GMT^M
Jun 8 11:22:17 picsfolio kernel: If-None-Match: "2988fa-950-f956b240"^M
Jun 8 11:22:17 picsfolio kernel: User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; Alexa Toolbar; .NET CLR 2.0.50727)^M
Jun 8 11:22:17 picsfolio kernel: Host: p.#domain removed#.com^M
Jun 8 11:22:17 picsfolio kernel: Connection: Keep-Alive^M
Jun 8 11:22:17 picsfolio kernel: Cookie: ###############cookie data removed for security/privacy##################... post_data:{}(0).
Jun 8 11:22:17 picsfolio kernel: ... headers: {}

Here is how the request looks in the tux log:

#ip removed# - - [08/Jun/2007:11:22:26 -0400] "GET /ps/admin.jpg HTTP/1.1" -1 0 "-" ""

and here is how the request should have looked in the tux log:

#ip removed# - - [08/Jun/2007:11:23:19 -0400] "GET p.#domain removed#.com/ps/admin.jpg HTTP/1.1" 200 2384 "-" ""

------------

If you look at the gettuxconfig output above, it shows "headers: {}". I think the large cookie data breaks the header-processing mechanism in tux, and that in turn forces tux to hand the request over to Apache?

Any help would be most appreciated.

Thanks in advance,
Morgan Demers
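[Editorial note: the failover described in the message above can be reproduced from the command line. This is only a sketch: the hostname is a placeholder for a domain tux serves statically, and the 4000-byte cookie value is an arbitrary size chosen to push the request headers past tux's default limit mentioned later in this thread.]

```shell
#!/bin/sh
# Sketch: trigger a tux -> Apache failover with an oversized Cookie header.
# HOST is a placeholder; point it at a domain tux serves statically.
HOST="p.example.com"

# Build a cookie value large enough that the total request headers
# exceed tux's accepted header size.
BIG=$(head -c 4000 /dev/zero | tr '\0' 'a')

# A normal request should appear in the tux log as "host/path ... 200";
# with the oversized cookie it appears as "/path ... -1" and Apache
# (or whatever listens behind tux) answers instead.
curl -s -o /dev/null -w '%{http_code}\n' "http://$HOST/ps/admin.jpg"
curl -s -o /dev/null -w '%{http_code}\n' -H "Cookie: big=$BIG" \
    "http://$HOST/ps/admin.jpg"
```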
From ppokorny at penguincomputing.com Wed Jun 13 17:18:51 2007
From: ppokorny at penguincomputing.com (Philip Pokorny)
Date: Wed, 13 Jun 2007 10:18:51 -0700
Subject: tux-list Digest, Vol 29, Issue 7
In-Reply-To: <20060921160030.38A2973AC5@hormel.redhat.com>
References: <20060921160030.38A2973AC5@hormel.redhat.com>
Message-ID: <467026FB.2000400@penguincomputing.com>

tux-list-request at redhat.com wrote:

>i've released the tux3-2.6.18-1 Tux patch:
>
>    http://redhat.com/~mingo/TUX-patches/tux3-2.6.18-1
>
>this is a merge to v2.6.18, plus a few lockdep related fixes, and 64-bit
>compiler warning eliminations.

I was reading through this patch to catch up and saw a couple of things that look strange to me:

Index: linux/include/linux/errno.h
===================================================================
--- linux.orig/include/linux/errno.h
+++ linux/include/linux/errno.h
@@ -24,6 +24,9 @@
 #define EIOCBQUEUED	529	/* iocb queued, will get completion event */
 #define EIOCBRETRY	530	/* iocb queued, will trigger a retry */

+/* Defined for TUX async IO */
+#define EWOULDBLOCKIO	530	/* Would block due to block-IO */
+
 #endif
 #endif

Shouldn't EWOULDBLOCKIO have a unique value assigned to it? As written, it has the same error code as EIOCBRETRY...

Later, where the patch sets up the environment for a CGI call-out, I saw this:

+	WRITE_ENV("GATEWAY_INTERFACE=CGI/1.1");
+	WRITE_ENV("CONTENT_LENGTH=%d", req->post_data_len);
+	WRITE_ENV("REMOTE_ADDR=%d.%d.%d.%d", NIPQUAD(host));
+	WRITE_ENV("SERVER_PORT=%d", 80);
+	WRITE_ENV("SERVER_SOFTWARE=TUX/2.0 (Linux)");

Isn't the port that tux listens on configurable? A hard-coded 80 here just seems wrong. The code gets the "host" IP address from the socket ("inet_sk(req->sock->sk)->daddr"), so I'm guessing the port should come from there as well.

Another poster asked if "large" cookies could trigger a failover to Apache.
I see in the code that the limit on header size defaults to 3000 bytes, but it can be set with a sysctl:

+unsigned int tux_max_header_len = 3000;

+	{ NET_TUX_MAX_HEADER_LEN,
+	  "max_header_len",
+	  &tux_max_header_len,
+	  sizeof(int),
+	  0644,
+	  NULL,
+	  proc_dointvec,
+	  &sysctl_intvec,
+	  NULL,
+	  NULL,
+	  NULL
+	},

The sysctl file would be /proc/sys/net/tux/max_header_len

Just wondering,

Phil P.
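[Editorial note: given the sysctl described above, raising the header limit at runtime might look like the following sketch. The value 8192 is an arbitrary example, not a recommendation, and the proc path is as named in the patch; whether a running tux picks the new value up immediately is not confirmed here.]

```shell
#!/bin/sh
# Sketch: raise tux's maximum accepted header size (default 3000 bytes)
# so large Cookie headers no longer force a failover to Apache.

# Inspect the current limit:
cat /proc/sys/net/tux/max_header_len

# Raise it by writing the proc file directly...
echo 8192 > /proc/sys/net/tux/max_header_len

# ...or equivalently via sysctl:
sysctl -w net.tux.max_header_len=8192

# To persist across reboots, add this line to /etc/sysctl.conf:
#   net.tux.max_header_len = 8192
```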