Forum OpenACS Development: bgdelivery and https

We're seeing ssl_error_bad_mac_read errors when trying to serve files with bgdelivery. Everything is over https; only place we are seeing this error when trying to serve files with bgdelivery. Cert configuration has been validated by external checkers. Relevant settings from config.tcl:

ns_param ServerProtocols "SSLv3, TLSv1"
ns_param ServerCipherSuite "ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM"

We do see the following quite a bit, however:

Error: nsopenssl (hub): SSL read error: ssl handshake failure

Since we have not had any reports of users having trouble, I've assumed this is a result of the ssl negotiation process (not supporting SSLv2 perhaps) and not something more troubling. I'm not an expert on the SSL/TLS handshakes, though.

Any ideas? TIA.

2: Re: bgdelivery and https (response to 1)
Posted by Gustaf Neumann on
First of all, when serving files to clients, the client might terminate the request at any time for various reasons (e.g., a user clicks a link and, immediately afterwards, clicks another one, causing connections in various states to be terminated; this happens especially for embedded resources, style files, etc.). So the behavior might be normal.

Secondly, I wonder how bgdelivery comes into play, since bgdelivery does not serve files via https (there is no way to transfer the OpenSSL context to another thread). The recommended setup is to use a proxy like nginx to handle https; the backend then sees only http, and bgdelivery helps to provide scalability.

3: Re: bgdelivery and https (response to 2)
Posted by Michael Steigman on
Thanks, Gustaf. That's what I thought. I had hoped that bgdelivery had some magic to hand off context. The reason it's coming into play in this case is because I was explicitly calling it for test purposes (i.e., not relying on cr_write_content, which checks for https).
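
For what it's worth, the test call looked roughly like the following sketch. This is hypothetical illustration, not the actual test code: the file path and mime type are placeholders, and it assumes bgdelivery's `ad_returnfile_background` proc as the direct entry point, bypassing the https check in `cr_write_content`.

```tcl
# Hypothetical sketch: call the bgdelivery spooler directly instead of
# going through cr_write_content. Under https this hands the socket to
# the background delivery thread without its OpenSSL context, which is
# where the ssl_error_bad_mac_read symptoms showed up.
set file "/web/service0/content-repository-content-files/some-file.pdf" ;# placeholder path
ad_returnfile_background 200 application/pdf $file
```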

With regard to nginx, we have begun setting that up, and that is the road we will continue down. There are numerous documents by community members floating around about using nginx to serve /resources files as well as content-repository content files. Were you just simplifying the recommendation, or do you not bother with any of that? I'm also curious how else you utilize nginx in your environment (which I know is pretty large and sees heavy utilization). A couple of other areas seem interesting: chunked input support and byte-range requests. Are you using those nginx features to complement aolserver?

4: Re: bgdelivery and https (response to 3)
Posted by Gustaf Neumann on
My recommendation for nginx would be to set it up with essentially one backend and let it handle https (including a redirect from port 80->443), and static files (css, js, logos, which includes "resources"). Unfortunately, we do not have a lean&mean nginx file for posting right now (the config files for nginx tend to get convoluted with many localisms).
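
A minimal sketch of such a setup might look like the following. This is an assumption-laden outline, not the production config mentioned above: the hostname, certificate paths, backend port, and the /resources root are all placeholders, and the actual /resources mapping depends on the OpenACS installation.

```nginx
# Redirect all plain-http traffic to https
server {
    listen 80;
    server_name example.org;                      # placeholder hostname
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.org;
    ssl_certificate     /etc/nginx/ssl/example.org.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.org.key;

    # Serve static files (css, js, logos, /resources) directly;
    # the alias below is a placeholder for the install-specific mapping.
    location /resources/ {
        alias /web/service0/www/resources/;
    }

    # Everything else goes to the single plain-http OpenACS backend
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```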

I would not recommend serving content repository files via nginx, at least when these files are not public. There are ways to integrate external authentication schemes into nginx, but we have not tried them. Background delivery scales well, so there is no need to hand this over to nginx (yesterday, our server delivered 240 GB of data, 180 GB of which were delivered from the backend). Right now, 32 files are being concurrently spooled via bgdelivery.

We do not use any special nginx modules. H.264 streaming is performed via an aolserver module and bgdelivery; chunked input and range requests are handled by naviserver. We have been using naviserver in production for more than a year. Most of our updates/fixes/extensions for naviserver are in the public repository at bitkeeper; we still have to clean up the OpenACS integration layer, but we could do that in a month or so.