The post NGINX Feature Flag Reverse Proxy appeared first on Justin Silver.
Use NGINX as a reverse proxy to different back-end servers based on feature flags evaluated via the ngx_http_auth_request_module add-on. In my implementation for Secret Party, the subdomain is used to determine which event/party a request is for. Note that this is a template file and some variables are set from the environment – $DOMAIN, $PROXY_*_HOST, $PROXY_*_PORT, etc.
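The templating step itself can be done with any substitution tool; here is a minimal sketch using sed (envsubst would also work). The variable names match the post, but the inline template is illustrative, not the actual config file:

```shell
# Substitute one environment variable into a config-template string.
# A real setup would read the template file and write the rendered
# config into /etc/nginx/conf.d/ at container or service startup.
export DOMAIN=example.com
template='server_name *.$DOMAIN;'
rendered=$(printf '%s' "$template" | sed "s/\$DOMAIN/$DOMAIN/g")
echo "$rendered"
```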
First create the upstream servers that we can proxy the request to. Here we will use “green”, “blue”, and “red”.
# this is the server that handles the "auth" request
upstream upstream_auth {
    server $PROXY_AUTH_HOST:$PROXY_AUTH_PORT;
}

# backend app server "green"
upstream upstream_green {
    server $PROXY_GREEN_HOST:$PROXY_GREEN_PORT;
}

# backend app server "blue"
upstream upstream_blue {
    server $PROXY_BLUE_HOST:$PROXY_BLUE_PORT;
}

# backend app server "red"
upstream upstream_red {
    server $PROXY_RED_HOST:$PROXY_RED_PORT;
}
Next we create a mapping of route name to upstream server. This will let us choose the backend/upstream server without an evil if.
# map service names from auth header to upstream service
map $wildcard_feature_route $wildcard_service_route {
    default upstream_green;
    'green' upstream_green;
    'blue'  upstream_blue;
    'red'   upstream_red;
}
Optionally we can also support arbitrary response codes in this mapping – note that they will be strings, not numbers. This uses the auth response code to choose the route name that feeds the mapping above – so HTTP status code to string, then string to upstream server.
# map http codes from auth response (as string!) to upstream service
map $wildcard_backend_status $wildcard_mapped_route {
    default 'green';
    '480'   'green';
    '481'   'blue';
    '482'   'red';
}
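The two map stages compose as plain lookups. As a hypothetical model (this is not NGINX code, just the logic NGINX evaluates), in JavaScript:

```javascript
// Model of the two nginx `map` blocks as plain lookup tables.
// Stage 1: auth response status (a string!) -> route name.
const statusToRoute = { default: 'green', '480': 'green', '481': 'blue', '482': 'red' };
// Stage 2: route name -> upstream block.
const routeToUpstream = {
  default: 'upstream_green',
  green: 'upstream_green',
  blue: 'upstream_blue',
  red: 'upstream_red',
};

// Resolve an upstream the way the chained maps would, falling back to
// the `default` entry when a key is missing.
function resolveUpstream(status) {
  const route = statusToRoute[status] ?? statusToRoute.default;
  return routeToUpstream[route] ?? routeToUpstream.default;
}

console.log(resolveUpstream('481')); // upstream_blue
console.log(resolveUpstream('999')); // unknown code falls back to upstream_green
```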
The auth handler is where NGINX sends the auth request, so we assume we are handling something like http://upstream_auth/feature-flags/$host. This endpoint chooses the route either by setting a header called X-Feature-Route with a string name that matches the mapping above, or by responding with a 4xx status code that selects a route from the other mapping above. You get the gist.
function handleFeatureFlag(req, res) {
  // use the param/header data to choose the backend route
  // const hostname = req.params.hostname;
  const route = someFlag ? 'green' : 'blue';
  // this header is used to figure out a proxy route
  res.header('X-Feature-Route', route);
  return res.status(200).send();
}
function handleFeatureFlag(req, res) {
  // this http response code can be used to figure out a proxy route too!
  const status = someFlag ? 481 : 482; // blue, red
  return res.status(status).send();
}
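To see what the NGINX auth subrequest actually receives, the header-based handler can be exercised with a minimal stand-in for Express's req/res objects. The stub below and the someFlag parameter are illustrative additions, not part of the post's code:

```javascript
// Self-contained copy of the header-based handler above, with `someFlag`
// made an explicit parameter instead of an ambient variable.
function handleFeatureFlag(req, res, someFlag) {
  const route = someFlag ? 'green' : 'blue';
  res.header('X-Feature-Route', route);
  return res.status(200).send();
}

// Minimal stand-in for an Express-style response object: records the
// status code and headers that NGINX would read back.
function makeRes() {
  const res = {
    headers: {},
    statusCode: null,
    header(name, value) { res.headers[name] = value; return res; },
    status(code) { res.statusCode = code; return res; },
    send() { return res; },
  };
  return res;
}

const res = makeRes();
handleFeatureFlag({ params: {} }, res, false);
console.log(res.statusCode, res.headers['X-Feature-Route']); // 200 blue
```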
To tie it together, create a server that makes an auth request to http://upstream_auth/feature-flags/$host. This API endpoint uses the hostname to choose the upstream service that fulfills the request, either by setting an X-Feature-Route header or by returning a status code other than 200 or 401 – anything else is surfaced to NGINX as a 500, which can then use the string value of the original code as a route hint.
server {
    listen 80;

    # listen on wildcard subdomains
    server_name *.$DOMAIN;

    # internal feature flags route to upstream_auth
    location = /feature-flags {
        internal;
        # make an api request for the feature flags, pass the hostname
        rewrite .* /feature-flags/$host? break;
        proxy_pass http://upstream_auth;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Original-Remote-Addr $remote_addr;
        proxy_set_header X-Original-Host $host;
    }

    # handle all requests for the wildcard
    location / {
        # get routing from feature flags
        auth_request /feature-flags;
        # set Status Code response to variable
        auth_request_set $wildcard_backend_status $upstream_status;
        # set X-Feature-Route header to variable
        auth_request_set $wildcard_feature_route $upstream_http_x_feature_route;
        # this is a 401 response
        error_page 401 = @process_backend;
        # anything not a 200 or 401 returns a 500 error
        error_page 500 = @process_backend;
        # this is a 200 response
        try_files @ @process_request;
    }

    # handle 500 errors to get the underlying code
    location @process_backend {
        # set the status code as a string mapped to a service name
        set $wildcard_feature_route $wildcard_mapped_route;
        # now process the request as normal
        try_files @ @process_request;
    }

    # send the request to the correct backend server
    location @process_request {
        proxy_read_timeout 10s;
        proxy_cache off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # use the mapping to determine which service to route the request to
        proxy_pass http://$wildcard_service_route;
    }
}
The post Using NGINX as an Atlassian JIRA Reverse Proxy appeared first on Justin Silver.
I use JIRA in a cloud infrastructure where it’s obviously desirable to serve the contents over SSL, so I set up NGINX as a JIRA reverse proxy that handles SSL on the front end with Let’s Encrypt and passes unencrypted requests to the JIRA backend service. We do, however, need to let JIRA know that we are proxying it over HTTPS by setting some values in server.xml first.
Notice that my Let’s Encrypt SSL certificates are in the /etc/letsencrypt/live/jira.doublesharp.com directory, but yours will be specific to the hostname you create them for. The certs are created via the letsencrypt command, using Nginx to process the validation request. Once created, the generated PEM files can be used in your Nginx config. Note that if the certs don’t yet exist you will need to comment out the ssl_certificate lines in the SSL config, start Nginx to create the certs, uncomment the lines to enable SSL, and then restart Nginx once again (whew!).
Configure JIRA to add proxyName, proxyPort, scheme, and secure parameters to the Tomcat Connector in server.xml.
<Connector port="8081"
           maxThreads="150"
           minSpareThreads="25"
           connectionTimeout="20000"
           enableLookups="false"
           maxHttpHeaderSize="8192"
           protocol="HTTP/1.1"
           useBodyEncodingForURI="true"
           redirectPort="8443"
           acceptCount="100"
           disableUploadTimeout="true"
           bindOnInit="false"
           proxyName="jira.doublesharp.com"
           proxyPort="443"
           scheme="https"
           secure="true" />
Don’t forget to copy the database driver to $JIRA_INSTALL/lib.
Note the use of “jira.doublesharp.com” in the config and change it as needed. This configuration uses a subdomain-specific certificate from Let’s Encrypt, but you could also use a wildcard certificate for your JIRA reverse proxy setup, which can help consolidate your key generation.
# Upstream JIRA server on port 8081. Use 127.0.0.1 and not localhost to force IPv4.
upstream jira {
    server 127.0.0.1:8081 fail_timeout=0;
}

# listen on HTTP2/SSL
server {
    listen 443 ssl http2;
    server_name jira.doublesharp.com;

    # ssl certs from letsencrypt
    ssl_certificate /etc/letsencrypt/live/jira.doublesharp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jira.doublesharp.com/privkey.pem;

    location / {
        # allow uploads up to 10MB
        client_max_body_size 10m;

        # set proxy headers for cloudflare/jira
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # hand the request off to jira on non-ssl
        proxy_pass http://jira;
    }
}

# redirect HTTP and handle let's encrypt requests
server {
    listen 80;
    server_name jira.doublesharp.com;
    root /var/lib/jira;

    # handle letsencrypt domain validation
    location ~ /.well-known {
        allow all;
    }

    # send everything else to HTTPS
    location / {
        return 302 https://jira.doublesharp.com;
    }
}
The post Install Jenkins as a Service on CentOS 7 appeared first on Justin Silver.
I have previously written about how to Install Jenkins on CentOS as a Service, where it was necessary to write your own startup, shutdown, configuration, and init.d scripts. Luckily this is all much easier now, as you can install the software directly from a yum repository – you’ll just need to fetch the repo from http://pkg.jenkins-ci.org/redhat/jenkins.repo.
Make sure you have Java on your system, then fetch the yum repository and install Jenkins.
yum -y install java
curl http://pkg.jenkins-ci.org/redhat/jenkins.repo -o /etc/yum.repos.d/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum -y install jenkins
Since CentOS 7 uses Systemd, use it to start the service on reboot.
systemctl enable jenkins
service jenkins start
This will start Jenkins on port 8080 by default (you can change these settings in /etc/sysconfig/jenkins). My preference is to leave it as is and set up an Nginx reverse proxy in front. Once you load the Jenkins home page you will be prompted to enter a password, located in a file on your system, to continue the setup. Here is a sample of my Nginx configuration.
# jenkins is upstream listening on port 8080
upstream jenkins {
    server 127.0.0.1:8080 fail_timeout=0;
}

# nginx is listening on port 80
server {
    listen 80;
    server_name jenkins.example.com;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://jenkins;
    }
}
Keep in mind that you may have issues initially proxying to Jenkins if SELinux is configured to block access to port 8080. If you try to load the site via Nginx and get a “502 Bad Gateway” error, check out /var/log/audit/audit.log – you will probably see errors regarding Nginx connecting to your port. You can either add the port by hand, or do it automatically with audit2allow.
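If you prefer the by-hand route, the usual fixes are either allowing the web-server domain to make outbound connections or labeling the port. These are standard SELinux commands (run as root), but treat them as a hedged suggestion since your policy may differ:

```shell
# Allow nginx (running in the httpd_t domain) to make outbound network connections:
setsebool -P httpd_can_network_connect 1

# ...or explicitly label port 8080 as an http port (semanage is in the
# policycoreutils-python package on CentOS 7; this errors harmlessly if
# the port is already defined):
semanage port -a -t http_port_t -p tcp 8080
```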
mkdir ~/.semanage && cd ~/.semanage
cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M semanage
semodule -i semanage.pp
If you need to generate an SSH key for the Jenkins user, use sudo to run as the proper user.
sudo -u jenkins ssh-keygen
Enjoy!
The post NGINX Reverse Proxy to Legacy Website appeared first on Justin Silver.
NGINX reverse proxies can be a very powerful tool for many reasons, and one recently came to the rescue when I was at a loss as to how to provide access to a legacy website while launching the new one. The caveat in this case was that the legacy server is, well, old. It has many hard-coded values throughout, including URLs, and only likes to listen on particular hostnames from time to time. Since I did not write this site and do not have access to the source code (it’s a DLL on a Windows box somewhere), I had to come up with a solution that didn’t involve modifying the code.
The first option I thought of was to just update the /etc/hosts file (or Windows equivalent) to point the domain name to the old server IP address when needed, but this is a bit cumbersome. Comparing data between the new and old systems – presumably the main reason you would want to see the old server – is pretty much out. Faking the DNS is a no-go.
In a more traditional setup, an NGINX reverse proxy takes a request from a front-end NGINX server and passes it on to a back-end server. In this situation the request is made to the legacy server IP address, and some special parameters are used to rewrite the domain information for redirects, cookies, and page content. We also check the port to determine if the request to the legacy server should be made via HTTP or HTTPS.
server {
    # listen on 80 and 443, ssl if the latter
    listen 80;
    listen 443 ssl;

    # this is the "new" url for the legacy site
    server_name gamma.example.com;

    # ssl config (no "ssl on;" here – the ssl flag on the listen
    # directive handles it, and "ssl on;" would force SSL on port 80 too)
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # legacy server IP address
    set $legacy_ip 123.123.123.123;

    # proxy over which protocol?
    set $protocol http;
    if ( $server_port = 443 ){
        set $protocol https;
    }

    # pass everything through the proxy
    location / {
        # proxy all requests to the legacy server
        proxy_pass $protocol://$legacy_ip;

        # set the Host header on the request
        proxy_set_header Host "www.example.com";

        # replace redirect strings
        proxy_redirect http://www.example.com/ /;
        proxy_redirect https://www.example.com/ https://gamma.example.com/;

        # replace cookie domains
        proxy_cookie_domain 'www.example.com' 'gamma.example.com';

        # replace page content
        sub_filter_once off;
        sub_filter 'www.example.com' 'gamma.example.com';
    }
}
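The effect of the proxy_redirect, proxy_cookie_domain, and sub_filter directives is plain string substitution on the legacy server's responses. NGINX does this natively; the JavaScript below is only a hypothetical model of the behavior, using the same example hostnames:

```javascript
// Model of the domain rewriting the proxy performs on legacy responses.
const LEGACY = 'www.example.com';
const MODERN = 'gamma.example.com';

// sub_filter with sub_filter_once off: rewrite every occurrence
// of the legacy hostname in the page body.
function rewriteBody(html) {
  return html.split(LEGACY).join(MODERN);
}

// proxy_redirect: rewrite Location headers issued by the legacy server.
// HTTPS redirects keep the scheme; HTTP redirects become relative.
function rewriteRedirect(location) {
  return location
    .replace(`https://${LEGACY}/`, `https://${MODERN}/`)
    .replace(`http://${LEGACY}/`, '/');
}

console.log(rewriteRedirect('http://www.example.com/login')); // /login
console.log(rewriteBody('<a href="http://www.example.com/x">'));
```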