HAProxy ACL that checks each value in a header against a whitelist and denies the request if at least one value is not whitelisted

I need to set up an ACL rule for my HAProxy that checks each value in a given header against a whitelist and denies the request if at least one value is not whitelisted.
I’ve created a small whitelist.lst:
myval1
myval2
And here is my haproxy.cfg:
defaults
    mode http

frontend http_frontend
    bind *:80
    acl valid-hdr req.hdr(x-my-header) -m str -f /etc/haproxy/whitelist.lst
    use_backend mysite if valid-hdr

backend mysite
    server mysite 172.10.1.1:80 check
Headers may contain multiple comma-separated values, so I want req.hdr to check each of them against the whitelist. If at least one value is not whitelisted, the request should be blocked.
Example of what I want:
"X-My-Header: myval1" - valid
"X-My-Header: myval2" - valid
"X-My-Header: myval1,myval2" - valid
"X-My-Header: myval2,myval1" - valid
"X-My-Header: myval1,evil,myval2" - NOT valid
"X-My-Header: evil" - NOT valid
Example of what I get now:
"X-My-Header: myval1" - valid
"X-My-Header: myval2" - valid
"X-My-Header: myval1,myval2" - valid
"X-My-Header: myval2,myval1" - valid
"X-My-Header: myval1,evil,myval2" - valid
"X-My-Header: evil" - NOT valid
So the current config works in exactly the opposite way: if at least one of the values is whitelisted, the request is considered valid.
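One way to get the desired all-values-must-be-whitelisted behaviour is to stop matching individual values against the file and instead match the entire header against a regular expression that only admits whitelisted values. A sketch (untested; it hard-codes the two whitelisted values in the regex instead of reading whitelist.lst, and tolerates an optional space after each comma):

```
frontend http_frontend
    bind *:80
    # req.fhdr returns the full header value, commas included,
    # so the regex has to accept the whole comma-separated list
    acl all-vals-valid req.fhdr(x-my-header) -m reg ^(myval1|myval2)(,\ ?(myval1|myval2))*$
    http-request deny if !all-vals-valid
    use_backend mysite if all-vals-valid

backend mysite
    server mysite 172.10.1.1:80 check
```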


Inject Key Storage Password into Option's ValuesUrl

I'm trying to request a raw file from a Gitlab repository as a values JSON file for my job option. The only way so far that I managed to do it is by writing my secret token as plain text in the request URL:
https://mycompanygitlab.com/api/v4/projects/XXXX/repository/files/path%2Fto%2Fmy%2Ffile.json/raw?ref=main&private_token=MyV3ry53cr3Tt0k3n
I've tried using option cascading; I created a Secure password input option called gitlab_token which points to a Key Storage password, and tried every possible notation (with or without .value, quoted or unquoted) in the valuesUrl field of the second option, instead of the plain token, but I keep receiving an error message pointing to an invalid character at the position of the dollar sign:
I've redacted sensitive info and edited the error print accordingly
I reproduced your issue. It works using a text option; you can use the value in this way: ${option.mytoken.value}.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: e4f114d5-b3af-44a5-936f-81d984797481
  loglevel: INFO
  name: ResultData
  nodeFilterEditable: false
  options:
  - name: mytoken
    value: deda66698444
  - name: apiendpoint
    valuesUrl: https://mocki.io/v1/xxxxxxxx-xxxx-xxxx-xxxx-${option.mytoken.value}
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.mytoken}
    - exec: echo ${option.apiendpoint}
    keepgoing: false
    strategy: node-first
  uuid: e4f114d5-b3af-44a5-936f-81d984797481
Another workaround (if you don't want to use a plain-text option) is to pass the secure option to an inline script and handle the logic there.
Please open a new issue here.

JMeter - Extract value(x-nonce) from the Response Header

I am unable to get the value of x-nonce from the response header. I have to pass this x-nonce value to the next APIs. How can I get it and use it?
If you want the whole value you can do it using Regular Expression Extractor configured like:
Field to check: Response Headers
Name of create variable: anything meaningful, i.e. nonce
Regular Expression: x-nonce\s*:\s*(.*)
Template: $1$
Explanation:
\s* - any number of whitespace characters
( and ) - grouping
. - matches any character
* - repetition
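Outside JMeter, the same expression can be sanity-checked with any regex engine; a minimal Python sketch (the header value abc123 is made up):

```python
import re

# Simulated response headers, as JMeter's "Response Headers" field sees them
headers = (
    "Content-Type: application/json\n"
    "x-nonce: abc123\n"
    "Content-Length: 42\n"
)

# Same pattern as in the Regular Expression Extractor; (.*) stops at the
# end of the line because the dot does not match a newline by default
match = re.search(r"x-nonce\s*:\s*(.*)", headers, re.IGNORECASE)
nonce = match.group(1) if match else None
print(nonce)  # -> abc123
```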
More information:
Apache JMeter - Regular Expressions
Using RegEx (Regular Expression Extractor) with JMeter
Perl 5 Regex Cheat sheet

How to validate mysql database URI

I'm trying to integrate a Gem named blazer with my Rails application and I have to specify mysql database URL in blazer.yml file so that it can access data in staging and production environments.
I believe the standard format to define MySQL database URL is
mysql2://user:password@hostname:3306/database
I defined my URL in the same format as a string and when I validate the URI I get the below error
URI::InvalidURIError: bad URI(is not URI?):
mysql2://f77_oe_85_staging:LcCh%264855c6M;kG9yGhjghjZC?JquGVK@factory97-aurora-staging-cluster.cluster-cmj77682fpy4kjl.us-east-1.rds.amazonaws.com/factory97_oe85_staging
Defined Mysql database URL:
'mysql2://f77_oe_85_staging:LcCh%264855c6M;kG9yGhjghjZC?JquGVK@factory97-aurora-staging-cluster.cluster-cmj77682fpy4kjl.us-east-1.rds.amazonaws.com/factory97_oe85_staging'
Please advise.
The URI is invalid.
The problem is the password contains characters which are not valid in a URI. The username:password is the userinfo part of a URI. From RFC 3986...
foo://example.com:8042/over/there?name=ferret#nose
\_/   \______________/\_________/ \_________/ \__/
 |           |            |            |        |
scheme   authority       path       query   fragment

authority   = [ userinfo "@" ] host [ ":" port ]
userinfo    = *( unreserved / pct-encoded / sub-delims / ":" )
pct-encoded = "%" HEXDIG HEXDIG
unreserved  = ALPHA / DIGIT / "-" / "." / "_" / "~"
sub-delims  = "!" / "$" / "&" / "'" / "(" / ")"
            / "*" / "+" / "," / ";" / "="
Specifically it's the ? in the password LcCh%264855c6M;kG9yGhjghjZC?JquGVK. It looks like the password is only partially escaped.
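As an illustration (Python rather than Ruby, but the same percent-encoding rules apply), fully escaping the password turns every reserved character into a %XX sequence; the raw password below is an assumption, reconstructed from the partially escaped one in the question:

```python
from urllib.parse import quote

# Assumed raw password: the "%26" in the question's URI is an
# already-encoded "&", while ";" and "?" were left unescaped
password = "LcCh&4855c6M;kG9yGhjghjZC?JquGVK"

# safe="" forces every reserved character (&, ;, ?, ...) to be percent-encoded
escaped = quote(password, safe="")
print(escaped)  # -> LcCh%264855c6M%3BkG9yGhjghjZC%3FJquGVK
```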
I think part of the problem is that the issue is not well isolated. Here is an example strategy for isolating it.
The error URI::InvalidURIError: bad URI(is not URI?): only indicates that the library (the blazer gem) successfully read a file, which may or may not be the file you have edited (/YOUR_DIR/blazer.yml or similar), but failed to parse the URI.
Now, the issues to consider include:
did the blazer gem really read /YOUR_DIR/blazer.yml?
does the preprocessor of the yml file work as expected?
is the uri key specified correctly?
mysql: or mysql2:?
are the formats of the IP, port, account name, password, and database name all correct? In particular, are special characters correctly escaped? (See the MySQL documentation about special characters.)
I suppose the OP knows the answers to some of these questions, but we don't. So let's assume any of them can be an issue.
Then a proposed strategy is this:
Find a URI that is at least in a correct format and confirm it is parsed and recognised correctly by the blazer gem. Note you only need to test the format, so dummy parameters are fine. For example, try a combination of the following and see which does not raise URI::InvalidURIError:
mysql://127.0.0.1/test
mysql://adam:alphabetonly@127.0.0.1/test
jdbc:mysql://adam:alphabetonly@127.0.0.1/test
Now, you know at least the potential issues (1),(3),(4) are irrelevant.
Replace the IP (hostname), account name, password, and database name one by one with the real ones and find which raises URI::InvalidURIError. Now you have narrowed down which part causes the problem. In the OP's case, I suspect an incorrect escape of the special characters in the password part. Let's assume that is the case; then,
properly escape that part so that the whole forms a correct URI. The answer by Schwern is a good summary of the format. As a tip, you can get an escaped URI component by opening Rails' console (via rails c) and typing URI.encode('YOUR_PASSWORD'), or alternatively by running Ruby directly from the command line in a (UNIX-shell) terminal:
ruby -ruri -e "puts URI.encode('YOUR_PASSWORD')"
Replace the password part in the URI in /YOUR_DIR/blazer.yml with the escaped string, and confirm it does not issue the error URI::InvalidURIError (hopefully).
In these processing, I deliberately avoided the preprocessor part, (2).
This answer to "Rails not parsing database URL on production" mentions URI.encode('YOUR_PASSWORD') in a yml file, but it implicitly assumes that a preprocessor works fine. During the test phase that just adds another layer of complication, so it is better to skip it. If you need it in production (to mask the password, etc.), implement it later, once you know everything else works.
Hopefully, by the time the OP has tried all of these, the problem will be solved.

NXLog: Json input to GELF UDP Output

We have a setup where a program logs to a .json file, in a format that follows the GELF specification.
Currently this is sent to a Graylog2 server using HTTP. This works, but due to the nature of HTTP there's a significant latency, which is an issue if there is a large amount of log messages.
I want to change the HTTP delivery method to UDP, in order to just 'fire and forget'.
The logs are written to files like this:
{ "short_message": "<message>", "host": "<host>", "full_message": "<message>", "_extraField1": "<value>", "_extraField2": "<value>", "_extraField3": "<value>" }
Current configuration is this:
<Extension json>
    Module xm_json
</Extension>

<Input jsonLogs>
    Module        im_file
    File          '<File Location>'
    PollInterval  5
    SavePos       True
    ReadFromLast  True
    Recursive     False
    RenameCheck   False
    CloseWhenIdle True
</Input>

<Output udp>
    Module      om_udp
    Host        <IP>
    Port        <Port>
    OutputType  GELF_UDP
</Output>
With this setup, part of the JSON log message is added to the "message" field of a GELF message and sent to the server.
I've tried adding the line Exec parse_json();, but this simply results in all fields other than short_message and full_message being excluded.
I'm unsure how to configure this correctly. Even just having the complete log message added to a field is preferable, since I can add an extractor on the server side.
You'd need Exec parse_json(); in order for GELF_UDP to generate proper output, but it is unclear what the exact issue is with message and full_message/short_message.
Another option you could try is to simply ship the logs via om_tcp. In that case you won't need OutputType GELF_TCP, since the data is already formatted that way.

How to generate a JSON log from nginx?

I'm trying to generate a JSON log from nginx.
I'm aware of solutions like this one but some of the fields I want to log include user generated input (like HTTP headers) which need to be escaped properly.
I'm aware of the nginx changelog entries from Oct 2011 and May 2008 that say:
*) Change: now the 0x7F-0x1F characters are escaped as \xXX in an
access_log.
*) Change: now the 0x00-0x1F, '"' and '\' characters are escaped as \xXX
in an access_log.
but this still doesn't help since \xXX is invalid in a JSON string.
I've also looked at the HttpSetMiscModule module which has a set_quote_json_str directive, but this just seems to add \x22 around the strings which doesn't help.
Any idea for other solutions to log in JSON format from nginx?
Finally it looks like we have good way to do this with vanilla nginx without any modules. Just define:
log_format json_combined escape=json
    '{'
        '"time_local":"$time_local",'
        '"remote_addr":"$remote_addr",'
        '"remote_user":"$remote_user",'
        '"request":"$request",'
        '"status":"$status",'
        '"body_bytes_sent":"$body_bytes_sent",'
        '"request_time":"$request_time",'
        '"http_referrer":"$http_referer",'
        '"http_user_agent":"$http_user_agent"'
    '}';
Note that escape=json was added in nginx 1.11.8.
http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
You can try https://github.com/jiaz/nginx-http-json-log - an additional module for Nginx.
You can try to use:
the additional nginx-http-json-log module for Nginx
any language, as done in nginx-json-logformat, with the example /etc/nginx/conf.d/json_log.conf
a version of the Nginx HTTP stub status module that outputs in JSON format
PS:
The if parameter (1.7.0) enables conditional logging. A request will not be logged if the condition evaluates to “0” or an empty string:
map $status $loggable {
    ~^[23]  0;
    default 1;
}
access_log /path/to/access.log combined if=$loggable;
It’s a good idea to use a tool such as https://github.com/zaach/jsonlint to check your JSON data. You can test the output of your new logging format and make sure it’s real-and-proper JSON.
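In the same spirit, a quick self-check with any JSON parser works too; a minimal Python sketch with a made-up log line (note the \" escapes that escape=json produces for an embedded quote):

```python
import json

# A made-up access-log line in the json_combined format above
line = (
    '{"time_local":"10/Oct/2024:13:55:36 +0000",'
    '"request":"GET / HTTP/1.1",'
    '"status":"200",'
    '"http_user_agent":"Mozilla/5.0 \\"weird\\" agent"}'
)

entry = json.loads(line)
print(entry["status"])           # -> 200
print(entry["http_user_agent"])  # -> Mozilla/5.0 "weird" agent
```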