Fluentd out_file plugin to create a single output file for each day

I'm using the out_file plugin of Fluentd (version 0.12.35) to write output to a local file. My Fluentd config looks like:
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<source>
@type http
port 8888
bind 0.0.0.0
body_size_limit 32m
keepalive_timeout 10s
</source>
<match **>
type file
path /var/log/test/logs
format json
time_slice_format %Y%m%d
time_slice_wait 24h
compress gzip
include_tag_key true
utc
buffer_path /var/log/test/logs.*
</match>
This creates multiple .gz files, roughly one every ~10 minutes:
-rw-r--r-- 1 root root 256546 May 6 07:03 logs.20170506_0.log.gz
-rw-r--r-- 1 root root 260730 May 6 07:14 logs.20170506_1.log.gz
-rw-r--r-- 1 root root 261155 May 6 07:25 logs.20170506_2.log.gz
-rw-r--r-- 1 root root 258903 May 6 08:56 logs.20170506_10.log.gz
-rw-r--r-- 1 root root 282680 May 6 09:08 logs.20170506_11.log.gz
...
-rw-r--r-- 1 root root 261973 May 6 10:44 logs.20170506_19.log.gz
I want to know how to create a single gzipped file for each day. Even setting time_slice_wait to 24h didn't help.

I had missed a simple setting in the configuration: append (https://docs.fluentd.org/output/file#append).
Updated configuration:
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<source>
@type http
port 8888
bind 0.0.0.0
body_size_limit 32m
keepalive_timeout 10s
</source>
<match **>
type file
path /var/log/test/logs
format json
time_slice_format %Y%m%d
time_slice_wait 24h
compress gzip
include_tag_key true
utc
buffer_path /var/log/test/logs.*
append true
</match>

If anyone is still getting errors: in the match block, type should also be @type.
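For reference, on current Fluentd (v1.x) the same one-file-per-day behaviour is expressed with a buffer section instead of time_slice_*. A minimal sketch, assuming v1 syntax and the same path (timekey 1d groups chunks by day; append true keeps appending to that day's file):
<match **>
  @type file
  path /var/log/test/logs
  append true
  compress gzip
  <format>
    @type json
  </format>
  <buffer time>
    timekey 1d
    timekey_wait 10m
    timekey_use_utc true
  </buffer>
</match>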


Where to add skip-name-resolve? in my.cnf or server.cnf?

I need some help editing the correct file. I would like to add "skip-name-resolve=1" to the my.cnf file. I am using CentOS 7 and MariaDB 10.5.
I found a file at /etc/my.cnf. This is its content:
[mysqld]
bind-address = ::ffff:127.0.0.1
local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
I struggle with the last line "!includedir /etc/my.cnf.d" and the exclamation mark.
In the folder my.cnf.d I have these files:
-rw-r--r-- 1 root root 295 Nov 2 12:37 client.cnf
-rw-r--r-- 1 root root 763 Nov 2 12:37 enable_encryption.preset
-rw-r--r-- 1 root root 232 Nov 2 12:37 mysql-clients.cnf
-rw-r--r-- 1 root root 157 Nov 1 21:13 plesk-utf8mb4.cnf
-rw-r--r-- 1 root root 1080 Nov 2 12:37 server.cnf
-rw-r--r-- 1 root root 120 Nov 2 12:37 spider.cnf
Do I need to add "skip-name-resolve=1" to server.cnf, because this file is inside the includedir? Or do I need to add it to my.cnf after the line socket=...?
I ask because server.cnf contains a [mysqld] section too:
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.5]
1. Run ps -ef | grep mysql to see which my.cnf file is currently in use.
2. Stop the database service and edit my.cnf (e.g. with vi) to add the parameter:
skip-name-resolve
3. Start the database again to test.
For the sake of organization, put it in the server.cnf file, as sketched below.
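A minimal sketch of what that looks like in /etc/my.cnf.d/server.cnf, under the existing [mysqld] group (the bare option and the =1 form from the question are equivalent for this boolean setting):
[mysqld]
skip-name-resolve=1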
Configuration files are read in a particular order, from /etc/my.cnf down to your user's ~/.my.cnf. See this MariaDB doc for the current order for your operating system. MySQL/MariaDB uses the last value set for an option when it reads these configuration files in order.
The files under the includedir are read in alphabetical order.
You can also print the configuration values to see what your set of configuration files actually produces.
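For example (assuming MariaDB's bundled tools are installed), my_print_defaults prints the options the named option groups would receive, in the order they are read:
my_print_defaults mysqld server mariadb
# or, if the server binary is on your PATH, ask it directly:
mysqld --print-defaults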

Specified file 'sql.txt' does not contain a usable HTTP request (with parameters)

Whenever I use sqlmap -r sql.txt --dbms=MYSQL --dbs --batch, the following result gets displayed:
└─# sqlmap -r sql.txt --dbms=MYSQL --dbs --batch
[!] legal disclaimer: Usage of sqlmap for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program
[*] starting @ 09:45:06 /2021-09-25/
[INFO] parsing HTTP request from 'sql.txt'
[CRITICAL] specified file 'sql.txt' does not contain a usable HTTP request (with parameters)
[*] ending @ 09:45:06 /2021-09-25/
But when I view the content of the sql.txt file with the cat command, it shows nothing:
┌──(root💀kali)-[~/Desktop/sqlmap]
└─# ls -la
total 72
drwxr-xr-x 4 root root 4096 Sep 25 08:51 .
drwxr-xr-x 4 root root 4096 Sep 5 18:34 ..
drwxr-xr-x 2 root root 4096 Apr 20 14:18 docs
-rw-r--r-- 1 root root 47756 Jun 11 09:09 map.txt
-rw-r--r-- 1 root root 335 Jun 3 17:27 new.txt
drwxr-xr-x 11 root root 4096 Sep 25 08:49 sqlmap
-rw-r--r-- 1 root root 554 Sep 25 08:34 sql.txt
┌──(root💀kali)-[~/Desktop/sqlmap]
└─# cat sql.txt
┌──(root💀kali)-[~/Desktop/sqlmap]
└─#
And if I open sql.txt with nano, it shows all the file data:
┌──(root💀kali)-[~/Desktop/sqlmap]
└─# nano sql.txt
POST /doLogin HTTP/1.1
Host: demo.testfire.net
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 35
Origin: http://demo.testfire.net
Connection: close
Referer: http://demo.testfire.net/login.jsp
Cookie: JSESSIONID=D5B396CF6FE3B67C4DF520346B5C889E
Upgrade-Insecure-Requests: 1
uid=test&passw=test&btnSubmit=Login
I am not able to use sqlmap with -r. Please help me.
After a little digging into the source code of sqlmap on GitHub, I found that there is a "bias" towards the input file being a raw dump of an HTTP request intercepted by Burp Suite or WebScarab. Any other file (manually copy-pasted, for instance, from a web browser's HTTP traffic as seen in the debug console) is somehow not OK.
So the workaround I would suggest is to fire up Burp Suite (I have not yet worked with WebScarab, so I cannot comment on it), capture the HTTP traffic of the request you intend to analyse, copy-paste the raw HTTP traffic from Burp Suite into a text file, and provide that file as the input to sqlmap with the -r switch.
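Incidentally, the symptom that cat prints nothing while nano shows the content suggests the file may have been saved in an unexpected encoding (UTF-16 is a common culprit with Windows-originated files, and sqlmap cannot parse that either). A quick check and conversion, assuming standard Unix tools:
file sql.txt                                       # reports the detected encoding
iconv -f UTF-16 -t UTF-8 sql.txt -o sql-utf8.txt   # convert if it is UTF-16
sqlmap -r sql-utf8.txt --dbms=MYSQL --dbs --batch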

How to change the stdout and stderr log location of processes started by supervisor?

In my system, supervisor captures stderr and stdout into these files:
root@3a1a895598f8:/var/log/supervisor# ls -l
total 24
-rw------- 1 root root 18136 Sep 14 03:35 gunicorn-stderr---supervisor-VVVsL1.log
-rw------- 1 root root 0 Sep 14 03:35 gunicorn-stdout---supervisor-lllimW.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stderr---supervisor-HNIPIA.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stdout---supervisor-2jDN7t.log
-rw-r--r-- 1 root root 1256 Sep 14 03:35 supervisord.log
But I would like to change the location of gunicorn's stdout and stderr log files to /var/log/gunicorn and fix the file names for monitoring purposes.
This is what I have done in the config file:
[program:gunicorn]
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB
stdout = /var/log/gunicorn/gunicorn-stdout.log
stderr = /var/log/gunicorn/gunicorn-stderr.log
command=/usr/bin/gunicorn -w 2 server:app
However, it does not take effect at all. Did I miss anything in the configuration?
Change stdout and stderr to stdout_logfile and stderr_logfile, and this should solve your issue.
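Applied to the config from the question, that would look like this (make sure /var/log/gunicorn exists and is writable by the user supervisord runs as):
[program:gunicorn]
command=/usr/bin/gunicorn -w 2 server:app
stdout_logfile=/var/log/gunicorn/gunicorn-stdout.log
stderr_logfile=/var/log/gunicorn/gunicorn-stderr.log
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB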
You can also change childlogdir in the main configuration to make all the child logs appear in another directory. If you are using auto log mode, the logfile names will be auto-generated in the specified childlogdir without you needing to set stdout_logfile.
In order for your changes to be reflected you need to either restart the supervisor service with:
service supervisord restart
or
reload the config with supervisorctl reload and update the config in the running processes with supervisorctl update.
Documentation on this can be found at http://supervisord.org/logging.html#child-process-logs

How to store logs in MongoDB

I am using the Fluentd data collector to store Apache httpd logs in MongoDB. I made the necessary changes in the td-agent configuration file, like:
<source>
@type tail
format apache2
path C:\Program Files (x86)\Apache Group\Apache2\logs\access.log
tag mongo.apache
</source>
and
<match mongo.**>
# plugin type
@type mongo
# mongodb db + collection
database apache
collection access
# mongodb host + port
host localhost
port 27017
# interval
flush_interval 10s
# make sure to include the time key
include_time_key true
</match>
After making all the necessary changes, I tested the configuration by sending requests to the Apache server with ApacheBench:
ab -n 100 -c 10 http://localhost/
The requests work fine, but the log files were not stored in MongoDB.
I did this on Windows.
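One thing worth checking on Windows (an assumption, not a confirmed diagnosis): Fluentd's in_tail can be picky about path syntax, and forward slashes are generally the safer choice in the config; a pos_file also lets tailing resume across restarts. A sketch of the adjusted source block, with a hypothetical pos_file location:
<source>
  @type tail
  format apache2
  path C:/Program Files (x86)/Apache Group/Apache2/logs/access.log
  pos_file C:/fluentd/access.log.pos
  tag mongo.apache
</source>
Also verify that the fluent-plugin-mongo plugin is installed in the td-agent environment; without it, the match block cannot load.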

ERR_CONTENT_LENGTH_MISMATCH on nginx and proxy on Chrome when loading large files

I'm getting the following error in my Chrome console:
GET http://localhost/grunt/vendor/angular/angular.js net::ERR_CONTENT_LENGTH_MISMATCH
This only happens when simultaneous requests are sent towards nginx, e.g. when the browser's cache is empty and the whole app loads. Loading the resource above as a single request succeeds.
Here are the headers for this request, copied from Chrome:
Remote Address:127.0.0.1:80
Request URL:http://localhost/grunt/vendor/angular/angular.js
Request Method:GET
Status Code:200 OK
Request Headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8,de;q=0.6,pl;q=0.4,es;q=0.2,he;q=0.2,gl;q=0.2
Cache-Control:no-cache
Connection:keep-alive
Cookie:gs_u_GSN-265185-D=1783247335:2567:5000:1377697930719
Host:localhost
Pragma:no-cache
Referer:http://localhost/grunt/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.122 Safari/537.36
Response Headers:
Accept-Ranges:bytes
Cache-Control:public, max-age=0
Connection:keep-alive
Content-Length:873444
Content-Type:application/javascript
Date:Tue, 23 Sep 2014 11:08:19 GMT
ETag:"873444-1411465226000"
Last-Modified:Tue, 23 Sep 2014 09:40:26 GMT
Server:nginx/1.6.0
The real size of the file:
$ ll vendor/angular/angular.js
-rw-rw-r-- 1 xxxx staff 873444 Aug 30 07:21 vendor/angular/angular.js
As you can see, Content-Length and the real size of the file are the same, which is weird.
And the nginx configuration to this proxy:
location /grunt/ {
proxy_pass http://localhost:9000/;
}
Any ideas?
Thanks
EDIT: I found more info in the error log:
2014/09/23 13:08:19 [crit] 15435#0: *8 open() "/usr/local/var/run/nginx/proxy_temp/1/00/0000000001" failed (13: Permission denied) while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /grunt/vendor/angular/angular.js HTTP/1.1", upstream: "http://127.0.0.1:9000/vendor/angular/angular.js", host: "localhost", referrer: "http://localhost/grunt/"
Adding the following line to the nginx config was the only thing that fixed the net::ERR_CONTENT_LENGTH_MISMATCH error for me:
proxy_buffering off;
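For reference, proxy_buffering is valid at http, server, and location level; scoped to the proxied location from the question, it would look like this (a sketch):
location /grunt/ {
    proxy_pass http://localhost:9000/;
    proxy_buffering off;
}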
It seems that under pressure, nginx tried to buffer angular.js to disk (its proxy_temp directory) and couldn't due to permission issues. Here's what solved this issue:
root@amac-2:/usr/local/var/run/nginx $ chown -R _www:admin proxy_temp
_www:admin might be different in your case, depending on which user owns the nginx process. See more information on ServerFault:
https://serverfault.com/questions/534497/why-do-nginx-process-run-with-user-nobody
I tried all of the above and still couldn't get it to work, even after resorting to chmod 777. The only thing that solved it for me was to disable buffering to temp files entirely:
proxy_max_temp_file_size 0;
While not a proper fix and no good for production use, this was OK for me since I'm only using nginx as part of a local development setup.
For me the remedy was these two settings:
proxy_max_temp_file_size 0;
proxy_buffering off;
Add them in /etc/nginx/nginx.conf, between the lines client_max_body_size 128M; and server_names_hash_bucket_size 256;:
http {
client_max_body_size 128M;
proxy_max_temp_file_size 0;
proxy_buffering off;
server_names_hash_bucket_size 256;
}
ps aux | grep "nginx: worker process"
After executing the above command you'll see which user the nginx worker processes run as, e.g.:
www-data 25356 0.0 0.0 68576 4800 ? S 12:45 0:00 nginx: worker process
www-data 25357 0.0 0.0 68912 5060 ? S 12:45 0:00 nginx: worker process
Now run the command below to give that user ownership of nginx's temp directory:
chown -R www-data:www-data /var/lib/nginx/
Hope it will work.
For us, it turned out that our server's rather small root partition (i.e. /) was full.
It had mountains of logs and files from users in /home. Moving all that cruft out to another mounted drive solved things.
Just wanted to share as this can be another cause of the problem.
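A quick way to check for this condition, assuming standard coreutils:
df -h /                  # free space on the root filesystem
du -sh /var/log /home    # likely candidates for runaway growth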
If somebody ran nginx as a different user in the past, ownership of the cache folder may be wrong. I got:
/var/cache/nginx# LANG=C ls -l proxy_temp/
total 40
drwx------ 18 nginx nginx 4096 Jul 14 2016 0
drwx------ 19 nginx nginx 4096 Jul 14 2016 1
drwx------ 19 nginx nginx 4096 Jul 14 2016 2
drwx------ 19 nginx nginx 4096 Jul 14 2016 3
drwx------ 19 nginx nginx 4096 Jul 14 2016 4
drwx------ 19 nginx nginx 4096 Jul 14 2016 5
drwx------ 19 nginx nginx 4096 Jul 14 2016 6
drwx------ 18 nginx nginx 4096 Jul 14 2016 7
drwx------ 18 nginx nginx 4096 Jul 14 2016 8
drwx------ 18 nginx nginx 4096 Jul 14 2016 9
while nginx was running as www-data. So the solution is to change ownership of nginx’s cache directory to the user nginx is running under. In the present case:
/var/cache/nginx# chown -R www-data:www-data *
or, even simpler
# rm -r /var/cache/nginx/*
What worked for me was to change the proxy_temp_path to a folder with read/write permissions (777):
location / {
proxy_temp_path /data/tmp;
}
I had the same issue. Increasing the free space of the disk where nginx is installed solved the issue.
For macOS with nginx installed with homebrew, I used the following steps to track down and fix the issue.
Run nginx -h to find your error log location. Look for the following line:
-e filename : set error log file (default: /opt/homebrew/var/log/nginx/error.log)
Take your error log path and tail it to see what error it's reporting when you try to load the page.
tail -f /opt/homebrew/var/log/nginx/error.log
From that I saw that one of the lines showed a permission denied error:
open() "/opt/homebrew/var/run/nginx/proxy_temp/9/01/0000000019" failed (13: Permission denied) while reading upstream
This means that the cache/temp directories have incorrect permissions for the nginx user.
Stop nginx
brew services stop nginx
Delete all the temp folders (location from the permission error log line)
sudo rm -rf /opt/homebrew/var/run/nginx/*
Start nginx again
brew services start nginx
After doing this, nginx will recreate the temp folders with the correct permissions. At this point you should be good; try reloading the page that was failing before.
When I tried the aforementioned solutions they didn't fix the issue. I also made the location writable, but it didn't work. Then I realized I had done something wrong elsewhere: in the path used to store the file, I had something like
"/storage" + fileName + ".csv"
I was testing in a Windows environment and it worked great. But later, when we moved the application to a Linux environment, it stopped working. So I had to change it to
"./storage" + fileName + ".csv"
and it started working normally.
For me, the solution was:
sudo chown -R nginx:nginx /var/cache/nginx/fastcgi_temp/
For anyone using HAProxy as the proxy and getting these exact same symptoms: increasing the timeout values resolved the issue for me:
timeout connect 5000
timeout client 50000
timeout server 50000
The only thing that helped me was the following settings in the nginx site .conf file:
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
For me, it was the same error but on a different folder, /var/lib/nginx/. I changed the owner to nginx with
chown -R nginx:nginx /var/lib/nginx/
That did not work. Then I checked which user the nginx worker processes ran as:
ps aux | grep nginx
It was running as nginx, but when I looked through the nginx.conf file I found that the user directive specified nginx without any group. So I added the group nginx, so that it read:
user nginx nginx;
Now I rebooted the system and the issue was fixed. I suppose I could have just used
chown -R nginx /var/lib/nginx/
and that may have worked as well. So if anyone is facing this issue, first look in /var/log/nginx and check where the permission error occurred.
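For example, a quick way to locate the failing path, assuming the default error log location:
grep "Permission denied" /var/log/nginx/error.log | tail -n 5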