Interplanetary File System - where are logs? - ipfs

I run the IPFS daemon on my server and it works as expected for some amount of time, roughly a day. I can write and read files for a while, but then, for reasons unknown to me, it stops working.
I searched Google with phrases like "interplanetary file system logs" or "ipfs where are logs", but the results are not satisfying and are mostly unrelated.
My question is: does IPFS have a logging system by default? Does it store logs somewhere? Or maybe it is possible to force it to store some logs somewhere?
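For what it's worth, a hedged sketch of how the go-ipfs daemon's logging can be captured and tuned (exact subcommands may differ between versions):
# The daemon writes its log output to stderr rather than to a file by default,
# so one option is simply to capture that stream:
ipfs daemon > ~/ipfs.log 2>&1 &
# A running daemon also exposes its logging subsystem through the CLI:
ipfs log ls               # list logging subsystems
ipfs log level all debug  # raise verbosity for all subsystems
ipfs log tail             # stream log messages from the running daemon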

Related

Apache: log storage into MySQL

Method 1: Pipe Log
Recently I read an article about how to save Apache logs in a MySQL database. Briefly, the idea is to pipe each log entry to MySQL:
# Format log as a MySQL query
LogFormat "INSERT INTO apache_logs \
set ip='%h',\
datetime='%{%Y-%m-%d %H:%M:%S}t',\
status='%>s',\
bytes_sent='%B',\
content_type='%{Content-Type}o',\
url_requested='%r',\
user_agent='%{User-Agent}i',\
referer='%{Referer}i';" \
mysql_custom_log
# execute queries
CustomLog "|/usr/bin/mysql -h 127.0.0.1 -u log_user -plog_pass apache_logs" mysql_custom_log
# save queries to log file
CustomLog logs/mysql_custom_log mysql_custom_log
Question
It seems that untreated user inputs (i.e. user_agent and referer) would be passed directly to MySQL.
Therefore, is this method vulnerable to SQL injection? If so, is it possible to harden it?
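For illustration, a hedged sketch of what such an injection could look like (the URL is a placeholder, apache_logs is the table from the format above, and this assumes Apache passes single quotes in logged headers through unescaped, which it does by default):
curl -A "x'; DROP TABLE apache_logs; -- " http://your-server/
# The logged line would then contain, in part:
#   ... user_agent='x'; DROP TABLE apache_logs; -- ', referer='...';
# and the piped mysql client would execute it as two statements.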
Method 2: Apache module
mod_log_sql is an Apache module that seems to do something similar, i.e. it "logs all requests to a database". According to the documentation, the module has several advantages:
power of data extraction with SQL-based log
more configurable and flexible than the standard module [mod_log_config]
links are kept alive in between queries to save speed and overhead
any failed INSERT commands are preserved to a local file
no more tasks like log rotation
no need to collate/interleave the many separate logfiles
However, despite all these advantages, mod_log_sql doesn't seem to be popular:
the documentation doesn't mention a single production-level user
few discussions around the web
several periods without a maintainer
Which sounds like a warning to me (although I might be wrong).
Questions
Any known reason why this module doesn't seem to be popular?
Is it vulnerable to SQL injection? If so, is it possible to harden it?
Which method should have better performance?
The pipe log method is better because it creates a stream between your log and your database, which directly benefits insertion and search performance. Another point in favor of the pipe log is the possibility of using a NoSQL database that is optimized for searching or insertion via specific queries; one example is the ELK Stack: Elasticsearch + Logstash (log parser + stream) + Kibana.
I would recommend some reading related to that: https://www.guru99.com/elk-stack-tutorial.html
Regarding your question about SQL injection: it depends on how you communicate with your database, regardless of the type of database or the method used to store your logs. You need to secure it yourself, for example by using tokens.
Regarding the Apache module: the intention was to do something similar to a pipe log, but the most recent activity on it is from 2006 and the documentation is not user friendly.
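As a rough illustration of the Elasticsearch side of that idea (the index name, type, and field values below are assumptions; in practice Logstash would parse the access log and ship documents like this for you):
curl -XPOST 'http://localhost:9200/apache_logs/entry' -d '{
  "ip": "203.0.113.7",
  "datetime": "2015-06-01T12:00:00",
  "status": 200,
  "url_requested": "GET /index.html HTTP/1.1",
  "user_agent": "Mozilla/5.0"
}'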

How to Delete MySQL Log File

I am using Windows 7, my computer name is 'COREI5', and I have a 1 TB hard drive.
My hard drive is showing as full, but I was not able to locate which file was so huge that it was blocking the drive space. Now it seems I have figured out the source:
C:\ProgramData\MySQL\MySQL Server 5.6\data\COREI5-PC-slow
So it seems this 'COREI5-PC-slow' file is the culprit, as it shows a size of approximately 640 GB. Note that the file is shown as a txt file.
My queries are:
1) Will deleting this file harm my computer? (I am getting the error "You need permission from the computer administrator to make changes".)
2) I am not able to delete this file (even after I logged in as administrator).
3) I also tried to give it special permissions, but that is not working.
Any solution?
Note: I am not very savvy with such programs and commands, so please give details or keep it simple.
I suspect the file is the "slow query" log in the MySQL data directory.
To confirm, connect to the MySQL database and run this query:
SHOW VARIABLES LIKE 'slow%'
Variable_name        Value
-------------------  ----------------------------------------------------
slow_launch_time     2
slow_query_log       OFF
slow_query_log_file  C:\ProgramData\MySQL\MySQL Server\MyLaptop-slow.log
I suspect that in your case, slow_query_log is set to ON. If the filename shown for slow_query_log_file matches the file on your system, you can safely turn off the slow_query_log and then delete the file.
To turn off the slow query log:
SET GLOBAL slow_query_log = 0
Re-run the SHOW VARIABLES LIKE 'slow%' to confirm it's off.
And then you can delete the file from the filesystem. (If you are doing it from the GUI, don't just delete the file and send it to the Recycle Bin; hold down the Shift key when you click Delete, and it will ask whether you want to "permanently" delete the file.)
I'd be concerned that MySQL has logged 640GB worth of slow queries.
The long_query_time setting determines how long a query must execute before it is considered slow. There is also a setting, log_queries_not_using_indexes, that sends every query that doesn't use an index to the slow query log, even if it runs faster than long_query_time.
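For reference, a hedged sketch of checking and disabling these from the command line (this assumes the mysql client is on your PATH; the credentials are placeholders):
mysql -u root -p -e "SHOW VARIABLES LIKE 'long_query_time'"
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_queries_not_using_indexes'"
mysql -u root -p -e "SET GLOBAL slow_query_log = 0"
To keep the slow query log off across restarts, also put slow_query_log = 0 under the [mysqld] section of my.ini and restart the MySQL service.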
While you're at it, check that the general log is turned off as well.
SHOW VARIABLES LIKE 'general%'
This question might better be asked on dba.stackexchange.com
For hunting down huge space consumers, I recommend TreeSize Free from JAM Software: an easy-to-use, old-style Windows Explorer interface that gives you the total size of directories and files.
My final objective was to delete the file stated above, and I was able to achieve that with the help of Shift+Delete and then a restart of the PC.
It worked - thank you once again.

call graph for MySQL sessions

I am trying to create a valgrind (cachegrind) analysis of MySQL client connections.
I am running valgrind with --trace-children=yes.
What I want to find is one of the internal method calls, to see the call graph when it is being used...
After running valgrind --trace-children=yes ./bin/mysqld_safe
I get many dump files that are written at that moment.
I wait 5 minutes (to let the new files that I expect to be created get a different "last modified" date).
After these 5 minutes I open 30 sessions, flood the system with small transactions, and when I am done I shut down MySQL.
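For context, a minimal sketch of that workflow (the shutdown command and paths are assumptions; each traced process writes its own cachegrind.out.<pid> file, which is only finalized when the process exits):
valgrind --tool=cachegrind --trace-children=yes ./bin/mysqld_safe
# ...open the client sessions and run the test workload here...
mysqladmin -u root -p shutdown   # shut the server down cleanly so the profiles are flushed
cg_annotate cachegrind.out.<pid>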
Now the questions:
1. After running 30 transactions and shutting down the system, only 3 files are modified. I expected to see 30 files, because I thought MySQL spawns processes. So first: can someone confirm that MySQL spawns threads, not processes, for each session?
2. I see three different database log calls: one to a DUMMY, one to the binlog, and one to the InnoDB log. Can someone explain why the binlog and the DUMMY are there, and what the difference between them is? (I guess the DUMMY comes from InnoDB, but I don't understand why the binlog is there if my first guess is true.)
3. Is there a better way to do this analysis?
4. Is there a tool like kcachegrind that can open multiple files and show the summary from all of them? (Or is it possible somehow within kcachegrind?)
Thanks!!
By the way, for people who extend and develop MySQL: there are many interesting things in there that could be improved...
I can only help you with some of your questions: yes, MySQL does not create processes but threads; see the manual on the command that lists what is currently being done by the server:
When you are attempting to ascertain what your MySQL server is doing,
it can be helpful to examine the process list, which is the set of
threads currently executing within the server.
(Highlighting by me.)
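(For reference, the command the manual excerpt refers to is presumably the one below; you can run it from any client connection:)
mysql -u root -p -e "SHOW FULL PROCESSLIST"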
Concerning the logs: the binary log is used for replication. It contains all executed statements (or changed rows) and is propagated to the slaves.
The InnoDB log is independent of the binary log and is used to ensure that InnoDB behaves in an ACID-compliant way. Transactions are written there first, and this file is used if the server crashes and InnoDB starts a recovery.
It is perfectly normal that both logs are written to on a normal server.
I cannot help you with your other questions, though. Maybe you want to ask them on dba.stackexchange.com.

elasticsearch search phase execution

I'm having issues and I don't know where to turn. Long story short, my web designer left me high and dry; I have no idea what he did, and he refuses to answer his phone. I have access to the main page, but after that I'm completely locked out and staring at a SearchPhaseExecutionException for every single product in my store. Any help would be much appreciated, as I am completely clueless about what to do. Here is the full error log, and I can post any additional information necessary to troubleshoot this problem:
SearchPhaseExecutionException at /category/1
Failed to execute phase [query], total failure; shardFailures {[_na_][product][0]: No active shards}{[_na_][product][1]: No active shards}{[_na_][product][2]: No active shards}{[_na_][product][3]: No active shards}{[_na_][product][4]: No active shards}
Somewhere on your web site/farm you have an elasticsearch server running. This server has an index called product, and I would guess this index contains information about products in your store. Currently, this elasticsearch server is experiencing some sort of an issue that made the index unavailable. It might be possible to tell you what is going on by looking at the log file of the elasticsearch server, which is different from the log file of your web server. Do you see any log files called elasticsearch.log?
By the way, since it might take several iterations to figure out what's going on, it might be easier to move this conversation to elasticsearch mailing list or #elasticsearch IRC channel on freenode.
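If you can get shell access to that server, a hedged first check might look like this (this assumes Elasticsearch listens on the default port 9200 on localhost; the log path is a common packaged default, not a certainty, and the _cat API only exists on newer versions):
curl 'http://localhost:9200/_cluster/health?pretty'
curl 'http://localhost:9200/_cat/indices?v'
tail -n 100 /var/log/elasticsearch/elasticsearch.log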
Sometimes this error happens because of the data: the data to be searched has to be cleaned, as Elasticsearch can crash on inputs like "[PREPARATION" or even "word:", because stray punctuation throws it off.
If you don't want to clean the data, you can just catch the exception and it will continue.

Incremental MySQL backup to Amazon S3

Looking at this now. Any script recommendation will do. I am using a Rails app, too.
The difference from the current scripts around is that they do a full backup; my MySQL database files are 120 MB+ for now, and that will increase over time. So I wonder whether there is an incremental method around.
This recent thread on mysql.com discusses it.
Basically, you have to set the server up to do binary logging and set a threshold for each log at whatever size increment you prefer for backing up. Then upload a complete backup once and start your binary logging from that point forward. After that, upload each log once it is closed and a new one is opened.
It is more complicated than that but I think that should get you started.
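A minimal sketch of that approach, assuming binary logging (log-bin) is already enabled in my.cnf and using the AWS CLI as one way to upload (the bucket name, paths and credentials are placeholders):
# One-time full backup, uploaded once:
mysqldump --single-transaction --flush-logs --master-data=2 --all-databases | gzip > full.sql.gz
aws s3 cp full.sql.gz s3://your-backup-bucket/full.sql.gz
# Periodically: close the current binary log, then upload the finished ones:
mysql -e "FLUSH BINARY LOGS"
for f in $(ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1); do
  aws s3 cp "$f" s3://your-backup-bucket/binlogs/
done
A real script would also track which logs have already been uploaded and eventually purge them (PURGE BINARY LOGS) once they are safely in S3.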