PhpStorm Mess Detector scanning temp files

I can't seem to find a way to stop Mess Detector from scanning temp files. Shouldn't it only be scanning files within my working directory?
I keep getting the following error, and it sometimes causes the application to hang as it attempts to scan this file.
PHP Mess Detector
phpmd: Can not correctly run the tool with parameters:
C:/Users/Work/AppData/Local/Temp/phpmd_temp.tmp6243/modules/Configure/Http/Controllers/PaymentController.php

Related

Is there an alternative way to open a huge SQL file other than the MySQL command line, EmEditor, and BigDump?

I am having lots of trouble opening an SQL script. It's a database dump; I managed to see a little bit of it before the things I tried crashed.
This is the script file that I get: https://steam.internet.byu.edu/
It's quite a popular database.
The zipped file is 17 GB and the unzipped version is around 168 GB. I need to get the "steamid" values from the "Player_Summaries" table; that column is all I need. I've tried the following so far:
- I read a lot of 'opening a huge SQL file' posts here and tried to source the file via the MySQL command line client. It ran almost all night but eventually crashed.
- I tried EmEditor; I installed the latest version and it also crashed after opening 6-7% of the file. It reports an "unexpected crash" with a different error reason every time.
I am not even trying to run/execute the file; I was just going to copy the lines I need, and that's all. All help/advice is appreciated.
Sorry for my bad English.
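Since the goal is a single column, one option is to never open the dump at all and instead stream it, keeping only the INSERT lines for Player_Summaries. Below is a minimal Node.js sketch of that idea; steam_dump.sql and steamids.txt are placeholder file names, and it assumes mysqldump-style lines starting with INSERT INTO `Player_Summaries` with steamid as the first, unquoted value in each row tuple, so adjust the prefix check and the regex to the real dump.

```js
// stream-steamids.js - scan the dump as a stream instead of loading it anywhere.
const fs = require('fs');
const readline = require('readline');

const input = fs.createReadStream('steam_dump.sql');      // placeholder path to the unzipped dump
const output = fs.createWriteStream('steamids.txt');
const rl = readline.createInterface({ input, crlfDelay: Infinity });

rl.on('line', (line) => {
  if (!line.startsWith('INSERT INTO `Player_Summaries`')) return;
  // mysqldump packs many rows into one INSERT: (...),(...),(...);
  // grab the first value of each tuple, assumed here to be the numeric steamid.
  for (const m of line.matchAll(/\((\d+),/g)) {
    output.write(m[1] + '\n');
  }
});

rl.on('close', () => output.end());
```

The same first-pass filtering could be done with grep; the point is to process the file as a stream rather than load it into an editor or into MySQL.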

Explore multiple GB of JSON data

I've got a live Firebase app with a database that's about 5 GB in size. The Firebase dashboard refuses to show me the contents of my database and just fails to load every time, presumably because the thing is too big. I've been digging around for some time now in search of a tool that would let me come up with an ERD of my data. Help?
Atom crashes, vim takes forever and doesn't load anything, and jq simply spits out a formatted version of my data. I've tried a couple of Java tools to generate JSON schemas, but they crash after a while; most Python programs that do the same don't even start properly.
How would you explore 5 GB of JSON data?
Most file editors have line pagination, so your file should load, unless it's a single-line file.
In that case, you can use sed or jq to reformat the file so that it has more than one line; after that you should be able to open it.
If you need to extract data, you could use cat file.json | grep "what you need to extract".
That should work even on a single-line 5 GB file.
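If you would rather stay in Node than shell out to grep, the same idea (scan the file as a stream and pull out matches without ever loading the whole thing) looks roughly like this. It is only a sketch: data.json is a placeholder path, the field name "email" and the 256-character overlap are assumptions, and the overlap must be longer than any single match you expect.

```js
// scan-big-json.js - stream a multi-GB (possibly single-line) JSON file and
// print every occurrence of a field, without loading the file into memory.
const fs = require('fs');

const pattern = /"email"\s*:\s*"([^"]*)"/g;   // hypothetical field; change to what you need
const OVERLAP = 256;                          // must be longer than any single match
let tail = '';

const stream = fs.createReadStream('data.json', { encoding: 'utf8' });

stream.on('data', (chunk) => {
  const text = tail + chunk;
  const safeEnd = text.length - OVERLAP;
  for (const m of text.matchAll(pattern)) {
    // Matches that start inside the overlap are re-scanned with the next chunk,
    // so they are neither lost at a chunk boundary nor printed twice.
    if (m.index < safeEnd) console.log(m[1]);
  }
  tail = text.slice(-OVERLAP);
});

stream.on('end', () => {
  for (const m of tail.matchAll(pattern)) console.log(m[1]);
});
```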

NodeJS - JSON as DB & Corrupted data on close

I have a large file that is almost constantly updated (around 10 MB now, but it will grow to over 100 MB).
It is JSON and I am using it as a database. I don't want to use Mongo in this case because everything needs to live self-contained on a client machine (I am using Electron to package it), and because it will be distributed to Windows I am also trying to avoid any compiled code.
The problem is that the file gets corrupted when Node closes. I have tried saving to a .tmp file and then renaming it once done, which has reduced the number of corruption incidents, but is there a better way (or a native JS DB system)? I don't need querying, just load and save.
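For reference, the save-to-temp-then-rename approach described above looks roughly like this in Node; it is a minimal sketch, assuming a single writer process and that the serialized data fits in memory. The extra fsync is the usual missing piece: the rename swaps the file name in one step, but on its own it does not guarantee the temp file's bytes have reached the disk, so an interrupted shutdown can still leave a truncated file.

```js
// save-json.js - write the new contents to a temp file, flush it, then rename it
// over the real file so readers never see a half-written database.
const fs = require('fs');
const path = require('path');

function saveJsonSync(file, data) {
  const tmp = file + '.tmp';
  const json = JSON.stringify(data);

  const fd = fs.openSync(tmp, 'w');
  try {
    fs.writeSync(fd, json);
    fs.fsyncSync(fd);           // make sure the temp file's bytes are actually on disk
  } finally {
    fs.closeSync(fd);
  }

  fs.renameSync(tmp, file);     // replaces the old file in a single step on the same volume
}

// usage: hypothetical db.json next to the app
saveJsonSync(path.join(__dirname, 'db.json'), { users: [], lastSaved: Date.now() });
```

Doing the save synchronously (or awaiting it) before the app quits also matters; a write that is still in flight when Electron tears the process down is another common source of this kind of corruption.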

SSIS - File system task, Create directory error

I got an error after running an SSIS package that has worked for a long time.
The error was thrown in a task used to create a directory (like this: http://blogs.lessthandot.com/wp-content/uploads/blogs/DataMgmt/ssis_image_05.gif) and says "Cannot create because a file or directory with the same name already exists", but I am sure no directory or file with that name existed.
Before throwing the error, the task created a file with no extension, named after the expected directory. The file has a modified date more than 8 hours prior to its created date, which is weird.
I checked the date on the server and it is correct. I also tried running the package again and it worked.
What happened?
It sounds like some other process or person made a mistake in that directory and created a file that then blocked your SSIS package's directory create command, not a problem within your package.
Did you look at the security settings of the created file? It might have shown an owner that wasn't the credentials your SSIS package runs under. That won't help if you have many packages or processes that all run under the same credentials, but it might provide useful information.
What was in the file? The contents might provide a clue how it got there.
Did any other packages/processes have errors or warnings within a half day of your package's error? Maybe it was the result of another error that you could locate through the logs of the other process.
Did your process fail to clean up after itself on the last run?
Does that directory get deleted at the start of your package run, at the end of your package run, or at the end of the run of the downstream consumer of the directory contents? If your package deletes it at the beginning, then something that slows the delete could present a race condition that normally resolves satisfactorily (the delete finishes before the create starts) but once in a while goes the wrong way.
Were you (or anyone) making a copy or scan of the directory in question? Sometimes copy programs (e.g. FTP) or scanning programs (antivirus, PII scans) make a temporary copy of a large item being processed (i.e. that directory), and maybe one got interrupted and left the temp copy behind.
If it's not repeatable then finding out for sure what happened is tough, but if it happens again try exploring the above. Also, if you can afford to, you might want to increase logging. It takes more CPU and disk space and makes reviewing logs slower, but temporarily increasing log details can help isolate a problem like that.
Good luck!

Data flow task not completing and the generated flat file is left in a locked state

I have an SSIS package deployed on a 64-bit machine. The package runs fine when there are only a small number of records to be extracted and written to a file. We are using a data flow task to write to the file. However, when we run the package for a large data extract, the data flow task does not complete and the file stays locked. Please suggest a solution.
Are you logging the progress of your package? Do you see anything in there? If not, can you log the progress?
If you are writing to a UNC path, I would suggest writing locally and then moving the file to where you want it.
Some debugging directions: check whether you get the same problem with a different type of target, like Excel or a SQL table. Check which process is currently using the file (use a tool like Process Explorer for this) and check whether the file shows any contents at any intermediate stage. By the way, are you using transactions?