Monitor a text file for a single string/line with Zabbix?

I have a Windows server with an application running on it. Using PowerShell I'm checking a file from said application for changes (it is compared against another file) and outputting the result as a text file on the C:\ drive. That file just contains the string "True" or "False" (no quotes) depending on whether the file changed or not.
How can I check this file for a string value? I tried:
{myserver:system.run[powershell.exe -Command "Get-Content 'C:\changed.txt' | Select-String 'True' -quiet"].regexp("False")}
But all that returns is ??F.

system.run is not really needed here. Instead you could use log file monitoring:
log["C:\changed.txt",True]
Remember, this item must be configured as an active check.
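For reference, a hedged sketch of how the pieces could fit together, using the old-style trigger syntax from the question (the host name myserver is carried over from it, and this is one possible trigger, not the only option):
Item (type: Zabbix agent (active)): log["C:\changed.txt",True]
Trigger: {myserver:log["C:\changed.txt",True].str(True)}=1
The str() function checks whether the last collected line contains "True", so the trigger fires once the PowerShell script writes True to the file.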

Related

ADF Merge-Copying JSON files in Copy Data Activity creates error for Mapping Data Flow

I am trying to do some optimization in ADF. The setup: a third-party tool copies one JSON file per object to a BLOB storage container. These feed a Mapping Data Flow. The individual files written by the third-party tool work great. If I copy these files to a different BLOB folder using an Azure Copy Data activity, the MDF can no longer parse the files and gives an error: "JSON parsing error, unsupported encoding or multiline." I started this with the Merge Files copy behavior, but the outcome is the same regardless of which copy behavior I choose.
2ND EDIT: After another day's work, I have found that the Copy Activity Merge Files from JSON to JSON definitely adds an EOL character to each single JSON object as it gets imported into the merge file. I have also found that the MDF definitely fails with those EOL characters in the merge file. If I remove all EOL characters from the merge file, the same MDF will work. For me, this is a bug: the copy activity is adding a character that breaks the MDF. There also seems to be a second issue in some of my data that doesn't fail as an individual file but does break the MDF when concatenated with the others, but I have tested the basic behavior on 1-5000 files and been able to repeat the fail/success tests.
I took the original file and the copied file and ran them through all sorts of tests. Here is what I eventually found when I dumped them into Notepad++:
Copied file:
{"CustomerMasterData":{"Customer":[{"ID":"123456","name":"Customer Name",}]}}\r\n
Original file:
{"CustomerMasterData":{"Customer":[{"ID":"123456","name":"Customer Name",}]}}\n
If I change the copied file from ending with \r\n to \n, the MDF can read the file again. What is going on here? And how do I change the file write behavior or the MDF settings so that I can concatenate or copy files without the CRLF?
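As a side note, a quick way to test the line-ending theory on a downloaded copy of the merged file is to normalize it yourself before feeding it back to the MDF. A minimal PowerShell sketch, assuming a hypothetical local copy at C:\merged.json:
# Read the whole file, replace CRLF with LF, and write it back without appending anything
$text = [System.IO.File]::ReadAllText('C:\merged.json')
[System.IO.File]::WriteAllText('C:\merged.json', $text.Replace("`r`n", "`n"))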
EDIT: NEW INFORMATION -- It seems on further review like maybe the minification/whitespace removal is the culprit. If I download the file created by the ADF copy and format it using a JSON formatter, it works. Maybe the CRLF -> LF issue masked something else. I'm not sure what to do at this point, but it's super frustrating.
Other possibly relevant information:
Both the source and sink JSON datasets are set to use UTF-8 (not default(UTF-8), although I tried that). Would a different encoding fix this?
I have tried remapping schemas, creating new data sets, creating new Mapping Data Flows, still get the same error.
EDITED for clarity based on comments:
In the case of a single JSON element in a file, I can get this to work -- data preview returns the same success or failure as the pipeline when run.
In the case of multiple documents merged by ADF, I get the parsing error instead.
Repro: Create any valid JSON as a single file, put it in blob storage, and use it as a source in a mapping data flow with any sink operation. Create a second file with the same schema and get them both to run in the same flow using wildcard paths. Then use a Copy Activity with Merge Files as the sink copy behavior and Array of Objects as the file pattern, and try to make your MDF use this new file. If it fails, download the file created by ADF, run it through a formatter (I have used both VS Code -> "Format Document" from the standard VS Code JSON extension, and the VS 2019 "Unminify" command) and reupload... It should work now.
Don't know if you already solved the problem: I came across the exact same problem 3 days ago, and after several tries I found a solution:
In the copy data activity, under sink settings, use "Set of objects" (instead of "Array of objects") as the File pattern, so that the merged big JSON has the value of each original small JSON file written on its own line.
In the MDF, after setting up the wildcard paths with the *.json pattern, under JSON Settings select "Document per line" as the Document form.
After that you should be good to go; at least it solved my problem. The automatically written CRLF of the "Array of objects" setting in the copy data activity shouldn't be hardcoded; MSFT should provide an option to omit it in the settings in the future.
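To illustrate the difference between the two file patterns, here is a simplified sketch using a shortened version of the JSON from the question (the field values are placeholders):
Array of objects (one JSON array, what the Copy Activity merge produced):
[{"ID":"123456","name":"Customer Name"},
{"ID":"123457","name":"Other Name"}]
Set of objects (one complete JSON object per line, which the MDF then reads with "Document per line"):
{"ID":"123456","name":"Customer Name"}
{"ID":"123457","name":"Other Name"}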
According to my test:
1. The copy data activity doesn't change Unix (LF) line endings to Windows (CRLF).
2. The MDF can parse both Unix (LF) and Windows (CRLF) files.
Maybe there is something else wrong.
By the way, I see there is a trailing comma after "name":"Customer Name" in your original file; I deleted it before my test.

JMeter reads %3CEOF%3E from CSV file

I use CSV Data Set Config in JMeter to provide username/password data to a test suite. In some cases it reads %3CEOF%3E (the URL-encoded form of <EOF>) from the file instead of data. The file is located in the /bin folder.
Structure of file:
username1,password1
username2,password2
There aren't any empty lines at the end of the file.
Recycle on EOF: True
Stop Thread on EOF: False
Although you should not be seeing this issue normally, you can work around it by putting your request under an If Controller and setting the following condition using the __groovy() function:
${__groovy(!vars.get('foo').equals('<EOF>'),)}
Replace foo with the variable reference name from the CSV Data Set Config.
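For instance, if the CSV Data Set Config variable names are username,password (matching the file structure above; the names are an assumption), the condition could look like:
${__groovy(!vars.get('username').equals('<EOF>'),)}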
I faced the exact same problem and ran into a solution by accident. Although the txt file had no empty lines when opened with Notepad, when I opened the same file using Notepad++ I found an empty line. I removed it in Notepad++ and saved, and the problem was solved.

How to pass a variable as the file name to JMeter CSV Data Set Config?

I am using multiple CSV files in one thread for comparison purposes.
Here the first CSV Data Set Config returns the file names:
1.csv
5.csv
1000.csv
Now I want to pass the above file names to a second CSV Data Set Config:
C:\\Softwares\\Installed\\jmeter-3.0\\bin\\TestData\\files\\${filename}
Is it possible in JMeter? Can anyone help me resolve the problem?
Thanks,
Vijay
CSV Data Set Config is initialized before any JMeter Variables are populated, therefore your ${filename} will never be resolved and you will get "File not found" errors.
The options are:
Consider using the __CSVRead() function instead of CSV Data Set Config (see the sketch after this list).
Switch to JMeter Properties instead of JMeter Variables.
Change ${filename} to ${__P(filename)} (see __P() function documentation for syntax)
Define the filename property. It can be done in 2 ways:
Via the user.properties file. Add the following line to the file:
filename=C:\\Softwares\\Installed\\jmeter-3.0\\bin\\TestData\\files\\1.csv
A JMeter restart will be required to pick up the property.
Via -J command-line argument:
jmeter -Jfilename=C:\Softwares\Installed\jmeter-3.0\bin\TestData\files\1.csv
See Apache JMeter Properties Customization Guide for more information on JMeter properties and ways of setting, getting and overriding them
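Regarding the __CSVRead() option, here is a hedged sketch of how it could look, assuming the first CSV Data Set Config stores the file name in ${filename} and each file has the username in column 0 and the password in column 1 (the column layout is an assumption):
${__CSVRead(C:\Softwares\Installed\jmeter-3.0\bin\TestData\files\${filename},0)}
${__CSVRead(C:\Softwares\Installed\jmeter-3.0\bin\TestData\files\${filename},1)}
${__CSVRead(C:\Softwares\Installed\jmeter-3.0\bin\TestData\files\${filename},next)}
Unlike CSV Data Set Config, __CSVRead() is evaluated at runtime, so the nested ${filename} variable is resolved; the next alias moves the pointer to the next row of the file.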

SSIS package: ForEach Container is picking undefined file from the source location

My SSIS package's ForEach container picks up .gpg files rather than text files, even though I have put "*.txt" in the Files filter. Any help will be appreciated.
This is expected, documented behavior. From MSDN:
Use wildcard characters (*) to specify the files to include in the collection. For example, to include files with names that contain “abc”, use the following filter: *abc*.
When you specify a file name extension, the enumerator also returns files that have the same extension with additional characters appended. (This is the same behavior as that of the dir command in the operating system, which also compares 8.3 file names for backward compatibility.) This behavior of the enumerator could cause unexpected results. For example, you want to enumerate only Excel 2003 files, and you specify "*.xls". However, the enumerator will also return Excel 2007 files because those files have the extension, ".xlsx".
You can use an expression to specify the files to include in a collection, by expanding Expressions on the Collection page, selecting the FileSpec property, and then clicking the ellipsis button (…) to add the property expression. For more information about dynamically selecting specified files, see SSIS–Dynamically set File Mask: FileSpec.
Try using *txt instead of *.txt so it doesn't treat "txt" as an extension and include files that end in ".txt.gpg".

SSIS For Each Loop crashes the flat file connection

I created a simple SSIS package to import a flat file (.txt) into a database table. I tested that and it works perfectly. Since I have several files to import, I added a foreach loop to go through all the files and added the variables as recommended in several examples found on the net, but now my flat file connection manager returns the error "A valid file name must be selected." and the package will not run. I have so far been unsuccessful in finding a solution for this issue and would appreciate any suggestions from the SSIS gurus of this forum. Many thanks in advance!
Here is what I have in the way of variables:
SourceFileFolder which is the path to the folder that contains the files
FileName which is a string containing one of the names of the files I am seeking to import
SourceFilePath which is an expression-driven variable that concatenates the previous two variables. I can click "Evaluate Expression" and copy and paste the result into Windows Explorer and open the file
ArchivePath which is an expression-driven variable that creates the path to archive the file to once it is processed.
As the message says, this is related to your connection manager not getting a valid connection string. This can be handled as follows:
First of all, clear the expression on the SourceFilePath variable.
With your Foreach Loop Container, set up the collection to use your variable SourceFileFolder as the Folder; you could also just hardcode the folder name, C:\ for instance. Also make sure your folder is qualified correctly, i.e. that it finishes with a slash: C: won't work but C:\ will.
Next you need to map the fully qualified file name to your other variable, SourceFilePath, on the Variable Mappings page. This should now store the full name of the file the loop has found in the SourceFilePath variable, for instance C:\File.txt. You can now use this as a connection string expression on your file connection manager.
Under the properties of the connection manager, make sure the expression is set on ConnectionString and then use the SourceFilePath variable.
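A minimal sketch of that property expression, using the variable name from the question (ConnectionString is the property being driven):
@[User::SourceFilePath]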
ALSO MAKE SURE DELAY VALIDATION IS SET TO TRUE
This hopefully should mean you can loop through the files.