Converting evtx log to csv error

I am trying to convert an evtx log file to csv using Log Parser 2.2. I just want to copy all of the data into a csv.
LogParser "Select * INTO C:\Users\IBM_ADMI
N\Desktop\sample.csv FROM C:\Users\IBM_ADMIN\Desktop\Event
Logs\sample.evtx" -i:EVTX -o:csv
But I am getting the error below.
Error: Syntax Error: extra token(s) after query: 'Logs\sample.evtx'
Please assist in solving this error.

I know this has been a year, but if you (or other people) still need it, and for the sake of reference, this is what I do:
LogParser "Select * INTO C:\Users\IBM_ADMIN\Desktop\sample.csv FROM 'C:\Users\IBM_ADMIN\Desktop\Event Logs\sample.evtx'" -i:evt -o:csv
The correct input type is evt, not evtx.
Since there is a space in the Event Logs folder name, enclose the path in single quotes.

The problem was due to the space in the folder name Event Logs. I changed the folder name to a single word and it worked.

You have to convert the .evtx file to .csv, then you can read from that .csv file, like this:
// String command = "powershell.exe your command";
// Call PowerShell from the Java code (backslashes must be escaped in a Java string literal).
String command = "powershell.exe Get-WinEvent -Path C:\\windows\\System32\\winevt\\Logs\\System.evtx | Export-Csv system.csv";
File csvFile = new File("system.csv");   // the CSV that Export-Csv produces
Process powerShellProcess = Runtime.getRuntime().exec(command);
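For reference, a slightly fuller sketch along the same lines; the log path and output file name are simply the ones from the snippet above, and it waits for PowerShell to finish before the CSV is touched:

import java.io.File;

public class ExportSystemEventLog {
    public static void main(String[] args) throws Exception {
        // Ask PowerShell to dump the System event log to a CSV file.
        String command = "powershell.exe Get-WinEvent -Path C:\\windows\\System32\\winevt\\Logs\\System.evtx"
                + " | Export-Csv system.csv -NoTypeInformation";
        Process powerShell = Runtime.getRuntime().exec(command);

        // Wait for PowerShell to finish before reading the CSV.
        int exitCode = powerShell.waitFor();
        File csv = new File("system.csv");
        System.out.println("Exit code: " + exitCode + ", CSV written: " + csv.exists());
    }
}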


Error - cannot convert, not a json string: [type: INPUT_STREAM, value: java.io.BufferedInputStream@5f8890c2] in Karate framework

In the Karate framework, while executing one test case, I am getting this error:
java.lang.RuntimeException: cannot convert, not a json string: [type: INPUT_STREAM, value: java.io.BufferedInputStream@5f8890c2]
  at com.intuit.karate.Script.toJsonDoc(Script.java:619)
  at com.intuit.karate.Script.assign(Script.java:586)
  at com.intuit.karate.Script.assignJson(Script.java:543)
  at com.intuit.karate.StepDefs.castToJson(StepDefs.java:329)
  at ✽.* json vExpectedJSONObject = vExpectedJSONFileContent,
Actually, in this framework we execute a SQL query and the result of that query is stored in an abc.json file, but due to this error the result is not getting stored in that JSON file.
I have tried multiple options, like setting the file encoding to UTF-8 and then adding a plugin to pom.xml.
With json vExpectedJSONObject = vExpectedJSONFileContent, I am expecting the SQL result to be stored in the JSON file.
Finally got the solution :) The issue was related to the framework setup. We call the Runtime.getRuntime().exec function to execute our SQL query via a command at the cmd prompt, but due to access privileges that command was not executing. After debugging, we put the mysql.exe file into the jre/bin folder and then it worked.
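For context, the kind of call the framework makes looks roughly like the sketch below; the command, credentials and query here are only placeholders, since the real ones are not shown in the question:

public class RunQueryViaMysqlCli {
    public static void main(String[] args) throws Exception {
        // Placeholder invocation of the mysql command-line client.
        // mysql.exe must be reachable by the Java process (system PATH or,
        // as in the fix above, the jre/bin folder).
        String[] cmd = {"mysql.exe", "-u", "someUser", "-psomePassword",
                "-e", "SELECT 1", "some_database"};
        Process p = Runtime.getRuntime().exec(cmd);
        int exitCode = p.waitFor();
        System.out.println("mysql exited with code " + exitCode);
    }
}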

Protractor - How can I hold es-bdd-id in json file, so I can use it later in the tests?

I'm trying to hold data object variables in a json file. Everything works fine with ids, but I can't get it to work for es-bdd-ids. Entry in json file looks like this:
"businessNameInputField":"[es-bdd-id=\"rs-application-business-page-rs-business-form-es-panel-body-es-field-edit-input-businessName\"]"
And when I run the script I get error:
WebDriverError: invalid argument: 'value' must be a string
How can I keep such variables in a json file? Any help appreciated.
Code:
// Load the selectors from the JSON file (the file name here is only an example).
var webObjectVariables = require('./webObjectVariables.json');
var businessName = getRandomString(9);
var businessNameInputField = element(by.css(webObjectVariables.businessNameInputField));
businessNameInputField.sendKeys(businessName);

MySQL File or Directory not Found ODBC

I am writing a program which does data transformations via MySQL, and it deals with big files.
I asked a question earlier about another issue I was having; while I was trying out someone's answer, I got the following error:
[MySQL][ODBC 5.3(a) Driver][mysqld-5.5.5-10.1.9-MariaDB]File 'C:\xampp\mysql\data\ingram\' not found (Errcode: 2 "No such file or directory")
I am certain that directory exists and when I change the code to its original state it works perfectly.
What is going on there?
This is the piece of code that gives me the problem
Cmd.CommandText = String.Format("LOAD DATA INFILE ""{0}"" IGNORE INTO TABLE libros_nueva FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '""' ESCAPED BY '""' LINES TERMINATED BY '\r\n';", filepath)
Cmd.Execute()
Any help will be appreciated!
Given the salient portion of the error message:
File 'C:\xampp\mysql\data\ingram\' not found (Errcode: 2 "No such file or directory")
I am pretty sure you are passing just a path when a full path and file name are required. There is certainly no file name in the path it echoed back.
Can you please explain it [MySqlBulkLoader] to me?
Another way to import is to use MySqlBulkLoader from the MySql.Data.MySqlClient namespace:
' columns in the order they appear in the CSV file:
Dim cols As String() = {"Name", "Descr", "`Group`", "ValueA",
                        "Bird", "Fish", "zDate", "Color", "Active"}
Dim csvFile As String = "C:\Temp\mysqlImport.csv"
Dim rows As Int32

Using dbcon As New MySqlConnection(MySQLConnStr)
    Dim bulk = New MySqlBulkLoader(dbcon)

    bulk.TableName = "importer"
    bulk.FieldTerminator = ","       ' this is a CSV
    bulk.LineTerminator = "\r\n"     ' == CR/LF
    bulk.FileName = csvFile          ' full file path name to CSV
    bulk.NumberOfLinesToSkip = 0     ' has a header?

    bulk.Columns.Clear()
    For Each s In cols
        bulk.Columns.Add(s)          ' tell MySQL the order
    Next

    rows = bulk.Load()               ' Make it so.
End Using
Times to import 100k rows: 3619, 2719 and 2987 ms. There is also a LoadAsync method which may be of interest given your last question.
If there are data transforms to do before the insert, CSVHelper can provide an easy way to load records so you can do whatever needs to be done, then use normal SQL Inserts to update the DB.
Part of this answer shows using CSVHelper to import into Access in batches of 50k, which was pretty fast.

Importing csv file into Cassandra

I am using the COPY command to load data from a CSV file into a Cassandra table. The following error occurs while using the command.
Command: COPY quote.emp(alt, high, low) FROM 'test1.csv' WITH HEADER = true ;
The error is:
get_num_processess() takes no keyword argument.
This is caused by CASSANDRA-11574. As mentioned in the ticket comments, there is a workaround:
Move copyutil.c somewhere else. Same thing if you also have a copyutil.so.
You should be able to find these files under pylib/cqlshlib/.
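A minimal sketch of that workaround, assuming pylib/cqlshlib/ sits under the Cassandra installation you run cqlsh from (paths may differ for a packaged install):

cd pylib/cqlshlib
mv copyutil.c copyutil.c.bak
mv copyutil.so copyutil.so.bak    # only if a compiled copyutil.so exists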

JSON to append in Big Query CLI using write_disposition=writeAppend fails

I could not make the BQ shell append a JSON file using the flag --write_disposition=WRITE_APPEND.
load --source_format=NEWLINE_DELIMITED_JSON --write_disposition=WRITE_APPEND dataset.tablename /home/file1/one.log /home/file1/jschema.json
I have a file named one.log and its schema, jschema.json.
While executing the script, it says
FATAL flags parsing error : unknown command line flag 'write_dispostion'
RUN 'bq.py help' to get help.
I believe BigQuery is append-only by default, so there should be a way to append data to a table, but I am unable to find a workaround. Any assistance, please?
I believe the default operational mode is WRITE_APPEND when using the BQ tool.
There is no --write_disposition switch for the BQ shell utility, but there is a --replace flag, which sets the write disposition to truncate.
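So, assuming the flag is simply dropped and the default append behaviour is relied on, the load inside the bq shell should look roughly like this:
load --source_format=NEWLINE_DELIMITED_JSON dataset.tablename /home/file1/one.log /home/file1/jschema.json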