Importing a CSV file into Cassandra

I am using the COPY command to load data from a CSV file into a Cassandra table. The following error occurs when running the command.
**Command**: COPY quote.emp (alt, high, low) FROM 'test1.csv' WITH HEADER = true;
The error is:
get_num_processes() takes no keyword arguments.

This is caused by CASSANDRA-11574. As mentioned in the ticket comments, there is a workaround:
Move copyutil.c somewhere else. Same thing if you also have a copyutil.so.
You should be able to find these files under pylib/cqlshlib/.
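A minimal sketch of the workaround (the install path below is a placeholder; adjust it to your own Cassandra installation):
# Placeholder path; adjust to where Cassandra is installed.
cd /path/to/cassandra/pylib/cqlshlib/
# Move the compiled copy modules out of the way so that cqlsh
# falls back to the pure-Python copyutil.py.
mv copyutil.c /tmp/
mv copyutil.so /tmp/   # only if this file exists
After moving the files, rerun the COPY command.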

Related

What is the correct syntax for the SOURCE command in SQL

In codeAnywhere I'm trying to run pre-written script files to create a table. When using codeAnywhere, one must first import the file containing the code into the shell, as I have done. However, I have been unable to use the SOURCE command to run these files. I have currently attempted this syntax:
USE exams SOURCE students.txt;
What is the correct syntax here? Do I need to name the database in the syntax?
Are there other commands which run text files containing code?
EDIT: I tried using this syntax, with the following result:
ERROR: Failed to open file 'exams(question5.txt)', error: 2
Put the commands on separate lines, without semicolons for the shell commands, and if that doesn't work, prefix them with \ as well (I don't need to on my setup, but it's in the docs):
USE exams
SOURCE students.txt
https://dev.mysql.com/doc/mysql-shell-excerpt/5.7/en/mysql-shell-commands.html
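If the bare commands are not accepted, a sketch of the backslash-prefixed forms from the linked docs (untested on my setup, as noted above):
\use exams
\source students.txt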
On the shell you can use the following command to execute the queries from a text file:
mysql db_name < text_file
Hint: if the USE command (with the correct database name) is specified in the text file, you don't need to specify the database. The SOURCE command is not available in MySQL; instead you need the <.
You can find more information about executing queries from text files here:
https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html
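For example, with the database and file names from the question (assuming students.txt is in the current directory):
mysql -u root -p exams < students.txt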

Converting evtx log to csv error

I am trying to convert an evtx log file to CSV using Log Parser 2.2. I just want to copy all of the data into a CSV.
LogParser "Select * INTO C:\Users\IBM_ADMI
N\Desktop\sample.csv FROM C:\Users\IBM_ADMIN\Desktop\Event
Logs\sample.evtx" -i:EVTX -o:csv
But I am getting the error below.
Error: Syntax Error: extra token(s) after query: 'Logs\sample.evtx'
Please assist in solving this error.
I know it has been a year, but if you (or other people) still need it, and for the sake of reference, this is what I do:
LogParser "Select * INTO C:\Users\IBM_ADMIN\Desktop\sample.csv FROM 'C:\Users\IBM_ADMIN\Desktop\Event Logs\sample.evtx'" -i:evt -o:csv
The correct input type is evt, not evtx.
If there is a space in the path, as in the Event Logs folder, enclose it in single quotes.
The problem was due to the space in the folder name Event Logs. Changing the folder name to a single word made it work.
You have to convert the .evtx file to .csv first; then you can read from that .csv file, like this:
// Call PowerShell from the Java code to export the event log to CSV.
// Note: backslashes in a Java string literal must be escaped.
String command = "powershell.exe Get-WinEvent -Path C:\\windows\\System32\\winevt\\Logs\\System.evtx | Export-Csv system.csv";
Process powerShellProcess = Runtime.getRuntime().exec(command);
powerShellProcess.waitFor();        // wait for the export to finish
File csv = new File("system.csv");  // the generated CSV can now be read

JSON append in BigQuery CLI using write_disposition=WRITE_APPEND fails

I could not make the BQ shell append a JSON file using the flag --write_disposition=WRITE_APPEND.
load --sour_format=NEWLINE_DELIMITED_JSON --write_disposition=WRITE_APPEND dataset.tablename /home/file1/one.log /home/file1/jschema.json
I have file named one.log and its schema jschema.json.
While executing the script, it says
FATAL flags parsing error : unknown command line flag 'write_dispostion'
RUN 'bq.py help' to get help.
I believe BigQuery is append-only, so there should be a way to append data to a table. I am unable to find a workaround; any assistance would be appreciated.
I believe the default operational mode of the bq tool is WRITE_APPEND, and there is no --write_disposition switch for the bq shell utility. There is, however, a --replace flag that sets the write disposition to truncate.
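A minimal sketch of both invocations from the operating-system shell, reusing the dataset, table, and file paths from the question (note the --sour_format typo in the question; the flag is --source_format):
# Append (the default write disposition for bq load):
bq load --source_format=NEWLINE_DELIMITED_JSON dataset.tablename /home/file1/one.log /home/file1/jschema.json
# Overwrite instead of append:
bq load --replace --source_format=NEWLINE_DELIMITED_JSON dataset.tablename /home/file1/one.log /home/file1/jschema.json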

Problems with simple import of csv into a MySQL database (load data infile doesn't work)

Relatively new to Ruby, running: Ruby 1.9.2 and MySQL 5.5.19
I have a csv file full of data that I'd like to add to an existing table on a mysql database. The csv file does not have headers. Yes, I'm aware that there are multiple questions on this topic already. Here's how it breaks down:
Answer #1: Use LOAD DATA INFILE
Unfortunately, LOAD DATA INFILE gives me the following error: "Can't get stat of 'filename.csv' (Errcode: 2)"
This appears to be some kind of permissions issue. I've tried this both directly at the mysql prompt (as root), and through a Ruby script. I've tried various chmod options on the csv file, as well as moving the csv file to various directories where other users have said it works for them. No luck. Regardless, most people at this point recommend...
Answer #2: Use LOAD DATA local INFILE
Unfortunately this also returns an error. Apparently local infile is a MySQL option that is turned off by default because it's a security risk. I've tried turning it on, but I still get nothing but errors, such as:
ERROR 1148 (42000): The used command is not allowed with this MySQL version
and also
undefined method `execute' for # (NoMethodError)
Answer #3: I can find various answers involving Rails, which don't fit the situation. This isn't for a web application (although a web app might access it later); I'm just trying to add the data to the database right now, as a one-time thing, to do some data analysis.
The Ruby file should be incredibly simple:
require 'rubygems'
require 'csv'   # or fastercsv?
require 'mysql'

db = Mysql.connect('localhost', 'root', '', 'databasename')
CSV.foreach('filename.csv') do |row|
  # ?????
  db.execute("INSERT INTO tablename ?????")
end
P.S. Many thanks in advance. Please, no answers that point to LOAD DATA INFILE or LOAD DATA LOCAL INFILE; I've already wasted enough hours trying to get those to work...
Regarding MySQL:
LOAD DATA INFILE '/complete/path/csvdata.csv' INTO TABLE mytable(column1,column2,...);
Regarding Ruby:
require 'csv'
require 'mysql'

# Note the capitalized constant: Mysql, not mysql.
db = Mysql.real_connect('localhost', 'root', 'password', 'database')

# CSV::Reader is the old FasterCSV API; on Ruby 1.9 use CSV.foreach.
CSV.foreach('filename.csv') do |row|
  # Quote each value and join them into a comma-separated list.
  values = row.inject([]) { |k, v| k << "'#{v}'"; k }.join(',')
  db.query("insert into tablename(column1, column2, ...) values(#{values})")
end

db.close
It assumes the CSV file contains ALL the required columns.

Export sqlite DB to csv issue

When I use db.execSQL(".mode csv") in Java code, it generates an error in logcat.
/AndroidRuntime( 1363): FATAL EXCEPTION: main
/AndroidRuntime( 1363): android.database.sqlite.SQLiteException: near ".": syntax error: .mode csv
but if I issue the same command in the sqlite console, it works. I also cannot set the separator in Java code.
sqlite> .mode csv
.mode csv
sqlite> .separator ,
.separator ,
sqlite>
Can anyone share their experience or the correct approach? I would appreciate it if code were provided. Thanks!!
The .mode csv syntax and other dot-commands are proper to the sqlite shell, which is a specific program built on SQLite. What you can do in Java is use the database engine itself, not the .mode, .help, .quit, or .separator commands from another program (see the shell sketch below).
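For contrast, a minimal sketch of the same export done where the dot-commands do work, i.e. in the sqlite3 shell (mydb.db and mytable are placeholder names); in Java the equivalent is to query the table and write the rows out yourself:
sqlite3 mydb.db <<EOF
.mode csv
.output export.csv
SELECT * FROM mytable;
EOF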
However you can find the source code for the SQLite shell here.
fossil clone http://www.sqlite.org/src _my_sqlite_repository
mkdir SQLite_source
cd SQLite_source
fossil open ../_my_sqlite_repository
Then you can download the latest updates with fossil update trunk and see the source code at src/shell.c. You will probably notice that this is the only piece of source code in the tree that includes additional libraries.