How can I execute a select into outfile query using Hibernate? - mysql

I am trying to execute the following code in Hibernate to create a .csv file from a MySQL database:
String sql =
"SELECT * INTO OUTFILE 'table.csv' FIELDS TERMINATED BY ','" +
" OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\\n' " +
"FROM match INNER JOIN totala ON match_code= match";
The .csv file is created correctly but then I get the following error:
Exception in thread "main" org.hibernate.exception.GenericJDBCException: could not execute query
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:140)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:128)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.loader.Loader.doList(Loader.java:2536)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2276)
at org.hibernate.loader.Loader.list(Loader.java:2271)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:316)
at org.hibernate.impl.SessionImpl.listCustomQuery(SessionImpl.java:1842)
at org.hibernate.impl.AbstractSessionImpl.list(AbstractSessionImpl.java:165)
at org.hibernate.impl.SQLQueryImpl.list(SQLQueryImpl.java:157)
at datuak.DatuBasea.sortu_csv_fitxategia(DatuBasea.java:101)
at sortzailea.csv_sortzailea.Csv.Sortu(Csv.java:8)
at html_erauzlea.Nagusia.main(Nagusia.java:34)
Caused by: java.sql.SQLException: ResultSet is from UPDATE. No Data
at com.mysql.jdbc.ResultSet.next(ResultSet.java:2491)
at org.hibernate.loader.Loader.doQuery(Loader.java:825)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274)
at org.hibernate.loader.Loader.doList(Loader.java:2533)
... 9 more
I think the problem is that I am executing a SELECT query that doesn't return any rows: it only creates the .csv file, while the method expects a ResultSet.
So, could someone give me some suggestions?
Thanks in advance
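A minimal sketch of one workaround, assuming Hibernate 3.2+ (where Session.doWork is available): run the statement over plain JDBC, because Statement.execute() accepts statements that produce no rows, unlike SQLQuery.list():
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import org.hibernate.Session;
import org.hibernate.jdbc.Work;

public class CsvExport {
    // Runs the export through plain JDBC. Statement.execute() does not
    // demand a ResultSet, so SELECT ... INTO OUTFILE no longer trips the
    // "ResultSet is from UPDATE. No Data" check.
    public static void export(Session session) {
        session.doWork(new Work() {
            public void execute(Connection connection) throws SQLException {
                Statement stmt = connection.createStatement();
                try {
                    stmt.execute(
                        "SELECT * INTO OUTFILE 'table.csv'"
                        + " FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'"
                        + " LINES TERMINATED BY '\\n'"
                        + " FROM match INNER JOIN totala ON match_code = match");
                } finally {
                    stmt.close();
                }
            }
        });
    }
}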

Related

Load data to Snowflake using COPY INTO

I have been trying to load CSV data into Snowflake using the COPY INTO command.
This is the sample data:
4513194677~"DELL - ULTRASHARP 32\" MONITOR 4K U3223QE"~""~""
I have tried using the below COPY INTO file format options:
file_format = (
type = 'csv'
field_delimiter = '~'
skip_header = 1
record_delimiter = '\\n'
field_optionally_enclosed_by = '"'
ESCAPE = 'NONE'
ESCAPE_UNENCLOSED_FIELD = 'NONE'
)
However, I am getting this error: "Found character 'M' instead of field delimiter '~'"
How can I escape the " and load the column data as: DELL - ULTRASHARP 32" MONITOR 4K U3223QE
If I try to use ESCAPE, I get the below error when running the COPY command:
[ERROR] ProgrammingError: 001003 (42000): 01a8e01d-3201-36a9-0050-4502537cfc7f: SQL compilation error:
syntax error line 15 at position 43 unexpected '''.
syntax error line 20 at position 20 unexpected ')'.
file_format = (
type = 'csv'
field_delimiter = '~'
skip_header = 1
record_delimiter = '\\n'
field_optionally_enclosed_by = '"'
ESCAPE = '\\'
ESCAPE_UNENCLOSED_FIELD = '\\'
)
Try doubling the double quotes in the data instead of trying to escape them. A value like:
Data similar to "sample"
would be formatted in the CSV as:
"Data similar to ""sample"""

ErrCode with "Select Into Outfile with a variable" - Confusing Permissions

Background: I work with phpMyAdmin (and MySQL Workbench) on a MySQL DB. I wrote some PHP code to import data into the DB and execute it with the Windows task scheduler. <= this works fine!
Now I want to export some data into a file in a Windows folder. First I write the SQL code in phpMyAdmin to see some debug info. <= this is where the problem occurs.
My Topic:
I want to export some columns of my DB. My goal is to put the variable CURRENT_TIMESTAMP into the filename. For this I use CONCAT.
My code (posted below) produces the following error:
Can't create/write to file 'C:\Temp\Export\2018-08-08 09:21:27.txt' (Errcode: 13 "Permission denied")
The funny thing is, if I replace the variable CURRENT_TIMESTAMP with e.g. "Hello World", there is no error and my file is created in the folder.
Here is my code:
SET @sql = concat("SELECT `LS_ID_Nr`,
`Stk_pro_Krt_DL` * `Krt_DL` + `RB_Stk_pro_Krt_DL` * `RB_Krt_DL`,
`Umstellzeit`,
`Produktionszeit`,
`Teilmeldung`,
`Fertigmeldung` INTO OUTFILE 'C:/Temp/Export/",CURRENT_TIMESTAMP,".txt' fields terminated by ';' lines terminated by '\r\n' From praemie where Proof_P = 0");
prepare s1 from @sql;
execute s1;
DROP PREPARE s1;
UPDATE praemie SET Proof_P = 1 WHERE Proof_P = 0;
Does anybody have an idea why there is a permission error when I use the variable? Thanks in advance!
Ooooohh... I get it now!
The problem is that Windows does not allow ":" in filenames, and CURRENT_TIMESTAMP renders as e.g. '2018-08-08 09:21:27', colons included, so the file can never be created even though the folder is writable. So I have to edit the code with the DATE_FORMAT function like this:
SET @sql =
concat("SELECT `LS_ID_Nr`, `Stk_pro_Krt_DL` * `Krt_DL` + `RB_Stk_pro_Krt_DL` * `RB_Krt_DL`,
`Umstellzeit`, `Produktionszeit`, `Teilmeldung`, `Fertigmeldung`
INTO OUTFILE 'C:/Temp/Export/Test - ", DATE_FORMAT(NOW(), '%Y%m%d%H%i%s')," - Test.txt'
fields terminated by ';'
lines terminated by '\r\n'
From praemie where Proof_P = 0");
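For example, run at 2018-08-08 09:21:27 this builds the filename 'C:/Temp/Export/Test - 20180808092127 - Test.txt', digits only, so no character is left that Windows rejects.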

Error Code: 1193 unknown system variable when importing a CSV

I'm trying to import data from a .csv file and I'm getting error code 1193, unknown system variable. I'm using MySQL 5.5.34.
LOAD DATA LOCAL INFILE 'path to the file/student_2.csv'
INTO TABLE STUDENT
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 2 LINES
(S_ID, S_LAST, S_FIRST, S_MI, S_ADDRESS, S_CITY, S_STATE, S_ZIP, S_PHONE, S_CLASS, @S_DOB, S_PIN, F_ID, @DATE_ENROLLED);
SET S_DOB = STR_TO_DATE(@S_DOB, '%m/%d/%Y'),
DATE_ENROLLED = STR_TO_DATE(@DATE_ENROLLED, '%m/%d/%Y');
The csv file's data is as follows:
S_ID,S_LAST,S_FIRST,S_MI,S_ADDRESS,S_CITY,S_STATE,S_ZIP,S_PHONE,S_CLASS,S_DOB,S_PIN,F_ID,DATE_ENROLLED
Number,String,String,String,String,String,String,String,String,String,Date/Time,String,Number,String
1,Joffs,Tami,R,1817 Eagldge Cle,Houston,TX,74027,356487654,SR,7/14/88,8891,1,1/3/13
2,Petez,Jimmge,C,951 Drainbow Place,Absail,TX,76901,3253945432,SR,18/09/76,1230,1,11/10/02
3,Marks,Johannes,A,1015 Wild St,Dallas,TX,71012,3251454321,JR,08/13/83,1613,1,8/24/03
4,Smyth,Mark,,428 EN 16 Plaza,Arsehole,TX,7012,3221143210,SO,1/14/88,1841,2,8/23/04
I also changed the year format from %Y to %y and that did not work either.
Is there something wrong with the script?
Hm, I can't try it out and I did not dive deep into your script, but are you sure about the ; before the SET clause? That semicolon ends the LOAD DATA statement, so the SET that follows is parsed as a standalone statement, and MySQL then reads S_DOB as an (unknown) system variable, which is exactly error 1193. Drop it:
....
(S_ID, S_LAST, S_FIRST, S_MI, S_ADDRESS, S_CITY,
S_STATE, S_ZIP, S_PHONE, S_CLASS, @S_DOB, S_PIN, F_ID, @DATE_ENROLLED)
SET S_DOB = STR_TO_DATE(@S_DOB, '%m/%d/%Y'),
DATE_ENROLLED = STR_TO_DATE(@DATE_ENROLLED, '%m/%d/%Y');
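Assuming the rest of the statement is unchanged, the full corrected version would read (note the two-digit years in the sample data, e.g. 7/14/88, which suggest %y rather than %Y once the semicolon is gone):
LOAD DATA LOCAL INFILE 'path to the file/student_2.csv'
INTO TABLE STUDENT
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 2 LINES
(S_ID, S_LAST, S_FIRST, S_MI, S_ADDRESS, S_CITY, S_STATE, S_ZIP, S_PHONE, S_CLASS, @S_DOB, S_PIN, F_ID, @DATE_ENROLLED)
SET S_DOB = STR_TO_DATE(@S_DOB, '%m/%d/%y'),
DATE_ENROLLED = STR_TO_DATE(@DATE_ENROLLED, '%m/%d/%y');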

Load Data Infile errors

In the LOAD DATA INFILE syntax I saw that the FIELDS and LINES clauses are optional, so I used only the CHARACTER SET clause for UTF-8.
Here is my SQL:
cmd = new MySqlCommand("LOAD DATA INFILE " + filename + " INTO TABLE " + tblname + " CHARACTER SET 'UTF8'", conn);
filename is the address; its format is: "E:\Macdata\20131228\atelier.sql"
The table name is taken directly from the database: "atelier"
But I get the error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'E:\Macdata\20131228\atelier.sql INTO TABLE atelier CHARACTER SET 'UTF8'' at line 1
What is the mistake in my query command?
MySQL version is 5.0.10 with XAMPP
After changing the query (enclosing the filename with '), I began to receive fatal error number 0:
cmd = new MySqlCommand("LOAD DATA LOCAL INFILE '" + filename + "' IGNORE INTO TABLE " + tblname + " CHARACTER SET UTF8", conn);
My data file has this form, which works in phpMyAdmin:
INSERT INTO `atelier` VALUES(1, 'Chateau Carbonnieux -1', '2013-12-26', 23, 10, 0, '4 macarons differents', 'mamie', '2013-12-15 11:09:14', 'sabrina', '2013-12-18 05:29:26');
As the error says, your statement is wrong: quotes are missing around the filename in your first statement (compare your second statement). Check the syntax here:
http://dev.mysql.com/doc/refman/5.6/en/load-data.html
Some sparse notes:
0 is not a fatal error, it's the code for success.
IGNORE handles duplicate rows, not syntax errors.
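One more note, assuming the file really contains INSERT statements as shown above: LOAD DATA INFILE parses delimited rows, not SQL, so a dump like atelier.sql should be executed rather than loaded, e.g. with the mysql command-line client (user and database names here are placeholders):
mysql -u youruser -p yourdb < E:\Macdata\20131228\atelier.sql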

Error running Hive query with JSON data?

I have data containing the following:
{"field1":{"data1": 1},"field2":100,"field3":"more data1","field4":123.001}
{"field1":{"data2": 1},"field2":200,"field3":"more data2","field4":123.002}
{"field1":{"data3": 1},"field2":300,"field3":"more data3","field4":123.003}
{"field1":{"data4": 1},"field2":400,"field3":"more data4","field4":123.004}
I uploaded it to S3 and converted it to a Hive table using the following from the Hive console:
ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;
CREATE EXTERNAL TABLE impressions (json STRING ) ROW FORMAT DELIMITED LINES TERMINATED BY '\n' LOCATION 's3://my-bucket/';
The query:
SELECT * FROM impressions;
gives output fine, but as soon as I try to use the get_json_object UDF
and run the query:
SELECT get_json_object(impressions.json, '$.field2') FROM impressions;
I get the following error:
> SELECT get_json_object(impressions.json, '$.field2') FROM impressions;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: cannot find dir = s3://nick.bucket.dev/snapshot.csv in pathToPartitionInfo: [s3://nick.bucket.dev/]
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:291)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:258)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat$CombineHiveInputSplit.<init>(CombineHiveInputFormat.java:108)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:423)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1036)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1028)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:172)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:944)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:897)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:871)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:479)
at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:261)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:218)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:567)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Job Submission failed with exception 'java.io.IOException(cannot find dir = s3://my-bucket/snapshot.csv in pathToPartitionInfo: [s3://my-bucket/])'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
Does anyone know what is wrong?
Any reason why you are declaring:
ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;
but not using the serde in your table definition? See the code snippet below on how to use it. I can't see any reason to use get_json_object here.
CREATE EXTERNAL TABLE impressions (
  field1 string, field2 string, field3 string, field4 string
)
ROW FORMAT
serde 'com.amazon.elasticmapreduce.JsonSerde'
with serdeproperties ( 'paths'='field1, field2, field3, field4' )
LOCATION 's3://mybucket';
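With the serde mapping each JSON path to a column, the earlier get_json_object call reduces to a plain projection:
SELECT field2 FROM impressions;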