This is the command I used to save my query result to a CSV file:
hive -e 'select * from twitter.finalcount' > hdfs dfs /user/hue/resutsofquery/finalcount.csv
This is the output:
2017-03-18 07:27:40,810 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
OK
**Time taken: 49.537 seconds, Fetched: 25363 row(s)**
But when I checked, the CSV file wasn't in my directory (resutsofquery). How do I find it? Please suggest a good solution.
I also tried this:
hive -e 'set hive.cli.print.header=true; SELECT * FROM twitter.finalcount LIMIT 0;' > /user/hue/resutsofquery/file_name.csv
But it throws an error when I run it in the terminal:
bash: /user/hue/resutsofquery/file_name.csv: No such file or directory
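Neither command does what it looks like. In the first attempt, the shell parses > hdfs as "redirect stdout to a local file named hdfs", so the results most likely landed in a file called hdfs in whatever directory you ran the command from, while dfs and the HDFS path were passed along as stray arguments. In the second attempt, bash tries to open /user/hue/resutsofquery/file_name.csv on the local filesystem, where no such directory exists. A sketch of one common workaround (paths taken from the question): write to a local file first, then copy it into HDFS.
hive -e 'SELECT * FROM twitter.finalcount' > /tmp/finalcount.csv
hdfs dfs -put /tmp/finalcount.csv /user/hue/resutsofquery/finalcount.csv
Note that the Hive CLI separates columns with tabs by default, so the file is a CSV in name only unless you convert the delimiters.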
I have a file sitting in my Cloudera project under "/home/cdsw/npi.json". I've tried using the following commands to read it with PySpark from my "local" CDSW project, but can't get at it with any of them. They all throw a "Path does not exist: " error:
npi = sc.read.format("json").load("file:///home/cdsw/npi.json")
npi = sc.read.format("json").load("file:/home/cdsw/npi.json")
npi = sc.read.format("json").load("home/cdsw/npi.json")
As per this documentation, Accessing Data from HDFS:
From the terminal, copy the file from the local file system to HDFS, using either -put or -copyFromLocal.
hdfs dfs -put /home/cdsw/npi.json /destination
where /destination is a directory in HDFS.
Then, read the file in PySpark.
npi = sc.read.format("json").load("/destination/npi.json")
For more information, see the documentation for put:
put [-f] [-p] [-l] <localsrc> ... <destination>
Copy files from the local file system into fs. Copying fails if the file already
exists, unless the -f flag is given.
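If the load still fails after the copy, it is also worth checking the reader object itself: in PySpark 2.x the DataFrame reader lives on the SparkSession, not the SparkContext, so sc.read only works if sc is actually a session in your environment. A minimal sketch (assuming npi.json has already been copied to /destination in HDFS as above):
from pyspark.sql import SparkSession

# build (or reuse) a SparkSession; .read hangs off the session object
spark = SparkSession.builder.appName("read-npi").getOrCreate()

# load the JSON from HDFS rather than from the local CDSW project directory
npi = spark.read.format("json").load("/destination/npi.json")
npi.show(5)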
I want to load a file into a table. I am using this command:
LOAD DATA LOCAL INFILE 'home/myuser/Documents/my_project/project_sub_directory/project_sub_directory2/project_sub_directory3/my_data_file.txt'
INTO TABLE `mydatabse`.`my_table`
fields terminated BY ',';
I use LOCAL in the command to avoid the following error (and to avoid moving my data files to another directory):
Error Code: 1290. The MySQL server is running with the
--secure-file-priv option so it cannot execute this statement
After executing the command, I get this error:
Error Code: 2. File
'home/myuser/Documents/my_project/project_sub_directory/project_sub_directory2/project_sub_directory3/my_data_file.txt'
not found (Errcode: 2 - No such file or directory)
How to resolve the issue without moving my data file to another directory?
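One detail stands out in the error text: the path in the LOAD statement has no leading slash, so MySQL treats it as relative and resolves it against the client's working directory rather than against /home/myuser. A sketch of the same statement with an absolute path (same file, same location, only a leading / added):
LOAD DATA LOCAL INFILE '/home/myuser/Documents/my_project/project_sub_directory/project_sub_directory2/project_sub_directory3/my_data_file.txt'
INTO TABLE `mydatabse`.`my_table`
FIELDS TERMINATED BY ',';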
I am trying to read a simple xls file with xlsread in Octave. Its CSV version is shown below:
2,4,6
8,10,12
14,16,18
20,22,24
I have run the following commands in octave:
# the next commands are to select the file through a gui.
# it reports a warning, but selects the filename correctly
>> pkg load io
>> fprintf('Select the training data file ... \n');
Select the training data file ...
>> filename = uigetfile({'*.xls'; '*.xlsx'}, 'File Selector');
Gtk-Message: 14:37:32.971: GtkDialog mapped without a transient parent. This is discouraged.
>> printf('file name %s\n', filename);
file name x1.xls
# now I am trying to read the xls, and I get an error:
>> [~, ~, RAW] = xlsread(filename);
Detected XLS interfaces: None.
warning: xlsopen.m: no '.xls' spreadsheet I/O support with available interfaces.
warning: xlsread: some elements in list of return values are undefined
warning: called from
xlsread at line 268 column 1
I am using octave-4.2.2 on ubuntu-18.04 LTS. What is the reason for this error? Is there any other package that I need to install? How do I fix this problem?
Octave supports xlsx, not xls: the io package's built-in OCT interface can read .xlsx files, but legacy .xls needs an external interface such as Java with Apache POI, which is why xlsopen reports "Detected XLS interfaces: None". Save the spreadsheet as .xlsx, or install one of those interfaces.
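Since a CSV version of the data already exists, here is a sketch of two ways around the missing .xls support (the file names x1.xlsx and x1.csv are assumptions based on the session above):
>> pkg load io
>> [~, ~, RAW] = xlsread('x1.xlsx');  # .xlsx works via the built-in OCT interface
>> M = csvread('x1.csv');             # or bypass spreadsheets and read the CSV directly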
I tried exporting data from a SQL Server table to a .csv file, and for that I'm using the BCP utility in a batch file. My requirement is to pass the database name, file name, and server name as arguments.
bcp %1.dbo.temp_raw_MSPsalesrecordExtract out \\192.168.17.95\NonProdData1\CMA\ChennaiDA\Selva\%2.csv -c -t"|" -S "%3" -T
While running it, I got an error:
C:\Users\selva\Desktop\Bat>bcp.bat selva_test testfile ind-server01
The input line is too long.
The syntax of the command is incorrect.
Could anyone help me with this?
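A plausible cause, judging only from the console output: the script itself is named bcp.bat, so the bcp at the start of the command resolves back to the batch file instead of to bcp.exe, and the script keeps re-invoking itself until cmd gives up with "The input line is too long." A sketch of the fix is to rename the script (export_table.bat below is a made-up name) or to call the executable explicitly:
rem inside the renamed script, e.g. export_table.bat
bcp.exe %1.dbo.temp_raw_MSPsalesrecordExtract out \\192.168.17.95\NonProdData1\CMA\ChennaiDA\Selva\%2.csv -c -t"|" -S "%3" -T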
I'm trying to import some extra data into my existing H2 database. The extra data is in a .CSV file, and I'm using the simple example SQL statement from the H2 tutorial documentation:
SELECT * FROM CSVREAD('test.csv');
So far, I can only get the following exception:
Error: IO Exception: "IOException reading test.csv"; SQL statement:
SELECT * FROM CSVREAD('test.csv') [90028-176]
SQLState: 90028
ErrorCode: 90028
I am using SQuirreL client in Windows 7 to manage a local H2 database and so far, everything is working well. The test.csv is in the same directory as the database file.
Looks like a problem with the test.csv file. Is this on Linux? Then check for a case-sensitive file name and access permissions for the running process.
Could you read the file with FileInputStream from your code? Is this a remote H2 db?
In any case, it is the H2 server that needs access to the file. Probably the file is not in the CWD of the H2 process. Try to specify an absolute file name for the H2 server like /my/folder/test.csv or c:\my\folder\test.csv.
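A minimal sketch of that suggestion (C:\my\folder is a placeholder; point it at wherever test.csv actually lives). CSVREAD also takes an optional third argument for options such as the character set:
SELECT * FROM CSVREAD('C:\my\folder\test.csv');
SELECT * FROM CSVREAD('C:\my\folder\test.csv', NULL, 'charset=UTF-8');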