Create external table with headers in Netezza (Postgres) - CSV

I am creating an external table as shown below
CREATE EXTERNAL TABLE '~\test.csv'
USING ( DELIMITER ',' Y2BASE 2000 ENCODING 'internal' REMOTESOURCE 'ODBC' ESCAPECHAR '\' )
AS SELECT * FROM TEST_TABLE;
It works fine. My question is: is there a way to use the column names as header values in the test.csv file? Is this possible in either Netezza or Postgres?
I think it can be done with COPY, but I want to do it using the EXTERNAL TABLE command.
Thanks

As of Netezza version 7.2, you can specify the IncludeHeader option to achieve this with external tables.
This is covered in the IBM Netezza documentation.

It's not pretty, and it would likely add some overhead to the query, but you could do something like this:
CREATE EXTERNAL TABLE ... AS
SELECT ColumnId, OtherColumn
FROM (
    SELECT FALSE AS IsHeader,
           ColumnId::VARCHAR(512),
           OtherColumn::VARCHAR(512)
    FROM TEST_TABLE
    UNION ALL
    SELECT TRUE AS IsHeader, 'ColumnId', 'OtherColumn'
) x
ORDER BY IsHeader DESC
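The header-row trick above can be sketched as a runnable example, with SQLite standing in for Netezza; the table and column names here are illustrative only, not from the original question.

```python
import sqlite3

# Header-via-UNION trick: tag each row with a sort key, put the header
# row's key highest, and ORDER BY it descending so the header comes first.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (column_id INTEGER, other_column TEXT)")
conn.executemany("INSERT INTO test_table VALUES (?, ?)",
                 [(1, "alpha"), (2, "beta")])

rows = conn.execute("""
    SELECT column_id, other_column
    FROM (
        SELECT 0 AS is_header,
               CAST(column_id AS TEXT) AS column_id,
               other_column
        FROM test_table
        UNION ALL
        SELECT 1, 'column_id', 'other_column'
    ) x
    ORDER BY is_header DESC
""").fetchall()

print(rows[0])  # the header tuple ('column_id', 'other_column') sorts first
```

Note that every data column has to be cast to a character type so it can sit in the same UNION as the literal header strings.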

Here is another example, along the same lines as the one qSlug gave.
CREATE EXTERNAL TABLE
'C:\HEADER_TEST.csv' USING
(DELIMITER '|' ESCAPECHAR '\' Y2BASE 2000 REMOTESOURCE 'ODBC') AS
--actual query goes here. Leave the 'data' field on there.
(select store_name, address1, 'data'
from yourtable
limit 10)
union
--field names go here. Leave the 'header' field on there.
select 'store_name', 'address1', 'header'
from _v_dual
order by 3 desc
You can then just delete the last column from your csv file.

There actually is a way to include the header in the file if you have Netezza version 7.2 or greater.
The option is 'includeheader', though Aginity Workbench doesn't highlight 'includeheader' as an option (at least in my version, 4.8).
CREATE EXTERNAL TABLE '~\test.csv'
USING
(
DELIMITER ','
Y2BASE 2000
ENCODING 'internal'
REMOTESOURCE 'ODBC'
ESCAPECHAR '\'
/****THIS IS THE OPTION ****/
INCLUDEHEADER
)
AS
SELECT *
FROM TEST_TABLE;
You'll notice that Aginity doesn't apply highlighting to the option, but it will execute and write a header to the first row.


Different behavior with MySQL SELECT UNION INTO OUTFILE for larger files vs small files

I have a database of 15 tables. I'm using the SQL below to create CSVs for import into GCP BigQuery.
(SELECT 'anchor_text','destination_domain','destination_page','domain_id','id','possible_spam','source_page','was_domain_dead')
UNION
SELECT IFNULL(anchor_text, "Null") AS `anchor_text`,
       IFNULL(destination_domain, "Null") AS `destination_domain`,
       IFNULL(destination_page, "Null") AS `destination_page`,
       IFNULL(domain_id, "Null") AS `domain_id`,
       IFNULL(id, "Null") AS `id`,
       IFNULL(possible_spam, "Null") AS `possible_spam`,
       IFNULL(source_page, "Null") AS `source_page`,
       IFNULL(was_domain_dead, "Null") AS `was_domain_dead`
FROM anchor_texts_for_domain
INTO OUTFILE 'D:\\anyone\\anchor_texts_for_domain.csv'
FIELDS ENCLOSED BY '"' TERMINATED BY ',' ESCAPED BY '"'
LINES TERMINATED BY '\r\n';
The example above is for one table. 13 of the 15 tables work fine. However, two of the tables put the headers generated by the first SELECT statement at the end of the CSV instead of the beginning. The only thing that sets these two files apart is that they are several GB in size (19 GB and 6 GB), while the rest of the files/tables are relatively small.
Why would MySQL behave differently for large files in this instance?
Why would MySQL behave differently for large files in this instance?
Because it can.
Relational tables (and views, query results, and so on) represent relations. Relations are a special kind of (multi)set, and since sets have no inherent order, neither do relations, and neither do tables.
It follows that, unless an explicit ORDER BY clause is issued, the DBMS is free to return a query result in any order it "wishes" (determined by the access path and the way the data is processed in the actual execution of the query).
So you need to use an explicit ORDER BY.
SELECT <first column alias>,
...
<last column alias>
INTO OUTFILE ...
FROM (SELECT <first header> AS <first column alias>,
...
<last header> AS <last column alias>,
0 AS header_ordinal
UNION ALL
SELECT <first column>,
...
<last column>,
1
FROM <table>) x
ORDER BY header_ordinal;
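The header_ordinal template above can be filled in and checked outside MySQL; here is a sketch with SQLite standing in, and since INTO OUTFILE is MySQL-specific, the ordered rows are written out with Python's csv module instead. The table and column names are made up for illustration.

```python
import csv
import io
import sqlite3

# header_ordinal pattern: the header row carries ordinal 0, data rows
# carry 1, and ORDER BY header_ordinal guarantees the header comes first.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE anchor_texts (anchor_text TEXT, domain_id INTEGER)")
conn.executemany("INSERT INTO anchor_texts VALUES (?, ?)",
                 [("home", 1), ("about", 2)])

rows = conn.execute("""
    SELECT anchor_text, domain_id
    FROM (SELECT 'anchor_text' AS anchor_text,
                 'domain_id'   AS domain_id,
                 0 AS header_ordinal
          UNION ALL
          SELECT anchor_text, domain_id, 1
          FROM anchor_texts) x
    ORDER BY header_ordinal
""").fetchall()

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())  # header line first, data lines after it
```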

sql select string between two strings in a column

I have a table, for example TABLE, with a column named CUSTOME_FIELDS. In this column I have data like this:
{"6":"Name of company","1":"11111111","2":"564974195","4":"","5":"","3":""}
I need to take 'Name of company' and put it in a new column, NEW_COLUMN.
How can I do that? I tried something like this:
SELECT SUBSTRING(CUSTOME_FIELDS, CHARINDEX('"6":"', CUSTOME_FIELDS), CHARINDEX('","1"',CUSTOME_FIELDS)) FROM TABLE
but it doesn't work.
Just in case your MySQL version is 5.7 or higher, and because the data appears to be in JSON format, you could try your luck with json_extract:
SELECT TRIM(BOTH '"' FROM json_extract(CUSTOME_FIELDS, '$."6"')) AS name FROM your_table;
If you're storing JSON data in your database, use MySQL 5.7 or better and use a JSON column type. This means you can easily extract data:
SELECT JSON_EXTRACT(json_data, '$."6"') FROM mytable;
It depends on what your actual data looks like.
If all your data looks like the example above, you can cut the string between '{"6":"' and the next '","':
SELECT substring_index(
    substring_index(CUSTOME_FIELDS, '{"6":"', -1),
    '","',
    1
)
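Both extraction strategies are easy to verify outside the database; this Python sketch mirrors what json_extract and the nested substring_index calls compute, using the key "6" and the sample value from the question.

```python
import json

custome_fields = ('{"6":"Name of company","1":"11111111","2":"564974195",'
                  '"4":"","5":"","3":""}')

# What json_extract(..., '$."6"') yields (after trimming the quotes):
name_via_json = json.loads(custome_fields)["6"]

# What the nested substring_index calls compute: everything after the
# marker '{"6":"', then everything before the next '","'.
after_marker = custome_fields.split('{"6":"', 1)[-1]
name_via_cut = after_marker.split('","', 1)[0]

print(name_via_json)  # Name of company
print(name_via_cut)   # Name of company
```

The string-cutting version only works while the "6" key always comes first and the value contains no '","' sequence; the JSON version has no such fragility.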

select data mysql

I have a field named places in my table. It stores space-separated values (storing CSV values in one field was a problem). Now I want to run a query like the one below. How can I do that?
select * from tablename where variablename in places
I tried it this way, but it shows a syntax error:
select * from tablename where variablename in replace(places,' ',',')
### places ###
bank finance point_of_interest establishment
Use FIND_IN_SET
For comma separated
SELECT *
FROM tablename
WHERE ( FIND_IN_SET( 'bank', variablename ) )
For space separated
SELECT *
FROM tablename
WHERE ( FIND_IN_SET( 'bank', replace(variablename,' ',',') ) )
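The replace-plus-FIND_IN_SET semantics can be mirrored outside MySQL; a small Python sketch (the helper function and the sample values are illustrative, with the places string taken from the question):

```python
# Mimics MySQL's FIND_IN_SET: the 1-based position of needle in a
# comma-separated list, or 0 when it is absent.
def find_in_set(needle, csv_list):
    items = csv_list.split(",")
    return items.index(needle) + 1 if needle in items else 0

places = "bank finance point_of_interest establishment"

# replace(' ', ',') turns the space-separated field into a comma list
# that FIND_IN_SET understands.
print(find_in_set("bank", places.replace(" ", ",")))  # 1
print(find_in_set("rupt", places.replace(" ", ",")))  # 0

# Unlike LIKE '%bank%', exact-element matching cannot falsely match a
# longer word:
print(find_in_set("bank", "bankrupt,finance"))        # 0
```

This is also why the FIND_IN_SET answer is safer than the bare LIKE pattern: a value such as bankrupt would match '%bank%'.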
The best solution would be to normalise your data structure so that a single field does not store multiple values.
You can make a query work without normalisation, but any such solution will be a lot less optimal from a performance point of view.
Use pattern matching with the LIKE operator:
... where fieldname like '% searched_value %'
Use the replace() function and combine it with find_in_set():
... where find_in_set('searched_value',replace(fieldname,' ',','))>0
Hi, I think your problem comes from the usage of IN.
In MySQL, IN is used like this:
SELECT *
FROM table_name
WHERE column_name IN ('bank', 'finance', 'point_of_interest', 'establishment');
If you want to select places, you need to specify each place as a quoted value in the list.

How to show database, table, and columns with one MySQL command in bash?

I am testing Dynamic Multiple Data Source, so I need to show tables in both databases in the format below:
database|table |column1|column2
master |customer|data1 |data2
database|table |column1|column2
replica |order |data1 |data2
At the moment I use the code below; it's not pretty... Do you have any idea?
use master;
select database();
select * from master.customer;
select * from master.customer_order;
use replica;
select database();
select * from replica.customer;
select * from replica.customer_order;
You can add the database and table names as fixed strings to the SQL statement:
select 'master' as `database`, 'customer' as `table`, master.customer.* from master.customer
Since database and table are reserved words, make sure they are enclosed in backticks (`).
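As a sanity check, the constant-column approach can be sketched with SQLite standing in for MySQL (the table contents are made up; SQLite quotes reserved identifiers with double quotes where MySQL uses backticks):

```python
import sqlite3

# Tag each row with its database/table name as constant string columns,
# as in the answer above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (data1 TEXT, data2 TEXT)")
conn.execute("INSERT INTO customer VALUES ('a', 'b')")

rows = conn.execute("""
    SELECT 'master' AS "database", 'customer' AS "table", customer.*
    FROM customer
""").fetchall()
print(rows)  # [('master', 'customer', 'a', 'b')]
```

Concatenating one such SELECT per table with UNION ALL then yields the combined listing the question asks for.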

How to export all words from sql table or sql database

Let's say we have following table.
UserId | Message
-------|-------------
1 | Hi, have a nice day
2 | Hi, I had a nice day
I need to get all of the words { Hi, have, a, nice, day, I, had } separately.
Is there any way to do that? What if I want to export the words from all tables in the database?
Similar results would also be good.
Try this. In SQL Server 2005 or above:
create table yourtable(RowID int, Layout varchar(200))
INSERT yourtable VALUES (1,'hello,world,welcome,to,tsql')
INSERT yourtable VALUES (2,'welcome,to,stackoverflow')
;WITH SplitString AS
(
    SELECT
        RowID,
        LEFT(Layout, CHARINDEX(',', Layout) - 1) AS Part,
        RIGHT(Layout, LEN(Layout) - CHARINDEX(',', Layout)) AS Remainder
    FROM YourTable
    WHERE Layout IS NOT NULL AND CHARINDEX(',', Layout) > 0
    UNION ALL
    SELECT
        RowID,
        LEFT(Remainder, CHARINDEX(',', Remainder) - 1),
        RIGHT(Remainder, LEN(Remainder) - CHARINDEX(',', Remainder))
    FROM SplitString
    WHERE Remainder IS NOT NULL AND CHARINDEX(',', Remainder) > 0
    UNION ALL
    SELECT
        RowID, Remainder, NULL
    FROM SplitString
    WHERE Remainder IS NOT NULL AND CHARINDEX(',', Remainder) = 0
)
SELECT Part FROM SplitString ORDER BY RowID
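The same recursive-split idea carries over to any engine with recursive CTEs; here is a sketch in SQLite via Python, using the sample data from the answer above (SQLite spells it WITH RECURSIVE and uses instr/substr in place of CHARINDEX/LEFT/RIGHT).

```python
import sqlite3

# Recursive comma-split: each step peels the next word off the front of
# the remainder. A trailing ',' is appended so every word ends at a comma.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (RowID INTEGER, Layout TEXT)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?)",
                 [(1, "hello,world,welcome,to,tsql"),
                  (2, "welcome,to,stackoverflow")])

rows = conn.execute("""
    WITH RECURSIVE split(RowID, part, rest) AS (
        SELECT RowID, '', Layout || ',' FROM yourtable
        UNION ALL
        SELECT RowID,
               substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split
        WHERE rest <> ''
    )
    SELECT RowID, part FROM split WHERE part <> '' ORDER BY RowID
""").fetchall()
print(rows)
```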
Well, OK, here it goes.
In SQL Server you can use this:
SELECT word = d.value('.', 'nvarchar(max)')
FROM
(SELECT xmlWords = CAST(
'<a><i>' + replace([Message], ' ', '</i><i>') + '</i></a>' AS xml)
FROM MyMessageTbl) T(c)
CROSS APPLY c.nodes('/a/i') U(d)
And I hope that for MySQL you can use the same thing, using XML support - ExtractValue() etc.
EDIT: explanation
- replace([Message], ' ', '</i><i>') replaces e.g. 'my word' with 'my</i><i>word'
- then the beginning and end of the XML are added -> '<a><i>my</i><i>word</i></a>', giving valid XML, which is cast to the xml type so it can be worked with
- the query selects from that XML and shreds the '/a/i' nodes into rows using CROSS APPLY c.nodes('/a/i'); the result is aliased U(d), so each 'i' node maps to column d (e.g. 'my')
- d.value('.', 'nvarchar(max)') extracts the node content and casts it to a character type
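The replace-and-wrap trick above can be replayed step by step in plain Python with the standard XML parser, using the sample message from the question:

```python
import xml.etree.ElementTree as ET

# Same idea as the T-SQL: turn each space into a node boundary </i><i>,
# wrap the whole thing in <a><i>...</i></a> to make it valid XML, then
# read back one word per <i> node.
message = "Hi, have a nice day"
xml_words = "<a><i>" + message.replace(" ", "</i><i>") + "</i></a>"

words = [node.text for node in ET.fromstring(xml_words).iter("i")]
print(words)  # ['Hi,', 'have', 'a', 'nice', 'day']
```

As in the T-SQL version, punctuation stays attached to its word ('Hi,'); stripping it would need an extra cleanup pass.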