I am trying to export a DB2 SELECT result with headers, but without any success. My current code is:
db2 "EXPORT TO /tmp/result5.csv OF DEL MODIFIED BY NOCHARDEL
SELECT 1 as id, 'DEVICE_ID', 'USER_ID' from sysibm.sysdummy1
UNION ALL (SELECT 2 as id, DEVICE_ID, USER_ID FROM MOB_DEVICES) ORDER BY id"
which is not working (I suspect because USER_ID is INTEGER). When I change it to:
db2 "EXPORT TO /tmp/result5.csv OF DEL MODIFIED BY NOCHARDEL
SELECT 1 as id, 'DEVICE_ID', 'PUSH_ID' from sysibm.sysdummy1
UNION ALL (SELECT 2 as id, DEVICE_ID, PUSH_ID FROM MOB_DEVICES) ORDER BY id"
It works; DEVICE_ID and PUSH_ID are both VARCHAR.
Any suggestions on how to solve this?
Thanks for any advice.
DB2 will not export a CSV file with headers, because the headers would be included as data. Normally, a CSV file is for storage, not viewing. If you want to view a file with its headers, you have the following options:
Export to an IXF file; however, this is not a flat file, and you will need a spreadsheet to view it.
Export to a CSV file and include the headers by:
Select the column names from the catalog and then perform an extra step to add them to the file. You can use the describe command or a select on syscat.columns for this purpose, but this process is manual.
Perform a select union, in one part the data and in the other part the headers.
Perform a select and redirect the output to a file; do not use export:
db2 "select * from myTable" > myTable.txt
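As an illustration of that manual extra step, a small script can prepend a header line to a CSV that EXPORT has already written (the file path and column names here are hypothetical):

```python
# Hypothetical "extra step": prepend a header line to a CSV file that
# EXPORT has already produced. Path and column names are made up.
def prepend_header(path, columns):
    with open(path, "r") as f:
        body = f.read()
    with open(path, "w") as f:
        f.write(",".join(columns) + "\n" + body)

# e.g. prepend_header("/tmp/result5.csv", ["DEVICE_ID", "USER_ID"])
```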
Ignoring the EXPORT and looking exclusively at the problematic UNION ALL query:
DB2 resolves the mismatched data types of the UNION toward the numeric type, here INTEGER. Since the literal string 'USER_ID' is not a valid representation of a number, it cannot be cast to INTEGER, and the statement fails.
However, you can reverse that casting explicitly, converting the INTEGER column values to VARCHAR instead; an explicit cast makes the data types of the corresponding UNION columns compatible, by forcing the values from the INTEGER column to match the data type of the character-string literal 'USER_ID':
with
mob_devices (DEVICE_ID, USER_ID, PUSH_ID) as
( values( varchar('dev', 1000 ), int( 1 ), varchar('pull', 1000) ) )
( SELECT 1 as id, 'DEVICE_ID', 'USER_ID'
from sysibm.sysdummy1
)
UNION ALL
( SELECT 2 as id, DEVICE_ID , cast( USER_ID as varchar(1000) )
FROM MOB_DEVICES
)
ORDER BY id
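The same pattern, a literal header row UNION ALL'ed above the data with the numeric column explicitly cast to a character type, can be sketched outside DB2; here with Python's sqlite3 module and a made-up stand-in for MOB_DEVICES:

```python
import sqlite3

# Hypothetical stand-in for MOB_DEVICES: one header row plus the data rows,
# with the INTEGER user_id cast to text so both branches of the UNION ALL
# have compatible column types.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mob_devices (device_id TEXT, user_id INTEGER)")
conn.execute("INSERT INTO mob_devices VALUES ('dev', 1)")
rows = conn.execute("""
    SELECT 1 AS id, 'DEVICE_ID', 'USER_ID'
    UNION ALL
    SELECT 2 AS id, device_id, CAST(user_id AS TEXT) FROM mob_devices
    ORDER BY id
""").fetchall()
for _, col1, col2 in rows:
    print(col1, col2)
```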
Is there any possible way I can find and set the column name by giving it an alias?
for example
I have SQL queries which contain 4 column fields. 3 fields are common to all the queries:
id, name, field
and there is another field whose column name changes every time; the only common thing is that it carries the suffix __type
so my sql query looks like this
SELECT * from table_name
id, name, field, system_data__value
is there any possible way I can alias the column where I find __type as type?
So if I run my query, it should look like this:
SELECT * from table_name
id, name, field, type
You may use UNION ALL to set the aliases for the columns positionally.
You must know some value that cannot be present in a given column (id = -1 in the code shown) so that the fake row can be removed.
SELECT *
FROM (
SELECT -1 id, NULL name, NULL field, NULL alias_for_column_4
UNION ALL
SELECT * from table_name -- returns id, name, field, system_data__value
) subquery
WHERE id > 0 -- removes fake row
The values in the fake subquery may need explicit CAST() calls to set the correct data types of the output columns.
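As a sketch of the fake-row trick (using Python's sqlite3 and an invented table, not the asker's actual schema):

```python
import sqlite3

# Sketch of the fake-row aliasing trick: the first SELECT of the UNION ALL
# fixes the output column names, the real rows come from the table, and
# WHERE id > 0 drops the fake row afterwards. Table and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE table_name "
    "(id INTEGER, name TEXT, field TEXT, system_data__value TEXT)"
)
conn.execute("INSERT INTO table_name VALUES (7, 'a', 'b', 'c')")
cur = conn.execute("""
    SELECT * FROM (
        SELECT -1 AS id, NULL AS name, NULL AS field, NULL AS type
        UNION ALL
        SELECT * FROM table_name
    ) subquery
    WHERE id > 0
""")
names = [d[0] for d in cur.description]
rows = cur.fetchall()
print(names)  # column names are taken from the fake first SELECT
print(rows)
```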
I have some JSON in an oracle table:
{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}
And using JSON_TABLE to select/create a view:
SELECT jt.*
FROM table1
JSON_TABLE (table1.json_data, '$.orders[*]' ERROR ON ERROR
COLUMNS ( StartTime TIMESTAMP PATH '$.timestamp')) AS jt;
However, no matter what format I put the date/time in the JSON, I always get:
ORA-01830: date format picture ends before converting entire input
string
Is there a way to format the JSON, or is there something I am missing? If I pass in a date like "2016-08-10", then it will successfully create a DATE column.
When running your query on my Oracle 19.6.0.0.0 database, I do not have any problem parsing your example (see below). If you are on an older version of Oracle, it may help to apply the latest patch set. You also might have to parse it out as a string, then use TO_DATE based on the format of the date you are receiving.
SQL> SELECT jt.*
2 FROM (SELECT '{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}' AS json_data FROM DUAL) table1,
3 JSON_TABLE (table1.json_data,
4 '$.orders[*]'
5 ERROR ON ERROR
6 COLUMNS (StartTime TIMESTAMP PATH '$.timestamp')) AS jt;
STARTTIME
__________________________________
10-AUG-16 06.15.00.400000000 AM
In Oracle 18c, your query also works, provided you add a CROSS JOIN, CROSS APPLY or a comma (for a legacy cross join) after table1, and change $.timeStamp to lower case.
However, if you can't get it working in Oracle 12c then you can get the string value and use TO_TIMESTAMP to convert it:
SELECT StartTime,
TO_TIMESTAMP( StartTime_Str, 'YYYY-MM-DD"T"HH24:MI:SS.FF9' )
AS StartTime_FromStr
FROM table1
CROSS JOIN
JSON_TABLE(
table1.json_data,
'$.orders[*]'
ERROR ON ERROR
COLUMNS (
StartTime TIMESTAMP PATH '$.timestamp',
StartTime_Str VARCHAR2(30) PATH '$.timestamp'
)
) jt;
So, for your sample data:
CREATE TABLE table1 ( json_data VARCHAR2(4000) CHECK ( json_data IS JSON ) );
INSERT INTO table1 ( json_data )
VALUES ( '{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}' );
This outputs:
STARTTIME | STARTTIME_FROMSTR
:------------------------ | :---------------------------
10-AUG-16 06.15.00.400000 | 10-AUG-16 06.15.00.400000000
db<>fiddle here
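For readers double-checking the format mask, the TO_TIMESTAMP call can be mimicked in Python; strptime's %f accepts one to six fractional digits, analogous to the FF9 element in the Oracle mask:

```python
from datetime import datetime

# The timestamp value from the question's JSON; %f tolerates the single
# fractional digit just as FF9 does in the Oracle format mask.
ts = datetime.strptime("2016-08-10T06:15:00.4", "%Y-%m-%dT%H:%M:%S.%f")
print(ts)  # 2016-08-10 06:15:00.400000
```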
My query below does not give me any result:
WITH dataset AS (
SELECT responseelements FROM cloudtrail_logs
WHERE useridentity.type = 'Root'
AND eventname='CreateVpc'
ORDER BY eventsource, eventname;
AS blob
)
SELECT
json_extract(blob, '$.vpc.vpcId') AS name,
json_extract(blob, '$.ownerId') AS projects
FROM dataset
But if I run only the inner query
SELECT responseelements FROM cloudtrail_logs
WHERE useridentity.type = 'Root'
AND eventname='CreateVpc'
ORDER BY eventsource, eventname;
it gives me the correct response as a Json
{"requestId":"40aaffac-2c53-419e-a678-926decc48557","vpc":{"vpcId":"vpc-01eff2919c7c1da07","state":"pending","ownerId":"347612567792","cidrBlock":"10.0.0.0/26","cidrBlockAssociationSet":{"items":[{"cidrBlock":"10.0.0.0/26","associationId":"vpc-cidr-assoc-04136293a8ac73600","cidrBlockState":{"state":"associated"}}]},"ipv6CidrBlockAssociationSet":{},"dhcpOptionsId":"dopt-92df95e9","instanceTenancy":"default","tagSet":{},"isDefault":false}}
and if I pass this as data as below
WITH dataset AS (
SELECT '{"requestId":"40aaffac-2c53-419e-a678-926decc48557","vpc":{"vpcId":"vpc-01eff2919c7c1da07","state":"pending","ownerId":"347612567792","cidrBlock":"10.0.0.0/26","cidrBlockAssociationSet":{"items":[{"cidrBlock":"10.0.0.0/26","associationId":"vpc-cidr-assoc-04136293a8ac73600","cidrBlockState":{"state":"associated"}}]},"ipv6CidrBlockAssociationSet":{},"dhcpOptionsId":"dopt-92df95e9","instanceTenancy":"default","tagSet":{},"isDefault":false}}'
AS blob
)
SELECT
json_extract(blob, '$.vpc.vpcId') AS name,
json_extract(blob, '$.ownerId') AS projects
FROM dataset
it gives me a result. What am I missing here, so that I can make it run in one shot? Is that at all possible?
You're referencing the wrong column name in your query: it should be json_extract(responseelements, '$.vpc.vpcId') AS name instead of json_extract(blob, '$.vpc.vpcId') AS name. The AS blob part of the query does nothing, since you can't alias an entire query, so take it out.
The AS blob works in your last example because you're selecting a value (the json string) into a column and the AS blob gives the column a name or alias of "blob". In your original query, you're selecting an existing column named responseelements so that's what you need to refer to in the json_extract function.
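To sanity-check the path itself, the extraction that json_extract(responseelements, '$.vpc.vpcId') performs can be reproduced with Python's json module on a trimmed copy of the blob from the question:

```python
import json

# Trimmed copy of the responseelements blob from the question.
blob = ('{"requestId":"40aaffac-2c53-419e-a678-926decc48557",'
        '"vpc":{"vpcId":"vpc-01eff2919c7c1da07","ownerId":"347612567792"}}')
doc = json.loads(blob)
vpc_id = doc["vpc"]["vpcId"]   # what the path '$.vpc.vpcId' addresses
print(vpc_id)  # vpc-01eff2919c7c1da07
```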
I have a table with a column storing dates in varchar(10) format, like '20190101'. I have a variable in SSIS defined as DateTime. Now I want to match the values in an SSIS OLE DB data flow source, with a query of the form SELECT * FROM table WHERE date = CONVERT(varchar(10), ?, 112).
It looks like the conversion does not work correctly; no qualifying data comes out of the data flow.
So what is wrong with this query (the result looks fine when run in SSMS), and how can I debug it? (It is not possible to debug it with a query like SELECT CAST(? AS varchar(10)) FROM table in the data source.)
(PS: the last thing I want to do is define another string variable as a workaround.)
The reason is implicit conversion: the SSIS data source decides the type of the parameter according to the opposite side of the operator. Try this test:
Make a valid SSIS data source connection to a table that contains a date(time) column, and create a DateTime variable.
Switch the data source to SQL command and type the below queries, one by one:
SELECT 1 FROM table WHERE ? > 1
SELECT 1 FROM table WHERE ? > '1'
SELECT 1 FROM table WHERE ? > date_col
Execute the data flow.
In SSMS, query the recent sessions:
select top 50 t.text, s.* from sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(sql_handle) t
where t.text LIKE <filter your session>
order by last_execution_time desc
You may find how the parameter is interpreted:
(#P1 int)SELECT 1 FROM table WHERE ? > 1
(#P1 varchar(8))SELECT 1 FROM table WHERE ? > '1'
(#P1 datetime)SELECT 1 FROM table WHERE ? > date_col
In other words, the SQL command infers the type of the incoming parameter from the expression it appears in, regardless of what type it originally was.
So in my problem, the DateTime parameter was first implicitly converted to varchar in an unspecified format, and only then did we attempt to convert that string with a specified style:
SELECT * FROM table where date = CONVERT(varchar(10), 'Jan 01 2019 12:00:00', 112)
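The failure mode is easy to reproduce: when CONVERT receives a string rather than a datetime, the 112 style is ignored and the value is simply truncated to the target length. A quick Python sketch of that truncation:

```python
# The parameter has already arrived as a string like 'Jan 01 2019 12:00:00';
# CONVERT(varchar(10), <string>, 112) then ignores the 112 style and simply
# truncates to 10 characters, which can never equal '20190101'.
value = "Jan 01 2019 12:00:00"
truncated = value[:10]
print(truncated)  # Jan 01 201
```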
I have a little problem with some strange strings in a database table.
I need to split such strings into arrays of INT, or separately into INTs by looping or similar.
This must be done in 'usual' MySQL (no functions, DECLAREs etc. -- only SELECT statements plus built-in functions like REPLACE(), SUBSTRING()).
524;779; 559;; ;559; 411;760;; + others;
Is such an operation possible?
If this is a run-once operation to convert your data into a sane format, you could take the idea from this answer and extend it to the maximum number of ids in one string.
Example on SQL Fiddle.
SELECT DISTINCT id, val
FROM (
SELECT id, substring_index(substring_index(`val`,';',-1),';',1) AS val FROM tab
UNION ALL
SELECT id, substring_index(substring_index(`val`,';',-2),';',1) FROM tab
UNION ALL
SELECT id, substring_index(substring_index(`val`,';',-3),';',1) FROM tab
UNION ALL
SELECT id, substring_index(substring_index(`val`,';',-4),';',1) FROM tab
UNION ALL
SELECT id, substring_index(substring_index(`val`,';',-5),';',1) FROM tab
) x
WHERE val <> ''
ORDER BY id
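Each UNION ALL branch above peels off one token from the right with nested SUBSTRING_INDEX calls; the semantics can be checked with a small Python mimic (the sample value comes from the question):

```python
# Python mimic of MySQL SUBSTRING_INDEX: a positive count keeps everything
# before the count-th delimiter from the left, a negative count everything
# after the count-th delimiter counted from the right.
def substring_index(s, delim, count):
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

val = "524;779; 559;;"   # sample string from the question
tokens = [substring_index(substring_index(val, ";", -k), ";", 1)
          for k in range(1, 6)]
cleaned = [t.strip() for t in tokens if t.strip()]
print(cleaned)  # ['559', '779', '524']
```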