Snowflake: How to select all columns from an externally staged file

I have a file in an external S3 stage and want to select all of its columns in a SELECT statement.

If your stage name is stage1_stage and the file has 4 columns, you can list the data using a query like the one below (note that stages are referenced with @):
SELECT $1, $2, $3, $4 FROM @stage1_stage;
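If the file is not plain default CSV, you can also pass a named file format and a filename pattern inline. A minimal sketch, assuming a CSV file format called my_csv_format and files matching data_*.csv already exist in the stage (both names are made up here):
-- Query staged files with an explicit file format and filename pattern
SELECT $1, $2, $3, $4
FROM @stage1_stage (FILE_FORMAT => 'my_csv_format', PATTERN => '.*data_.*[.]csv');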

Related

Change MySQL data received from sensor

I receive data from a Bluetooth sensor via an ESP32, which then sends it to a Raspberry Pi via an API into MySQL. I receive the temperature on the RPi, but as 1230 instead of 12.30. Is it possible to convert it in MySQL, and if so, how?
select 1230/100 will return 12.3
Do the following if you want two decimal places:
select format(temperature/100, 2)
from table1
To use this, you may want to create a view that has this calculated field in it. You would then use the view instead of the table in your API.
create view view1 as
select format(temperature/100, 2) as temperature
from table1;
You can see how it works in Fiddle.
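If you would rather store the scaled value instead of formatting it on every read, a generated column is one option. This is only a sketch, assuming MySQL 5.7+ and the same table and column names as above (temperature_c is an invented name):
-- Keep the raw sensor value and expose a scaled DECIMAL alongside it
alter table table1
  add column temperature_c decimal(6,2)
  generated always as (temperature / 100) stored;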

Neo4j: Relationships from CSV imported extremely slow

I have some issues importing a large set of relationships (2M records) from a CSV file.
I'm running Neo4j 2.1.7 on Mac OSX (10.9.5), 16GB RAM.
The file has the following schema:
user_id, shop_id
1,230
1,458
1,783
2,942
2,123
etc.
As mentioned above - it contains about 2M records (relationships).
Here is the query I'm running using the browser UI (I was also trying to do the same with a REST call):
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file://path/to/my/file.csv" AS relation
MATCH (user:User {id: relation.user_id})
MATCH (shop:Shop {id: relation.shop_id})
MERGE (user)-[:LIKES]->(shop)
This query takes ages to run, about 800 seconds. I do have indexes on :User(id) and :Shop(id). Created them with:
CREATE INDEX ON :User(id)
CREATE INDEX ON :Shop(id)
Any ideas on how to increase the performance?
Thanks
Remove the space before shop_id in the CSV header line.
Try running:
LOAD CSV WITH HEADERS FROM "file:test.csv" AS r
return r.user_id, r.shop_id limit 10;
to see if it loads correctly. With your original data, r.shop_id is null because the column name is actually " shop_id", with the leading space.
Also make sure that you didn't store the ids as numeric values in the first place; if you did, you have to use toInt(r.shop_id), as sketched below.
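For reference, here is the question's import statement with those casts applied; a sketch that assumes both id properties were stored as integers (LOAD CSV always yields strings):
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file://path/to/my/file.csv" AS relation
// toInt converts the CSV string to an integer so the index lookup matches
MATCH (user:User {id: toInt(relation.user_id)})
MATCH (shop:Shop {id: toInt(relation.shop_id)})
MERGE (user)-[:LIKES]->(shop)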
Try profiling your statement in the Neo4j Browser (2.2) or in neo4j-shell.
Remove the PERIODIC COMMIT for that purpose and limit the rows:
PROFILE
LOAD CSV WITH HEADERS FROM "file://path/to/my/file.csv" AS relation
WITH relation LIMIT 10000
MATCH (user:User {id: relation.user_id})
MATCH (shop:Shop {id: relation.shop_id})
MERGE (user)-[:LIKES]->(shop)

SSIS Balanced Data Distributor with Script Component

We have a small Data Flow Task which exports rows from a table to a flat file.
We added a Script Component for a transformation operation (converting VARBINARY to string).
Since the Script Component takes a while, we decided to use the new Integration Services Balanced Data Distributor and split the export across two more flat files.
While executing the task, it seems that the BDD isn't dividing the workload and doesn't run in parallel.
Do you have any idea why?
Have you tried using NTILE and creating multiple OLE DB sources in your Data Flow?
Below is an example of how to do that for two groups; you could of course split your source into as many as you need:
-- SQL Command text for OLE DB Source #1 named "MyGroup NTILE 1"
SELECT v.*
FROM (
    SELECT t.*,
           NTILE(2) OVER (ORDER BY t.my_key) AS MyGroup
    FROM my_schema.my_table t
) v
WHERE v.MyGroup = 1;
-- SQL Command text for OLE DB Source #2 named "MyGroup NTILE 2"
SELECT v.*
FROM (
    SELECT t.*,
           NTILE(2) OVER (ORDER BY t.my_key) AS MyGroup
    FROM my_schema.my_table t
) v
WHERE v.MyGroup = 2;
If you have a good idea in advance of the maximum number of NTILEs you need (say 10), then you could create 10 OLE DB Sources in advance.

Make a table in MS Access

I have a query that I ran in MS Access:
SELECT * FROM table1
INNER JOIN table2 ON table1.f1=table2.f1 WHERE table1.f2=table2.f2
It works fine. However, I need to save the results into another table. So, I changed it to:
SELECT * Into a1
FROM table1 INNER JOIN table2 ON table1.f1=table2.f1 WHERE table1.f2=table2.f2
It does not work. I receive this error: "Cannot Open database. It may not be a database that your application recognizes, or the file may be corrupt."
Does anybody know how I can save the results in a database or txt file?
Thank you very much.
You can use the INSERT INTO command; see: http://msdn.microsoft.com/en-us/library/bb208861(office.12).aspx
The error also suggests that the database is in read-only mode.
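A minimal sketch of that approach, assuming the target table a1 already exists and that f1 and f2 are the columns you want to copy (the column list here is illustrative):
INSERT INTO a1 (f1, f2)
SELECT table1.f1, table1.f2
FROM table1 INNER JOIN table2 ON table1.f1 = table2.f1
WHERE table1.f2 = table2.f2;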
Is the database read-only? Some things to check:
Is the DB file's read-only attribute set?
Did you use "Open Read Only" to open the DB?
Is there enough disk space to create the new table?
You can easily output the results as a .txt file or a .csv file (that you can view in Excel). To export a .txt file:
DoCmd.TransferText acExportDelim, , "myQuery", "C:\myQuery.txt", True
You can research TransferText in help to see the options for a .csv file.
This should work easily.
Try creating a new table with the columns your SELECT returns.
step 1:
CREATE TABLE table_shadi
(
column_name1 data_type,
column_name2 data_type,
column_name3 data_type,
....
)
Make sure you define the same data types and number of fields as your query returns.
step 2:
Insert into table_shadi(column_name1,column_name2,column_name3)
SELECT column_name1,column_name2,column_name3
FROM table1
INNER JOIN table2
ON table1.f1=table2.f1
WHERE table1.f2=table2.f2
Hope it helps.

Pivoting Concept

Hi,
I have a Database design as:
Table File (FileID, Name, Details)
Table Attributes (AttID, AttName, AttType)
Table AttValues (FileID, AttID, AttValue)
Until runtime, it is not known how many Attributes a File has, or what their names are.
After insertion, I want to display the data at the frontend like:
FileID, FileName, Details, (rows of the Attributes table as columns here)
Can anybody provide a piece of code in Java or MySQL to achieve this pivoted result?
Many thanks for your precious time.
Or is there another, better way to store the data, so that I can get the desired result easily?
This requires two queries. First select the File:
SELECT * FROM File WHERE (...)
Then, fetch the Attributes:
SELECT *
FROM AttValues
JOIN Attributes ON (Attributes.AttId = AttValues.AttId)
WHERE FileId = $id
The latter query will provide you with one row per Attribute, which you can programmatically pivot for display on your frontend:
foreach (row in result) {
    table.AddColumn(Header = row['AttName'], Value = row['AttValue']);
}
Adapt to your local programming environment as needed.
Of course, this only works for a single File or Files with the same attributes. If you want to display multiple files with different Attributes you can instead prefetch all AttNames:
SELECT Attributes.AttId, Attributes.AttName
FROM Attributes
JOIN AttValues ON (Attributes.AttId = AttValues.AttId)
WHERE FileId IN ( $list_of_ids )
Then load the values like this:
SELECT *
FROM AttValues
WHERE FileId IN ( $list_of_ids )
and use a local associative array to map from AttIds to column indexes.
As a final optimisation, you can combine the last two queries into an OUTER JOIN to avoid the third round trip, as sketched below. While this will probably increase the amount of data transferred, it also makes filling the table easier, if your class library supports named columns.
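One way to read that suggestion, as a sketch (the $list_of_ids placeholder is the same as above; attributes that have no value for any listed file come back once with a NULL AttValue):
SELECT a.AttId, a.AttName, v.FileId, v.AttValue
FROM Attributes a
LEFT JOIN AttValues v
  ON v.AttId = a.AttId AND v.FileId IN ( $list_of_ids );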
I answered a similar question recently: How to pivot a MySQL entity-attribute-value schema. The answer is MySQL-specific, but I guess that's OK as the question is tagged with mysql.
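If it helps, the usual MySQL-side pivot from that family of answers looks like this once the attribute names are known; a sketch in which 'Author' and 'Size' are invented attribute names:
SELECT f.FileID, f.Name, f.Details,
       MAX(CASE WHEN a.AttName = 'Author' THEN v.AttValue END) AS Author,
       MAX(CASE WHEN a.AttName = 'Size' THEN v.AttValue END) AS Size
FROM File f
LEFT JOIN AttValues v ON v.FileID = f.FileID
LEFT JOIN Attributes a ON a.AttID = v.AttID
GROUP BY f.FileID, f.Name, f.Details;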