This statement reports that it worked (green check), but I don't see an image inserted. The file path should be correct, because I got it from the file data.
UPDATE `inventory`
SET bookImage = LOAD_FILE('C:\xampp\htdocs\1059\homework\books\wuthering.jpg')
WHERE isbn = '978-0141040356';
One thing you should know: if you're connecting to a remote database server, the path is relative to the server the database is on, not your local machine. For comparison, on SQL Server the equivalent server-side read is done with OPENROWSET:
UPDATE inventory
SET bookImage =
(SELECT BulkColumn FROM OPENROWSET(BULK N'C:\wuthering.jpg', SINGLE_BLOB) AS x)
WHERE isbn = '978-0141040356';
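On the MySQL side, note that LOAD_FILE() fails silently: it returns NULL whenever the server can't read the file (missing FILE privilege, a secure_file_priv restriction, or a mangled path), so the UPDATE still "succeeds" and simply stores NULL. Also, backslashes are escape characters in MySQL string literals, so use forward slashes or double the backslashes. A quick diagnostic sketch:

-- NULL here means server file access is disabled; otherwise the file must live under this directory
SELECT @@secure_file_priv;
-- 1 means the server could not read the file
SELECT LOAD_FILE('C:/xampp/htdocs/1059/homework/books/wuthering.jpg') IS NULL AS file_unreadable;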
I have a problem creating tables. I use the code below, and on one machine it works perfectly well. On another machine it gives no error, but it also does not create the tables. I suspected it had something to do with the conda environment, but I made a new environment and still get the same behavior. There is no difference in library versions between the machine where it works and the one where it does not:
python=3.7
mysql-connector-python=8.0.18
The funny thing is that if I execute a SELECT statement, I get valid results.
import mysql.connector
import configparser

config = configparser.RawConfigParser()
config.read('config.ini')

conn = mysql.connector.connect(host=config['mysql report server 8']['host'],
                               port=config['mysql report server 8']['port'],
                               user=config['mysql report server 8']['user'],
                               password=config['mysql report server 8']['password'],
                               allow_local_infile=True,
                               autocommit=1)
mycursor = conn.cursor()

def create_tables(mycursor, name_of_import: str):
    with open(r"../SupportFiles/Table_Create_Query.sql") as f:
        create_tables_str = f.read()
    create_tables_str = create_tables_str.replace("xxx_replaceme", name_of_import)
    # execute(..., multi=True) returns a generator; the statements are only
    # sent and executed as it is consumed, so iterate over the results.
    for _ in mycursor.execute(create_tables_str, multi=True):
        pass

create_tables(mycursor, "my_test_import")
conn.commit()
conn.close()
The file Table_Create_Query.sql has the following contents:
use cb_bht3_0_20_048817_raw;
create table xxx_replaceme_categories (
    cid int,
    variable varchar(255),
    name varchar(255),
    value int,
    ordr int,
    label varchar(255)
);
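Note that the script contains two statements (the USE plus the CREATE TABLE), which is why execute() needs multi=True in the first place. After the placeholder substitution, the script the cursor actually receives would read (with name_of_import set to "my_test_import"):

use cb_bht3_0_20_048817_raw;
create table my_test_import_categories (
    cid int,
    variable varchar(255),
    name varchar(255),
    value int,
    ordr int,
    label varchar(255)
);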
OK, so I'm a complete novice at SSIS, but I needed to export images stored in our DB relating to specific sales orders. I attempted to do this as a stored procedure in SQL, but that required cursors, and I found it very easy to do the same thing in SSIS. I have a bit of an odd question, though. The Data Flow's OLE DB source works fine, but I have had to declare the output file location as a path in the SQL. I have worked out how to create dynamic file creation on the Control Flow, but what I can't work out is how to remove the declared path and point it at the Control Flow's File System Task. I really hope this makes sense, and I appreciate any assistance.
The package is a Control Flow containing a Data Flow Task; the SQL in its OLE DB source is:
DECLARE @Path nvarchar(1000);
SET @Path = N'C:\Users\X-JonC\Documents\';
SELECT
Products_IMAGEDATA.ImageData
, Products_IMAGEDATA.ImageTitle
, @Path + ImageTitle AS path
FROM
SalesOrders
INNER JOIN
SalesOrderItems
ON SalesOrders.SalesOrder = SalesOrderItems.SalesOrder
INNER JOIN
Products
ON SalesOrderItems.Product = Products.Product
INNER JOIN
Products_IMAGEDATA
ON Products.Product = Products_IMAGEDATA.Product
WHERE
SalesOrders.SalesOrderId = ?
A File System Task is something you use to perform operations on the file system (copy/rename/delete files/folders). You likely don't need a file system task unless you need to do something with the file after you've exported it to disk (like copy to a remote location or something).
A Data Flow Task is something you use to move data between multiple places. In your case, you'd like to move data from a database to a file system. The interesting twist is that you need to export binary/image data.
You have an excellent start by having created a data flow task and wired up an Export Column task to it.
The challenge you're struggling with is how to get your SSIS variable's value into the query. Currently, you have it hard-coded to a TSQL variable @Path. Assuming what you have is working, you merely need to use the same parameterization approach you already use for SalesOrderId (the ?) and populate the value of @Path in the same manner; thus line three becomes SET @Path = ?;
One thing to note is that OLE DB and ODBC parameterization is based on ordinal position (0- and 1-based respectively). By adding this new parameter placeholder on line 3, it becomes the first element, as it comes before the WHERE clause's usage, so you will need to update the mapping. Or you can be lazy and replace the empty line 2 with DECLARE @SalesOrderId int = ?; (this allows the first element to remain as is, and you add your new element as +1 to the usage). You'd then need to replace the final question mark with the local variable, like so:
DECLARE @Path nvarchar(1000);
DECLARE @SalesOrderId int = ?;
SET @Path = ?;
SELECT
Products_IMAGEDATA.ImageData
, Products_IMAGEDATA.ImageTitle
, @Path + ImageTitle AS path
FROM
SalesOrders
INNER JOIN
SalesOrderItems
ON SalesOrders.SalesOrder = SalesOrderItems.SalesOrder
INNER JOIN
Products
ON SalesOrderItems.Product = Products.Product
INNER JOIN
Products_IMAGEDATA
ON Products.Product = Products_IMAGEDATA.Product
WHERE
SalesOrders.SalesOrderId = @SalesOrderId;
Reference answers
Using SSIS to extract a XML representation of table data to a file
Export Varbinary(max) column with ssis
I was able to set up a connection from R through RMariaDB and DBI to a remote MariaDB database. However, I am currently encountering a strange change of numbers when querying the database through R. I'll explain the difference:
I inserted one simple entry in my database with the following command:
INSERT INTO respondent ( id, name ) VALUES ( 2388793051, 'testuser' )
When I connect to this database directly on the remote server and execute a statement like this:
SELECT * FROM respondent;
it delivers this value:
id: 2388793051, name: testuser
So I should also be able to connect to the database via R and receive the same results. When I execute the following code in R, I expect to get back the inserted row shown above:
library(DBI)
library(RMariaDB)
conn <- DBI::dbConnect(drv=RMariaDB::MariaDB(), user="myusername", password="mypassword", host="127.0.0.1", port="1111", dbname="mydbname")
res <- dbGetQuery(conn, "SELECT * FROM respondent")
print(res)
However, the result of this query is the following:
id name
-1906174245 testuser
As you can see, the id is now -1906174245 instead of the 2388793051 that is saved in the database. I don't understand this weird conversion of integers in the id field. Can someone explain how this problem arises and how I might solve it?
EDIT: I don't expect this to be a problem, but just to inform you: I am using an SSH tunnel to enable a connection via these specified ports from my local to my remote machine.
SOLUTION: What made the difference was declaring the respondent's id column as BIGINT instead of INT in the table definition. Thanks to @JonnyCrunch.
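The observed value is consistent with a 32-bit signed integer wraparound: 2388793051 is above the signed INT maximum of 2147483647 (though it fits in INT UNSIGNED, which is presumably why the server stored and displayed it correctly), and interpreting that bit pattern as signed gives 2388793051 - 2^32 = -1906174245, exactly what R printed. Declaring the column BIGINT avoids the 32-bit path entirely; a minimal sketch of the change on an existing table:

ALTER TABLE respondent MODIFY id BIGINT;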
I have a query:
$this->source->exec("UPDATE `account` AS `m1`,
(SELECT `m2`.`id`
FROM `account` AS `m2`
WHERE `m2`.`userid` = ? AND `m2`.`demo` = 0
ORDER BY `m2`.`date` DESC LIMIT 1) AS `m2`
SET `m1`.`default` = '1'
WHERE `m1`.`id` = `m2`.`id` AND `m1`.`demo` = 0", $user_id);
Now, PhpStorm is throwing an error for the ORDER BY in the subquery. The query works perfectly when the code is run. I have set MySQL as the SQL dialect in PhpStorm.
The error is:
GROUP or HAVING expected, ORDER got.
How can I fix this error?
Actually, PhpStorm marks this as invalid because the SQL embedded in the PHP string is being checked without a configured data source; PHP itself knows nothing about the SQL dialect in use. If you put the same SQL into a standalone .sql file, it is shown without errors.
So create a test file such as test.sql and copy your SQL code into it. Above the code you will see an orange banner; click Configure Data Source, select MySQL in the left-hand panel, and click Download at the bottom of the panel. It will download the driver file(s) relevant to your version. Then click OK.
Now delete the test .sql file, and you can use the query as before, without the error. I have tested this and it works for me.
I have a MySQL server, and administration is done via phpMyAdmin. Everything has worked fine "forever", but now I have realized that I have a problem:
I often run SQL updates using the "SQL" link (Run SQL query/queries on server).
When I enter a lot of statements, like this:
UPDATE table SET column = 'new value A' WHERE id = 1;
UPDATE table SET column = 'new value B' WHERE id = 2;
UPDATE table SET column = 'new value C' WHERE id = 3;
....
UPDATE table SET column = 'new value Z' WHERE id = 100;
I have found that only about 40-50 of the statements are executed: no error messages, nothing seems broken, but not all 100 or more short SQL statements are carried out.
Has anyone encountered the same thing, or even better:
What can be done to make sure all lines/SQL statements are processed?
Not much can be done. I've encountered the same thing and am comfortable submitting a few statements at a time via phpMyAdmin, but for anything more than, say, 20 simple statements I go to my MySQL host and import a file with all my statements, like $> mysql -u me -pmypass mydb < file_with_many_statements.sql
There's clearly a limitation, and it could depend on a number of factors: phpMyAdmin itself, settings in your host's PHP/web server/MySQL configuration, network issues, etc.
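Another way to stay under any statement limit is to collapse the per-row updates into a single statement. A minimal sketch, reusing the placeholder table and column names from the example above and assuming all updates target the same column:

UPDATE `table`
SET `column` = CASE id
    WHEN 1 THEN 'new value A'
    WHEN 2 THEN 'new value B'
    WHEN 3 THEN 'new value C'
END
WHERE id IN (1, 2, 3);

The WHERE clause keeps rows outside the list untouched, so the CASE never falls through to NULL for them.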