I've got a script to load a text file into an Amazon RDS MySQL database. It needs to handle a variable number of columns from the text file and store them as JSON. Here's an example with 5 columns that get stored as JSON:
LOAD DATA LOCAL INFILE '/Applications/FileMaker Pro 14/containers/imports/load1.txt'
INTO TABLE jdataholder
(rowdatetime, @v1, @v2, @v3, @v4, @v5)
SET loadno = 1, formatno = 1,
    jdata = JSON_OBJECT('Site', @v1, 'Nutrients', @v2, 'Dissolved_O2', @v3, 'Turbitidy', @v4, 'Nitrogen', @v5);
local_infile is 'ON' on the server. The query works in Sequel Pro. When I try to run it in FileMaker Pro 14's Execute SQL script step (running on OS X 10.12), it doesn't work. I know the connection to the server is working because I can run other queries that don't use the LOAD DATA LOCAL INFILE statement.
The error message I get says:
ODBC ERROR: [Actual][MySQL] Load data local infile forbidden
From other answers on SO and elsewhere it seems like the client also needs to have local_infile enabled. This would explain why it works on one client and not the other. I've tried to do this, but the instructions I've seen all use the terminal. I don't think FileMaker has anything like this - you just enter SQL into a query editor and send it to a remote database. I'm not sure how, or even if, you can change the configuration of the client.
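For what it's worth, the server-side value is easy to confirm with a plain query; the client-side switch, as far as I can tell, has to come from the connecting driver rather than from SQL. This is just how I double-checked the server:

-- Confirm the server-side setting (this returns ON for my RDS instance,
-- so the block must be on the client side)
SHOW GLOBAL VARIABLES LIKE 'local_infile';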
Does anyone know how to enable this in FileMaker? Or is there something else I can do to make this work?
Could I avoid this if I ran the LOAD DATA LOCAL INFILE query from a stored procedure? That was my original plan, but the documentation says that LOAD DATA INFILE has to be given literal strings, not variables, so I couldn't think of a way to handle a variable number of columns.
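Just to show what I mean by a variable number of columns, a different file format would simply need a different generated statement along the same lines (the file name and field names here are made up for illustration):

LOAD DATA LOCAL INFILE '/Applications/FileMaker Pro 14/containers/imports/load2.txt'
INTO TABLE jdataholder
(rowdatetime, @v1, @v2, @v3)
SET loadno = 2, formatno = 2,
    jdata = JSON_OBJECT('Site', @v1, 'pH', @v2, 'Temperature', @v3);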
Thanks.
It's possible that FileMaker's Execute SQL script step does not support this. I would suggest using a plugin that can run terminal commands, then use that plugin to perform the action through the terminal from FileMaker. There's a free plugin, BaseElements, that has a function for this. Below is a link to the documentation for that specific function.
https://baseelementsplugin.zendesk.com/hc/en-us/articles/205350547-BE-ExecuteShellCommand
If you are trying to insert data into a MySQL table, a better way is to use FileMaker ESS (External SQL Sources), which allows you to work with MySQL, Oracle and other supported databases inside FileMaker. This includes being able to import data into the ESS (MySQL) table. You can see a PDF document on ESS below:
https://www.filemaker.com/downloads/documentation/techbrief_intro_ess.pdf
Firstly, I understand that attempting to do this from MySQL itself is not allowed:
http://dev.mysql.com/doc/refman/5.6/en/stored-program-restrictions.html
When I try to use LOAD DATA INFILE 'c:/data.csv' ... , I get the error "LOAD DATA IS NOT ALLOWED IN STORED PROCEDURES".
I am a beginner at moving data around MySQL and I realize this may not be a task it was designed to handle. Therefore, what approach should I use to grab data from a CSV file and append it to a table at a regular time interval? (I have researched cron a little, but that is for UNIX systems only and we are using a Windows-based OS.)
You can run cron-style jobs on Windows as well (for example with the Windows Task Scheduler). I found a couple of links after searching; please look into these:
waytocode.com/2012/setup-cron-job-on-windows-server
http://stackoverflow.com/questions/24035090/run-cron-job-on-php-script-on-localhost-in-windows
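Since the restriction only applies inside stored programs, the scheduled job can run LOAD DATA as a plain statement through the mysql command-line client. A minimal sketch of the SQL such a job might execute (the table name, delimiters and header handling are assumptions, not from the question):

LOAD DATA INFILE 'c:/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;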
As a security-minded person, I'd like to minimize the damage caused in the off chance that an SQL injection ever happens. One such possibility is that there are queries that can read and write information to a file on the local file system. This is clearly a major issue in the case of a security breach, and the use of these commands is fairly limited in day-to-day work. Optionally, it could be turned on for an isolated period of time when I need to import or export data, but I would like it turned off explicitly at any other time. No amount of googling or skimming the MySQL manual has led me to a specific setting that lets me disable this.
I know I could easily just revoke the privilege for all users, but I'd like a simpler solution that by default increases my security (at least in this specific case).
Does anyone know of any setting or way to deactivate any file interactions in MySQL?
Thanks!
http://dev.mysql.com/doc/refman/5.6/en/load-data.html says:
Also, to use LOAD DATA INFILE on server files, you must have the FILE privilege.
http://dev.mysql.com/doc/refman/5.6/en/select-into.html says:
The SELECT ... INTO OUTFILE 'file_name' form of SELECT writes the selected rows to a file. The file is created on the server host, so you must have the FILE privilege to use this syntax.
You can also set the secure_file_priv config variable:
By default, this variable is empty. If set to the name of a directory, it limits the effect of the LOAD_FILE() function and the LOAD DATA and SELECT ... INTO OUTFILE statements to work only with files in that directory.
In Percona Server, secure_file_priv has an additional usage:
When used with no argument, the LOAD_FILE() function will always return NULL. The LOAD DATA INFILE and SELECT INTO OUTFILE statements will fail with the following error: “The MySQL server is running with the –secure-file-priv option so it cannot execute this statement”.
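As a rough sketch of how this is usually locked down (the account name is hypothetical, and note that secure_file_priv is read-only at runtime, so it has to go in the server configuration):

-- See which accounts currently hold the FILE privilege, and what secure_file_priv is set to
SELECT user, host FROM mysql.user WHERE File_priv = 'Y';
SHOW VARIABLES LIKE 'secure_file_priv';

-- Revoke the privilege from an application account (hypothetical account name)
REVOKE FILE ON *.* FROM 'appuser'@'%';

-- secure_file_priv cannot be changed with SET; in my.cnf, under [mysqld]:
--   secure_file_priv = /var/lib/mysql-files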
I am trying to upload a 32 MB MySQL database dump into a pre-existing database, but phpMyAdmin on my shared hosting has a 10 MB limit... I have tried zipping it up, but when the server unzips the database, the uncompressed file is too large for the server to handle.
Is it possible to split the database up and upload it by pasting it in parts as an SQL query - I assume I would need each chunk to have something at the start of it which says
"Import this data into the pre-existing tables in the database"
What would this be?
At the moment there are a few hundred lines saying things like "CREATE" and "INSERT INTO".
You might try connecting to the database remotely with MySQL Workbench, or the command-line tool mysql. If you can do that, you can run:
source c:/path/to/your/file.sql
and you won't be constrained by phpMyAdmin's upload size restrictions. Most shared hosting I've seen allows it. If not, you may just need to grant permissions for the user@host in phpMyAdmin (or whatever the interface is).
The dump file created by mysqldump is just a set of SQL statements that will rebuild your tables.
To load it in chunks, I'd recommend either dumping it out in sets of tables and loading them one by one, or, if required, splitting the single dump file manually; it should be roughly in the following (pseudo) format:
-- Set things up ready for loading
CREATE TABLE t1 (...);
INSERT INTO t1 VALUES (...);
INSERT INTO t1 VALUES (...);
CREATE TABLE t2 (...);
INSERT INTO t2 VALUES (...);
INSERT INTO t2 VALUES (...);
-- Finalise stuff after loading
You can manually split the file up by keeping the commands at the start and finish and just choosing blocks for individual tables by looking for their CREATE TABLE statements.
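For example, a self-contained chunk pasted into phpMyAdmin might look roughly like this; the SET statements mirror what mysqldump normally puts at the top and bottom of a dump, and the table and columns are placeholders. If the tables already exist on the destination, the chunks only need the INSERT blocks.

-- Header: session settings kept from the top of the original dump
SET NAMES utf8;
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;

-- One table's block, copied from the dump (placeholder table)
CREATE TABLE IF NOT EXISTS t1 (
  id INT NOT NULL PRIMARY KEY,
  name VARCHAR(100)
);
INSERT INTO t1 VALUES (1, 'first row');
INSERT INTO t1 VALUES (2, 'second row');

-- Footer: restore the settings at the end of each chunk
SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;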
I am trying to transfer bulk data on a constant and continuous basis from a SQL Server database to a MySQL database. I wanted to use SQL Server's replication (via SSMS), but apparently that is only for SQL Server to Oracle or IBM DB2 connections. Currently we are using SSIS to transform data and push it to a temporary location at the MySQL database, where it is copied over. I would like the fastest way to transfer data and am contemplating several methods.
I have a new way I plan on transforming the data which I am sure will solve most time issues, but I want to make sure we do not run into time problems in the future. I have set up a linked server that uses a MySQL ODBC driver to talk between SQL Server and MySQL. This seems VERY slow. I have some code that also uses Microsoft's ODBC driver, but it is used so little that I cannot gauge the performance. Does anyone know of lightning-fast ways to communicate between these two databases? I have been researching MySQL's data providers, which seem to communicate with an OLE DB layer. I'm not too sure what to believe or which way to steer; any ideas?
I used the JDBC-ODBC bridge in Java to do just this in the past, but performance through ODBC is not great. I would suggest looking at something like http://jtds.sourceforge.net/ which is a pure Java driver that you can drop into a simple Groovy script like the following:
import groovy.sql.Sql

sql = Sql.newInstance( 'jdbc:jtds:sqlserver://serverName/dbName-CLASS;domain=domainName',
                       'username', 'password', 'net.sourceforge.jtds.jdbc.Driver' )

sql.eachRow( 'select * from tableName' ) {
    println "$it.id -- ${it.firstName} --"
    // probably write to mysql connection here or write to file, compress, transfer, load
}
The following performance numbers give you a feel for how it might perform:
http://jtds.sourceforge.net/benchTest.html
You may find some performance advantages in dumping the data to a MySQL dump-file format and using LOAD DATA INFILE instead of writing row by row. MySQL shows significant performance improvements on large data sets if you load from infiles and do things like atomic table swaps (a sketch of the swap idea follows the transcript below).
We use something like the following to quickly move large data files into MySQL from one system to another; it is the fastest mechanism we've found to load data into MySQL. For real-time, row-by-row transfer, a simple loop in Groovy plus some table to keep track of which rows have already been moved might be enough.
mysql> select * from tablename into outfile 'tablename.dat';
shell> myisamchk --keys-used=0 -rq '/data/mysql/schema_name/tablename'
mysql> load data infile 'tablename.dat' into table tablename;
shell> myisamchk -rq /data/mysql/schema_name/tablename
mysql> flush tables;
mysql> exit;
shell> rm tablename.dat
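The atomic table swap mentioned above isn't shown in that transcript; a rough sketch of the idea (staging and backup table names are made up here) is to load into a copy and swap it in with a single RENAME, which MySQL performs atomically:

-- Load into a staging copy, then swap it into place in one atomic rename
CREATE TABLE tablename_new LIKE tablename;
LOAD DATA INFILE 'tablename.dat' INTO TABLE tablename_new;
RENAME TABLE tablename TO tablename_old, tablename_new TO tablename;
DROP TABLE tablename_old;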
The best way I have found to transfer SQL data (if you have the space) is to take a SQL dump in one dialect and then use a conversion tool (or Perl script; both are prevalent) to convert the dump from MSSQL to MySQL. See my answer to this question about which converter you may be interested in :) .
We've used the ADO.NET driver for MySQL in SSIS with quite a bit of success. Basically, install the driver on the machine with Integration Services installed, restart BIDS, and it should show up in the driver list when you create an ADO.NET connection manager.
As for replication, what exactly are you trying to accomplish?
If you are monitoring changes, treat it as a type 1 slowly changing dimension (data warehouse terminology, but the same principle applies). Insert new records, update changed records.
If you are only interested in new records and have no plans to update previously loaded data, try an incremental load strategy. Insert records where source.id > max(destination.id).
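A rough sketch of that incremental load, written as a single statement for the case where both tables are reachable from one connection (for example over the linked server mentioned in the question); table and column names are placeholders, and in SSIS this would normally be split into a source query and a destination insert:

-- Pull only rows newer than the highest id already loaded into the destination
INSERT INTO destination_table
SELECT s.*
FROM source_table AS s
WHERE s.id > (SELECT COALESCE(MAX(d.id), 0) FROM destination_table AS d);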
After you've tested the package, schedule a job in SQL Server Agent to run the package every x minutes.
You can also try the following:
http://kofler.info/english/mssql2mysql/
I tried this a while back and it worked for me, but I wouldn't recommend it to you.
What is the real problem - what are you trying to do?
Can't you get an MSSQL DB connection, for example from Linux?
While trying to upload a 3.9 GB SQL file via BigDump, I get the error:
UNEXPECTED: Can't set file pointer behind the end of file
The database dump was exported from phpMyAdmin and the file is not corrupted. What is the problem? What are other ways to import such a big database?
BigDump uses an INSERT INTO table VALUES (...) kind of method.
This is a very slow way of inserting!
Use
LOAD DATA INFILE 'c:/filename.csv' INTO TABLE table1
instead. Note the use of forward slashes, even on Windows.
See: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
This is the fastest way possible to insert data into a MySQL table.
It will only work if the input file is on the same server as the MySQL server though.
I get a similar error: "I can't seek into .sql".
The reason for this error is that BigDump tries to set a file pointer at the end of the .sql file to find out its size (using the fseek() and ftell() functions). Since fseek() fails on files over 2 GB, you get this error. The solution is to split your SQL file into chunks of roughly 1.5 GB to 2 GB each...