How to access an SQL database from a website? - mysql

How can I access an SQL database that lives at, e.g.,
http://localhost/test.php?blah/blah/blah.sql
or online, using an ADO connection or something similar?
I use:
PHP Triad (PHP, Apache 1.3.23, MySQL)
Delphi 7
I've searched Google but did not find anything.
With *.mdb files I usually use the following approach:
ADOTable1.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;' +
  'Data Source=' +
  ExtractFilePath(Application.ExeName) +
  'data.mdb;Persist Security Info=False';
The core of my question is the part of the connection string that points to the database location, e.g.:
Source=' + ExtractFilePath(Application.ExeName) + 'data.mdb;
or
Source='D:\Test\data.mdb';
or
Source='Drive\Directory\Filename.Extension';
How do I change the examples above into something like:
Source='http://localhost/Test/Data.db';
I just want to know how to connect with ADO (or whatever components can be used) to import data from MySQL into the columns of a DBGrid, so that I can add, edit and delete existing data in the MySQL table through Delphi.
NB: I do not know the structure of the MySQL table, because I usually only work with *.mdb files.
For PHP scripts I've come across plenty of tutorials, but I have found nothing for Delphi.

Related

How can I connect to a MySQL database into Apache Spark using SparkR?

I am working with Spark 2.0 and the SparkR libs. I would like sample code showing how to do the following in SparkR:
Connect to a MySQL or any other SQL database using SparkR.
Write SQL queries like SELECT, UPDATE etc. to modify a table in that database.
I know how to do it using plain R. However, I need some help using Spark sessions or the SparkSQL context. I am using RStudio for development.
Moreover, how do we submit this R code as a Spark batch job so it runs continuously at regular intervals?
# JDBC URL for the MySQL server (host/port/database masked as in the question)
jdbcurl <- "jdbc:mysql://xxx.xxx.x.x:xxxx/database"
# Read the table into a SparkDataFrame; the MySQL JDBC driver must be on the Spark classpath
data <- read.jdbc(jdbcurl, "tablename", user = "user", password = "password")

Backup database(s) using query without using mysqldump

I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like the CREATE and INSERT commands to be generated at the same time - basically, the same output that mysqldump produces. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
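(For reference, a SELECT ... INTO OUTFILE statement looks roughly like the sketch below, with a hypothetical table and path; note that it writes the file to the database server's filesystem, not to the client.)
SELECT * FROM mytable
  INTO OUTFILE '/tmp/mytable.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';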
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute a script via an HTTP request on the host's web server, this will work:
You should be able to run the following pure-PHP script straight from a secure directory under /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');
$dumpSettings = array(
'include-tables' => array('table1', 'table2'),
'exclude-tables' => array('table3', 'table4'),
'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
'no-data' => false,
'add-drop-table' => false,
'single-transaction' => true,
'lock-tables' => false,
'add-locks' => true,
'extended-insert' => true
);
$dump = new MySQLDump('database','database_user','database_pass','localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using whatever scripting option your host provides, you can achieve this with only a little difficulty. You can create a secure web page or straight text dump link known only to you and sufficiently secured to prevent unauthorized access. The script that builds the page/text contents could follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the CREATE statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS X;" before each CREATE statement generated from the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download (the queries involved are sketched below).
The final web page/text download would contain all the CREATE statements with "DROP TABLE IF EXISTS" safeguards, followed by the INSERT statements. Save the output to your own machine as a ".sql" file, and execute it on any backup host as needed.
I'm sorry you have to go through with this. Note that preserving mysql user accounts that you need is something else entirely.
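A rough sketch of the statements involved, using a hypothetical table name (the rewrite into INSERT statements is done by your script, not by MySQL):
-- Step 1: list all tables in the current database
SHOW TABLES;
-- Step 2: capture the definition of each table, e.g. for a table named customers
SHOW CREATE TABLE customers;
-- Step 3: read every row of the table
SELECT * FROM customers;
-- your script then rewrites each row, e.g. (1, 'Alice') becomes:
-- INSERT INTO customers VALUES (1, 'Alice');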
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer this as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This lets you export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily -- and it works for older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question says 'using query', but I believe the point here is that any means necessary is sought when shell access is not available -- that is how I landed on this page, and phpMyAdmin saved me!

Most effective way to push data from a SQL Server database into a Greenplum database?

Greenplum Database version:
PostgreSQL 8.2.15 (Greenplum Database 4.2.3.0 build 1)
SQL Server Database version:
Microsoft SQL Server 2008 R2 (SP1)
Our current approach:
1) Export each table to a flat file from SQL Server
2) Load the data into Greenplum with pgAdmin III using PSQL Console's psql.exe utility
Benefits...
Speed: OK, but is there anything faster? We load millions of rows of data in minutes
Automation: OK, we call this utility from an SSIS package using a Shell script in VB
Pitfalls...
Reliability: ETL is dependent on the file server to hold the flat files
Security: Lots of potentially sensitive data on the file server
Error handling: it's a problem. psql.exe never raises an error that we can catch, even when it errors out and loads no data or only a partial file
What else we have tried...
.Net Providers\Odbc Data Provider: We have configured a System DSN using the DataDirect 6.0 Greenplum Wire Protocol. Good performance for a DELETE. Dreadfully slow for an INSERT.
For reference, this is the aforementioned VB script in SSIS...
Public Sub Main()
    Dim v_shell
    Dim v_psql As String
    ' Build the psql command line; embedded quotes must be doubled inside a VB string
    v_psql = """C:\Program Files\pgAdmin III\1.10\psql.exe"" -d ""MyGPDatabase"" -h ""MyGPHost"" -p ""5432"" -U ""MyServiceAccount"" -f \\MyFileLocation\SSIS_load\sql_files\load_MyTable.sql"
    v_shell = Shell(v_psql, AppWinStyle.NormalFocus, True)
End Sub
This is the contents of the "load_MyTable.sql" file...
\copy MyTable from '\\MyFileLocation\SSIS_load\txt_files\MyTable.txt' with delimiter as ';' csv header quote as '"'
If you're getting your data load done in minutes, then the current method is probably good enough. However, if you find yourself having to load larger volumes of data (terabyte scale, for instance), the usual preferred method for bulk-loading into Greenplum is via gpfdist and corresponding EXTERNAL TABLE definitions. gpload is a decent wrapper that provides abstraction over much of this process and is driven by YAML control files. The general idea is that gpfdist instance(s) are spun up at the location(s) where your data is staged, preferably as CSV text files, and then the EXTERNAL TABLE definition within Greenplum is made aware of the URIs for the gpfdist instances. From the admin guide, a sample definition of such an external table could look like this:
CREATE READABLE EXTERNAL TABLE students (
name varchar(20), address varchar(30), age int)
LOCATION ('gpfdist://<host>:<portNum>/file/path/')
FORMAT 'CUSTOM' (formatter=fixedwidth_in,
name=20, address=30, age=4,
preserve_blanks='on',null='NULL');
The above example expects to read fixed-width text files whose fields, from left to right, are a 20-character name, a 30-character address, and a 4-character age. To actually load this data into a staging table inside GP:
CREATE TABLE staging_table AS SELECT * FROM students;
For large volumes of data, this should be the most efficient method since all segment hosts are engaged in the parallel load. Do keep in mind that the simplistic approach above will probably result in a randomly distributed table, which may not be desirable. You'd have to customize your table definitions to specify a distribution key.
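For illustration, a sketch of a staging table with an explicit distribution key, reusing the students columns from the example above (the choice of key here is arbitrary):
-- create the staging table with an explicit distribution key
CREATE TABLE staging_students (name varchar(20), address varchar(30), age int)
DISTRIBUTED BY (name);
-- pull the rows in from the external table in parallel across the segments
INSERT INTO staging_students SELECT * FROM students;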

How to convert H2Database database file to MySQL database .sql file?

I have some data in an H2 database file and I want to convert it to a MySQL .sql database file. What methods can I follow?
Following up on Thomas Mueller's answer, SQuirreL SQL worked fine for me.
Here is the procedure on Windows to convert an H2 database:
Go to "drivers list", where everything is red by default.
Select "H2" driver, and specify the full path to "h2-1.3.173.jar" (for
example) in "Extra Class Path". The H2 driver should display a blue
check in the list.
Select your target driver (PostgreSQL, MySQL), and
do the same, for example for PostgreSQL, specify the full path to
"postgresql-9.4-1201.jdbc41.jar" in Extra Class Path.
Go to "Aliases", then click on "+" for H2 : configure your JDBC chain, for example copy/paste the jdbc chain you obtain when you launch H2, and do the same for your target database: click on "+", configure and "test".
When you double click on your alias, you should see everything inside your database in a new Tab. Go to the tables in source database, do a multi-select on all your tables and do a right-click : "Copy Table".
Go to your target database from Alias, and do a "Paste Table". When all tables are copied altogether, the foreign key references are also generated.
Check your primary keys : from H2 to PostgreSQL, I lost the Primary Key constraints, and the auto-increment capability.
You could also rename columns and tables by a right click : "refactor". I used it to rename reserved words columns after full copy, by disabling name check in options.
This worked well for me.
The SQL script generated by the H2 database is not fully compatible with the SQL supported by MySQL. You would have to change the SQL script manually. This requires that you know both H2 and MySQL quite well.
To avoid this problem, an alternative and probably simpler way to copy the data from H2 to MySQL is to use a third-party tool such as SQuirreL SQL together with the SQuirreL DB Copy plugin. (First you need to install SQuirreL SQL and, on top of that, the SQuirreL DB Copy plugin.)
I created a Groovy script that does the migration from H2 to MySQL. From there you could do a mysqldump. It requires that the tables already exist in the MySQL database. It should work for other DBMSs with minor changes.
@Grapes(
[
@Grab(group='mysql', module='mysql-connector-java', version='5.1.26'),
@Grab(group='com.h2database', module='h2', version='1.3.166'),
@GrabConfig(systemClassLoader = true)
])
import groovy.sql.Sql
def h2Url='jdbc:h2:C:\\Users\\xxx\\Desktop\\h2\\sonardata\\sonar'
def h2User='sonar'
def h2Passwd='sonar'
def mysqlUrl='jdbc:mysql://10.56.xxx.xxx:3306/sonar?useunicode=true&characterencoding=utf8&rewritebatchedstatements=true'
def mysqlUser='sonar'
def mysqlPasswd='xxxxxx'
def mysqlDatabase='sonar'
sql = Sql.newInstance(h2Url, h2User, h2Passwd, 'org.h2.Driver' )
def tables = [:]
sql.eachRow("select * from information_schema.columns where table_schema='PUBLIC'") {
if(!it.TABLE_NAME.endsWith("_MY")) {
if (tables[it.TABLE_NAME] == null) {
tables[it.TABLE_NAME] = []
}
tables[it.TABLE_NAME] += it.COLUMN_NAME;
}
}
tables.each{tab, cols ->
println("processing $tab")
println("droppin $tab"+"_my")
sql.execute("DROP TABLE IF EXISTS "+tab+"_my;")
sql.execute("create linked table "+tab+"_my ('com.mysql.jdbc.Driver', '"+mysqlUrl+"', '"+mysqlUser+"', '"+mysqlPasswd+"', '"+mysqlDatabase+"."+tab.toLowerCase()+"');")
sql.eachRow("select count(*) as c from " + tab + "_my"){println("deleting $it.c entries from mysql table")}
result = sql.execute("delete from "+tab+"_my")
colString = cols.join(", ")
sql.eachRow("select count(*) as c from " + tab){println("starting to copy $it.c entries")}
sql.execute("insert into " + tab + "_my ("+colString+") select "+colString+" from " + tab)
}
The H2 database allows you to create a SQL script using the SCRIPT SQL statement or the Script command line tool. Possibly you will need to tweak the script before you can run it against the MySQL database.
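For example, a minimal sketch (the file names are arbitrary) run from the H2 console or a JDBC session:
-- dump schema and data to a script file
SCRIPT TO 'h2-dump.sql';
-- schema only, without the data
SCRIPT NODATA TO 'h2-schema.sql';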
You can use Fullconvert to convert the database; it's easy to use.
Follow the steps shown here:
https://www.fullconvert.com/howto/h2-to-mysql

Import a database dump to MySQL using Visual FoxPro

I used Leafe's stru2mysql.prg and vfp2mysql_upload.prg to create a .sql dump file from DBFs. I connect to the MySQL database from VFP using ODBC. I know how to upload the SQL dump file, but I need to automate the whole process, i.e. after creating the dump file, my Visual FoxPro program should upload it without a third party (automatically). I thought of using the source command, but that needs to be run at the MySQL prompt. The assumption here is that my end users don't know how to import (which most of them don't). Please advise on how I can automate importing the SQL file into the MySQL database. Thank you.
I think what you are looking for are the various SQL* functions in FoxPro. See the VFP help or MSDN on the SQLCONNECT (or SQLSTRINGCONNECT), SQLEXEC, and SQLDISCONNECT functions to get you started. Microsoft provides good examples of each in the documentation.
You may also want to use FILETOSTR to get the output from Leafe's programs into a string for the SQLEXEC function.
Here are the steps I use to take data from a Visual FoxPro database and upload it to a MySQL database. These are all put into a custom method on a form, which is fired by a command button. For example, the method would be 'uploadnewdata' and I pass parameters for whichever data tables I need.
1) Connect to the server - I use the MySQL ODBC driver
2) Validate the user (this uses SQLEXEC to pull the matching record from the users table):
IF M.WorkingDatabase<>-1
nRetVal=SQLEXEC(m.WorkingDatabase,"SELECT * FROM users", "csrUsersOnServer")
SELECT csrUsersOnServer
SELECT userid FROM csrUsersOnServer;
WHERE ALLTRIM(UPPER(userid))=ALLTRIM(UPPER(lcRanchUser));
AND ALLTRIM(UPPER(lcPassWord))=ALLTRIM(UPPER(lchPassWord));
INTO CURSOR ValidUsers
IF _TALLY>=1
ELSE
=MESSAGEBOX("Your Premise ID Does Not Match Any Records On The Server","System Message")
RETURN 0
ENDIF
ELSE
=MESSAGEBOX("Unable To Connect To Your Database", "System Message")
RETURN 0
ENDIF
3) Once that is successful, I create my base cursor (this is the one I'm sending from)
4) I then loop through that cursor, creating variables for the values in the fields
5) Then, using SQLEXEC and INSERT INTO, I upload each record (a sketch of the statement is below)
6) Once the program is finished processing the cursor, it generates a message box with the 'finished' message and control returns to the form.
All the user has to do is select the starting table and enter their login information.
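A hedged sketch of the kind of statement passed to SQLEXEC in step 5, with hypothetical table and field names (the ?m.variable placeholders are filled in by VFP from the variables created in step 4):
-- pass-through SQL executed on the MySQL side via SQLEXEC
INSERT INTO mytable (field1, field2)
VALUES (?m.lcField1, ?m.lcField2)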