Transfer data between two different databases with Delphi - MySQL

I want to transfer data between two different databases in Delphi. I cannot use TBatchMove because the columns to be transferred are selected by the user.
I tried to adapt the following SQL, but I do not know how to make it work across two different databases.
INSERT INTO table_destination ( colonne1, colonne2, colonne3, ..., colonneN )
SELECT
colonne3,
colonne8,
NULL,
...,
colonne137,
...,
colonneN
FROM table_source;

Actually, even in D7, you should be able to do this quite easily as long as Blob columns are not involved. Steps:
Create an ODBC System DSN pointing to your MySQL destination database.
Create a Delphi project with a TTable, Table1, which opens your Paradox table; a TQuery, Query1, which uses the same BDE alias as Table1;
a TTable, Table2, which uses the ODBC alias from step 1; and a TBatchMove component. Give Table2 a TableName which is what you want the MySQL table to be called.
Provide a GUI method for the user to select which columns to copy.
When the user has selected the columns to copy, build a SQL statement that selects those columns from your Paradox table (a sketch of such a statement appears after these steps), load it into Query1 and call Open on it.
Set Query1 as the Source of BatchMove1, Table2 as the Destination and the BatchMove mode as batCopy.
Call BatchMove1.Execute.
That should do it.
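As a rough illustration, the statement loaded into Query1 might look like the following; the table and column names are hypothetical stand-ins for whatever the user happens to select.
-- Hypothetical example: the user picked three columns from the local Paradox table
SELECT CustNo, Company, Phone
FROM Customer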
My first attempt to test this, using the Biolife.Db table from the "FishFacts" demo failed because I got an exception complaining about an invalid blob size. My second attempt, using the Customer.Db table worked fine though.
As far as the blob-size problem is concerned, although it has been 15+ years since I used the BDE, ISTR that there is a maximum blob-size configuration parameter that can be adjusted via the BDE Administrator and that there was a way to set it to "no maximum", possibly by setting it to -1.
Current versions of Delphi include the FireDAC data-access components, which include a "modern" BatchMove component which does not depend on the BDE, but I think FireDAC was only included with Delphi in XE8.

Related

No columns returned SSIS

I am implementing an SSIS package and am currently trying to do the following.
Truncate the destination table
Fetch the data by executing the stored procedure and insert it into the destination table.
I have created an Execute SQL Task to address step 1 and a data flow with an OLE DB source and OLE DB destination to address step 2. It has been working successfully so far, but it isn't working for one of my stored procedures that uses temp tables.
When I edit the OLE DB source and click the preview button, I get the error "no columns returned".
I know that SSIS has an issue with generating columns while executing stored procedures that depend on temp tables. I have converted the stored proc to use table variables and it is now able to return columns in SSIS when I do a preview. The only downside is that the stored procedure now takes longer to execute: 1 hour 15 minutes as compared to 15 minutes when using temp tables.
I did see a suggestion to use SET FMTONLY before executing the stored procedure as an alternative to switching to table variables, but that didn't seem to work, as I got a syntax or permission-denied error.
Could somebody suggest a solution to my problem that does not compromise on performance?
Sounds like you've already read all the approaches to using Temp tables in SSIS, including the IF 1=0... trick? If you haven't seen that one yet, google it.
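For anyone who hasn't seen it, the idea is to put a dummy SELECT that never executes, but has the same column names and types as the real result set, at the top of the procedure so the design-time metadata probe has something to read. A minimal sketch; the procedure name, columns and types are made up for illustration.
CREATE PROCEDURE dbo.MyTempTableProc
AS
BEGIN
    -- Never executed, but gives design-time metadata probes a result-set shape to report
    IF 1 = 0
    BEGIN
        SELECT
            CAST(NULL AS INT)         AS OrderId,
            CAST(NULL AS VARCHAR(50)) AS CustomerName,
            CAST(NULL AS DATETIME)    AS OrderDate;
    END;

    CREATE TABLE #Work
    (
        OrderId      INT,
        CustomerName VARCHAR(50),
        OrderDate    DATETIME
    );

    -- ... populate and transform #Work as before ...

    SELECT OrderId, CustomerName, OrderDate
    FROM #Work;
END;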
You say that using Table Variables causes your stored procedure to take about 5 times longer than using Temp Tables. The most likely reason for that is that you are indexing your temp tables but not your table variables. If you didn't know that table variables can be indexed, they can. You might try that.
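If missing indexes are indeed the cause, note that a table variable can pick up indexes through PRIMARY KEY and UNIQUE constraints in its declaration (and, from SQL Server 2014 onwards, through inline INDEX definitions). A sketch with made-up columns:
DECLARE @Work TABLE
(
    OrderId      INT         NOT NULL PRIMARY KEY,  -- clustered index via the constraint
    CustomerName VARCHAR(50) NOT NULL,
    OrderDate    DATETIME    NOT NULL,
    UNIQUE (OrderDate, OrderId)                     -- second index via a UNIQUE constraint
    -- On SQL Server 2014+ you could also add: INDEX IX_CustomerName (CustomerName)
);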
Finally, a solution that you haven't mentioned is that you can replace your temporary table with a real table that gets truncated when you're done using it.
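A rough sketch of that last option (table and column names are hypothetical): the staging table is permanent, so SSIS can read its metadata, and the procedure simply empties it when it is finished.
-- Created once, outside the stored procedure
CREATE TABLE dbo.OrderStaging
(
    OrderId      INT,
    CustomerName VARCHAR(50),
    OrderDate    DATETIME
);

-- Inside the stored procedure: use it like a temp table, then clean up
INSERT INTO dbo.OrderStaging (OrderId, CustomerName, OrderDate)
SELECT OrderId, CustomerName, OrderDate
FROM dbo.Orders;  -- hypothetical source table

-- ... heavy processing against dbo.OrderStaging ...

SELECT OrderId, CustomerName, OrderDate
FROM dbo.OrderStaging;

TRUNCATE TABLE dbo.OrderStaging;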
Short comment:
Try EXEC WITH RESULT SETS and specify the metadata yourself for a proc with temp tables; or use the Script Component as a source and specify the Output columns yourself.
Long comment:
Technically speaking, it is the driver/database you are using in SSIS that would decide the behavior when working with temp tables.
Metadata is an important factor when using SSIS's pipeline components. By metadata, I mean the names of the columns, their data types etc that a pipeline component uses. When designing a data flow, someone/something should provide this metadata to the components that require it.
In most cases, SSIS automatically retrieves the metadata. Components that do not connect to an external data source, like Conditional Split, get their metadata from the other components they are connected to. For the pipeline components that do connect to an external data source (like OLE DB Source, OLE DB Destination, Lookup, etc.), SSIS provides a mechanism to get this metadata without human involvement. This mechanism involves the driver connecting to the database and retrieving the metadata of the output. If the driver/database is capable of returning the metadata, then that metadata is used. If the driver/database is incapable, then you get the errors you are seeing. The rest of my comments are based on the assumption that you are using a SQL Server database.
When working with a SQL Server database in SSIS, we typically use the native client drivers provided by Microsoft. When trying to get the metadata, these drivers try to obtain it without actually executing the SQL statement (actual execution can have side effects, and can also take more than a few seconds/minutes/hours; you don't want side effects and long wait times at package design time). So to get the metadata, the driver relies on the metadata of the actual objects used in the SQL command. If the command uses a physical table or view, SQL Server already has the metadata available and can supply it to the driver. If it is a temp table, SQL Server does not have the metadata until it can create the temp table. With the FMTONLY option, you can arrange for the temp tables to be created while avoiding the heavy processing and side effects, and thus retrieve metadata without penalties. The native client drivers shipped with SQL Server 2012 and later rely on newer functionality to retrieve metadata than the earlier drivers: they use the sp_describe_first_result_set proc. So whether you can get metadata or not is determined by the ability of sp_describe_first_result_set.
So while SSIS can automatically get the metadata in many cases (because of the driver/database), in other cases it cannot (again because of the driver/database). In the latter cases, some other process (typically a human) can help the driver infer the metadata or provide the metadata to the component directly.
To help the driver, on SQL Server 2012 and later you can use the WITH RESULT SETS clause to specify the output metadata. When this clause is present, the driver uses it and doesn't try to query the metadata from system objects, and so you avoid the error you would otherwise get. If you are using the drivers that came with SQL Server 2008, you can use FMTONLY. This option is at the driver/database level.
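As a hedged sketch of the WITH RESULT SETS route (the procedure name and column definitions below are hypothetical), the clause hands the driver the result-set shape directly, so it never has to infer it from the temp tables inside the procedure:
EXEC dbo.MyTempTableProc
WITH RESULT SETS
(
    (
        OrderId      INT,
        CustomerName VARCHAR(50),
        OrderDate    DATETIME
    )
);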
Another option could be to use a Script Component as the Source and in the Output columns, you can specify the columns/metadata. SSIS would not try to retrieve metadata from the datasource in this case, but would rely on the definitions you provided in the Output section of the Script Component.
As you can see, both options involve a human (or some other process) specifying the metadata rather than SSIS retrieving it in an automated fashion. I would prefer the first option when working with SQL Server and the second option when working with databases like MySQL.

How to run an append query from a data macro in MS Access?

I have 2 tables: one is a local table named 'Client' and the other, 'ClientSql', is a linked table to a MySQL DB on my web server.
When I insert data into the table Client, I want it to be inserted into ClientSql too. I tried it with a data macro (After Insert), but it shows me an error saying
It's not possible in linked tables.
I have successfully created an append query, and it works, but only if I execute it manually. My question is:
Is it possible to call it from a data macro? If it is possible, can you show me how? If not, can you point me to a solution?
I'm not sure why you should be able to do it manually, but not via a macro. According to this link
http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-examples-tools-with-access-linked-tables.html
you should be able to do it either way.
Another thought is to eliminate the local Access Client table and have the Access program update the MySQL table directly. However, if another program is accessing the Client table at the same time, this could become tricky due to multi-user and locking situations.
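For reference, the append query described in the question would look something like this in Access SQL (the column names are hypothetical; Client is the local table and ClientSql is the linked MySQL table):
INSERT INTO ClientSql (ClientID, ClientName, ClientEmail)
SELECT ClientID, ClientName, ClientEmail
FROM Client;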

How to use local DB table in a pass-through query?

I am currently working on a query in Access 2010 and I am trying to get the query below to work. The connection string between my local DB and the server that I am passing through to is working just fine.
Select column1
, column2
from serverDB.dbo.table1
where column1 in (Select column1 from tbl_Name1)
In this situation, table1 is the table on the server that I am passing through to, while tbl_Name1 is a table that actually lives in my Access DB and that I am trying to use to constrain the data I am pulling from the server.
When I try to run the query, I am getting the error that it doesn't think tbl_Name1 exists.
Any help is appreciated!
I just came across a solution that may help others in a similar situation.
This approach is easy because you can just run one query on your local Access database and get everything you need all at once. However, a lot of filtering/churning-through-results may be done on your own local computer behind the scenes, as opposed to on the remote server, so it may not necessarily be quick.
Steps
Create a query, make it a "Pass Through" query, and set up its "ODBC Connect Str" property to connect to the remote database.
Write the pass through query, something like SELECT RemoteId From RemoteTable and give your pass through query a name, maybe PassThroughQuery
Create a new query, make it a regular "Select" query.
Write your new query, using the pass through query you just created as a table in this new query (seems weird to use a query as a table, but it works) and join that PassThroughQuery "table" to your local table and filter it based on values in the local table, something like SELECT R.RemoteId, L.LocalValue FROM PassThroughQuery R INNER JOIN LocalTable L ON L.LocalId = R.RemoteId where L.LocalValue = 'SomeText'
This approach allows you to mix/join the results of a pass through query and the data in a local Access database table cleanly, albeit potentially slowly if there is a lot of data involved.
I think the issue is that a pass-through query is one that is run on the server. Since one of the tables is located in the local Access file, the server won't find that table.
A possible workaround, if you must stay with the pass-through, is to build the SQL string with the results of the nested query rather than the nested query itself (depending on the number of results, this may or may not be practical).
E.g. instead of Select column1 from tbl_Name1 you use "c1result1", "c1result2", ...
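In other words, the pass-through SQL that actually goes to the server carries the literal values pulled from the local table; in sketch form (the values below are placeholders):
SELECT column1
     , column2
FROM serverDB.dbo.table1
WHERE column1 IN ('c1result1', 'c1result2', 'c1result3');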

Why is the MySQL table name appended to my variable names on an ODBC data pull?

I am pulling data from a MySQL Server using an ODBC connection to the MySQL tables. Unfortunately, the table names are now being appended to the variables, which means I have to run an extra step after the data pull to rename every variable for use. Why is this happening and how can I prevent this?
E.g.,
the MySQL table Sched has the variables MAPID, TEST, TYPE, APPOINT, etc., but when I pull the data they come down as Sched_MAPID, Sched_TEST, Sched_TYPE, Sched_APPOINT, etc.
I am using SAS 9.3 (64-bit) and setting a libref that includes the library name (MySQLLib), the type of connection (ODBC), the DSN, the username and the password. The connection obviously tests fine or I wouldn't be able to see the data, much less pull it. Basically, I set the data using the following:
libname MySQLLib odbc dsn=MySQLSSH user=&MySQLID pwd=&sqlpwd;
Data MySQLSched;
Set MySQLLib.Sched;
run;
Generally, a local SAS table (MySQLSched) is created that I can then manipulate and use as I need, without further touching the original Sched table. I still can, but every variable has the table name appended.
I am not sure if this is a SAS issue or a MySQL issue. I will also ask this same question in the SAS forums. If I receive a pertinent answer there, I will update this post with that answer.
Any ideas?

Oracle ODBC connection not getting all columns

I have a linked table set up in Access to an Oracle 10 enterprise server. It works great on my computer. But I'm trying to get a co-worker set up with the same functionality, and for some reason, she can't see all the columns in the table. It connects, refreshes, says it's linked, but not all the columns are there. Using a different client or SQL on the command line we can see the whole table. Just not in Access. The only difference is that I'm using the Oracle 9i client and she's using Oracle 10g Express. Any ideas?
Look into what HansUp stated about caching. There is one point I'd like to make: ensure your co-worker is selecting from the same schema and the same table. Multiple schemas (users) can have tables with the same name.
Example:
User a has table x with columns x, y, z.
User b has table x with columns x, y.
If you log in as user a and select * from x, the columns you will receive are x, y, z.
If you log in as user b and select * from x, the columns you will receive are x, y.
Either ensure you are logging in as the correct user or explicitly state the schema you want in the select, i.e. select * from a.x;
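As a concrete sketch of that point (using the schemas a and b from the example above):
-- Resolves x against whichever schema you are logged in as
SELECT * FROM x;

-- Explicitly names the owning schema, so both users see the same columns
SELECT * FROM a.x;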
And the winner is... a table with more than 255 columns! For whatever reason, the columns that I needed for my query were available the first time I ran it, and were available to my machine in all subsequent runs. For my co-worker, for whatever reason, 2 of the columns we needed were considered in the 255+ category.
The work-around is to use a pass-through query instead of the linked table in Access. And yes, I agree - 255+ columns in a table/view is HORRID design. Not my fault, just need the data!!
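As a hedged illustration (the object and column names below are made up), the pass-through query simply names the columns you need, so Access's 255-field limit on linked tables never comes into play:
SELECT wide_id, needed_col_1, needed_col_2
FROM owner_schema.very_wide_table
WHERE wide_id > 1000;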