Getting ready to get rid of a MySQL database and switch to Oracle. I am using Oracle SQL Developer. I need to get the records from a MySQL table and populate the corresponding table in Oracle.
I was able to establish a Database connection in SQL Developer to the MySQL database. I checked the connection by doing a simple SELECT * from the table to make sure it returned all the records.
However, the new Oracle table has quite a few changes: the column names in the MySQL table all have a "tn" prefix, i.e. tnStore, tnConfigDate, etc. The Oracle table gets rid of that prefix. That is issue #1.
There will also be several new columns in the new table. That data will be added later from elsewhere. And the columns will not be in the same order as in the MySQL table.
How do I write up a SELECT INTO statement in SQL Developer to populate the Oracle table with the data from the MySQL table, correlating the corresponding columns while leaving the new fields blank for now?
Here is a way to do it programmatically, though I am not sure how to make it a single query:
You would need the Oracle data dictionary view all_tab_columns (MySQL has a similar catalog, information_schema.columns).
Get each column name from the MySQL table by taking off the "tn" prefix, and compare the column name with the Oracle table's columns (possibly using a cursor).
If it matches, add the column pair to a generated INSERT ... SELECT statement, leaving the new fields blank, possibly in a loop.
Once done for all columns, execute that statement.
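Roughly, that generated-SQL idea could look like the PL/SQL sketch below. It assumes the MySQL data has already been copied into an Oracle staging table, here called TN_STAGING, and that the new table is called STORES (both names are invented for illustration); it uses user_tab_columns, the owner-scoped cousin of all_tab_columns. Any column that exists only in the new table is simply left out of the generated statement, so it stays NULL.

-- Sketch only: builds and runs an INSERT ... SELECT by pairing each target
-- column with a staging column named 'TN' || <target column>.
DECLARE
  v_cols   VARCHAR2(4000);
  v_select VARCHAR2(4000);
BEGIN
  FOR c IN (SELECT t.column_name AS tgt_col,
                   s.column_name AS src_col
              FROM user_tab_columns t
              JOIN user_tab_columns s
                ON s.table_name  = 'TN_STAGING'
               AND s.column_name = 'TN' || t.column_name
             WHERE t.table_name = 'STORES')
  LOOP
    v_cols   := v_cols   || c.tgt_col || ',';
    v_select := v_select || c.src_col || ',';
  END LOOP;

  -- strip the trailing commas and execute the generated statement
  EXECUTE IMMEDIATE
    'INSERT INTO STORES (' || RTRIM(v_cols, ',') || ') SELECT '
    || RTRIM(v_select, ',') || ' FROM TN_STAGING';
  COMMIT;
END;
/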
Consider migrating the existing MySQL tables as-is straight to Oracle using SQL Developer. Then move/refactor the data into your new tables with the desired column definitions using INSERT as SELECTs.
Could be considerably faster, plus once the 'raw' data is there, you can do your work over and over again, until you get it just right.
Note you can also simply drag-and-drop a MySQL table from its connection onto an existing Oracle database connection to move the table over (DDL, data, or both).
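Once the raw data is over in Oracle, the INSERT as SELECT for the rename-and-reorder step can be written by hand. A minimal sketch, reusing the column names from the question and assuming the copied MySQL table came over as TN_STAGING and the new table is called STORES (invented names); any new column left out of the list simply defaults to NULL:

INSERT INTO stores (store, configdate)
SELECT tnstore, tnconfigdate
  FROM tn_staging;
COMMIT;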
I am having a weird problem with MySQL. I am developing a visual application in C# that stores data in a database. Previously I used SQL Server for the database, but my client changed his mind and wants MySQL, so I recreated the same schema in MySQL. Now something really odd is happening: my tables are completely empty, but when I execute the views they return data from the old SQL Server tables; when I read directly from the tables they appear empty. The users I connect with are different, and the strangest thing is that it happens even when I execute the view in MySQL Workbench. I have even truncated the tables in MySQL and still the same thing. Does anybody know what may cause this anomaly and how to solve it?
P.S. Workbench version 6.2; SQL Server version 2014.
Regards.
In MySQL, a view is a virtual table based on the result-set of an SQL statement.
It contains rows and columns, just like a real table in your database. The fields in a view are fields from one or more real tables in the database.
When you execute the views, they return data from the old SQL tables because the view still contains the data from when you ran it a while ago. You have forgotten to drop your view each time before executing it. To drop a MySQL view, try this:
DROP VIEW view_name
Views do NOT contain data of any kind -- except for Materialized Views and MySQL does not have those. If views had to be dropped and recreated every time a DML statement was executed on a table, views would be utterly useless.
The only time a view can return old data is when one process changes the contents of a table used in the view and the view is queried by another process before the first process commits the changes. You have not specified how the tables are being changed and how they are being queried, nor have you included the CREATE VIEW statement. You could well be using other tables than you think. This can happen during the initial design of a database if tables are being slapped around like mad.
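One quick sanity check is to look at what the view is actually defined against, since a migrated view may still point somewhere unexpected. A sketch, assuming a view named my_view in schema my_schema (placeholder names):

SHOW CREATE VIEW my_view;

-- or list every view definition in the schema
SELECT table_name, view_definition
  FROM information_schema.views
 WHERE table_schema = 'my_schema';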
I'm working on a version control program, and I would like to implement database structure versioning as well.
Is there a way to get a list of all the queries that have altered the database structure in any way?
For example I added a column to the 'users' table called 'remember_token'. Is there a way I can get the specific query that was executed on the MySQL server in order to add that column?
You may want to enable the MySQL general query log and then filter on ALTER statements, or anything else you need.
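For example (a sketch; the general log records every statement the server receives, so it adds overhead and is usually only switched on temporarily), you can log to a table and pull the DDL out afterwards:

-- send the general query log to the mysql.general_log table and turn it on
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- later, pull out the statements that changed the schema
SELECT event_time, argument
  FROM mysql.general_log
 WHERE argument LIKE 'ALTER %'
    OR argument LIKE 'CREATE %'
    OR argument LIKE 'DROP %';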
I am currently working on a query in Access 2010 and I am trying to get the query below to work. The connection string between my local DB and the server I am passing through to is working just fine.
Select column1
, column2
from serverDB.dbo.table1
where column1 in (Select column1 from tbl_Name1)
In this situation table1 is the table on the server that I am passing through to get to, but the tbl_Name1 is the table that is actually in my Access DB that I am trying to use to create constraints on the data that I am pulling from the server.
When I try to run the query, I get an error saying that tbl_Name1 doesn't exist.
Any help is appreciated!
I just came across a solution that may help others in a similar situation.
This approach is easy because you can just run one query on your local Access database and get everything you need all at once. However, a lot of filtering/churning-through-results may be done on your own local computer behind the scenes, as opposed to on the remote server, so it may not necessarily be quick.
Steps
Create a query, make it a "Pass Through" query, and set up its "ODBC Connect Str" property to connect to the remote database.
Write the pass through query, something like SELECT RemoteId From RemoteTable and give your pass through query a name, maybe PassThroughQuery
Create a new query, make it a regular "Select" query.
Write your new query, using the pass-through query you just created as a table in this new query (it seems weird to use a query as a table, but it works). Join that PassThroughQuery "table" to your local table and filter it based on values in the local table, something like SELECT R.RemoteId, L.LocalValue FROM PassThroughQuery R INNER JOIN LocalTable L ON L.LocalId = R.RemoteId WHERE L.LocalValue = 'SomeText' (both queries are spelled out below).
This approach allows you to mix/join the results of a pass through query and the data in a local Access database table cleanly, albeit potentially slowly if there is a lot of data involved.
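Spelled out, the two saved queries might look like this (PassThroughQuery, RemoteTable, LocalTable and the column names are the placeholder names used above; the first query is written in the remote server's dialect, the second in Access SQL):

Pass-through query, saved as PassThroughQuery (executes on the remote server):
SELECT RemoteId FROM RemoteTable

Local Access select query, joining the saved pass-through query to the local table:
SELECT R.RemoteId, L.LocalValue
FROM PassThroughQuery AS R
INNER JOIN LocalTable AS L ON L.LocalId = R.RemoteId
WHERE L.LocalValue = 'SomeText';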
I think the issue is that a pass-through query is one that is run on the server. Since one of the tables is located in the local Access file, the server won't find that table.
A possible workaround, if you must stay with the pass-through, is to build an SQL string containing the results of the nested query rather than the query string itself (depending on the number of results this may or may not be practical).
E.g. instead of Select column1 from tbl_Name1 you use "c1result1","c1result2",....
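In other words, the host code (VBA, say) reads the local values first and splices them into the pass-through SQL as literals; with the placeholder values above, the generated statement would look something like:

SELECT column1, column2
FROM serverDB.dbo.table1
WHERE column1 IN ('c1result1', 'c1result2');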
I'm getting data from an MSSQL DB ("A") and inserting it into a MySQL DB ("B") based on the date created in the MSSQL DB. I'm doing it with simple logic, but there has got to be a faster and more efficient way. Below are the steps involved:
1. Create one connection for the MSSQL DB and one connection for the MySQL DB.
2. Grab all of the data from A that meets the date range criterion provided.
3. Check to see which of the data obtained are not present in B.
4. Insert these new data into B.
As you can imagine, step 3 is basically a loop, which can easily max out the time limit on the server, and I feel like there must be a way of doing this much faster, ideally as part of the first query. Can anyone point me in the right direction? Can you make "one" connection to both of the DBs and do something like the below?
SELECT * FROM A.some_table_in_A.some_column WHERE
"it doesn't exist in" B.some_table_in_B.some_column
A linked server might suit this
A linked server allows for access to distributed, heterogeneous queries against OLE DB data sources. After a linked server is created, distributed queries can be run against this server, and queries can join tables from more than one data source. If the linked server is defined as an instance of SQL Server, remote stored procedures can be executed.
Check out this HOWTO as well
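As a rough sketch (the server name, ODBC driver version, credentials, and table/column names below are all placeholders, not something from your setup), you would register the MySQL database as a linked server on the MSSQL side and then push only the missing rows in one statement:

-- T-SQL, run on the MSSQL server: register the MySQL database via its ODBC driver
EXEC master.dbo.sp_addlinkedserver
     @server     = 'MYSQL_B',
     @srvproduct = 'MySQL',
     @provider   = 'MSDASQL',
     @provstr    = 'DRIVER={MySQL ODBC 8.0 Unicode Driver};SERVER=mysql-host;DATABASE=b_db;USER=app;PASSWORD=secret;';

-- insert only the rows in the date range that B does not already have
DECLARE @start datetime = '2015-01-01', @end datetime = '2015-02-01';

INSERT INTO OPENQUERY(MYSQL_B, 'SELECT id, date_created, payload FROM some_table_in_B')
SELECT a.id, a.date_created, a.payload
FROM dbo.some_table_in_A AS a
WHERE a.date_created BETWEEN @start AND @end
  AND NOT EXISTS (SELECT 1
                  FROM OPENQUERY(MYSQL_B, 'SELECT id FROM some_table_in_B') AS b
                  WHERE b.id = a.id);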
If I understand your question right, you're just trying to move things from the MSSQL DB into the MySQL DB. I'm also assuming there is some sort of filter criterion you're using for the migration. If this is correct, you might try a stored procedure in MSSQL that queries the MySQL database with a distributed query. You can then use that stored procedure to do the loops or checks on the database side, and the front-end server will only need to make one connection.
If the MySQL database has a primary key defined, you can at least skip step 3 ("Check to see which of the data obtained are not present in B"). Use INSERT IGNORE INTO... and it will attempt to insert all the records, silently skipping over ones where a record with the primary key already exists.
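A minimal sketch on the MySQL side (table and column names invented for illustration): a row whose primary key already exists in B is silently skipped instead of raising a duplicate-key error.

INSERT IGNORE INTO some_table_in_B (id, date_created, payload)
VALUES (42, '2015-01-01', 'example row');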
So here is my situation: I have a vendor-supplied DB we cannot modify and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported.
What I originally started with was selecting the list of ids that had already been imported from the custom DB, and then querying the vendor DB using that list of ids in a .Where(x => !idList.Contains(x.Id)) clause on the remote query.
This worked up until we broke 2100 records imported into the custom DB, as 2100 is the limit on the number of parameters that can be passed to SQL Server. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' ADO.Net reported, my solution was to remove the first 2000 ids in the remote query and then remove the remaining records in the local query.
Having to pull back a large number of irrelevant records, just to exclude them, so I can get the correct 250 records seems very inelegant. Is there a better way to do this, short of doing a cross db stored procedure?
Thanks in advance.
This might not be the best answer, depending on how many records you're dealing with, but you could force the SQL to execute and just deal with the results as in-memory objects. Calling the ToList() method will execute the SQL and convert the results to an IEnumerable.
What I might suggest is querying the vendor database first, ordering the results by some kind of criterion (perhaps a date field, oldest to most recent).
You could do a Skip().Take() to "skim" the results, then take each batch and insert it into the custom DB where the ID doesn't already exist. That way you avoid the problem you have now.
If you have db-create access to the SQL Server that the vendor's db is running on (or if your custom db is on the same server), you could create a "has been imported" table in a different database on that same server, and then write a stored proc that does a cross-database join of that table against the vendor db, e.g.:
select top 250 *
  from vendordb.dbo.to_be_imported v          -- assumes the default dbo schema in both databases
 where not exists
       (select 1 from customdb.dbo.has_been_imported i
         where i.idWasImported = v.idToBeImported)
 order by whatever;
You might even be able to do this in Linq 2 SQL -- I've never tried adding objects from different databases into a single DataContext...