In SQL Server 2008 I have recreated a database structure similar to the one in Access. I need to import a couple of related tables, but I am worried that the foreign keys won't match up with the autonumber fields from the related tables.
You have some options here:
If you export the table to SQL Server, all the data will make it through properly and then you can set your PKs and FKs
Create the table structure with an IDENTITY column and use SET IDENTITY_INSERT to put the values you want into the identity column (see the sketch at the end of this answer).
Without knowing more details about your table structures and locations, I can only tell you generic things like
You will have to match the keys up manually so that the PK-FK references remain the same.
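For the second option, a minimal sketch of SET IDENTITY_INSERT; the table and column names here are invented, so adjust them to your schema:

CREATE TABLE dbo.Customers (
    CustomerID int IDENTITY(1,1) PRIMARY KEY,   -- will hold the old Access autonumber values
    CustomerName nvarchar(100) NOT NULL
);

SET IDENTITY_INSERT dbo.Customers ON;

INSERT INTO dbo.Customers (CustomerID, CustomerName)
SELECT ID, CustomerName                -- ID = the Access autonumber column
FROM dbo.Access_Customers_Import;      -- hypothetical staging copy of the Access table

SET IDENTITY_INSERT dbo.Customers OFF;

Because the original autonumber values are preserved, the foreign key values in the related tables can be imported unchanged.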
If you need to match the old Access IDs to the new autogenerated IDs in an existing table, this is something you needed to do at the time you moved the data from the original table, unless you happened to store the Access IDs. Usually I build some kind of cross-matching table with the old ID and the new ID as part of the import process. You then join this table to the related tables to update their IDs. If you didn't do this and the IDs are different, you will have to find a way to match the rows back to the original Access table before you can import the related tables. I hope your table has a natural key in that case.
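A rough sketch of that cross-matching approach, with invented table and column names; adapt it to your own schema:

-- map old Access IDs to the new autogenerated IDs,
-- populated as part of the import of the parent table
CREATE TABLE dbo.CustomerKeyMap (
    OldAccessID   int NOT NULL,
    NewCustomerID int NOT NULL
);

-- later, use the map to fix the foreign keys in a related table
UPDATE o
SET o.CustomerID = m.NewCustomerID
FROM dbo.Orders AS o
JOIN dbo.CustomerKeyMap AS m
    ON m.OldAccessID = o.CustomerID;   -- Orders still holds the old Access IDs at this point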
If the tables are the same you could use the rather verbosely named “Microsoft SQL Server Migration Assistant 2008 for Access”. This will allow you to bring over the data whilst keeping the same keys.
In Microsoft Access, I have a table linked to an ASE server.
On the server side, the table has no primary key or identity columns.
It has an insert trigger that validates new entries; when an entry is not validated, the trigger deletes it from the table and writes to "table"_ERR to let the users know what error was produced.
When the table is linked in Access, a composite key is created using 10 columns.
I have this same setup in 10 different tables (all with triggers, all linked in Access).
In this particular table, when trying to insert/append records through Access, I always get the error message:
Single-row update/delete affected more than one row of a linked table. Unique index contains duplicate values.
This error occurs when both table and table_ERR are empty and I'm only trying to insert 1 record.
If I disable the trigger, I have no problem inserting records through Access.
I have similar triggers in other tables that are working correctly.
What can be causing this issue and does anyone know how to solve this?
I have read that MS Access can mess up @@identity; even so, none of the solutions presented online seem to work.
Links: https://groups.google.com/forum/#!msg/microsoft.public.sqlserver.programming/McHdRpPKMhs/SlyObU8w7JMJ
Stop Access from using wrong identity when appending to linked table on SQL server
Thanks in advance.
EDIT: If I try to insert the records directly from management software (like Aqua Data Studio), there are no errors.
Without knowing more specifics about your data itself, it is difficult to say why this might be happening.
However, it sounds like in this specific instance for this specific linked table, your 10 columns are not unique enough to prevent non-distinct rows from being selected.
Suggested fixes:
Add a primary key. Honestly, probably the best and easiest choice (see the sketch after this list).
If for some reason you cannot add a new column to (or alter) your table, you may be able to re-link your table and re-choose your 10 columns so that they are more unique.
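For the first suggestion, a minimal sketch of adding an identity primary key on the ASE side; the table and column names are invented and the exact syntax can vary between ASE versions, so check the documentation for yours:

-- hypothetical table name; adjust types and syntax to your ASE version
alter table my_table add row_id numeric(10,0) identity
alter table my_table add constraint pk_my_table primary key (row_id)

After the change, re-link the table in Access so it picks up the new column as the key.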
Beyond that, I think we would need more information.
Just out of curiosity, what is the reason for having no key?
In a database, one table (say, tableA) may be associated with multiple other tables, so if I change the structure of tableA, e.g. delete an associated column, then all the other associated tables need to be changed as well.
Is there a tool that can show which other tables need to be changed if I modify tableA?
You can use, for example, MySQL Workbench -> Reverse Engineer to see how tables are connected to each other; that assumes that the database has proper primary and foreign keys.
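If you just want a flat list of the dependent tables rather than a diagram, you can also query MySQL's information_schema directly (again assuming the foreign keys are actually declared); the schema name and tableA here are placeholders:

SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_SCHEMA = 'my_database'
  AND REFERENCED_TABLE_NAME   = 'tableA';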
I have a large database schema with over 170 tables, many of which depend upon others. For example, customers and employees both have a person_id which refers to a record in the people table.
I want to be able to generate a baseline.sql file which creates all these tables, with default values populated. I simply exported an existing database with everything properly formatted, but because the resulting baseline.sql file generates the tables in alphabetical order, I end up with issues like customers and employees pointing to people who don't exist yet (because C<E<P, alphabetically).
Is there a way to export the database while considering the necessary table creation and population order?
I know foreign keys can be recursive or otherwise cause problems, but given that my dataset has no instances of these and how common this problem must be, I feel like there might be something easy out there before I reinvent the wheel.
Add this to the beginning of your SQL file:
SET foreign_key_checks = 0;
And add this to the end:
SET foreign_key_checks = 1;
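With the checks disabled, the dump can create the tables in any (e.g. alphabetical) order; a minimal illustration using the customers/people example from the question:

SET foreign_key_checks = 0;

-- customers can reference people before the people table exists,
-- because the foreign key check is deferred
CREATE TABLE customers (
    id INT PRIMARY KEY,
    person_id INT,
    FOREIGN KEY (person_id) REFERENCES people (id)
) ENGINE=InnoDB;

CREATE TABLE people (
    id INT PRIMARY KEY
) ENGINE=InnoDB;

SET foreign_key_checks = 1;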
Ok, so I have a database in my testing environment called 'Food'. In this database, there is a table called 'recipe', with a column called 'source'.
This same database exists in my local environment. However, I just received an updated database (in my local environment) where all the column values (for 'source') have changed.
Is there any way I can migrate the 'source' column from my local to my test environment, without changing the values for any other column? There are 1186 rows in the 'Food' database's 'recipe' table in my test environment that need to have ONLY the 'source' column updated.
You need some way to uniquely identify your Recipes. If both tables have a surrogate key that remained constant, use that. Otherwise figure out some way to match up the new data with your test data: you might already have a unique index in mind or you might need to decide on a combination of fields that uniquely identify your Recipes.
On a side note, why can't you just overwrite all the columns? It is just test data, right?
If only a column has changed and you have IDs (or keys) on your rows, you could follow these steps (sketched below):
create an intermediate table locally
insert keys and new source values there (either those which have changed or all)
use mysqldump to selectively export the table from the local database
copy the dumped table to the remote database server
import it there
join it with the production table in an update statement to replace the values
drop the intermediate table on the server
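A rough sketch of those steps, assuming the recipes share an id column across environments and using invented database names (food_local, food_test):

-- 1-2. locally: build the intermediate table with the keys and the new source values
CREATE TABLE recipe_source_fix AS
SELECT id, source FROM recipe;

-- 3-5. dump just that table and load it on the test server (shell):
--      mysqldump food_local recipe_source_fix > recipe_source_fix.sql
--      mysql food_test < recipe_source_fix.sql

-- 6. on the test server: update only the source column via a join
UPDATE recipe r
JOIN recipe_source_fix f ON f.id = r.id
SET r.source = f.source;

-- 7. clean up
DROP TABLE recipe_source_fix;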
I have finally reached the data migration part of my project and am now trying to move data from MySQL to SQL Server.
SQL Server has new schema (mapping is not always one to one).
I am trying to use SSIS for the conversion, which I started learning this morning.
We have customer and customer location tables in MySQL and equivalent tables in SQL Server. In SQL Server all my tables now have a surrogate key column (GUID), and I am creating it in a Script Component.
Also note that I do have a primary key in the current MySQL tables.
What I am looking for is how I can add child records to the customer location table with the newly created GUID as the parent key.
I see that SSIS has a Foreach Loop container; is this of any use here?
If not, another possibility I can think of is to create two Data Flow Tasks and, [somehow] just before the master data is sent to the Destination Component [Table] in the primary data flow task, add a variable with the newly created GUID and another with the old PrimaryID, which would then be used to build the source for the data flow task for the child records.
Maybe, to simplify, this could also be done once the data flow task for the master is complete: the data flow task for the children then reads this master data and inserts the child records from MySQL into the SQL Server table. This would, though, mean that I have to load all my parent table records back into memory.
I know this is all very confusing, and that is mainly because I am very confused :-(, so bear with me, and if you want more information let me know.
I have been through many links that I found through a Google search, but none of them really explains (or I was not able to understand) how the process is carried out.
Please advise
regards,
Mar
**Edit 1**
After further searching and refining keywords, I found this link on SO and am going through it to see if it can be used in my scenario:
How to load parent child data found in EDI 823 lockbox file using SSIS?
OK, here is what I would do. Put the MySQL data into staging tables in SQL Server that have identity columns set up and an extra column for the eventual GUID, which will start out as null. Now your records have a primary key.
Next comes the sneaky trick. Pick a required field (we use last_name) and, instead of the real data, insert the value from the ID field in the staging table. Now you have a record that has both the GUID and the ID in it. Update the GUID field in the staging table by joining to it on the ID and the required field you picked out. Then update the last_name field with the real data.
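A rough T-SQL sketch of that trick; all table and column names here are invented, so map them onto your own customer tables:

-- the staging table already has an identity (StagingID) and a NewGUID column that starts out null;
-- the real table generates its GUID through a default

-- 1. insert into the real table, smuggling the staging id in through last_name
INSERT INTO dbo.Customer (last_name, first_name)
SELECT CAST(s.StagingID AS varchar(20)), s.first_name
FROM dbo.Customer_Staging AS s;

-- 2. capture the generated GUIDs back into the staging table
UPDATE s
SET s.NewGUID = c.CustomerGUID
FROM dbo.Customer_Staging AS s
JOIN dbo.Customer AS c
    ON c.last_name = CAST(s.StagingID AS varchar(20));

-- 3. put the real data back into last_name
UPDATE c
SET c.last_name = s.last_name
FROM dbo.Customer AS c
JOIN dbo.Customer_Staging AS s
    ON c.CustomerGUID = s.NewGUID;

-- the child records can now be inserted with the right parent key
INSERT INTO dbo.CustomerLocation (CustomerGUID, address)
SELECT s.NewGUID, l.address
FROM dbo.CustomerLocation_Staging AS l
JOIN dbo.Customer_Staging AS s
    ON s.OldCustomerID = l.OldCustomerID;   -- OldCustomerID = the original MySQL primary key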
To avoid the sneaky trick, and if this is only a one-time upload, add a column to your tables that contains the staging table ID. Again, you can use this to get the GUID for inserting into the related tables. Then, when you are done, drop the extra column.
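Sketching that variant as well (same invented names):

-- temporarily carry the staging id on the real table
ALTER TABLE dbo.Customer ADD StagingID int NULL;

INSERT INTO dbo.Customer (StagingID, last_name, first_name)
SELECT s.StagingID, s.last_name, s.first_name
FROM dbo.Customer_Staging AS s;

-- look up the generated GUIDs by StagingID for the child inserts
UPDATE s
SET s.NewGUID = c.CustomerGUID
FROM dbo.Customer_Staging AS s
JOIN dbo.Customer AS c
    ON c.StagingID = s.StagingID;

-- once the related tables are loaded, drop the helper column
ALTER TABLE dbo.Customer DROP COLUMN StagingID;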
You are aware that there are performance issues involved with using GUIDs? Make sure not to make them the clustered index (as the PK they will be clustered by default unless you specify differently) and use newsequentialid() to populate them. Why are you using GUIDs? If an identity would work, it is usually better to use it.
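For example, a table definition along those lines might look like this sketch (names invented):

CREATE TABLE dbo.Customer (
    CustomerGUID uniqueidentifier NOT NULL
        CONSTRAINT DF_Customer_GUID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Customer PRIMARY KEY NONCLUSTERED,
    CustomerNumber int IDENTITY(1,1) NOT NULL,
    last_name varchar(100) NOT NULL
);

-- cluster on something narrow and ever-increasing instead of the GUID
CREATE UNIQUE CLUSTERED INDEX IX_Customer_CustomerNumber
    ON dbo.Customer (CustomerNumber);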