Copying tables from Access DB to SQL Server - ms-access

Good morning.
We need to copy tables from Access to SQL Server. The problem is that the table names in the source vary from day to day.
I've followed the steps shown in the solution to this post: How do I programmatically get the list of MS Access tables within an SSIS package?
The problem is this:
I changed the variable names and nothing else, since the problem stated in that post is quite similar to mine.
I followed the steps and changed Country to a table name from my Access DB, say CITY. The problem is that when the process loops through the tables in Access, the data copied to the tables in SQL is always the same: the data stored in CITY. It never seems to switch tables; it always uses the table provided to the OLE DB Source (as shown in screenshot #14), i.e. the table name held in the variable 'SelectQuery'.
So I have all the tables created in SQL, but they are all filled with the same information, coming from the one table whose name is stored in the variable.
Thanks, any advice?

From reading the linked solution, it looks either incorrect or missing a step. Right now, screenshot #14 specifies 'SQL Command from Variable' with 'SelectQuery' as the variable source, but I can't see where SelectQuery is ever updated.
Solution 1:
Set the OLE DB Source to "Table name or view name variable", and point it at the TableName variable (which the ForEach container updates on each iteration).
Solution 2: Change SelectQuery to be an expression-driven variable, i.e. "select * from " + @[User::TableName]
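For Solution 2, the expression on the SelectQuery variable (with EvaluateAsExpression set to True) would look something like the line below; the square brackets are a precaution in case a table name contains spaces:

```
"select * from [" + @[User::TableName] + "]"
```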

You did realize that the table name in that example must match the sheet name in Excel, right? You probably have to do something similar with Access.
Also, post more info. It seems that the value of your table-name variable is not being updated. Tell us what it should be.

Related

Insert data into a list of databases in MySQL

I have basic knowledge of MS SQL, which actually works against me here, as the syntax differs from MySQL, in which I need the code written.
I have X databases named "project_%", one database per project. I need a script or stored procedure (it will be used rarely, by support) that takes some user info from the master DB and adds a list of users (or at least a single user) into all of the "project_%" databases (later, a subset might be required).
My idea was to fill a variable/temp table with the list of databases and run a loop to set the default schema and insert the required data. This is where I'm stuck: I have no idea how to set the default schema from a variable, or how to substitute a variable into variable_db.table.
What I've found so far always differed from what I needed, and I couldn't apply it to my code.
Thanks,
Z
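A sketch of the variable-into-variable_db.table part, using MySQL prepared statements so the database name can come from a variable (the table and column names here are hypothetical):

```sql
-- Insert a user from the master DB into one project database;
-- in practice @db would come from a loop over the list of databases.
SET @db := 'project_alpha';
SET @sql := CONCAT('INSERT INTO `', @db, '`.`users` (name, email) ',
                   'SELECT name, email FROM master_db.users WHERE id = 1');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```

Because identifiers (database and table names) cannot be parameterized directly, CONCAT plus PREPARE/EXECUTE is the usual pattern for this.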

Data Cleanse ENTIRE Access Table of Specific Value (SQL Update Query Issues)

I've been searching for a quick way to do this after my first few thoughts have failed me, but I haven't found anything.
My Issue
I'm importing raw client data into an Access database, where the flat file they provide is parsed and converted into a standardized format for our organization. I do this for all of our clients, but this particular client's software gives us a file that puts "(NULL)" in every field that should be NULL. As a result, I have a ton of strings rather than null fields!
My goal is to do a data cleanse of the entire TABLE, rather than perform the cleanse at the FIELD level (as I do in my temporary solution below).
Data Cleanse
Temporary Solution:
I can't add those strings to our data warehouse, so for now I just have a query with an IIF statement that replaces "(NULL)" with "" for each field (which took a while to set up, since the client file has roughly 96 fields). This works. However, we work with hundreds of clients, so I'd like a scalable solution that doesn't require many changes if another client has a similar file; not to mention that if this client changes something in their file, I might have to redo my field-specific statements.
Long-term Solution:
My first thought was an UPDATE query. I was hoping I could do something like:
UPDATE [ImportedRaw_T]
SET [ImportedRaw_T].* = ""
WHERE [ImportedRaw_T].* = "(NULL)";
This would be easily scalable, since for future clients I'd only need to change the table name and replace "(NULL)" with their particular default. Unfortunately, you can't use a * wildcard like this in an update query.
Can anyone think of a work-around to the wildcard issue for the update query, or a better solution for cleansing an entire table rather than doing the cleanse at the field level?
SIDE NOTES
This conversion is currently 100% automated (Access is called via a watch-folder batch), so anything requiring manual data manipulation / human intervention is out.
I've tried using a batch script to cleanse the data in the .txt file before importing it into Access; however, this broke the fixed-width format of the .txt, which caused even larger issues with the automatic import of the file into Access. So I'd prefer to do this in Access if possible.
Any thoughts and suggestions are greatly appreciated. Thanks!
Unfortunately, it's impossible to implement this in SQL using wildcards instead of column names; no such syntax exists.
I would suggest a VBA solution, where you cycle through all the table's fields and, if a field's data type is string, generate and execute a SQL UPDATE command for that field.
Also, use Null instead of "" if you really need Nulls in the fields rather than empty strings; they can behave differently in calculations.
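A sketch of the statement such a VBA loop would build and run for each text-type field, in Access SQL (the field name here is hypothetical; the loop would substitute each field's actual name):

```sql
UPDATE [ImportedRaw_T]
SET [Field1] = Null
WHERE [Field1] = "(NULL)";
```

Only the field name changes per iteration, so the same template works for any client table regardless of how many fields it has.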

SSIS - check if row exists before importing from source to destination

Can someone please advise:
I have a SOURCE (an old SQL Server) and we need to move data to a new DESTINATION (a new server), so we're moving data between different instances.
I'm struggling with how to write the package so that it looks up the destination first and, if the row exists, does nothing; otherwise it INSERTs.
Regards
Here are the steps:
Take an OLE DB Source and connect it to a Lookup transformation.
Select the column that can be looked up; there should be some kind of ID for you to do this. Also select all the columns that need to be passed through (SSIS provides check boxes for this).
Connect the Lookup's no-match output to an OLE DB Destination, do the mapping, and you are done.
If you want to redirect all the matching rows somewhere, say to a flat file, you can do that too...
I would use a Lookup transformation and redirect the match output to something else, like an OLE DB Command. In there you can write an IF EXISTS statement, or put it in a stored procedure; that way it will either insert or update data, and won't insert duplicates.
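As a sketch, the statement inside the OLE DB Command (or wrapped in a stored procedure) could look like this; the table and column names are hypothetical, and the ? markers are the OLE DB Command's positional input parameters, which you map to the data flow's columns:

```sql
IF EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerID = ?)
    UPDATE dbo.Customer SET Name = ? WHERE CustomerID = ?
ELSE
    INSERT INTO dbo.Customer (CustomerID, Name) VALUES (?, ?);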

microsoft access automated copy from one table to another

I am new to access and have mostly worked with SQL Server. I am trying to accomplish a task for our users and am not sure how to approach it.
We have a situation where the users need to manipulate some data in various Access tables, then put the final results into one of several 'linked tables' that are defined in SQL Server and linked to Access. The SQL Server tables will be defined very generically, with column names like 'col1', 'col2', etc., to allow different types of data to be uploaded.
What I would like to do is have some kind of macro or module that does this:
1) Lets the user select the source (access table)
2) Lets the user select destination (linked sql server) table
3) Lets the user map the columns he would like to copy from the source to the destination table. (If this is too difficult, then something that just copies the first X columns would work.)
4) Deletes all pre-existing data from the target table
5) Copies all data from the source table to the target table.
Could someone give me an idea as to what would be the best approach or maybe even an example of some code that does something similar? Thanks in advance. We are using Microsoft Access 2010.
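For steps 4 and 5, once a source/destination pair and a column mapping have been chosen, the copy itself reduces to two Access SQL statements run against the linked table (all names here are hypothetical); the selection screens in steps 1-3 would be VBA forms built around them, executing each statement with DoCmd.RunSQL or CurrentDb.Execute:

```sql
DELETE FROM [LinkedSqlTable];

INSERT INTO [LinkedSqlTable] (col1, col2, col3)
SELECT [SourceColA], [SourceColB], [SourceColC]
FROM [LocalAccessTable];
```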

SSIS OLE DB conditional "insert"

I have no idea whether this can be done or not, but basically, I have the following data flow:
Extracts the data from an XML file (works fine)
Simply splits the records based on an enclosed condition (works fine)
Had to add a derived column object due to some character set issues (might be better methods, but it works)
Now "Step 4" is where I run into a scenario where I'd like to insert only the values that have a corresponding match in my database. For instance, the XML has about 6,000 records, and of those maybe 10 need to be matched back and inserted, rather than inserting all 6,000 and doing the comparison after the fact (which I could also do, but I was hoping there'd be another method). I was thinking I might be able to perform a SQL insert command within the OLE DB Destination object where the ID value in the file matches, but that's what I'm not 100% clear on, or whether it's even possible for that matter. Should I simply go the temp-table route and scrub the data after the fact, or can I do this directly in the destination piece? Any suggestions would be greatly appreciated.
EDIT
Thanks to the last comment from billinkc, I managed to get a bit closer: I can identify the matches and use that result set, but somehow the data flow seems to be running twice, which is strange. I took the Lookup object out to see whether it was the cause, and apparently it was. Any reason why the entire flow would run twice with the addition of the Lookup? I should have a total of 8 matches, which I confirmed with the data viewer output, but then it seems to run a second time for the same file.
Is there a reason you can't use a Lookup transformation to find existing records? Configure it so that it routes non-match records to the no-match output, and then connect only the match-found connector to the "Navigator Staging Manager Funds".
I believe that answers what you've asked, but I wonder if you're expressing the right desire. My assumption is that the lookup would go against the existing destination, so the lookup returns the id 10 for a row. All of the out-of-the-box destinations in SSIS only perform inserts, so a row that found a match would now get doubled. Since you are looking for existing rows, that usually implies you'd want to perform an update on the existing row. If that's the case, there is a specially designed transformation, the OLE DB Command; it is the component that allows updates. There is a performance problem with that component: it issues a single update statement per row flowing through it. For 10 rows, I think it'd be fine. Otherwise, the pattern you'd use is to write all the new rows (inserts) into your destination table, write all of your changed rows (updates) into a second staging-type table, and then, after the data flow is complete, use an Execute SQL Task to perform a set-based update statement.
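A sketch of that final set-based update in the Execute SQL Task, assuming a hypothetical destination table dbo.Funds, a staging table dbo.Funds_Staging, and made-up column names:

```sql
UPDATE d
SET    d.FundName = s.FundName,
       d.Amount   = s.Amount
FROM   dbo.Funds AS d
JOIN   dbo.Funds_Staging AS s
       ON s.FundID = d.FundID;
```

One statement updates every staged row at once, which is what makes this pattern so much faster than the row-by-row OLE DB Command.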
There are third party options that handle combined upserts. I know Pragmatic Works has an option and there are probably others on the tasks and components site.