I recently downloaded clean copies of WideWorldImporters and WideWorldImportersDW from GitHub. I am following the instructions here: https://learn.microsoft.com/en-us/sql/samples/wide-world-importers-generate-data?view=sql-server-ver16. I have expanded WWI and reseeded the DW. When I run DailyETL.ispac, all is well until EXEC Integration.MigrateStagedMovementData; fails with the following error: "The MERGE statement conflicted with the FOREIGN KEY constraint "FK_Fact_Movement_Date_Key_Dimension_Date"" at line 48. After much Googling I cannot seem to make any progress on finding a way forward. The 'Tech Support' link from within SSMS sends me somewhere pretty useless. I can't believe I'm the first person to have this happen. Can anyone point me in the right direction please?
Just in case anyone else is ever interested: the 'expand' function cited in the link above generates additional data within the WWI OLTP database. The stored procedure EXEC Integration.PopulateDateDimensionForYear @YearNumber; (step 3 within the corresponding DailyETL.ispac) updates Dimension.Date in the WWI OLAP database for the current year only. The GitHub sample database is up to date as at end 2016, so years 2017 through 2021 are missing from Dimension.Date, hence the FK lookup failure. Manually run the stored procedure via SSMS for each year missing in Dimension.Date and re-run DailyETL.ispac.
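For anyone hitting the same wall, the missing years can be backfilled in one batch. This is a sketch run against WideWorldImportersDW, assuming the stock Integration.PopulateDateDimensionForYear procedure shipped with the sample; adjust the year range to whatever is actually missing from your copy of Dimension.Date:

```sql
-- Backfill Dimension.Date for the years missing from the GitHub sample.
-- Run against WideWorldImportersDW before re-running DailyETL.ispac.
DECLARE @YearNumber int = 2017;

WHILE @YearNumber <= 2021
BEGIN
    EXEC Integration.PopulateDateDimensionForYear @YearNumber;
    SET @YearNumber += 1;
END;
```

After this completes, re-running the DailyETL package should get past the Integration.MigrateStagedMovementData step, since the date keys it merges now exist in the dimension.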
Related
I've inherited an MS Access database in which I need to batch-update some data. So I created a new query and, as a first test, tried to get a filtered record list - without success. Access flatly refuses to compile code that contains the LEFT function.
This does compile:
SELECT ColPath FROM MyTable;
This does not compile:
SELECT LEFT([ColPath], 3) FROM MyTable;
Even a simple
SELECT LEFT('Hello', 2);
doesn't work.
I've googled a lot now, and found solutions that recommend checking the references in the Tools/References dialog in the VBA view - there are no missing references in my case. A second suggestion was to check VBA modules for duplicate OPTION COMPARE DATABASE statements - none in my case.
I then created a brand new database and tried - surprisingly, everything works fine! I now compared the references of the new database to the old one: They are the same.
I'd be happy about any ideas on this...
Sounds like you messed up your references.
In the VBA editor, go to Tools, then References.
The top 2 should always be Visual Basic For Applications, then Microsoft Access ##.# Object Library, in that order (note the priority buttons to change order). Anything else will cause trouble.
Even though you have no missing references, incorrect ones can still cause this issue.
Second to that, I'd do the general troubleshooting steps: decompile (Win+R, run MSACCESS.EXE /decompile, open the database, hit Debug -> Compile) and a compact and repair. That will cause your entire database to recompile, and if your VBA code contains compile errors, that will affect any queries calling any function.
I have a database which is split into an Oracle back end and Access front-end applications. I tried to migrate the back end over to SQL Server, but when I did this using SSMA I lost functionality in a lot of the Access applications. I don't know where to start to resolve this; I'm thinking maybe there is a mismatch in the syntax? Is anyone able to point me in the right direction to solve this?
Edit:
I identified that the main error came from NULL values when trying to insert delegate names into a form for courses that are running.
SSMA has thrown up an 'Unparsed SQL' error on the below code:
CREATE OR REPLACE TRIGGER "ISTRAINING"."INSERT_COURSE_DELEGATES"
BEFORE INSERT ON "COURSE_DELEGATES"
FOR EACH ROW
declare
  row_locked exception;
  pragma exception_init (row_locked, -54);
begin
  begin
    select next
      into :new.COURSE_DELE_ID
      from ISTRAINING.sequence
     where tname = 'COURSE_DELEGATES' and tcolname = 'COURSE_DELE_ID'
       for update of next nowait;
  exception
    when row_locked then
      raise_application_error(-20002, 'Database temporarily locked');
  end;
  update ISTRAINING.sequence
     set next = next + 1
   where tname = 'COURSE_DELEGATES' and tcolname = 'COURSE_DELE_ID';
end;
Does this help? I'm sorry I'm just a little lost and not sure what the right question is.
It's not clear why issues would crop up. You make no mention of what does not work (you need to improve your question and add some examples of what does not work).
"My car is broken" does not help at all here.
Given that you had a working application, all of the work and changes made to get Access working with Oracle should very much apply to SQL Server.
In other words, the code changes required are close to identical in both cases.
The only area that would require changes is if the application uses pass-through queries. Such code and queries are written in 100% Oracle SQL syntax, as opposed to SQL Server syntax.
For linked tables, and reports based on linked tables, ZERO changes are required.
However, if ADO recordsets are being used, then that’s likely where changes are required.
So first up would be to identify if pass-through queries are used. I would simply check the syntax (simply run them) to see if they work. After the PT queries are addressed, then next up would be to search + scan and look at any ADO code (if ADO was used – we don’t even know this). If no ADO code exists, then few if any changes would and should be required here.
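As a concrete illustration of the kind of dialect difference a pass-through query can trip over (the table name is taken from the posted trigger; the column name here is just a placeholder, not from your real schema):

```sql
-- Oracle pass-through syntax: NVL for null handling, ROWNUM to limit rows.
SELECT NVL(DELEGATE_NAME, 'TBC')
FROM COURSE_DELEGATES
WHERE ROWNUM <= 10;

-- The SQL Server equivalent needs TOP and ISNULL (or the portable COALESCE).
SELECT TOP (10) ISNULL(DELEGATE_NAME, 'TBC')
FROM COURSE_DELEGATES;
```

A pass-through query written in the first form will fail outright once the connection points at SQL Server, which is why running each one is the quickest way to find the offenders.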
As noted, you have not shared more information than that your car is broken - without details of what code or what in a form is failing, we really have close to zero to go on here.
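As an aside, the 'Unparsed SQL' trigger in the question does not need a line-by-line port. SQL Server has native sequences, so the usual replacement for an Oracle sequence-table trigger looks roughly like the sketch below. This is an assumption-laden sketch: the object names come from the posted trigger, and the START WITH value would need to be set from the current 'next' value in ISTRAINING.sequence before switching over.

```sql
-- Replace the Oracle sequence-table trigger with a native SQL Server sequence.
CREATE SEQUENCE dbo.COURSE_DELEGATES_SEQ
    START WITH 1      -- set this to the current 'next' value before cutover
    INCREMENT BY 1;

-- Let a column default pull the next value instead of a BEFORE INSERT trigger.
ALTER TABLE dbo.COURSE_DELEGATES
    ADD CONSTRAINT DF_COURSE_DELE_ID
    DEFAULT (NEXT VALUE FOR dbo.COURSE_DELEGATES_SEQ) FOR COURSE_DELE_ID;
```

This also removes the row-locking dance in the original trigger, since the sequence object handles concurrency itself.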
I have a package that is essentially trying to copy 26 tables from oracle to sql server.
It's not a complete table copy; we are looking for records that belong to certain 'Regions' of our company.
I pull the data from oracle
I started out just doing this with elbow grease, but each of the 26 tables required several variables to do the deletes, the fetches, etc.
Long story short, I decided to use variables to represent the table names (source, temp and target).
This allowed me to copy/paste one sequence and effectively bypass a lot of click-click in BIDS.
The problem I am running into is that the metadata seems to be very fragile. Sequences all seem to run fine on their own, but when I run the whole package, it breaks - and never in the same place.
Is this approach just a bad idea w/ SSIS?
So just to take this off the board....
Each sequence container had the following ops
Script task - set variables
Execute SQL task - delete from temp
data flow SourceToTemp -
ole db source - used a generic select * from tbl to temp_tbl
derived column1 - insert a timestamp column
oledb destination - map all the columns into a temp table (**THIS IS THE BIG PROBLEM CHILD)
Execute SQL task - delete from target
Execute SQL task - insert target select from temp
The OLE DB destination is the piece that kept breaking.
Since it references variables, I had to be very careful at design time to set the variables correctly before opening one of the data flows.
I am pretty sure this is the problem. Since I cannot say with certainty when SSIS refreshes metadata in the design environment, I can't be sure if/when sequence X refreshed while the variables were set to support sequence Y.
So while it conceptually should work at run time, dev time is a change-control nightmare.
I have changed all the OLE DB destinations to point to a hard table name. This is really a small concession, since there are 4 SQL statements that are still driven by variables (saving me a lot of clicking and typing).
This small change has eliminated the 'shifting sands' problem.
Takeaway lesson: don't have an OLE DB destination based on a variable.
thanks for the comments
What's the best way to save my MySQL data model and automatically apply changes to my development database server as they are made (or at least nightly)?
For example, today I'm working on my project and create this table in my database, and save the statement to SQL file to deploy to production later:
create table dog (
uid int,
name varchar(50)
);
And tomorrow, I decide I want to record the breed of each dog too. So I change the SQL file to read:
create table dog (
uid int,
name varchar(50),
breed varchar(30)
);
That script will work in production for the first release, but it won't help me update my development database, because it fails with ERROR 1050 (42S01): Table 'dog' already exists. Furthermore, it won't work in production if this change was made after the first release. So I really need to ALTER the table now.
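In other words, once the first release is out, the change has to ship as an ALTER statement rather than a second CREATE. For the breed example above, that would be something like:

```sql
-- Incremental change script for databases that already have the dog table.
ALTER TABLE dog ADD COLUMN breed varchar(30);
```

The CREATE script and the ALTER script then serve different audiences: the former builds a fresh database, the latter upgrades an existing one.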
So now I have two concerns:
Is this how I should be saving my data model (a bunch of CREATE statements in a SQL file), and
how should I be applying changes like this to my database?
My goal is to release changes accurately and enable continuous integration. I use a tool called DDLSYNC to find and apply differences in an Oracle database, but I'm not sure what similar tools exist for MySQL.
At work, we developed a small script to manage our database versioning. Every change to any table or set of data gets its own SQL file.
The files are numbered sequentially. We keep track of which update files have been run by storing that information in the database. The script inserts a row with the filename when the file is about to be executed, and updates the row with a completion timestamp when the execution finishes. This is wrapped inside a transaction. (It's worth remembering that DDL commands in MySQL can not occur within a transaction. Any attempt to perform DDL in a transaction causes an implicit commit.)
Because the SQL files are part of our source code repository, we can make running the update script part of the normal rollout process. This makes keeping the database and the code in sync easy as pie. Honestly, the hardest part is making sure another dev hasn't grabbed the next number in a pending commit.
We combine this update system with an (optional) nightly wipe of our dev database, replacing the contents with last night's live system backup. After the backup is restored, the update gets run, with any pending update files getting run in the process.
The restoration occurs in such a way that only tables that were in the live database get overwritten. Any update that adds a table therefore also has to be responsible for only adding it if it doesn't exist. DROP TABLE IF EXISTS is handy. Unfortunately not all databases support that, so the update system also allows for execution of scripts written in our language of choice, not just SQL.
All of this in about 150 lines of code. It's as easy as reading a directory, comparing the contents to a table, and executing anything that hasn't already been executed, in a determined order.
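A minimal sketch of the bookkeeping side of that idea (the table and file names here are invented for illustration; the real script also wraps this in a transaction where the statements allow it):

```sql
-- Tracking table: one row per update file that has been run.
CREATE TABLE schema_version (
    filename     VARCHAR(255) NOT NULL PRIMARY KEY,
    started_at   DATETIME NOT NULL,
    completed_at DATETIME NULL
);

-- Just before executing a numbered update file:
INSERT INTO schema_version (filename, started_at)
VALUES ('0002_add_dog_breed.sql', NOW());

-- ... the script runs the file's statements here ...

-- When it finishes:
UPDATE schema_version
SET completed_at = NOW()
WHERE filename = '0002_add_dog_breed.sql';
```

Pending work is then just the set of files in the directory that have no completed row in this table, executed in filename order.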
There are standard tools for this in many frameworks: Rails has something called Migrations, something that's easily replicated in PHP or any similar language.
I am in the process of creating an SSIS package that needs to do the following, in this order:
process some data
move that data to some other tables
Get some data and push it in a plain text file.
I have created 3 stored procedures for these; I have 2 "Execute SQL Tasks" for steps 1 and 2, and a "Data Flow Task" for the 3rd.
Now when I run the package I can see all 3 steps are completed (no errors), but they are not running in the correct order.
I see step 3 run first, then steps 1 and 2, and I think step 3 then runs again. Normally I could ignore it, but as the data in the text file can be 700 MB, I need to find a way to get SSIS to run these tasks in sequence.
I have tried a "Sequence Container" but no luck.
Can someone help me with this please?
KA
You need to use precedence constraints to tell SSIS what order your tasks need to be executed in.
Drag the green arrow from task one to task two, and from task two to task three.
You could connect as
first SQL execute task
precedence constraint on success
second SQL execute task
precedence constraint on success
data flow
SSIS will then follow the sequence as required.
thanks
prav
I had exactly this problem. Tasks were being executed in something like the order I'd created them, rather than the sequence I specified later. It turned out that I'd managed to get a task that belonged to the first sequence container to appear in the last sequence container without losing its allegiance to the first. I discovered this by taking a backup and deleting sequence containers - the rogue task disappeared when I deleted the first sequence container.
The fix was to cut and paste the task into the desired sequence container.
I encountered an issue on SQL Server Denali where individual components were running out of sequence even though they were joined by success constraints. The problem seemed to occur when I had cut and pasted the components and the constraint. By deleting and reapplying the constraints, the package then ran in the correct order.
In my case, when I want to control execution order inside a sequence container, I use sub-sequence containers between the Execute SQL task and the Data Flow task. Hope this is useful for you.
The best is to use Sequence Containers... basically they help in creating a Sequence.
But since that does not work in your case, create child packages for each of your different processes,
and then create a master package that links to those child packages using the "Execute Package" task.