Getting 'sliced' data between two database transfers - MySQL to SQL Server

How can I get the 'sliced' data and remove it from the target database?
I am using two databases: MySQL and SQL Server.
Example:
Yesterday I transferred about 1000 rows from MySQL to SQL Server,
then today I deleted 5 rows in MySQL and ran the transfer again.
So how can I know which IDs were deleted from MySQL, so I can remove them in SQL Server too?
I am transferring the data with a stored procedure that checks every ID in a loop while inserting:
foreach ($data as $key => $value) {
    $this->MsSQL->Exec('exec sp_pendaftar {ID}, {NAME}');
}
I have a stored procedure like this:
CREATE PROCEDURE [dbo].[sp_pendaftar]
    @id INT,
    @name VARCHAR(45)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @existing_id INT;
    SET @existing_id = (SELECT TOP 1 id FROM t_pendaftar WHERE id = @id);
    IF @existing_id IS NULL
    BEGIN
        INSERT INTO t_pendaftar VALUES (@id, @name);
    END
    ELSE
    BEGIN
        UPDATE t_pendaftar SET name = @name WHERE id = @id;
    END
END
GO
Please help.
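For what it's worth, the insert-or-update (upsert) logic of that stored procedure can be sketched in memory like this (purely illustrative - a dict stands in for t_pendaftar):

```python
def sp_pendaftar(table, row_id, name):
    """Mimic the stored procedure: INSERT when the id is absent, UPDATE otherwise."""
    if row_id not in table:      # IF @existing_id IS NULL
        table[row_id] = name     # INSERT INTO t_pendaftar VALUES (@id, @name)
        return "inserted"
    table[row_id] = name         # UPDATE t_pendaftar SET name = @name WHERE id = @id
    return "updated"

t_pendaftar = {}
print(sp_pendaftar(t_pendaftar, 1, "Alice"))   # inserted
print(sp_pendaftar(t_pendaftar, 1, "Alicia"))  # updated
```

Note that an upsert alone can never detect deletions: a row deleted in MySQL is simply never sent again, so the SQL Server copy keeps it until something actively compares the two ID sets.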

I do not understand much of the SQL Server code, but I would suggest doing part of the replication in the application. For the larger and more dynamic tables you could define a delete_pendaftar and an insert_pendaftar table (and a change_pendaftar table if needed).
Before you delete from t_pendaftar, select just those rows into delete_pendaftar. This can even be done with triggers, if that does not slow down the application down too much.
On the SQL Server side you can then use a delete with a join, so you remove exactly the deleted rows. In MySQL this would look like:
DELETE orig
FROM t_pendaftar AS orig
INNER JOIN delete_pendaftar AS del ON del.id = orig.id;
This approach can be extended to INSERT and UPDATE, but must be done with some care.
Every now and then you should make a full copy, and you should write some checks to ensure the data stays consistent.
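A minimal in-memory sketch of the delete-log replay idea (hypothetical rows; a dict stands in for the SQL Server copy):

```python
# Replica of t_pendaftar on the SQL Server side, keyed by id
replica = {1: "Alice", 2: "Bob", 3: "Carol"}

# IDs logged into delete_pendaftar before the rows were deleted from MySQL
delete_log = [2]

# Replaying the log is the application-side equivalent of the DELETE ... INNER JOIN
for deleted_id in delete_log:
    replica.pop(deleted_id, None)

print(sorted(replica))  # [1, 3]
```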

Just got an answer from my partner.
First, fetch the array of IDs from the MySQL DB, then fetch the array of IDs from SQL Server, and use array_diff() to find the IDs that are present in SQL Server but no longer present in MySQL:
$MySQL = [11, 12, 13, 15, 16, 19, 25];
$SQL_Server = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25];
$array_to_be_deleted = array_diff($SQL_Server, $MySQL);
print_r($array_to_be_deleted);
The result would be:
[14, 17, 18, 20, 21, 23, 24]
Hope anyone can try to correct me.
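The same comparison can be sketched in Python, using the ID lists from the example (a set difference is the equivalent of PHP's array_diff()):

```python
# IDs currently present in each database (sample data from the question)
mysql_ids = [11, 12, 13, 15, 16, 19, 25]
sqlserver_ids = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25]

# array_diff($SQL_Server, $MySQL) corresponds to a set difference:
# IDs present in SQL Server but no longer present in MySQL.
to_be_deleted = sorted(set(sqlserver_ids) - set(mysql_ids))
print(to_be_deleted)  # [14, 17, 18, 20, 21, 23, 24]
```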

Related

Rails - How to reference model's own column value during update statement?

Is it possible to achieve something like this?
Suppose name and plural_name are fields of Animal's table.
Suppose pluralise_animal is a helper function which takes a string and returns its plural literal.
I cannot loop over the animal records for technical reasons.
This is just an example
Animal.update_all("plural_name = ?", pluralise_animal("I WANT THE ANIMAL NAME HERE, the `name` column's value"))
I want something similar to how you can use functions in MySQL while modifying column values. Is this out-of-scope or possible?
UPDATE animals SET plural_name = CONCAT(name, 's') -- just an example to explain what I mean by referencing a column. I'm aware of the problems in this example.
Thanks in advance
I cannot loop over the animal records for technical reasons.
Sorry, this cannot be done with this restriction.
If your pluralizing helper function is implemented in the client, then you have to fetch data values back to the client, pluralize them, and then post them back to the database.
If you want the UPDATE to run against a set of rows without fetching data values back to the client, then you must implement the pluralization logic in an SQL expression, or a stored function or something.
UPDATE statements run in the database engine. They cannot call functions in the client.
Use a Ruby script to generate an SQL script that INSERTs the plural values into a temp table:
File.open(filename, 'w') do |file|
  file.puts "CREATE TEMPORARY TABLE pluralised_animals(id INT, plural VARCHAR(50));"
  file.puts "INSERT INTO pluralised_animals(id, plural) VALUES"
  Animal.find_each do |animal|
    file.puts "(#{animal.id}, '#{pluralise_animal(animal.name)}'),"
  end
end
Note: replace the trailing comma(,) with a semicolon (;)
Then run the generated SQL script in the database to populate the temp table.
Finally run a SQL update statement in the database that joins the temp table to the main table...
UPDATE animals a
INNER JOIN pluralised_animals pa
ON a.id = pa.id
SET a.plural_name = pa.plural;

Attempt to fetch logical page in database 2 failed. It belongs to allocation unit X not to Y

Started to get the following error when executing a certain SP. The code related to this error is pretty simple: joining a #temp table to a real table.
Full text of error:
Msg 605, Level 21, State 3, Procedure spSSRSRPTIncorrectRevenue, Line 123
Attempt to fetch logical page (1:558552) in database 2 failed. It belongs to allocation unit 2089673263876079616 not to 4179358581172469760.
Here is what I found:
https://support.microsoft.com/en-us/kb/2015739
This suggests some kind of issue with the database. I ran DBCC CHECKDB on the user database and on tempdb - both pass.
The second thing I tried was finding which tables those allocation units belong to:
SELECT au.allocation_unit_id, OBJECT_NAME(p.object_id) AS table_name, fg.name AS filegroup_name,
au.type_desc AS allocation_type, au.data_pages, partition_number
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.partition_id
JOIN sys.filegroups AS fg ON fg.data_space_id = au.data_space_id
WHERE au.allocation_unit_id in(2089673263876079616, 4179358581172469760)
ORDER BY au.allocation_unit_id
This returns 2 objects in tempdb, not in the user DB. So it makes me think it is some kind of data corruption in tempdb? I am a developer, not a DBA. Any suggestions on what I should check next?
Also, when I run the query above, how can I tell the REAL object name, one that I understand - like #myTempTable______... instead of #07C650CE?
I was able to resolve this by clearing the SQL caches:
DBCC FREEPROCCACHE
GO
DBCC DROPCLEANBUFFERS
GO
Apparently restarting the SQL Server service would have had the same effect.
(via Made By SQL, reproduced here to help others!)
I have seen errors like yours too.
First, back up the table or object so you don't panic later. I tried the steps below on my database.
Step 1:
Back up the table (move the data to another table, manually or however you can).
I used the code below to move my table's rows into another table:
SET NOCOUNT ON;
DECLARE @Counter INT = 1;
DECLARE @LastRecord INT = 10000000; -- your table's row count
WHILE @Counter < @LastRecord
BEGIN
    BEGIN TRY
        -- don't forget: create your_table_new first
        INSERT INTO your_table_new SELECT * FROM your_table WHERE your_column = @Counter;
    END TRY
    BEGIN CATCH
        -- don't forget: create the error_code table first
        INSERT INTO error_code SELECT @Counter, 'error_number';
    END CATCH
    SET @Counter += 1;
END;
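The same copy-with-error-logging pattern can be sketched in Python (process() and the sample rows are made up for illustration; the point is that one bad row gets logged instead of aborting the whole copy):

```python
def process(row):
    """Stand-in for reading a possibly corrupted row."""
    if row is None:
        raise ValueError("corrupted row")
    return row

def copy_rows(source, destination, error_log):
    """Copy rows one at a time; log failing keys instead of aborting."""
    for key, row in source.items():
        try:
            destination[key] = process(row)
        except ValueError:
            error_log.append(key)

src = {1: "a", 2: None, 3: "c"}  # row 2 simulates a corrupted page
dst, errors = {}, []
copy_rows(src, dst, errors)
print(dst)     # {1: 'a', 3: 'c'}
print(errors)  # [2]
```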
Step 2:
DBCC CHECKTABLE(your_table, REPAIR_REBUILD)
GO
Check your table. If you still get an error, go to step 3.
Step 3:
Warning: you can lose some data in your table with this option, but don't worry - you backed up your table in step 1.
DBCC CHECKTABLE(your_table, REPAIR_ALLOW_DATA_LOSS)
GO
Good luck!
~~pektas
In my case, truncating and re-populating data in the concerned tables was the solution.
Most probably the data inside tables was corrupted.
Database ID 2 means your tempdb is corrupted. Fixing tempdb is easy: restart the SQL Server service and you are good to go.
This could be an instance of a bug Microsoft fixed in SQL Server 2008 involving queries on temporary tables that self-reference (for example, we have experienced it when loading data from a real table into a temporary table while filtering out rows we had already populated in the temp table in a previous step).
It seems to happen only on temporary tables with no identity/primary key, so a workaround is to add one; alternatively, if you patch to CU3 or later you can enable the hotfix by turning on a trace flag.
For more details on the bug/fixes: https://support.microsoft.com/en-us/help/960770/fix-you-receive-error-605-and-error-824-when-you-run-a-query-that-inse

How to create a mysql loop of multiple table names structure and data copy to second database

What I'm trying to accomplish.
My site is live, and I'm making some changes to it.
When I'm ready to go live with the changes, I will essentially drop the old database (myFirstDb) and put the new one up (mySecondDb).
BUT - I will want to keep a few of the tables from the live site(myFirstDb) - and bring them into the new database (mySecondDb).
I know I can do this via phpMyAdmin - but that only allows one table at a time to be copied.
MY QUESTION:
What kind of SQL query in MySQL will allow me to define the tables to move from database one to database two (keeping the structure, data, etc.)?
So here's what I have so far.
First off, is it possible to create a loop in a MySQL SQL query? If so...
Could someone assist me with the proper syntax for creating an array in MySQL?
How would I go about creating the array in valid SQL syntax?
-- define array of table names here. <-- how to form this?
SET @table_name_array = ('tableFive', 'tableSeven', 'tableNine', 'tableFifteen', 'tableNineteen', 'tableNth', 'tableMaybeOneMore');
How would I go about looping thru that array in proper sql syntax?
-- start loop thru each table name <-- how to write this?
My code so far
SET @database_name := 'myFirstDb';
SET @second_database_name := 'mySecondDb';
-- define array of table names here. <-- how to form this?
SET @table_name_array = ('tableFive', 'tableSeven', 'tableNine', 'tableFifteen', 'tableNineteen', 'tableNth', 'tableMaybeOneMore');
-- start loop thru each table name <-- how to write this?
SET @table_name := 'value';
SET @second_db_table_name := @table_name;
CREATE TABLE @second_db_table_name LIKE @table_name;
ALTER TABLE @second_db_table_name DISABLE KEYS;
INSERT INTO @second_db_table_name SELECT * FROM @table_name;
ALTER TABLE @second_db_table_name ENABLE KEYS;
-- end loop here
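MySQL has no arrays and cannot use a user variable as a table name outside of prepared statements, so a common workaround is generating the statements from a script. A sketch in Python (database and table names taken from the question):

```python
tables = ['tableFive', 'tableSeven', 'tableNine', 'tableFifteen',
          'tableNineteen', 'tableNth', 'tableMaybeOneMore']
source_db, target_db = 'myFirstDb', 'mySecondDb'

# Emit the four statements from the question once per table
statements = []
for t in tables:
    statements += [
        f"CREATE TABLE `{target_db}`.`{t}` LIKE `{source_db}`.`{t}`;",
        f"ALTER TABLE `{target_db}`.`{t}` DISABLE KEYS;",
        f"INSERT INTO `{target_db}`.`{t}` SELECT * FROM `{source_db}`.`{t}`;",
        f"ALTER TABLE `{target_db}`.`{t}` ENABLE KEYS;",
    ]

print(statements[0])  # CREATE TABLE `mySecondDb`.`tableFive` LIKE `myFirstDb`.`tableFive`;
```

The resulting script can then be fed to the mysql client in one go.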

Update multiple mysql rows with 1 query?

I am porting a client's DB to a new one with different post titles and row IDs, but he wants to keep the hits from the old website.
He has over 500 articles in the new DB, and updating a single one is not an issue with this query:
UPDATE blog_posts
SET hits=8523 WHERE title LIKE '%slim charger%' AND category = 2
but how would I go about doing this for all 500 articles with one query? I already have an export query from the old DB with post titles and hits, so we can find the new ones more easily:
INSERT INTO `news_items` (`title`, `hits`) VALUES
('Slim charger- your new friend', 8523 )...
The only reference common to both tables is the product name word within the title - everything else is different: ID, full title...
Make a temporary table for the old data in old_posts, then:
UPDATE new_posts LEFT JOIN old_posts ON new_posts.title = old_posts.title SET new_posts.hits = old_posts.hits;
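Conceptually, that join-update copies hits across wherever the titles match. A small Python sketch, with dicts standing in for the two tables (sample rows are made up):

```python
# title -> hits, standing in for old_posts and new_posts
old_posts = {"Slim charger- your new friend": 8523, "Galaxy dock": 412}
new_posts = {"Slim charger- your new friend": 0, "Brand new article": 0}

# LEFT JOIN on title: only rows that also exist in old_posts get hits copied
for title in new_posts:
    if title in old_posts:
        new_posts[title] = old_posts[title]

print(new_posts["Slim charger- your new friend"])  # 8523
print(new_posts["Brand new article"])              # 0
```

Note this only works where the titles are literally equal; per the question, the titles differ, so an exact-title join will miss most rows.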
Unfortunately that's not how it works; you will have to write a script/program that does a loop.
articles cursor;
selection articlesTable%rowtype;
WHILE(FETCH(cursor into selection)%hasNext)
Insert into newTable selection;
END WHILE
How you bridge it is up to you, but that's the basic pseudo code/PLSQL.
The APIs for selecting from one DB and putting into another vary by DBMS, so you will need a common intermediate format. Basically take the record from the first DB, stick it into a struct in the programming language of your choice, and prefrom an insert using those struct values using the APIs for the other DBMS.
I'm not 100% sure that you can update multiple records at once, but I think what you want to do is use a loop in combination with the update query.
However, if you have two tables with absolutely no relationship or common identifier between them, you are in a hard place. The hard place in this instance means you would have to do them all manually :(
One last idea that might save you: the IDs may be different, but the rows might still be in the same order. If that is the case, you can still loop through the old table and update the new table as described above.
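If the only common reference really is a product-name keyword buried in otherwise different titles, the matching has to go through that keyword. A hedged Python sketch (the keyword list, titles, and hit counts are invented for illustration):

```python
# Known product keywords that appear in both old and new titles (assumption)
keywords = ["slim charger", "galaxy dock"]

old_hits = {"Slim charger- your new friend": 8523, "Galaxy dock review": 412}
new_titles = ["Why the slim charger rocks", "Galaxy dock, one year later"]

def keyword_of(title, keywords):
    """Return the first known product keyword contained in the title, if any."""
    lowered = title.lower()
    return next((k for k in keywords if k in lowered), None)

# Re-key the old hits by product keyword, then apply them to the new titles
hits_by_keyword = {keyword_of(t, keywords): h for t, h in old_hits.items()}
new_hits = {t: hits_by_keyword.get(keyword_of(t, keywords), 0) for t in new_titles}

print(new_hits["Why the slim charger rocks"])  # 8523
```

The weak link is the keyword list itself: with ~500 articles it likely needs to be curated by hand or extracted from a product table.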
You can build a procedure that'll do it for you:
CREATE PROCEDURE insert_news_items()
BEGIN
  DECLARE done BOOLEAN DEFAULT FALSE;
  DECLARE v_title VARCHAR(255);
  DECLARE v_hits INT;
  DECLARE news_items_cur CURSOR FOR
    SELECT title, hits
    FROM blog_posts
    WHERE title LIKE '%slim charger%' AND category = 2;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
  OPEN news_items_cur;
  read_loop: LOOP
    FETCH news_items_cur INTO v_title, v_hits;
    IF done THEN
      LEAVE read_loop;
    END IF;
    INSERT INTO `news_items` (`title`, `hits`) VALUES (v_title, v_hits);
  END LOOP;
  CLOSE news_items_cur;
END;

T-SQL Change Data Capture log cleanup

I have enabled CDC on a few tables in my SQL Server 2008 database. I want to change the number of days the change history is kept.
I have read that by default change logs are kept for 3 days before they are deleted by the sys.sp_cdc_cleanup_change_table stored procedure.
Does anyone know how I can change this default value so that I can keep the logs for longer?
Thanks
You need to update the cdc_jobs.retention field for your database. The record in the cdc_jobs table won't exist until at least one table has been enabled for CDC.
-- modify msdb.dbo.cdc_jobs.retention value (in minutes) to be the length of time to keep change-tracked data
UPDATE j
SET [retention] = 3679200 -- 7 years, in minutes
FROM sys.databases d
INNER JOIN msdb.dbo.cdc_jobs j
    ON j.database_id = d.database_id
    AND j.job_type = 'cleanup'
    AND d.name = '<Database Name, sysname, DatabaseName>';
Replace <Database Name, sysname, DatabaseName> with your database name.
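The retention value is in minutes, so long durations take some arithmetic. As a quick sanity check of the 7-year figure above (using 365-day years):

```python
MINUTES_PER_DAY = 24 * 60

# 7 years of 365 days, expressed in minutes (the unit cdc_jobs.retention uses)
retention_minutes = 7 * 365 * MINUTES_PER_DAY
print(retention_minutes)  # 3679200
```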
Two alternative solutions:
Drop the cleanup job:
EXEC sys.sp_cdc_drop_job @job_type = N'cleanup';
Change the job via sp:
EXEC sys.sp_cdc_change_job
    @job_type = N'cleanup',
    @retention = 2880;
Retention time is in minutes, max 52494800 (100 years). But if you drop the job, the data is never cleaned up - the job doesn't even check whether there is data to clean up. If you want to keep the data indefinitely, I'd prefer dropping the job.