I have two MSSQL 2012 databases.
I have snapshot replication configured, where the first server is the publisher and distributor and the other is the subscriber.
I would like to be able to execute a command on the publisher just before the replication job occurs, and then another command on the subscriber just after the replication finishes.
I believe this should be a pull snapshot subscription, so that the agent runs on the subscriber server.
Is this even possible?
EDIT: Due to the nature of snapshot replication, I switched to transactional replication, which removed my ability to execute scripts on replication start and stop.
I never did find a way to execute commands around the actual data replication, since I switched to transactional replication. The job handling that replication type starts and then just keeps running, unlike snapshot replication, where the job starts, replicates the data, and stops.
Instead I set up the jobs I needed using the Windows Task Scheduler. My service transfers files to and from a web server through the database, and only transfers files that are not already present.
Using Task Scheduler is working pretty well, and it is MUCH simpler and more stable than having something execute a SQL script, which would then run a PowerShell remoting command to connect to the server and start the service.
I just thought I would add this in case anyone else stumbles on a similar problem :)
I'm building a background service which boils down to a very complicated queue system. The idea is to use Redis as non-persistent storage and have a pub/sub scheme which runs on an interval.
All of the subscribers will be behind a load balancer. This removes the complicated problem of maintaining state between all the servers behind the load balancer.
But this introduces a new problem: how can I ensure that the non-persistent (Redis) and persistent (MySQL) databases are both updated by my application(s)?
It seems like I'm forced to prioritize one, and if I HAVE to prioritize one, I will prioritize persistence. But, in that scenario, what happens if MySQL is updated, Redis is not, and for some reason I have lost the connection to MySQL and cannot undo my last write?
There are two possible solutions to your problem:
First, follow these steps:
a. Start MySQL transaction with START TRANSACTION
b. Run your MySQL query INSERT INTO ...
c. Run your Redis command
d. Finish your MySQL transaction with a COMMIT statement if the Redis command succeeded, or a ROLLBACK if it failed
Using transactions this way keeps the data consistent in both stores; a minimal sketch of the flow follows below.
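For illustration, here is one way that flow could look in Python. This is only a sketch: it assumes the mysql-connector-python and redis client packages, and the events table and key naming are hypothetical.

import mysql.connector
import redis

db = mysql.connector.connect(host="localhost", user="app", password="secret", database="app")
cache = redis.Redis(host="localhost", port=6379)

def save_event(event_id, payload):
    cursor = db.cursor()
    try:
        db.start_transaction()                     # a. START TRANSACTION
        cursor.execute(
            "INSERT INTO events (id, payload) VALUES (%s, %s)",  # b. MySQL INSERT
            (event_id, payload),
        )
        cache.set("event:%s" % event_id, payload)  # c. Redis command
        db.commit()                                # d. COMMIT because Redis succeeded
    except Exception:
        db.rollback()                              # d. ROLLBACK because something failed
        raise
    finally:
        cursor.close()

Note that this ordering protects the persistent store: if the Redis command fails, MySQL rolls back. A crash between the Redis call and the COMMIT can still leave a stray Redis key, which is usually tolerable for non-persistent cache data.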
The second option is to write a Lua script using the LuaSQL library (https://realtimelogic.com/ba/doc/en/lua/luasql.html), in which you connect to MySQL, insert your data, and then send the commands to Redis as well. This Lua script can then be called from the client side with a single EVAL or EVALSHA command.
You can try the MySQL UDF plugin (https://github.com/Ideonella-sakaiensis/lib_mysqludf_redis)
See the post: how to move data from mysql to redis
I ran into this situation recently using Spring Boot (1.2.3) and Flyway (3.1), and could not find much about how to handle it:
A server was spinning up and executing a long-running ALTER TABLE ... ADD COLUMN statement (20-30 minutes) against a MySQL database (5.6). While the script was running, the server process was hard-terminated because it was not responding to health checks within the given timeframe. Since the MySQL server was already processing the statement, it continued through to completion, but the migration was marked as neither failed nor successful. When another server was spun up, it tried to execute the script, which failed because the column already existed.
Given that the server could crash at any time, for any reason, during a long-running script, I would like to understand established patterns for handling this situation, other than idempotent scripts or a manual DB upgrade process.
Perhaps a setting that indicates the database platform uses implicit commits, so the script is marked as run as soon as it is sent to the server?
You bring up a good point but unfortunately, I don't think Flyway or Spring Boot have any native support for this.
One workaround, ugly as it is, is to implement the beforeEachMigrate and afterEachMigrate callbacks that Flyway provides. You could use them to maintain a separate migration table that keeps track of which migrations have been started and which ones have been completed. Then, if it contains unfinished migrations the next time your application starts, you can shut it down with a descriptive error message.
I recommend creating a feature request about it. If you do, please link us to it!
My approach would be to have separate migration scripts for any long-running SQL that has an implicit commit. Flyway makes it really easy to add minor-version-numbered scripts (for example, V2_1__add_column.sql), so there's not a good reason to overcomplicate the implementation with what you're suggesting. If you're using PostgreSQL you probably wouldn't need to do this, but Oracle and MySQL would require it.
I am writing a DB-logging Ruby gem which will simply take a job from a Beanstalk queue and write it to the DB.
That is, a process on Server A puts a job (that it wants logged) into the Beanstalk queue on Server B, and my logging process on Server B takes it out and writes it to the MySQL DB on Server B.
I want to know if this is worth it?
Is putting a job in the Beanstalk queue faster than writing to the DB? Or could the process that wants to log just write directly to the DB instead of going through the logging process?
Note that both the Beanstalk server and the DB are on another server.
Beanstalk internally makes a socket call from Server A to Server B.
I believe MySQL would need to do the same as well?
So is writing to MySQL on another server going to be slower than putting the job in the Beanstalk queue?
It'll be much faster, primarily because beanstalkd jobs are, by default, stored in memory and are lost if, for example, you lose power on your server, whereas MySQL is a strongly ACID-compliant relational database and hence will go to a lot of effort to flush each of your logs to disk.
I think you'll find, after you do some benchmarking with a lot of logs being made by your system, that disk I/O will be your limiting factor rather than the speed of TCP/IP sockets. Your current system's advantage is that when Server A files a log on Server B's beanstalkd instance, it takes up very little of Server A's time, and Server B can periodically flush out many logs at once from beanstalkd to MySQL, making the process more efficient. The disadvantage is that the more you batch up the logs, the more logs you will lose in the event of a software/power failure, unless you use beanstalkd's "-b" parameter, which makes jobs durable by writing them to disk (and hence makes the process slower).
Of course, the only way to truly settle this question is to benchmark!
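For example, here is a rough timing sketch in Python (the gem itself is Ruby, so treat this only as an outline of the measurement). It assumes the greenstalk beanstalkd client and mysql-connector-python packages; the host names and the logs table are placeholders.

import time
import greenstalk
import mysql.connector

N = 1000
payload = "sample log line " * 10

queue = greenstalk.Client(("serverB", 11300))
db = mysql.connector.connect(host="serverB", user="app", password="secret", database="logging")
cursor = db.cursor()

start = time.time()
for _ in range(N):
    queue.put(payload)                                     # enqueue to beanstalkd over TCP
print("beanstalkd: %.2f ms/job" % ((time.time() - start) / N * 1000))

start = time.time()
for _ in range(N):
    cursor.execute("INSERT INTO logs (message) VALUES (%s)", (payload,))
    db.commit()                                            # one commit per log = one disk flush
print("mysql:      %.2f ms/insert" % ((time.time() - start) / N * 1000))

Run it from Server A so the network hop to Server B is included in both measurements.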
I have a Jenkins (Hudson) server setup that runs tests on a variety of slave machines. What I want to do is reconfigure the slave (using remote APIs), reboot the slave so that the changes take effect, then continue with the rest of the test. There are two hurdles that I've encountered so far:
Once a Jenkins job begins to run on the slave, the slave cannot go down or break the network connection to the server, otherwise Jenkins immediately fails the test. Normally, I would say this is completely desirable behavior. But in this case, I would like Jenkins to accept the disruption until the slave comes back online and Jenkins can reconnect to it - or the slave reconnects to Jenkins.
In a job that has been attached to the slave, I need to run some build tasks on the Jenkins master - not on the slave.
Is this possible? So far, I haven't found a way to do this using Jenkins or any of its plugins.
EDIT - Further Explanation
I really, really like the Jenkins slave architecture. Combined with the plugins already available, it makes it very easy to get jobs to a slave, run them, and pull the results back. And the ability to pick any matching slave allows for automatic job/test distribution.
In our situation, we use virtualized (VMware) slave machines. It was easy enough to write a script that would cause Jenkins to use VMware PowerCLI to start the VM up when it needed to run on a slave, then ship the job to it and pull the results back. All good.
EXCEPT that part of the setup of each test is to slightly reconfigure the virtual machine in some fashion. Disabling UAC, logging on as a different user, installing a different driver, etc. - each of these changes requires that the test VM/slave be rebooted before the changes take effect. Although I can write slave on-demand scripts (Launch Method=Launch slave via execution of command on the master) that handle this reconfiguration and restart, it has to be done BEFORE the job is run. That's where the problem occurs - I cannot configure the slave that early, because the type of configuration change depends on the job being run, which is known only after the slave is started.
Possible Solutions
1) Use multiple slave instances on a single VM. This wouldn't work - several of the configurations are mutually exclusive, but Jenkins doesn't know that. So it would try to start one slave configuration for one job, another slave for a different job - and both slaves would be on the same VM. Locks on the jobs don't prevent this since slave starting isn't part of the job.
2) (Optimal) A build step that allows a job to know that its slave connection MIGHT be disrupted. The build step may have to include some options so that Jenkins knows how to reconnect the slave (will the slave reconnect automatically, will Jenkins have to run a script, will simple SSH suffice). The build step would handle the disconnect of the slave, ignore the usually job-failing disconnect, then perform the reconnect. Once the slave is back up and running, the next build step can occur. Perhaps a timeout to fail the job if the slave cannot be reconnected within a certain amount of time.
Current Solution - less than optimal
Right now, I can't use the slave function of Jenkins. Instead, I use a series of build steps - run on the master - that use Windows and PowerShell scripts to power on the VM, make the configurations, and restart it. The VM has an SSH server running on it, and I use that to upload test files to the test VM and then execute them remotely. Then I download the results back to Jenkins for handling by the job. This solution is functional, but it is a lot more work than the typical Jenkins slave approach. Also, the scripts are targeted at a single VM; I can't easily use a pool of slaves.
Not sure if this will work for you, but you might try making the Jenkins agent node programmatically tell the master node that it's offline.
I had a situation where I needed to make a Jenkins job that performs these steps (all while running on the master node):
revert the Jenkins agent node VM to a powered-off snapshot
tell the master that the agent node is disconnected (since the master does not seem to automatically notice the agent is down, whenever I revert or hard power off my VMs)
power the agent node VM back on
as a "Post-build action", launch a separate job restricted to run on the agent node VM
I perform the agent disconnect step with a curl POST request, but there might be a cleaner way to do it:
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" http://JENKINS_HOST/computer/THE_NODE_TO_DISCONNECT/doDisconnect
Then when I boot the agent node, the agent launches and automatically connects, and the master notices the agent is back online (and will then send it jobs).
I was also able to toggle a node's availability on and off with this command (using 'toggleOffline' instead of 'doDisconnect'):
curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
(Running the same command again puts the node status back to normal.)
The above may not apply to you since it sounds like you want to do everything from one Jenkins job running on the agent node. And I'm not sure what happens if an agent node disconnects or marks itself offline in the middle of running a job. :)
Still, you might poke around in this Remote Access API doc a bit to see what else is possible with this kind of approach.
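For example, here is a small, hedged Python sketch that polls the documented /computer/<node>/api/json endpoint until the agent reports itself online again after a reboot; JENKINS_HOST and the node name are placeholders, and you may need to add authentication for your instance.

import json
import time
import urllib.request

JENKINS = "http://JENKINS_HOST"
NODE = "THE_NODE_TO_DISCONNECT"

def node_is_online():
    # The computer API exposes an "offline" boolean for each node.
    with urllib.request.urlopen("%s/computer/%s/api/json" % (JENKINS, NODE)) as resp:
        return not json.load(resp)["offline"]

# Poll until the rebooted agent reconnects, or give up after ~10 minutes.
deadline = time.time() + 600
while time.time() < deadline:
    if node_is_online():
        print("agent is back online")
        break
    time.sleep(15)
else:
    raise SystemExit("agent did not reconnect in time")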
Very easy. You create a master job that runs on the master; from the master job you call the client job as a build step (it's a new kind of build step, and I love it). You need to check the option so that the master job waits for the client job to finish. Then you can run your script to reconfigure your client and run the second test on the client.
An even better strategy is to have two nodes running on your slave machines. You need to configure two nodes in Jenkins. I used that strategy successfully with a Unix slave. The reason was that I needed different environment variables to be set up and I didn't want to push that into the jobs. I used SSH clients, so I don't know if it is possible with different client types. Then you might be able to run both tests at the same time, or you can chain the jobs or use the master strategy mentioned above.
I'm working on a project that has a MySQL transactional database backing up a web application. The company uses SQL Server for back office and reporting applications. What is the best way to update SQL Server with the data from MySQL? Right now, we are performing a dump of the MySQL data and doing a full restore. This may not be feasible much longer due to the increasing size of the database.
I would prefer a solution that copies only newly inserted and updated rows. I also need the SQL Server database to be static after the updates are applied. Basically, it should change once a day. I can update SQL Server from a local copy of MySQL (i.e. not production). Is there a way to apply MySQL replication to a slave server at specified intervals? A perfect solution would be to run a once-daily update on MySQL that syncs the database as of a point in time.
Can you find a way to snapshot the MySQL DB and then do the copy? It would make an instant logical copy of the database which would be frozen in time.
http://aspiringsysadmin.com/blog/2007/08/13/consistent-mysql-backups-using-zfs-snapshots/
The ZFS filesystem can do this - but you haven't mentioned your hardware/OS.
Also, perhaps you could restrict the data you are pulling by time, so that if your pull takes 45 minutes, it only fetches data that is older than one hour. Or, to make things a little safer, how about just pulling the previous day's data?
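As a rough illustration, here is a minimal incremental-pull sketch in Python. It assumes the rows carry an updated_at column maintained by the web application, and uses the mysql-connector-python and pyodbc packages; the hosts, credentials, and the customers table are placeholders.

import datetime
import mysql.connector
import pyodbc

# Upper bound: the last full hour, so in-flight writes from the web app are not raced.
upper = datetime.datetime.utcnow().replace(minute=0, second=0, microsecond=0)
lower = upper - datetime.timedelta(days=1)   # or the timestamp of the previous successful pull

mysql_conn = mysql.connector.connect(host="mysql-host", user="etl", password="secret", database="webapp")
mssql_conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mssql-host;DATABASE=reporting;UID=etl;PWD=secret"
)

src = mysql_conn.cursor()
dst = mssql_conn.cursor()

src.execute(
    "SELECT id, name, updated_at FROM customers WHERE updated_at >= %s AND updated_at < %s",
    (lower, upper),
)

for row_id, name, updated_at in src:
    # Naive upsert: delete then re-insert; a T-SQL MERGE would also work.
    dst.execute("DELETE FROM customers WHERE id = ?", row_id)
    dst.execute("INSERT INTO customers (id, name, updated_at) VALUES (?, ?, ?)", row_id, name, updated_at)

mssql_conn.commit()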
I believe SSIS 2008 has a new 'maintain table' module that does the common task of capturing updated/inserted records and, optionally, deletes.
Look into DTS, Microsoft's ETL tool. It's rather nice. Do the mapping, schedule it as a cron job, and Bob's your uncle.
Regardless of how you do the import to SQL Server from the MySQL clone, I don't think you need to worry about restricting MySQL replication to specific times.
MySQL replication only requires one thread in the master server and basically just transfers the transaction log to the slave. If you can, put the master and slave MySQL servers on a private LAN segment so that replication traffic does not impact the web traffic.
If you have SQL Server Standard or higher, SQL Server will take care of all of your needs.
Use SSIS to grab the data.
Use SQL Server Agent to schedule your timed tasks.
BTW, I'm doing the exact same thing that you are. SQL Server is awesome - it was easy to set up (I'm a noob to SSIS) and it worked on the first shot.
It sounds like what you need to do is set up a script to start and stop replication on a slave database. If you can do that via a script, then you can establish a workflow in SSIS such as the following:
1. Stop replication to the slave MySQL database.
2. Once replication has stopped, take a snapshot of the slave MySQL database.
3. Once the snapshot has been taken:
a. Start replication to the slave MySQL database.
b. Import the slave MySQL database replica into SQL Server.
NB: 3a and 3b can run in parallel.
I think your best bet in such a scenario would be to use SSIS to enable and disable MySQL database replication to the slave as well as to take a snapshot of the slave database. Then you can drive the whole thing from the SQL Server Agent mechanism.
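If you end up scripting the stop/start step yourself rather than inside SSIS, here is a minimal sketch of steps 1 and 3a in Python; it assumes mysql-connector-python and an account with the privileges needed for STOP SLAVE/START SLAVE, and the host and credentials are placeholders.

import time
import mysql.connector

slave = mysql.connector.connect(host="mysql-slave", user="repl_admin", password="secret")
cur = slave.cursor(dictionary=True, buffered=True)

# Step 1: stop applying replicated changes so the slave's data is frozen at a consistent point.
cur.execute("STOP SLAVE SQL_THREAD")

# Wait until the SQL thread reports stopped before taking the snapshot (step 2).
while True:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
    if status and status["Slave_SQL_Running"] == "No":
        break
    time.sleep(1)

# ... step 2: take the snapshot of the slave database here ...

# Step 3a: resume replication; the SQL Server import (3b) can run from the snapshot in parallel.
cur.execute("START SLAVE SQL_THREAD")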
Hope this helps