Magento EE database has no triggers - mysql

I inherited a heavily-customized Magento EE project that has been through multiple stages of disaster. The production database has never been pulled down to the lower environments in the roughly two years of the project. It appears the production database has no triggers defined, but all the lower databases (dev, test, etc.) do have triggers, which is what you'd expect in a Magento EE project.
At this point I'm not even sure how the application is still running on production. I'm loading a triggerless mysqldump that I took from prod into another environment now to see if the database actually works.
Has anybody ever seen this before? How would this even happen? Maybe the project started out on CE and then was upgraded to EE and the upgrade failed partially? I'm at a loss.

As far as I can tell, this was caused by the upgrade path from CE to EE, and I guess the previous consultants either hacked the upgrade process to not create triggers, or it failed and they didn't notice. Triggers are only necessary if you're reindexing via cron job rather than after save, so the app still runs OK.
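For anyone cleaning up a similar mess: you can confirm exactly which triggers are missing by querying information_schema in each environment and diffing the results. A minimal Python sketch of the comparison (the schema name and trigger names below are placeholders, not real Magento identifiers):

```python
# Run this query against each environment (prod, dev, ...) via the
# mysql CLI or any client; 'magento' is a placeholder schema name.
TRIGGER_QUERY = """
SELECT TRIGGER_NAME
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'magento'
ORDER BY TRIGGER_NAME;
"""

def missing_triggers(reference_env, suspect_env):
    """Trigger names present in a known-good environment (e.g. dev)
    but absent from the suspect one (e.g. prod)."""
    return sorted(set(reference_env) - set(suspect_env))

# Illustrative trigger names only:
dev_triggers = ["trg_catalog_product_after_insert",
                "trg_catalog_product_after_update"]
prod_triggers = []
print(missing_triggers(dev_triggers, prod_triggers))
```

If the diff matches the trigger list in a healthy lower environment, that supports the broken-upgrade theory.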

Related

vb.net application slow performance after installing on newly setup computers

I have been working on a project using Visual Studio 2015 along with VB.NET and MySQL. I deployed my application using ClickOnce to my company server, then installed it on all the computers in my company. It is working fine. Recently, I installed it on three new computers which have the exact same OS and specs as the existing computers; however, the application on the three new computers shows a very significant difference in performance compared to the existing computers: it is much slower on the new machines.
I have checked through
the network
the connection to the MySQL database
the memory it consumes
the .NET Framework version
but it is all exactly the same as on the existing computers. Does anyone have any idea what might be the cause? Or any way to troubleshoot this problem?
Just add timers, stopwatches, etc. to instrumented builds. In fact, in an application of any size I build in, from the off, at least a skeletal diagnostic system with a display window and stopwatches that can be used to time specific bits of code - typically database queries. With that in place, it's simple to add specifics to produce instrumented builds that drill down to any problems that only occur on end-users' machines. You can also download DebugView (DbgView) from Microsoft and use it in conjunction with Trace statements in your code.
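The stopwatch idea sketched in Python terms (the label and log target are illustrative; in the VB.NET app the equivalent would be System.Diagnostics.Stopwatch with output to Trace or a diagnostic window):

```python
import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label, log=print):
    """Time the wrapped block and report the elapsed milliseconds
    via the given log callback (print, a list, Trace output, ...)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        log(f"{label}: {elapsed * 1000:.1f} ms")

# Usage: wrap a suspect section, such as a query against MySQL.
timings = []
with stopwatch("customer lookup", log=timings.append):
    sum(range(100000))  # stand-in for the real query
print(timings[0])
```

Redirecting the log callback is what lets the same instrumentation feed a display window in debug builds and a log file on end-users' machines.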

using which mysql_backup packages?

I have a DB program and I want to use mysql_backup that is from here:
http://mysqlbackupnet.codeplex.com
When you download the package, there are three folders: .NET 2, .NET 4 and .NET 4.5.
Does it matter which package I use?
For me it would be better to use the .NET 2 version, because all my clients are on Windows XP, where .NET 4 has some crashes.
And if there is no difference, why are there three packages?
thanks
What "DB program" do you have? I'll take a guess it's MySQL you're working with, but are you using something like Percona or MariaDB as a drop-in? Also, what flavor of Linux are you running? I only ask because perhaps there are easier/cleaner ways of doing what you are trying to do.
If you have root access to the server you are working with, why not write a bash script to take care of backing up your databases? Going your own custom route might seem daunting, but the upside is that you get to control how your system backs up. You could set up a cron job to make backups to a specific user folder, or even have backups sent to a remote Dropbox. I've got a similar system set up on my own VPS. Happy to help you out if I'm able.
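A rough sketch of that roll-your-own approach, here in Python rather than bash: build a mysqldump command per database and prune old dumps. Paths, database names and the retention count are placeholders, and in a real cron job the credentials should come from a ~/.my.cnf file, not the command line:

```python
import datetime
import os

def dump_command(database, backup_dir, when=None):
    """Build a timestamped mysqldump-to-gzip shell command.
    Assumes credentials come from ~/.my.cnf, not the command line."""
    when = when or datetime.datetime.now()
    stamp = when.strftime("%Y%m%d-%H%M%S")
    outfile = os.path.join(backup_dir, f"{database}-{stamp}.sql.gz")
    cmd = f"mysqldump --single-transaction {database} | gzip > {outfile}"
    return cmd, outfile

def prune_old(files, keep=7):
    """Given dump filenames sorted oldest-first, return those to delete."""
    return files[:-keep] if len(files) > keep else []

cmd, path = dump_command("shop", "/var/backups/mysql",
                         datetime.datetime(2015, 1, 2, 3, 4, 5))
print(cmd)
```

Run from cron nightly, then rsync or upload the backup directory wherever you like (a remote box, Dropbox, etc.).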

Configuration management: if upgrade fails then rollback

I'm trying to develop fully stand-alone servers that are able to upgrade themselves without any human intervention. To be more precise, here are the requirements:
- Every night the server checks for updates on a specified server
- When an upgrade is available, it downloads a new configuration file or something similar
- Then the server proceeds with its upgrade, but if anything goes wrong (the internet connection is lost or something is badly downloaded) the server rolls back to its previous configuration.
In fact we don't really care if the server isn't up to date, but we want to be sure that it's still running (even on an old version). I've looked at configuration management systems and found fancy tools like Puppet. But, for instance, if Puppet can't download a new Debian package, the update will fail and there's a risk that the server can't fulfill its task.
So I was wondering: do I have to check that every package is correctly downloaded before launching the upgrade, or is there any fancy tool that can do it for me and roll back if needed?
One point is very important: once a server is deployed, we have no access to it. That's why it's better to have it running an older version than to have it not running at all.
I hope you'll understand my issue; sorry for my English.
Julian
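One pattern that fits these requirements is stage-verify-swap: download everything to a staging area, verify integrity, and only switch to the new version once every check has passed, so a failed download never touches the running configuration. A minimal sketch of the decision logic; the names and the checksum scheme are assumptions, not any particular tool's API:

```python
import hashlib

def verify(payload, expected_sha256):
    """Integrity check on the staged download."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

def apply_upgrade(active_config, staged_payload, expected_sha256):
    """Return the new active config, or keep the old one if
    verification fails (the 'rollback' is simply never switching)."""
    if not verify(staged_payload, expected_sha256):
        return active_config        # stay on the old, working version
    return staged_payload           # atomic switch to the staged version

old = b"config-v1"
new = b"config-v2"
good_sum = hashlib.sha256(new).hexdigest()
print(apply_upgrade(old, new, good_sum))   # b'config-v2' (upgraded)
print(apply_upgrade(old, new, "0" * 64))   # b'config-v1' (corrupt download)
```

On a real server the "switch" would be an atomic filesystem operation (e.g. a symlink flip), which gives the same property: an interrupted upgrade leaves the previous version in place.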

How to deal with overwhelming Jenkins updates to core and plugins?

I love Jenkins and appreciate that it is an active project. Still, I don't know what the correct approach for maintaining a Jenkins installation would be, because I see updates to Jenkins and its plugins daily, and applying them takes too much time.
Is there a way to automate this or a LTS version that I can use instead?
The Jenkins team do have a concept of LTS releases, so take a look at this Wiki: https://wiki.jenkins-ci.org/display/JENKINS/LTS+Release+Line
As for automating updates, you can do it if you've installed Jenkins using a package manager at the OS level. For instance, on Ubuntu you could have a cron job that calls apt-get update and apt-get install jenkins at midnight. I'm not sure about automating it if you've installed Jenkins manually.
However, automatic updates have a bad side, as essential plugins could potentially stop working with new updates, or bugs that have slipped through the net could cause problems.
Having said that, the quality of Jenkins seems consistently good, so it might be worth the risk for you.
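If you do script the check, compare version strings numerically rather than lexically, since plain string comparison gets cases like "1.10" vs "1.9" wrong. A small sketch (the version numbers are examples, not real Jenkins releases):

```python
def parse_version(v):
    """Split a dotted version string into a tuple of integers
    so comparisons are numeric, not lexicographic."""
    return tuple(int(part) for part in v.split("."))

def update_available(installed, latest):
    return parse_version(latest) > parse_version(installed)

print(update_available("1.609.1", "1.609.3"))  # True
print(update_available("1.9.1", "1.10.0"))     # True (string compare would say False)
```

That makes it easy to gate the cron job so it only acts when the LTS line has actually moved, rather than on every daily release.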
As far as I know there isn't a way to automate the update.
However, given the fact that an update could (in theory, Jenkins has been completely stable in my experience) break something in your build process, I don't think automating would be an appropriate solution.
What seems to work well for me is to periodically do the updates manually and then re-run all of the build jobs to verify that nothing has broken. I do this as part of our regular maintenance when we do VM backups and operating system updates on our CI environment. (The updates are done after the backups so that if something does go wrong, we have an up-to-date fallback point.)

Remote installation of Stored Procs on MySQL

I'm just setting up the live environment for my new project. Unlike my dev and testing systems, the live environment consists of a web server (Win 2003) and a separate DB server (MySQL).
My installation process for each release of the software is nicely scripted, giving me full rollback options etc.
However, I can't work out how to install my stored procedures within that process. I can't run the MySQL command line because MySQL isn't installed on the web server; it only accesses the DB via ODBC.
Is there a means by which I can run MySQL commands on the web server, via ODBC, from a command line? I really want to keep it all together so I can run "Install v123" and everything whizzes off and gets installed in one go.
There may be a more elegant solution, but: I had a very similar problem a number of years ago, and I eventually just wrote a little stand-alone program to run my scripts at the end of the install.
Another common option is to have them run as part of a configuration utility/page the user goes to after setup, but I'm assuming you want to keep this as a one-step installation.
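One wrinkle with the stand-alone-runner approach: an ODBC connection executes one statement at a time and does not understand the mysql client's DELIMITER command, which stored-procedure scripts typically rely on. So the runner has to split the script itself before sending each statement over the connection (e.g. with pyodbc). A deliberately naive sketch of that splitting - it ignores delimiters inside string literals, but shows the idea:

```python
def split_script(script):
    """Split an install script into individual SQL statements,
    honoring DELIMITER changes the way the mysql CLI would."""
    delimiter = ";"
    statements, current = [], []
    for line in script.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("DELIMITER "):
            delimiter = stripped.split(None, 1)[1]
            continue
        current.append(line)
        if stripped.endswith(delimiter):
            stmt = "\n".join(current).strip()
            statements.append(stmt[:-len(delimiter)].rstrip())
            current = []
    return [s for s in statements if s]

script = """
DELIMITER $$
CREATE PROCEDURE demo() BEGIN SELECT 1; END$$
DELIMITER ;
DROP PROCEDURE IF EXISTS old_demo;
"""
for stmt in split_script(script):
    print(stmt[:40])
```

Each returned statement can then be executed in turn over the ODBC connection, keeping the whole thing inside the scripted "Install v123" process.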