WebStorm re-indexing monorepo

I am working on a huge monorepo application that uses Rush and pnpm as its monorepo managers.
When I open the project, check out a branch, or run rush update, many files change, my IDE slows down, and it consumes a lot of CPU on indexing.
How can I make it faster?
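One mitigation worth trying (a sketch, not an official fix): exclude the heavy generated directories, such as Rush's common/temp folder, so the IDE skips them while indexing. In WebStorm you can use Mark Directory as | Excluded, which records excludeFolder entries in the project's .iml module file; the file name and paths below are assumptions about a typical Rush layout:

    <!-- .idea/<project>.iml — assumed paths for a typical Rush monorepo -->
    <module type="WEB_MODULE" version="4">
      <component name="NewModuleRootManager">
        <content url="file://$MODULE_DIR$">
          <!-- skip Rush's temp/install output so it is never indexed -->
          <excludeFolder url="file://$MODULE_DIR$/common/temp" />
        </content>
      </component>
    </module>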

Should I install MariaDB to my git repo, or locally / individually to all development computers?

I am starting work on a website that will use MariaDB for storing information (no sensitive information), and would like to keep everything in my git repo.
Originally, I planned on installing MariaDB on the separate computers that I plan to develop on (my desktop and laptop), but decided that it might be easier to store all of MariaDB (the program and the databases) in the git repo, so that one would just need to clone the repo and run MariaDB from the repo just like one would run Node. However, I have not found any information on how to do this.
My questions are (1) should I install MariaDB and its databases to my git repo, instead of installing MariaDB in /usr, and the database in /var/lib/mysql, and (2) how would one do that?
Instead of attempting to put a MariaDB runtime environment inside your version control, consider using Docker to describe how to configure an appropriate MariaDB installation. I use makefiles on top of that to contain the commands I use to build and run the Docker containers, but you could just as easily use a shell script. Finally, provide a database load script that loads your test database from a text file within the repo.
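A minimal sketch of that approach, assuming the official mariadb image and a seed.sql test-data file kept in the repo (the container name, password, database name, and version tag are all illustrative):

    # run.sh — start a disposable MariaDB container for development.
    # The official image executes any *.sql file found in
    # /docker-entrypoint-initdb.d when the data directory is first created,
    # so mounting seed.sql there loads the test database automatically.
    docker run -d --name dev-mariadb \
      -e MARIADB_ROOT_PASSWORD=devpass \
      -e MARIADB_DATABASE=myapp \
      -v "$PWD/seed.sql:/docker-entrypoint-initdb.d/seed.sql:ro" \
      -p 3306:3306 \
      mariadb:10.11

Only run.sh and seed.sql live in git; the binaries and database files never do.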
Using Docker to describe runtime environments for your application and dependencies is awesome. It strikes a great balance between having an incomplete git repo and having to put binaries and database data in your version control. You wouldn't want to track changes to the underlying MariaDB files anyway, so it's best not to commit them. You can build the Docker containers you need on every workstation you use without much trouble, your automation around creating them provides a mechanism to ensure consistency, and by loading a database with the correct test data every time you develop your app, you'll have a better development process and fewer schema- and data-related changes. It works great; I do nothing but Docker-driven development these days.

Magento EE database has no triggers

I inherited a heavily-customized Magento EE project that has been through multiple stages of disaster. The production database has never been pulled down to the lower environments in the roughly two years of the project. It appears the production database has no triggers defined, but all the lower databases (dev, test, etc.) do have triggers, which is what you'd expect in a Magento EE project.
At this point I'm not even sure how the application is still running on production. I'm loading a triggerless mysqldump that I took from prod into another environment now to see if the database actually works.
Has anybody ever seen this before? How would this even happen? Maybe the project started out on CE and then was upgraded to EE and the upgrade failed partially? I'm at a loss.
As far as I can tell, this was caused by the upgrade path from CE to EE; I guess the previous consultants hacked the upgrade process to not create triggers, or it failed and they didn't notice. Triggers are only necessary if you're reindexing via cron job, not after save, so the app still runs OK.
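For anyone checking the same symptom, a quick sketch (the schema name, host, and credentials are assumptions) of how to compare trigger counts across environments, and one accidental way a triggerless dump gets made:

    # count the triggers defined in each environment's Magento schema
    mysql -h prod-db -u magento -p -N -e \
      "SELECT COUNT(*) FROM information_schema.TRIGGERS
       WHERE TRIGGER_SCHEMA = 'magento';"

    # mysqldump includes triggers by default; --skip-triggers omits them,
    # so a dump taken this way silently produces a triggerless database
    mysqldump --skip-triggers -u magento -p magento > dump.sql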

Developing OpenShift cartridges that need large binaries

When developing an OpenShift cartridge, what is the standard way to handle large binaries that it will need? Let's say a 100 MB file. Searching around the web, I saw a couple of cartridges that had their required binaries in the git repository, but I thought that was generally considered to be a bad idea from a git perspective.
You can either add it to your repo as you mentioned, or host the file elsewhere and wget it as part of your install scripts.
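A sketch of the second option, for a cartridge install script (the URL and checksum are placeholders, and $OPENSHIFT_DATA_DIR assumes the usual OpenShift v2 environment):

    # fetch the large binary at install time instead of tracking it in git,
    # and verify its integrity before unpacking
    BIN_URL="https://example.com/downloads/mytool-1.0.tar.gz"
    BIN_SHA256="<expected sha256 of the tarball>"

    wget -O /tmp/mytool.tar.gz "$BIN_URL"
    echo "$BIN_SHA256  /tmp/mytool.tar.gz" | sha256sum -c -
    tar -xzf /tmp/mytool.tar.gz -C "$OPENSHIFT_DATA_DIR"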

How to deal with overwhelming Jenkins updates to core and plugins?

I love Jenkins and appreciate that it is an active project. Still, I don't know what the correct approach to maintaining a Jenkins installation would be, because I see updates to Jenkins and its plugins daily, and applying them takes too much time.
Is there a way to automate this, or an LTS version that I can use instead?
The Jenkins team do have a concept of LTS releases, so take a look at this Wiki: https://wiki.jenkins-ci.org/display/JENKINS/LTS+Release+Line
As for automating updates, you can do it if you've installed Jenkins using a package manager at the OS level. For instance, on Ubuntu you could have a cron job that calls apt-get update and apt-get install jenkins at midnight. I'm not sure about automating it if you've installed Jenkins manually.
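A hedged sketch of that cron approach (assuming Jenkins was installed from the official apt repository):

    # /etc/cron.d/jenkins-update — pull in the latest Jenkins package nightly
    0 0 * * * root apt-get update -q && DEBIAN_FRONTEND=noninteractive apt-get install -y -q jenkins

Note this only updates the core package; plugin updates still happen through the Jenkins UI or its CLI.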
However, automatic updates have a bad side, as essential plugins could potentially stop working with new updates, or bugs that have slipped through the net could cause problems.
Having said that, the quality of Jenkins seems consistently good, so it might be worth the risk for you.
As far as I know there isn't a way to automate the update.
However, given the fact that an update could (in theory, Jenkins has been completely stable in my experience) break something in your build process, I don't think automating would be an appropriate solution.
What seems to work well for me is to periodically do the updates manually and then re-run all of the build jobs to verify that nothing has broken. I do this as part of our regular maintenance, when we do VM backups and operating system updates on our CI environment. (The updates are done after the backups so that if something does go wrong, we have an up-to-date fallback point.)
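If you're not taking full VM snapshots, a file-level sketch of the same fallback idea (the paths and service name assume a standard Linux package install):

    # snapshot JENKINS_HOME before applying core/plugin updates, so a bad
    # update can be rolled back to the pre-maintenance state
    sudo service jenkins stop
    sudo tar -czf "/backups/jenkins-$(date +%F).tar.gz" -C /var/lib jenkins
    sudo service jenkins start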

NAnt SpecFlow tests are running slowly through CruiseControl

I've been running SpecFlow tests using WatiN, NAnt, and CruiseControl. Whenever I build the project through CruiseControl, the SpecFlow tests take forever. But if I remote into the server, build the project manually, and run the SpecFlow tests manually, that no longer occurs. It seems like steps which navigate using the GoTo method in WatiN take forever, normally 100+ seconds, but they don't seem to be timing out, because the tests pass anyway.
Anybody have this problem before? What did you do to solve it?