Gulp serve takes too long to build - gulp

After creating the project using yo teams and running the gulp serve command, the build takes 1-2 minutes on every save. I don't know the root cause; I have disabled several things and tried other options to speed it up, but with no success so far. I have not changed anything in gulpfile.js or webpack.config.js; both still have the default configuration generated by the scaffolding.
Any help on how to make the build process faster would be appreciated. Right now I have to wait for every save to rebuild, which is a real pain when the build takes this long.
Please refer to the image below, where you can see that on every save "webpack:client" starts and takes ages to complete. In the worst case it takes about 1.5 minutes to rebuild, when it should only take a couple of seconds (1-2 s) or less.
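One thing worth checking while you debug this: whether the watch build is running with development-oriented webpack settings. Below is only a rough sketch of standard webpack options you could try merging into the generated webpack.config.js; the option names are stock webpack, but the structure of the yo teams config may differ, so adapt rather than copy.

// Hypothetical dev-only tweaks to merge into the generated webpack.config.js.
// All option names are stock webpack; the shape of the yo teams config may differ.
module.exports = {
    mode: 'development',             // skip minification and other production-only optimizations
    devtool: 'eval',                 // cheapest source maps, fastest to regenerate on each save
    watchOptions: {
        ignored: /node_modules/,     // don't trigger rebuilds when dependencies change on disk
        aggregateTimeout: 300        // debounce rapid successive saves (milliseconds)
    }
};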

Related

SSIS ETL solution needs to import 600,000 small simple files every hour. What would be optimal Agent scheduling interval?

The hardware, infrastructure, and redundancy are not in the scope of this question.
I am building an SSIS ETL solution that needs to import ~600,000 small, simple files per hour. With my current design, SQL Agent runs the SSIS package, which picks up a batch of "n" files and processes them.
Number of files per batch “n” is configurable
How often the SQL Agent job executes the SSIS package is configurable
I wonder whether the above approach is the right choice, or whether I should instead have an infinite loop in the SSIS package that keeps taking and processing files.
So the question boils down to a choice between an infinite loop and batch + schedule. Is there any other, better option?
Thank you
In a similar situation, I run an agent job every minute and process all files present. If the job takes 5 minutes to run because there are a lot of files, the agent skips the scheduled runs until the first one finishes, so there is no worry that two processes will conflict with each other.
Is SSIS the right tool?
Maybe. Let's start with the numbers
600000 files / 60 minutes = 10,000 files per minute
600000 files / (60 minutes * 60 seconds) = 167 files per second.
Regardless of what technology you use, you're looking at some extremes here. Windows NTFS starts to choke around 10k files in a folder, so you'll need to employ some folder strategy to keep that count down, in addition to regular maintenance.
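For instance, "some folder strategy" can be as simple as bucketing incoming files into subfolders keyed on a prefix of the file name, so no single directory ever accumulates more than a few thousand entries. A rough Node.js sketch of the idea, with hypothetical paths (the same approach works from PowerShell, SSIS, or anything else):

// Bucket files by the first two characters of their name so no single NTFS
// directory grows past a few thousand entries. Paths are hypothetical.
const fs = require('fs');
const path = require('path');

const landingDir = 'C:/etl/incoming';   // hypothetical flat landing folder
const bucketRoot = 'C:/etl/buckets';    // hypothetical bucketed layout

for (const name of fs.readdirSync(landingDir)) {
    const bucket = path.join(bucketRoot, name.slice(0, 2).toLowerCase());
    fs.mkdirSync(bucket, { recursive: true });   // create the bucket folder if it doesn't exist
    fs.renameSync(path.join(landingDir, name), path.join(bucket, name));
}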
In 2008, the SSIS team managed to load 1 TB in 30 minutes, all sourced from disk, so SSIS can perform very well. It can also perform really poorly, which is how I've managed to gain ~36k SO Unicorn points.
Six years is a lifetime in the world of computing, so you may not need to take such drastic measures as the SSIS team did to set their benchmark, but you will need to look at their approach. I know you've stated that the hardware is outside the scope of discussion, but it is very much part of the picture. If the file system (SAN, NAS, local disk, flash, or whatever) can't serve 600k files an hour, then you'll never be able to clear your work queue.
Your goal is to get as many workers as possible engaged in processing these files. The Work Pile Pattern can be pretty effective to this end. Basically, a process asks: Is there work to be done? If so, I'll take a bit and go work on it. And then you scale up the number of workers asking and doing work. The challenge here is to ensure you have some mechanism to prevent workers from processing the same file. Maybe that's as simple as filtering by directory or file name or some other mechanism that is right for your situation.
I think you're headed down this approach already, based on your problem definition with the agent jobs that handle N files, but I wanted to give your pattern a name for further research.
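To make the "prevent two workers from processing the same file" part concrete, here is a rough sketch of one common mechanism: each worker claims a file by atomically renaming it into its own in-progress folder before doing any work, so a given file can only ever be claimed once. The sketch is Node.js purely for illustration (the idea carries over to SSIS, PowerShell, or anything else), and the folder names and processFile helper are hypothetical placeholders.

// Work pile sketch: claim a file by renaming it into this worker's private
// folder; the rename either succeeds (the file is ours) or throws because
// another worker already claimed it. Paths and processFile are hypothetical.
const fs = require('fs');
const path = require('path');

const workDir = 'C:/etl/work';                                // shared pile of files to process
const claimDir = `C:/etl/inprogress/worker-${process.pid}`;   // this worker's private folder
fs.mkdirSync(claimDir, { recursive: true });

function processFile(filePath) {
    // placeholder: parse the file and load its contents into the database
}

for (const name of fs.readdirSync(workDir)) {
    try {
        fs.renameSync(path.join(workDir, name), path.join(claimDir, name));   // atomic claim (same volume)
    } catch (err) {
        continue;                                             // another worker got there first
    }
    processFile(path.join(claimDir, name));
    fs.unlinkSync(path.join(claimDir, name));                 // done; delete (or archive) the file
}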
I would agree with Joe C's answer - schedule the SQL Agent job to run as frequently as needed. If it's already running, it won't spawn a second process. Perhaps you're going to have multiple agents that all start every minute - AgentFolderA, AgentFolderB... AgentFolderZZH - and each launches a master package that then has subprocesses looking for work.
Use a WMI Event Watcher Task to detect when a new file has arrived; as the next step you can call the job scheduler to execute the SSIS package, or execute the package directly.
More details on the WMI Event Watcher Task:
https://msdn.microsoft.com/en-us/library/ms141130%28v=sql.105%29.aspx

Make periodic task occur every 2 seconds

I need to check regularly whether a new message has been received, because the API service I am integrating with does not have a push notification service. How do I set how often a periodic task runs?
I have the boilerplate code (e.g. http://www.c-sharpcorner.com/uploadfile/54f4b6/periodic-and-resourceintensive-tasks-in-windows-phone-mango/) from the examples on the internet, but it seems it can only run roughly every 30 minutes?
Unfortunately, periodic tasks run no more often than every 30 minutes, and they are not even guaranteed to run. If you want to run more often than that, your only bet is setting up a push notification service...

CakePHP 2.2 / AclExtras / aco_sync takes ages to execute

I have an intranet web application which uses AclExtras.
When I execute in the shell
./Console/cake AclExtras.AclExtras aco_sync
it takes about 4-5 minutes (!!!).
Is there maybe something not correctly set up?
Well, it could take that much time simply because regenerating all the permissions really does need that long, or:
If you're running it on an old or underpowered server, it could simply run slowly.
If you have problems on the DB server due to high load.
If you custom-coded something, it could run slower if it's not optimized.
.... Whatever else
There is no way to say why it takes 4-5 minutes if you do not provide any code and more detailed setup information; on the other hand, it could just be so resource-intensive that the shell genuinely needs those minutes to complete its task... So a question like "it takes about 4-5 minutes (!!!). Is there maybe something not correctly set up?" is not much to go on in this case. Please check this out on how to ask questions.

Pages stop responding with ruby on rails and mysql

I'm using the Thin server + Ruby on Rails + MySQL, and I have several cronjobs that do heavy processing on the database every hour (the scripts take about 1-2 minutes to finish).
When the cronjobs run, the website stops loading and only responds AFTER the cronjob finishes.
So my question is: how can I make everything independent, asynchronous, or parallel, so that the website loads normally while the cronjobs are running?
Any links to guides, or general advice much appreciated.
Update:
I'm sorry I can't share the cronjob's code, but basically it runs several thousand queries like:
SELECT 1 AS one FROM `table` WHERE `table`.`type_id` = BINARY '1251625345_4146645145056' LIMIT 1
and then several thousand INSERTs when the previous query returns nothing (meaning the entry doesn't exist).
Does your cron job ask the Rails app to do those selects/inserts?
If so,
Thin is a single-threaded web server. It can't handle more than one request simultaneously. Your cronjob should start another Ruby process (e.g. a Rake task) to do the job.

How to benchmark and optimize a really database-intensive Rails action?

There is an action in the admin section of a client's site, say Admin::Analytics (which I did not build but have to maintain), that compiles site usage analytics by performing a couple dozen rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled. But the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350% CPU on the quad-core production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, using the production data, it loads in about 10-12 seconds (and utilizes ~150% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report how long these requests took, which averages out to a few minutes apiece (although they seem to have caused mysqld to stall for much longer than that, prompting me in one instance to ask our host to reboot mysqld just to get our site back up).
I suppose I could try upping the log level in production to capture info on the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production, because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: Obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
Particularly, query_reviewer and newrelic have been a life-saver for me.
I appreciate all the help with this, but what turned out to be the fix was adding a couple of indexes on the analytics table to cater to the queries in this action. A simple Rails migration added the indexes, and the action now loads in less than a second both on my dev box and in prod!