Why is my "Component Services" window so slow? - configuration

I have a newly configured Windows Server 2003 VM.
One of the issues with the old VM, which persists on the new one, is that whenever I open "Component Services" from "Administrative Tools", the performance is very bad. It takes several minutes to create new COM+ applications and add components, where it used to take only a few seconds.
I have many components to install and multiple VMs to do this on. Why would it be so slow and what can I do to make it faster?
It used to run just fine. I wonder if it could have something to do with the anti-virus software in the office...

I found a solution in my particular case. My machine is on Domain X and my user account is on Domain Y.
The solution was to log in as a user on Domain X, the same domain as the machine. This made the Component Services window respond almost instantaneously. I'm not entirely sure why this is, though.

Related

Angular SPA for Offline Use (with DB)

I am developing an invoice app with Angular + Node.js + MySQL.
The thing is, the app is planned to be used by one employee in his office. No need for online servers.
It is not problematic to deploy the app online, but the internet is unstable in the area (a common Latin American problem: you may lose the connection for hours, and voltage variations may even shut down the PC).
So the app must be self-sufficient and always work offline.
So my questions are:
Can I simply deploy the app offline, i.e. locally? If so, I would need everything to be initialized automatically when the user opens the app (server started, database connected...).
If I have no choice but to deploy the app online, should I use Firebase? Also, what happens if the internet service shuts down for hours? Is there a way for the database to be available offline and sync when the internet comes back?
You could build the app as an Electron app; then it becomes a locally run program. https://www.electronjs.org/
You can host it anywhere, but turn the app into a PWA, which means it will work locally in the browser after one successful visit (it gets installed with a service worker in the browser). For the database itself, you can store data in the browser, but some mechanisms (localStorage / sessionStorage) are limited to around 5 MB of data; IndexedDB allows more. Firebase does have some locally cached data, but if the browser is closed it can be lost.
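To make the browser-storage option concrete, here is a minimal TypeScript sketch that persists records in IndexedDB; the database name "invoice-db" and store name "invoices" are made-up examples, not anything from the original question:

// Open (or create) a local IndexedDB database; the names are hypothetical.
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('invoice-db', 1);
    // Runs on first use or version bump: create the object store.
    request.onupgradeneeded = () => {
      request.result.createObjectStore('invoices', { keyPath: 'id' });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Store one record; resolves when the transaction commits.
async function saveInvoice(invoice: { id: number; total: number }): Promise<void> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('invoices', 'readwrite');
    tx.objectStore('invoices').put(invoice);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}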
If it needs to run locally I would go the Electron route. It's slightly harder to do, but it fits your use case better.
You can use both approaches. If you want sync-like behavior, hold the data in localStorage or IndexedDB while your internet is not working, and push it to the server once the connection returns.
It is also fine to deploy locally, or to make one dedicated server that is always on, so anybody on the same network can use the Angular app easily.
Just take care of a backup plan: if your system gets corrupted, you should have a proper backup of the database for that scenario.
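Returning to the hold-and-sync idea above, a rough TypeScript sketch (the localStorage key and the /api/invoices endpoint are hypothetical, not from the question):

// Queue writes locally while offline; flush when the browser reports connectivity.
const QUEUE_KEY = 'pending-invoices'; // hypothetical key

function queueWrite(record: object): void {
  const queue: object[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? '[]');
  queue.push(record);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

async function flushQueue(): Promise<void> {
  const queue: object[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? '[]');
  for (const record of queue) {
    // Hypothetical endpoint; replace with the app's real API.
    await fetch('/api/invoices', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(record),
    });
  }
  localStorage.setItem(QUEUE_KEY, '[]');
}

// Flush whenever connectivity comes back.
window.addEventListener('online', () => { void flushQueue(); });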

How to set up and save QEMU running options

I'm using QEMU to replace Bochs (since Bochs is no longer updated).
In Bochs, I can save the running settings into files and reload them. Furthermore, a table of running options is listed while booting up.
I'm wondering if I can do the same with QEMU: save running settings such as the CPU model and other things into some file and reload it the next time I run the emulation.
And is there a full table of running options somewhere, so I can get a complete view of which options I can set?
Thanks a lot!
For this sort of UI and management of VMs you should look at a "management layer" program that sits on top of QEMU. libvirt's "virt-manager" is one common choice here. A management layer will generally allow you to define options for a VM and save them, so you can start and stop that VM without having to specify all the command line options every time. It will also configure QEMU in a more secure and performant way than you get by default, something that would otherwise require rather long QEMU command lines.
QEMU itself doesn't provide this kind of facility because its philosophy is to just be the low-level tool which runs a VM, and leave the UI and persistent-VM-management to other software which can do a better job of it.
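To illustrate the difference, a sketch (the disk path, VM name, and resource sizes are made up): running QEMU directly means repeating every option, while a libvirt-managed VM is defined once and started by name:

# Bare QEMU: every setting goes on the command line, every time
# (-cpu host assumes KVM is available).
qemu-system-x86_64 -machine q35 -cpu host -m 2048 \
  -drive file=/vms/disk.qcow2,format=qcow2,if=virtio

# With libvirt: define the VM once (virt-manager writes this XML for you),
# then start and stop it by name.
virsh define myvm.xml
virsh start myvm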

Gnome 3 automatic execution of a script that needs network

My old father is using Ubuntu GNOME. He has no static IP address. In order to perform remote administration, I need to know his IP. I was using a free DynDNS account (configured in the ADSL modem), but this will stop working in a couple of days.
I would like to run a script each time he logs in, to publish his IP on my website. I have tried putting a script in the boot sequence, but the network is not available at that point. It seems that it is GNOME 3 that starts the network, but I do not know much about GNOME 3.
What should I do to have my script run automatically as soon as the network is available?
One possible, inelegant solution for this is to put your script in his cron to run every X minutes :)
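For instance, a crontab entry like the following (the script path is hypothetical) would retry every five minutes:

*/5 * * * * /home/dad/publish-ip.sh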
Looking through /etc/NetworkManager/, it looks like there is a folder dispatcher.d that I think will do what you want. Just experiment with a bash/perl/python (whatever) script in there and set the permissions appropriately. You can find the UUID in the system-connections/ folder. More information is available in man NetworkManager.
EDIT: Look what I found: https://askubuntu.com/questions/13963/call-script-after-connecting-to-a-wireless-network. Seems like this is exactly what you want.
The easiest way is to use another dynamic DNS service. I used to run my own. You could also put a curl or wget command in cron, or create a systemd service that calls that command periodically. As the target you would have to use your own machine with a web server, where you can see the IP in your logs.
It is not GNOME that connects the network; it is a system service called NetworkManager. It tries to connect at boot if possible. In some cases it waits for a wireless signal, in other cases it waits for a user password. I recently verified that in Fedora, NetworkManager properly implements systemd's network-online.target, but it may have yet to be fixed in other distributions; see the upstream bug report:
https://bugzilla.gnome.org/show_bug.cgi?id=728965
If you want to run a system service just after boot, you need to use:
[Unit]
...
Wants=network-online.target
After=network-online.target
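As a fuller sketch, a complete unit might look like the following (the description, script path, and unit name are hypothetical):

[Unit]
Description=Publish this machine's IP address
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/publish-ip.sh

[Install]
WantedBy=multi-user.target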
You could also just run a script that calls nm-online at the beginning to wait for network connectivity, provided you can expect the connectivity to come up in a reasonable time; otherwise it times out. Such a script can be run from any environment, including a user session.
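A minimal sketch of such a script, assuming the publishing step is just fetching a URL on your server (the URL is hypothetical):

#!/bin/sh
# Block until NetworkManager reports connectivity (or the default timeout expires).
nm-online -q || exit 1
# Hit your web server so the IP shows up in its access logs.
curl -s http://example.com/publish-ip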
And, as noted already, you can put a script into /etc/NetworkManager/dispatcher.d that will be called on any network configuration change; such a script can then filter for connection-up events and start the notification script.
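A sketch of such a dispatcher script (the file name and URL are made up; the script must be executable and owned by root):

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-publish-ip
# NetworkManager calls dispatcher scripts with two arguments: interface and action.
if [ "$2" = "up" ]; then
    curl -s http://example.com/publish-ip
fi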

How to test ApplicationData.Current.DataChanged

I'm developing a Windows Store app and I want to test roaming settings. I developed the app using VS2012 on one of my machines and installed it using PowerShell on another machine, then changed the roaming settings, but nothing happened. What should I do?
If you have the application installed on two machines, then when you change the roaming settings on one machine they will eventually propagate to the other machine.
To test this, debug both applications simultaneously and place a breakpoint inside the handler you attached to ApplicationData.DataChanged (http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.applicationdata.datachanged).
Now when you change the roaming settings inside one of the applications, the application on the other machine should break when it receives the data.
Be aware that normal roaming settings can take anywhere from 5-15 minutes to propagate, or longer in some cases. For testing, it is easier to send high-priority data; this should take much less time, hopefully less than a minute.
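A rough C# sketch of both sides, assuming the special "HighPriority" roaming-settings key (the value itself is made up):

// Writer (machine A): the special "HighPriority" key roams faster than normal settings.
var roaming = Windows.Storage.ApplicationData.Current.RoamingSettings;
roaming.Values["HighPriority"] = "test-value";

// Reader (machine B): put the breakpoint inside this handler.
Windows.Storage.ApplicationData.Current.DataChanged += (sender, args) =>
{
    var value = sender.RoamingSettings.Values["HighPriority"];
    System.Diagnostics.Debug.WriteLine(value);
};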

ExpressionEngine : git : local development : remote database

To those of you that are trying to be good little developers and version control their ExpressionEngine sites with git, how do you handle your database?
In my limited experience with multiple developers working on one ExpressionEngine site, we've all had to run off of a single MySQL development database on a remote web server. For those of you who have tried this, it is PAINFULLY slow. Page loads can easily take 5-10 seconds, making development extremely difficult. It would be quicker to work directly on a remote development server, but I am trying to steer away from working off of a remote MySQL server in order to be able to work from anywhere and not depend on Internet connection speed/quality.
Just wondering how others handle their MySQL databases.
Do all of your developers run off of one central database? Have you dealt with slowness issues like we have?
Do you keep your database under version control? How do you handle export/imports among multiple developers and multiple branches?
With one developer I can import/export/commit the database very easily, but as soon as you add another developer to the mix, it gets very, VERY muddy. Looking forward to hearing everyone's thoughts on this mammoth topic.
Thanks!
It seems a lot of time is lost on failing DNS requests when the database is remote.
Start your MySQL server (mysqld) with --skip-name-resolve. (More information on this topic can be found here: http://dev.mysql.com/doc/refman/5.0/en/host-cache.html)
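Equivalently, if you configure the server through an option file rather than the command line, a minimal sketch:

[mysqld]
# Skip the reverse-DNS lookup MySQL otherwise performs for each new connection.
skip-name-resolve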
Having a remote database still seems to be the best way for us to work on a project with multiple developers.
I almost always use a central database for development. Depending on which host you use, the speed difference may not be huge.
Obviously, if you're not making changes to the database, i.e. only doing template development, keeping the database in sync matters less, so you could potentially bring up a local copy of the database. You just have to remember to repeat any database changes on the central copy, if you do end up making some.
As far as version control, I keep a copy of my base EE install's SQL file in my base repository. Other than that I don't usually keep copies of the database in Git, so I don't do a lot of importing/exporting, etc.
Have you looked at the EE Profiler recently? You'll probably notice in the neighborhood of 20-80 queries on your home page, depending on its complexity.
The problem is that, for each query, MySQL must execute a remote request for data, download the response, and then present ExpressionEngine its data. The 20-80 round trips to the database are what's causing your delay, and I don't think there is much you can do about it. When using a remote (outside our network) database, I get the same delay as you.
When MySQL is running on your machine or the production server, it doesn't have the added network requests causing latency in its requests for data. This is the difference.
As for fixes, all you can do is move to a database hosted on your internal network. We have a Linux machine that mimics our production environment that we use for staging. Since it's on our network, we can use the local IP address in our database.php file. This is much faster.
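For example, assuming ExpressionEngine 2's config/database.php and a made-up internal address, that one line looks like:

$db['expressionengine']['hostname'] = '192.168.1.50'; // staging box on the local network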
The problem that we still have is the issue of channels/fields/entries. When a developer is working on a new section, they'll likely need to create a new channel and fields and/or new entries. When we're ready to push that functionality to production, we have to manually make those changes on the production server, as there is no way to reliably export them. I am hopeful about this add-on, though; we'll see.
In my company (4 developers) we each run our own DB locally. But recently I tested Rackspace Cloud Databases (there are other cloud DB providers too) for a heavy DB that could become difficult to run on a small laptop. It's less expensive than running our own DB server, and it can be set up or deleted within a minute.