I'm developing a Windows Store app and want to test roaming settings. I developed the app using VS2012 on one of my machines, installed it on another machine using PowerShell, then changed the roaming settings, but nothing happened. What should I do?
If you have the application installed on two machines and you change the roaming settings on one machine, the change will eventually propagate to the other machine.
To test this, debug both applications simultaneously and place a breakpoint inside the handler you attached to ApplicationData.DataChanged (http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.applicationdata.datachanged).
Now when you change the roaming settings inside one of the applications, the application on the other machine should hit the breakpoint when it receives the data.
Be aware that normal roaming settings can take anywhere from 5 to 15 minutes to arrive, or longer in some cases. For testing, it is easier to send high-priority data (store the value under the setting name "HighPriority" in RoamingSettings); this should take much less time, hopefully under a minute.
I am developing an invoice app with Angular + Node.js + MySQL.
The app is planned to be used by one employee in his office, so there is no need for online servers.
Deploying the app online is not a problem in itself, but the internet in the area is unstable (a common problem in Latin America: you may lose the connection for hours, and there are even voltage variations that can shut down the PC).
So the app must be self-sufficient and always work offline.
So my questions are:
Can I simply deploy the app offline, i.e. locally? If so, I would need everything to be initialized automatically when the user opens the app (server started, database connected, and so on).
If I have no choice but to deploy the app online, should I use Firebase? Also, what happens if the internet service shuts down for hours? Is there a way for the database to be available offline and sync when the internet comes back?
You could build the app as an Electron app; then it becomes a locally run program. https://www.electronjs.org/
You can host it anywhere and turn the app into a PWA, which means it will keep working locally in the browser after one successful visit (a service worker gets installed in the browser). For the database itself, you can store data in the browser, but some mechanisms (localStorage / sessionStorage) are limited to around 5 MB, while IndexedDB allows more. Firebase does keep some locally cached data, but it can be lost if the browser is closed.
If it needs to run locally, I would go the Electron route. It's slightly harder to do, but it fits your use case better.
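The browser-storage limits mentioned above can be handled defensively. A minimal sketch (the function name is illustrative, not from any library); `storage` is injected so it works with `window.localStorage` in the browser or a stub elsewhere:

```javascript
// Write-through helper that catches the quota error browsers throw
// when localStorage (often capped around 5 MB) fills up, so the app
// can fall back to IndexedDB or warn the user instead of crashing.
function trySave(storage, key, value) {
  try {
    storage.setItem(key, JSON.stringify(value));
    return true;
  } catch (err) {
    // In a browser this is typically a QuotaExceededError
    return false;
  }
}
```

In the browser you would call it as `trySave(window.localStorage, "invoice:42", invoiceObject)` and switch to IndexedDB when it returns false.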
You can use both approaches. If you want sync-like behavior, hold the data in localStorage or IndexedDB while the internet is down, then push it to the server once the connection returns.
It is also fine to deploy locally, or to set up one dedicated, always-on server so that anybody on the same network can use the Angular app.
Just take care of a backup plan: you should have a proper backup of the database for the scenario where the system gets corrupted.
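The buffer-then-sync idea above can be sketched as a small write queue (all names here are hypothetical, not a specific library): writes go straight to the server while online, and are queued and flushed when connectivity returns.

```javascript
// Offline-first write queue: buffers records while offline and
// flushes them to the server when connectivity comes back.
class OfflineQueue {
  constructor(sendFn) {
    this.sendFn = sendFn; // async function that pushes one record to the server
    this.pending = [];    // records waiting for connectivity
    this.online = false;
  }

  // Call this for every write (e.g. a new invoice).
  async save(record) {
    if (this.online) {
      try {
        await this.sendFn(record);
        return "sent";
      } catch (err) {
        // network dropped mid-request: fall back to buffering
      }
    }
    this.pending.push(record);
    return "queued";
  }

  // Call this from an 'online'/'offline' event listener or a periodic ping.
  async setOnline(isOnline) {
    this.online = isOnline;
    let flushed = 0;
    while (this.online && this.pending.length > 0) {
      try {
        await this.sendFn(this.pending[0]);
      } catch (err) {
        break; // connectivity lost again; retry on the next 'online' event
      }
      this.pending.shift();
      flushed += 1;
    }
    return flushed;
  }
}
```

In a real app you would persist `pending` to localStorage or IndexedDB so queued invoices survive a power cut, and wire `setOnline` to the browser's `online`/`offline` events.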
Say I have one prod environment and one dev environment in Elastic Beanstalk. I deploy my code to dev and it works and all's well, but when I deploy to production I get an error (note this is possible since instances sometimes get corrupted during deploys and Apache breaks). What are the pros and cons of this solution:
have 2 prod environments that you toggle between on every deploy
deploy to the one not being used
if the deploy works, point yourdomain.com to the new production and if not, your old production is safe
Now, is SEO a concern -- if I switch around my domain between two elastic beanstalk environments, would the SEO be harmed?
The following solution is one that I have used many times without incident, but remember to always test your solutions before production use.
The solution will use the following environment names which you should map to internal DNS names:
PROD01.elasticbeanstalk.com > www.example.com
PROD02.elasticbeanstalk.com
DEV01.elasticbeanstalk.com > dev-www.example.com
Typically, after developing and testing your application locally, you will deploy your application to AWS Elastic Beanstalk environment DEV01. At this point, your application will be live at URL dev-www.example.com.
Now that you have tested your application, it is easy to edit your application, redeploy, and see the results.
When you are satisfied with the changes you made to your application, deploy it to the PROD02.elasticbeanstalk.com production environment. Using the Application Versions page, promote the code running on DEV01 to PROD02. Using your hosts file, check that everything is in order, then perform the URL swap.
This will swap the PROD01.elasticbeanstalk.com and PROD02.elasticbeanstalk.com environment URLs seamlessly, with zero downtime for your application.
Once you've made sure all your traffic has switched, you can update your original production environment following the same method, swap back, and terminate the now-idle environment to prevent the extra cost (or leave it running if you don't mind the spend).
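The swap step can also be scripted. As a sketch, the snippet below only builds the request parameters for Elastic Beanstalk's SwapEnvironmentCNAMEs API (the environment names are the ones used above); the actual call, which needs AWS credentials, is shown commented out.

```javascript
// Parameters for Elastic Beanstalk's SwapEnvironmentCNAMEs API,
// using the environment names from this answer.
const params = {
  SourceEnvironmentName: "PROD01",
  DestinationEnvironmentName: "PROD02",
};

// With credentials configured you would run (AWS SDK for JavaScript v2):
//   const AWS = require("aws-sdk");
//   const eb = new AWS.ElasticBeanstalk({ region: "us-east-1" });
//   eb.swapEnvironmentCNAMEs(params).promise()
//     .then(() => console.log("CNAMEs swapped"));

console.log(JSON.stringify(params));
```

Swapping CNAMEs rather than re-pointing your own DNS also avoids waiting on DNS propagation for yourdomain.com.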
We currently have an application that runs on one dedicated server. I'd like to move it to OpenShift. It has:
A public-facing web app written in PHP
A Java app for administrators running on WildFly
A MySQL database
A filesystem containing lots of images and documents that must be accessible to both the Java and PHP apps. A third party FTPs a data file to the server every day, and a Perl script loads it into the database and the filesystem.
A Perl script occasionally runs ffmpeg to generate videos, reading images from and writing videos to the filesystem.
Is OpenShift a good solution for this, or would it be better to use AWS directly instead (for instance, because it has dedicated filesystem components)?
Thanks
Michael Davis
Ottawa
The shared file system will definitely be the biggest issue here, but you could get around it fairly easily by setting up your applications to use Amazon S3 or some other shared cloud file system.
As for the rest of the application, if I were setting this up I would:
Set up a scaled PHP application. Even if you set the scaling to use just 1 gear, this allows you to put the MySQL database on its own gear and even choose a different size for it, such as medium web gears (running PHP) and a large gear running the MySQL database. It also lets your WildFly gear access the database, since the database gear will have an FQDN (fully qualified domain name) that any application on your account can reach. However, keep in mind that it will use a non-standard port instead of 3306.
Then you can set up your WildFly server at whatever size you want, but keep in mind that the MySQL connection variables will not be there; you will have to put them into your Java application manually.
As for the Perl script, depending on how intensive it is, you could run it on its own gear (whatever size, with some extra storage), or co-locate it with either the PHP or the Java application as a cron job. It can store the files on Amazon S3 and pull them down / upload them as it runs the ffmpeg operations. Since OpenShift is also hosted on Amazon (in the US-EAST region), these operations should be pretty fast, as long as you also put your S3 bucket in the US-EAST region.
Those are my thoughts; hope it helps. Feel free to ask questions if you have them. You can also visit http://help.openshift.com and, under "Contact Us", click "Submit a request"; make sure you reference this StackOverflow question so I know what you are talking about, and we can discuss solutions there.
How do I run OpenERP Web 6.1 on a different machine than OpenERP server?
In 6.0 this was easy, there were 2 config files and 2 servers (server and "web client") and they communicated over TCP/IP.
I am not sure how to setup something similar for 6.1.
I was not able to find helpful documentation on this subject. Do they still communicate over TCP/IP? How do I configure the "web client" to use a different server machine? I would like to understand the new concept here.
tl;dr answer
It's meant only for debugging, but you can.
Use the openerp-web startup script that is included in the openerp-web project, which you can install from source. There is no separate installer for it, as it is not meant for production. You can pass parameters to set the remote OpenERP server to connect to, e.g. --server-host, --server-port, etc. Use --help to see the options.
Long answer
OpenERP 6.1 comes with a series of architectural changes that allow:
running many OpenERP server processes in parallel, thanks to improved statelessness. This makes distributed deployment a breeze, and gives load-balancing/fail-over/high-availability capabilities. It also allows OpenERP to benefit from multi-processor/multi-core hardware.
deploying the web interface as a regular OpenERP module, relieving you from having to deploy and maintain two separate server processes. When it runs embedded the web client can also make direct Python calls to the server API, avoiding unnecessary RPC marshalling, for an extra performance boost.
This change is explained in greater detail in this presentation, along with all the technical reasons behind it.
A standalone mode is still available for the web client via the openerp-web script provided in the openerp-web project, but it is meant for debugging rather than production. It runs in mono-thread mode by default (see the --multi-thread startup parameter) in order to serialize all RPC calls and make debugging easier. In addition to being slower, this mode will also break all modules that have a web part, unless all regular OpenERP addons are also copied into the --addons-path of the web process. And even then, some will be broken because they may still partially depend on the embedded mode.
Now if you were simply looking for a distributed deployment model, stop looking: just run multiple OpenERP (server) processes with the full stack. Have a look at the presentation mentioned above to get started with Gunicorn, WSGI, etc.
Note: Due to these severe limitations and its relative uselessness (versus its maintenance cost), the standalone mode for the web client has been completely removed (see rev. 3200 on Launchpad) in OpenERP 7.0.
I have a newly configured Windows Server 2003 VM.
One of the issues with the old VM was that whenever I open up "Component Services" from "Administrative Tools", the performance is very bad. It takes several minutes to create new COM+ applications and add components where it used to take only a few seconds.
I have many components to install and multiple VMs to do this on. Why would it be so slow and what can I do to make it faster?
It used to run just fine. I wonder if it could have something to do with the anti-virus software in the office...
I found a solution in my particular case. My machine is on Domain X and my user account is on Domain Y.
The solution was to log in as a user on Domain X, the same domain as the machine. This made the Component Services window respond almost instantaneously. I'm not entirely sure why this is, though.