I am using the free plan of the OpenShift PaaS for my applications. I want to set up an OpenShift environment on my local machine so that I don't run into issues while setting up my app on the live environment. I will configure the app locally and then push the code to the production server, i.e. the OpenShift server.
Is this possible with the free plan? I am sure you can set up OpenShift Origin on your local machine using Vagrant/Docker, but I'm not sure whether I would be able to push my changes to the server from it.
Using your Vagrant/Docker instance on your local machine, you will be able to tie it to GitHub and pull from there. Since your local machine's IP is unlikely to be exposed on the internet, GitHub cannot reach you, so the webhook from GitHub will not work to trigger automatic re-builds.
Still, that's probably a minor thing if you are just experimenting. With a few clicks, you can go to the build and re-build it.
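If you'd rather not click through the console, the same re-build can be kicked off with the oc client; a minimal sketch, assuming a build config named myapp:

    # trigger a manual re-build of the "myapp" build config and stream its log
    oc start-build myapp --follow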
I have a load-balanced Elastic Beanstalk (EB) environment, running a PHP application on an Apache server.
We have successfully deployed the identical software to a test environment in this AWS account, as a pre-production test. This went as expected, and updated the software with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via the EB CLI results in a healthy instance. I say generally because occasionally it shows as degraded; to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround, because the console already shows the correct version as the one deployed.
The problem I am having now is in changing the environment variables to point to the production database. When I change these via the Configuration > Software section, no changes are stored. When I hit 'Apply', the environment starts to transition. When this is complete, the instance health has degraded and the changes made to the configuration have not persisted.
I don't really see a pattern here, and it's behaving differently from the test instance, where I had no problems.
Any suggestions on how to get past this?
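For context, the EB CLI steps I'm running look roughly like this (the variable name is just an example):

    # deploy the current application version
    eb deploy

    # set the environment property from the CLI instead of the console
    eb setenv PROD_DB_HOST=my-db.example.com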
I'm trying to set up a staging VM for a site that's in production, which I have just inherited. The site is running WordPress/WooCommerce and has not been updated in a while. The VM it's hosted on is running an old version of PHP. Obviously this all needs fixing up, but I'm unfamiliar with GCP Compute Engine. Also, any attempt to run backup/clone plugins crashes the site and requires a restore from the daily snapshot, which is very annoying.
Is it possible to clone the VM/disk to a new instance, point that at a temporary domain, and test/update the site there? I have been trying to do this for a while now without much luck; any suggestions would be much appreciated. Thanks.
Creating a clone of an existing VM is possible and quite easy.
Create a snapshot of the VM. If possible, stop the VM before doing this to ensure 100% accuracy; this way you will have an exact snapshot of the drive without any errors. You can do it while the VM is running too, if stopping it is out of the question.
Create a VM from the snapshot: select the snapshot you've just created as the boot disk. Remember to assign a static public IP to this VM (otherwise it will change after a VM restart, and since you're going to do some configuration, that is likely to happen). You can change the VM's specs at this point too; nothing stops you from adding/removing CPUs, RAM, etc. It may well be that your VM is underutilised and you can use something smaller to save costs. Or the opposite.
Start the machine. Now you can modify your WP configuration to point to the new domain. As for the SSL certificate, you can either use an external one or one provided by GCP (the most convenient solution).
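If you prefer the gcloud CLI over the console, the steps above look roughly like this (the names and zone are just examples):

    # stop the VM for a consistent snapshot (optional)
    gcloud compute instances stop prod-vm --zone=europe-west1-b

    # snapshot the boot disk
    gcloud compute disks snapshot prod-vm --snapshot-names=staging-snap --zone=europe-west1-b

    # create a disk from the snapshot and boot a new VM from it
    gcloud compute disks create staging-disk --source-snapshot=staging-snap --zone=europe-west1-b
    gcloud compute instances create staging-vm --zone=europe-west1-b --disk=name=staging-disk,boot=yes

    # reserve a static external IP for the new VM
    gcloud compute addresses create staging-ip --region=europe-west1

The reserved address can then be attached to the VM's network interface from the console (or with gcloud compute instances add-access-config).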
If you already own a domain you want to use for staging, you can host it in Cloud DNS or at some other provider - just point it at the external IP you just reserved.
If you will be hosting your domain in Cloud DNS, you will find the necessary information in the documentation about managed zones (domains).
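A sketch of the Cloud DNS side, assuming a staging subdomain and the address reserved above (the domain and IP are placeholders):

    # create a managed zone and an A record pointing at the reserved IP
    gcloud dns managed-zones create staging-zone --dns-name="staging.example.com." --description="staging zone"
    gcloud dns record-sets create staging.example.com. --zone=staging-zone --type=A --ttl=300 --rrdatas=203.0.113.10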
You can also consider creating a new VM, setting it as a template for a managed, autoscaled group of VMs, and creating an external HTTPS load balancer in front of it. But this adds a little complexity, so it's just an idea in case you need to handle a lot more traffic.
I have 2 servers that I am working on locally. The first is a front-end in Vue.js, and the second is a back-end in Flask. The client makes API requests to the second.
I have to upload these two to a remote Linux VM (Debian), for which I have credentials and to which I can successfully connect via PuTTY.
How do I transfer my 2 directories to the VM?
Then, should I just change the address that the client uses for API requests to the server, and that's all? Or will I have to do something else?
You can copy directories with the scp or sftp protocols. In your case, this is most easily done with the WinSCP software.
scp, sftp (implemented by WinSCP) and ssh (implemented by PuTTY) all use the SSH protocol. PuTTY gives you a remote terminal (i.e. you can give commands to the server), while WinSCP uploads, downloads and manages files on it.
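If you'd rather stay on the command line, plain scp from your machine works too; a minimal sketch, where the directory names, user and host are examples:

    # copy both project directories recursively to the VM
    scp -r ./frontend ./backend debian@your-vm-ip:/home/debian/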
If you are developing something, it is likely that you will need to do this deployment regularly. These tools are only good for one-off deployments. In professional environments this deployment is automated and happens quickly.
It is very likely that you also have some database in your project. Here the most common options are either some DB-level synchronization, or dumping the database into files and synchronizing on the file level. But that is another topic.
It is also unlikely that you need two different VMs, one for the Vue.js app and one for Flask. You could wire them together on a single VM, which would make your task far easier (see the sketch below).
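One common way to do that wiring is a web server that serves the built Vue.js files and proxies API calls through to Flask; a minimal nginx sketch, where the paths and ports are assumptions:

    server {
        listen 80;
        root /var/www/frontend/dist;          # the built Vue.js app
        location /api/ {
            proxy_pass http://127.0.0.1:5000; # the Flask app, running locally
        }
        location / {
            try_files $uri /index.html;       # fall back to the SPA entry point
        }
    }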
You will likely have a hard time getting your deployment working well on your server. This is all just the beginning. But don't worry: once you've learnt it all, it will be easy!
What is the story for local development with API M? I want to pull down a local copy of API M for local developers to build their APIs against; is that possible (also configuring the backend resources to local copies of the code that would run in the cloud)?
I was thinking maybe a Docker image of API M, configured to run locally on a developer machine for our environment. Is this possible?
Yes. Dev tools support is not there yet, but you can run it locally: https://learn.microsoft.com/en-us/azure/api-management/self-hosted-gateway-overview. It still requires a connection to the cloud, though, and configuration must be done either through the Azure Portal or ARM.
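The self-hosted gateway ships as a container image, so a developer machine can run it with plain Docker; a minimal sketch, where the ports and container name are examples (the env.conf file with the configuration endpoint and auth token is generated for you in the Azure Portal when you provision the gateway resource):

    # run the self-hosted gateway locally against your APIM instance
    docker run -d -p 8080:8080 -p 8081:8081 --name apim-gateway --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:latest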
I'm working on an architecture to deploy my webapp. I would like to use Google Managed Instance Groups because I have some strict requirements. I was wondering:
which is the best web container to deploy in a distributed environment?
I'm familiar with Tomcat; is Tomcat OK to deploy in an instance group?
my webapp running on Tomcat will generate logs that are stored on the machine currently hosting it. How should I handle distributed application logs?
I don't want to lose information, and I would like a single view of all the logs of my webapp even though it's distributed. Is that possible?
Thanks
I have used Tomcat in GCP for over a year and it has worked without problems behind the load balancer. To solve the issue of the logs, you should use an agent that ships the logs to Stackdriver: https://cloud.google.com/logging/docs/view/service/agent-logs
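For reference, installing that agent on each instance is just the documented two-liner; Tomcat-specific parsing can then be enabled under /etc/google-fluentd/config.d so the catalina logs show up in one place for the whole group:

    # install the Stackdriver logging agent (google-fluentd) on each instance
    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
    sudo bash install-logging-agent.sh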