My previous models work fine, but after I added the interview model, the app works in development mode but not in production. I'm using pm2 and Buddy (CI).
What commands do I need to pass to pm2?
I'm not sure what your development / production workflow with Strapi looks like, but here is the general approach.
On your local machine, in the development environment, you have to start your application with strapi start. This command enables features such as automatically restarting your app when you create/update/delete a content type. If you don't use it, you can run into trouble during development.
When you are done with development, push all your code to your production server (using whatever method you prefer). Then you can start Strapi with pm2: NODE_ENV=production pm2 start npm --name api -- start
Note that in production you will not have access to the Content Type Builder or the Settings Manager (you are not supposed to update the configuration in production).
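To keep that process running across reboots, a minimal pm2 sequence (reusing the api process name from the command above; the project path is a placeholder) might look like this:
$ cd /path/to/your/strapi/app                        # placeholder path to the project
$ NODE_ENV=production pm2 start npm --name api -- start
$ pm2 save                                           # persist the current process list
$ pm2 startup                                        # prints the command that installs the boot script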
I have a load balanced EB environment, running a PHP application on an Apache server.
We have successfully deployed the identical software to a test environment in this AWS account, as a pre-production test. This went as expected, and updated the software with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via EB CLI results in a healthy instance. I say generally because occasionally this shows as degraded - to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround because the console already shows the correct version as the one deployed.
The problem I am having now is with changing the environment variables to point to the production database. When I change them via the Configuration > Software section, no changes are stored. When I hit 'Apply', the environment starts to transition; when this is complete, the instance health has degraded and the changes made to the configuration have not persisted.
I don't really see a pattern here, and it's behaving in a way that differs from the way the test instance did - I had no problems there.
Any suggestions on how to get past this?
I am getting a connection timeout when running the command during bootstrap.
Are there any configuration suggestions for the networking part, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
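As a minimal sketch of such a session (assuming the default core user on the deployed machines; the IP is a placeholder, use the one you found in vCenter):
$ ssh core@192.168.1.10               # IP of the bootstrap or master node
$ sudo crictl ps                      # list running containers and note their IDs
$ sudo crictl logs <container-id>     # e.g. the kube-apiserver or etcd container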
I have a production Golang app running on OpenShift using this cartridge (https://github.com/smarterclayton/openshift-go-cart) with a MySQL database. The cartridge has had some updates which I would like to pull into my app.
Is it possible to redeploy the base cartridge into my gears without deleting the whole application?
If your repository contains .openshift/markers/hot_deploy, then when you perform a git push, OpenShift will not rebuild the application and will perform a hot deployment instead.
See the Hot Deploying Applications section of the user guide, as well as this blog post (which somehow contains more specific details about where the marker file goes)
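A minimal sketch of enabling the marker, assuming you are at the root of the application's git repository:
$ mkdir -p .openshift/markers
$ touch .openshift/markers/hot_deploy
$ git add .openshift/markers/hot_deploy
$ git commit -m "Enable hot deployment"
$ git push        # OpenShift deploys the update without rebuilding the application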
I am new to the Django/Apache environment. I am preparing the list of services that are mandatory to get a Django application running without fail.
I could only come up with two of them:
1) mysqld -> MySQL daemon.
2) apache2 -> Apache daemon.
Could you kindly suggest any other services that are required, without which the Django application would fail to run?
You need Apache2 mod_wsgi to be installed too:
$ sudo apt-get install libapache2-mod-wsgi
and you have to enable the module in Apache2:
$ sudo a2enmod wsgi
and disable the default site too
and then set up the rest of the Apache2 configuration for your site.
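Putting those steps together, a rough sketch on a Debian/Ubuntu box might look like this (the 000-default and mysite site names are assumptions):
$ sudo apt-get install libapache2-mod-wsgi        # use libapache2-mod-wsgi-py3 for Python 3
$ sudo a2enmod wsgi                               # the Apache module is called "wsgi"
$ sudo a2dissite 000-default                      # disable the stock default site
$ sudo a2ensite mysite                            # enable your own site configuration
$ sudo service apache2 reload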
Django is a framework: a set of tools that allows you to create web applications, any kind of web application.
There is no fixed list of required services; but if you are asking from a systems-management point of view, here is what is needed to support a typical Python web application:
You need a WSGI-compatible runtime. This can be mod_wsgi if you are using Apache, or gunicorn or uWSGI otherwise.
You may need a process manager if you aren't using mod_wsgi (whose processes are controlled by Apache).
You'll need a web server capable of hosting the static assets for the application. This can be Apache, nginx, lighttpd or any other capable web server.
Most applications will also have some sort of database. Which database that is will depend on the application and its requirements (not all features of the Django ORM are supported by all databases), so you'll have to check with each individual application. You may choose to provide a "standard" layout, for example MySQL version xx.yy. It could also be that the application uses an externally hosted server, in which case your job is just to provide connectivity to the remote host.
If you can take care of the above four items, you have a standard layout for hosting most Python WSGI-based web applications.
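As a rough illustration of points 1 and 2, assuming gunicorn as the WSGI runtime and a hypothetical Django project called myproject:
$ pip install gunicorn
$ gunicorn --bind 127.0.0.1:8000 --workers 3 myproject.wsgi:application   # keep this alive with a process manager such as supervisor or systemd
A web server such as nginx or Apache would then proxy requests to 127.0.0.1:8000 and serve the static assets directly (point 3).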
Keep in mind that although Python 3 has been widely available for some time, many libraries are still in the process of being ported, so making sure your server provides both Python 2.7 and Python 3 runtimes is important.
You should also make sure that the development headers for Python (and the database server you are supporting) are available - this is important if the Python application runs in a virtual environment (as this is best practice) since the drivers will need to be compiled for each virtual environment. The same also applies for any compiled libraries (like PIL).
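For example, on a Debian/Ubuntu host backing the application with MySQL (the package names and paths below are illustrative), that preparation might look like:
$ sudo apt-get install python-dev python3-dev libmysqlclient-dev
$ virtualenv /srv/app/env                       # per-application virtual environment (path is a placeholder)
$ /srv/app/env/bin/pip install mysqlclient      # the MySQL driver compiles against the headers installed above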
Django has a nice deployment section in the documentation to help with specifics.
Can anyone help with this?
I am using Jenkins to deploy a build to a remote server, so far so good. However, I want to run JUnit tests on that remote server, but I cannot find how to do this within Jenkins. I have tried it within Ant, but it gives me an error regarding junit.jar.
I believe that the tests are executing locally rather than remotely.
Any help would be appreciated; Jenkins is a very new experience to me.
First, you have to be aware of a few things. Jenkins is a CI tool built with plenty of features to automate your workflow. If you need to run tests on a remote server, follow this sequence to create such a setup:
Install Jenkins on a machine and properly configure it as the CI server.
Provision your remote server with the necessary tools and configure it properly.
On the Jenkins server, install the SSH plugin so jobs can run on the remote machine via SSH.
Add the remote server as a slave node under Jenkins -> Manage Jenkins -> Manage Nodes -> Add Node on the Jenkins server.
Configure the node as per your requirements.
Create a new job that runs the JUnit tests, with whatever pre/post-build actions you need.
Finally, schedule the build to run on the slave node and kick it off.
For step-by-step instructions, refer to this answer.
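Once the job is bound to the slave node, the build step itself can simply invoke the tests there. If the project builds with Ant, one common fix for the junit.jar error is to pass the jar to Ant explicitly; a sketch (the test target and the junit.jar path are assumptions):
$ ant -lib /opt/junit/junit.jar test     # -lib puts junit.jar on Ant's classpath for the <junit> task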