I am new to the Django/Apache environment and I am preparing a list of the services that must be running for a Django application to work. So far I can only think of two:
1) mysqld -> the MySQL daemon.
2) apache2 -> the Apache daemon.
Could you kindly suggest whether any other services are required, without which the Django application would fail to run?
you need the Apache mod_wsgi module to be installed too:
$ sudo apt-get install libapache2-mod-wsgi
and you have to enable it in Apache:
$ sudo a2enmod wsgi
and disable the default site too, then add your own site to the Apache configuration.
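As a rough sketch on Ubuntu (mysite is just a placeholder for your own site configuration, and the default site may be named default or 000-default depending on the release):
$ sudo a2enmod wsgi               # enable the mod_wsgi module
$ sudo a2dissite 000-default      # disable the default site
$ sudo a2ensite mysite            # enable your own site's configuration
$ sudo service apache2 restart    # reload Apache so the changes take effect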
Django is a framework: a set of tools that allows you to create web applications - any kind of web application.
There is no fixed list of services required; but if you are asking, from a systems management point of view, what is needed to support a typical Python web application:
You need a WSGI-compatible runtime. This can be mod_wsgi if you are using Apache, or a standalone server such as gunicorn or uwsgi (see the sketch after this list).
You may need a process manager if you aren't using mod_wsgi (whose processes are controlled by Apache).
You'll need a web server capable of hosting the static assets for the application. This can be Apache, nginx, lighttpd or any other capable web server.
Most applications will also have some sort of database. Which database that is will depend on the application and its requirements (not all features of the Django ORM are supported by all databases), so you'll have to check with each individual application. You may choose to provide a "standard" layout; for example MySQL version xx.yy. It could also be that the application is using an externally hosted server, in which case your job is just to provide connectivity to the remote hosts.
If you can take care of the above 4, you have a standard layout for hosting most Python WSGI-based web applications.
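As an illustration of the non-Apache route, a minimal sketch with gunicorn behind nginx could look like this (myproject is a placeholder for your project's package name):
$ pip install gunicorn
$ gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3
nginx (or another web server) would then proxy requests to 127.0.0.1:8000 and serve the static assets directly.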
Keep in mind that although Python 3 has been widely available, many libraries are still in the process of being ported, so making sure your server provides both Python 2.7 and Python 3 runtimes is important.
You should also make sure that the development headers for Python (and the database server you are supporting) are available - this is important if the Python application runs in a virtual environment (which is best practice), since the drivers will need to be compiled for each virtual environment. The same also applies to any compiled libraries (like PIL).
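On a Debian/Ubuntu host that might translate to something like the following (package names and paths are examples, check your distribution):
$ sudo apt-get install build-essential python-dev python3-dev libmysqlclient-dev
$ virtualenv /srv/myapp/env                    # /srv/myapp is just an example location
$ /srv/myapp/env/bin/pip install MySQL-python pillow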
Django has a nice deployment section in the documentation to help with specifics.
As far as I know, when deploying your web application on Heroku (from GitHub) you need to provide a requirements.txt file so that every library that is used can be installed. But you cannot install MySQL like that. I've used Python and Streamlit to create a web application, and I used MySQL to store data. I don't want the local machine's data to be exported, but I do want to store data once it is deployed as a web app and someone fills in the details (it's basically a student DBMS).
How can I deploy such a web application that uses MySQL on Heroku?
I've read some docs and looked around and found that PostgreSQL is more suitable, but I want to use MySQL because this is a school project.
Heroku has an add-on called ClearDB for MySQL:
https://devcenter.heroku.com/articles/cleardb
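Roughly, provisioning it from the Heroku CLI looks like this (ignite was the free plan at the time of writing, so double-check the current plan names):
$ heroku addons:create cleardb:ignite
$ heroku config:get CLEARDB_DATABASE_URL
The returned mysql:// URL is what your app should use as its connection string, and the MySQL driver itself goes into requirements.txt so it is installed on deploy.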
There are services like ServerPilot and many others that you install on a VPS and that handle the LAMP stack environment. I'm wondering if there is a service that does the same for databases: I install the service on a fresh VPS and it does all the heavy lifting like security, replication, separating reads and writes, backups and monitoring, along with easily setting up private network access, for a set fee to use that service on my server.
I'm looking for a simple service to install on my own fresh VPS, not RDS or Google Cloud.
Thank you!
What Does ServerPilot Really Do?
First, ServerPilot deploys a complete LAMP stack on your server, including the world's most used web server Apache, plus PHP5 and MySQL. To make it even better, ServerPilot also installs and configures Nginx in front of Apache to achieve unbeatable speed and scalability.
Secondly, ServerPilot will secure your server with a firewall. To make it even more secure, it will also update your server's packages and make sure they stay updated all the time, to avoid even a single bug caused by an outdated package.
Thirdly, ServerPilot also offers a premium feature to monitor real-time stats of your server’s performance including CPU, memory, disk space, and more.
What Does ServerPilot Not Do?
Meanwhile, ServerPilot does not provide features related to installing, configuring and managing email and DNS. In this case you may need a third-party DNS service to be able to point your domain to your VPS. Need recommendations? Try CloudFlare, PointHQ, NameCheap, etc.
Also, ServerPilot does not manage servers running anything other than Ubuntu.
Get more details at : http://www.servermom.org/install-manage-apache-nginx-php-mysql-easiest-serverpilot/1011/
I'm not sure about services, but assuming that your VPS is Ubuntu or some other Debian-based distro, you could perform sudo apt-get install lamp-server^ phpmyadmin on the command line to get your LAMP stack setup. This will setup Apache web server, PHP, and MySQL on your Linux server. Apache and PHP will come working out-of-the-box, and when you install MySQL, by default it asks for a root password to manage the database.
phpMyAdmin would be the key here because instead of doing all your database tasks via the command line, it provides a GUI interface in your web browser to manage databases and tables. To backup your database with phpMyAdmin, see this article.
With regards to customizations, for the firewall you can simply write a few iptables rules, and for the database you can run scheduled backups by creating a cron job that runs the following command:
/usr/bin/mysqldump -u dbusername -p'dbpassword' dbname > /path/backup.sql
Again, this isn't a service, but at least you wouldn't have to pay for any of the tools.
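As a rough sketch (credentials, paths and ports are placeholders), the cron entry and a couple of basic iptables rules could look like this:
# in crontab -e: nightly dump at 02:00
0 2 * * * /usr/bin/mysqldump -u dbusername -p'dbpassword' dbname > /path/backup.sql
# allow loopback, established connections, SSH and web traffic; drop the rest
$ sudo iptables -A INPUT -i lo -j ACCEPT
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
$ sudo iptables -P INPUT DROP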
Unfortunately, there is no ultimate service that can perform all this stuff. However, you can set it up manually:
Database replication:
https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
Database backup:
http://www.ducea.com/2006/05/27/backup-your-mysql-databases-automatically-with-automysqlbackup/
or
https://www.backuphowto.info/how-backup-mysql-database-automatically-linux-users
Database optimization:
https://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/
and
http://www.monitis.com/blog/101-tips-to-mysql-tuning-and-optimization/
And for the networking, this tutorial may be helpful
http://www.yolinux.com/TUTORIALS/LinuxTutorialNetworking.html
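For a quick start before working through those tutorials, Debian/Ubuntu ship an automysqlbackup package, and replication health can be checked from the shell (a hedged sketch; the default backup location may differ between releases):
$ sudo apt-get install automysqlbackup        # nightly dumps typically land under /var/lib/automysqlbackup
$ mysql -u root -p -e 'SHOW SLAVE STATUS\G'   # verify the slave is replicating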
I want to configure MySQL Proxy in my test environment to observe:
1. The behavior of the proxy
2. How load and CPU usage vary on my test server with read/write distribution.
I googled and was able to install the proxy on my Ubuntu Linux machine.
But I didn't see anything about configuring it in a step-by-step manner, or how to start and stop it.
Could someone elaborate on this? It would be of great help to me.
Thanks in advance
By default, if you run the proxy on the same machine as the server, it will listen on port 4040 and query a backend server on the MySQL default port of 3306. Other port numbers and server locations can be configured from the command line or with a configuration file.
To distribute queries across servers, add monitoring, profiling etc. you need to provide a Lua script to mysql-proxy. See the example / tutorial scripts in /usr/local/share/docs that came with the installation download. There is work to do for a production implementation.
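As an example, a minimal read/write-splitting setup using the bundled example script might be started like this (the exact path of rw-splitting.lua and the replica1 host are placeholders that depend on your installation):
$ mysql-proxy \
    --proxy-address=:4040 \
    --proxy-backend-addresses=127.0.0.1:3306 \
    --proxy-read-only-backend-addresses=replica1:3306 \
    --proxy-lua-script=/usr/local/share/doc/mysql-proxy/rw-splitting.lua
Stopping it is just a matter of killing the process (Ctrl-C in the foreground, or run it with --daemon and --pid-file and kill the recorded PID).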
The basics of how the scripting works can be found here under MySQL Proxy Scripting.
Don't be worried about Lua. The syntax is quite readable given the tutorial examples to work from. As and when you need it lua.org has more details of Lua.
I am using Intellij IDEA to develop my applications and I use glassfish for my applications.
When I want to run/debug my application I can configure it from Glassfish Server -> Local and define arguments there. However, besides Glassfish Server -> Local there is also a Remote section for configuration, where I can easily configure and debug my application just by defining host and port variables.
So my question is: why would one need the Glassfish Server Local configuration (except for defining extra parameters), and what is the difference between them (in terms of performance, etc.)?
There are a number of development work-flow optimizations and automation that can be performed by an IDE when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they may have implemented:
using in-place|exploded|directory deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver.
linked to the first point is smarter redeployment: in some cases, a file change (like changing a JSP or an HTML file) does not need to trigger redeployment.
JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deployment of an app.
access to server log files during deployment and execution.
The ability to start and stop the server... even today, you do need to restart GlassFish sometimes.
view the generated Java sources of a JSP.
Most of these features are not available with a remote server and that has a negative effect on iterative development since the break between edit and validate can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.
Local starts GlassFish for you and performs the deployment. With Remote you start GlassFish manually. Remote can be used to debug apps running on other machines; Local is useful for development and testing.
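For the Remote case, you would typically start the domain yourself with debugging enabled and then attach the IDE to the debug port (a sketch; domain1 and port 9009 are the GlassFish defaults as far as I recall):
$ asadmin start-domain --debug=true domain1
# then point the IDE's Remote configuration at the host and JPDA port (9009 by default)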
I am looking to find out from the community which setup you think is best.
Django running with the following.
Django, mod_wsgi and MySQL
Django, mod_wsgi and Postgres
Django, nginx and MySQL
OR
Django, nginx and Postgres
?
I use nginx because it's faster and I like how the configuration is set up. I have never run into any trouble using it so I can't see why one should rather use Apache + mod_wsgi.
Also, using fastcgi, you can restart your django site without restarting the whole nginx server, which I like.
And Postgres because:
If you're not tied to any legacy system and have the freedom to choose a database back-end, we recommend PostgreSQL, which achieves a fine balance between cost, features, speed and stability. (The Definitive Guide to Django, p. 15)
Copied from: MySQL vs PostgreSQL? Which should I choose for my Django project?
EDIT:
I now think that uwsgi running behind a load balancer (varnish) is the best solution. nginx can then be used to serve static content.
See "Varnish and nginx, the best way (0.9.8.4)" # http://projects.unbit.it/uwsgi/wiki/Example
You can use Emperor (http://projects.unbit.it/uwsgi/wiki/Emperor) for managing apps in uwsgi. This will allow you to restart individual apps by simply touching their config files.
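A minimal Emperor setup might look like this (the vassals directory is only a convention; each app gets its own ini file in it):
$ uwsgi --emperor /etc/uwsgi/vassals
$ touch /etc/uwsgi/vassals/myapp.ini    # touching a vassal's config file reloads just that app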
According to this benchmark, Django + uWSGI wins.
You can use nginx as a proxy and have apache run on localhost.
To reload a single Django project, you'd touch the WSGI file for that project and it will only reload that instance of Django. You don't need to restart/reload Apache.
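In other words, with mod_wsgi running in daemon mode, reloading a single project is just (the path is a placeholder):
$ touch /path/to/myproject/wsgi.py      # mod_wsgi daemon mode reloads only this project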