nginx - Is there a CLI tool for editing the configuration file?

For example, I'd like to add a new virtual host or remove one in the nginx.conf configuration file without editing it directly, using a command-line tool like:
nginx-cli add-server [options]
nginx-cli remove-server [options]

I don't know if either of these will exactly fill your need, but they both offer some tools for nginx.
Nginx VHost Tools includes a tool for generating and saving an nginx config file, which might help.
Nginx Tools has a tool for managing sites as well as minifying config files. Probably not what you're looking for, but you may find the source helpful.
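As far as I know nginx itself doesn't ship a CLI like that, but if your install uses the Debian-style sites-available/sites-enabled layout (i.e. nginx.conf contains include /etc/nginx/sites-enabled/*;), a small wrapper script gets you most of the way. This is only a sketch: the add-server/remove-server names, the paths and the generated server block are assumptions, not part of any real tool.
#!/bin/sh
# nginx-cli.sh -- hypothetical wrapper around the sites-available/sites-enabled convention
# usage: ./nginx-cli.sh add-server example.com /var/www/example.com
#        ./nginx-cli.sh remove-server example.com
set -e
AVAIL=/etc/nginx/sites-available
ENABLED=/etc/nginx/sites-enabled
case "$1" in
  add-server)
    name=$2; root=$3
    cat > "$AVAIL/$name" <<EOF
server {
    listen 80;
    server_name $name;
    root $root;
}
EOF
    ln -sf "$AVAIL/$name" "$ENABLED/$name"   # enable the new vhost
    ;;
  remove-server)
    rm -f "$ENABLED/$2" "$AVAIL/$2"          # disable and delete the vhost
    ;;
esac
nginx -t && nginx -s reload                  # validate the config, then reload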

Related

How to store PM2's configuration files under /etc/pm2 like Nginx's

I'd like to have PM2 configuration files structured under /etc/pm2 like Nginx
/etc/pm2
/etc/pm2/pm2.conf
/etc/pm2/sites-enabled/*.json
/etc/pm2/sites-available/*.json
The reason for that is so all the configuration is structured in a consistent way, making it easy to manage the PM2 user's permissions and easy to restart the processes (similar to sudo service nginx restart/reload). In addition, I'd like the server to automatically start all the processes when the machine is rebooted.
Is there an official/recommended way to accomplish something similar to that?
If not, how can I create a main /etc/pm2/pm2.conf that will include configuration files /etc/pm2/sites-enabled/*?
$ export PM2_HOME='/etc/pm2'
$ pm2 list
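As far as I know PM2 has no include directive that would let a main /etc/pm2/pm2.conf pull in sites-enabled/*, but you can approximate the layout with PM2_HOME plus a small start script, and use pm2 save / pm2 startup for the reboot part. A rough sketch under those assumptions (the helper script itself is hypothetical):
#!/bin/sh
# hypothetical helper: start every app declaration found in sites-enabled
export PM2_HOME=/etc/pm2
for conf in /etc/pm2/sites-enabled/*.json; do
    pm2 start "$conf"    # pm2 accepts a JSON app declaration file
done
pm2 save                 # dump the running process list for resurrection
pm2 startup              # generate/install the init script that resurrects processes on boot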

Subdirectories in openshift project cannot be found

I built a site using a PHP OpenShift project and accessing the root directory via HTTP works fine. However, all the subdirectories of the root give me a 404 not found, like this one: http://test.toppagedesign.com/sites/
I checked with ssh, and /app-root/repo/sites and app-deployments/current/repo/sites/ both exist.
EDIT
Added a directory called php and now I have 503 errors for everything...
EDIT 2
I deleted the php directory, now the 503 errors are gone. However, I do still get 404 errors for the subdirectory.
Here is my directory tree: http://pastebin.com/hzPCsCua
And I do use git to deploy my project.
php is one of the alternate document roots that you can use; please see the March 2014 release blog post about this (https://www.openshift.com/blogs/openshift-online-march-2014-release-blog).
As for the sub-directories not working, can you ssh into your server and use the "tree" command to post the directory/file structure of your project? Also are you using Git to deploy your project or editing files directly on the server?
You need to have an index.php or index.html file in any directory that you want to work like app-domain.rhcloud.com/sites; if you just have sub-directories, how would it know what to show? Also, directory indexing (showing a folder's contents) is not enabled for security reasons, and I believe there is no way to enable it.
This sounds like it could be a problem with how you are serving your static content.
I recently created a new sample app for OpenShift that includes:
a basic static folder
an .htaccess file (for serving assets in production; see the sketch just after this list)
support for using php's local server to handle the static content (in your dev environments)
Composer and Silex - a great starting point for most new PHP apps
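For a rough idea of what that .htaccess could contain, here's a minimal sketch. It assumes app.php is the front controller (the same file used by the local-server command below) and is not necessarily the exact file shipped in the sample app:
# Serve real files and directories (static assets) directly
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
# Everything else goes through the front controller
RewriteRule ^ app.php [QSA,L]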
You can serve the project locally if you have PHP 5.4 (or better) available in your dev environment:
php -S localhost:8080 -t static app.php
For a more advanced project that is built on the same foundation, take a look at this PHP+MongoDB mapping example. I wrote up a blog post with some notes on my process for composing that app as well.
Hope these examples help!

Simplest way to host html [duplicate]

This question already has answers here:
Best lightweight web server (only static content) for Windows [closed]
(8 answers)
Closed 8 years ago.
What is the simplest way to host an HTML page over LAN?
I literally just need to have like 5 lines of HTML, so I don't want to download and set up an Apache server. I just want to know the fastest/simplest way to do this on Windows, or I can also use one of my Linux virtual machines if it's faster.
Use netcat, or nc:
:top
nc -l -p 80 -q 1 < index.html
goto top
It's a simple binary without any installation. It doesn't do CGI or PHP or anything, but it can sure dish up 5 lines of HTML.
Actually, if you use the "-k" option (keep listening after each connection) you can remove the loop and make it simpler:
nc -kl 80 < index.html
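One caveat with either variant: listening on port 80 normally requires root/administrator privileges, so for a quick LAN test it's easier to pick an unprivileged port, e.g.:
nc -kl 8080 < index.html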
Since you need a web server for testing and no heavy concurrent use is expected, I'll just keep it simple.
Please note that both solutions are very simple but not very secure; use them for development purposes, but don't rely on either of them for anything even remotely resembling a stable (people would say "production") server.
Navigate to the directory where your HTML file is located using cmd.exe, then issue:
Using Python
python -m SimpleHTTPServer
An HTTP server will be started on port 8000. Should you need a different port, just specify it:
python -m SimpleHTTPServer 8080
SimpleHTTPServer is part of the "batteries included": you will not need to install any extra package, apart from the Python interpreter, of course.
Python comes already installed on most Linux distributions, so switching to Linux might be simpler than installing Python on Windows, although that boils down to downloading and running an installer.
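One note if you end up with Python 3 rather than Python 2: the module was renamed to http.server, so the equivalent command is:
python3 -m http.server 8080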
Using PHP 5.4 or above
php -S 0.0.0.0:8080
This will also process PHP scripts, but HTML resources will be served fine.
http://www.lighttpd.net/ is pretty lightweight and easy to get running.
I recently used mongoose for a similar purpose. It supports Windows. From the homepage:
Mongoose executable does not depend on any external library or configuration. If it is copied to any directory and executed, it starts to serve that directory on port 8080. If some additional config is required - for example, a different listening port or IP-based access control - then a mongoose.conf file with the respective options (see example) can be created in the same directory where the executable lives. This makes Mongoose perfect for all sorts of demos, quick tests, file sharing, and Web programming.
Download the Windows exe (no need to install) from here, save it in the folder where your HTML file is, and run it.
After selecting Start Browser on Port 8080, your browser will open automatically, displaying the contents of the folder.

Why ColdFusion-MySQL-Apache isn't running on localhost if everything is set up and connected?

Newbie question - my first attempt at Coldfusion/MySQL and getting it to run locally.
I'm running Apache web server (2.2). I have imported two .sql files into MySQL (5.2) Workbench, forward-engineered a database from these, and set up a working database connection and MySQL server, which is also running. In the ColdFusion 8 Administrator I added my database as a data source.
I thought this would be enough :-)
Still, on http://localhost I'm only getting an index of all the files in my Apache htdocs folder. If I open one of the files, it just shows the ColdFusion markup/HTML source code; nothing is parsed.
Thanks for any hints on what I could be missing.
EDIT:
Three questions while trying to implement this:
1. Can I load modules using absolute paths, like D:/Coldfusion8/lib...?
2. My lib/wsconfig folder only contains a dll file named jrunwin32.dll. Should I be trying to use this?
3. The lib/wsconfig folder does not contain a jrunserver.store file. Not sure what to do here.
It sounds as if your Apache config is not correct, since the .cfm files aren't being handled by ColdFusion.
First of all, is there a specific reason for using CF8? CF9 has been around for a while, so if you're starting from scratch I'd advise taking a look at that instead.
That aside, I'd check for the following in your httpd.conf (or whatever your Apache config file is named).
First, that index.cfm is acceptable as a DirectoryIndex (you can have other indexes as well):
DirectoryIndex index.cfm
Second, that the JRun handler is configured properly (again in httpd.conf):
LoadModule jrun_module /opt/coldfusion8/runtime/lib/wsconfig/1/mod_jrun22.so
<IfModule mod_jrun22.c>
JRunConfig Verbose false
JRunConfig Apialloc false
JRunConfig Ignoresuffixmap false
JRunConfig Serverstore /opt/coldfusion8/runtime/lib/wsconfig/1/jrunserver.store
JRunConfig Bootstrap 127.0.0.1:51801
AddHandler jrun-handler .jsp .jws .cfm .cfml .cfc .cfr .cfswf
</IfModule>
This is taken from my development VM; I have CF8 as a single-server install in /opt/coldfusion8/.
Once you have those lines in (with the paths/ports etc. appropriate for your environment), restart Apache and it should work fine.
If you have installed CF8 as a Multiserver (or other) install, please specify and I will adjust my advice accordingly.
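On the edit: yes, Apache will load modules from absolute Windows paths such as D:/Coldfusion8/... (forward slashes are fine). The catch is that jrunwin32.dll on its own is not the Apache connector; the mod_jrun22.so and jrunserver.store files are normally generated by ColdFusion's Web Server Configuration Tool (wsconfig.exe, under the runtime/bin folder of a server install, if memory serves), so run that against your Apache 2.2 first. Once those files exist, a Windows version of the block above would look roughly like the following sketch - the exact paths and the Bootstrap port are assumptions, use whatever wsconfig writes for your install:
LoadModule jrun_module "D:/Coldfusion8/runtime/lib/wsconfig/1/mod_jrun22.so"
<IfModule mod_jrun22.c>
# same directives as the block above, only the paths change
JRunConfig Verbose false
JRunConfig Apialloc false
JRunConfig Ignoresuffixmap false
JRunConfig Serverstore "D:/Coldfusion8/runtime/lib/wsconfig/1/jrunserver.store"
JRunConfig Bootstrap 127.0.0.1:51800
AddHandler jrun-handler .jsp .jws .cfm .cfml .cfc .cfr .cfswf
</IfModule>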

How to use GVIM to edit a remote file?

I use GVIM on Ubuntu 9.10. I'm looking for the right way to configure GVIM to be able to edit remote files (HTML, PHP, CSS), for example over FTP.
When I use :e scp://username@remotehost/./path/to/file I get: Error detected while processing BufEnter Auto commands for "*": E472: Command failed.
When I open a remote file via Dolphin or Nautilus, I cannot use other files with NERDTree.
Finally, when I edit a remote file via Dolphin, the permissions change to "access denied" (accès interdit).
So how can I use GVIM to edit remote files as if they were on my localhost?
I've found running the filesystem over ssh (by means of sshfs) a better option than having the editor handle that stuff or running the editor itself over an ssh tunnel.
So you need to
apt-get install sshfs
and then
sshfs remoteuser@remotehost:/remote/path /local/mountpoint
And that will let you edit your remote files as if they were on your local file system.
To make it even smoother you can add a line to /etc/fstab
sshfs#remoteusername@remotehost:/remote/path /local/mountpoint fuse user,noauto
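With that in place, the mount can be done on demand by an unprivileged user (the noauto option keeps it from being mounted at boot):
mount /local/mountpoint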
For some reason I find that I have to use fusermount -u /local/mountpoint rather than just umount /local/mountpoint when experimenting with this. Maybe that's just my distro.
Recently I've also noted that the mounting user must be in the fuse group. So:
sudo addgroup <username> fuse
Another popular option, of course, would be to run vim (rather than gvim) inside a GNU Screen session on one machine and connect to that session via ssh from wherever you happen to be. Code along all day at work, and in the evening you ssh into your office computer, reattach to your GNU Screen session and pick up exactly where you left off. I used to find the richer color palette to be the only thing I really missed from gvim when using vim, but that can actually be fixed thanks to a fork of urxvt that will let you customize the entire 256-position color palette, not just the first 16 positions that most terminal emulators will let you customize.
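If you want to try that workflow, a minimal sketch looks like this (the session name and hostname are placeholders):
# at the office: run vim inside a named screen session, detach with Ctrl-a d
screen -S code vim
# later, from home: reattach to the same session
ssh user@officehost
screen -r code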
There is one more way, and that is using the remote host's copy of gvim, with SSH forwarding X11 back to you, like so:
user@local:~/$ ssh -X user@host
...
user@host:~/$ gvim file
The latter command should open gvim on your desktop. Of course, this relies on the remote host having X11 / gnome / gvim installed in the first place, which might not be the solution you're looking for / an option in your case.
Note: X11 forwarding can be a security risk.
In order for netrw to work seamlessly, I believe you need to not be in compatibility mode.
Try
:set nocompatible
then
:edit scp://host/path/to/file
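To make that stick between sessions, put it near the top of your ~/.vimrc:
set nocompatible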
Try this
:e scp://username@remotehost//path/to/file
Note that the use of // after remotehost is intentional; it gives the absolute path of your file.
:)
http://www.celsius1414.com/2009/08/19/how-to-edit-remote-files-with-local-vim/
The vim tips wiki has an article on this, Editing remote files via scp in vim.
EDIT: Key authentication is not necessary for opening files over ssh. Vim will prompt for password.
It would be useful to note if netrw.vim was loaded by vim when it started.
:echo exists("g:loaded_netrwPlugin")
For opening files over ssh, you need your local machine's public key in the server's authorized keys. The following help section in the vim documentation explains it pretty well.
:help netrw-ssh-hack
A quick way to copy your public key over is ssh-copy-id (if available).
ssh-copy-id user@host
And have a look at netrw documentation for network file editing over other protocols.
:help netrw
HTH.
According to the docs BufEnter is processed after the file has been read and the buffer created, so my guess is that netrw successfully read the file but you have a plugin that assumes the file is on the local filesystem and is trying to access it, e.g. to run ctags.
Try disabling all your plugin scripts except the default Vim ones, and then editing the file.
Also, try editing a directory to see if netrw can read that - you need to put the / on the end so that netrw knows it is a dir.
About your command, :e scp://username@remotehost/./path/to/file : note that with netrw, scp is taken relative to your home directory on that remote host. To avoid home-relative pathing, drop that "."; i.e. :e scp://username@remotehost//path/to/file .
To accomplish this on Windows, download/install the Dokan library and Dokan SSHFS, which are the first and last links on this page.
I didn't think you were going to be able to directly edit a remote file using GVIM running locally. However, as others have pointed out, this is definitely possible. This looks very interesting; I will check this out. I will leave the rest of my post up here, in case it is useful to anyone else, as an alternative method. This method will work even if you don't have SSH access to the file (i.e., you only have FTP, or S3, or whatever).
You may get that effect, though, by tying GVIM into a graphical file transfer application. For example, on OS X, I use CyberDuck to transfer files (FTP, SFTP, etc). Then, I have it configured to use GVIM as my editor, so I can just double-click on a file in the remote listing, and CyberDuck will download a copy of that remote file, and open it in GVIM. When I save it in GVIM, CyberDuck uploads the file back to the remote host.
I'm sure that this functionality is not unique to CyberDuck, and is probably present in most nicer file transfer utilities.