Is it possible to create a secure connection using motion? I have embedded my motion stream on an HTML page using Apache, but it will not display as it is an insecure iframe on a secure page. I can view the motion stream at
http://example.com:<Motion-Port>
but the embedded video at
https://example.com
will not display.
iFrame code:
<iframe src="http://example.com:<Motion-Port>" width="1300" height="740"></iframe>
The answer is to not use motion. It hasn't been updated in 3 years! Use ZoneMinder or iSpy instead.
I wish I had checked this before stubbornly pushing through Motion.
Yes sir -- you can totally do this -- but you cannot do it with Motion alone. Motion only does minimal auth. Essentially, it boils down to this: you need something to proxy the HTTP stream and wrap it in SSL.
Within Node there is a somewhat dated package called mjpeg-proxy, which you can use as middleware. https://github.com/vizzyy-org/mothership/blob/master/routes/cam.js#L27
Within Java, you can do the same thing: make a call to your web server, which calls the Motion stream and then wraps the whole thing in an SSL connection back to the client. https://github.com/vizzyy-org/spring_react/blob/master/src/main/java/vizzyy/controller/VideoController.java#L54
Lastly, you can accomplish this with nginx or apache2. In Apache, it's as simple as setting up mutual auth and then proxying to the stream. Here's my Apache config for 2-way SSL wrapping my stream:
<VirtualHost *:443>
    ServerAdmin somehost
    SSLEngine on
    SSLProtocol -all +TLSv1.2 +TLSv1.3
    SSLHonorCipherOrder on
    SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AE$
    SSLCompression off
    SSLSessionTickets off
    SSLCertificateFile server-cert.pem
    SSLCertificateKeyFile server-key.pem
    SSLVerifyClient require
    SSLCACertificateFile "ca-bundle-client.crt"
    ProxyPass "/video" "http://stream.local:9002"
    ProxyPassReverse "/video" "http://stream.local:9002"
</VirtualHost>
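As a side note, on a Debian/Ubuntu-style Apache install a config like the one above typically also needs mod_ssl and the proxy modules enabled first; a quick sketch, assuming the a2enmod tooling is present:
sudo a2enmod ssl proxy proxy_http
sudo systemctl reload apache2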
It is important to note that all three of the above options must run within your LAN/VPC/locally, as otherwise you are exposing your stream. You gotta proxy it within your trusted network, and only then expose the wrapped stream to the outside net.
Motion is still actively maintained here (last commit 25 days ago), and I had a similar problem.
Motion allows us to use HTTPS with the following settings:
# for web UI
webcontrol_tls on
webcontrol_cert /full/path/to/motion.crt
webcontrol_key /full/path/to/motion.key
# only for streams
# requires webcontrol_cert & webcontrol_key
stream_tls on
For local needs you can use it with a self-signed certificate, as I did:
sudo apt -y install openssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -out motion.crt -keyout motion.key
sudo chown motion:motion motion.crt
sudo chown motion:motion motion.key
Then edit motion.conf as described above and restart it.
Note: Motion will serve HTTPS only.
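With stream_tls on, the iframe from the question can then point at the HTTPS stream directly (same host and port placeholders as before); note that with a self-signed certificate the browser will still warn unless the certificate is trusted:
<iframe src="https://example.com:<Motion-Port>" width="1300" height="740"></iframe>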
Hope it helps someone.
I feel like this is a basic question but I'm struggling to find anything concrete in my research. This must be a common problem and I'm not sure what to google.
I'm running an air-gapped Kubernetes cluster with a bunch of services which all have UIs. My services are exposed using NodePort. I can navigate to a UI by going to <node-ip>:<NodePort>. I have DNS set up using dnsmasq, so I can access the URL at example.domain.com:NodePort.
I want to "hide" the nodeport portion of the url so that users/clients can access apps at example.domain.com/appname.
I'm running an Apache web server to serve some files, and I have implemented a bunch of redirects, e.g.
Redirect permanent /appname http://example.domain.com:30000/
which works semi-nicely when accessing the UIs via the Firefox browser, e.g. example.domain.com/appname. This does change the URL in the user's address bar, but I can live with that. The problem is that some clients don't automatically follow the redirect to http://example.domain.com:30000/ and instead just present the 301 status code.
Can somebody point me in the right direction please.
Thanks
After seeing Ijaz's answer I was able to refine my Google search a little and came up with the below:
/etc/hosts
192.168.100.1 example.domain.com gitlab.domain.com example
<VirtualHost *:80>
    ServerName gitlab.domain.com
    ProxyPass / http://example.domain.com:30100/
    ProxyPassReverse / http://example.domain.com:30100/
</VirtualHost>
systemctl restart httpd dnsmasq
If you navigate to gitlab.domain.com you will be redirected to the correct port (30100).
The downside to this is that I have to create a domain name for every application that I deploy. I would have preferred to do something similar to:
/etc/hosts
192.168.100.1 example.domain.com example
<VirtualHost *:80>
    ServerName example.domain.com
    ProxyPass /gitlab http://example.domain.com:30100/
    ProxyPassReverse /gitlab http://example.domain.com:30100/
    ProxyPass /jira http://example.domain.com:30111/
    ProxyPassReverse /jira http://example.domain.com:30111/
</VirtualHost>
However, when I navigated to example.domain.com/gitlab, the backend used its own absolute paths: GitLab's landing page is /users/sign_in, so the browser ended up at example.domain.com/users/sign_in and displayed "Not Found. The requested URL /users/sign_in was not found on the server."
I couldn't figure out the correct configuration. If anyone has any further thoughts on how to fix this, please let me know.
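For what it's worth, the usual cause of that /users/sign_in symptom is that the backend doesn't know it is being served under a sub-path, so its absolute links and redirects drop the /gitlab prefix. For GitLab (Omnibus) the sub-path can be declared via external_url, after which the proxy target should include the same path; a rough sketch, not tested against this exact setup:
# /etc/gitlab/gitlab.rb
external_url 'http://example.domain.com/gitlab'

# then reconfigure GitLab
sudo gitlab-ctl reconfigure

# and proxy to the matching path
ProxyPass /gitlab http://example.domain.com:30100/gitlab
ProxyPassReverse /gitlab http://example.domain.com:30100/gitlab
Applications that have no such "relative URL root" setting generally cannot be proxied under a sub-path cleanly, which is why the per-hostname approach above is often the simpler option.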
You have to redirect HTTP traffic from port 80 (which is standard) to your NodePort.
For example
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 30000
Using Apache or nginx, you can just use a virtual server that hides the internal ports. I don't think you need any redirection; you just need to serve a URL to external clients from a virtual server on :80 whose backend/upstream nodes are your internal nodes with their NodePorts.
You can find easy and better examples for nginx, HAProxy and others; a minimal nginx sketch follows, and then an Apache example.
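A minimal nginx sketch of the same idea (the node names and the 30000 port are placeholders for your actual nodes and NodePort):
upstream mycluster {
    server node1:30000;
    server node2:30000;
}

server {
    listen 80;
    server_name domain.com;

    location / {
        # hand requests off to the NodePort backends
        proxy_pass http://mycluster;
        proxy_set_header Host $host;
    }
}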
Here is an apache example:
<VirtualHost *:80>
    ProxyRequests off
    ServerName domain.com

    <Proxy balancer://mycluster>
        # WebHead1
        BalancerMember http://node:NodePort
        # WebHead2
        BalancerMember http://node:NodePort

        # Security: technically we aren't blocking anyone,
        # but this is the place to make those changes.
        Require all granted
        # In this example all requests are allowed.

        # Load balancer settings: a simple round-robin
        # balancer, so all webheads take an equal share
        # of the load.
        ProxySet lbmethod=byrequests
    </Proxy>

    # balancer-manager
    # This tool is built into the mod_proxy_balancer module
    # and lets you make simple modifications to the balanced
    # group via a web GUI.
    <Location /balancer-manager>
        SetHandler balancer-manager
        # Recommended: lock this down to your office
        Require host example.org
    </Location>

    # Point of balance
    # This explicitly names the location in the site that we
    # want balanced; in this example we balance "/", i.e.
    # everything in the site.
    ProxyPass /balancer-manager !
    ProxyPass / balancer://mycluster/
</VirtualHost>
I just uploaded a Wildfly web application to my free OpenShift account so my team members can check it out. It's a work in progress, so I don't want people to be able to find and access it on the web. If I want someone to use it, I'll send them the URL "XXX-XXX.rhcloud.com".
Is there a way to prevent people from knowing and accessing my web application on OpenShift?
You can use Basic authentication so that anyone must provide a login/password before accessing your content.
In OpenShift there is an environment variable called $OPENSHIFT_REPO_DIR, which is the path of your working directory, i.e. /var/lib/openshift/myuserlongid/app-root/runtime/repo/
I created a new environment variable called SECURE holding the folder path.
rhc set-env SECURE=/var/lib/openshift/myuserlongid/app-root/data --app myappname
Then I connect to my app with ssh
rhc ssh myappname
And create the .htpasswd file
htpasswd -c $SECURE/.htpasswd <username>
Note: The -c option to htpasswd creates a new file, overwriting any existing htpasswd file. If your intention is to add a new user to an existing htpasswd file, simply drop the -c option.
.htaccess file
AuthType Basic
AuthName "Authentication"
AuthUserFile ${SECURE}/.htpasswd
Require valid-user
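Once the .htaccess and .htpasswd are in place, a quick way to confirm the protection works is to request the site with and without credentials (the URL and username are placeholders; curl prompts for the password when it is omitted):
# should return 401 Unauthorized
curl -I https://XXX-XXX.rhcloud.com/
# should return 200 OK with valid credentials
curl -I -u <username> https://XXX-XXX.rhcloud.com/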
I am not sure if you can configure OpenShift so that the URL is private; however, I am sure you can hack your way around it. Instead of hosting your app at "XXX-XXX.rhcloud.com", you can set the root URL of your app to "XXX-XXX.rhcloud.com/some_hash" (for example: XXX-XXX.rhcloud.com/2d6541ff807c289fc686ad64f10509e0e74ba0be22b0462aa0ac3a7a54dd20073101ddd5843144b9a9ee83d0ba882f35d49527e3e762162f76cfd04d355411f1 )
When it comes to people finding your website on search engines, you can block crawlers with robots.txt or a noindex meta tag. You can read further about them here and here.
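For reference, blocking all well-behaved crawlers takes only a couple of lines; robots.txt goes at the site root and the meta tag goes in each page's <head> (keep in mind this only discourages indexing, it does not restrict access):
# robots.txt
User-agent: *
Disallow: /

<!-- per-page alternative -->
<meta name="robots" content="noindex, nofollow">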
Domain: https://www.amz2btc.com
Analysis from SSL Labs: https://www.ssllabs.com/ssltest/analyze.html?d=amz2btc.com
All my desktop browsers open this fine. Mobile Firefox opens this fine. Only when I tried with mobile Chrome did I get the error: err_cert_authority_invalid
I know very little about SSL, so I can't really make sense of the SSL report or why this error is coming up. If someone could ELI5, that would be ideal. :)
I just spent the morning dealing with this. The problem wasn't that I had a certificate missing. It was that I had an extra.
I started out with my ssl.conf containing my server key and three files provided by my SSL certificate authority:
# Server Certificate:
SSLCertificateFile /etc/pki/tls/certs/myserver.cer
# Server Private Key:
SSLCertificateKeyFile /etc/pki/tls/private/myserver.key
# Server Certificate Chain:
SSLCertificateChainFile /etc/pki/tls/certs/AddTrustExternalCARoot.pem
# Certificate Authority (CA):
SSLCACertificateFile /etc/pki/tls/certs/InCommonServerCA.pem
It worked fine on desktops, but Chrome on Android gave me err_cert_authority_invalid
A lot of headaches, searching and poor documentation later, I figured out that it was the Server Certificate Chain:
SSLCertificateChainFile /etc/pki/tls/certs/AddTrustExternalCARoot.pem
That was creating a second certificate chain which was incomplete. I commented out that line, leaving me with
# Server Certificate:
SSLCertificateFile /etc/pki/tls/certs/myserver.cer
# Server Private Key:
SSLCertificateKeyFile /etc/pki/tls/private/myserver.key
# Certificate Authority (CA):
SSLCACertificateFile /etc/pki/tls/certs/InCommonServerCA.pem
and now it's working on Android again. This was on Linux running Apache 2.2.
I had this same problem while hosting a web site via Parse and using a Comodo SSL cert resold by NameCheap.
You will receive two cert files inside of a zip folder:
www_yourdomain_com.ca-bundle
www_yourdomain_com.crt
You can only upload one file to Parse:
(screenshot: Parse SSL cert input box)
In terminal combine the two files using:
cat www_yourdomain_com.crt www_yourdomain_com.ca-bundle > www_yourdomain_com_combine.crt
Then upload to Parse. This should fix the issue with Android Chrome and Firefox browsers. You can verify that it worked by testing it at https://www.sslchecker.com/sslchecker
For those having this problem on IIS servers.
Explanation: sometimes certificates carry a URL of an intermediate certificate instead of the actual certificate. Desktop browsers can download the missing intermediate certificate using this URL, but older mobile browsers are unable to do that, so they throw this warning.
You need to
1) make sure all intermediate certificates are served by the server;
2) disable unneeded certification paths in IIS - under "Trusted Root Certification Authorities", "disable all purposes" for the certificate that triggers the download.
P.S. My colleague has written a blog post with more detailed steps: https://www.jitbit.com/maxblog/21-errcertauthorityinvalid-on-android-and-iis/
The report from SSLabs says:
This server's certificate chain is incomplete. Grade capped to B.
....
Chain Issues Incomplete
Desktop browsers often have chain certificates cached from previous connections or download them from the URL specified in the certificate. Mobile browsers and other applications usually don't.
Fix your chain by including the missing certificates and everything should be right.
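A quick way to see exactly which certificates the server is actually sending (and therefore whether an intermediate is missing) is openssl's s_client; the hostname here is the one from the question:
openssl s_client -connect amz2btc.com:443 -servername amz2btc.com -showcerts </dev/null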
I hope I am not too late; this solution worked for me. I am using a COMODO SSL certificate, the above solutions seemed to become invalid over time, and my website is lifetanstic.co.ke.
Instead of contacting Comodo Support to obtain a CA bundle file, you can do the following:
When you get your new SSL cert from Comodo (by email) they have a zip file attached. You need to unzip it and open the following files in a text editor like Notepad:
AddTrustExternalCARoot.crt
COMODORSAAddTrustCA.crt
COMODORSADomainValidationSecureServerCA.crt
Then copy the text of each ".crt" file and paste the texts one above the other into the "Certificate Authority Bundle (optional)" field.
After that, just add the SSL cert as usual in the "Certificate" field, click the "Autofill by Certificate" button, and hit "Install".
Inspired by this gist: https://gist.github.com/ipedrazas/6d6c31144636d586dcc3
I also had a problem with the chain and managed to solve it using this guide: https://gist.github.com/bradmontgomery/6487319
If you're like me and using AWS and CloudFront, here's how to solve the issue. It's similar to what others have shared, except you don't use your domain's crt file, just what Comodo emailed you.
cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > ssl-bundle.crt
This worked for me, and my site no longer displays the SSL warning in Chrome on Android.
A decent way to check whether there is an issue in your certificate chain is to use this website:
https://www.digicert.com/help/
Plug in your test URL and it will tell you what may be wrong. We had an issue with the same symptom as you, and our issue was diagnosed as being due to intermediate certificates.
SSL Certificate is not trusted
The certificate is not signed by a trusted authority (checking against
Mozilla's root store). If you bought the certificate from a trusted
authority, you probably just need to install one or more Intermediate
certificates. Contact your certificate provider for assistance doing
this for your server platform.
I solved my problem with these commands:
cat __mydomain_com.crt __mydomain_com.ca-bundle > __mydomain_com_combine.crt
and after:
cat __mydomain_com_combine.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > mydomain.pem
Then, in my domain's nginx conf, I put the following in the server block for port 443:
ssl_certificate ssl/mydomain.pem;
ssl_certificate_key ssl/mydomain.private.key;
And don't forget to restart nginx:
service nginx restart
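Before restarting, it can be worth letting nginx validate the new configuration, so a typo in the certificate paths doesn't take the site down:
sudo nginx -t && sudo service nginx restart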
I had the same problem, but the response made by Mike A helped me figure it out:
I had my certificate, an intermediate certificate (Gandi), another intermediate (UserTrustRSA), and finally the root CA certificate (AddTrust).
So first I made a chain file with Gandi+UserTrustRSA+AddTrust and specified it with SSLCertificateChainFile, but it didn't work.
Then I tried MikeA's answer by just putting the AddTrust cert in a file, specifying it with SSLCACertificateFile and removing SSLCertificateChainFile, but that didn't work either.
So finally I made a chain file with only Gandi+UserTrustRSA specified by SSLCertificateChainFile, and another file with only the root CA specified by SSLCACertificateFile, and it worked.
# Server Certificate:
SSLCertificateFile /etc/ssl/apache/myserver.cer
# Server Private Key:
SSLCertificateKeyFile /etc/ssl/apache/myserver.key
# Server Certificate Chain:
SSLCertificateChainFile /etc/ssl/apache/Gandi+UserTrustRSA.pem
# Certificate Authority (CA):
SSLCACertificateFile /etc/ssl/apache/AddTrust.pem
Seems logical when you read it back, but I hope it helps.
I guess you should install the CA certificate from your certificate authority:
ssl_trusted_certificate ssl/SSL_CA_Bundle.pem;
Just do the following for Version 44.0.2403.155 dev-m:
Privacy --> Content settings --> Do not allow any site to run JavaScript
Problem solved.
My GitLab is on a virtual machine on a host server. I reach the VM on a non-standard SSH port (766), which an iptables rule then forwards from host:766 to vm:22.
So when I create a new repo, the instructions to add a remote provide a malformed URL, as they don't use port 766. For instance, the web interface gives me this:
Malformed
git remote add origin git@git.domain.com:group/project.git
Instead of an URL containing :766/ before the group.
Wellformed
git remote add origin git@git.domain.com:766/group/project.git
So each time I create a repo, I have to make the modification manually, and the same goes for my collaborators.
How can I fix that?
In Omnibus-packaged versions you can modify that property in the /etc/gitlab/gitlab.rb file:
gitlab_rails['gitlab_shell_ssh_port'] = 766
Then, you'll need to reconfigure GitLab:
# gitlab-ctl reconfigure
Your URIs will then be correctly displayed as ssh://git@git.domain.com:766/group/project.git in the web interface.
If you configure the ssh_port correctly in config/gitlab.yml, the web pages will show the correct repo URL.
## GitLab Shell settings
gitlab_shell:
...
# If you use non-standard ssh port you need to specify it
ssh_port: 766
P.S.
The correct URL is:
ssh://git@git.domain.com:766/group/project.git
edit: after the change you need to clear caches, etc:
bundle exec rake cache:clear assets:clean assets:precompile RAILS_ENV=production
N.B.: this was tested on an old GitLab version (v5-v6) and might not be suitable for a modern instance.
You can achieve similar behavior in a 2 step process:
1. Edit: config/gitlab.yml
On the server, set the port to the one you use:
ssh_port: 766
2. Edit ~/.ssh/config
On your machine, add the following section corresponding to your gitlab:
Host sub.domain.com
Port 766
Limitation
You will need to repeat this operation on each user's computer…
References
GitLab and a non-standard SSH port
Easy way to fix this issue:
ssh://git@my-server:4837/~/test.git
git clone -v ssh://git@my-server:4837/~/test.git
Reference URL
I know there have been a few threads on this before, but I have tried absolutely everything suggested (that I could find) and nothing has worked for me thus far...
With that in mind, here is what I'm trying to do:
First, I want to allow users to publish pages and give them each a subdomain of their choice (ex: user.example.com). From what I can gather, the best way to do this is to map user.example.com to example.com/user with mod_rewrite and .htaccess - is that correct?
If that is correct, can somebody give me explicit instructions on how to do this?
Also, I am doing all of my development locally, using MAMP, so if somebody could tell me how to set up my local environment to work in the same manner (I've read this is more difficult), I would greatly appreciate it. Honestly, I have been trying everything to no avail, and since this is my first time doing something like this, I am completely lost.
Some of these answers have been REALLY helpful, but for the system I have in mind, manually adding a subdomain for each user is not an option. What I'm really asking is how to do this on the fly, and redirect wildcard.example.com to example.com/wildcard -- the way Tumblr is set up is a perfect example of what I'd like to do.
As far as how to set up the DNS subdomain wildcard, that is a function of your DNS hosting provider. The steps differ depending on which provider you use, so it would be a better question for them.
Once you've set that up with the DNS host, from your web app you really are just URL rewriting, which can be done with some sort of module for the web server itself, such as isapi rewrite if you're on IIS (this would be the preferred route if possible). You could also handle rewriting at the application level as well (like using routing if on ASP.NET).
You'd rewrite the URL so http://myname.example.com would become http://example.com/something.aspx?name=myname or something. From there on out, you just handle it as if the myname value was in the query string as normal. Does that make sense? Hope I didn't misunderstand what you're after.
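On Apache, that rewrite can be sketched with mod_rewrite, capturing the subdomain from the Host header into a query-string parameter (the script name and the name parameter are made up for illustration):
RewriteEngine On
# skip the bare domain and www
RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
# capture the subdomain
RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
# %1 refers to the capture from the RewriteCond above
RewriteRule ^(.*)$ /something.aspx?name=%1 [QSA,L]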
I am not suggesting that you create a subdomain for each user, but instead create a wildcard subdomain for the domain itself, so anything.example.com (basically *.example.com) goes to your site. I have several domains set up with MyDomain; their instructions for setting this up are as follows:
Yes, you can configure a wildcard, but it will only work if you set it up as an A record. Wildcards do not work with a CNAME. To use a wildcard, you use the asterisk character '*'. For example, if you create an A record using a wildcard, *.example.com, anything that is entered in the place where the '*' is located will resolve to the specified IP address. So if you enter 'www', 'ftp', 'site', or anything else before the domain name, it will always resolve to the IP address.
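In zone-file terms, that wildcard record would look something like this (203.0.113.10 stands in for your server's IP address):
*.example.com.    IN    A    203.0.113.10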
I have some that are set up in just this way, having *.example.com go to my site. I can then read the base URL in my web app to see whether ryan.example.com or bill.example.com was the address actually used. I can then either:
Use URL rewriting so that the subdomain becomes a part of the query string OR
Simply read the host value from the accessed URL and perform some logic based on that value.
Does that make sense? I have several sites set up in just this exact way: create the wildcard for the domain with the DNS host and then simply read the host, or base domain from the URL to decide what to display based on the subdomain (which was actually a username)
Edit 2:
There is no way to do this without a DNS entry. The "online world" needs to know that name1.example.com, name2.example.com,..., nameN.example.com all go to the IP address for your server. The only way to do this is with the appropriate DNS entry. You have to add the wildcard DNS entry for your domain with your DNS host. Then it's just a matter of you reading the subdomain from the URL and taking the appropriate action in your code.
The best thing to do if you are running *AMP is to do what Thomas suggests and do virtual hosts in Apache. You can do this either with or without the redirect you describe.
Virtual hosts
Most likely you will want to do name-based virtual hosts, as it's easiest to set up and only requires one IP address (so will also be easy to set up and test on your local MAMP machine). IP-based virtual hosts is better in some other respects, but you have to have an IP address for each domain.
This Wikipedia page discusses the differences and links to a good basic walk-thru of how to do name-based vhosts at the bottom.
On your local machine for testing, you'll also have to set up fake DNS names in /etc/hosts for your fake test domain names. i.e. if you have Apache listening on localhost and set up vhost1.test.domain and vhost2.test.domain in your Apache configs, you'd just add these domains to the 127.0.0.1 line in /etc/hosts, after localhost:
127.0.0.1 localhost vhost1.test.domain vhost2.test.domain
Once you've done the /etc/hosts edit and added the name-based virtual host configs to your Apache configuration file(s), that's it, restart Apache and your test domains should work.
Redirect with mod_rewrite
If you want to do redirects with mod_rewrite (so that user.example.com isn't directly hosted and instead redirects to example.com/user), then you will also need to do a RewriteCond to match the subdomain and redirect it:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^subdomain\.example\.com
RewriteRule ^(.*)$ http://example.com/subdomain$1 [R]
You can put this in a .htaccess or in your main Apache config.
You will need to add a pair of rules like the last two for each subdomain you want to redirect. Or, you may be able to capture the subdomain in a RewriteCond and use one wildcard rule to redirect any *.example.com to the corresponding path under example.com -- but that smells really bad to me from a security standpoint.
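For illustration, such a wildcard redirect might look roughly like this inside the server config (a sketch; it redirects every subdomain, including www, which is part of why it is risky):
RewriteEngine On
# capture the subdomain from the Host header
RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
# %1 is the captured subdomain, $1 the original request path
RewriteRule ^(.*)$ http://example.com/%1$1 [R,L]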
All together, vhosts and redirect
It's better to be more explicit and set up a virtual host configuration section for each hostname you want to listen for, and put the rewrite rules for each of these hostnames inside its virtual host config. (It is always more secure and faster to put this kind of stuff inside your Apache config and not .htaccess, if you can help it -- .htaccess slows performance because Apache is constantly scouring the filesystem for .htaccess files and reparsing them, and it's less secure because these can be screwed up by users.)
All together like that, the vhost config inside your Apache configs would be:
NameVirtualHost 127.0.0.1:80

# Your "default" configuration must go first
<VirtualHost 127.0.0.1:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /www/siteroot
    # etc.
</VirtualHost>

# First subdomain you want to redirect
<VirtualHost 127.0.0.1:80>
    ServerName vhost1.example.com
    RewriteEngine On
    RewriteRule ^(.*)$ http://example.com/vhost1$1 [R]
</VirtualHost>

# Second subdomain you want to redirect
<VirtualHost 127.0.0.1:80>
    ServerName vhost2.example.com
    RewriteEngine On
    RewriteRule ^(.*)$ http://example.com/vhost2$1 [R]
</VirtualHost>
I realize that I'm pretty late responding to this question, but I had the same problem in regards to a local development solution. In another SO thread I found better solutions and thought I would share them for anyone with the same question in the future:
VMware owns a wildcard domain that resolves any subdomain to 127.0.0.1:
vcap.me resolves to 127.0.0.1
www.vcap.me resolves to 127.0.0.1
Or, for more versatility, 37signals owns a domain that maps any subdomain to any given IP using a specific format:
127.0.0.1.xip.io resolves to 127.0.0.1
www.127.0.0.1.xip.io resolves to 127.0.0.1
db.192.168.0.1.xip.io resolves to 192.168.0.1
see xip.io for more info
I am on Ubuntu 16.04, and since 14.04 I've been using the solution provided by Dave Evans here; it works fine for me.
Install dnsmasq
sudo apt-get install dnsmasq
Create a new file localhost.conf under the /etc/dnsmasq.d directory with the following line:
#file /etc/dnsmasq.d/localhost.conf
address=/localhost/127.0.0.1
Edit /etc/dhcp/dhclient.conf and add the following line
prepend domain-name-servers 127.0.0.1;
(You’ll probably find that this line is already there and you just need to uncomment it.)
The last step is to restart the service and re-run dhclient:
sudo systemctl restart dnsmasq
sudo dhclient
Finally, you should check if it's working.
dig whatever.localhost
note:
If you want to use it on your web server, simply change the 127.0.0.1 in the dnsmasq config to your actual IP address.
I had to do exactly the same for one of my sites. You can follow the following steps
If you've cPanel on your server, create a subdomain *, if not, you'd have to set-up an A record in your DNS (for BIND see http://ma.tt/2003/10/wildcard-dns-and-sub-domains/). On your dev. server you'd be far better off faking subdomains by adding each to your hosts file.
(If you used cPanel you won't have to do this.) You'll have to add something like the following to your Apache vhosts file. It largely depends on what type of server (shared or not) you're running. THE FOLLOWING CODE IS NOT COMPLETE; IT'S JUST TO GIVE DIRECTION. NOTE: ServerAlias example.com *.example.com is important.
<VirtualHost 127.0.0.1:80>
DocumentRoot /var/www/
ServerName example.com
ServerAlias example.com *.example.com
</VirtualHost>
Next, you can use a PHP script to check the "Host" header, find out the subdomain, and serve content accordingly.
First, I want to allow users to publish pages and give them each a subdomain of their choice (ex: user.mysite.com). From what I can gather, the best way to do this is to map user.mysite.com to mysite.com/user with mod_rewrite and .htaccess - is that correct?
You may be better off using virtual hosts. That way, each user can have a webserver configuration pretty much independent of others.
The syntax goes something like this:
<VirtualHost *:80>
DocumentRoot /var/www/user
ServerName user.mysite.com
...
</VirtualHost>
From what I have seen, many web hosts set up a virtual host on Apache.
So if your www.mysite.com is served from /var/www, you could create a folder for each user. Then map the virtual host to that folder.
With that, both mysite.com/user and user.mysite.com work.
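If each user's content really does live in its own folder, one possible shortcut (a sketch, assuming mod_vhost_alias is enabled and /var/www/<username> directories exist) is to let Apache derive the document root from the hostname:
<VirtualHost *:80>
    ServerAlias *.mysite.com
    UseCanonicalName Off
    # %1 is the first dot-separated part of the hostname,
    # so user.mysite.com maps to /var/www/user
    VirtualDocumentRoot /var/www/%1
</VirtualHost>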
As for your test environment, if you are on Windows, I would suggest editing your HOSTS file to map mysite.com to your local PC (127.0.0.1), as well as any subdomains you set up for testing.
The solution I found for Ubuntu 18.04 is similar to this one but involves NetworkManager config:
Edit the file /etc/NetworkManager/NetworkManager.conf, and add the line dns=dnsmasq to the [main] section
sudo editor /etc/NetworkManager/NetworkManager.conf
should look like this:
[main]
plugins=ifupdown,keyfile
dns=dnsmasq
...
Start using NetworkManager's resolv.conf
sudo rm /etc/resolv.conf
sudo ln -s /var/run/NetworkManager/resolv.conf /etc/resolv.conf
Create a file with your wildcard configuration
echo 'address=/.localhost/127.0.0.1' | sudo tee /etc/NetworkManager/dnsmasq.d/localhost-wildcard.conf
Reload NetworkManager configuration
sudo systemctl reload NetworkManager
Test it
dig localdomain.localhost
You can also add any other domain, quite useful for some types of authentication when using a local development setup.
echo 'address=/.local-dev.workdomain.com/127.0.0.1' | sudo tee /etc/NetworkManager/dnsmasq.d/workdomain-wildcard.conf
Then this works:
dig petproject.local-dev.workdomain.com
;; ANSWER SECTION:
petproject.local-dev.workdomain.com. 0 IN A 127.0.0.1