AWS is now deprecating all of its Apache versions on Elastic Beanstalk, which means I suddenly can no longer use .htaccess files to rewrite URLs.
No matter how much I read their ebextensions documentation, I can't seem to find out how to replace this functionality.
I've tried placing a config file at:
/.ebextensions/nginx/conf.d/custom.conf
with the following code:
location / {try_files $uri $uri/ /test.php?$args;}
It seems like Elastic Beanstalk doesn't even recognize the config file; nothing happens.
What am I doing wrong here with my package?
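One thing worth checking (an assumption, since the platform version isn't stated): on the newer Amazon Linux 2 platforms, nginx customizations are read from the .platform directory rather than from .ebextensions, and snippets that should land inside the default server block go under the elasticbeanstalk subfolder. A minimal sketch of that layout, reusing the same try_files rule (the file name is arbitrary, and depending on the platform's bundled config you may need to override its default location block rather than add a second one):

.platform/nginx/conf.d/elasticbeanstalk/rewrite.conf

# Fall back to test.php, mirroring the old .htaccess rewrite
location / {
    try_files $uri $uri/ /test.php?$args;
}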
I currently have a static website running on an nginx server. I am moving it over to React, so I need to get my website working as an NPM project.
Currently, all the HTML pages work without the .html file extension in the URL. For example, www.xyz.com/login is equivalent to www.xyz.com/login.html (but looks cleaner in the browser).
How do I get this to behave in the same way on an NPM based web project?
It works perfectly on nginx using the following line in the nginx.conf file:
location / { try_files $uri $uri/ /$uri.html /index.html; }
What is the npm equivalent?
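Not a definitive answer, but one possibility, assuming the production build is served with the serve package from npm (a common choice for static React builds): serve supports clean URLs, so /login resolves to login.html much like the try_files rule above. A minimal sketch, with serve.json placed in the folder being served:

serve.json

{
  "cleanUrls": true
}

Then run, for example: npx serve build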
So I have hosted my Angular app on S3 with a CloudFront distribution. I do file revisioning (using grunt-filerev) to make sure that I never serve stale content. But how should I version the index.html file? It's required because all other files are referenced inside index.html.
I have configured my bucket to be used as a static site, so it just picks up index.html when I reference the bucket in the URL.
CloudFront says that you should set the min TTL to 0 so it always hits the origin to serve the content. But I don't need this, since I am doing file revisioning of all my files (except index.html); I can take advantage of CDN caching for those files.
They also say that in order to invalidate a single object, you should set its max-age header to 0. I tried adding the following to my index.html:
<meta http-equiv="Cache-Control" content="public, must-revalidate, proxy-revalidate, max-age=0"/>
But this is not reflected once you upload it to S3. Do I need to explicitly set the headers on S3 using s3cmd or the dashboard? And do I need to do this every time index.html changes and I upload it?
I am aware that I could invalidate a single file from the command line, but it's a repetitive process, and it would be great if it could take care of itself just by deploying to S3.
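For reference, one way to set such a header explicitly at upload time is a separate copy step for index.html with the AWS CLI (a sketch; the bucket name is a placeholder, and yes, it would need to be repeated or scripted on every deploy, since Cache-Control lives in the object's metadata on S3):

aws s3 cp index.html s3://your-bucket/index.html \
  --content-type "text/html" \
  --cache-control "no-cache, max-age=0"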
Although the accepted answer is correct if you are using s3cmd, I was using the AWS CLI, so what I did was run the following two commands:
First, to actually deploy the code:
aws s3 sync ./ s3://bucket-name-here/ --delete
Then, to create an invalidation on CloudFront:
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths /index.html
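If more than index.html needs refreshing, the same command accepts multiple paths, or a wildcard (quote it so your shell doesn't expand it):

aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/" "/index.html"
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/*"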
Answering my own question: I deploy my site to S3 using the s3cmd tool, and there is an option you can provide to invalidate the CloudFront cache for all the files that changed (the diff between your dist folder and the S3 bucket). This invalidates the cache of all the changed files, including the index file. It usually takes around 15-20 minutes for the new changes to be reflected in production.
Here is the command:
s3cmd sync --acl-public --reduced-redundancy --delete-removed --cf-invalidate [your-distribution-folder]/* s3://[your-s3-bucket]
Note: On macOS, you can install this tool via: brew install s3cmd.
Hope this helps.
You can automate the process using Lambda. It allows you to create a function that will perform certain actions (object invalidation, in your case) in response to certain events (a new file in S3).
More information here:
https://aws.amazon.com/documentation/lambda/
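A rough sketch of what such a function could look like (Python with boto3; the distribution ID and the S3 trigger wiring are assumptions you would fill in yourself):

import time
from urllib.parse import unquote_plus

import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "YOUR_DISTRIBUTION_ID"  # placeholder

def handler(event, context):
    # Turn the S3 object keys from the event into CloudFront paths
    paths = ["/" + unquote_plus(record["s3"]["object"]["key"])
             for record in event["Records"]]
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            # CallerReference must be unique for each invalidation request
            "CallerReference": str(time.time()),
        },
    )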
I have had the same problem with my static website hosted on S3 and distributed with CloudFront. In my case, invalidating /index.html didn't work.
I talked with AWS support, and what I needed to do was invalidate only /. This is because I access my website with the https://website.com/ URL and not with https://website.com/index.html (which would have served the updated content after a /index.html invalidation). This was done in the AWS CloudFront console, not with the AWS CLI.
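If you would rather do the same thing from the CLI than from the console, the equivalent should be to invalidate the root path (shown here with the AWS CLI; the distribution ID is a placeholder):

aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/"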
When you sync a local directory with S3, you can do this:
aws s3 sync ./dist/ s3://your-bucket --delete
aws s3 cp \
s3://your-bucket s3://your-bucket \
--exclude 'index.html' --exclude 'robots.txt' \
--cache-control 'max-age=604800' \
--metadata-directive REPLACE --acl public-read \
--recursive
The first command is just a normal sync; the second command makes S3 return a Cache-Control header for all the files except index.html and robots.txt.
Then your SPA can be fully cached (except index.html).
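To complete the picture, the same pattern can cover index.html itself with a zero (or very short) cache lifetime, so browsers and CloudFront always revalidate it (a sketch; adjust the header value to taste):

aws s3 cp \
  s3://your-bucket/index.html s3://your-bucket/index.html \
  --cache-control 'no-cache, max-age=0' \
  --content-type 'text/html' \
  --metadata-directive REPLACE --acl public-read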
If you use s3cmd sync and utilize the --cf-invalidate option, you may also have to specify --cf-invalidate-default-index, depending on your setup.
From the man page:
When using Custom Origin and S3 static website, invalidate the default index file.
This will ensure that your index document (most likely index.html) is also invalidated; otherwise it will be skipped by the sync, regardless of whether it was updated.
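In practice that just means adding the flag to the sync command shown earlier, along these lines (the bracketed paths are placeholders, as before):

s3cmd sync --acl-public --delete-removed --cf-invalidate --cf-invalidate-default-index [your-distribution-folder]/* s3://[your-s3-bucket]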
I'm getting a 500 Internal Server Error while trying to install Bolt.
First time Bolt user (been using Drupal for a few years).
I'm running on a VPS (with cPanel/WHM).
PHP version 5.4.37, Apache version 2.4.12
PHP memory_limit = 128M
PDO extension, curlssl extension, and GD extension are enabled
Chrome version 40.0.2214.111
mod_rewrite, SQLite, and MySQL 5.6.22
Downloaded the latest version and installed the traditional way (FTP)
Unzipped and updated permissions
.htaccess is there and looks the same as the one referenced on the Bolt installation page
I tried a MySQL database as well as leaving it as-is to use the SQLite database
Checked the host configuration and AllowOverride is enabled
Tried enabling RewriteBase in .htaccess as well as the "FallbackResource /index.php" method
Bolt is in the root directory (not a subfolder)
I have PHP compiled as FCGI with suEXEC on and Ruid2 off.
All I get is the 500 Internal Server Error. What am I missing?
I had the same problem with Bolt 3.0.0 today.
It was solved by removing
<IfModule mod_negotiation.c>
Options -MultiViews
</IfModule>
from .htaccess
Looks like I just figured it out.
I disabled "Zend Guard Loader" on my VPS (cPanel/WHM >> EasyApache) and now I'm good to go!
I had a similar problem with Bolt 2.2.20 today. If you're using cPanel, look for something like 'jail php for WordPress'. Disable that and you'll be ready to go.
Here's the description shown:
This plugin will jail anonymous web page requests from users that are not logged in. Jailed requests will be read only, and linux will prevent writes to the filesystem. This will prevent almost all hacks.
I've built my application on localhost and it runs without any errors. I chose OpenShift to host my application code, but I have a problem making it work as well as it does on my localhost.
I want to add the AllowEncodedSlashes directive and set it to On in my Apache 2 configuration file. I have tried editing the file at ~/php/configuration/etc/conf/httpd.conf and then restarting the server using ctl_all restart, but the result is an HTTP 400 (Bad Request) error. Before I added this directive to httpd.conf the result was an HTTP 404 error, so I am just not sure whether the changes took effect, or whether Apache is misbehaving.
Does anyone know how to make this work for me?
See if you can add it to the .htaccess file instead of the httpd.conf file. Also, the best way to troubleshoot these problems is by reviewing your application logs for errors. All you have to do is run "rhc tail {appName}" from your client machine (where the rhc client tools are installed). That gives you the current log entries.
To get to the entire log, you'll want to ssh onto the gear(s) on which the language framework/cartridge is installed using this FAQ and run: more ~/{cartridgeID}/logs/*.log
where {cartridgeID} is your framework cartridge like nodejs-0.6, or your embedded cartridge logs like mysql-5.1.
I created a feature request for this. See this Trello card and feel free to vote it up.
Newbie question: my first attempt at ColdFusion/MySQL and getting it to run locally.
I'm running the Apache web server (2.2). I have imported two .sql files into MySQL Workbench (5.2), forward-engineered a database from them, and set up a working database connection and MySQL server, which is also running. In the ColdFusion 8 Administrator I added my database as a data source.
I thought this would be enough :-)
Still, on http://localhost I'm only getting an index of all the files in my Apache htdocs folder. If I open one of the files, it just shows the ColdFusion markup/HTML source code. Nothing is parsed.
Thanks for any hints on what I could be missing.
EDIT:
Three questions while trying to implement this:
1. Can I load modules using absolute paths, like D:/Coldfusion8/lib...?
2. My lib/wsconfig folder only contains a DLL file named jrunwin32.dll. Should I try to use this?
3. The lib/wsconfig folder does not contain a jrunserver.store file. Not sure what to do here.
It sounds as if your Apache config is not correct, since it doesn't seem to be handling the .cfm files properly.
First of all, is there a specific reason for using CF8? CF9 has been around for a while, so if you're starting from scratch I'd advise taking a look at that instead.
That aside, I'd check for the following in your httpd.conf (or whatever your Apache config file is named).
First, check that index.cfm is acceptable as a DirectoryIndex (you can have other indexes as well):
DirectoryIndex index.cfm
Second, check that the JRun handler is configured properly (again, in httpd.conf):
LoadModule jrun_module /opt/coldfusion8/runtime/lib/wsconfig/1/mod_jrun22.so
<IfModule mod_jrun22.c>
JRunConfig Verbose false
JRunConfig Apialloc false
JRunConfig Ignoresuffixmap false
JRunConfig Serverstore /opt/coldfusion8/runtime/lib/wsconfig/1/jrunserver.store
JRunConfig Bootstrap 127.0.0.1:51801
AddHandler jrun-handler .jsp .jws .cfm .cfml .cfc .cfr .cfswf
</IfModule>
This is taken from my development VM; I have CF8 as a single-server install in /opt/coldfusion8/.
Once you have those lines in place (with the paths, ports, etc. appropriate for your environment), restart Apache and it should work fine.
If you have installed CF8 as a Multiserver (or other) install, then please specify and I will look to adjust my advice accordingly.
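On the missing jrunserver.store / wsconfig questions: if the Apache connector was never generated, you may be able to (re)create it with the Web Server Configuration Tool that ships with CF8. The paths below are assumptions for a Windows single-server install like yours (on Linux the tool lives under runtime/bin), and running it with no arguments opens its GUI instead:

REM Hypothetical paths; point -dir at whatever conf directory your Apache install uses
D:\ColdFusion8\runtime\bin\wsconfig.exe -ws apache -dir "C:\Apache2.2\conf" -v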