How to customize an OpenShift repo

I have read about OpenShift cartridges and I still can't see how to simply customize a repo for OpenShift, like this one for WordPress: https://github.com/openshift/wordpress-example
I cloned that repo on my local machine and I'd like to add some new plugins. Can someone explain how, or point me to an article that covers it?

They've updated their WordPress cartridge at https://openshift.redhat.com/app/console/applications. I guess after so many people were interested in having it scale they tweaked it a bit so it could, and when they did they made it almost too easy, it seems.
When you clone an application created from the standard cartridge (scaled or not), all your plugins and themes should be added under
/.openshift. It's important to note that you don't use .zip files in these folders; you'll have to extract your plugins and themes and drop the extracted folders into the appropriate directory.
I'm not sure whether they've designed this cartridge so that plugins uploaded from within WordPress end up in this same directory.
I'm also not so certain whether they dealt with uploading media so that it can scale as well.
What I do know is that after going through this you'll find WordPress up and running, and if you load-test it you'll see (if you check /haproxy-status) that multiple gears start up.
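For example, adding a plugin to the cartridge repo looks roughly like the following. This is a minimal sketch: the plugin name is a placeholder and the exact subdirectory layout under .openshift is an assumption, so check the cartridge's README for the real folder names.
cd my-wordpress-app                              # your local clone of the application repo (hypothetical name)
unzip ~/some-plugin.zip -d .openshift/plugins/   # extract the plugin; don't commit the .zip itself
git add .openshift/plugins/
git commit -m "Add some-plugin"
git push                                         # OpenShift rebuilds and redeploys the app on push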

Related

How to upgrade Magento 1.9.2.2 to 1.9.3.8 and apply security patches on localhost?

I am a beginner Magento developer. I have been working with Magento 1.9.2.2. How do I upgrade to Magento 1.9.3.8 and apply the security patches on my localhost?
Here is a quick and dirty method I sometimes use for upgrades. It is "dirty" because there are much better ways using version control and Composer.
Make a backup of your site. Lots of things can go wrong.
Download clean copies of 1.9.2.2 and 1.9.3.8 from the release archives. Extract them into sub-folders of your Magento folder, let's call them old and new for clarity.
Check for a file called app/etc/applied.patches.list. If it exists then download all those patches from the archive and apply them to the old folder. That folder should now be a pretty good representation of your actual site without any customisations.
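If there are patches to apply, a rough sketch looks like this (the patch file name is a placeholder; the official Magento patches are distributed as self-applying shell scripts):
cd old
sh PATCH_SUPEE-XXXX_CE_1.9.2.2_vN.sh   # repeat for each patch listed in app/etc/applied.patches.list
cd ..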
Open a console (you are using Linux, right?), change to the Magento folder and run this command:
diff -ruN old new | patch -fp1
The -f option means 'force' and assumes common sense answers to any problems that might come up. Let's deal with the consequences of that.
Now find all file changes that were rejected:
find -name "*.rej"
Manually edit each file listed and copy-paste the new code into place. Rejects happen when a core file has already been altered and the patch program cannot work out what to do by itself. In my experience this is more common with older, badly managed sites. If you're lucky there may be none.
Flush Magento's caches. Upgrade scripts will automatically run and update the database. Test all aspects of your site and restore from backup if it is badly broken, otherwise:
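With the default file-based cache that is just the following (a sketch; adjust if your cache is stored in Redis or Memcached):
rm -rf var/cache/*   # flush Magento's file cache; the upgrade scripts run on the next page request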
Clean up temporary files:
find -name "*.rej" -delete
rm -rf old new

Publishing NopCommerce

I have my site up and running, but because of a number of changes I decided to publish an updated version. Before doing so I made a backup of my files and databases on the host, just in case.
This is what I did: I published Nop.Web via FTP, with the configuration set to Release, and in the file publish options I checked "Delete all existing files prior to publish", since I was publishing to the same wwwroot folder. After the publish completed, the nopCommerce installation page appeared (by the way, I would like to keep using the same database as before), even though Settings.txt in the project I was publishing had the correct connection string. I tried two or three times to get through the installation, with no success (error: "One or more sequence..." something like that). I checked Settings.txt on the host and it was empty (no idea why), so I edited it and put the connection string back in.
Now the installation page is gone and my site is running again with all the products and user information (I assume that means the connection string to the database is good), but my theme has been reset to default, along with all my changes to it (footer links, background, logo, favicon, etc.). The only thing that stayed as it should is the Nivo Slider widget, which still displays the correct pictures on this 'reset' theme.
I checked General settings to confirm that the correct theme is selected in the theme settings.
I have also noticed this; I assume those two or three unsuccessful install attempts made some changes in the database:
http://i.imgur.com/wfXQYj6.png
Any suggestions on how to sort this whole thing out? Before publishing I was running my site locally and it was fine, and I have backups of the database and files (the ones I used before this publish).
I am using Nop version 3.4 and Arvixe hosting. Sorry for the long post, but I wanted to describe my steps and the error in as much detail as possible.
Thanks for reading; I'm looking forward to your suggestions.
I haven't tried the publishing features of nopCommerce versions > 3.10, but you can try a more "manual" approach to make sure that the files are properly updated on the server.
In short, you take the files from your local machine that are needed for the built website and upload them to your website folder on the server. You can make a backup and empty the server website folder first.
I presented that approach in this answer:
How to deploy nopCommerce 3.5 to new server from source?
You can check this batch script to see which files need to be sent to the server. The script also includes some suggestions about what else you may need to do to update the website on the server: https://gist.github.com/dan-mirescu/c14cc72e3f8ecca988b7
To publish the nopCommerce application website, the steps are below (a rough command-line sketch follows the list):
Step 1 - Publish the Nop.Web project.
Step 2 - Publish the Nop.Admin project.
Go to the folder where your publish output was created.
Step 3 - Cut all the DLLs from the Administration folder and paste them into the main bin folder for the whole project.
Step 4 - Copy two files from the App_Data folder of your source project, Settings.txt and InstalledPlugins.txt, which are not included in the publish output, and paste them into App_Data in your publish folder. (You need to change the connection string in Settings.txt to match your database host.)
Step 5 - Copy the whole Plugins folder from your source folder (but remember, you need to copy this Plugins folder from the Presentation folder, not from the main source folder where the solution file is).
Step 6 - Your publish output is now ready, and you can deploy it to the hosting server.
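As a rough command-line sketch of steps 3-5 (all paths here are assumptions based on a typical nopCommerce 3.x source tree; on Windows you would do the same with Explorer or xcopy):
SRC=~/nopCommerce   # hypothetical path to your source tree
PUB=~/publish       # hypothetical publish output folder
mv "$PUB"/Administration/bin/*.dll "$PUB"/bin/                                  # step 3: move the Administration DLLs into the main bin
cp "$SRC"/Presentation/Nop.Web/App_Data/Settings.txt "$PUB"/App_Data/           # step 4: carry over Settings.txt...
cp "$SRC"/Presentation/Nop.Web/App_Data/InstalledPlugins.txt "$PUB"/App_Data/   # ...and InstalledPlugins.txt
cp -r "$SRC"/Presentation/Nop.Web/Plugins "$PUB"/                               # step 5: copy the whole Plugins folder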

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service, so I have to log in to my server and run Composer manually (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of the packages in your composer.json be locked down to the exact version number of the item on Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash. It sounds like you are basically there with this, as you say nothing in the packages gets updated when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
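For illustration (the package name and commit hash are placeholders):
composer require vendor/package:1.2.3                 # pin to an exact Packagist version
composer require vendor/package:dev-master#8a3e9c1    # pin a VCS/dev package to a specific commit hash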
Actual deployments
Possible Method 1
My opinion is slightly controversial: for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, and it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers will be enough to handle any remote system failures, that it clogs up the VCS tree, etc. I won't go into those arguments now; there are points for and against (a lot of it opinion based), but since you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By pointing your document root at a symlink on the server, you can build into a new directory and only switch the symlink over to it once you have confirmed that the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the methods above and below).
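A minimal sketch of that flow (the paths and repository URL are assumptions):
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)                    # a fresh build directory per deployment
git clone --depth 1 git@example.com:project.git "$RELEASE"
cd "$RELEASE" && composer install --no-dev --optimize-autoloader
ln -sfn "$RELEASE" /var/www/current                                # document root points at /var/www/current; switch only after the build succeeds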
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you basically maintain a "repository folder" of your vendor packages as zip files. It is an alternative to adding the entire vendor folder to your VCS, but it also protects you against GitHub/Packagist outages, files being removed, and various other potential issues. The packages are retrieved from the artifacts folder and installed directly from the zip files rather than being fetched from a server. This folder can also be stored remotely; think of it as a poor man's private Packagist (which is another option, by the way).
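A minimal sketch of wiring that up (the folder path is an assumption; if the two-argument form doesn't accept the artifact type on your Composer version, add the repository block to composer.json by hand):
composer config repositories.local artifact ../artifacts/   # a folder of package zip files
composer install                                            # packages are resolved from the zips in ../artifacts/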
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (if they exist)
run composer update
generate an artifact of these files (if the above items succeed)
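A rough build-step sketch (the test runner path and archive name are assumptions; BUILD_NUMBER is provided by Jenkins):
set -e                                                            # stop the build on the first failure
composer install --no-dev --optimize-autoloader                   # or composer update, as discussed above
vendor/bin/phpunit                                                # run the test suite, if one exists
tar -czf "build-${BUILD_NUMBER:-dev}.tar.gz" --exclude='.git' .   # package the tested files as an artifact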
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail); it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
Hope this helps :)

asset versioning (js and css) and browser not pulling the most recent asset

I am currently using asset versioning on my Symfony2 projects. Whenever I have a new update to the site, before doing the Assetic dump I change the asset version number first, and then I run
sudo php app/console assetic:dump --env=prod
and clear the cache. However, when I tried it on my Windows machine it still uses the old assets from before the update, which messes up a lot of the layout. What is the best way to prevent this from happening?
I think that you have mixed up assets and the Assetic library. The Assetic library gives you the ability to process your CSS and JS resources, so assetic:dump just processes your JS and CSS files (minimising them, compiling many files into one, or any other processing).
To make your assets accessible you need to run php app/console assets:install. If you want them to always be up to date with your Resources folder, you can add the --symlink option to this command. It will create a symlink web/bundles/yourbundle pointing to your src/YourBundle/Resources/public.
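Putting that together, a typical prod update looks something like this (a sketch assuming a standard Symfony 2.x layout):
php app/console assets:install web --symlink   # expose bundle assets under web/bundles/* via symlinks
php app/console assetic:dump --env=prod        # recompile the processed CSS/JS
php app/console cache:clear --env=prod         # clear the prod cache so the new asset version is picked up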

Get changes from mercurial to FTP site

I work with a partner on a PHP site for a client. We have a common Mercurial repository (on Bitbucket), both local copies and the live site. We have only FTP access to the live site (which can't be changed since it is a hosting package with FTP only).
I want to be able to push changes from the repository to the live site.
Until now I have simply kept track of changed files in the repo and copied them manually with FileZilla - an error-prone and annoying task. My idea is to mount the remote location locally (e.g. using CurlFtpFS) and tell Mercurial to automagically copy changed files to the site. Ideally I'd like to be able to specify which changes to push, but that would be a bonus; it would be sufficient if the current state of the files within the repo were synced.
Is there any good way to do this using linux commandline tools?
My first recommendation is, if at all possible, get a package that allows more access. FTP only is just brutal.
But since you are looking for a real answer to your question, I have two ideas for you:
I would suggest looking into the Mercurial FTP extension. I personally have never used it since I have never gotten stuck in an FTP-only situation (not for a long time at least), but it looks promising. It looks like it will work really well for you if you make sure to tag your production releases (and make sure to use the -uploaded param).
Also, if you only ever want the tip to be installed on your production environment, then you could look at the suggestion Martin Geisler made on the Bitbucket user group a few days ago. Basically his suggestion is to utilize Bitbucket's "ping url" functionality. You would have to write a server-side script/URL handler that would accept that ping, then fetch the tip from Bitbucket (as a zip) and then unzip/unpack it. This is a bit complicated, but if you are looking for complete automation and the tip is always what you want deployed, this could work for you.
One notion is to use the hg archive command:
hg archive /path/to/curlftpfs
which will put a snapshot of your repo in that location -- it will however overwrite any file already there.
Another option is to create a Mercurial clone in that same /path/to/curlftpfs and then just do hg pull ; hg update in it on your local system with the remote location mounted. Setting that up initially will mean transferring the whole thing, but subsequently you'll only be sending deltas.
Some folks don't like this last option because it exposes your entire .hg repository too, but you can block access to that at the web server.
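A rough sketch of both options (the mount point, credentials and paths are placeholders):
curlftpfs ftp://user:password@ftp.example.com /mnt/livesite                   # mount the FTP site locally (requires FUSE)
hg archive /mnt/livesite                                                      # option 1: snapshot the working copy (overwrites existing files)
hg -R /mnt/livesite pull /path/to/local/repo && hg -R /mnt/livesite update    # option 2: if the mount holds a clone
fusermount -u /mnt/livesite                                                   # unmount when done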
I came across this problem a while ago after switching from AWS to a local web host that provides only SSH/FTP.
My previous approach of updating a production site on AWS using "hg pull; hg update -C" can no longer be used on the new host, because they don't have Mercurial installed for shared hosts.
So what I did was mount the remote location over FTP onto a local machine (i.e. your laptop), then run the hg pull and hg update commands locally on that machine at the path where the remote FTP site is mounted.
Windows solution:
BeyondCompare (http://www.scootersoftware.com/) is an awesome piece of software. Apart from being awesome, it can mirror your local folder to the FTP site. It compares files and only transfers what's new.