How to upgrade Magento 1.9.2.2 to 1.9.3.8 and security patch in localhost? - magento-1.9

I am a beginner Magento developer and have been working on Magento 1.9.2.2. How do I upgrade to Magento 1.9.3.8 and apply the security patches on my localhost?

Here is a quick and dirty method I sometimes use for upgrades. It is "dirty" because there are much better ways using version control and Composer.
Make a backup of your site. Lots of things can go wrong.
Download clean copies of 1.9.2.2 and 1.9.3.8 from the release archives. Extract them into sub-folders of your Magento folder, let's call them old and new for clarity.
Check for a file called app/etc/applied.patches.list. If it exists then download all those patches from the archive and apply them to the old folder. That folder should now be a pretty good representation of your actual site without any customisations.
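For reference, Magento 1 security patches ship as shell scripts that you run from the Magento root after copying them there. A typical invocation looks something like this (the patch file name below is only an example; use whichever files applied.patches.list actually names):
cd old
sh PATCH_SUPEE-9767_CE_1.9.2.2_v1.sh
cd ..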
Open a console (you are using Linux, right?), change to the Magento folder and run this command:
diff -ruN old new | patch -fp1
The -f option means 'force' and assumes common sense answers to any problems that might come up. Let's deal with the consequences of that.
Now find all file changes that were rejected:
find -name "*.rej"
Manually edit each file listed and copy-paste the new code into place. Rejects happen when a core file has already been altered and the patch program cannot work out what to do by itself. In my experience this is more common with older, badly managed sites. If you're lucky there may be none.
Flush Magento's caches. The upgrade scripts will then run automatically and update the database. Test all aspects of your site and restore from the backup if it is badly broken.
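If you prefer the command line, a quick way to flush the file-based caches is to delete them directly (this assumes the default var/ location and no external cache backend such as Redis):
rm -rf var/cache/*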
If everything works, clean up the temporary files:
find -name "*.rej" -delete
rm -rf old new

Related

How do I modify the default graphdb.home directory?

I have installed GraphDB Free v9.3 in LinuxMint 19.3.
The workbench is running fine though I haven't created any repositories yet. This is because I have noticed that although the application is installed at /opt/graphdb-free, the data, conf and log files are in a hidden folder below my home folder: /home/ianpiper/.graphdb/conf (etc).
I would prefer to store these folders on a separate volume, mounted at /mnt/bigdata. The documentation suggests that I can set graphdb.home using the graphdb.properties file (though I don't seem to have such a file in my installation) or in the startup script. I think this script might be /opt/graphdb-free/app/bin/setvars.in.sh, and that I could use it to change
-Dgraphdb.home=""
to
-Dgraphdb.home="/mnt/bigdata"
Could a knowledgeable person advise as to whether my understanding is correct, and if so what the best way is to change the location of graphdb.home?
Thanks,
Ian.

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments about the best way to do things like this; there are different answers and different options, and the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json.
I would recommend that all of the packages in your composer.json be locked down to the exact version number of the item on Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash. It sounds like you are basically there with this, as you say the packages themselves don't change when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
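As a purely illustrative sketch (the package names, version and commit hash below are made up), pinning looks like this in practice:
composer require examplevendor/examplemodule:1.2.3
composer require "examplevendor/exampletheme:dev-master#4f2a9c1b"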
Actual deployments
Possible Method 1
My opinion is slightly controversial: for many of my projects that don't go through a CI system, I commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, and it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote system failures, that it clogs up the VCS tree, and so on - I won't go into these now, as there are arguments for and against (much of it opinion based), but since you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By serving your document root through a symlink, you can build into a new directory on the server and only switch the symlink over once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
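A minimal sketch of the idea, assuming the directory names below (they are placeholders, not a prescribed layout):
cd /var/www/example
composer install --no-dev -d releases/new    # build the new release directory first
ln -sfn releases/new current                 # atomically repoint the docroot symlink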
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you create a "repository folder" of your vendor packages as zip files; it is an alternative to adding the entire vendor folder to your VCS, and it also protects you against GitHub / Packagist outages, files being removed and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip files rather than being fetched from a remote server. That folder can itself be stored remotely - think of it as a poor man's private Packagist (another option, by the way).
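For illustration, an artifact repository can be registered like this (the ./build/artifacts path is just an assumed location for your zipped packages):
composer config repositories.local-artifacts artifact ./build/artifacts
composer install --prefer-dist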
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application, and have it respond to push webhooks on your VCS so that it builds each time something is pushed. In this build you will set up the system to do the following (a rough shell sketch appears further below):
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (provided the build process doesn't fail); it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
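A rough shell sketch of such a build, loosely following the steps above (the paths, host name and test command are assumptions, not part of the original answer):
set -e                                      # abort on the first failing step
composer update --no-interaction            # resolve and install dependencies
vendor/bin/phpunit                          # run the tests, if any exist
tar -czf artifact.tar.gz --exclude=.git .   # bundle the tested build as an artifact
scp artifact.tar.gz deploy@example.com:/var/builds/   # hand off for deployment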
Hope this helps :)

Get changes from mercurial to FTP site

I work with a partner on a PHP site for a client. We have a common Mercurial repository (on Bitbucket), both local copies, and the live site. We have only FTP access to the live site (which can't be changed, since it is a hosting package with FTP only).
I want to be able to push changes from the repository to the live site.
Until now I simply keep track of changed files in the repo and copy them manually with FileZilla - an error-prone and annoying task. My idea is to mount the remote location locally (e.g. using CurlFtpFS) and tell Mercurial to automagically copy changed files to the site. Ideally I want to be able to specify which changes, but that would be a bonus. It would be sufficient if the local state of the files within the repo is synced.
Is there any good way to do this using linux commandline tools?
My first recommendation is, if at all possible, get a package that allows more access. FTP only is just brutal.
But since you are looking for a real answer to your question, I have two ideas for you:
I would suggest looking into the Mercurial FTP extension. I personally have never used it since I have never gotten myself stuck in an FTP-only situation (not for a long time, at least), but it looks promising. It looks like it will work really well for you if you make sure to tag your production releases (and make sure to use the -uploaded param).
Also, if you only ever want the tip to be installed on your production environment, then you could look at the suggestion Martin Geisler made on the Bitbucket user group a few days ago. Basically, his suggestion is to use Bitbucket's "ping url" functionality. You would have to write a server-side script/URL handler that accepts the ping, fetches the tip from Bitbucket (as a zip) and then unzips/unpacks it. This is a bit complicated, but if you are looking for complete automation and the tip is always what you want deployed, this could work for you.
One notion is to use the hg archive command:
hg archive /path/to/curlftpfs
which will put a snapshot of your repo in that location -- it will however overwrite any file already there.
Another option is to create a Mercurial clone in that same /path/to/curlftpfs and then just do hg pull ; hg update in it on your local system with the remote one mounted. Setting that up initially will mean transferring the whole thing, but subsequently you'll only be sending deltas.
Some folks don't like this last option because it exposes your entire .hg directory too, but you can block access to that at the web server.
I came across this problem a while ago after switching from AWS to a local web host that provides only SSH/FTP.
My previous approach of updating a production site on AWS using "hg pull; hg update -C" can no longer be used on the new web hosting; they don't have Mercurial installed for shared hosts.
So what I did was mount the remote location over FTP onto a local machine (i.e. my laptop), then run the hg pull and update commands locally, in the path where the remote FTP site is mounted.
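A minimal sketch of that workflow, assuming a curlftpfs mount point and repository URL (both are placeholders):
curlftpfs ftp://user@example.com/ /mnt/site   # mount the FTP site locally
cd /mnt/site
hg pull https://bitbucket.org/you/project     # pull new changesets into the mounted clone
hg update -C                                  # update the working files on the live site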
Windows solution:
BeyondCompare (http://www.scootersoftware.com/) is an awesome piece of software. Apart from being awesome, it can mirror your local folder to the FTP site. It compares files and only transfers what's new.

Workflow for using TextMate/Coda with Transmit and Versions

I use TextMate for my HTML, PHP, JS/other languages and CSSEdit for my CSS.
I want to integrate TextMate with Transmit better because at the moment I work like this:
TextMate: Edit code
Transmit: Look for folder and drag to online server
Firefox: Refresh page
Rinse, Repeat.
It feels very clunky to me, and I do the same with CSSEdit (although CSSEdit's live preview means that I only have to upload once). I would like Transmit to upload the edited document to the relevant place on the server on save (given that linked browsing is enabled).
Does anyone have a certain workflow that they follow or macros enabled in TextMate to do such tasks as they would certainly make my life a lot easier, Coda is also an option instead of TextMate if needed.
Being able to have Versions/Git-Tower auto commit on save would be great too.
I recommend @Adam's solution for the uploading part of your question, but why are you using Git and Transmit simultaneously? Why not Git for everything?
My workflow:
On my machine I keep a Git repository where I do all the work. The working directory is served by MAMP so that I can test my code before committing anything.
When I'm satisfied I commit my latest changes until I think the branch I'm working on is stable.
When I'm ready, I push to the server, where a post-commit hook checks out the latest version to what I call the "pre-prod server" (see the hook sketch below).
When everything has been tested to death, branches merged and so on, I manually check out the repository to the "prod server".
No need to use an FTP client at any point, everything is done from the editor (TextMate before, Vim now).
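For illustration only, such a server-side hook (typically implemented as a post-receive hook on a bare repository; the path and branch name are assumptions) can be as small as:
#!/bin/sh
# Check the pushed work out into the pre-prod document root.
GIT_WORK_TREE=/var/www/preprod git checkout -f master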
If you set up a site in Transmit and open the local directory that holds your files, you can activate the TextMate Transmit bundle by typing ctrl-shift-f. Then hit either 1 or 2: 1 will upload the current directory, 2 will send the current file.
You might consider using Transmit's ability to mount FTP servers as volumes and simply edit the files directly on the server. To TextMate the mounted FTP server will appear to be just another volume. Search the help files for Transmit Disk, their name for this feature.

Why doesn't Mercurial support remote repository creation over HTTP?

I know it is not possible to create Mercurial repositories remotely using HTTP(S), for instance:
$ hg init https://host.org/repos/project
or
$ hg clone /path/to/local/project https://host.org/repos/project
But, what's the reason? Security issues? No need for it? Simply because nobody has implemented it yet?
Rationale for this question: In my company we share most resources via HTTPS, i.e. access permissions are managed by Apache only and regular users cannot log in via SSH on the server. That's just perfect as long as repositories only need to be served (for that purpose we are happy with hgwebdir.cgi). However, we also want to allow the remote creation of repos, without the need to maintain additional/patched scripts on the server and extra tools on clients.
To be clear: This question does not ask for solutions to our particular problem but for the reason why Mercurial does not support this feature itself.
UPDATE
Here's a more technical description of the situation I'm thinking of. Suppose hgwebdir.cgi serves a collection of repositories in /path/to/repos at https://.../repos (with pushing enabled). Every user allowed to access this URL (as configured in Apache) may pull and push changesets; effectively this means that hgwebdir.cgi (and thus hg) edits and creates files below /path/to/repos. Now, what's the barrier to letting hgwebdir.cgi also create new repositories below /path/to/repos?
I think the reason is that adding support for creating repositories will bring in a fair amount of baggage:
if you can create repositories, you would expect to be able to delete them. While that might seem simple, it would be a big step away from the safe manner in which Mercurial normally works -- there are no destructive commands in standard Mercurial.
people would also want to edit the .hg/hgrc files to set the description and contact information -- standard Mercurial never changes the config files, so this would again be a new thing.
people would also want to manage users' access to the new repositories -- this means editing .htaccess files or the equivalent for other webservers.
... and so on. Implementing this "little" feature would open the door to a lot of extra feature requests, and we only have a few Mercurial developers who are also savvy web developers.
However, there is now an excellent open source solution: Kallithea gives you a "mini-Bitbucket" that you can deploy on your own server. It will do all of the above. I would install that on my server if I needed something more powerful than plain hgweb.cgi. It supports both Mercurial and Git.
As far as I know, none of the SCM alternatives allow the creation of remote repositories natively. SVN, CVS, Git, et al.
That's usually the job of a hosting provider: SourceForge, Google Code, BitBucket. All of them implement repository creation on top of their authentication infrastructure.
For example, Debian's Mercurial hosting is limited to Debian Developers, and to create a new repository you need to log in via SSH to the server and create the repository in your home folder, much like Apache's public_html directory.
Various answers (including your own) give some pretty good reasons why the functionality isn't there (separation of concerns, mostly), but if you really want to add it you could do so with just a line or two of shell. Here's a hideously unsafe example I gave quite a while ago showing how to add that functionality in high-trust environments: Remote Repository Creation in Mercurial over HTTP
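The linked example isn't reproduced here, but the general shape of such a "line or two of shell" would be a tiny CGI script along these lines (a sketch only; the repository path is an assumption, and it is just as hideously unsafe as the original):
#!/bin/sh
# Create a repository named by the query string under the served collection.
echo "Content-Type: text/plain"
echo
hg init "/path/to/repos/$QUERY_STRING" && echo "created $QUERY_STRING"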