I currently have 1.23.1 installed and would like to update to 1.30, and I have a bunch of extensions installed. What would be the best way to update the wiki? Should I update version by version or jump straight to the final version that I want?
There is no real benefit in updating version by version; the upgrade script just applies the DB schema changes from each version successively, so the result will mostly be the same either way. If something breaks, going step by step and testing after each step will tell you which version broke it, which puts you in a better position to fix or report it, but it's usually not too hard to figure that out anyway, and stepping through every release would be a vast time sink.
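Either way, once the new MediaWiki files are in place, the upgrade itself normally comes down to running the updater script from the wiki's root directory (assuming you have command-line access; there is also a web-based updater):

php maintenance/update.php

It applies all pending schema changes in one go, whether you jumped one version or several.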
Related
I'm considering moving one of the company's internal wikis (a very basic wiki with few or no extensions and not that many pages) to another machine, and I'm wondering if at the same time I can upgrade the MediaWiki version, going from 1.6 to the current latest, 1.25 (in order to use extensions only available for the latest versions).
The Upgrade guide (https://www.mediawiki.org/wiki/Manual:Upgrading) seems to omit the scenario in which an upgrade of the underlying software (Apache, MySQL) is also required for setting up the target version, and the Moving guide (https://www.mediawiki.org/wiki/Manual:Moving_a_wiki) strictly recommends that the source and target wikis share the same software level.
So I'm a bit stuck. I would attempt an export/import of an XML dump, but I'm not confident because of the above (there is a huge version gap between the source and target wikis).
Or is there a better way to approach the problem? Thanks
Edit after some tests
I consider Florian's answer the safest and most advisable, but I'd like to share the final solution I came up with.
Install the new wiki (blank)
Export an xml dump of the original wiki
php maintenance\dumpBackup.php --full > dump.xml
I first encountered a "Cannot connect to database" error, so I had to add the following lines to LocalSettings.php:
$wgDBadminuser=...
$wgDBadminpassword=...
Import the xml dump in the new wiki (first try in dry-run mode)
php maintenance\importDump.php --dry-run < dump.xml
php maintenance\importDump.php < dump.xml
Then I was prompted to run
php maintenance\rebuildrecentchanges.php
Copy the physical files from the old wiki to the new one, in the same path (normally they are in the "images" folder; that was not my case).
Re-create the users (manually) in the new wiki
Finally, I edited LocalSettings.php with the most essential settings I wanted to preserve (groups, restrictions, ...).
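For reference, the kind of LocalSettings.php entries I mean look roughly like this (the group names and rights below are just examples, not the actual settings of my wiki):

$wgGroupPermissions['*']['edit'] = false;             // anonymous users cannot edit
$wgGroupPermissions['*']['createaccount'] = false;    // no self-registration
$wgGroupPermissions['editor']['edit'] = true;         // hypothetical custom "editor" group that can edit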
And the move was done! The new wiki is OK and already usable at this stage: pages are there and links are working.
In fact, it should work if you move the wiki from one server to another and then upgrade it on the new server. As you may already know, it's important to back up all files and data you have for the wiki in the "old" environment, so you can easily restore it from there.
If I wanted to do what you want to do, I would first follow the "Moving a wiki" guide, except for the "Test" section. After that I would upgrade the wiki to the newest version. Then I could test the wiki intensively to see whether everything still works.
If you don't want to do that, you really need to upgrade the wiki in the "old" environment first and move it after that. If I understand you correctly, that would require an update of the server software (I expect PHP and MySQL?).
What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer
Defined here for Go as:
Vendoring is the act of making your own copy of the 3rd party packages your project is using. Those copies are traditionally placed inside each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
If your app depends on certain third-party code to be available you could declare a dependency and let your build system install the dependency for you.
If, however, the source of the third-party code is not very stable, you could "vendor" that code. You take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is when you want to use certain third-party code but change it a little bit (in other words, a fork). You can copy the code, change it, release it internally, and then let your build system install that piece of code.
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The act of committing is important. As an example, node_modules are already installed in your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but e.g. PnP + Zero-Installs in Yarn 2 are actually built around vendoring – you commit .yarn/cache with many ZIP files into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other copy but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility and better deployment (the kind of things people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtlety in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.
On my website I use both a PostgreSQL and a MySQL database.
I want to convert to PDO as I have been informed that PHP will be removing the old mysql_ functions soon and I assume this means the pg_ functions will disappear as well.
I only ever use:
pg_connect/mysql_connect & mysql_select_db
pg_query/mysql_query
pg_result/mysql_result
pg_numrows/mysql_numrows (for checking if there is a result, or looping through resultset)
pg_fetch_array
I have thousands of queries and don't relish the idea of going through every one.
Is it possible just to go through and make global changes in my code to implement PDO?
Does that mean I can just do a global change from mysql_ to mysqli_ in the interim?
Well the answer is somewhat complex and can be divided into 2 parts.
Let's look into the question first:
There are actually two possible reasons to change your code.
The deprecation process for the mysql extension.
Making your code safer against SQL injection.
The first one is not actually an urgent reason.
The extension will be deprecated in a not-yet-released version and removed in a version that isn't even known yet. So, to run into any trouble, you would need a PHP version with mysql support removed. In my experience, new versions roll out to shared hosts slowly, so you have 7 to 10 years ahead of you.
As for the second reason, a simple bulk search and replace will do no good at all: to actually get protection against SQL injection you have to rebuild each query around prepared statements with bound parameters, which changes how the query is written, not just the function names.
So, instead of going for this option in a hurry, I'd go for gradual refactoring, eventually replacing the old code with better versions.
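To illustrate what that gradual refactoring looks like for a single query, here is a minimal sketch; the DSN, credentials and the users table are made up, and the same PDO API covers both MySQL (pdo_mysql) and PostgreSQL (pdo_pgsql), only the DSN changes:

<?php
// Hypothetical connection details, for illustration only.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'dbuser', 'dbpass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Old style, vulnerable if $id comes from user input:
//   $result = mysql_query("SELECT name FROM users WHERE id = $id");
//   $name   = mysql_result($result, 0);

// PDO style: prepared statement with a bound parameter.
$id   = 42;
$stmt = $pdo->prepare('SELECT name FROM users WHERE id = ?');
$stmt->execute([$id]);
$row  = $stmt->fetch(PDO::FETCH_ASSOC);   // rough replacement for mysql_fetch_array/pg_fetch_array
if ($row !== false) {
    echo $row['name'];
}
// $stmt->rowCount() roughly replaces mysql_numrows()/pg_numrows(),
// but for SELECTs its behaviour is driver-dependent, so don't rely on it blindly.

As you can see, both the query text and the surrounding code change, which is why a search and replace can't do it for you.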
At the company I work for, I was asked to write an auto-update function à la Chrome, i.e. it should check periodically whether a new version is available, download the new version, and apply it silently the next time the application starts.
I already have something up and running, but it is more a dirty hack than something I feel happy about. So, I would like to know how to design and implement such a solution. My horrible hack works like this:
Have a mechanism to check whether a new version exists (a database query or a web service; a rough sketch of such a service follows after this list).
Download a full zip with the whole new version.
Check the file signature. If everything went all right, set a registry value "must update" to true.
When the application restarts, if the "must update" value is true, launch the update program and exit.
The updater deletes the contents of the application folder, unzips the update to replace the old contents, launches the application, and exits.
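For the "check whether a new version exists" step, the web service can be as small as a script that reports the latest version, the download URL and a checksum for the later integrity check. A minimal PHP sketch, where the file name, path, URL and response layout are all invented for illustration:

<?php
// latest.php - hypothetical endpoint the application polls on startup.
// It returns the newest version number, where to download it and a
// SHA-256 checksum so the client can verify the archive after download.
header('Content-Type: application/json');

$archive = '/var/www/updates/app-2.4.1.zip';   // placeholder path and version

echo json_encode([
    'version' => '2.4.1',
    'url'     => 'https://updates.example.com/app-2.4.1.zip',
    'sha256'  => hash_file('sha256', $archive),
]);

The client only has to compare 'version' against its own version string and, if it differs, download the archive and check the hash before setting the "must update" flag.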
Now I would like to change it so it works more cleanly. I am planning to send the update as a bsdiff patch; it gets downloaded, but the question is: what happens next?
When do I apply the update?
Who is in charge of applying the patch? Is it the program itself, or is it a separate program, as in my hack, that applies the patch and relaunches the application?
If you're going down the C++ route, you can go to Chromium, download the Chrome source code and dig around to see how the update is done; this might give you a better idea of how to approach it. Here's an article that might help.
If you're familiar with .NET, the recently released NuGet also has an auto-update feature that might be useful to look at; you can get the source code from here. David Ebbo has a blog post about how it's done here.
I'm not up to date on Delphi but you might be able to use either of the above options.
The workflow you proposed is more or less how it should work, but there's no need to reinvent the wheel: there are plenty of libraries out there that will do this for you. Using a third-party library has the benefit of keeping your code cleaner while making sure the dirty process of auto-updating is contained and working flawlessly.
Trust me, I know. I'm the author of NAppUpdate, an app update framework for .NET (which you might want to try out or learn from).
So, after giving it a lot of thought, this is what I came up with ("active directory" refers to the directory where the main program lies, the "active program" is the main program, and the "update program" is the one that replaces the active program and its resource files):
The active program checks periodically whether there is a new version. If so, it downloads it.
Prepare the new version in a separate folder (this can be done by copying the contents of the program directory to a subdirectory and applying a binary patch, or by simply unzipping the new version).
Set a flag that indicates that a new version is ready.
When the program is exiting (and one has to account for different interrupts here):
The active program checks the "new version ready" flag; if it is set, it launches the update program and exits.
The update program checks whether it can write to the active directory. If so, it replaces the contents with the prepared version.
The update program has to recheck links and update them accordingly.
So guys, if you have a better workflow, please tell me.
You could literally use the Google Chrome update workflow by using the Google Chrome updater:
http://code.google.com/p/omaha/
They open sourced it Feb 2009.
Which of the hg plug-ins:
has the fewest hassles
causes the least trouble
is the prettiest
I can't claim to have tried a wide variety, but what's wrong with hg4idea...?
Regarding the "don't use one" response: this is hardly adequate. What if I use my IDE to do a refactoring that renames a file? Without IDE/source-control integration, the rename happens without regard to source control, and then Mercurial (or whatever else) thinks a file went missing and a new one appeared. Then you have to go back and wrangle with the source control to sort things out.
JetBrains seems to have chosen hg4idea-luciad for its upcoming Python editor (PyCharm), and it is now more active than hg4idea.
It looks like a leader is on the way :-)
To answer your question: the best IDE plug-in is to not use one.
I think IDE integration is not necessary when working with a DVCS. With a centralized system it is reasonable, for the purpose of automatically checking files out on edit. However, I like keeping things separate; I don't want my IDE cluttered up. I don't see any benefit in using a plug-in compared to a standalone solution (which I keep running on a second monitor, etc.).
I am fine with TortoiseHG and the command line for more complicated tasks.