We have created a manual library (MediaWiki) for one of our systems, and we would like to expose it to one of our customers.
I googled for a while and found many file-sharing sites, but I don't know if that is a good idea, as it won't look "nice". We would like it to be something like Wikipedia. So what options do we have?
Any help is more than appreciated.
Extension Push allows you to push content from your own wiki to a customer's wiki. The extension is currently under heavy development, but should be relatively stable.
You share it by making it accessible to them over the internet. You don't send them files. It's a website. That's the point.
The question is not really clear.
If you want to share the information as-is with the customer, install MediaWiki on the customer's servers, then periodically dump your content and push it into the customer's MediaWiki instance. MediaWiki's back end is usually a MySQL database, so you can easily export that data. This way you only expose the documentation to the client once you have a stable copy.
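A minimal sketch of that dump-and-load step, assuming MySQL on both ends; the hostnames, credentials and database names are placeholders:

```python
# Hypothetical sketch: export the wiki database and load it into the
# customer's MediaWiki instance. Hostnames, user names and database
# names are placeholders -- adjust to your environment.
import subprocess

DUMP_FILE = "wikidb-stable.sql"

# 1. Dump the local wiki database (schema + data) to a plain SQL file.
with open(DUMP_FILE, "w") as out:
    subprocess.run(
        ["mysqldump", "-h", "localhost", "-u", "wikiuser", "-psecret", "wikidb"],
        stdout=out, check=True)

# 2. Load the dump into the customer's MySQL server (assumes it is
#    reachable and already has an empty 'wikidb' database).
with open(DUMP_FILE) as dump:
    subprocess.run(
        ["mysql", "-h", "customer-db.example.com", "-u", "wikiuser",
         "-psecret", "wikidb"],
        stdin=dump, check=True)
```

Note that the uploads directory (images/) has to be copied separately, since it lives on the file system rather than in the database.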
Related
I recently made my own personal MediaWiki and I would like it to be available on different computers. I set it up with XAMPP, and what I did was create two repositories:
one for xampp\htdocs\(my-wiki)
one for xampp\mysql\data\(my-sql-folder)
Then I cloned those repositories to the same folders on another computer. However, when I go to localhost/(my-wiki) on that computer, I get the error "Sorry! This site is experiencing technical difficulties. (Cannot access the database)."
Whenever I make changes to the wiki, xampp\htdocs\(my-wiki) does not change at all, while xampp\mysql\data\(my-sql-folder) frequently shows edits. What am I doing wrong?
Edit: After looking at the internal error data, it appears that none of the tables in the wiki exist anymore (Table xxx doesn't exist in engine). I'm unsure of why this would be!
There are two things that change when you use a wiki: the uploads directory and the database, so for any sort of decentralized wiki you need to replicate both. Uploads are simple (you could use git, or some shared central storage like NFS, or a decentralized file store - Wikipedia, for example, uses Swift). As for the database, there are a few experimental tools that use git as a storage engine (e.g. git-mediawiki), but nothing I would rely on. If your computers run all the time, you can use database replication, but that's not a beginner-level setup. In practice you'll probably be best off just using database dumps. Or buy a server on the internet (a decent VPS is pretty cheap these days) and use that as the wiki's DB backend so you can reach it from all your machines. (Or, I guess, you could just put your whole wiki on the internet at that point.)
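As a rough illustration of the database-dump approach, the commit side might look something like this; the paths, database name and credentials are made up, and the point is to dump the database as SQL text instead of copying the raw mysql\data files:

```python
# Rough sketch: dump the wiki DB into the Git working copy next to the
# uploads directory, then commit both. Paths, database name and
# credentials are placeholders.
import os
import subprocess

REPO = r"C:\xampp\htdocs\my-wiki"      # Git working copy
DUMP = REPO + r"\db\wikidb.sql"        # SQL dump tracked by Git

os.makedirs(os.path.dirname(DUMP), exist_ok=True)

# Export the database as plain SQL (XAMPP's default root account has no
# password) rather than versioning the raw mysql\data files.
with open(DUMP, "w") as out:
    subprocess.run(["mysqldump", "-u", "root", "wikidb"], stdout=out, check=True)

subprocess.run(["git", "-C", REPO, "add", "-A"], check=True)
subprocess.run(["git", "-C", REPO, "commit", "-m", "Sync wiki DB and uploads"], check=True)

# On the other machine: git pull, then load db\wikidb.sql back into
# MySQL with the mysql client before using the wiki.
```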
Figured it out. I was missing the files ib_logfile0, ib_logfile1, and ibdata1 from the xampp/mysql/data folder. This, however, makes my Git setup even more annoying. If anybody has any suggestions for a better way to set up my wiki and make it available across different computers, it'd be much appreciated! Thanks
At first, this question seemed too trivial to actually require a Stack Overflow post. However, after many Google searches, I am at a loss trying to figure this out about Couchbase.
In Couchbase (I am using the 2.2 Community version), how do I share views among developers? Is there some sort of import/export functionality available? If not, then how does Couchbase intend for developers to share the views that they are using without manual copying and pasting? Obviously, the code a development team writes for querying Couchbase relies on accurate view names. Without a way to send a developer a view file to accurately set up a Couchbase DB, how can a team even develop with Couchbase locally?
I'm sorry if I sound a little desperate or harsh here, but if it isn't possible to share views among multiple developers, then I don't see how Couchbase can be a viable DB solution for a team of developers trying to share database configuration, similar to how a team using an SQL DB would share schema files to set up the DB.
Several ways you can approach this:
1) Create views programmatically, as demonstrated here in Java:
http://tugdualgrall.blogspot.com.es/2012/12/couchbase-101-create-views-mapreduce.html
or here in Node.js:
http://www.tuicool.com/articles/RvYbQn
2) Store all your views in your version control system (this is the option I use). If you are developing locally, then you only need your personal view code; once the views are working and your tests are all passing, you can check them in.
I assume you'd then be working against a testing environment, so yes, sadly, there you'd have to update the views either by hand or by using option 1.
You could also take a look at this tool, though only for views: http://www.couchbase.com/communities/q-and-a/how-bulk-import-design-docs-and-views-couchbase-server
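For option 2, a rough sketch of pushing the view definitions you keep in version control to a bucket via the design-document REST API (port 8092); the host, bucket, credentials, design-document name and file layout are assumptions:

```python
# Hypothetical sketch: publish view definitions kept under version
# control to a Couchbase bucket using the design-document REST API.
# Host, bucket, credentials and file layout are assumptions.
import json
import requests

HOST = "http://localhost:8092"
BUCKET = "default"
AUTH = ("Administrator", "password")

# views/beers.json might look like:
# {"views": {"by_name": {"map": "function (doc, meta) { emit(doc.name, null); }"}}}
with open("views/beers.json") as f:
    design_doc = json.load(f)

resp = requests.put(
    f"{HOST}/{BUCKET}/_design/beers",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(design_doc),
)
resp.raise_for_status()
print("Design document 'beers' published")
```

Each developer can then load the same JSON files into their local cluster, which keeps view names consistent with the code that queries them.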
This functionality is currently not available in the admin UI.
There is an open defect/enhancement request, Ability to import/export views (MB-8436). You can leave your feedback there and vote for it so that it gets included in a future release.
In the meantime you can use the Design Document REST API.
There is also a blog post describing a workaround.
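For example, exporting an existing design document through that REST API so it can be checked into version control might look roughly like this; the host, bucket, credentials and design-document name are assumptions:

```python
# Rough sketch: export a design document (and its views) to a JSON file
# via the REST API so it can be shared through version control.
# Host, bucket, design-document name and credentials are assumptions.
import json
import requests

resp = requests.get(
    "http://localhost:8092/default/_design/beers",
    auth=("Administrator", "password"),
)
resp.raise_for_status()

with open("beers.json", "w") as f:
    json.dump(resp.json(), f, indent=2)
```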
I am a WordPress Designer/Developer, who is getting more and more heavily involved with using version control, notably Git, though I do use SVN for some projects. I am currently using Beanstalk for my remote repo.
Adding all of the WordPress files to my repo is no problem, if I wanted to I know I could .gitignore the wp-config file, but since I'm the only developer, currently, and these projects are closed source, it really makes little sense.
WordPress relies heavily on the database, as any CMS does, to keep textual content, and many settings depending on the specific plugin/theme configuration I'm using. I'm wondering what the best way of using version control on the database would be, if it's even possible. I guess I could do a SQL dump, though my MySQL server is running on Windows (read as: I don't know how to do it), and then add the SQL dump to my repository. But when I push something live, that poses huge security threats.
Is there an accepted practice of doing this?
You can back up your database within a git repository. Of course, if you place the data into git in a binary form, you will lose all of git's ability to store the data efficiently using diffs (changes). So the number one best practice is this: store the data in a text-serialised format.
mysqldump is a suitable program to help you do this. It isn't perfect, though. If anything disturbs the serialisation order of items (e.g. as a result of creating new tables), then artificial breaks will enter into the diff. That will decrease the efficiency of storage. You could write a custom serialiser to serialise changes only -- but then you are doing the hard work that git is already good at. Just use the SQL dump.
That being said, what you want to do isn't what devs normally mean when they talk about putting the database in git. For instance, if you read the link posted by @eggyal (a link to Coding Horror), you will see that what is actually placed in git are the scripts needed to generate the initial database. There may be additional scripts, like those to populate the database with a clean state, or to populate it with testing data. All such SQL scripts are text files, in pretty much the same format as the SQL dump you would get from mysqldump. So there's no reason you can't do it that way with your day-to-day data as well.
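If you go the mysqldump route, it helps to make the dump diff-friendly. Something along these lines, where the database name, credentials and repository layout are placeholders:

```python
# Sketch: produce a diff-friendly SQL dump of the WordPress database and
# commit it. Database name, credentials and the db/ directory are
# placeholders.
import subprocess

with open("db/wordpress.sql", "w") as out:
    subprocess.run(
        ["mysqldump",
         "--skip-extended-insert",  # one INSERT per row -> readable diffs
         "--skip-dump-date",        # avoid a spurious diff from the timestamp footer
         "-u", "wpuser", "-psecret", "wordpress"],
        stdout=out, check=True)

subprocess.run(["git", "add", "db/wordpress.sql"], check=True)
subprocess.run(["git", "commit", "-m", "Snapshot WordPress database"], check=True)
```

Regarding the security concern in the question: keep the dump outside any directory that gets deployed to the live web root, since it contains user data and password hashes.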
There are not many tools available for version-controlling databases like MySQL and MongoDB.
But one is under development, and the beta version is about to be launched soon. Check out Klonio - Version Control for databases.
The article How to Sync A Local & Remote WordPress Blog Using Version Control gives advice on how to automate syncing between two instances (development, production) of a WordPress blog using Mercurial. It mentions that, for this scenario, Git and Mercurial are very similar.
Step 4 (Synchronizing The Databases) is of interest here.
The database content will be exported to a file that is tracked by revision control. Each time we pull changes, the database content will be replaced by this file, making our database up to date.
Then, it elaborates on conflicts and the scripting part of the job.
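To make the pull side concrete, the "replace the database content with the tracked file" step boils down to something like this; the database name, credentials and dump path are made up:

```python
# Rough sketch of the post-pull step described above: replace the local
# database content with the dump tracked in version control.
# Database name, credentials and dump path are made up.
import subprocess

with open("db/wordpress.sql") as dump:
    subprocess.run(["mysql", "-u", "wpuser", "-psecret", "wordpress"],
                   stdin=dump, check=True)
```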
There is a Mercurial version control tutorial out there, if you're not familiar with it.
If you are only interested in keeping schema changes under version control, there is a nice tool, SqlRog. It extracts the schema into project files that can be put under git.
Be aware that WordPress stores cached news feed content in the database, so even if you don't make any changes, there will be a lot of changing content in the dumps.
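Much of that churn comes from transients (cached feed data and similar) in the options table. If you want quieter dumps, one option is to clear the transients before dumping; in this sketch the table prefix wp_ and the credentials are assumptions:

```python
# Sketch: delete WordPress transients (cached feed data etc.) before
# taking the dump, so the dump only changes when real content changes.
# Table prefix "wp_" and credentials are assumptions.
import subprocess

cleanup_sql = (
    "DELETE FROM wp_options "
    "WHERE option_name LIKE '\\_transient\\_%' "
    "   OR option_name LIKE '\\_site\\_transient\\_%';"
)
subprocess.run(["mysql", "-u", "wpuser", "-psecret", "wordpress", "-e", cleanup_sql],
               check=True)
```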
Some background:
We provide a complex system consisting of a large database and several programs - most written in C#, though some legacy applications still run on MFC.
Most of the stuff we provide runs on a single server (running SQL Server and SQL Server Management Studio 2005), but several applications can run on a number of the client's computers. Updating this is a real pain, since after we update the database, the outdated software is likely to break due to the database changes. Updating the server software manually is one thing, but making sure all the client software works too is practically impossible, and it will only get worse with time.
I am to write an updating service which will be able to update the whole product - update the database, and reinstall services and applications. (However, only the programs / files / tables / etc. that have actually been modified should be updated. Downloading the whole product each time there is an update available is not an option. Also, some computers may only have a subset of the available programs installed.)
First of all, is there already a good way of doing this? If there is something similar to ClickOnce out there that can also update databases, I'd much rather use that.
If not, what are the best practices when it comes to updating? All and any material will be greatly appreciated.
I will need some updates to be installed on the server ASAP after they have been submitted, without any user input. That includes a Windows service (which runs at all times) and any database changes. After these changes have been made, I will have to prevent any software that is not up to date from either accessing the parts that have changed, or from running at all.
Any advice will be greatly appreciated - If I do have to write a system like that, I'd like to do it right.
Best practice would be to package the app up in an MSI and use Group Policy to push the updates out to each client.
If that's not possible, then you need some way of informing the client app that it is out of date (a simple check against a server holding the current version number would probably suffice) and having it refuse to work until an update patch is downloaded and installed - you could even launch this process from inside the app itself.
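The version check itself can be very small. Sketched here in Python purely to illustrate the idea (the URL and version format are made up, and the same logic ports directly to the C# applications):

```python
# Sketch of a startup version check: the client refuses to run if the
# server reports a newer required version. URL and version scheme are
# assumptions; the same idea ports directly to C#.
import sys
import urllib.request

LOCAL_VERSION = (2, 4, 1)

def required_version():
    # The update server is assumed to expose the minimum supported
    # client version as plain text, e.g. "2.5.0".
    with urllib.request.urlopen("https://updates.example.com/min-version") as resp:
        return tuple(int(part) for part in resp.read().decode().strip().split("."))

if LOCAL_VERSION < required_version():
    print("This application is out of date; please run the updater.")
    sys.exit(1)
```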
This answer may help you. I haven't personally used WiX, but it seems to be along the lines of what you're looking for. Make sure to check out Lesson 4 in the linked tutorial, as it provides the details you would require.
I'm not sure where you would find best practices when it comes to updating, but in my personal opinion you shouldn't ever force a user to update unless it breaks the underlying application (like yours does). I would be very interested to hear if someone has a link to a list of best practices on this topic.
Edit
I was interested in possible best practices for updating so I started another question thread here. The general consensus in the answers is "Ask the user/client", but there may be some other details in the answers which may help you, I'm afraid I can't find any actual hard rules on the subject anywhere (which I was expecting).
I'm about to start a renaming project on a major data-driven website, using Visual Studio and Windows XP. I've got a script to change the company name in the data/templates thousands of times. Does anyone have any ideas how I can verify that all of the names have been changed? Is there a way to pull the generated files down to my disk so I can grep them?
Do you use any specific web server or publishing system? Perhaps one huge, recursive wget is enough?
What you need is a website crawler that can save the retrieved pages to disk. There are many programs of this kind available on the Internet.
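A small crawler of that kind is also not hard to write yourself. A rough sketch, where the start URL and the old company name are placeholders:

```python
# Rough sketch of a same-site crawler that fetches each page and reports
# any remaining occurrences of the old company name. Start URL and the
# search string are placeholders.
import re
import urllib.parse
import urllib.request

START = "http://localhost/site/"
OLD_NAME = "Old Company Name"

seen, queue = set(), [START]
while queue:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    except Exception as err:
        print("skipped", url, "-", err)
        continue
    if OLD_NAME in html:
        print("Old name still present on", url)
    # follow links that stay on the same site
    for href in re.findall(r'href="([^"]+)"', html):
        link = urllib.parse.urljoin(url, href)
        if link.startswith(START):
            queue.append(link)
```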
Visual Studio offers Find in Files (Ctrl+Shift+F). Search for what you need and it will list all the locations where it was found.
Review and replace.