How to allow a self-signed cert in Mercurial through server configuration?

I've read similar questions here and elsewhere. This is not intended to be a duplicate, but I haven't found the answer.
I'm trying to ask a very particular question, so please don't mark this as a duplicate unless you can point me somewhere with a very specific correct answer.
I'm running CentOS 6 and I have Mercurial 1.9 installed as our Mercurial server.
I can add repositories, clone, commit changes, and push back to the server with no problems as long as I don't try to use SSL.
The Apache website is configured with a self-signed SSL cert (I am aware of the pros and cons of self-signed SSL certs, but we have decided to use one unless it is technically impossible).
Our client machines are Windows 7 with TortoiseHG 2.1.4 installed. In Visual Studio 2010 I'm using "Mercurial Source Control Package".
What I would like to do is make a configuration change, at either the server level or the repository level, that would allow a self-signed certificate.
Per-client-machine changes are burdensome because even after I update everyone's machine, the next time I set up a new client I have to have these changes documented and remember to go back through the steps.
I've tried the hostfingerprints option, but I haven't been able to get it to work. I'm not sure whether it's supposed to work as a server configuration, or whether I'm putting the setting in the correct file, or what.
As a side note, I finally found how to turn on --insecure through the TortoiseHG UI (clicking the lock icon), but it looks like the Visual Studio source control provider doesn't have an option (at least not one that I can find).
I'm not a Linux expert (but I have access to experts if needed) so please be verbose in your explanations.
Everyone in our organization is an Hg novice.
As a last resort, we may just get an SSL cert.

Jamie F is correct, but I'll put it down here since s/he didn't. There is nothing a server can do to tell a client to trust it -- there would be little point in that. You need to either configure your clients or use a certificate signed by a CA that your client systems already trust.
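For completeness, the client-side configuration mentioned above looks like the following. A minimal sketch of a client's Mercurial.ini (or a repository's .hg/hgrc); hg.example.com and the fingerprint value are placeholders for your own server and certificate:

    [hostfingerprints]
    # SHA-1 fingerprint of the server's self-signed certificate (placeholder value)
    hg.example.com = a1:b2:c3:d4:e5:f6:07:18:29:3a:4b:5c:6d:7e:8f:90:01:12:23:34

    # Alternative: trust the certificate itself instead of pinning its fingerprint
    # [web]
    # cacerts = C:\certs\hg-server.pem

You can read the fingerprint off the server with openssl:

    openssl s_client -connect hg.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1

Since this is a single file, the per-client burden can be reduced by distributing it once per machine (for example through your Windows deployment tooling) rather than documenting manual steps.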

Related

Using versioning on a VM with several users

We are looking for a way to use GitHub for an internal system that we are developing at work. We have developed it in PHP and MySQL, with a fair bit of jQuery/Ajax, on a Windows Server VM running IIS. Other staff can access the frontend over the network using the IP address.
There are currently three people working on it, and at the moment we edit the files directly on the VM because we need the code to keep communicating with the database so we can check that our changes work. There is no option to install anything like WAMP on our individual machines, and there are the usual group policy restrictions, so the only access we have to a database is via the VM. We have been working with copies of files/folders and the database, but there is always the risk that merging these later would be a massive task.
I do use GitHub at home (mainly the desktop app, but I can just about get by with the command line as long as I have a list of the commands in front of me) to sync between my PC and laptop via GitHub.com, and I believe that the issues we get with several people needing to update the same file would be eradicated by using it here at work.
However, there are some queries we need to ensure we have straight in our heads before putting forward a request.
Is what we are asking for viable? Can several branches on the same server be worked on at the same time, or would this only work on an individual machine?
Given that our network is fairly restricted, is there any way that we can work on the files on our own machines and connect to the VM-hosted database? I believe that an IDE will allow us to run PHP files on a standard machine (although a request for Eclipse is now around six weeks old and there is still no confirmation that we will get it any time soon), but will this also allow us to connect to the database on the VM?
The stuff we do is not overly sensitive, but the company would certainly not want what we do out there in a public repository (and would also be unlikely to pay for a premium GitHub account), so we would need to branch/pull/merge directly from our machines to the VM.
Does anyone have any advice/suggestions/solutions? Although GitHub would be the preferred option as I already use it, we are open to any suggestion that will allow three people, on different machines, to work simultaneously on a central system while ensuring that we do not overwrite or affect each other's work.
Setting up a Git repository on Windows is not trivial and may require a fair bit of work. You could try SVN instead: it is fairly straightforward to install on Windows and has a gentler learning curve than Git. I am not saying SVN is better or worse than Git, just that it may be better suited to your needs. We have a similar setup and we use TortoiseSVN (https://tortoisesvn.net/) as a client. SVN also has branches and so on.
SVN for the server-side repository: https://subversion.apache.org/
If you would still prefer Git on Windows, check this out: https://www.linkedin.com/pulse/step-guide-setup-secure-git-remote-repository-windows-nivedan-bamal
1) It is possible to work on many branches and then merge them into a single branch; that's the preferred Git way of developing. You can do the same in SVN.
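If you do end up going with Git, one setup that fits the constraints described (no public hosting, everything stays on the VM) is a bare repository in a shared folder on the VM. A rough sketch, assuming Git for Windows is installed on both the VM and the workstations; the machine, share, path, and branch names are placeholders:

    :: On the VM: create a bare repository inside a shared folder
    git init --bare C:\Repos\project.git

    :: On each developer machine: clone it over the network share
    git clone //vm-name/Repos/project.git
    cd project

    :: Work on your own branch and push it back to the VM
    git checkout -b my-feature
    git commit -am "Describe the change"
    git push origin my-feature

Each developer then merges (or has one person merge) the feature branches, which addresses the overwriting concern without any external service.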

OpenShift system and package updates/patches

How does one keep OpenShift gears up-to-date? For example, updates to:
The Linux kernel
Important components/libraries like libc
Apache
Apache modules like mod_wsgi
Python
Python packages
Does OpenShift automatically update these and then restart the gear (or reboot the node)? Or does OpenShift send email notifications so the end user can restart the gear during a maintenance window? What is the model?
What got me thinking about this was that back in January there was a remote-code-execution bug in Ruby on Rails that everyone had to patch immediately.
This FAQ seems to suggest that some level of upgrade happens automatically, but it isn't clear whether this applies only to the OpenShift-specific code or also to other components like the kernel, Apache, etc.
I can tell you from my experience that changes to the OpenShift system are not always automatic. They made a change about 10 days ago and I'm still tracking down what they did so I can make my app run correctly. As far as I know, no email was sent. I did find a blog post covering some of the major changes, but not all of them. Of course, they introduced at least one bug that I know of. YMMV.
My experiences over the last few weeks have been the following:
Last week there seemed to be an unannounced reboot of the server. I detected this by logging from a custom action hook. I didn't receive any email about it and I didn't see any notice at https://twitter.com/openshift_ops or https://openshift.redhat.com/app/status.
This week, there was the Heartbleed OpenSSL vulnerability and it seems like some gears were restarted. I didn't receive any email about it, Twitter didn't show anything, but there was information on the status page.
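If you want to see for yourself what is running on a gear, you can SSH in and query the node. A minimal sketch, assuming the rhc client tools are configured and your application is named myapp (a placeholder); not every command is guaranteed to be available inside a gear:

    # Open a shell on the gear
    rhc ssh myapp

    # Inside the gear: check the running kernel and a few package versions
    uname -r
    rpm -q httpd openssl python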

Mercurial Introduction - BitBucket and 3rd Party Server Issue

I have just started working on a web project that uses the Mercurial version control system with a Bitbucket account.
The web project is hosted on a third-party server, WebFaction.
I have followed all the tutorials on the Mercurial website.
The tutorials state that a repository should be made on the local PC, that changes should be made to the code in the repository on the local PC, and that these should then be added, committed, and pushed to the Bitbucket account.
But my project is hosted on a server (WebFaction), so all the code changes need to happen on the server, where I can see that they work.
I cannot find a reference to changing the code on the WebFaction server (rather than on the local PC) and then committing and pushing the code from the WebFaction server to the Bitbucket account. I simply don't know how to do this (or even whether it can be done!).
Can someone give me the steps and syntax (as much as possible) to do this? Could you also keep the answers as simple as possible as there are huge parts of Mercurial I don't yet understand.
Thanks.
Assuming you have full SSH access to the WebFaction server (you should, according to the WebFaction features page), I suggest you try following the detailed instructions found here. If you get stuck on any step, you can then ask a more specific question (though it is probably better to ask on Server Fault).
The fact that the repository is on a remote server does not really change anything: you connect through SSH to the remote server (WebFaction) and follow the steps as if it were a local machine.
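Concretely, the workflow on the server looks something like this. A sketch assuming SSH access; the user, host, path, and repository names are placeholders:

    # Connect to the WebFaction server
    ssh myuser@myuser.webfactional.com

    # First time only: clone your Bitbucket repository onto the server
    cd ~/webapps
    hg clone https://bitbucket.org/myuser/myrepo
    cd myrepo

    # After editing files on the server:
    hg add                          # track any newly created files
    hg commit -m "Describe the change"
    hg push                         # send the changesets back to Bitbucket

Because the clone was made from Bitbucket, hg push already knows where to send the changes (it is recorded as the default path in the repository's .hg/hgrc).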

SVN web authentication by MySql

I want to set up authentication for my SVN server through the Apache web server via mod_dav_svn. For authenticating users I want to use MySQL, since I want to extend it with other functions later on.
I've followed these instructions and they work perfectly for me:
SVN Authentication using MySQL
But what happens when I want to define one group of users with read-only permission and other groups with read-write permission?
I'm out of ideas, so please help me :).
PS: dynamically editing the AuthzSVNAccessFile would take too much effort :'(
From what I have read, if you are going to use the open-source Subversion server, your options are limited to modifying the access file, which you were hesitant to do.
The issue is mentioned here, although in regards to LDAP auth: https://serverfault.com/questions/188023/webinterface-for-configuring-svn-access-in-mod-dav-svn
My advice is to set up a cron job that regenerates the authz file from the database at a regular interval.
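As an illustration, the regeneration script could read the groups out of MySQL and rewrite the authz file atomically. A rough sketch only; the database name, table, and column names are invented for the example and will differ in your schema:

    #!/bin/bash
    # rebuild-svn-authz.sh -- regenerate the SVN authz file from MySQL
    OUT=/etc/svn/authz
    DB_PASS="change-me"   # placeholder

    readers=$(mysql -N -u svnauth -p"$DB_PASS" svndb \
        -e "SELECT GROUP_CONCAT(username) FROM users WHERE role = 'ro'")
    writers=$(mysql -N -u svnauth -p"$DB_PASS" svndb \
        -e "SELECT GROUP_CONCAT(username) FROM users WHERE role = 'rw'")

    {
        echo "[groups]"
        echo "readers = $readers"
        echo "writers = $writers"
        echo
        echo "[/]"
        echo "@readers = r"
        echo "@writers = rw"
    } > "$OUT.tmp" && mv "$OUT.tmp" "$OUT"

A crontab entry such as the following would then keep the file in sync:

    */5 * * * * /usr/local/bin/rebuild-svn-authz.sh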

Locking down/securing TortoiseHg's web server

I'm migrating a few projects from SVN to Mercurial and I'm not sure how to address this issue: because we are working with MVC 3, we have some SQL connection strings stored in our Web.config file.
Since TortoiseHg automatically starts a wide-open web server when you click "Web Server" in the context menu, I'm looking into ways to restrict it or lock it down, but I haven't had any luck. We obviously don't want anyone to be able to browse or pull, which is enabled by default. While the simplest solution is just not to run it, it is entirely possible that a developer accidentally clicks it while trying to synchronize or clone, clicks X to close the dialog, and ends up having exposed his local repository without a clue.
How do other developers address this? Am I missing something? I've thought about pushing out a GPO blocking remote access to port 8000, but there's nothing stopping a dev from scrolling up and changing the port or something equally silly.
After all clarifications, I still believe you're trying to solve the wrong problem.
hg serve is a legitimate tool that can be used to pull changesets between developers on the same network when it's too early to push those changesets to the server. It may or may not fit into your workflow, but I don't think the problem lies there.
If you expect malice, then nothing prevents any developer from exposing the sensitive information in Web.config (and, by the way, the source code itself) to a third party, even if you somehow block hg serve.
On the other hand, if you expect carelessness, then you should instruct the developers not to use hg serve, stop storing sensitive information in the repository, or possibly both.
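One partial mitigation worth noting: Mercurial reads a machine-wide configuration file, so you could distribute a setting that binds any locally started web server to the loopback interface. A sketch of the relevant fragment (in the system-wide Mercurial.ini on each workstation); a determined developer can still override it per repository or on the command line, so it only guards against carelessness:

    [web]
    # Bind "hg serve" (and TortoiseHg's Web Server) to localhost only,
    # so the repository is not reachable from other machines
    address = 127.0.0.1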