I've recently switched from Netbeans to PHPStorm and I have a weird issue with PHPStorm. Sometimes (really often) when I pull or push, it shows that it is pulling/pushing but it never finishes. I have to restart PHPStorm, do it again, and then it works.
I synchronize with Bitbucket, if that helps, and I use Mercurial.
PHPStorm doesn't show any errors.
Does anyone know what can cause this?
Have a look at SSH; a lot of people prefer SSH over username/password. With SSH you only need to enter the passphrase for your private key once, on the first pull or push. Here is the manual.
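If it helps, the usual setup looks something like this (a rough sketch only; the key type, email and Bitbucket URL are examples to adapt):

    # Generate a key pair and protect it with a passphrase
    ssh-keygen -t rsa -b 4096 -C "you@example.com"

    # Paste the contents of ~/.ssh/id_rsa.pub into your Bitbucket account's
    # SSH keys page, then point the repository at the SSH URL in .hg/hgrc:
    #   [paths]
    #   default = ssh://hg@bitbucket.org/yourname/yourrepo

After that, the passphrase is only requested once (or not at all if you use an SSH agent such as Pageant on Windows).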
So I own a VPS server running CentOS, and decided to use git for deployment. Man! That's fun. Push, done!
I'm really happier than I was with the old FTP approach.
But I wish I could go further: today it automagically deploys all my files, but it doesn't even touch my DB, and if I change the database I have to update it manually. So I was thinking about using some git hooks to do this automatically as well.
Right now I'm using one git hook on the server: a post-receive hook that basically copies the files to the production directory when I push to master.
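For reference, the hook is essentially this (the paths are specific to my box, so treat them as examples):

    #!/bin/sh
    # post-receive: check out the pushed master branch into the production
    # document root.
    TARGET=/var/www/mysite
    REPO=/home/deploy/mysite.git

    while read oldrev newrev ref; do
        if [ "$ref" = "refs/heads/master" ]; then
            git --work-tree="$TARGET" --git-dir="$REPO" checkout -f master
        fi
    done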
The prerequisites for the DB deployment are:
It needs to go both ways: if I pull and the remote db is different from my local one, it should update my local db.
It should be based on modifications and patches, not a dump of the whole db; this way I can work with the team without compromising the other guys' work.
I was thinking about keeping a db.sql under version control and writing a script to analyze it on post-receive (on the server) and post-merge (locally), so it can take the mods and apply them. I would also keep a record of which mods have already been applied (the script should run on both client and server).
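Something along these lines is what I have in mind (a rough sketch only -- the directory layout, credentials and table name are placeholders, and it assumes a tracking table already exists, e.g. CREATE TABLE schema_migrations (name VARCHAR(255) PRIMARY KEY)):

    #!/bin/sh
    # Sketch of a post-merge (local) / post-receive (server) hook that
    # applies any migration files not yet recorded.
    MIGRATIONS=db/migrations
    DB="mysql -u deploy -psecret mydb"

    for f in "$MIGRATIONS"/*.sql; do
        [ -e "$f" ] || continue            # no migrations yet
        name=$(basename "$f")
        count=$($DB -N -e "SELECT COUNT(*) FROM schema_migrations WHERE name='$name'")
        if [ "$count" -eq 0 ]; then
            $DB < "$f"
            $DB -e "INSERT INTO schema_migrations (name) VALUES ('$name')"
        fi
    done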
Any of you guys have already done something similar to this? What would you recommend?
Thank you very much in advance!
I have a very simple MAMP setup, with my index.php and related files in my htdocs folder. I was rolling along fine last night, able to access the files by typing in things like localhost/index.php. Now, all of a sudden, I get 404s (file not found on this server) when I try to connect to any of the pages in my localhost folder or its subdirectories.
What's more, when I just type in localhost, it shows me some of the directories but DOES NOT show any of my .php files, even though they show up when I run ls on the command line.
My MAMP app shows that I am connected to my Apache/MySQL servers. I can still access the localhost/MAMP homepage. But for some obscure reason, all of a sudden my PHP files are inaccessible. I have changed nothing inside of them! What's going on?
Edit: Turns out I needed to change the permissions of my PHP files -- they were set to read and write only for the admin (sudo) user and read-only for everyone else. I ran chmod 777 on the applicable files and things were back to normal, but this raises several questions:
Why was it working earlier then changed without me ever modifying the file permissions?
Why should I have to make the files writeable by other users just to be able to access them on localhost as the admin user?
If I were to deploy this code in the wild (I know MAMP isn't used that often in the wild, but still), what would I do? Wouldn't creating these kinds of permissions result in serious security holes?
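(For reference, a less drastic fix than chmod 777 would presumably be something like the following, assuming the web server only needs read access; paths are examples:)

    # 644 for files and 755 for directories is the usual baseline for web
    # content -- the server only needs read (plus execute on directories).
    find ~/htdocs -type f -exec chmod 644 {} \;
    find ~/htdocs -type d -exec chmod 755 {} \;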
EDIT 2: Aaaaaand now it's not working again. Again, no changes made to file permissions, etc., just a few tweaks to the actual PHP files themselves. I don't have any sharing enabled under my Sharing settings in System Preferences... this behavior is really starting to become frustrating.
Open Activity Monitor and make sure all instances of Apache and MySQL are closed. MAMP sometimes has a tendency not to actually quit those processes, so the next time you start it up they're still running, and it generally messes with things (how's that for a technical explanation?). The command sketch after this list shows how to check this from the terminal as well.
Make sure there isn't any other process that's trying to use localhost for any reason. I came across this problem with POW installed. The POW process had stopped responding and it ended up interfering with MAMP's Apache.
Make sure that MAMP's settings haven't somehow been changed. I've seen MAMP revert custom document roots for seemingly no reason, which can cause this.
I'd say even before any of this, open your System Preferences and make sure your Mac's own built-in Apache is off. You'll be able to see this in the Sharing section (it looks like it's been moved in Mavericks, however).
Make sure you're not routing traffic through a VPN or SSH using Sidestep. I had this problem after going back to an old project I built with MAMP while working from a coffee shop.
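If you'd rather check from the terminal than Activity Monitor, something like this should show what's holding the ports (the port numbers depend on your MAMP settings, so adjust as needed):

    # See whether anything is still listening on MAMP's Apache ports
    sudo lsof -iTCP:80 -sTCP:LISTEN
    sudo lsof -iTCP:8888 -sTCP:LISTEN

    # List any leftover Apache/MySQL processes
    ps aux | grep -E 'httpd|mysqld' | grep -v grep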
I'm using Mercurial (latest bundled with THG) and have a repo on Google Code. I enabled the mercurial_keyring extension and this worked perfectly until I changed the password on my Google account. Now Google Code returns a HTTP 403 error due to the wrong password stored in the keychain, which causes HG to abort the push without asking for the password again.
Is there any way to force the password change on the keyring, or even just to reset it, so that I can re-enter the new password? A tool to manage the stored entries for the Python Win32CryptoKeyring backend would also be fine, since I could use that to delete my password.
I found this question by accident. mercurial_keyring tries to detect such cases and ask for the password again, but for one reason or another this did not work here.
I created issue https://bitbucket.org/Mekk/mercurial_keyring/issue/45/some-way-to-clear-password-and-maybe to track the problem, anybody wishing to add some information or to follow the work is welcome to track it.
(mercurial_keyring author)
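In the meantime, one possible workaround -- untested, and the service name and username below are guesses that must match what was actually stored on your machine -- is to delete the stale entry through the Python keyring library directly:

    # Hypothetical cleanup via the Python keyring library; adjust the
    # service/username to whatever mercurial_keyring actually stored.
    python -c "import keyring; keyring.delete_password('Mercurial', 'username@@https://code.google.com/p/yourproject/')"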
I set up Mercurial on my server, but I'm unclear on how things should be arranged. I've been looking for examples of different setups, but perhaps I'm using the wrong keywords. Right now it's only going to be a handful of developers, and I'm unsure whether I should just make the repo the DocumentRoot. I really don't know what questions to ask since this is new to me, but I'd appreciate any knowledge and guidance. Some questions I do have right now: how should I set up my servers and repositories? Should I set up a separate VirtualHost for a test clone before making it live? Anything would be helpful! Thanks in advance!
There's probably no reason to do this. I would keep them separate, but set up an automated process (either a custom script or continuous integration (CI)) to deploy from Mercurial to the site by running a single command. Optionally, you can make every commit trigger a deployment.
EDIT: With continuous integration, the CI server is responsible for deploying. If you use SSH, the CI would pull from hg, export, then upload over SSH. That should address your issues. For a comparison of CI servers that support Mercurial, see this question.
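Such a deploy script might look roughly like this (host, paths and branch are placeholders):

    #!/bin/sh
    # Rough deploy sketch: pull the latest changes, export a clean snapshot
    # (no .hg directory), and sync it to the web root over SSH.
    hg pull -u
    hg archive -r default /tmp/site-export
    rsync -az --delete /tmp/site-export/ deploy@example.com:/var/www/mysite/
    rm -rf /tmp/site-export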
I don't have The answer to give you, since many variables and needs affect the workflow, but here are some links to get you started:
http://www.zdnetasia.com/a-development-workflow-for-mercurial-62204755.htm
https://www.mercurial-scm.org/wiki/Workflows
http://www.webdevelopment.nicholastuck.com/tools/one-project-one-repository-mercurial-used-right/
I'd also recommend reading this excellent Mercurial introduction: http://hginit.com/
You can also find various questions on SO about workflows with Mercurial; have a look at the sidebar to the right, for example.
When you have a more specific question, don't hesitate to ask again!
I would make your DocumentRoot directory a first-level subdirectory of your repository, and here are some reasons why:
If you're using something like Apache to manage your server, you could put other meta-information - like sites-available and sites-enabled configuration files - in a sibling directory, since they're not really a part of the website documents.
Similarly, you can keep a "docs" directory right next to the code.
If your repository root is your DocumentRoot, all other things being equal, you are also serving up your .hg directory, where your whole repository history is, and your .hgignore file, that kind of thing. You can fix this with a .htaccess file, of course, but it's simpler just to have the child folder.
Essentially, codebases tend not to be exact one-to-one matches with deployed sites, so I tend to favor having the document root be a subdirectory.
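To make that concrete, a vhost along these lines (all names illustrative) keeps the repository internals out of reach:

    # Illustrative layout: the repository lives at /var/www/myproject with
    # htdocs/, config/ and docs/ as first-level subdirectories; only
    # htdocs/ is served, so .hg/ and .hgignore are never exposed.
    <VirtualHost *:80>
        ServerName myproject.example.com
        DocumentRoot /var/www/myproject/htdocs
    </VirtualHost>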
Deployment is a whole 'nother can of worms. It really depends on your needs as to what you do, but here's what I do:
I run a VirtualBox instance on my computer that looks as close as possible to my deployed server, at least as close as I can get the configuration files to be. I would argue that this approach is less error-prone than an additional VirtualHost entry. Depending on the project, I can get this down to being identical minus perhaps some DNS entries, so I can set everything up to point to either testing.myproject or production.myproject, and I always automate this (I use chef, but that is overkill for a smaller project) so that it's testable code and not prone to finger-fumbling. There's nothing worse than running smoke tests that wipe your database - and having the config accidentally pointing at your prod db.

Running a virtual machine also lets you painlessly test upgrades to the environment or OS of your server, and you can nuke and restore to a snapshot if you want to go back to an earlier state of the machine's configuration.
If you really want to prevent SSH developer access to your prod machines - and IMO that's a bad idea, because if you have problems on your production server, you've prevented your developers from diagnosing or fixing them - then I think your best bet is to use something like Hudson, which is a continuous integration framework. You only give SSH access to the Hudson user to run your deploy script, but anyone (with the right privileges set in Hudson) can run that job. In fact, this is handy in an environment where you have, e.g., some product management members you want to be able to update the production server without being able to log in. The "poor man's" version of this is using sudo to allow your devs to run a command as another user who does have SSH access - and only allowing them to run the publish script (there's a sketch of this at the end of this answer).
I would still recommend giving your devs access to your machine, though you don't have to hand over the keys to the kingdom. Just create a "developers" group, assign your devs to it, and give it enough permissions to work with the necessary directories on the server, and you should be good to go.
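A rough sketch of both ideas (group name, user and paths are illustrative):

    # Create the group and add a developer to it
    sudo groupadd developers
    sudo usermod -aG developers alice

    # Give the group access to the site directories
    sudo chgrp -R developers /var/www/mysite
    sudo chmod -R g+rwX /var/www/mysite

    # "Poor man's" deploy permission: allow the group to run only the
    # publish script as the deploy user. Add this line via visudo:
    #   %developers ALL=(deploy) NOPASSWD: /usr/local/bin/publish.sh
    # Devs then deploy with:
    #   sudo -u deploy /usr/local/bin/publish.sh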
I'm migrating a few projects from SVN to Mercurial and I'm not sure how to address this issue: because we are working with MVC 3, we have some SQL connection strings stored in our Web.config file.
Since TortoiseHg automatically starts a wide-open web server when you click "Web Server" in the context menu, I'm looking into ways to restrict it or lock it down, but I haven't had any luck. We obviously don't want anyone being able to browse or pull, which is enabled by default. While the simplest solution is just not to run it, it is entirely possible that a developer accidentally clicks it while trying to synchronize or clone, clicks X to close it, and ends up running a local server without a clue.
How do other developers address this? Am I missing something? I've thought about pushing out a GPO blocking remote access to port 8000, but there's nothing stopping a dev from scrolling up and changing the port or doing something similarly silly.
After all clarifications, I still believe you're trying to solve the wrong problem.
hg serve is a legitimate tool that can be used to pull changesets between developers on the same network when it's too early to push those changesets to the server. It may or may not fit into your workflow, but I don't think the problem lies there.
If you expect malice, then nothing prevents any developer from exposing the sensitive information in Web.config (and, by the way, the source code itself) to a third party, even if you somehow block hg serve.
On the other hand, if you expect carelessness, then you should instruct the developers not to use hg serve, or stop storing any sensitive information there, possibly both.
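That said, if the worry is purely accidental exposure, Mercurial's [web] options can at least neuter hg serve's defaults. Something like this in each repo's .hg/hgrc (or a site-wide hgrc) should help, though I'd double-check the option names against your hg version:

    [web]
    # Bind only to the loopback interface so other machines can't connect
    address = 127.0.0.1
    # Deny read (browse/clone/pull) access to everyone
    deny_read = *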