TeamCity v8: changing from checkout on agent to checkout on server changes the working directory

We are currently using TeamCity v8 (we will eventually upgrade).
Right now we have a server and an agent on the same machine, and all the VCS roots are set to "checkout on agent".
By doing that I get this directory structure:
[agentdirectory]\work\[hashcode]\[all the code]
I then tried adding a second agent and needed to change the checkout mode to "on server", but then the structure changed to:
[agentdirectory]\work\[hashcode]\[branch name]\[all the code]
This breaks the builds: they expect the root project.proj file directly under the checkout directory, but the extra [branch name] folder now sits in the way.
Is there a way to prevent this? Is this a bug in TeamCity v8?
Otherwise it forces me to change the build folder, testing folders, and artifact paths.
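If the extra folder cannot be avoided, one stop-gap is to reference everything through TeamCity's checkout directory parameter instead of hard-coding paths (a sketch only; %teamcity.build.checkoutDir% is a standard parameter, but whether this fits your runners is an assumption):
Build file path: %teamcity.build.checkoutDir%\[branch name]\project.proj
Artifact paths:  %teamcity.build.checkoutDir%\[branch name]\bin\Release => dist.zip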

So the answer is that it's a bug in the Surround SCM plugin. They are now aware of it and will fix it "in the future".

How to modify Chrome ExtensionInstallWhitelist?

I am trying to install extensions in Chrome, but it seems to "decide" for me what I should and should not install, which is very frustrating. I have navigated to chrome://policy/ and the ExtensionInstallBlacklist is set to "*". How can I change this, or add my extension IDs to the ExtensionInstallWhitelist array? I cannot find this file anywhere on my machine (Mac), and I have looked everywhere, including /Library/Managed Preferences/<username>/ as suggested in other threads. How can I modify the policy settings?
Then I would suggest contacting your administrator. This setting is put into your machine by a workgroup policy.
Even if you were able to change the value locally, it's in place for a reason, and you may get in trouble for that.
Note that you would need local administrator rights to access the file. According to the docs, it should indeed be in /Library/Managed Preferences/<username>. And modifying it will not help in the long term.
Here's how I resolved mine:
Find the extension that you want to install and try to install it.
Copy the extension ID from the pop-up.
In regedit, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallWhitelist
Add a new REG_SZ value named with the next sequence number and paste the extension ID as its data.
Restart Chrome and try to install the extension again.
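A command-line equivalent, run from an elevated prompt (a sketch; the value name 1 and the extension ID below are placeholders):
reg add "HKLM\SOFTWARE\Policies\Google\Chrome\ExtensionInstallWhitelist" /v 1 /t REG_SZ /d aaaabbbbccccddddeeeeffffgggghhhh /f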
Only for macOS
Google Chrome restricts installing third-party extensions for better security. The official method to bypass this limitation is to add a custom policy. In the current version (60) of Chrome there is a policy entry for whitelisting extensions called ExtensionInstallWhitelist. On macOS, you can easily add such a policy by running the following command in Terminal:
defaults write com.google.Chrome ExtensionInstallWhitelist -array id
Replace id with your actual extension ID. The ID can be found in chrome://extensions after ticking the "Developer mode" box. If you want to add multiple IDs, put id1 id2 id3, etc. after -array. Restart Chrome for the change to take effect. To check whether the policy works, visit chrome://policy. To remove the policy, simply run:
defaults delete com.google.Chrome ExtensionInstallWhitelist
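As noted above, several IDs can be whitelisted in one go by listing them after -array, for example (the IDs below are placeholders):
defaults write com.google.Chrome ExtensionInstallWhitelist -array id1 id2 id3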
This could help install open-source Chrome extensions such as BaiduExporter without warning.
Original source
If you want to remove it:
Press Win+R and run regedit.
Navigate to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome
Select the key containing ExtensionInstallWhitelist.
Right-click and delete it.
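The command-line equivalent would be something like this (a sketch; deleting only the whitelist value list rather than the whole Chrome policy key is an assumption about what you want):
reg delete "HKLM\SOFTWARE\Policies\Google\Chrome\ExtensionInstallWhitelist" /f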

Managing composer and deployment

So, I'm enjoying using Composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using DeployHQ, and yes, I can set it to deploy and run Composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json.
I would recommend that all of the packages in composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing in the packages updates when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
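That pinning looks roughly like this in composer.json (a sketch; the package names, version and commit hash are placeholders):
{
    "require": {
        "monolog/monolog": "1.17.2",
        "acme/custom-theme": "dev-master#<commit-hash>"
    }
}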
Actual deployments
Possible Method 1
My opinion is slightly controversial here: for many of my Composer projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, and it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote system failures, that it clogs up the VCS tree, etc. I won't go into all of that now - there are arguments for and against, a lot of them opinion-based - but as you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
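In practice it amounts to something like this (a sketch, assuming vendor/ was previously listed in .gitignore):
# remove the /vendor/ line from .gitignore, then:
git add vendor composer.lock
git commit -m "Commit Composer dependencies"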
Possible Method 2
By pointing your document root at a symlink on the server, you can build into a new directory and only switch the symlink over once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
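Roughly like this (a sketch; the release layout and paths are assumptions):
cd /var/www/releases/2016-01-15
composer install --no-dev --no-interaction        # build the new release in its own directory
# run smoke tests here, then switch the docroot symlink over
ln -sfn /var/www/releases/2016-01-15 /var/www/current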
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you will basically be creating a "repository folder" of your vendor files, as an alternative to adding the entire vendor folder to your VCS - it also protects you against GitHub / Packagist outages, files being removed, and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip file rather than being fetched from a server - and this folder can be stored remotely. Think of it as a poor man's private Packagist (another option, by the way).
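Configuring that looks roughly like this in composer.json (the artifacts path is a placeholder):
{
    "repositories": [
        { "type": "artifact", "url": "path/to/artifacts/" }
    ]
}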
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to (a rough sketch follows the list):
run tests on your application (if they exist)
run composer update
generate an artifact of these files (if the above items succeed)
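A rough sketch of that build as a Jenkins "Execute shell" step (the ordering, tool names and archive name are assumptions; BUILD_NUMBER is a standard Jenkins variable):
composer update --no-interaction --prefer-dist
vendor/bin/phpunit                                          # fail the build if the tests fail
tar -czf build-${BUILD_NUMBER}.tar.gz --exclude='.git' .    # package the tested code as an artifact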
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail), it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
Hope this helps :)

Why do I suddenly get the "Not commonly downloaded" warning in Chrome after ClickOnce deployment?

Upgraded Telerik in my ClickOnce application to version 2014.3.1202.40. (Never sure of the best way to do this. After the install, all my references to Telerik controls were broken and I had to remove all Telerik references in each of the projects and re-add them. So, I may be upgrading in the wrong way. But that's another matter.)
I deploy my app to a staging folder on my web server before moving to production. The app is signed with a commercial code-signing certificate from Comodo that doesn't expire until 2019. I've uploaded new versions many times with no problem. But now, since I upgraded the Telerik controls, I can't download and install the application. Here's what happens:
In Chrome, I enter the url: http://porpoiseanalytics.com/PorpoiseStaging/setup.exe
I get the "Not commonly downloaded" warning, where I never got that before. I don't get any error on Firefox or IE.
If I tell Chrome to keep the file, I can start it. The installation starts on all the other browsers too.
About 3/4 of the way through the download of the files, Avast blocks it with a DRep warning (I'm guessing lack of reputation). If I turn off Avast, it installs fine. The ClickOnce install log shows an error: "Exception occurred loading manifest from file [application].exe: the manifest may not be valid or the file could not be opened."
Why is my application suddenly acting like it has no reputation when it has been downloaded for months with no problems? After I modified the application in VS2010 and removed and re-added the Telerik DLLs, I suddenly have no reputation. What makes matters worse is that my production download located at http://porpoiseanalytics.com/PorpoiseDownload/setup.exe is now acting the same way.
I admit I don't have a good enough understanding of reputation, signing, and ClickOnce. But I do know that whereas before we were fine, after deploying the application we're flagged as malicious software. I made a few code changes in the program (not many), but I also replaced the Telerik DLLs. It probably has something to do with signing and publishing, but I can't figure it out. Please help. Thanks.
I think I figured it out. Although I had signed the manifest in the main UI project (the installer), the executable was not signed. With some help, I learned how to do that:
Download the Windows 7 SDK with signtool.
In Visual Studio, open project properties in the main UI project.
Open the Compile tab and click on the Build Events button.
In the post-build events, enter:
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\signtool.exe" sign /f "$(ProjectDir)[name of code cert file]" /p "[password]" /t http://timestamp.comodoca.com/authenticode "$(ProjectDir)obj\$(ConfigurationName)\[exe name].exe"
where [name of code cert file] is the name of the code-signing cert file, such as private_key.pfx, and [password] is the password used when exporting the certificate (if % is included in the password, escape it with %%, so pass%word would be entered as pass%%word), and [exe name] is the name of your primary project executable.
In the other projects within the solution, sign their output by inserting a similar command in the same post-build location:
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\signtool.exe" sign /f "$(ProjectDir)DAD_Code_Certificate.pfx" /p "<password>" /t http://timestamp.comodoca.com/authenticode "$(TargetPath)"
Original Problem
My theory is that the original problem was caused by a new feature in Avast 2015 that does a DomainRep (reputation?) check; if several criteria are all met, the alarm bells go off and it stops the download. Because my executable was not signed, it met all the criteria.
It is possible (although I really am not sure about this) that because of this DRep alarm, Google flagged the installer on our website as malicious, causing the red "not normally downloaded" warning when first starting the download.
At least, that's my best guess. Others will most certainly understand this better than me (and would have avoided it in the first place by signing the executable).
Official answer from Google Apps technical support (I'm on the Silver support plan - $150/month):
I replicated the issue you are describing and it looks to be a known issue with Google Chrome, when trying to download an archive that has an executable in it.
Please be advised that Google Chrome is outside the support scope of Google Cloud; however, the workaround is rather simple: when that message appears you can click on the arrow to the right of that message and choose "keep". This will download the file requested.

Chrome allow-file-access-from-files no longer working (I was using it to view WebGL/three.js files)?

I was using a Chrome shortcut with --allow-file-access-from-files in the target to work on my three.js student project files. But sometime this morning this stopped working, and it appeared Chrome had been updated. I redid the shortcut, but no joy.
Part of the project I'm doing is building three.js animation that works in a common browser (for which I chose Chrome).
Is there any way to get Chrome to allow file access again?
Thanks.
The answer I came up with was to use Firefox instead of Chrome, changing the security policy as detailed in https://github.com/mrdoob/three.js/wiki/How-to-run-things-locally
Not a perfect answer, but with a deadline looming it's the most workable answer for me right now, as trying different variations of Chrome, Wamp and also Mongoose didn't work. If I had more time I would work out how to use Python or probably node.js, as I've seen it mentioned a number of times as being the faster option.
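For reference, the Python route mentioned above can be a one-liner run from the project directory (Python 3 shown; Python 2 used the SimpleHTTPServer module instead):
python -m http.server 8080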
What gman stated is true: using the Chrome flag (and changing Firefox's security policy) does create a big security risk, but only if you use that shortcut (and its tabs etc.) for anything other than accessing your own local files. I've been scrupulous about not using it for the internet, but don't use this method if you can't be strict with yourself.
Ideally I'd recommend beginning any project with node.js.
gman's answer is good. If you're in a Windows environment and use npm for package management, the easiest option is to install http-server globally:
npm install -g http-server
Then simply run http-server in any of your project directories:
E.g. d:\my_project> http-server
Starting up http-server, serving ./
Available on:
http://169.254.116.232:8080
http://192.168.88.1:8080
http://192.168.0.7:8080
http://127.0.0.1:8080
Hit CTRL-C to stop the server
Easy, and no security risk of accidentally leaving your browser open and vulnerable.
DON'T USE THAT FLAG! You're opening yourself up to having your online accounts hacked and your local data stolen. Here are 2 proof-of-concept examples.
Run a simple server.
It's super simple.
Here's one
Here's one.
Here's another.
And another.
They won't take more than a couple of minutes to download and require no configuration.

Unable to get email-ext.hpi to work in hudson

I have just set up Hudson and have begun playing around with it.
I have downloaded email-ext.hpi into the folder $HUDSON_HOME\plugins.
I have restarted Hudson after that step (I am following this manual method because, for proxy-setting reasons, I am unable to use the automatic way of installing plugins via the "Manage Hudson" page).
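For clarity, the manual step was essentially just a file copy followed by a restart of the Hudson service (the environment-variable form of the path is an assumption about how $HUDSON_HOME is set on this machine):
copy email-ext.hpi "%HUDSON_HOME%\plugins\"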
I don't see any errors when Hudson starts. In fact I see the line
INFO: Started all plugins
BUT:
When I open a project configuration page, I do not see the promised option "Editable Email Notification".
FYI:
1. I am able to set up and run a few basic test builds and they run fine.
2. I am also able to configure and receive the default Hudson emails for failures and subsequent successes (this confirms the SMTP settings).
3. I was also able to set up the subversion tag hpi in the same way as detailed above and that works fine as well!
What am I missing? Thanks in advance for any help!
EXTRA INFO:
Hudson version - 1.379 running on Windows XP
OK - I figured out a workaround (although I still need to dig into why this is a problem). Recording it here for anyone else that may face this issue.
The plugin, when copied into $HUDSON_HOME\plugins, was somehow not really being activated/recognized. But when I also copied it to C:\Documents and Settings\mylogin\.hudson\plugins and restarted the Hudson service, voila! It worked.
If anyone knows why this might have occurred, kindly record it here for reference. Thanks.
To install a plugin you should use the easy route. In Hudson, go to 'Manage Hudson' -> 'Manage Plugins' -> 'Advanced' (it's a tab) and use the 'upload plugin' option.
Then follow the instructions. Usually you have to restart Hudson to actually get the plugin.
That is way safer than messing around with the file system. In general the approach you took should have been correct, but there seems to be an issue with your $HUDSON_HOME. Have a look at the "Manage Hudson" -> "Configure System" page: what is the Hudson home directory displayed at the top of the page? I don't know what Hudson does if it can't access the home directory. My assumption here is that Hudson runs as a service with a user account rather than the local system account, and that you used a different account to copy the hpi file.
Install the Maven Legacy and Maven3 plugins.