Use environment variables inside google-repo manifest?

We are trying to start using google-repo in our project, as the project is divided into multiple repositories. The problem is that our git server requires the username to be put into the URL, e.g.
git clone ssh://username@git.server.com
But is it possible to get that into the manifest? I've tried the following
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="gerrit"
fetch="ssh://$USER#git.server.com"
review="ssh://$USER#git.server.com"
revision="refs/heads/master"/>
<default remote="gerrit" sync-j="4"/>
<project name="project" remote="gerrit" path="project"/>
</manifest>
but google-repo simply uses the literal ssh://$USER@git.server.com when cloning (that is, it does not expand the environment variable $USER).

This is an ssh config issue; you should not put $USER in the remote of your manifest.
In ~/.ssh/config, add:
Host git.server.com
IdentityFile ~/.ssh/id_rsa
User <your_user>
IdentityFile should point to your ssh private key (YMMV).
You should now be able to
git clone ssh://git.server.com
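With that ssh config in place, the manifest no longer needs a username in the URL at all. A minimal sketch, reusing the remote from the question (the host name and project name are the question's own placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <remote name="gerrit"
          fetch="ssh://git.server.com"
          review="ssh://git.server.com"
          revision="refs/heads/master"/>
  <default remote="gerrit" sync-j="4"/>
  <project name="project" remote="gerrit" path="project"/>
</manifest>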

Related

Specify JFROG_ACCESS home instead of ~/.jfrog_access (Artifactory 5.5.2)

I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory, and that part works well. There is, however, also the JFrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's not the way to go for sure. Any help appreciated.
In case someone stumbles over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) to the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME environment variable will be recognized on Access startup.
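For example, a sketch assuming the distribution zip was unpacked to /tmp/artifactory-oss (that path is just an illustration):
# copy the Tomcat context files shipped in the zip into Tomcat's per-host config dir
cp /tmp/artifactory-oss/misc/tomcat/access.xml \
   /tmp/artifactory-oss/misc/tomcat/artifactory.xml \
   $CATALINA_HOME/conf/Catalina/localhost/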
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/host/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
<Parameter name="jfrog.access.bundled" value="true" override="true"/>
<!-- enable annotations scanning of access jar files -->
<JarScanner scanClassPath="false">
<JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
</JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost in order to prevent the Server HTTP header from being overwritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header. This seems like a really fragile part of the Access Client; they should have used X-JFrog-Access-Server or something instead of relying on a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
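A quick way to see the difference is to compare the Server header on the two URLs (a sketch using curl; the hostname and port are the ones from the example setup above):
# through Apache: the Server header is Apache's, which confuses the Access Client
curl -s -D - -o /dev/null https://repo.mydomain.org/access/api/v1/system/ping | grep -i '^Server:'
# directly against Tomcat: Apache never rewrites the header
curl -s -D - -o /dev/null http://localhost:8080/access/api/v1/system/ping | grep -i '^Server:'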
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the url:
https://repo.mydomain.org/artifactory/webapp/

How to send junit report via email as report doesn't display

I have a JUnit HTML report on my local machine which is built through Ant.
The problem I am having is that if I try to attach this report by sending just the index.html, it does not display properly, because the frames and supporting files are missing.
I want to know how I can send the above report via email so everyone can see the HTML report. I only want to send the index.html; I don't want to send everybody the corresponding files.
The XML code to build this report is below (I xxx'd out some things which are not xxx in the real file):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- WARNING: Eclipse auto generated file.
Any modifications will be overwritten.
To include a user specific buildfile here, simply create one in the same
directory with the processing instruction <?eclipse.ant.import?>
as the first entry and export the buildfile again. -->
<project basedir="." default="Test_Report" name="Test_Report">
<target name="xxx_SoapUI">
<exec dir="." executable="C:\Program Files\SmartBear\SoapUI-5.3.0\bin\testrunner.bat">
<arg line="-r -j -f 'D:\xxx\xxx' 'D:\xxx).xml'"></arg>
</exec>
</target>
<target name="xxx_SoapUI">
<exec dir="." executable="C:\Program Files\SmartBear\SoapUI-5.3.0\bin\testrunner.bat">
<arg line="-r -j -f 'D:\xxx\xxx' 'D:\xxx).xml'"></arg>
</exec>
</target>
<target name="xxx_SoapUI">
<exec dir="." executable="C:\Program Files\SmartBear\SoapUI-5.3.0\bin\testrunner.bat">
<arg line="-r -j -f 'D:\xxx\xxx' 'D:\xxx).xml'"></arg>
</exec>
</target>
</project>
The format="frames"attribute normally generates separate files for frames and maybe stylesheets. As mentioned in the task documentation, you can set the value to noframes and Ant will generate the report in a single HTML:
The noframes format does not use redirecting and generates one file called junit-noframes.html.
In case you need to customize the XSL used to generate the report, you can override it using the styledir attribute (note that the file must be named junit-noframes.xsl). The default XSL is embedded in the Ant source code and can be viewed here.
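For reference, a minimal sketch of such a junitreport target (the directory names are assumptions, not taken from the build file above):
<target name="report">
  <junitreport todir="reports">
    <fileset dir="reports">
      <include name="TEST-*.xml"/>
    </fileset>
    <!-- noframes writes a single junit-noframes.html that can be mailed on its own -->
    <report format="noframes" todir="reports/html"/>
  </junitreport>
</target>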
Here is an alternative approach.
In this approach, don't attach the test results at all. Instead, host them in a container such as Tomcat or a WebDAV server.
Here is the example for Tomcat (it should be similar for WebDAV as well):
Install Tomcat.
Create a directory, say reports, under the TOMCAT_HOME\webapps directory and make sure Tomcat is started.
After generating reports using the current Ant target, create a new target that moves the current reports under TOMCAT_HOME\webapps\reports\<timestamp> (see the sketch after this list).
Send the reports link in the email. The report link could be http://<hostname>:<port>/reports/<timestamp>/index.html
This way, everyone can access the report online.
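A sketch of such a copy target (TOMCAT_HOME is read from the environment; the local reports directory name is an assumption):
<property environment="env"/>
<target name="publish_reports">
  <tstamp>
    <format property="report.ts" pattern="yyyyMMdd-HHmmss"/>
  </tstamp>
  <!-- copy the freshly generated reports into Tomcat under a timestamped folder -->
  <copy todir="${env.TOMCAT_HOME}/webapps/reports/${report.ts}">
    <fileset dir="reports"/>
  </copy>
</target>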

How to set aspnetcore_environment in publish file?

I have an ASP.NET Core application (Web API). The documentation explains working with multiple environments, however it does not explain how to set aspnetcore_environment when publishing the web site.
So let's say I have 3 environments: Development, Staging and Production.
In a classic ASP.NET web application I used to create 3 build configurations (Development, Staging and Production) and then 3 .pubxml files, one for each configuration. Do I need to use the same approach for an ASP.NET Core application as well?
How do I set aspnetcore_environment in the .pubxml file?
If the approach specified in Question 1 is obsolete, then what's the alternative approach? (I use Jenkins for CI.)
Update 1
I understand that I have to set ASPNETCORE_ENVIRONMENT, however I am not able to understand where to set it. During development I can set it in a profile in launchSettings.json, but the question is how to set it when publishing to staging or production. Do we set the environment variable on the target server itself?
Update 2
I found an article here that explains different ways of setting the environment variable. This partially answered my question. However, when I publish the application, the publish process does not honor the environment variable when publishing appsettings.{env.EnvironmentName}.json.
I have created a separate post for that question.
You could pass the desired ASPNETCORE_ENVIRONMENT into the dotnet publish command as an argument using:
/p:EnvironmentName=Staging
e.g.
dotnet publish /p:Configuration=Release /p:EnvironmentName=Staging
This will generate the web.config with the correct environment specified for your project:
<environmentVariables>
<environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging" />
</environmentVariables>
I had the same requirement, and I came up with the following solutions. They work well with automated deployments and require fewer configuration changes.
1. Modifying the project file (.csproj)
MSBuild supports the EnvironmentName property, which can help set the right environment variable for the environment you wish to deploy. The environment name is added to web.config during the publish phase.
Simply open the project file (*.csproj) and add the following XML.
<!-- Custom Property Group added to add the Environment name during publish
The EnvironmentName property is used during the publish for the Environment variable in web.config
-->
<PropertyGroup Condition=" '$(Configuration)' == '' Or '$(Configuration)' == 'Debug'">
<EnvironmentName>Development</EnvironmentName>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' != '' AND '$(Configuration)' != 'Debug' ">
<EnvironmentName>$(Configuration)</EnvironmentName>
</PropertyGroup>
The above code sets the environment name to Development for an empty or Debug configuration. For any other configuration, the environment name is taken from the configuration that was selected. This adds the ASPNETCORE_ENVIRONMENT environment variable with the desired value to web.config. You can modify the logic for the environment name as desired by updating the .csproj file. More details here
2. Adding the EnvironmentName property in the publish profiles
We can add the <EnvironmentName> property in the publish profile as well. Open the publish profile file, which is located at Properties/PublishProfiles/{profilename}.pubxml. This will set the environment name in web.config when the project is published. More details here
<PropertyGroup>
<EnvironmentName>Development</EnvironmentName>
</PropertyGroup>
An environment can be added for each configuration, and the value of the EnvironmentName property can be changed in each *.pubxml file.
3. Command line options using dotnet publish
Additionally, we can pass the EnvironmentName property as a command-line option to the dotnet publish command. The following command includes the environment variable as Development in the web.config file.
dotnet publish -c Debug -r win-x64 /p:EnvironmentName=Development
When hosting the application under IIS you can set the environment variable in web.config.
https://learn.microsoft.com/en-us/aspnet/core/hosting/aspnet-core-module
To generate it on publish, add a web.config to the root of your project; "dotnet publish" will use this file as the basis for the one that is generated in the publish folder. Then you can change the value in your deployment system.
<?xml version="1.0" encoding="utf-8" ?>
<!-- Used to overwrite settings web.config generated by "dotnet publish", Only used when hosting under IIS -->
<configuration>
<system.webServer>
<aspNetCore stdoutLogEnabled="true">
<environmentVariables>
<environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" />
</environmentVariables>
</aspNetCore>
</system.webServer>
</configuration>
I think you can't do it in the publish profile. You have to set the environment variable, e.g. ASPNETCORE_ENVIRONMENT = Staging.
I had to do a similar thing with an ASP.NET Core web app on Azure. I wanted to have dev, staging and production. The way I did it was exactly with the environment variable.
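On an Azure App Service, one way to do that is through an application setting. A sketch using the Azure CLI (the app name and resource group below are placeholders, not taken from the original answer):
# set ASPNETCORE_ENVIRONMENT as an App Service application setting
az webapp config appsettings set \
  --name my-web-api \
  --resource-group my-resource-group \
  --settings ASPNETCORE_ENVIRONMENT=Staging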
To set up two or more profiles, you need to create an additional profile, as mentioned in the linked article, and your launchSettings.json will contain multiple entries:
"profiles": {
"IIS Express": {
"commandName": "IISExpress",
"launchBrowser": true,
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
"IIS Express (Staging)": {
"commandName": "IISExpress",
"launchBrowser": true,
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Staging"
}
}
}
To be able to read the environment-specific configuration, you need to build the configuration during startup and call the additional method AddEnvironmentVariables so the variables take effect:
public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            // general properties
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            // specify the environment-based properties
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            // do not forget to add environment variables to your config!
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    // the built configuration, exposed to the rest of the application
    public IConfigurationRoot Configuration { get; }
}
A simple way to set it in the Visual Studio IDE:
Project > Properties > Debug > Environment variables
Please do not use machine-level environment variables; scope them to the application instead. Another application might do the same, and changing a machine-level variable may affect other applications.

Using environment properties with files in elastic beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
"/etc/passwd-s3fs" :
mode: "000640"
owner: root
group: root
content: |
${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
000-create-file:
command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so requires me to manually set the owner, group, file permissions, etc. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this. I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, S3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
Add the following file to your .ebextensions directory:
envvars.config
files:
"/opt/myapp_envvars" :
mode: "000644"
owner: root
group: root
# change the source when you need a different context
#source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars
Resources:
AWSEBAutoScalingGroup:
Metadata:
AWS::CloudFormation::Authentication:
S3Access:
type: S3
roleName: aws-elasticbeanstalk-ec2-role
buckets: myapp
commands:
# commands executes after files per
# http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
10-load-env-vars:
command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container-commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels, there is a "clever" way to do what you describe (at least in 2018, and at least since 2016). You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (output as JSON by default, or as YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
00_store_env_var_in_file_and_chmod:
command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
"/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
mode: "000755"
owner: root
group: root
content: |
#!/bin/bash
YOUR_ENV_VAR=$(/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance together with the code when you run git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
"/etc/passwd-s3fs" :
mode: "000640"
owner: root
group: root
container_commands:
10-copy-passwords-file:
command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat with sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.

How do you configure jetty to allow access from an external server?

I've seen this asked before, with no good answers: how do you configure Jetty to allow access from an external server? I've just started messing around with Solr and Jetty and am using the example Jetty instance that comes with Solr.
Solr is running fine on localhost, and I can query it from sites on the same server. However, I can't access the Solr instance from another server. I've googled and read quite a bit in the last few days, but have not been able to discover what's keeping Jetty from allowing non-localhost access to Solr.
Based on what I've read, I have tried adding the following line to example/etc/jetty.xml
<Set name="Host">0.0.0.0</Set>
and still got no external response
then tried
<Set name="Host">x.x.x.x</Set>
where x.x.x.x is my server's IP address
and
<Set name="Host">host.domain.com</Set>
where host.domain.com is my server's FQDN
These both resulted in the error
java.net.BindException: Cannot assign requested address
when I started.
The start command I'm using is
sudo java -jar start.jar etc/jetty.xml
You can point me to where I can read on this or spoon feed me, I don't care. I'd just like to get past this hurdle so I can keep learning about setting up and using solr.
You should add a file called clientaccesspolicy.xml to your static web files directory to allow cross-domain access:
<access-policy>
<cross-domain-access>
<policy>
<allow-from http-methods="*" http-request-headers="*">
<domain uri="http://*"/>
<domain uri="https://*"/>
</allow-from>
<grant-to>
<resource path="/" include-subpaths="true"/>
</grant-to>
</policy>
</cross-domain-access>
</access-policy>
You should register your static directory with Jetty using code like this:
// serve the directory that contains clientaccesspolicy.xml
ResourceHandler staticHandler = new ResourceHandler();
staticHandler.setResourceBase("static/dir");
handlers.addHandler(staticHandler); // "handlers" is the server's HandlerList
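For context, a minimal embedded-Jetty sketch showing where such a handler would be registered (Jetty 9 API assumed; the port and directory are placeholders, not taken from the Solr example):
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.server.handler.ResourceHandler;

public class StaticFileServer {
    public static void main(String[] args) throws Exception {
        // listen on all interfaces on port 8983 (placeholder port)
        Server server = new Server(8983);

        // serve static files, including clientaccesspolicy.xml, from ./static
        ResourceHandler staticHandler = new ResourceHandler();
        staticHandler.setResourceBase("static");

        HandlerList handlers = new HandlerList();
        handlers.addHandler(staticHandler);
        server.setHandler(handlers);

        server.start();
        server.join();
    }
}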