Getting 422 Unprocessable Entity when trying to add a Maven package from the GitHub registry to a Gradle file - build.gradle

I have a JAR in the GitHub Package Registry called test.jar (version 1.0.5); the repository name is testRepo and the owner is tejas3108.
I am trying to add this JAR as a dependency in another Gradle project, but even with correct credentials I still get this message: Could not GET 'https://maven.pkg.github.com/tejas3108/com.tejas/1.0.5/testRepo-1.0.5.pom'. Received status code 422 from server: Unprocessable Entity.
Pasting the same URL in a browser gives the message: Invalid path for maven file.
How do I successfully add this JAR from the GitHub registry to my build.gradle? Here is what I have now:
repositories {
    mavenCentral()
    maven {
        url = "https://maven.pkg.github.com/tejas3108"
        credentials {
            username = "tejas3108"
            password = "<my_github_token>"
        }
    }
}

dependencies {
    compile "com.tejas:testRepo:1.0.5"
}

The URL should include the repository name, as per the documentation. So, in your case, write:
url = "https://maven.pkg.github.com/tejas3108/testRepo"
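With that change, the repository block would look roughly like this (a sketch reusing the credentials from the question; note that the token must have the read:packages scope for GitHub Packages):

```groovy
repositories {
    mavenCentral()
    maven {
        // GitHub Packages requires both the owner AND the repository in the URL
        url = "https://maven.pkg.github.com/tejas3108/testRepo"
        credentials {
            username = "tejas3108"
            password = "<my_github_token>" // a PAT with the read:packages scope
        }
    }
}
```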

It may also be because the artifact group must be unique within your GitHub organisation or account: the same error can occur if, for example, you already publish the same artifact from a different GitHub repository of your organisation or account.

My issue was an uppercase letter in mArtifactId:
publications {
    maven(MavenPublication) {
        groupId mGroupId
        artifactId mArtifactId
        version mVersionName
    }
}


GitHub Actions Bicep deployment: "What-if" fails to create Key Vault; "Create" fails with exit code 1

It succeeds when I manually execute the Bicep deployment with the following commands:
az login
az deployment group what-if --resource-group $RESOURCE_GROUP_NAME --template-file ./infrastructure/bicep/main.bicep --parameters ./infrastructure/bicep/params.json
az deployment group create (with the same arguments) fails with exit code 1 and no logged message.
I then create a Service Principal and set it as a GitHub Actions secret, which I supply to my workflow for authentication with the Azure CLI:
az ad sp create-for-rbac --name azure-contributor-github-service-principal --role contributor --scope /subscriptions/$SUBSCRIPTION_ID
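For reference, the authentication step in the workflow might look like this (a sketch; AZURE_CREDENTIALS is an assumed secret name holding the JSON output of the create-for-rbac command above):

```yaml
steps:
  - uses: actions/checkout@v3
  # Log in with the Service Principal stored as a repository secret
  - uses: azure/login@v1
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  # Same deployment command as run manually
  - run: |
      az deployment group create \
        --resource-group $RESOURCE_GROUP_NAME \
        --template-file ./infrastructure/bicep/main.bicep \
        --parameters ./infrastructure/bicep/params.json
```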
Then execution of the same deployment, now automated, fails with the following log message:
"Multiple errors occurred: BadRequest. Please see the details.
BadRequest - The specified KeyVault '/subscriptions/***/resourceGroups/<my_rg_name>/providers/Microsoft.KeyVault/vaults/<my_kv_name>' could not be found."
The Bicep script indeed contains a declaration for a KeyVault resource named <my_kv_name>.
To me, it seems that when I use the az CLI and log in with my Azure Portal user account (az login), the CLI is already authorized with Key Vault-related permissions. GitHub, though, using the Service Principal that I created especially for that purpose, doesn't have sufficient permissions, even if I create it with --role owner.
I struggle to find more debugging information.
Any idea what I am missing?
UPDATE #1:
Considering @4c74356b41's answer, I added to my Bicep code an access policy that grants the Service Principal permissions on secrets.
Unfortunately I receive the same result.
resource keyVaultAccessPolicyForSecrets 'Microsoft.KeyVault/vaults/accessPolicies@2022-07-01' = {
  name: '${keyVault.name}/policy'
  properties: {
    accessPolicies: [
      {
        applicationId: spPolicyAppId
        objectId: spPolicyObjectId
        tenantId: spPolicyTenantId
        permissions: {
          secrets: [ 'all' ]
        }
      }
    ]
  }
}
UPDATE #2:
I managed to make the Bicep file deployable, but I had to change its structure. I believe the root cause of the issue is not related to the Service Principal's permissions on the Key Vault that the script creates. Here is why I think so:
File structure of the Bicep code:
core.bicep - responsible for creating a Container Registry, a Key Vault, and a Key Vault Secret
aca.bicep - responsible for creating a Log Analytics Workspace, a Container App Environment, and a Container App (with the MS default Container App image configured)
main.bicep - where:
- via the "module" keyword I reference the core.bicep file, which, as mentioned, creates the Key Vault;
- I declare an existing Key Vault resource, a property of which I need as an input parameter to the next module;
- via the "module" keyword I reference the aca.bicep file, which creates the rest of the resources.
main.bicep:
module core 'core.bicep' = {
  name: 'core'
  params: {
    location: location
    solution: solution
    spPolicyAppId: spPolicyAppId
    spPolicyObjectId: spPolicyObjectId
    spPolicyTenantId: spPolicyTenantId
  }
}

resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: core.outputs.KeyVaultName
}

module devAca 'aca.bicep' = {
  name: 'devAca'
  dependsOn: [
    core
  ]
  params: {
    env: 'dev'
    location: location
    project: project
    solution: solution
    containerRegistryName: core.outputs.ContainerRegistryName
    containerRegistryPassword: keyVault.getSecret(core.outputs.ContainerRegistrySecretName)
    imageName: imageName
    imageTag: imageTag
  }
}
With this structure, the deployment throws the message already mentioned. When I took the code out of the subfiles and inlined it in place of the modules, the deployment started passing successfully.
Moreover, I removed the Key Vault access policy and the infrastructure still deploys successfully, including the Key Vault and the Secret in it.
So my conclusion, for now, is that I am somehow misusing the "module" keyword.
Azure Key Vault has its own data-plane permissions; you need to grant your Service Principal access to secrets/certificates/keys (whichever you are pulling) in the Key Vault (get/list).
Reading: https://learn.microsoft.com/en-us/azure/key-vault/general/assign-access-policy?tabs=azure-portal
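As a sketch, granting get/list on secrets can also be done from the CLI (the Key Vault name and the Service Principal's object id are placeholders here):

```shell
# Grant the Service Principal data-plane access to secrets in the vault
az keyvault set-policy \
  --name <my_kv_name> \
  --object-id <sp-object-id> \
  --secret-permissions get list
```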
Since you didn't provide the full Key Vault Bicep file, we can't be sure whether it's a Bicep issue or something else. Try the Bicep file below with your parameters; it should create a Key Vault and an access policy. If it fails, then maybe your user (or Service Principal) is missing some rights in Azure?
Check the App registration's API permissions; you might need Application.ReadWrite.All or an Azure Key Vault permission.
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  name: keyVaultName
  location: location
  properties: {
    tenantId: azureTenantId
    accessPolicies: [
      {
        objectId: ownerObjectId
        tenantId: azureTenantId
        permissions: {
          secrets: secretsPermissions
        }
      }
    ]
    sku: {
      name: keyVaultSkuName
      family: 'A'
    }
  }
}

Install multiple VS Code extensions in CI/CD

My unit test launch looks like this. As you can see, I have used CLI options to install a VSIX my CI/CD has already produced, and then also tried to install ms-vscode-remote.remote-ssh because I want to re-run the tests on a remote workspace.
import * as path from 'path';
import * as fs from 'fs';
import { runTests } from '@vscode/test-electron';

async function main() {
    try {
        // The folder containing the Extension Manifest package.json
        // Passed to `--extensionDevelopmentPath`
        const extensionDevelopmentPath = path.resolve(__dirname, '../../');
        // The path to the extension test runner script
        // Passed to --extensionTestsPath
        const extensionTestsPath = path.resolve(__dirname, './suite/index');
        const vsixName = fs.readdirSync(extensionDevelopmentPath)
            .filter(p => path.extname(p) === ".vsix")
            .sort((a, b) => a < b ? 1 : a > b ? -1 : 0)[0];
        const launchArgsLocal = [
            path.resolve(__dirname, '../../src/test/test-docs'),
            "--install-extension",
            vsixName,
            "--install-extension",
            "ms-vscode-remote.remote-ssh"
        ];
        const SSH_HOST = process.argv[2];
        const SSH_WORKSPACE = process.argv[3];
        const launchArgsRemote = [
            "--folder-uri",
            `vscode-remote://ssh-remote+testuser@${SSH_HOST}${SSH_WORKSPACE}`
        ];
        // Download VS Code, unzip it and run the integration tests
        await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsLocal });
        await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsRemote });
    } catch (err) {
        console.error(err);
        console.error('Failed to run tests');
        process.exit(1);
    }
}

main();
runTests downloads and installs VS Code, and passes through the parameters I supply. For the local file system all the tests pass, so the extension from the VSIX is definitely installed.
But ms-vscode-remote.remote-ssh doesn't seem to be installed - I get this error:
Cannot get canonical URI because no extension is installed to resolve ssh-remote
and then the tests fail because there's no open workspace.
This may be related to the fact that CLI installation of multiple extensions repeats the --install-extension switch. I suspect the switch name is used as a hash key.
What to do? Well, I'm not committed to any particular course of action, just to platform independence. If I knew how to do a platform-independent headless CLI installation of VS Code (latest) in a GitHub Action, that would certainly do the trick. I could then use the CLI directly to install the extensions before the tests and pass the installation path, which would also require a unified way to get the path to VS Code.
Update 2022-07-20
Having figured out how to do a platform-independent headless CLI installation of VS Code (latest) in a GitHub Action, followed by installation of the required extensions, I face new problems.
The test framework options include a path to an existing installation of VS Code. According to the interface documentation, supplying this should cause the test to use the existing installation instead of installing VS Code; this is why I thought the above installation would solve my problems.
However, the option seems to be ignored.
My latest iteration uses an extension dependency on remote-ssh to install it. There's a new problem: how to get the correct version of my extension onto the remote host. By default the remote host uses the marketplace version, which obviously won't be the version we're trying to test.
I would first try with only one --install-extension option, just to check if any extension is installed.
I would also check if the same set of commands works locally (install VSCode and its remote SSH extension)
Testing it locally (with only one extension) also allows to check if that extension has any dependencies (like Remote SSH - Editing)
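One more avenue worth sketching: @vscode/test-electron exposes helpers to download VS Code and resolve its CLI, so extensions can be installed with one --install-extension call each before runTests is invoked against the same installation. This is a sketch, not the asker's setup; verify the helper names against the package version you use:

```typescript
import * as cp from 'child_process';
import {
    downloadAndUnzipVSCode,
    resolveCliArgsFromVSCodeExecutablePath,
    runTests
} from '@vscode/test-electron';

async function installAndRun(extensionDevelopmentPath: string, extensionTestsPath: string) {
    // Download VS Code once; reuse the same installation for the CLI and the tests
    const vscodeExecutablePath = await downloadAndUnzipVSCode();
    const [cliPath, ...cliArgs] = resolveCliArgsFromVSCodeExecutablePath(vscodeExecutablePath);

    // Install each extension via the downloaded build's CLI, one flag per extension
    cp.spawnSync(cliPath, [...cliArgs, '--install-extension', 'ms-vscode-remote.remote-ssh'],
        { encoding: 'utf-8', stdio: 'inherit' });

    // Point the test run at the installation that now has the extension
    await runTests({ vscodeExecutablePath, extensionDevelopmentPath, extensionTestsPath });
}
```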

How to add providers in Terraform AWS?

This is the error I'm getting:
Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mysql: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/mysql
For Terraform > 0.13 you need to add a required_providers block for any un-official provider (un-official meaning not owned by HashiCorp and not part of their registry). There was a MySQL provider supported by HashiCorp, but it has been discontinued (you could potentially use it if you downgrade to TF 0.12).
If you are aware of a community provided one a code snippet similar to the one for docker below should suffice:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}
where source gives the registry address of the provider (namespace/name).
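For MySQL specifically, one community fork of the discontinued provider is petoju/mysql; a sketch of how it would be declared (confirm the exact namespace and current version on registry.terraform.io before use):

```terraform
terraform {
  required_providers {
    mysql = {
      # community fork of the discontinued hashicorp/mysql provider;
      # verify namespace/version on the Terraform Registry
      source  = "petoju/mysql"
      version = "~> 3.0"
    }
  }
}

provider "mysql" {
  endpoint = "localhost:3306"
  username = "root"
}
```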

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology: there are few tutorials and few solutions to a lot of questions, and the tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card, and imported it. Then, to test that everything is OK, I have to run composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice@0.0.1'
Command failed
I think it has to do with the permissions.acl file, so I gave permission to everyone for everything so there would be no restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again; I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work: the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require creating another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (Yeah i switched names but Im totally aware)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed my permission rules to mine
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again: shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but it failed too. I'm running out of options.
I hope this description is not too long to ignore. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - check this once you get the correct ACLs in place (currently, that's the error you're trying to resolve).
The procedure for an upgrade would normally be as follows.
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to your permissions.acl to include the system ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
  description: "System ACL to permit all access"
  participant: "org.hyperledger.composer.system.Participant"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}

rule NetworkAdminUser {
  description: "Grant business network administrators full access to user resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "**"
  action: ALLOW
}

rule NetworkAdminSystem {
  description: "Grant business network administrators full access to system resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
Update the "version" field in the existing package.json in your business network project directory (i.e. increment it - e.g. update the version property from 0.0.1 to 0.0.2).
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code first:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network

TC7 (20939) : upgrade : mercurial : http auth : Test Connection Succeeds... but build checks fail (http auth)

Have been using EAP 7 for a couple of months, this is the 2nd upgrade.
Upgraded to build 20939 today and now get errors when builds try to check Mercurial for changes (VCS problem: FOO. Edit this VCS root >>). If I edit the VCS root and click Test Connection, it succeeds. How do I go about debugging this issue?
Have tried re-saving the vcs root. I deleted and recreated the vcs root on one project and get the same result.
The recent entries in the teamcity-vcs log don't have domain\user:password, should they?
I now have both the teamcity and buildagent services running under my AD account. I don't remember what account the teamcity service was using before the upgrade (is that logged somewhere?).
If the VCS root is configured with an 'https://' URL and has a user/password, why don't I see the credentials in the log message (see the post above)?
My user directory contains mercurial.ini / ssl cert (and was working pre-upgrade).
TeamCity hosted on Windows2k8, mercurial repo, using Active Directory credentials for authentication.
teamcity service is running as Local System
buildagent running as AD account (for builds that deploy to other machines)
newest errors:
[2012-01-11 17:12:39,578] WARN [cutor 4 {id=29}] - jetbrains.buildServer.VCS - Error while loading changes for root mercurial: https://mycompany.com/myproject {instance id=29, parent id=8}, cause: 'cmd /c hg pull https://mycompany.com/MyProject' command failed.
stderr: abort: http authorization required
older errors:
[2012-01-10 16:38:02,791] INFO [TeamCity Agent ] - jetbrains.buildServer.VCS - Patch applied for agent=computer {id=1, host=127.0.0.1:9090}, buildType=Project :: MVC3 {id=bt12}, root=mercurial: https://mycompany/myproject {instance id=12, parent id=1}, version=3775:7fc0ae5029e6
[2012-01-11 10:30:36,277] INFO [_Server_StartUp] - jetbrains.buildServer.VCS - Server-wide hg path is not set, will use path from the VCS root settings
The problem persisted after a complete uninstall/re-install.
In the VCS root definition I left the user/password fields blank and encoded the user:password into the 'Pull changes from' string (just like you'd do on the command line):
https://domain\user:password@hg.mycompany.com/Repo
To sort of clean up the plaintext password, I created a project-level property 'MyPassword' (type password) and used it in the connection string like this:
https://domain\user:%MyPassword%@hg.mycompany.com/Repo
Still not great, but I'm up and running and the password is not viewable by casual users.