Multiple DevOps REST methods throwing 'command failed with an unexpected error' - azure-cli2

I'm having trouble running DevOps APIs via 'az rest'. The two methods I attempted return the same result.
Here is the code snippet I'm working with:
$resource = "https://graph.microsoft.com"
$uribase = "https://dev.azure.com/{org}/{project}"
$requestpath = "/_apis/build/builds?api-version=6.0-preview.6"
$uri = $uribase+$requestpath
az rest --uri $uri --headers "Content-Type=application/json" --resource $resource
I've tried several different API versions down to 5.1, but the result is the same. If I remove the --resource parameter, it complains about not being able to determine the authentication. Is 'https://graph.microsoft.com' the right resource for this call?
The error returned
> az : Not a json response, outputting to stdout. For binary data suggest use "--output-file" to
write to a file
At line:5 char:1
+ az rest --uri $uri --headers "Content-Type=application/json" --resour ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Not a json resp...write to a file:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
The command failed with an unexpected error. Here is the traceback:
'charmap' codec can't encode character '\u221e' in position 6302: character maps to <undefined>
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-
packages\azure\cli\command_modules\util\custom.py", line 18, in rest_call
File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yh2ypeu1\requests\requests\models.py", line
897, in json
File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
....
I've written the output to a file. It's an HTML page related to sign in/sign out - nothing obvious from the text, but kind of a clue. As a result, I've been attempting to explicitly log in to DevOps (even though I am logged in via 'az login') like this:
az devops login --organization https://dev.azure.com/{org}
The login hangs every time, and I have to shut down PowerShell to recover. I've rebooted my machine, tried with and without the organization parameter, and tried re-logging in via 'az login' (which works fine), but I cannot get the devops login command to work.
I'm not sure if the login is the issue or if it's something else. Has anyone else seen a situation like this? Thoughts on what to try?

I've untangled most of this mess; hopefully someone else will benefit from it.
First, the hang. I never fixed this, so it may still be related to the workaround I discuss below.
Fix #1: Instead of passing --resource, I punted on that and pass an authorization token in the header. The "/_apis/build/builds?api-version=6.0-preview.6" request now works with this authorization.
Fix #2: I was working with two different REST requests: one retrieves a simple list of builds, the other a simple list of release definitions. The base URL is different for the release definition API requests; it requires 'vsrm.dev.azure.com' (I have no idea why).
$tokeninfo = az account get-access-token | convertfrom-json
$token = $tokeninfo.accessToken
$uribase = "https://vsrm.dev.azure.com/{org}/{project}"
$requestpath = "/_apis/release/definitions?api-version=6.0-preview.4"
$uri = $uribase+$requestpath
$authheader = "Authorization=Bearer " + $token
az rest --uri $uri --headers $authheader
$uribase = "https://dev.azure.com/{org}/{project}"
$requestpath = "/_apis/build/builds?api-version=6.0-preview.6"
$uri = $uribase+$requestpath
$authheader = "Authorization=Bearer " + $token
az rest --uri $uri --headers $authheader
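For anyone scripting this outside PowerShell, the same header-based workaround can be sketched in Python. This is only an illustration of the request shape; the org/project names and the token value are placeholders you would substitute (the token would come from 'az account get-access-token', as above):

```python
import urllib.request

def devops_request(org, project, path, token):
    """Build an authenticated Azure DevOps REST request.

    The Bearer header replaces the --resource flag, mirroring the
    PowerShell workaround above. org, project, and token are
    placeholder values.
    """
    base = f"https://dev.azure.com/{org}/{project}"
    return urllib.request.Request(
        base + path,
        headers={"Authorization": "Bearer " + token},
    )

# Build (but don't send) a request for the builds list.
req = devops_request(
    "myorg", "myproject",
    "/_apis/build/builds?api-version=6.0-preview.6",
    "PLACEHOLDER_TOKEN",
)
```

Sending it with urllib.request.urlopen(req) and parsing the JSON response is then straightforward.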
Hope this helps someone out

Related

Not able to update API Management instance using PowerShell

I'm trying to fetch the system certificates from an API Management instance, then remove a certificate and update the APIM instance.
The APIM instance is in internal VNET mode.
Set-AzContext -Subscription "xxxx"
$apimservice= Get-AzApiManagement -ResourceGroupName "xxxx" -Name "xxxx"
$apimservice.SystemCertificates
$apimservice.SystemCertificates.Clear()
Set-AzApiManagement -InputObject $apimservice
ERROR:
Set-AzApiManagement : 'SubnetResourceId' does not match expected pattern '^/subscriptions/[^/]*/resourceGroups/[^/]*/
providers/Microsoft.(ClassicNetwork|Network)/virtualNetworks/[^/]*/subnets/[^/]*$'.
At line:8 char:1
+ Set-AzApiManagement -InputObject $apimservice
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Set-AzApiManagement], ValidationException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ApiManagement.Commands.SetAzureApiManagement
Your script looks correct.
Since we don't have your exact configuration, we reproduced a minimal version of it, and it works on our end.
Could you please make sure you are running Connect-AzAccount before Set-AzContext?
Also try using the latest Az PowerShell module (8.0).
Set-AzApiManagement : 'SubnetResourceId' does not match expected
pattern '^/subscriptions/[^/]*/resourceGroups/[^/]*/
providers/Microsoft.(ClassicNetwork|Network)/virtualNetworks/[^/]*/subnets/[^/]*$'.
For the error:
Make sure that you have added your resource group exactly as it appears in the portal.
For more information, please refer to the links below:
Microsoft documentation | Set-AzApiManagement
MS Q&A for a similar error | resourceGroupName' does not match expected pattern'^[-\w._()]+[ & Getting invalid status code conflict while running New-AzRoleAssignment in PowerShell.

How to automatically upload a .json file to Firebase on a daily basis

Our company generates a .json file on a daily basis containing the data for our mobile app, which has its database on Firebase. We upload the data manually, but we've been doing it for a couple of months now and it is a pain in the butt.
Our supplier created an uploader that works with this command: gcsupload-windows.exe /key:"C:\Data\myapp-test-sdk.json" /bucket:"myapp-test.appspot.com" /dst:"Import" "C:\Data\json\*.json". They built it based on https://github.com/googleapis/google-cloud-go/tree/master/storage, but this is not my expertise, so I can only tell you what it does.
DevOps created a Windows Server Core machine for me and said it is enough, so I am reliant on the command line only...
When I run the command outside of the domain, it uploads the .json to the server, so I am sure the uploader and the command are correct and working properly. BUT when I am in the domain, it goes haywire and the command replies "Failed to get bucket metadata" and so on.
CMD Input: PS: D:\Uploader> .\gcsupload-windows.exe /key:"D:\Firebase_Key\myapp-test.json" /bucket:"myapp-test.appspot.com" /dst:"Import-Test" /src:"D:\myapp_json"
CMD Output: Failed to get bucket metadata: Get "https://storage.googleapis.com/storage/v1/b/myapp-test.appspot.com?alt=json&prettyPrint=false&projection=full" oauth2: cannot fetch token: Post "https://oautha.googleapis.com/token" dial tcp 216.58.201.74:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
The proxy is set correctly on my machine and the traffic is all accepted on the proxy server.
The supplier said it might be something with gRPC, but again, this is not my expertise, so please, wise stackoverflowers, ask me, shoot me, just please help me with this. Thank you.
For anyone still interested in this...
We stumbled across the solution (more like a workaround) by accident. I tried everything I could find on the internet. I had to add this
$proxyString = "http://proxy:8080"
$proxyUri = new-object System.Uri($proxyString)
[System.Net.WebRequest]::DefaultWebProxy = new-object System.Net.WebProxy ($proxyUri, $true)
[System.Net.WebRequest]::DefaultWebProxy.Credentials=[System.Net.CredentialCache]::DefaultCredentials
just to pick up the IE proxy settings, which kind of helped, but it was not enough. Some said to modify the uploader and give it credentials to authenticate against the proxy, but this looked unsafe.
I was helpless. I tried invoke-webrequest http://google.com just to be sure I was connecting through the proxy, tried the uploader command again, and voila - it worked!
It looks like invoke-webrequest somehow makes everything after it use the proxy; in any case, it works. So my whole script looks like this:
$proxyString = "http://proxy:8080"
$proxyUri = new-object System.Uri($proxyString)
[System.Net.WebRequest]::DefaultWebProxy = new-object System.Net.WebProxy ($proxyUri, $true)
[System.Net.WebRequest]::DefaultWebProxy.Credentials=[System.Net.CredentialCache]::DefaultCredentials
invoke-webrequest http://google.com
$Env:HTTP_PROXY = "http://proxy:8080"
D:\Uploader\gcsuploader.exe /key:"D:\Firebase_key\myapp-prod-firebase-adminsdk.json" /bucket:"myapp-prod.appspot.com" /dst:"Import" "D:\Myapp_json\*.json" >> "c:\Uploader_logs\uploader $(get-date -f yyyy-MM-dd).log" 2>&1

Download a file from Google Drive using google-api-ruby-client

I'm trying to download files from a directory on Google Drive.
The code, mostly copied from the official quickstart guide, works fine:
# ... code from official quickstart tutorial...
# Initialize the API
service = Google::Apis::DriveV3::DriveService.new
service.client_options.application_name = APPLICATION_NAME
service.authorization = authorize
# now the real deal
response = service.list_files(q: "'0ByJUN4GMe_2jODVJVVpmRE1VTDg' in parents and trashed != true",
                              page_size: 100,
                              fields: 'nextPageToken, files(id, name)')
puts 'Files:'
puts 'No files found' if response.files.empty?
response.files.each do |file|
  puts "#{file.name} (#{file.id})"
  # content = service.get_file(file.id, download_dest: StringIO.new)
end
The output looks fine:
Files:
k.h264 (0B4D93ILRdf51Sk40UzBoYmZKMTQ)
output_file.h264 (0B4D93ILRdf51V1RGUDFIWFQ5cG8)
test.mp4 (0B4D93ILRdf51dWpoZWdBV3l4WjQ)
test2.mp4 (0B4D93ILRdf51ZmN4ZGlwZjBvR2M)
test3.mp4 (0B4D93ILRdf51ZXo0WnVfdVBjTlk)
12.mp4 (0ByJUN4GMe_2jRzEwS1FWTnVkX00)
01.mp4 (0ByJUN4GMe_2jSlRWVEw4a1gxa2s)
01.mp4 (0ByJUN4GMe_2jOFpPMW9YNjJuY2M)
But once I uncomment content = service.get_file(file.id, download_dest: StringIO.new), I get a lot of errors:
Files:
k.h264 (0B4D93ILRdf51Sk40UzBoYmZKMTQ)
/Users/mvasin/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/google-api-client-0.9.15/lib/google/apis/core/http_command.rb:211:in `check_status': Invalid request (Google::Apis::ClientError)
[...lots of other 'from' stuff...]
from /Users/mvasin/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/google-api-client-0.9.15/generated/google/apis/drive_v3/service.rb:772:in `get_file'
from quickstart.rb:56:in `block in <main>'
from quickstart.rb:54:in `each'
from quickstart.rb:54:in `<main>'
But that's the way it should work according to the Ruby section of the official "download files" tutorial.
I also tried content = service.get_file(file.id, download_dest: "/tmp/#{file.name}"), but it failed with the same set of errors.
UPDATE: Here are my findings. If you start with the Google Drive API Ruby quickstart tutorial and want to make it download something:
1) Change the scope so your script can read not just metadata but file contents as well: change
SCOPE = Google::Apis::DriveV3::AUTH_DRIVE_METADATA_READONLY
to at least
SCOPE = Google::Apis::DriveV3::AUTH_DRIVE_READONLY
2) Filter out Google Docs files, because you can't download them this way - you have to convert them. To filter them:
2.1) Add mime_type to the fields set:
response = service.list_files(page_size: 10, fields: 'nextPageToken, files(id, name, mime_type)')
2.2) and in the final loop where you print files' ids and names, put something like
service.get_file(file.id, download_dest: "/your/path/#{file.name}") unless file.mime_type.include? "application/vnd.google-apps"
The error you got says that your request is invalid, so make sure your request is correct. Here is the documentation on how to download files using Ruby (just click Ruby in the example to view the Ruby code).
Take NOTE: Downloading the file requires the user to have at least read access. Additionally, your app must be authorized with a scope that allows reading of file content.
For more information, check these threads:
How to download file from google drive api
A Ruby library to read/write files

Box server file upload error in a Qt application using the new OAuth2 API

I have been working on a Box App using the API v2 for the past few days and have successfully authenticated using OAuth2.
My app retrieves the access token successfully, and I'm also able to access my Box account using it; however, an upload of a file fails with a response of 299.
The html response I see from Box after posting an upload request has the following message
"Sorry, we can't access that page."
Your Box account may be temporarily unavailable. We're working on resolving the issue and should be back up soon."
I take it all 2xx errors mean that the request has been accepted but the Box server cannot handle it.
Given below is a snippet of my code used to post the file.
Any tips on what could be wrong are appreciated.
I am following instructions from
http://developers.box.com/get-started/#uploading-and-downloading
QUrl requrl = QUrl("https://www.box.com/api/2.0/files/content");
std::string token = m_acc_token;
QString hdrval = "Bearer "+QString(token.c_str());
QNetworkRequest qnr(requrl);
qnr.setRawHeader("Authorization",hdrval.toUtf8());
QString boundary;
boundary = "---------7d935033608e2";
QByteArray data;
data.append("file=#btest.txt");
data.append(boundary);
data.append("folder_id=0");
data.append(boundary);
qnr.setHeader(QNetworkRequest::ContentTypeHeader,"multipart/form-data; boundary=---------7d935033608e2");
qnr.setHeader(QNetworkRequest::ContentLengthHeader,data.size());
QNetworkReply* areply = NULL;
areply = m_networkManager->post(qnr,data);
You can implement it like this:
QHttpMultiPart *multiPart = new QHttpMultiPart(QHttpMultiPart::FormDataType);
QHttpPart headerPart;
headerPart.setHeader(QNetworkRequest::ContentDispositionHeader, QVariant("form-data; name=\"parent_id\" \" "));
headerPart.setBody(QString(aParentFolderId).toLatin1());
QHttpPart textPartData;
textPartData.setHeader(QNetworkRequest::ContentDispositionHeader, QVariant("form-data; filename=\"filename\" \" "));
textPartData.setBodyDevice(&File); //file must be open.
File.setParent(multiPart);
multiPart->append(headerPart);
multiPart->append(textPartData);
QNetworkRequest networkReq;
networkReq.setUrl(QUrl("https://upload.box.com/api/2.0/files/content"));
networkReq.setRawHeader("Authorization", "Bearer " + AccessToken.toLatin1());
networkReply = mNetworkAccessManager.post(networkReq, multiPart);
multiPart->setParent(networkReply);
The curl call in the Box API documentation can't be translated directly to code as you have done. The file=#btest.txt line on the command line puts the contents of the file btest.txt as the value of the parameter file.
Additionally, your multipart boundaries are malformed: they must end in \r\n, one must be present at the start of the multipart body, and a final boundary with a slightly different format must be present at the end. If you are interested in manually implementing multipart form data, I'd recommend reading RFC 1867.
The Box API will return a 500 response if it is sent a malformed multipart POST body.
I'd recommend using QHttpMultiPart, which is part of the Qt framework, for multipart form uploads.
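To illustrate the boundary rules, here is a minimal sketch in Python that assembles a well-formed multipart/form-data body by hand. It is only an illustration of the framing (leading "--" on each boundary, CRLF line endings, a closing boundary with a trailing "--"); the part names here follow the asker's example, and a real Box upload would still need the part names the API expects:

```python
def multipart_body(boundary, fields, file_name, file_bytes):
    """Assemble a multipart/form-data body by hand.

    Each boundary line in the body is the header's boundary string
    prefixed with "--"; lines are separated by CRLF; and the closing
    boundary carries an extra trailing "--".
    """
    crlf = b"\r\n"
    b = boundary.encode()
    parts = []
    # One part per simple form field.
    for name, value in fields.items():
        parts += [b"--" + b,
                  f'Content-Disposition: form-data; name="{name}"'.encode(),
                  b"",          # blank line separates part headers from body
                  value.encode()]
    # The file part: the file *contents* go in the body, not "file=#name".
    parts += [b"--" + b,
              ('Content-Disposition: form-data; name="file"; '
               f'filename="{file_name}"').encode(),
              b"",
              file_bytes]
    # Final boundary ends with "--".
    parts += [b"--" + b + b"--", b""]
    return crlf.join(parts)

body = multipart_body("7d935033608e2", {"parent_id": "0"},
                      "btest.txt", b"hello")
```

The matching Content-Type header would then be "multipart/form-data; boundary=7d935033608e2". QHttpMultiPart does all of this for you, which is why it is the better choice.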

"Error generating the Discovery document for this api" when trying to build a drive service, starting 2/14/2013

I am intermittently getting this error when calling build on a drive service. I am able to reproduce this with a simple program which has the JSON credentials stored to a file.
#!/usr/bin/python
import httplib2
import sys
from apiclient.discovery import build
from oauth2client.client import Credentials
json_creds = open('creds.txt', 'r').read()
creds = Credentials.new_from_json(json_creds)
http = httplib2.Http()
http = creds.authorize(http)
try:
    drive_service = build('drive', 'v2', http=http)
except Exception:
    sys.exit(-1)
When I run this in a loop, I see a rather high number of errors; this code fails 15-25% of the time for me.
i=0; while [ $i -lt 100 ]; do python jsoncred.py || echo FAIL ; i=$(( $i + 1 )); done | grep FAIL | wc -l
Now when I take this same code and just replace 'drive' with 'oauth2', the code runs without problems.
I have confirmed that the OAuth token I am using is valid and has the correct scopes:
"expires_in": 2258,
"scope": "https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/userinfo.email",
Looking at my app logs, this seems to have started 2/14/2013 at 1PM PST. I did not push any new code, so I wonder if this is a problem with the API. Is there a bug in the API causing this?
Google is seeing some reports of increased error rates for the discovery document. Please just retry on 500 errors for now, and you should be successful.
One could argue that you should have retry logic for this call anyway, since it is good practice, but the current error levels are too high, so, sorry about that.
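A minimal retry wrapper around the build() call could look like the sketch below. This is an illustration only: it retries on any exception with exponential backoff plus jitter, whereas production code should inspect the raised error and retry only on HTTP 5xx responses:

```python
import random
import time

def retry_on_server_error(fn, attempts=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff plus jitter.

    A real implementation should inspect the raised error and retry
    only on HTTP 5xx; this sketch retries on any exception.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the last error
            # Back off: base, 2*base, 4*base, ... with up to 2x jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Hypothetical usage with the question's code:
# drive_service = retry_on_server_error(
#     lambda: build('drive', 'v2', http=http))
```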
Update: this should now be fixed.