How to find the remaining balance for an Azure subscription? (Remaining = Credits - Usage) - azure-billing-api

I am able to consume the Azure REST APIs - RateCard & Usage.
Using a combination of the above APIs, I am able to calculate the cost of consumption (usage).
To find the credits, it was mentioned here that:
first, you would need to find the total credits available. This information can be fetched via the Rate Card API. It will be available under the OfferTerms element in the response.
However, I received the following response from the RateCard API:
"OfferTerms": [
{
"ExcludedMeterIds": [],
"Name": "Monetary Credit"
}
],
Hence, my question is: how can I find the credit transactions for the subscription in order to calculate the remaining balance?
Thanks.

I tried to replicate the scenario as per the documentation available here (I'd recommend checking it). The script still includes the below info:
<add key="ADALRedirectURL" value="https://localhost/"/>
<add key="TenantDomain" value="ENTER.AAD.TENANT.DNS.NAME"/>
<add key="SubscriptionID" value="00000000-0000-0000-0000-000000000000"/>
<add key="ClientId" value="00000000-0000-0000-0000-000000000000"/>
Have you double-checked that all of the IDs above are correct?
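For reference, the usage-cost half of the calculation (RateCard joined with Usage, which the question says is already working) can be sketched roughly as follows. getJson is an assumed helper that performs an authenticated GET with a Bearer token and parses the JSON; the offer ID, api-versions and property casing should be checked against your own subscription:
// Hedged sketch: joins Usage Aggregates with RateCard meter rates to estimate the consumed cost.
// getJson(url) is an assumed helper (authenticated GET + JSON parse), not an Azure SDK call.
function estimateUsageCost(subscriptionId, reportedStartTime, reportedEndTime) {
  var base = 'https://management.azure.com/subscriptions/' + subscriptionId +
    '/providers/Microsoft.Commerce';

  // Use your own OfferDurableId, currency, locale and region here.
  var filter = "OfferDurableId eq 'MS-AZR-0003P' and Currency eq 'USD'" +
    " and Locale eq 'en-US' and RegionInfo eq 'US'";
  var rateCard = getJson(base + '/RateCard?api-version=2016-08-31-preview' +
    '&$filter=' + encodeURIComponent(filter));
  var usage = getJson(base + '/UsageAggregates?api-version=2015-06-25-preview' +
    '&reportedStartTime=' + encodeURIComponent(reportedStartTime) +
    '&reportedEndTime=' + encodeURIComponent(reportedEndTime));

  // Index the base-tier rate ("0") by MeterId.
  var ratesByMeter = {};
  rateCard.Meters.forEach(function (m) { ratesByMeter[m.MeterId] = m.MeterRates['0']; });

  // Sum quantity * rate over every usage record. The remaining balance would then be
  // (monetary credit) - (this figure), once the credit amount itself can be obtained,
  // which is exactly the open question above.
  return usage.value.reduce(function (total, item) {
    var rate = ratesByMeter[item.properties.meterId] || 0;
    return total + item.properties.quantity * rate;
  }, 0);
}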

Related

How to invoke Time Sheet Invoicing Upload SAP Fieldglass REST API call?

I'm looking into how to use the Time Sheet Invoicing Upload, and the first port of call was the Try It Out page.
The documentation lists the value for the mandatory "Type" field as TIMESHEET INVOICING, but this seems at odds with other calls (it's usually just the call name, e.g. Time Sheet Invoicing Upload). I have tried these values and multiple other variants on the "Try It Out" page, but all have failed so far with "The Type value specified in this file is not recognized".
I'd be grateful for any pointers on how to get this working and/or advice on whether the SAP Fieldglass REST API documentation for this call might need to be amended.
As an aside, I'm also wondering about some of the fields listed in the body - e.g. TIMESHEET ID and ORIGINAL TIMESHEET ID are in block capitals, which doesn't follow the convention of other fields, and the API reference for this call just has "data": [ {} ] in the body with no actual fields present - again, at odds with other calls.
Re: Main question - The documentation is incorrect: the Type value should be "Time Sheet Invoicing Upload". I also found out that this particular call can only be made by a Supplier tenant, not a Buyer tenant. In our case, we needed to ask SAP to enable Configuration Manager for that tenant; we could then log in as the Supplier, change to the linked Configuration Manager account, create the API Application Key and License Key, enable the integration connector, and use all of the above to authenticate as the Supplier and make the API call. The call also requires a Buyer field in the header (set to the 4-digit Buyer code, e.g. "A123") - this isn't mentioned in the documentation either.
Re: Aside - It turns out the API is case-insensitive for field names - e.g. "Timesheet ID" works just as well as "TIMESHEET ID".
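To make the working setup concrete, here is a rough sketch of the request shape implied by the answer above. The token is a placeholder and the endpoint URL is deliberately omitted because it is tenant-specific; only the Type value, the Buyer header and the case-insensitive field names come from the answer itself:
// Hedged illustration only - not taken from the Fieldglass documentation.
var timeSheetInvoicingUpload = {
  headers: {
    'Authorization': 'Bearer <supplier access token>', // authenticate as the Supplier tenant
    'Buyer': 'A123',                                   // 4-digit Buyer code - required but undocumented
    'Content-Type': 'application/json'
  },
  body: {
    'Type': 'Time Sheet Invoicing Upload',             // not "TIMESHEET INVOICING" as the docs state
    'data': [ { /* field names are case-insensitive, e.g. "Timesheet ID" */ } ]
  }
};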

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see per getData() call.
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the request is for semantic type detection, it will include sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
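If those semantic-detection calls account for some of the extra requests, one mitigation is to answer them from a tiny canned sample instead of hitting the API. A sketch (getCannedSample and getRealData are hypothetical helpers; the flag location follows the request structure described in the docs quoted above):
function getData(request) {
  // Semantic type detection: return one hard-coded sample row and skip the real API.
  if (request.scriptParams && request.scriptParams.sampleExtraction) {
    return getCannedSample(request);
  }
  return getRealData(request); // normal path: URL fetches, caching, backoff, etc.
}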
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom connector retrieves data from third-party services via API calls that are agnostic to the request.fields property sent along by GDS, then those API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's getData request (typically asking for more fields than the search filters) will be the only one allowed to query the API endpoint. Before it starts doing so, it stores a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once it has finished, it will delete the cache key "cache_{hashOfReportParameters}_building" and cache the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
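A sketch of that finalisation step, reusing the cache wrapper from the snippets above (bufferedResults stands for whatever structure you accumulated while paginating; remove() is what the native CacheService offers, so substitute your wrapper's delete method if it differs):
if (enableCache) {
  // Publish the merged results for the filter/widget getData calls to reuse...
  cache.putString("cache_{hashOfReportParameters}_final", JSON.stringify(bufferedResults));
  // ...then clear the "building" flag so the waiting calls can proceed.
  cache.remove("cache_{hashOfReportParameters}_building");
  Logger.log("Cache build finished.");
}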
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for requests that look like search filters / widgets going after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
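Concretely, a sketch of that hashing step using Apps Script's digest utilities (reportParameters stands for whatever object you assemble from the date range, configParams and anything else that matters):
// Build a stable key from everything that influences the API responses.
var reportParameters = {
  dateRange: request.dateRange,
  configParams: request.configParams
  // ...plus anything else that changes what your endpoints return
};
var digest = Utilities.computeDigest(
  Utilities.DigestAlgorithm.MD5,
  JSON.stringify(reportParameters)
);
// Convert the signed byte array into a hex string usable inside cache key names.
var hashOfReportParameters = digest.map(function (b) {
  var v = b < 0 ? b + 256 : b;
  return ('0' + v.toString(16)).slice(-2);
}).join('');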
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" - and in case we fail, it's always a good idea to have a backup plan, which is to allow the call to traverse the API again. We have encountered a ~2% error rate when retrieving data we had cached...
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
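Pulling those pieces together, the tail end of getData might look roughly like this sketch (fetchAllFromApi and buildRows are hypothetical helpers for the API traversal and row mapping):
// Try the merged result cached by the primary getData call first.
var cached = enableCache ? cache.getString("cache_{hashOfReportParameters}_final") : null;
var merged;
if (cached) {
  merged = JSON.parse(cached);
} else {
  // Backup plan: the cache read failed (roughly 2% of the time for us), so hit the API again.
  merged = fetchAllFromApi(request);
}
// Shape the data according to the fields this particular graph or filter asked for.
return {
  schema: requestedFields.build(),
  rows: buildRows(merged, requestedFields)
};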
As you start implementing this, you'll notice yet another problem: Google's cache is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache... and fortunately others have encountered similar needs in the past and came up with a smart solution: splitting the big chunk you need cached into multiple cache keys, and gluing them back together into one object when retrieving it.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
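Reduced to a sketch, the chunking idea looks like this (sizes and key naming are illustrative; put/get are the native CacheService methods):
var CHUNK_SIZE = 90 * 1024; // stay safely under the ~100KB per-value limit

// Split a large string across several cache keys.
function putLargeString(cache, key, value, ttlSeconds) {
  var chunks = Math.ceil(value.length / CHUNK_SIZE);
  cache.put(key + '_chunks', String(chunks), ttlSeconds);
  for (var i = 0; i < chunks; i++) {
    cache.put(key + '_' + i, value.substr(i * CHUNK_SIZE, CHUNK_SIZE), ttlSeconds);
  }
}

// Glue the chunks back together; returns null if any piece has expired.
function getLargeString(cache, key) {
  var chunks = Number(cache.get(key + '_chunks'));
  if (!chunks) return null;
  var parts = [];
  for (var i = 0; i < chunks; i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null;
    parts.push(part);
  }
  return parts.join('');
}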
I cannot share the final solution we implemented as it is too specific to a client, but I hope this at least gives you a good idea of how to approach the problem.
Caching the full API result is a good idea in general, to avoid unnecessary round trips and server load, as long as near-realtime data is good enough for your needs.

Get rates for all services in one request

Question
Is it possible to get rates for all possible ups services in the same request?
Background
Although the UPS rates documentation states that the service element is optional, requests with the service element defined respond successfully, while requests without the element defined result in the following error:
["Error"]=>
array(3) {
["ErrorSeverity"]=>
string(4) "Hard"
["ErrorCode"]=>
string(6) "111100"
["ErrorDescription"]=>
string(58) "The requested service is invalid from the selected origin."
}
Additionally, every example and library I've seen either only creates requests for one type of service or creates a request for each service the user specifies they want to receive:
// optional, you can specify which rates to look for -- performs multiple requests, so be careful not to do too many
In Summary
Is there a way to return rates for all services from UPS that I am missing, or must we query UPS for each service we wish to get a rate for?
You should be able to receive rates for multiple services by setting the /RateRequest/Request/RequestOption to Shop and omitting the /RateRequest/Shipment/Service element.
This is outlined in UPS's documentation for the Rate Webservice endpoints:
Can a customer compare services for a shipment using the Rating API?
Yes. Use the “Shop” value, instead of the “Rate” value, in the RequestOption element of the ../Request container to retrieve the rates for all services for the stated lane pair. The API response will return a rate for each of the available services. This is known as the Shop option.
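As a rough sketch (if you are using the JSON flavour of the Rating API), the essential parts are RequestOption set to "Shop" and the absence of a Shipment/Service element; the address and package details below are placeholders:
// Rough sketch of a Shop-mode rate request body (placeholder shipment details).
var rateRequest = {
  RateRequest: {
    Request: {
      RequestOption: 'Shop' // "Shop" rates every available service; "Rate" requires a Service element
    },
    Shipment: {
      Shipper:  { Address: { PostalCode: '10001', CountryCode: 'US' } },
      ShipTo:   { Address: { PostalCode: '94105', CountryCode: 'US' } },
      ShipFrom: { Address: { PostalCode: '10001', CountryCode: 'US' } },
      // No Shipment.Service element here - that is what restricts the response to one service.
      Package: [{
        PackagingType: { Code: '02' }, // customer-supplied package
        PackageWeight: { UnitOfMeasurement: { Code: 'LBS' }, Weight: '5' }
      }]
    }
  }
};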

How to get more than 1 stock information per call on Google Financials?

I'm using Google Apps Script and Google Financials to get information for a list of stocks I have in a text file. The problem is that the FinanceApp class only seems to be able to get one stock at a time, and since I have to do this for more than 250 stocks I reach the maximum call limit.
Is there a better way to do this?
Since there are limitations and you are making repeated tests, I suggest using a cache: you can then repeat the tests without hitting the limit (assuming you always request the same data for the same date, i.e. using StockInfoSnapshot).
You do that by wrapping FinanceApp.getHistoricalStockInfo() so that it serves results from the cache if possible, or adds to the cache if the info is not yet available.
The cache could conveniently reside in the "script-related storage": https://developers.google.com/apps-script/script_user_properties
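A sketch of such a wrapper (the parameter order mirrors getHistoricalStockInfo as I recall it, and it assumes the returned snapshot survives a JSON round trip and fits within the script-property size limit - verify both for your data):
// Cache-backed wrapper around FinanceApp.getHistoricalStockInfo().
function getHistoricalStockInfoCached(symbol, startDate, endDate, frequency) {
  var key = [symbol, startDate, endDate, frequency].join('|');
  var cached = ScriptProperties.getProperty(key);
  if (cached) {
    return JSON.parse(cached); // served from script storage - no FinanceApp call, no quota hit
  }
  var info = FinanceApp.getHistoricalStockInfo(symbol, startDate, endDate, frequency);
  ScriptProperties.setProperty(key, JSON.stringify(info));
  return info;
}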
Good luck !

Tweet counter for identi.ca

Is there a way to retrieve the number of times a certain URL was "dented" (shared on identi.ca, status.net and/or the like)?
For twitter there are several services that give this information.
Twitter itself: http://urls.api.twitter.com/1/urls/count.json?url=http://example.com&callback=twttr.receiveCount
Tweetmeme: http://api.tweetmeme.com/url_info.jsonc?url=http://example.com
Topsy: http://otter.topsy.com/stats.js?url=http://example.com&callback=?
I don't need the fancy extra information that Tweetmeme or Topsy deliver, only the amount.
I am aware that this is problematic, given the "distributed" nature of status.net: it will only give a count from one single silo, e.g. identi.ca. However, for me, for now, that would be enough.
Is there such an endpoint that gives me such JSON?
I don't think so. There's a file table in StatusNet databases that holds references to dented URLs (so it wouldn't be hard to count them if you had access to the database or could write a plugin - i.e., you wouldn't have to parse all notices, just look up the file table), but it's not exposed through the API.
The list of API possible calls for StatusNet is here: http://status.net/wiki/TwitterCompatibleAPI
In addition, there's a proposed Google Summer of Code project on this subject: Social Analytics plugin