I'm looking into how to use the Time Sheet Invoicing Upload and first port of call was the Try It Out page.
The documentation lists the value for the mandatory "Type" field as TIMESHEET INVOICING, but this seems at odds with other calls (it's usually just the call name, e.g. Time Sheet Invoicing Upload). I've tried these values and multiple other variants on the "Try It Out" page, but all have failed so far with "The Type value specified in this file is not recognized".
Grateful for any pointers on how to get this working and/or advice on whether the SAP Fieldglass REST API documentation for this call might need to be amended.
As an aside - I'm also wondering about some of the fields listed in the body. For example, TIMESHEET ID and ORIGINAL TIMESHEET ID are in block capitals, which doesn't follow the convention of other fields, and the API reference for this call just has "data": [ {} ] in the body with no actual fields present; again, this is at odds with other calls.
Re: Main question - The documentation is incorrect: the Type value should be "Time Sheet Invoicing Upload". I also found out that this particular call can only be made by a Supplier tenant, not a Buyer tenant. In our case we needed to ask SAP to enable Configuration Manager for that tenant; we could then log in as the Supplier, switch to the linked Configuration Manager account, create the API Application Key and License Key, enable the integration connector, and use all of the above to authenticate as the Supplier and make the API call. The call also requires a Buyer field in the header, set to the 4-digit Buyer code (e.g. "A123"); this isn't mentioned in the documentation either.
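For anyone else hitting this, here is a rough sketch of the shape of the call that worked for us, in Python. Apart from the Type value and the Buyer header, everything here (host, path, header names, auth details) is an illustrative placeholder that will depend on your tenant setup, so treat it as a sketch rather than a reference:

import requests

# Placeholders throughout; substitute the values from your own tenant setup.
url = "https://YOUR_FIELDGLASS_HOST/YOUR_UPLOAD_PATH"  # hypothetical endpoint
headers = {
    "Authorization": "Bearer YOUR_SUPPLIER_ACCESS_TOKEN",  # from the Supplier auth flow
    "Buyer": "A123",  # 4-digit Buyer code; required but not documented
}
payload = {
    "Type": "Time Sheet Invoicing Upload",  # not "TIMESHEET INVOICING" as documented
    "data": [
        {"Timesheet ID": "TS-000123"},  # field names are case insensitive
    ],
}
response = requests.post(url, headers=headers, json=payload)
print(response.status_code, response.text)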
Re: Aside - Turns out the API is case insensitive for field names - e.g. "Timesheet ID" will work just as well as "TIMESHEET ID".
Is it possible to trigger an HTTP cloud function in response to a pubsub message?
When editing a subscription, Google makes it possible to push the message to an HTTPS endpoint, but for abuse-prevention reasons you have to prove that you own the domain in order to do this, and of course you can't prove that you own Google's own *.cloudfunctions.net domain, which is where the functions get deployed.
The particular topic I'm trying to subscribe to is a public one, projects/pubsub-public-data/topics/taxirides-realtime. The answer might be to use a background function rather than an HTTP-triggered one, but that doesn't work for different reasons:
gcloud functions deploy echo --trigger-resource projects/pubsub-public-data/topics/taxirides-realtime --trigger-event google.pubsub.topic.publish
ERROR: gcloud crashed (ArgumentTypeError): Invalid value 'projects/pubsub-public-data/topics/taxirides-realtime': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long.
This seems to indicate this is only permitted on topics I own, which is a strange limitation.
It is possible to push messages from a Pub/Sub topic to a Cloud Function. I was looking for a way to publish messages from a topic in project A to a function in project B. This was not possible with a regular topic trigger, but it is possible with an HTTP trigger. Overall steps to follow:
1. Create an HTTP-triggered function in project B.
2. Create a topic in project A.
3. Create a push subscription on that topic in project A.
4. Domain verification
Push subscription
Here we have to fill in three things: the endpoint, the audience and the service account under which the function runs.
Push Endpoint: https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME/ (slash at end)
Audience: https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME (no slash at end)
Service Account: Choose a service account under which you want to send the actual message. Be sure the service account has the roles/cloudfunctions.invoker role on the Cloud Function that you are sending the messages to. Since November 2019, HTTP-triggered functions are secured by default because allUsers is no longer set automatically. Do not set this property unless you want your HTTP function to be public!
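If you prefer the CLI to the console, recent gcloud versions can create the same push subscription in one command (subscription, topic, and account names are placeholders):

gcloud pubsub subscriptions create MY_PUSH_SUB \
    --topic=MY_TOPIC \
    --push-endpoint=https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME/ \
    --push-auth-service-account=SERVICE_ACCOUNT_EMAIL \
    --push-auth-token-audience=https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME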
Domain verification
Now you probably can't save your subscription because of an error, that is because the endpoint is not validated by Google. Therefore you need to whitelist the function URL at: https://console.cloud.google.com/apis/credentials/domainverification?project=PROJECT_NAME.
Following this step will also bring you to the Google Search Console, where you also need to verify that you own the endpoint. Sadly, at the time of writing, this process cannot be automated.
Next we need to add something along the lines of the following (Python example) to your cloud function to allow Google to verify the function:
if request.method == 'GET':
    # Respond to Google's ownership check with the site-verification meta tag
    return '''
        <html>
            <head>
                <meta name="google-site-verification" content="{token}" />
            </head>
            <body>
            </body>
        </html>
    '''.format(token=config.SITE_VERIFICATION_CODE)
Et voila! This should be working now.
Sources:
https://github.com/googleapis/nodejs-pubsub/issues/118#issuecomment-379823198
https://cloud.google.com/functions/docs/calling/http
Currently, Cloud Functions does not allow one to create a function that receives messages for a topic in a different project. Therefore, specifying the full path including "projects/pubsub-public-data" does not work. The gcloud command to deploy a Cloud Function for a topic expects the topic name only (and not the full resource path). Since the full resource path contains the "/" character, it is not a valid specification and results in the error you see.
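In other words, if the topic lived in your own project, the deploy command would take just the short name, e.g.:

gcloud functions deploy echo --trigger-resource taxirides-realtime --trigger-event google.pubsub.topic.publish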
The error you are getting suggests that you are misspelling something in the gcloud command you are issuing:
ERROR: gcloud crashed (ArgumentTypeError): Invalid value 'projects/pubsub-public-data/topics/taxirides-realtime': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long
Are you putting a newline character in the middle of the command?
I'm new to NATS and have read all the examples at:
https://nats.io/documentation/concepts/nats-messaging/
I'm working in a microservice architecture where microservice Y (MSY) needs to store some information published by another microservice, X (MSX). I have 2-10 instances of MSY, so when changes are made in MSX and an MSX instance publishes an event, I want only one instance of MSY to save the information, so that they don't all save the same data.
I have read about Request-Reply:
https://nats.io/documentation/concepts/nats-req-rep/
but it seems that all instances receive the message (and will handle it), even though it is point-to-point and the reply is taken only from the one instance that is quickest to respond.
Is this correct, or have I misunderstood the example?
If I need only one instance of MSY to handle a given message (store the data in the DB), what can I do to achieve this?
Use queue groups. If you have multiple subscriptions on the same subject with the same queue group, only one of the members of the group will receive the message.
Check this out: https://nats.io/documentation/concepts/nats-queueing/
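For example, with the Python client (nats-py), every MSY instance would subscribe like this; the subject and queue group names are arbitrary:

import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handle(msg):
        # Only one member of the "msy-workers" group receives each message
        print("Storing:", msg.data.decode())

    # All MSY instances subscribe with the same queue group name
    await nc.subscribe("msx.events", queue="msy-workers", cb=handle)
    await asyncio.Event().wait()  # keep the subscriber alive

asyncio.run(main())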
Question
Is it possible to get rates for all possible ups services in the same request?
Background
Although the UPS rates documentation states that the service element is optional, requests with the service element defined respond successfully, while requests without it defined result in the following error:
["Error"]=>
array(3) {
["ErrorSeverity"]=>
string(4) "Hard"
["ErrorCode"]=>
string(6) "111100"
["ErrorDescription"]=>
string(58) "The requested service is invalid from the selected origin."
}
Additionally, every example and library I've seen either only creates requests for one type of service or creates a separate request for each service the user specifies they want to receive:
// optional, you can specify which rates to look for -- performs multiple requests, so be careful not to do too many
In Summary
Is there a way to return rates for all services from UPS that I am missing, or must we query UPS for each service we wish to get a rate for?
You should be able to receive rates for multiple services by setting the /RateRequest/Request/RequestOption to Shop and omitting the /RateRequest/Shipment/Service element.
This is outlined in UPS's documentation for the Rate Webservice endpoints:
Can a customer compare services for a shipment using the Rating API?
Yes. Use the “Shop” value, instead of the “Rate” value, in the RequestOption element of the ../Request container to retrieve the rates for all services for the stated lane pair. The API response will return a rate for each of the available services. This is known as the Shop option.
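To illustrate, a Shop request body might be shaped like the Python sketch below. All values are placeholders and the exact schema should be checked against the UPS Rating API docs:

# Note the RequestOption of "Shop" and the absence of a Service element.
rate_request = {
    "RateRequest": {
        "Request": {"RequestOption": "Shop"},
        "Shipment": {
            "Shipper": {"Address": {"PostalCode": "10001", "CountryCode": "US"}},
            "ShipTo": {"Address": {"PostalCode": "94105", "CountryCode": "US"}},
            "Package": {
                "PackagingType": {"Code": "02"},
                "PackageWeight": {
                    "UnitOfMeasurement": {"Code": "LBS"},
                    "Weight": "5",
                },
            },
        },
    }
}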
I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand the correct way of storing files into the storage.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows two ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The first version confuses me: when I first looked at the only Node.js module that works with Object Storage (repo: fiware-object-storage), it seemed to use the first version. Since the module was making calls to the old (v1.1) API version instead of the presumably newest (v2.0) referenced in the Python example, I'm not sure whether that way of doing it is outdated or not.
As I played more with the module, I realised it didn't work and the code for it was a total mess. So I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage, which I sent to fiware-lab-help#lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Do I need to worry about the auth token expiring? I presume it isn't necessary to authenticate anew every time we interact with the storage; authentication should happen once when the server starts up (when we create an instance), and the token is kept internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant ID change? From the quote below I presume that getting a tenant is just a one-time deal; later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section describes how to get a valid token assuming an identity management system compatible with OpenStack Keystone is being used. If the username, password and tenant details are known, only step 3 is required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
(1) The token will be valid for some period of time. This could be an hour or a day, depending on the setup. This period of time should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed (a sketch of one way to do this follows this list).
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
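Regarding (1), a simple refresh-on-expiry wrapper is usually enough. A minimal sketch, assuming a hypothetical get_auth_token() helper that returns the token together with its expiry timestamp:

import time

class TokenCache:
    """Caches an auth token and re-authenticates shortly before expiry."""

    def __init__(self, margin_seconds=60):
        self.token = None
        self.expires_at = 0.0
        self.margin = margin_seconds

    def get(self):
        # Re-authenticate only when the token is missing or close to expiry
        if self.token is None or time.time() >= self.expires_at - self.margin:
            self.token, self.expires_at = get_auth_token()  # hypothetical helper
        return self.token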
Solved my troubles and created the NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
Is there a way to retrieve the number of times a certain URL was "dented" (shared on identi.ca, status.net, and/or the like)?
For Twitter there are several services that give this information:
Twitter itself: http://urls.api.twitter.com/1/urls/count.json?url=http://example.com&callback=twttr.receiveCount
Tweetmeme: http://api.tweetmeme.com/url_info.jsonc?url=http://example.com
Topsy: http://otter.topsy.com/stats.js?url=http://example.com&callback=?
I don't need the fancy extra information that Tweetmeme or Topsy deliver, only the count.
I am aware that this is problematic, given the "distributed" nature of status.net: it will only give a count from one single silo, e.g. identi.ca. However, for me, for now, that would be enough.
Is there such an endpoint that gives me such JSON?
I don't think so. There's a file table in StatusNet databases that holds references to dented URLs (so it wouldn't be hard to count them if you had access to the database or could write a plugin -- i.e., you wouldn't have to parse all notices, just look up the file table), but it's not exposed through the API.
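If you do have database access, the lookup itself would be short. A sketch, assuming the StatusNet schema links a file table (one row per URL) to notices via file_to_post; verify the table and column names against your installation:

import MySQLdb  # StatusNet installations typically use MySQL

conn = MySQLdb.connect(host="localhost", user="statusnet",
                       passwd="secret", db="statusnet")
cur = conn.cursor()
# Count the notices that referenced a given URL
cur.execute(
    """
    SELECT COUNT(*)
    FROM file_to_post fp
    JOIN file f ON f.id = fp.file_id
    WHERE f.url = %s
    """,
    ("http://example.com",),
)
print(cur.fetchone()[0])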
The list of possible API calls for StatusNet is here: http://status.net/wiki/TwitterCompatibleAPI
In addition, there's a proposed Google Summer of Code project on this subject: Social Analytics plugin