Chainlink Request Event Emit - Unsure whether events are being emitted, requests seem to be successful - ethereum

I am trying to learn how to use the ChainlinkClient, and I am using their example as well as one for the API that I am trying to use.
You can see them here on this Gist.
The two contracts in the Gist are deployed on Rinkeby here:
APIConsumner.sol
APIConsumner2.sol
When I call the requestData() method on both contracts, they seem to work: the transactions go through and LINK gets taken from the contracts. However, I am unable to determine whether the actual data I am requesting from the external APIs gets returned, either by looking at the transaction events or by trying to access the value that I am setting.
I am a bit bamboozled at this point; any guidance or suggestions would be greatly appreciated.

Thanks for the flag. The node that was hosting this is deprecated, the article has been updated, and the docs have the latest example.
Please use:
oracle = 0xc57B33452b4F7BB189bB5AfaE9cc4aBa1f7a4FD8;
jobId = "d5270d1c311941d0b08bead21fea7747";
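One way to check whether the fulfill callback ever ran is to query the contract from a script. A minimal sketch with ethers.js (v5): the volume getter name is taken from the standard Chainlink example consumer (adjust it to your contract), the ChainlinkFulfilled event is emitted by ChainlinkClient on callback, and the RPC URL and address are placeholders:

const { ethers } = require("ethers");

// Placeholders: supply your own RPC endpoint and deployed consumer address.
const provider = new ethers.providers.JsonRpcProvider("https://rinkeby.infura.io/v3/<KEY>");
const abi = [
  "function volume() view returns (uint256)",    // public state variable getter
  "event ChainlinkFulfilled(bytes32 indexed id)" // emitted by ChainlinkClient on fulfillment
];
const consumer = new ethers.Contract("<CONSUMER_ADDRESS>", abi, provider);

async function check() {
  // Read the value the fulfill callback should have stored...
  console.log("volume:", (await consumer.volume()).toString());
  // ...and list the fulfillment events the contract has emitted.
  const events = await consumer.queryFilter(consumer.filters.ChainlinkFulfilled());
  console.log("fulfilled request ids:", events.map(e => e.args.id));
}

check();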

Google Data Studio - When is resetAuth() being called?

I am having trouble with the resetAuth() function. I implemented it roughly like this example, but I have no idea when it is being called. Adding a console output and observing the Stackdriver log tells me that this function is never called during what I would call a normal workflow.
The documentation is weirdly brief and is missing the part about why I need to implement it and when it is called. Do I need to call resetAuth() manually at some point? Is there a button somewhere that calls this function?
I'm using the AuthType USER_PASS, by the way, and everything else seems to work just fine after some investigation and debugging.
I found this document called Community Connectors Developer Launch where, among other things, the following is listed (as of 2018-07-30):
What's next: Upcoming changes and improvements
Some of the features and improvements we'll be working on in the
coming months include:
Configuration and Authentication
Capability to execute the resetAuth function of community connectors from within Data Studio.
Does this mean that calling resetAuth() is currently not yet implemented?
resetAuth is called when the user revokes access to the connector via the https://datastudio.google.com/datasources/create endpoint.
There was a bug that caused this function to not be called for certain auth types, but it has been resolved.
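For reference, a minimal sketch of what the USER_PASS pieces can look like in Apps Script; the property keys are hypothetical, but setCredentials and resetAuth are the functions Data Studio invokes:

// Stores the credentials Data Studio collects from the user.
function setCredentials(request) {
  var userProperties = PropertiesService.getUserProperties();
  userProperties.setProperty('dscc.username', request.userPass.username);
  userProperties.setProperty('dscc.password', request.userPass.password);
  return { errorCode: 'NONE' };
}

// Called when the user revokes the connector's access (see above);
// it should wipe whatever setCredentials stored.
function resetAuth() {
  var userProperties = PropertiesService.getUserProperties();
  userProperties.deleteProperty('dscc.username');
  userProperties.deleteProperty('dscc.password');
}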

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code in Google's docs. I have caching and backoff implemented.
Looking for any ideas, or has anyone experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining the semantic type for your fields. If the request is for semantic type detection, it will include sampleExtraction: true:
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimension/metric configurations, then GDS might fire a separate getData call for each of them.
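A sketch of how you can short-circuit the semantic-detection calls so they never hit your API. In current connector requests the flag sits under request.scriptParams (log the raw request to confirm where it is in your payload); getFields and fetchAndFormat are hypothetical helpers:

function getData(request) {
  var isSampleExtraction = request.scriptParams && request.scriptParams.sampleExtraction;
  if (isSampleExtraction) {
    // Semantic type detection only needs one representative row; skip the API.
    return {
      schema: getFields(request).build(),    // hypothetical schema helper
      rows: [{ values: ['2018-07-30', 0] }]  // shape must match your schema
    };
  }
  return fetchAndFormat(request); // hypothetical helper doing the real API work
}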
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter is built to retrieve data via API calls to third-party services, i.e. data which is agnostic to the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's getData request (typically asking for more fields than the search filters) will be the only one allowed to query the API endpoint. Before it starts doing so, it stores a key in the cache, "cache_{hashOfReportParameters}_building" => true:
if (enableCache) {
  // "{hashOfReportParameters}" is a placeholder for the hash computed below;
  // putString comes from the EnhancedCache wrapper linked at the end.
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve the API responses, paginating in a loop, and buffer the results.
Once it has finished, it will delete the cache key "cache_{hashOfReportParameters}_building" and will cache the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for anything that looks like a search filter / widget going after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  // Heuristic: filters usually request very few fields.
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash on all of the moving parts of the report (date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
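One way to sketch that hash with Utilities.computeDigest; which parameters go into reportParams depends entirely on your connector:

// Derive a stable hash from everything that influences the data set.
function computeReportHash(request) {
  var reportParams = {
    dateRange: request.dateRange,       // e.g. { startDate, endDate }
    configParams: request.configParams  // whatever else drives your API queries
  };
  var digest = Utilities.computeDigest(
    Utilities.DigestAlgorithm.MD5,
    JSON.stringify(reportParams)
  );
  // Turn the signed byte array into a hex string usable inside a cache key.
  return digest.map(function (b) {
    return ('0' + (b < 0 ? b + 256 : b).toString(16)).slice(-2);
  }).join('');
}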
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
// Busy-wait until the primary getData call clears the "building" flag
// (keep Apps Script's execution time limit in mind here).
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which would be to allow it to traverse the API again. We have encountered a roughly 2% error rate retrieving data we cached.
With the cached result (or the buffered API responses), you just transform your response into the schema GDS needs (which differs between graphs and filters).
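Condensed, that retrieve-or-fallback step can look like this; getString is from the EnhancedCache wrapper linked below, and fetchFromApi and buildRows are hypothetical helpers:

// Read the merged result from cache, fall back to the API on a miss.
var cacheKey = 'cache_' + reportHash + '_final';
var cached = cache.getString(cacheKey);
var apiResults;
if (cached !== null) {
  apiResults = JSON.parse(cached);
} else {
  // The ~2% miss case: traverse the API again as the backup plan.
  apiResults = fetchFromApi(request); // hypothetical helper
}
// Shape the rows to the exact fields this getData call requested.
return {
  schema: requestedFields.build(),
  rows: buildRows(apiResults, requestedFields) // hypothetical helper
};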
As you start implementing this, you'll notice yet another problem: Google's cache is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting the big chunk you need cached across multiple cache keys, and gluing them back together into one object when retrieving.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
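The idea from that gist, condensed into a sketch against the plain CacheService API (chunk size and key naming are illustrative):

var CHUNK_SIZE = 90 * 1024; // stay safely under the ~100KB per-value limit

// Split a large string across several cache keys.
function putLarge(cache, key, value, ttlSeconds) {
  var count = Math.ceil(value.length / CHUNK_SIZE);
  for (var i = 0; i < count; i++) {
    cache.put(key + '_' + i, value.substr(i * CHUNK_SIZE, CHUNK_SIZE), ttlSeconds);
  }
  cache.put(key + '_count', String(count), ttlSeconds);
}

// Glue the chunks back together; a single expired chunk counts as a miss.
function getLarge(cache, key) {
  var count = Number(cache.get(key + '_count'));
  if (!count) return null;
  var parts = [];
  for (var i = 0; i < count; i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null;
    parts.push(part);
  }
  return parts.join('');
}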
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general: it avoids needless round trips and server load, provided near-real-time data is good enough for your needs.

Autodesk Forge register job conflict

When POSTing to https://developer.api.autodesk.com/viewingservice/v1/register I sometimes receive the following error:
{
  "Diagnostic": "The request is rejected as it conflicts with a previous request that is in-progress.",
  "registerKeys": {},
  "Result": "Conflict"
}
How can I find out which job is already in progress so that I can track its progress and get its result?
First, this is the old API; you should consider using the Model Derivative API instead (i.e. https://developer.autodesk.com/en/docs/model-derivative/v2).
Like Xiaodond said, there is no API to collect all jobs currently processing on your account. You need to request each URN's manifest to determine how many jobs are running on that model, since you can translate to SVF but also export to other formats such as OBJ, STL, ... when possible. Manifest endpoint and documentation here - https://developer.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-manifest-GET/
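For example, a minimal Node.js sketch of polling one URN's manifest via the endpoint linked above (the access token and URN are placeholders you must supply):

const https = require("https");

const urn = "<BASE64_ENCODED_URN>";
const options = {
  hostname: "developer.api.autodesk.com",
  path: "/modelderivative/v2/designdata/" + urn + "/manifest",
  headers: { Authorization: "Bearer <ACCESS_TOKEN>" }
};

https.get(options, function (res) {
  var body = "";
  res.on("data", function (chunk) { body += chunk; });
  res.on("end", function () {
    var manifest = JSON.parse(body);
    // status is e.g. "inprogress" or "success"; progress is a percentage string.
    console.log(manifest.status, manifest.progress);
  });
});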
Last, we are working on a webhook solution, which will be better as a webhook will call you back when a job starts and completes. Webhooks aren't available at the time of this post, but you should be notified via the developer newsletter when they reach production.
Hope that helps,

The prefix "atom" for element "atom:cc" is not bound exception

I am trying to fetch the contacts of a user who has an account in the Google Apps Marketplace. While fetching the contacts I get the following error:
com.google.gdata.util.ParseException: The prefix "atom" for element "atom:cc" is not bound.
at com.google.gdata.util.XmlParser.parse(XmlParser.java:695)
at com.google.gdata.util.XmlParser.parse(XmlParser.java:568)
at com.google.gdata.data.BaseFeed.parseAtom(BaseFeed.java:793)
at com.google.gdata.wireformats.input.AtomDataParser.parse(AtomDataParser.java:68)
at com.google.gdata.wireformats.input.AtomDataParser.parse(AtomDataParser.java:39)
at com.google.gdata.wireformats.input.CharacterParser.parse(CharacterParser.java:)
at com.google.gdata.wireformats.input.XmlInputParser.parse(XmlInputParser.java:52)
...
I am using the Java client library to fetch the contacts. Can you please let me know whether there is an issue in the Java client library? This issue has been around for a long time and I badly need to find a solution. What should I do to make it work? Any help would be appreciated.
Thanks,
VijayRaj
I ran into the same problem that you have with the Java client, but with the .NET client.
After contacting Google support, they told me that the contact's arbitrary XML data, which lives in a Property element, cannot be parsed by my version of GData.
However, there is a time-intensive workaround of deleting and recreating the contacts, but that's probably not what you are looking for; me neither.
After switching to the Python implementation, everything works fine now.
Check out this issue report: Issue 361

What is the difference between the BU and ZK OK codes in SAP macro

I am trying to post an invoice to SAP using the F-47 transaction, and I am using SHDB to record the transaction and learn how it works. I see there that sometimes the BU and sometimes the ZK BDC OK codes are used. I would like to understand the difference between them, but could not find any official documentation. Can you please explain the difference between the two?
I found the meaning of some of the OK codes. I post them here so I can remember:
/00 Enter
/AB Go to overview
=ZK Go to additional information
=ENTE Enter (not sure how it differs from /00)
=PI Select cursor location
=STER Go to taxes
=DELZ Delete at cursor
=GO Continue
=BU Post (save)
/EEND End processing
=Yes Select "yes" in a message box
=BP Park (save)
=ENTR Enter (not sure how it differs from =ENTE or /00)
=AE Save when changing a document
=BK Change document header (when parking or posting a parked document)
=P+ Next page
=BL Delete parked document
A BDC_OKCODE indicates which action will be executed on a screen (things like save, back, exit, etc.). The BU code is used for a SAVE function (as in the MM01 transaction). Sorry, but I cannot recall which function ZK maps to; obviously the difference is that they map to different functions. You can find out which function each button uses via System->Status->GUI status.
By the way, BTCI (batch input) transactions are not fully robust: minor changes in the GUI flow can break your program, and error handling / analysis is tedious. Did you have a look at preferable posting methods, e.g. BAPI_* function modules? With the help of LSMW you can browse different input methods and use them standalone later. Or you can use transaction BAPI directly.