SSIS 2010 ZappySys more than 300 rows issue

I am using SSIS 2010 with a trial of the ZappySys extension, testing its JSON Source component (REST API or file).
The issue I have is that the "total rows to scan" setting defaults to 300. I have tried to override this, but it still returned 300 rows. I would like to use this extension more; is there a way of getting more than 300 rows of data? Does anyone know how to bypass this?
There is a post on the ZappySys blog, but it does not state how to get more than 300 rows:
https://zappysys.com/blog/how-to-read-data-from-servicenow-rest-api-ssis/

Disclaimer: I work for ZappySys.
The scan option affects only the metadata guess in ZappySys. Based on your description, it looks like your issue is not the metadata guess but fetching more rows with pagination. The process of looping through ServiceNow data is described here.
To use pagination in ZappySys, make sure you configure the JSON Source as per the article below.
Step-By-Step : Configure ServiceNow API Pagination
Screenshot - Pagination Settings:
Here is an example of pagination in ServiceNow. Let's say you are fetching all rows for MyTable1. In that case you have to keep calling the API as below until the last page is reached. You detect the last page by trapping a WebException with StatusCode 404:
/api/now/table/MyTable1?sysparm_limit=10&sysparm_offset=0
/api/now/table/MyTable1?sysparm_limit=10&sysparm_offset=10
/api/now/table/MyTable1?sysparm_limit=10&sysparm_offset=20
/api/now/table/MyTable1?sysparm_limit=10&sysparm_offset=30
/api/now/table/MyTable1?sysparm_limit=10&sysparm_offset=NNN
When the last page is reached you get a 404 error; stop looping.
The ZappySys pagination settings take care of all of this for you automatically.
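The loop described above can be sketched in Ruby. This is a minimal illustration of the offset-based paging logic only: `fetch_page` is a stub standing in for the real HTTP call to `/api/now/table/MyTable1`, and the simulated 404 mirrors the end-of-data behavior described above.

```ruby
# Simulated ServiceNow table: 25 rows, served 10 at a time.
ALL_ROWS = (1..25).map { |i| { "sys_id" => i } }

# Stub for GET /api/now/table/MyTable1?sysparm_limit=...&sysparm_offset=...
# Raises once the offset runs past the data, mimicking the 404 described above.
def fetch_page(limit, offset)
  raise "404 Not Found" if offset >= ALL_ROWS.length
  ALL_ROWS[offset, limit]
end

rows   = []
limit  = 10
offset = 0
loop do
  begin
    rows.concat(fetch_page(limit, offset))
  rescue RuntimeError
    break # last page reached (404): stop looping
  end
  offset += limit
end

puts rows.length # => 25
```

With a real endpoint you would swap the stub for an HTTP GET and trap the 404 status instead of a RuntimeError; the loop structure stays the same.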

Related

Jenkins API xpath like functionality for JSON

I am trying to use the Jenkins API to retrieve a list of running jobs' buildURLs, and this works with the query
https://jenkins.server.com/computer/api/xml?tree=computer[executors[currentExecutable[url]]]&depth=1&xpath=//url&wrapper=buildUrls
This searches for all executors on a given Jenkins server, then grabs the URLs and wraps them in an XML buildUrls object.
(What I actually want is a count of total running jobs, but I can filter in the API and then take a .size of the result client side.)
However, the application I am using only accepts JSON. Although I could convert to JSON, I am wondering if there is a way to have this return a JSON object which contains just the buildUrls. Currently, if I change the return type to JSON, the xpath modification is removed (since it is XML only),
and I instead get the list of all executors and their status.
I think my answer lies within tree, but I can't seem to get the result I want.
To return a JSON object, you can modify the URL as below:
http://jenkins.server.com/computer/api/json?tree=computer[executors[currentExecutable[url]]]&depth=1&pretty=true
It will not be possible to get only the build URLs; you need to iterate through all the executables to get the total running jobs. The reason is that the build URLs alone cannot tell you where a job is running. Hence, unfortunately, we need to iterate through each executor.
The output you will get is something like below.
Another way of doing the same thing is parsing via jobs:
http://jenkins.server.com/api/json?tree=jobs[name,url,lastBuild[building,timestamp]]&pretty=true
You can check the building parameter. But here too you will not be able to filter the output by URL directly in Jenkins; iteration through each and every job would be required.
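The client-side iteration described above can be sketched like this. The JSON below is a hand-made stand-in for a real reply to the `jobs[name,url,lastBuild[building,timestamp]]` query; the job names and URLs are invented for illustration.

```ruby
require 'json'

# Invented stand-in for /api/json?tree=jobs[name,url,lastBuild[building,timestamp]]
response = <<~JSON
  {
    "jobs": [
      {"name": "build-app", "url": "https://jenkins.server.com/job/build-app/",
       "lastBuild": {"building": true, "timestamp": 1530000000000}},
      {"name": "deploy", "url": "https://jenkins.server.com/job/deploy/",
       "lastBuild": {"building": false, "timestamp": 1529990000000}},
      {"name": "never-run", "url": "https://jenkins.server.com/job/never-run/",
       "lastBuild": null}
    ]
  }
JSON

jobs = JSON.parse(response)["jobs"]

# Jenkins cannot filter this server side, so walk every job and keep
# the ones whose last build is still running.
running = jobs.select { |j| j["lastBuild"] && j["lastBuild"]["building"] }

puts running.length              # => 1
puts running.map { |j| j["url"] }
```

Note the nil check on lastBuild: a job that has never run has no lastBuild object at all.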

SSIS Error: OData VS_NEEDSNEWMETADATA Error When No Data Returned

I searched but didn't find anything specific to the issue I was having. My apologies if I overlooked it.
I have a scenario where I'm pulling data from an OData source, and everything works fine. I have the OData source in a loop, where I loop over each of our different companies and pull that company's data. I'm doing this to reduce the volume of data returned by each OData call.
Everything works fine as long as there is data being returned from the oData call. In the attached photo you can see that the call is being made and data is being returned.
But when the OData service is called with parameters/filters that return no data, that is when the VS_NEEDSNEWMETADATA error is thrown. The image below shows what I get when no data is returned.
So the issue isn't that I have invalid metadata due to changes made to the service (adding/removing fields); it's that nothing is being returned, so there is no metadata. It's possible that this is an issue with the system I'm pulling the OData from (SAP S4) and the way that system surfaces the OData call when there is no data.
Either way, I'm trying to figure out a way to handle this within SSIS. I tried setting ValidateExternalMetadata = False, but the package still fails. I could also fix this by excluding those companies in the script, but then once data does exist I'd have to remember to update the scripts and redeploy.
Ideas or suggestions?
You've hit the nail on the head, except your metadata is changing in the no-data case because the shape is different, at least from the SSIS engine's perspective.
You basically need to double-invoke the OData source. The first invocation simply identifies whether data is returned; only if that evaluation is true does the Data Flow start running.
How can you evaluate whether your OData call will return data? You're likely looking at a Script Task and C#/VB.NET, and hopefully the SAP data returns don't vary between this call and the next.
If that's the case, then you need to define an OnError event handler for the Data Flow, mark the system-scoped variable Propagate as False, and maybe even force a success status to be returned to get the loop to continue.

Caching API response from simple ruby script

I'm working on a simple Ruby script with a CLI that will allow me to browse certain statistics inside the terminal.
I'm using API from the following website: https://worldcup.sfg.io/matches
require 'httparty'
url = "https://worldcup.sfg.io/matches"
response = HTTParty.get(url)
I have two goals in mind. The first is to somehow save the JSON response (I'm not using a database) so I can avoid unnecessary requests. The second is to check if new data is available and, if it is, to overwrite the previously saved response.
What's the best way to go about this?
... with cli ...
So caching in memory is likely not available to you. In this case you can save the response to a file on disk.
Second is to check if the new data is available, and if it is, to override the previously saved response.
The thing is, how can you check whether new data is available without requesting that data? It's not possible (given the information you provided). So you can simply keep fetching the data every 5 minutes or so and updating your local file.
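A minimal sketch of that approach, with the HTTParty call replaced by a stub so it runs offline; the cache path and the five-minute window are arbitrary choices, not anything mandated by the API.

```ruby
require 'json'
require 'tmpdir'

CACHE_FILE = File.join(Dir.tmpdir, "matches_cache.json")
CACHE_TTL  = 5 * 60 # seconds; arbitrary refresh window

# Stub standing in for HTTParty.get("https://worldcup.sfg.io/matches").body
def fetch_matches
  JSON.generate([{ "home_team" => "FRA", "away_team" => "CRO" }])
end

# Return cached data if the file on disk is fresh; otherwise re-fetch
# and overwrite the previously saved response.
def cached_matches
  fresh = File.exist?(CACHE_FILE) &&
          (Time.now - File.mtime(CACHE_FILE)) < CACHE_TTL
  File.write(CACHE_FILE, fetch_matches) unless fresh
  JSON.parse(File.read(CACHE_FILE))
end

matches = cached_matches
puts matches.first["home_team"] # => FRA
```

The file's mtime doubles as the "last fetched" timestamp, so no extra bookkeeping is needed.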

JsonConverter Excel VBA - HTTP get always same

I am retrieving some JSON data from an HTTP service and using JsonConverter to parse it.
When I first run the macro everything is fine, but when I run the macro again I get the same data as the first time (when I should be getting updated data, which includes a ticker changing every 100 ms on the client side).
If I close Excel and reopen again I get the latest data.
What is the problem here?
Any help appreciated.
The issue is due to the caching mechanism.
To avoid it, use Msxml2.ServerXMLHTTP.6.0 instead of Msxml2.XMLHTTP.6.0.
You could also keep using Msxml2.XMLHTTP.6.0, but you'll have to set the following header:
xhr.SetRequestHeader "If-None-Match", "-"

Retrieve URL JSON data in MS Access

There is a web service that allows me to go to a URL, with my API key, and request a page of data. The data is returned as JSON. The JSON is well formed; I ran it through JSONLint and confirmed it's OK.
What I would like to do is retrieve the JSON data from within MS Access (2003 or 2007), if possible, and build a table from that data (the first time through), then append to/update the table on subsequent calls to that URL. I would settle for a "pre-step" where I retrieve this information via another means. Since I have an API key in the URL, I do not want to do this server-side; I would like to keep it all within Access and run it on my PC at home (it's for personal use anyway).
If I have to use another step before the database load, then JavaScript? But I don't know that very well. I don't even really know what JSON is, other than what I have read on Wikipedia. The URL looks similar to:
http://www.SomeWebService.com/MyAPIKey?p=1&s=50
where: p = page number
s = records per page
AccessDB is a JavaScript library for MS Access; a quick page search says it plays nicely with JSON, and you can do input/output with it. Boo-ya.
http://www.accessdb.org/
EDIT:
Dead URL; Wayback Machine FTW:
http://web.archive.org/web/20131007143335/http://www.accessdb.org/
It's also on SourceForge:
http://sourceforge.net/projects/accessdb/
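Whichever tool ends up doing the HTTP work, the page loop the question describes (p = page number, s = records per page) has the same shape in any language. A sketch in Ruby, with `fetch_page` as a stub for the real keyed URL so nothing here hits the network; the 120-record total is invented for illustration:

```ruby
PAGE_SIZE = 50

# Stub for GET http://www.SomeWebService.com/MyAPIKey?p=<page>&s=<size>
# Pretend the service holds 120 records; past the end it returns an empty page.
def fetch_page(page, size)
  all = (1..120).map { |i| { "id" => i } }
  all[(page - 1) * size, size] || []
end

records = []
page = 1
loop do
  batch = fetch_page(page, PAGE_SIZE)
  break if batch.empty? # an empty page signals no more data
  records.concat(batch)
  page += 1
end

puts records.length # => 120
```

The same increment-until-empty logic would translate directly to a VBA loop inside Access.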