Ethereum: API call for Terminal Total Difficulty (TTD) - json

I see you can get the current block data with
https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey=YourApiKeyToken
and
https://api.blockcypher.com/v1/eth/main
But neither includes the TTD. Any idea how I can get this info? I want to poll it.

Those are proprietary APIs from services that don't expose the raw block data.
If you want the raw block data, including its totalDifficulty, you need to query an Ethereum node over its JSON-RPC API, using the eth_getBlockByNumber or eth_getBlockByHash method.
Example with Infura, a well-known 3rd party Ethereum node provider:
curl -X POST 'https://mainnet.infura.io/v3/<your_api_key>' \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1b4", true],"id":1}'
The response contains the totalDifficulty field which, according to the docs, represents the "integer of the total difficulty of the chain until this block".
{"jsonrpc":"2.0","id":1,"result":{"difficulty":"0x4ea3f27bc","extraData":"0x476574682f4c5649562f76312e302e302f6c696e75782f676f312e342e32","gasLimit":"0x1388","gasUsed":"0x0","hash":"0xdc0818cf78f21a8e70579cb46a43643f78291264dda342ae31049421c82d21ae","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0xbb7b8287f3f0a933474a79eae42cbca977791171","mixHash":"0x4fffe9ae21f1c9e15207b1f472d5bbdd68c9595d461666602f2be20daf5e7843","nonce":"0x689056015818adbe","number":"0x1b4","parentHash":"0xe99e022112df268087ea7eafaf4790497fd21dbeeb6bd7a1721df161a6657a54","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","size":"0x220","stateRoot":"0xddc8b0234c2e0cad087c8b389aa7ef01f7d79b2570bccb77ce48648aa61c904d","timestamp":"0x55ba467c","totalDifficulty":"0x78ed983323d","transactions":[],"transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","uncles":[]}
Ethereum is scheduled to perform the upgrade to PoS when the total difficulty reaches the threshold of 58,750,000,000,000,000,000,000, known as the Terminal Total Difficulty (TTD).
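Since you want to poll it, here is a minimal polling sketch in Python using the requests library (the library choice, the 15-second interval, and the placeholder API key are assumptions; the endpoint, method, and TTD value are the ones quoted above, except that it polls the latest block instead of a fixed one):

import time
import requests

INFURA_URL = "https://mainnet.infura.io/v3/<your_api_key>"  # placeholder key
TTD = 58_750_000_000_000_000_000_000  # the PoS threshold quoted above

payload = {"jsonrpc": "2.0", "method": "eth_getBlockByNumber",
           "params": ["latest", False], "id": 1}

while True:
    block = requests.post(INFURA_URL, json=payload).json()["result"]
    total = int(block["totalDifficulty"], 16)  # values are hex-encoded strings
    print(f"block {int(block['number'], 16)}: totalDifficulty={total}")
    if total >= TTD:
        print("Terminal Total Difficulty reached")
        break
    time.sleep(15)  # roughly one PoW block interval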

Related

How to run JMeter thread groups one after another

The situation is as follows: I need to run performance tests using JMeter. I have set up several GraphQL requests, tested them, and they work perfectly. The flow is:
The user logs in to the platform using a GraphQL mutation and receives an accessToken in the response headers, which is valid for one hour.
The next GraphQL request must parse the generated token and perform the next mutation using it.
The idea is to test the platform with 250k requests, which takes more than one hour, so, as you probably understand, the token expires.
What I have done within my script:
I've created 2 thread groups. In the first, there is a Flow Control Action element, which is set up to pause for 3600000 ms. This means that it will run the Login request once per hour.
I'm extracting the token with a JSON Extractor and promoting it to a JMeter property using a BeanShell PostProcessor, with the following command:
props.put("accessToken", vars.get("accessToken")); // copy the thread-local variable into a property visible to all thread groups
The second thread group has a GraphQL request that performs the 250k requests to the server and takes the token value from the HTTP headers:
Bearer ${__P(accessToken)}
Everything works fine, except for the fact that I don't know how to set up the scenario so that the thread group with the login finishes its run once the second thread group has completed the 250k requests.
I've tried adding a Loop Controller to the Login, set to infinite, but once the second group completes the 250k requests the run never finishes, because the Login keeps running once per hour forever.
Any ideas?
You can add a Loop Controller or Throughput Controller and configure them to execute these 250k requests, followed by a Flow Control Action sampler configured to stop all threads.
When the sampler is reached, it will tell all the threads in all thread groups to stop.
Also be aware that since JMeter 3.1 you're supposed to be using JSR223 Test Elements and the Groovy language for scripting, so it makes sense to consider migrating now. Storing a property once per hour with one thread is not something you should be worrying about, but for more resource-intensive tasks Groovy behaves much better (for example, it has built-in JSON support, so you could discard the JSON Extractor).

How to retrieve git commit history for specific time period with cURL?

I am trying to retrieve commit history as JSON and output in a txt file.
curl https://api.github.com/repos/username/repo/commits > commitHistory.txt
With the above curl command, I only get the first page of the commit history. I would like to retrieve the full commit history, or maybe set a date range when doing it. How should I do it?
You can use the since and until parameters to get commits for a specific time period only:
curl 'https://api.github.com/repos/username/repo/commits?since=2016-11-01T00:00:00Z&until=2016-11-01T23:59:59Z'
(Note the quotes: an unquoted & would send the command to the background in most shells.) For details, see the API documentation.
Responses from the GitHub API are automatically paginated for large result sets, so you'll need to inspect the Link: header and make further requests as long as there are more results. The API documentation offers more information:
Requests that return multiple items will be paginated to 30 items by default. You can specify further pages with the ?page parameter. For some resources, you can also set a custom page size up to 100 with the ?per_page parameter. Note that for technical reasons not all endpoints respect the ?per_page parameter, see events for example.
curl 'https://api.github.com/user/repos?page=2&per_page=100'
Note that page numbering is 1-based and that omitting the ?page parameter will return the first page.
For more information on pagination, check out our guide on Traversing with Pagination.
You could also do this using a Python library like github3.py (or equivalent), which will handle pagination for you.
In terms of a specific date range, philipjkim's answer is correct: use the since and until parameters.
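For completeness, here is a sketch of that approach in plain Python with the requests library, combining since/until with Link-header pagination (username/repo are the placeholders from the question; requests conveniently parses the Link: header into response.links):

import requests

url = "https://api.github.com/repos/username/repo/commits"
params = {"since": "2016-11-01T00:00:00Z",
          "until": "2016-11-01T23:59:59Z",
          "per_page": 100}

commits = []
while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    commits.extend(resp.json())
    url = resp.links.get("next", {}).get("url")  # None on the last page
    params = None  # the "next" URL already carries all query parameters

print(f"retrieved {len(commits)} commits")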

WSO2 IS 5.1.0 as OAuth/OIDC IdP response with different claims on UserInfo endpoint

Does anyone know why I obtain different JSON responses when making a call to the /userinfo endpoint? Specifically:
When I make the call with curl from the command line, like $ curl -k -H "Authorization: Bearer 2bcea7cc9d7e4b63fd2257aa31116512" https://localhost:9443/oauth2/userinfo?schema=openid, I obtain as response the JSON: {"sub":"asela","name":"asela","preferred_username":"asela","given_name":"asela","family_name":"asela"}
If I make the call with a Java client (a library that implements the Authorization Code Flow), the /userinfo call returns a JSON like {"sub":"asela#carbon"} without all the other claims.
The claims for the service defined in WSO2 IS are the default ones. Thanks for any help.
I have tried this and got the same issue that you have faced. As I mentioned in my previous comment, it occurs due to a claim mapping issue. Normally we get the user's attributes from the “http://wso2.org/claims” dialect, but when we call the OpenID userInfo endpoint, it provides the user's attributes from “http://wso2.org/oidc/claim”. Not all the claims in http://wso2.org/claims are defined in http://wso2.org/oidc/claim (e.g. Mobile, Address, Organization), so we have to define the required claims in the http://wso2.org/oidc/claim dialect if they are not defined.
You can check these claims from the Identity Server Management Console. To do this, log into Management Console > Main > List (under Claims).
Then you can go through the two claim dialects and add the required claims to the http://wso2.org/oidc/claim dialect. To add a new claim, go to Management Console > Main > Add (under Claims) > Add New Claim. Here you need to map the exact Mapped Attribute & Claim URI with the http://wso2.org/claims dialect.
Hope this helps.
WSO2 IS normally returns the claims that are configured under the “http://wso2.org/oidc/claim” claim dialect, but a claim only appears in the response if it has a value, so make sure you have defined claim values in the user's profile. You can follow [1] & [2] for more details about this. If you still can't get the correct response, please attach your SP configuration and claim configuration for further analysis.
[1] http://xacmlinfo.org/2015/03/09/openid-connect-support-with-resource-owner-password-grant-type/
[2] http://shanakaweerasinghe.blogspot.com/2016/01/get-user-profile-for-oauth-token-using.html

REST: Updating Multiple Resources With One Request - Is it standard or to be avoided?

A simple REST API:
GET: items/{id} - Returns a description of the item with the given id
PUT: items/{id} - Updates or Creates the item with the given id
DELETE: items/{id} - Deletes the item with the given id
Now the API-extension in question:
GET: items?filter - Returns all item ids matching the filter
PUT: items - Updates or creates a set of items as described by the JSON payload
[[DELETE: items - deletes a list of items described by JSON payload]] <- Not Correct
I am now interested in whether the collection-level DELETE and PUT operations should recycle the functionality that is already easily accessed via PUT/DELETE items/{id}.
Question: Is it common to provide an API like this?
Alternative: In the age of a single connection carrying multiple requests, issuing multiple requests is cheap and would be more atomic, since each change either succeeds or fails. But in the age of NoSQL databases, a change to the list might already have happened even if the request processing dies with an internal server error, for whatever reason.
[UPDATE]
After considering the White House Web Standards and Wikipedia: REST Examples, the following example API is now proposed:
A simple REST API:
GET: items/{id} - Returns a description of the item with the given id
PUT: items/{id} - Updates or Creates the item with the given id
DELETE: items/{id} - Deletes the item with the given id
Top-resource API:
GET: items?filter - Returns all item ids matching the filter
POST: items - Updates or creates a set of items as described by the JSON payload
PUT and DELETE on /items is not supported and forbidden.
Using POST seems to do the trick, as it creates new items in an enclosing resource, appending rather than replacing.
The HTTP semantics for POST read:
Extending a database through an append operation
whereas the PUT method would require replacing the complete collection in order to return an equivalent representation, as quoted from the HTTP semantics for PUT:
A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being returned in a 200 (OK) response.
[UPDATE2]
An alternative that seems even more consistent for the update aspect of multiple objects is the PATCH method. The difference between PUT and PATCH is described in RFC 5789 as follows:
The difference between the PUT and PATCH requests is reflected in the way the server processes the enclosed entity to modify the resource identified by the Request-URI. In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version. The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH.
So compared to POST, PATCH may also be a better idea, since PATCH allows an update, whereas POST only allows appending, i.e. adding without the chance of modification.
So POST seems to be wrong here, and we need to change our proposed API to:
A simple REST API:
GET: items/{id} - Returns a description of the item with the given id
PUT: items/{id} - Updates or Creates the item with the given id
DELETE: items/{id} - Deletes the item with the given id
Top-resource API:
GET: items?filter - Returns all item ids matching the filter
POST: items - Creates one or more items as described by the JSON payload
PATCH: items - Creates or Updates one or more items as described by the JSON payload
You could PATCH the collection, e.g.
PATCH /items
[ { id: 1, name: 'foo' }, { id: 2, name: 'bar' } ]
Technically, PATCH would identify the record in the URL (i.e. PATCH /items/1), not in the request body, but this seems like a good pragmatic solution.
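For illustration, a client-side sketch of that collection PATCH with Python's requests library (the host and payload are hypothetical):

import requests

resp = requests.patch(
    "https://api.example.com/items",  # hypothetical collection endpoint
    json=[{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}],
)
resp.raise_for_status()  # raises on a 4xx/5xx response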
Deleting, creating, and updating in a single call is not really supported by standard REST conventions, though. One possibility is a special "batch" service that lets you assemble calls together:
POST /batch
[
{ method: 'POST', path: '/items', body: { title: 'foo' } },
{ method: 'DELETE', path: '/items/bar' }
]
which returns a response with status codes for each embedded request:
[ 200, 403 ]
Not really standard, but I've done it and it works.
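To make the idea concrete, here is a minimal server-side sketch of such a batch endpoint in Python with Flask (the framework, the in-memory store, and the dispatch rules are all assumptions, not a standard):

from flask import Flask, request, jsonify

app = Flask(__name__)
ITEMS = {}  # hypothetical in-memory store keyed by title

@app.route("/batch", methods=["POST"])
def batch():
    statuses = []
    for call in request.get_json():
        if call["method"] == "POST" and call["path"] == "/items":
            ITEMS[call["body"]["title"]] = call["body"]
            statuses.append(200)
        elif call["method"] == "DELETE" and call["path"].startswith("/items/"):
            key = call["path"].rsplit("/", 1)[-1]
            # 403 stands in for a per-item failure, as in the example above
            statuses.append(200 if ITEMS.pop(key, None) is not None else 403)
        else:
            statuses.append(405)  # embedded method/path not supported
    # one status code per embedded request, e.g. [200, 403]
    return jsonify(statuses)

if __name__ == "__main__":
    app.run()

A real implementation would also have to decide whether the batch is atomic (all-or-nothing) or best-effort like this sketch.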
Updating Multiple Resources With One Request - Is it standard or to be avoided?
Well, sometimes you simply need to perform atomic batch operations or other resource-related operations that just do not fit the typical scheme of a simple REST API, but if you need it, you cannot avoid it.
Is it standard?
There is no universally accepted REST API standard, so this question is hard to answer. But by looking at some commonly quoted API design guidelines, such as jsonapi.org, restfulapi.net, the Microsoft API design guide, or IBM's REST API Conventions, none of which mention batch operations, you can infer that such operations are not commonly understood to be a standard feature of REST APIs.
That said, an exception is the Google API design guide, which mentions the creation of "custom" methods that can be associated with a resource by using a colon, e.g. https://service.name/v1/some/resource/name:customVerb; it also explicitly mentions batch operations as a use case:
A custom method can be associated with a resource, a collection, or a service. It may take an arbitrary request and return an arbitrary response, and also supports streaming request and response. [...] Custom methods should use HTTP POST verb since it has the most flexible semantics [...] For performance critical methods, it may be useful to provide custom batch methods to reduce per-request overhead.
So in the example case you provided, you would do the following according to Google's API guide:
POST /api/items:batchUpdate
Also, some public APIs decided to offer a central /batch endpoint, e.g. Google's Gmail API.
Moreover, as mentioned at restfulapi.net, there is also the concept of a resource "store", in which you store and retrieve whole lists of items at once via PUT. However, this concept does not apply to server-managed resource collections:
A store is a client-managed resource repository. A store resource lets an API client put resources in, get them back out, and decide when to delete them. A store never generates new URIs. Instead, each stored resource has a URI that was chosen by a client when it was initially put into the store.
Having answered your original questions, here is another approach to your problem that has not been mentioned yet. Please note that this approach is a bit unconventional and does not look as pretty as the typical REST API endpoint naming scheme. I am personally not following this approach but I still felt it should be given a thought :)
The idea is that you could make a distinction between CRUD operations on a resource and other resource-related operations (e.g. batch operations) via your endpoint path naming scheme.
For example, consider a RESTful API that allows you to perform CRUD operations on a "company" resource, where you also want to perform some company-related operations that do not fit the resource-oriented CRUD scheme typically associated with RESTful APIs, such as the batch operations you mentioned.
Now instead of exposing your resources directly below /api/companies (e.g. /api/companies/22) you could distinguish between:
/api/companies/items – i.e. a collection of company resources
/api/companies/ops – i.e. operations related to company resources
For items, the usual RESTful API HTTP methods and resource-URL naming schemes apply:
POST /api/companies/items
GET /api/companies/items
GET /api/companies/items/{id}
DELETE /api/companies/items/{id}
PUT /api/companies/items/{id}
Now for company-related operations, you could use the /api/companies/ops/ route prefix and invoke operations via POST:
POST /api/companies/ops/batch-update
POST /api/companies/ops/batch-delete
POST /api/companies/ops/garbage-collect-old-companies
POST /api/companies/ops/increase-some-timestamps-just-for-fun
POST /api/companies/ops/perform-some-other-action-on-companies-collection
Since POST requests do not have to result in the creation of a resource, POST is the right method to use here:
The action performed by the POST method might not result in a resource that can be identified by a URI.
https://www.rfc-editor.org/rfc/rfc2616#section-9.5
As far as I understand the REST concept, it covers updating multiple resources with one request. Actually, the trick here is to assume a container around those multiple resources and treat it as a single resource. E.g., you could just assume that a list of IDs identifies a resource that contains several other resources.
In the examples on Wikipedia, they also talk about resources in the plural.

Querying REST API using cURL

To begin with, I am a novice with both REST and cURL. This is what I want to do:
I want to query the Twitter REST API from the command line (a bash script), and hence I want to use cURL to do it.
I have been searching (with no luck) for a tutorial that explains how to use REST with cURL: how to form the headers, GET/POST, and how to receive the REST response and the response status, so that I can write my logic based on the response status.
I would greatly appreciate a few examples of how to call the Twitter REST API using curl and also how to access the REST response status.
I tried calling twitter using this:
curl "http://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&screen_name=$user_name&count=150"
And I got JSON which I was able to use in my program. But I am not sure how to access the REST response status codes (in order to make my implementation more reliable). Also, I am not sure what other flags I should pass to cURL when calling it.
PS: Just to add, if you think there is some other MUCH better alternative to retrieving data from a REST API than using cURL, please let me know too. My application is not web based and is more data-crunching oriented, which is why I chose the command line and bash.
curl -i will print out the header information:
HTTP/1.1 200 OK
Date: Sat, 29 Dec 2012 07:34:00 GMT
I know it doesn't directly answer your question, but I'd suggest using Python instead of bash, for a number of reasons. And if you decide on that, I strongly recommend the marvellous Requests library: http://docs.python-requests.org
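For example, here is a minimal sketch with Requests that shows how to read the status code, which is awkward with plain curl (note that the Twitter v1 endpoint from the question has since been retired, so treat the URL and screen name as placeholders):

import requests

resp = requests.get(
    "http://api.twitter.com/1/statuses/user_timeline.json",
    params={"include_entities": "true",
            "screen_name": "user_name",  # placeholder
            "count": 150},
)
print(resp.status_code)  # e.g. 200 on success
if resp.ok:  # any 2xx/3xx status
    timeline = resp.json()  # parsed JSON body, ready for data crunching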