Reading smart contract from malicious node - ethereum

With smart contracts, I know transactions are verified by multiple nodes, but reading only requires one node. What if that one node is malicious and gives out corrupted data? Is this possible?

Yes, it is technically possible for a node to be malicious and to return modified results (to either all queries or just selected ones).
Apart from non-technical ways to minimize the risk of retrieving data from a malicious node (e.g. request data only from reputable providers, ...), you can set up your own node that you have control over. Here are two widely used open-source Ethereum clients that you can run on your machine:
https://geth.ethereum.org/docs/getting-started
https://openethereum.github.io/index
Both can communicate with external apps using the standardized JSON-RPC API (there are wrappers over this API, for example the web3 and ethers.js libraries).
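For illustration, here is a minimal sketch of querying your own node over plain JSON-RPC with Go's standard library. It assumes your node exposes the HTTP JSON-RPC endpoint at the default http://localhost:8545; eth_blockNumber is a standard method:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// eth_blockNumber is part of the standard Ethereum JSON-RPC API.
	body := []byte(`{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}`)
	resp, err := http.Post("http://localhost:8545", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)
	fmt.Println(string(raw)) // e.g. {"jsonrpc":"2.0","id":1,"result":"0x10d4f"}
}

Since you control the node that answers this request, there is no third party left to tamper with the result.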

Related

JSON-RPC vs JSON Patch

There are a lot of comparisons like "REST vs something" (e.g. vs Kafka, vs JSON-RPC), but I also see many similarities between JSON-RPC and JSON Patch: both of them specify an operation/method and values/parameters, and both allow batch requests. The only difference I see is that JSON-RPC also describes a response format with IDs and errors, so it looks more mature. But maybe they just have different pros and cons, and different appropriate use cases?
JSON-RPC (2.0) is indeed mostly mentioned as a REST alternative. (A look at the more general discussion, REST vs SOAP vs RPC, might be in order, but this is a rabbit hole, seriously. Part of the problem is that people talk about different RESTs and RPCs and confuse them.) Historically, RPC systems often suffered from confounded dependencies related to schema requirements, custom serializations, protocol restrictions, and complex RPC/function mappings. JSON-RPC tries to disentangle these dependencies; it does not want to be everything-RPC for everybody but stays relatively narrow in scope. It supports a small(er) set of commands and is more opinionated about how to implement them. Nonetheless, it's a complete package, highly flexible, and used for complex APIs that struggle to map all operations to HTTP verbs. Newer capabilities like events, notifications, and batch actions make JSON-RPC very versatile and a good choice for all kinds of projects; the additions are especially useful for action-based microservices.
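For concreteness, here is a sketch of the JSON-RPC 2.0 envelope built and sent from Go; the add method and the endpoint URL are made up for illustration:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type rpcRequest struct {
	JSONRPC string        `json:"jsonrpc"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
	ID      int           `json:"id"`
}

func main() {
	// Wire form: {"jsonrpc":"2.0","method":"add","params":[1,2],"id":1}
	req := rpcRequest{JSONRPC: "2.0", Method: "add", Params: []interface{}{1, 2}, ID: 1}
	payload, _ := json.Marshal(req)
	// A success response echoes the id: {"jsonrpc":"2.0","result":3,"id":1};
	// an error response carries an error object instead of a result.
	resp, err := http.Post("https://api.example.com/rpc", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}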
JSON Patch (RFC 6902), arguably, tries to solve a single common problem of APIs, i.e. partial updates. Instead of PUTting or POSTing (potentially heavyweight) changes to a (historically) growing URL space, using query params to carry the semantics, or introducing other complexities, JSON Patch offers an atomic update request that patches your JSON using a series of "add", "remove", "replace", "move" and "copy" operations. To quote Mark Nottingham (IETF), one of the authors of the JSON Patch RFC:
This has a bunch of advantages. Since it’s a generic format, you can write server- and client-side code once, and share it among a number of applications. You can do QA once, and make sure you get it right. Furthermore, your API will become less complex, because it has less URI conventions, leading to more flexibility and making it easier to approach for new developers. Using this approach will also make caches operate more correctly; since modifications to a resource will “travel” through its URL, rather than some other one, the right stored responses will get invalidated.
Similar things can be said about JSON Merge Patch (RFC 7396), which can be used for the same purpose but works more like a diff/patch that simply contains the changes instead of using mutating operations. In this respect, JSON Merge Patch is simpler but more limited than JSON Patch: e.g. you cannot set a key to null (since null means remove), merging only works with objects {..} while arrays [..] can only be replaced as a whole, and merging never fails, which can cause side effects, so it is not transactional. This renders Merge Patch (overly) simplistic and impractical for certain real-world applications. There are more related projects, with names like JSON-Diffpatch, that take similar approaches.
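Here is a sketch of both flavors in Go, using the third-party github.com/evanphx/json-patch library (one of several RFC 6902/7396 implementations; the document and patches are made up):

package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	doc := []byte(`{"name":"Alice","age":30,"tags":["a","b"]}`)

	// JSON Patch (RFC 6902): an ordered list of mutating operations.
	patch, err := jsonpatch.DecodePatch([]byte(`[
		{"op":"replace","path":"/age","value":31},
		{"op":"remove","path":"/tags/0"}
	]`))
	if err != nil {
		panic(err)
	}
	patched, err := patch.Apply(doc)
	if err != nil {
		panic(err) // a failing op aborts the whole patch
	}
	fmt.Println(string(patched)) // {"name":"Alice","age":31,"tags":["b"]}

	// JSON Merge Patch (RFC 7396): just the changed fields; null means remove.
	merged, _ := jsonpatch.MergePatch(doc, []byte(`{"age":31,"name":null}`))
	fmt.Println(string(merged)) // {"age":31,"tags":["a","b"]}
}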
In general, all of them have advantages and disadvantages, and none of them will fit everybody's use cases. It really depends on what problems you want to solve with your APIs. JSON-RPC is arguably more versatile and mature; however, JSON Patch is also a sound choice, especially if you control both sides of the communication.

Sending continuous data over HTTP with Go

I am currently working on a web service in Go that essentially takes a request and sends back JSON, rather typical. However, this particular JSON takes 10+ seconds to actually complete and return. Because I am also making a website that depends on the JSON, and the JSON contents are subject to change, I implemented a route that quickly generates and returns (potentially updated or new) names as placeholders that would get replaced later by real values that correspond to the names. The whole idea behind that is the website would connect to the service, get back JSON almost immediately to populate a table, then wait until the actual data to fill in came back from the service.
This is where I encounter an issue, potentially because I am newish to Go and don't understand its vast libraries completely. The previous method that I used to send JSON back through the HTTP request was ResponseWriter.Write(theJSON). However, Write() terminates the response, so the website would have to continually ping the service, which could be costly now and will be disastrous in the future.
So, I am seeking some industry knowledge into my issue. Can HTTP connections be continuous like that, where data is sent piecewise through the same http request? Is that even a computationally or security smart feature, or are there better ways to do what I am proposing? Finally, does Go even support a feature like that, and how would I asynchronously handle it for performance optimization?
For the record, my website is using React.js.
I would use WebSockets (over HTTPS/WSS) to achieve this effect, rather than a long-persisting TCP connection, or even in addition to one. See the golang.org/x/net/websocket package from the Go developers, or the excellent http://www.gorillatoolkit.org/pkg/websocket from the Gorilla web toolkit, for usage details. You might use padding and smaller subunits to allow interruption and restart of a submission, or a kind of diff protocol to rewrite previously submitted JSON. I found WebSockets pretty stable even with small connection breakdowns.
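A minimal sketch of the Gorilla approach, with a made-up /live route and payload: the server upgrades the HTTP connection once, then pushes a JSON message whenever new data is ready.

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // default options

func live(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println(err)
		return
	}
	defer conn.Close()
	for i := 0; ; i++ {
		// Push an update whenever new data is ready (here: once per second).
		msg := map[string]interface{}{"seq": i, "value": "updated data"}
		if err := conn.WriteJSON(msg); err != nil {
			return // client went away
		}
		time.Sleep(time.Second)
	}
}

func main() {
	http.HandleFunc("/live", live)
	log.Fatal(http.ListenAndServe(":8080", nil))
}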
Go does have a keep-alive ability: net.TCPConn's SetKeepAlive. The github.com/felixge/tcpkeepalive package offers finer control:
// Tune TCP keep-alive on an accepted net.Conn (github.com/felixge/tcpkeepalive).
kaConn, err := tcpkeepalive.EnableKeepAlive(conn)
if err != nil {
	log.Fatal(err)
}
kaConn.SetKeepAliveIdle(30 * time.Second)    // idle time before the first probe
kaConn.SetKeepAliveCount(4)                  // unanswered probes before dropping the connection
kaConn.SetKeepAliveInterval(5 * time.Second) // interval between probes
Code adapted from felixge (github.com/felixge/tcpkeepalive).
You can also use a REST API as the web service and send the data as JSON, so you can continuously send data over a communication channel.
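To answer the "piecewise over the same request" part directly: with plain HTTP you can keep the response open and flush partial JSON as it becomes ready, via the standard library's http.Flusher (Go then switches to chunked transfer encoding automatically). A minimal sketch, with a made-up payload shape standing in for the placeholder/real data:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

func stream(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	enc := json.NewEncoder(w)
	// First chunk: the placeholder names, sent immediately.
	enc.Encode(map[string]string{"phase": "placeholders"})
	flusher.Flush()
	// Later chunk: the real values, once the slow computation finishes.
	time.Sleep(10 * time.Second) // stand-in for the 10+ second computation
	enc.Encode(map[string]string{"phase": "final"})
	flusher.Flush()
}

func main() {
	http.HandleFunc("/data", stream)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Note this emits one JSON document per line, so the React client would read the body incrementally (e.g. with a streaming fetch reader) and parse line by line.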

Does Apache Thrift allow foreign function calls between any two languages?

I'm currently trying to develop an API in multiple programming languages that can be accessed from various other programming languages. I've taken a look at Apache Thrift, and it appears that it might be possible to allow seamless foreign function calls between any two languages using Thrift. Is this correct?
Thrift was created to facilitate communication between different processes over the network, not in-process FFI. It is probably possible to take some parts of Thrift (like the IDL) and adapt them for FFI, but it would be a nontrivial undertaking and would likely provide suboptimal results.
I have actually been thinking of something similar myself.
There are three core concepts in the Thrift specification:
The Transport: This portion is responsible for facilitating data transfer between a client and server.
The Protocol: This portion is responsible for formatting the said data in different ways. It can be JSON, compressed binary, or even raw uncompressed binary.
The Server: This is responsible for putting these things together and managing them.
Thrift allows you to mix these different parts in unique ways to create something suitable to your purpose. Thrift is still very much server-client oriented though.
To develop an API in Thrift would mean that you could theoretically have plugins in any language. The main software component would launch the sub-process and use stdin/stdout as a Transport. This would allow it to make RPC calls regardless of language.
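That subprocess-pipe idea can be sketched in plain Go without Thrift's generated code; the ./plugin binary and the line-delimited JSON framing are placeholders for whatever Thrift IDL/protocol you would actually generate:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Launch a plugin written in any language; talk to it over stdin/stdout.
	cmd := exec.Command("./plugin") // hypothetical plugin binary
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// One line-delimited request; Thrift would instead frame this with its
	// Transport/Protocol layers (e.g. binary or JSON protocol).
	fmt.Fprintln(stdin, `{"method":"ping","params":[]}`)
	reply, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(reply)
}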

Reliability of Spring Integration ESB

How can the reliability of message transmission be protected in Spring Integration?
For example, what happens if the server crashes while messages are being transformed in a router, or if messages fail while being processed in a splitter or transformer?
How does the mechanism handle those situations? Are there any references or documents?
Any help will be appreciated!
Also, if your entry point is a channel adapter or gateway that supports transactions (e.g. JMS, AMQP, JDBC, JPA, ...) and you use default channels, the entire flow will take place within the scope of that transaction, as the transaction context is bound to the thread. If you add any buffering channels or a downstream aggregator, then you would want to consider what Gary mentioned, so that you are actually completing the initial transaction by handing responsibility to another reliable resource (as opposed to leaving a Message in an in-memory Map and then committing, for example).
Hope that makes sense.
Shameless plug: there's a good overview of transactions within the Spring Integration in Action book, available now through MEAP: http://manning.com/fisher/
Regards,
Mark
By default, messages are held in memory, but you can declare channels to be persistent as needed. Persistent channels use JMS, AMQP (RabbitMQ), or a message store. A number of message stores are provided, including JDBC, MongoDB, and Redis, or you can construct one that uses a technology of your choice.
http://static.springsource.org/spring-integration/docs/2.1.1.RELEASE/reference/html/

Why use XML (SOAP) when JSON is so simple and easy to handle?

Receiving and sending data with JSON is done with simple HTTP requests, whereas with SOAP we need to take care of a lot of things. Parsing XML is also, sometimes, hard. Even Facebook uses JSON in the Graph API. I still wonder why one should still use SOAP. Is there any reason or area where SOAP is still a better option (data format aside)?
Also, in simple client-server apps (like mobile apps connected to a server), can SOAP give any advantage over JSON?
I will be very thankful if someone can list the major/prominent differences between JSON and SOAP, considering the information I have provided (if there are any).
I found the following on advantages of SOAP:
There is one big reason everyone sticks with SOAP instead of using JSON. With every JSON setup, you're always coming up with your own data structure for each project. I don't mean how the data is encoded and passed, but how the data format is defined: the data model.
SOAP has an industry-mature way of specifying that data will be in a certain format: e.g. "Cart is a collection of Products and each Product can have these attributes, etc." A well put together WSDL document really has this nailed. See W3C specification: Web Services Description Language
JSON has similar ways of specifying this data structure — a JavaScript class comes to mind as the most common way of doing this — but a JavaScript class isn't really a data structure used for this purpose in any kind of agnostic, well established, widely used way.
In short, SOAP has a way of specifying the data structure in a maturely formatted document (WSDL). JSON doesn't have a standard way of doing this.
If you are creating a client application and your server implementation is done with SOAP, then you have to use SOAP on the client side.
Also, see: Why use SOAP over JSON and custom data format in an “ENTERPRISE” application? [closed]
Nowadays SOAP is complete overkill, IMHO. It was nice to use, nice to learn, and it is beautiful that we can use JSON now.
The only difference between SOAP and REST services (no matter whether using JSON) is that a SOAP WS always has its own WSDL document, which can easily be transformed into self-descriptive documentation, whereas with REST you have to write the documentation yourself (at least to document the data structures). Here are my pros and cons for both:
REST
Pros
lightweight (in all senses: no server- or client-side extensions needed, no big chunks of XML need to be transferred back and forth)
free choice of the data format: it is up to you to decide whether to use plain TXT, JSON, XML, or even create your own data format
most current data formats (and even XML, if used) ensure that only the really required amount of data is transferred over HTTP, while with SOAP you need 1 kB of XML junk for 5 bytes of data (exaggerated, of course, but you get the point)
Cons
even though there are tools that can generate documentation from docblock comments, such comments need to be written very descriptively if one wants to achieve good documentation
SOAP
Pros
has a WSDL that can be generated even from basic docblock comments (in many languages even without them) and that works well as documentation
there are even tools that can work with the WSDL to provide an enhanced "try this request" interface (while I do not know of any such tool for REST)
strict data structure
Cons
strict data structure
uses XML (only!) for data transfers, where each request contains a lot of junk and the response contains five times more junk information
the need for external libraries (for client and/or server; though nowadays such libraries are already a native part of many languages, people still tend to use third-party ones)
To conclude, I do not see a big reason to prefer SOAP over REST (and JSON). Both can do the same; there is native support for JSON encoding and decoding in almost every popular web programming language, and with JSON you have more freedom and the HTTP transfers are cleansed of a lot of useless junk. If I were to build an API now, I would use REST with JSON.
I disagree a bit with the pro-JSON trend I see here. Although JSON is an order of magnitude easier, I'd venture to say it's quite limited. For example, a SOAP WS is not the whole story. Indeed, between a SOAP client and server you now have enterprise service buses, crypto-based authentication schemes, user management, timestamping of requests/replies, etc. For all of this, there are some huge software platforms that provide services around SOAP (well, "web services") and will inject stuff into your XML.

So although JSON is probably enough for small projects, and an order of magnitude easier there, I think it becomes quite limited if you have decoupled transmission control and content (i.e. you develop the content stuff, the actual server, but all the transmission is managed by another team, the authentication by one more team, and deployment by yet another team). I don't know if my experience at a big corporation is relevant, but I'd say that JSON won't survive there. There are too many constraints on top of the basic need of data representation. So the problem is not JSON-RPC itself; the problem is that it misses the additional tools to manage the complexity that arises in complex applications (not to say that what you do is not complex; it's just that software reflects the complexity of the company that produces it).
I think there is a lot of basic misinformation on this thread. SOAP, REST, XML, and JSON concepts seem to be mixed up in the responses.
Here is some clarification:
XML and JSON (and others) are encodings of information.
SOAP is a communications protocol.
REST is an architectural style.
Each is used for something different, although you might use more than one of these things together.
Let's start with encoding data structures as XML vs JSON:
Everything JSON currently supports can be done in XML, but not the other way around. JSON will eventually adopt all the features that XML has, but its proponents haven't encountered all of the problems yet; once they get more experience, things will be added to close the gap. For example, JSON didn't start out with schemas and binary formats.
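To make the "encodings of the same information" point concrete, here is a stdlib Go sketch serializing one made-up struct both ways:

package main

import (
	"encoding/json"
	"encoding/xml"
	"fmt"
)

type Product struct {
	Name  string  `json:"name" xml:"name"`
	Price float64 `json:"price" xml:"price"`
}

func main() {
	p := Product{Name: "Widget", Price: 9.99}
	j, _ := json.Marshal(p) // same data, JSON encoding
	x, _ := xml.Marshal(p)  // same data, XML encoding
	fmt.Println(string(j))  // {"name":"Widget","price":9.99}
	fmt.Println(string(x))  // <Product><name>Widget</name><price>9.99</price></Product>
}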
SOAP is a communication protocol for calling an operation. It runs on top of things like HTTP, SMTP, etc. Aside from many other features, SOAP messages can span multiple "application"-layer protocols, i.e. I can send a SOAP message by HTTP to a service endpoint which then puts it on a message queue for another system. SOAP solves the problem of maintaining authentication, message authenticity, etc. as the request moves between different parts of a distributed system.
JSON and other data formats can be sent via SOAP. I work with some systems that send binary fixed-width encoded objects via SOAP; it's not a problem.
The analogy is: if only the postman is allowed to send you a letter, then it is just HTTP, but if anyone can send you a letter, then you want SOAP (i.e. message transport security vs message content security).
The six REST constraints are an architectural style. Interestingly, for the first several years of REST, the examples were in SOAP. (There is no such thing as "REST vs SOAP"; they are not opposites.)
A "heavyweight, bloated, etc., etc." SOA SOAP system might have monoliths with operations like GET, PUT, and POST on instances of a single entity. SOAP doesn't have those operations predefined, but that is typically how it is used.
Consider that if you built a "REST" service on HTTP alone with an SSL/TLS terminating proxy, then you may have violated the 4th constraint of REST.
So for your software development today, you wouldn't normally interact with any of these directly, just as, if you were writing a graphics program, you wouldn't typically work with HDMI vs. DisplayPort directly.
The question is: do you understand architecturally what your system needs to do, and can you configure it to use the mechanism that does that job? (For example, many of the challenges of applying today's microservices to general systems are old problems previously solved by SOAP, CORBA, and the older protocols.)
I have spent several years writing SOAP web services (with JAX-WS). They are not hard to write. And I love the idea of a single endpoint and a single HTTP method (POST). For me, REST is too verbose.
But as a data container, JSON is simpler, smaller, more readable, more flexible, and looks closer to programming languages.
So, I reinvented the wheel and created my own approach to writing backends for AJAX requests. In comparison:
REST:
get user: method GET https://example.com/users/{id}
update user: method POST https://example.com/users/ (JSON with User object in request body)
RPC:
get user: method GET https://example.com/getUser?id=1
update user: method POST https://example.com/updateUser (JSON with User object in the request body)
My way (the proposed name is JOH - JSON over HTTP):
get user: method POST https://example.com/ (JSON specifies both user ID and class/method responsible for handling request)
update user: method POST https://example.com/ (JSON specifies both user object and class/method responsible for handling request)
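A sketch of what such a single-endpoint dispatcher could look like in Go; the class/method/params envelope paraphrases the idea above and is not an established standard:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type johRequest struct {
	Class  string          `json:"class"`
	Method string          `json:"method"`
	Params json.RawMessage `json:"params"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	var req johRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Dispatch on class/method carried in the body, not on the URL.
	switch req.Class + "." + req.Method {
	case "User.get":
		var p struct {
			ID int `json:"id"`
		}
		json.Unmarshal(req.Params, &p)
		json.NewEncoder(w).Encode(map[string]interface{}{"id": p.ID, "name": "Alice"})
	case "User.update":
		// Decode the user object from req.Params and persist it here.
		json.NewEncoder(w).Encode(map[string]bool{"ok": true})
	default:
		http.Error(w, "unknown method", http.StatusNotFound)
	}
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Note that this is essentially JSON-RPC reinvented with different field names, which is also why the JSON-RPC answer earlier in this thread applies here.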