How can we access the RLP-encoded signed raw transaction in Solidity?

I know how to create a raw transaction for Ethereum because there are many articles like this.
But I don't know how to access the raw transactions in Solidity.
The msg object gives us msg.data, msg.sender, msg.sig, and msg.value.
These parameters are convenient, but I want the RLP-encoded signed raw transaction.
How can I access the raw data?
Is there any global variable like tx.raw?
Or is it impossible?
Thanks in advance.

To my knowledge, the raw transaction is not exposed to Solidity.
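While the raw transaction isn't visible inside Solidity, the RLP format itself is simple enough to sketch off-chain. Below is a minimal, illustrative encoder covering only byte strings and nested lists; it is not a transaction signer, and real code should use a maintained library such as ethereumjs:

```javascript
// Minimal RLP encoder sketch (byte strings and nested lists only).

function encodeLength(len, offset) {
  // Short form: a single prefix byte encodes the payload length.
  if (len < 56) return Buffer.from([offset + len]);
  // Long form: prefix byte encodes the length-of-the-length,
  // followed by the big-endian length itself.
  const hex = len.toString(16);
  const lenBytes = Buffer.from(hex.length % 2 ? '0' + hex : hex, 'hex');
  return Buffer.concat([Buffer.from([offset + 55 + lenBytes.length]), lenBytes]);
}

function rlpEncode(input) {
  if (Array.isArray(input)) {
    // Lists: concatenate the encodings of the items, then prefix with 0xc0+.
    const body = Buffer.concat(input.map(rlpEncode));
    return Buffer.concat([encodeLength(body.length, 0xc0), body]);
  }
  const buf = Buffer.isBuffer(input) ? input : Buffer.from(input);
  // A single byte below 0x80 is its own encoding.
  if (buf.length === 1 && buf[0] < 0x80) return buf;
  // Other strings are prefixed with 0x80+.
  return Buffer.concat([encodeLength(buf.length, 0x80), buf]);
}
```

A signed transaction is just such a list (nonce, gas price, gas limit, to, value, data, v, r, s) run through this encoding, which is why it can be produced off-chain but not inspected from within a contract.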

Related

How to store data in the Ethereum blockchain without gas

I want to create an app that stores some data/text in the Ethereum blockchain. I know it's possible with smart contracts, but is there a way to store data without paying gas?
Thanks!
No, you have to pay gas, because this is essentially data that every person running a node must store on their computer.

Angular: downloading data and storing it in a factory

I have been working with Angular for some time now. My question is simple: I have a database with multiple tables. There is a clients table and around 7 or 8 other tables that contain information about that client that I need. None of the data from these tables is terribly large. In order to reduce HTTP calls, my thought was to load all of the tables and store the data from each in an object kept in a factory.
So once a particular client is requested, the HTTP requests are made for each table and the results are stored inside the factory. Then, when a user needs to access a table, its data is already in memory, as the HTTP calls were completed at the outset. When the data is changed, the app can quickly save the table data and reload it again.
Most of the data is financial, containing information about the income and asset categories of the client.
The question is: is this wise? Am I missing something?
Thanks in advance
Your use of the term "factory" is inappropriate, as a factory is a creational pattern. What you are describing is a facade. It is reasonable for a facade to aggregate data for a client and present it in a unified manner.
So, a remote client requests some data. The server-side facade makes the many requests on behalf of the client and composes the single response.
You have mentioned caching the data. If you choose to do so, you will need to consider how to manage the cached data for staleness, how much memory you will need, etc.
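The staleness concern can be sketched as a small time-to-live cache; the class and method names below are hypothetical, not Angular APIs:

```javascript
// Sketch of a client-side cache with staleness handling.
// Each entry remembers when it was fetched; reads older than
// maxAgeMs are treated as stale and trigger a refetch.

class ClientDataCache {
  constructor(fetchFn, maxAgeMs) {
    this.fetchFn = fetchFn;   // e.g. wraps an $http/fetch call per table
    this.maxAgeMs = maxAgeMs;
    this.entries = new Map(); // key -> { data, fetchedAt }
  }

  async get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (entry && now - entry.fetchedAt < this.maxAgeMs) {
      return entry.data;                    // fresh: serve from memory
    }
    const data = await this.fetchFn(key);   // stale or missing: refetch
    this.entries.set(key, { data, fetchedAt: now });
    return data;
  }

  invalidate(key) {
    this.entries.delete(key);               // call after saving changes
  }
}
```

Calling invalidate after a save, as described above, forces the next read to reload that table's data.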

Redis and MongoDB: how should I store large JSON objects? (performance issue)

I am currently developing a Node.js app. It has a MySQL database server, which I use to store all of the app's data. However, I find myself storing a lot of data that pertains to the user in session storage. I have been doing this by using express-session to store the contents of my User class; however, these User objects can be quite large. I was thinking about writing a middleware that saves the User class as JSON to either Redis or MongoDB and stores the key to that record in the session cookie. When I retrieve the JSON from Redis or MongoDB, I will then parse it and use it to reconstruct my User class.
My question is: which method would perform faster and scale better, storing JSON strings in Redis or storing a Mongo document representation of my User class in MongoDB? Thanks!
EDIT: I am planning to use MongoDB in another part of the app to solve a different issue. Also, will parsing the JSON from Redis be more time-consuming and memory-intensive than parsing from Mongo? At what recurring user count would in-memory server sessions become a problem?
express-session has various session store options to save the session data to.
AFAIK, these all work through the same principle: they serialize the session object to a JSON string, and store that string in the store (using the session id as the key).
In other words, your idea of storing the user data as a JSON string in either Redis or MongoDB using a second key is basically exactly the same as what express-session does when using the Redis or MongoDB stores. So I wouldn't expect any performance benefits from that.
Another option would be to store the user data as a proper MongoDB document (not serialized to a JSON string). Under the hood this would still require (de)serialization, although from and to BSON rather than JSON. I have never benchmarked which of the two is faster, but my guess is that JSON might be a tad quicker.
There's also a difference between Redis and MongoDB, in that Redis is primarily in-memory and more lightweight. However, MongoDB is more of a "real" database that allows for more elaborate queries and has more options in terms of scalability.
Since it seems to me that you're only storing transient data in your sessions (as the actual data is stored on MySQL), I would suggest the following:
use the Redis session store if the total amount of data you're storing in the sessions will fit in memory;
use the MongoDB session store if not.
tj/connect-redis in conjunction with express-session does the job well! Redis is incredibly fast with JSON and is superb for handling sessions.
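The principle described above (serialize the session object to a JSON string, keyed by the session id) can be sketched like this; JsonSessionStore and the sample User class are hypothetical illustrations, not the express-session API:

```javascript
// Sketch of a session store that keeps each session as a JSON string
// under its session id. A real store would call into Redis or MongoDB
// instead of an in-process Map.

class JsonSessionStore {
  constructor() {
    this.data = new Map(); // sessionId -> JSON string
  }

  set(sessionId, session) {
    // Serialize the whole session object, user data included.
    this.data.set(sessionId, JSON.stringify(session));
  }

  get(sessionId) {
    const raw = this.data.get(sessionId);
    return raw === undefined ? null : JSON.parse(raw);
  }
}

// Example user type with behavior, not just data.
class User {
  constructor(name = '', roles = []) { this.name = name; this.roles = roles; }
  isAdmin() { return this.roles.includes('admin'); }
}

const store = new JsonSessionStore();
store.set('sid123', new User('alice', ['admin']));

// JSON.parse yields a plain object, so the class must be reconstructed
// by hand, as the question anticipates:
const revived = Object.assign(new User(), store.get('sid123'));
```

The reconstruction step at the end is the (de)serialization cost mentioned above; it is paid regardless of whether the string lives in Redis or MongoDB.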

Compress the data passed in JSON

I don't know whether I am asking a valid question.
My problem is that I want to send large data from one application to another. I am using JSONP to pass the data to a handler file, which stores it in a database. As the data is large, I am dividing it into chunks and passing the packets in a loop; the more packets there are, the longer it takes to pass the complete data, which ultimately results in a performance issue. (FYI, my web server is a bit slow.)
Is there any way I can compress my data and send it all at once rather than in packets?
OR
Is there any other way to pass my large data from one application to another?
Need this ASAP.
Thanks in advance.

What is a "serialized" object in programming? [duplicate]

I've seen the term "serialized" all over, but never seen it explained. Please explain what it means.
Serialization usually refers to the process of converting an abstract datatype to a stream of bytes. (You sometimes serialize to text, XML, CSV, or other formats as well; the important thing is that it is a simple format that can be read and written without understanding the abstract objects the data represents.) When saving data to a file or transmitting it over a network, you can't just store a MyClass object; you can only store bytes. So you need to take all the data necessary to reconstruct your object and turn it into a sequence of bytes that can be written to the destination device, and at some later point read back and deserialized, reconstructing your object.
Serialization is the process of taking an object instance and converting it to a format in which it can be transported across a network or persisted to storage (such as a file or database). The serialized format contains the object's state information.
Deserialization is the reverse process: using the serialized state to reconstruct the object in its original state.
A really simple explanation: serialization is the act of taking something that is in memory, like an instance of a class (an object), and transforming it into a structure suitable for transport or storage.
A common example is XML serialization for use in web services: I have an instance of a class on the server and need to send it over the web to you. I first serialize it into XML, which means creating an XML version of the data in the class; once it is in XML, I can use a transport like HTTP to easily send it.
There are several forms of serialization like XML or JSON.
There are (at least) two entirely different meanings to serialization. One is turning a data structure in memory into a stream of bits, so it can be written to disk and reconstituted later, or transmitted over a network connection and used on another machine, etc.
The other meaning relates to serial vs. parallel execution -- i.e. ensuring that only one thread of execution does something at a time. For example, if you're going to read, modify and write a variable, you need to ensure that one thread completes a read, modify, write sequence before another can start it.
What they said. The word "serial" refers to the fact that the data bytes must be put into some standardized order to be written to a serial storage device, like a file output stream or a serial bus. In practice, the raw bytes seldom suffice. For example, a memory address from the program that serializes the data structure may be invalid in the program that reconstructs the object from the stored data, so a protocol is required. There have been many, many standards and implementations over the years; I remember one from the mid-'80s called XDR, but it was not the first.
You have data in a certain format (e.g. list, map, object, etc.)
You want to transport that data (e.g. via an API or function call)
The means of transport only supports certain data types (e.g. JSON, XML, etc.)
Serialization: You convert your existing data to a supported data type so it can be transported.
The key is that you need to transport data and the means by which you transport only allows certain formats. Your current data format is not allowed so you must "serialize" it. Hence as Mitch answered:
Serialization is the process of taking an object instance and converting it to a format in which it can be transported.
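The steps above can be sketched concretely: a JavaScript Map is not a JSON-supported type, so it must be converted to one (here, an array of key/value pairs) before transport:

```javascript
// Sketch: your data is a Map, the transport only supports JSON,
// so you "serialize" by converting to a JSON-friendly shape first.

function serializeMap(map) {
  // Map -> array of [key, value] pairs -> JSON string.
  return JSON.stringify(Array.from(map.entries()));
}

function deserializeMap(json) {
  // JSON string -> array of pairs -> Map again.
  return new Map(JSON.parse(json));
}

const scores = new Map([['alice', 10], ['bob', 7]]);
const wire = serializeMap(scores);     // safe to send over an API
const restored = deserializeMap(wire); // reconstructed on the other side
```

The same round-trip shape (convert, transport, convert back) applies whatever the unsupported type and the wire format happen to be.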