Get details of a pending transaction using its tx hash on BSC - ethereum

I'm trying to get the details of a pending transaction using its tx hash. I tried using web3.eth.getTransaction(transactionHash), but it seems to return null for pending transactions while working fine for finished transactions. Is there any other way to get the details of a pending transaction? I opened BscScan and it was able to show the from and to addresses, with the status shown as "indexing".
web3.eth.getTransaction(transactionHash).then(console.log); //-> Prints null
I tried multiple full nodes to check whether it was a node issue (https://bsc-dataseed1.binance.org:443, https://bsc-ws-node.nariox.org:443).
To add more context, I'm trying to do what's mentioned in this answer

I realised that the problem was with the nodes I was using. I self-hosted a node and the problem went away.
So, if you are also facing this issue, maybe try using a self-hosted node.
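For illustration, a minimal sketch of that check against a self-hosted node (web3.js 1.x; the endpoint URL and hash are placeholders):

const Web3 = require('web3');

// Point web3 at your own node; the public endpoints above often won't return pending transactions.
const web3 = new Web3('http://127.0.0.1:8545'); // placeholder URL for a self-hosted BSC node

const txHash = '0x...'; // placeholder transaction hash

web3.eth.getTransaction(txHash).then((tx) => {
  if (tx === null) {
    console.log('Not known to this node (still propagating, or dropped)');
  } else if (tx.blockNumber === null) {
    console.log('Pending:', tx.from, '->', tx.to, 'value:', tx.value);
  } else {
    console.log('Mined in block', tx.blockNumber);
  }
});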

Related

Read-after-write consistency with MySQL and multiple concurrent connections

I'm trying to understand whether it is possible to achieve the following:
I have multiple instances of an application server running behind a round-robin load balancer. The client expects GET-after-POST/PUT semantics; in particular, the client will make a POST request, wait for the response, and immediately make a GET request expecting the response to reflect the change made by the POST request, e.g.:
> Request: POST /some/endpoint
< Response: 201 CREATED
< Location: /some/endpoint/123
> Request: GET /some/endpoint/123
< Response must not be 404 Not Found
It is not guaranteed that both requests are handled by the same application server. Each application server has a pool of connections to the DB. Each request will commit a transaction before responding to the client.
Thus, on one connection the database will see an INSERT statement followed by a COMMIT. On another connection, it will see a SELECT statement. Temporally, the SELECT will be strictly after the COMMIT, although possibly by only a tiny delay on the order of milliseconds.
The application server I have in mind uses Java, Spring, and Hibernate. The database is MySQL 5.7.11 managed by Amazon RDS in a multiple availability zone setup.
I'm trying to understand whether this behavior can be achieved, and if so, how. There is a similar question, but the answer suggesting locking the table does not seem right for an application that must handle concurrent requests.
Under ordinary circumstances, you will not have any issue with this sequence of requests, since your MySQL will have committed the changes to the database by the time the 201 response has been sent back. Therefore, any subsequent statements will see the created / updated record.
What could be the extraordinary circumstances under which the subsequent select will not find the updated / inserted record?
Another process commits an UPDATE or DELETE statement that changes or removes the given record. There is not much you can do about this, since it is part of normal operation. If you do not want such a thing to happen, then you have to implement application-level locking of the data.
The subsequent GET request is routed not only to a different application server, but to one that uses (or is forced to use) a different database instance, which does not have the most up-to-date state of that record. I would envisage this happening only if there is a severe failure at the application or database server level, or if routing of the request goes really wrong (routed to a data center at a different geographical location). These should not happen too frequently.
If you're using MyISAM tables, you might be seeing the effects of 'concurrent inserts' (see section 8.11.3 in the MySQL manual). You can avoid them either by setting the concurrent_insert system variable to 0, or by using the HIGH_PRIORITY keyword on the INSERT.
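For reference, the two MyISAM-specific workarounds mentioned above look roughly like this (table and column names are illustrative):

-- Disable concurrent inserts server-wide (MyISAM only)
SET GLOBAL concurrent_insert = 0;

-- Or disable them for a single statement
INSERT HIGH_PRIORITY INTO some_table (col_a, col_b) VALUES ('a', 'b');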

Old data in result from new Hibernate session

I am facing an issue where a process receives old data from a newly-opened Hibernate session, even though that data was updated by another process through a committed transaction, with the session closed afterwards. If I connect to the database directly, I can see the updated data. The Hibernate version is 4.1.9 and the database is MySQL 5.6.
After reading a lot of similar problems, I made sure it is caused neither by the first-level cache (a new session is opened each time) nor by the second-level cache (it is not enabled). I also tried different options, like flushing the session after the transaction, etc., but Hibernate was still returning old data instead of the updated data. Then I found a quite old post on the Hibernate forum describing the same problem, which was fixed by setting hibernate.connection.isolation to 2 (TRANSACTION_READ_COMMITTED). I gave that approach a try and it works for me as well.
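For reference, a minimal way to apply that setting (assuming a hibernate.cfg.xml-based configuration; the same key works as a line in hibernate.properties):

<!-- 2 == java.sql.Connection.TRANSACTION_READ_COMMITTED -->
<property name="hibernate.connection.isolation">2</property>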
So the actual question is: why would such a change be required for Hibernate and MySQL with the default configuration? Could someone please explain?
P.S. I spent hours solving this problem but didn't find this solution on SO, so maybe it will save time for other people in the same situation.
P.P.S. While posting this question I finally found another SO post with the same problem and solution, but there are also many other similar questions without replies, so maybe one more post will improve the chances of finding an explanation.
Link to the post at hibernate forum
Link to the similar post at SO

Will IIS ever terminate the thread if a POST gets canceled by the browser [duplicate]

Environment:
Windows Server 2003 - IIS 6.x
ASP.NET 3.5 (C#)
IE 7,8,9
FF (whatever the latest 10 versions are)
User Scenario:
User enters search criteria against large data-set. After initiating the request, they are navigated to a results page, where they wait until the data is loaded and can then refine the data.
Technical Scenario:
After the user sends the search criteria (via an AJAX call), the UI calls a back-end service. The back-end service queries the transactional system(s) and puts the resulting data into a DB "cache" - a denormalized table set up for further refining of the data (i.e. sorting, filtering). The UI waits until the data is cached and then, upon being notified that the process is done, navigates to a results page. The results page then makes a call to get the data from the denormalized table.
Problem:
The search is relatively slow (15-25 seconds) for large queries that end up having to query many systems based on the criteria entered. It is relatively fast for other queries ( <4 seconds).
Technical Constraints:
We cannot entirely re-architect this search/results system. There are way too many complexities in how the UI and the back end are tied together. The page is required (because of constraints that cannot be solved on StackOverflow) to transition after submitting the search criteria.
We also cannot ask the organization to denormalize the data prior to searching, because the data has to be real-time, i.e. if a user makes a change in other systems, the data has to show up correctly if they do a search afterwards.
Process that I want to follow:
I want to cheat a little. I want to issue the "Cache" request via an async HttpHandler in a fire-and-forget model.
After issuing the query, I want to transition the page to the resulting page.
On the transition page, I want to poll the "Cache" table to see if the data has been inserted into it yet.
The reason I want to do this transition right away is that the results page is expensive on its own (even without getting the data) - still 2 seconds of load time before we even get to calling the service that fetches the data from the cache.
Question:
Will the ASP.NET thread that is called via the async handler reliably continue processing even if I navigate away from the page using a javascript redirect?
Technical Boundaries 2:
Yes, I know... This search process does not sound efficient. There is nothing I can do about that right now. I am trying to do whatever I can to get it to perform a little better while we continue researching how we are going to re-architect it.
If your answer is to: "Throw it away and start over", please do not answer. That is not acceptable.
Yes.
There is the property Response.IsClientConnected, which is used to know whether the client of a long-running process is still connected. The reason this property exists is that a process will continue running even if the client becomes disconnected; the disconnect must be detected via this property and the process shut down manually if a premature disconnect occurs. By default, a running process is not discontinued when the client disconnects.
Reference to this property: http://msdn.microsoft.com/en-us/library/system.web.httpresponse.isclientconnected.aspx
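A minimal sketch of how that check might look in a handler (DoCacheStep and TotalSteps are illustrative placeholders, not from the question):

using System.Web;

// Performs the caching work in steps and bails out if the browser disconnects.
public class SearchCacheHandler : IHttpHandler
{
    private const int TotalSteps = 100;

    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        for (int step = 0; step < TotalSteps; step++)
        {
            // Without this check the work keeps running after the browser
            // cancels the POST or navigates away; it is not stopped by default.
            if (!context.Response.IsClientConnected)
            {
                return; // shut the long-running work down manually
            }
            DoCacheStep(step);
        }
    }

    private void DoCacheStep(int step)
    {
        // query the transactional systems and write into the denormalized cache table
    }
}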
Update
FYI, this is a very bad property to rely on these days with sockets. I strongly encourage you to take an approach that quickly completes the request, recording the long-running task in some database or queue (probably RabbitMQ or something like that), which in turn uses socket.io or similar to update the web page or app once the task completes.
How about not doing the async operation on an ASP.NET thread at all? Let the ASP.NET code call a service to queue the data search, then return a token from the service to the browser, which then redirects to the result page that awaits the completed result. The result page polls using the token from the service.
That way, you won't have to worry about whether or not ASP.NET will somehow learn that the browser has moved to a different page.
Another option is to use Threading (System.Threading).
When the user sends the search criteria, the server begins processing the page request, creates a new Thread responsible for executing the search, and finishes the response, getting back to the browser and redirecting to the results page while the thread continues to execute in the background on the server.
The results page would keep checking with the server whether the query execution has finished, as the started Thread shares its progress information. When it does finish, the results are returned on the next AJAX call made by the results page.
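A rough sketch of that idea, under the assumption of an in-process static progress store keyed by a token (the names and the store itself are illustrative, not from the original answer; a background thread like this also dies if the app pool recycles mid-search):

using System;
using System.Collections.Generic;
using System.Threading;

public static class BackgroundSearch
{
    // Progress shared between the request that starts the search and the results page that polls.
    private static readonly Dictionary<Guid, bool> Done = new Dictionary<Guid, bool>();

    // Kick off the long-running search on a background thread and return a token immediately.
    public static Guid Start(Action runSearch)
    {
        Guid token = Guid.NewGuid();
        lock (Done) { Done[token] = false; }

        Thread worker = new Thread(() =>
        {
            runSearch();                            // long-running query + fill of the cache table
            lock (Done) { Done[token] = true; }
        });
        worker.IsBackground = true;
        worker.Start();

        return token;                               // the results page polls the server with this token
    }

    // Called by the results page's polling endpoint.
    public static bool IsDone(Guid token)
    {
        lock (Done) { return Done.ContainsKey(token) && Done[token]; }
    }
}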
Using WebSockets could also be considered, in the sense that the web server itself could tell the browser when it is done processing the query execution, since WebSockets offer full-duplex communication channels.

Persistent MySQL connection, regardless of users connected?

I want to have ONE single MySQL connection, used by EVERY user, that selects the data all the time and updates it if specific conditions are met (like a placed bid). Preferably this should keep running even when no user is visiting the website, if that's even possible.
Over the last few days I've been googling constantly, trying hard to figure out how to solve my issue, but it seems there is nobody with enough knowledge to help me with my problem. So I'll try to ask my question as simply as possible without confusing you with my code. (But if you're interested in seeing the code: http://pastebin.com/dRFzWtEH)
Anyway, this is all about an auction website with a live countdown timer, and I just want to run a node.js server that SELECTs data every second and sends it over a WebSocket to show all users visiting the website the countdown and price updates (on bids) in real time.
I accomplished this whole task using single MySQL queries, but then I ran into errors. The author of the node-mysql module on GitHub then suggested that I use a MySQL pool. But there is practically nothing to be found about the specific aim stated in the first sentence of this question.
So now I want to ask in general: how could I accomplish this, and is it even possible, or does at least one user have to be on my website?
What would the code/code-structure/logical process look like?
And I guess I don't need to close the connection at all, so I won't need functions like connection.end()?
No, don't worry about connection pooling. It is not a big deal in MySQL.
Furthermore a "pool" has a problem -- it must clear out all settings, #variables, transaction state, etc, etc, before allowing the next 'client' to use the pooled connection. This can take time, especially if the client is far from the server.
MySQL's connection/disconnection time is very low, unlike competing products.
If you are developing a Web product, then keep in mind that HTTP is "stateless". That is, you cannot hang onto a connection from one 'page' to the next 'page'. Hence, no 'state' can be saved.
Edit
If you have "Across the pond" latency problems (100-200ms between US and Europe), client-side connection pool could be very useful. However, if the pool software is injecting commands to reset things, that could totally defeat the pooling.
If you can turn on the 'general log' (in a hosted service, you may have to use log_output=TABLE), do so to see what extra commands are injected.
Also, consider combining multiple client SQL statements into Stored Procedures to cut down on back-and-forth.
Also consider either moving the MySQL server closer to the client, or moving the client closer to the MySQL server, depending on how the end-user-to-client back-and-forth compares to the client-to-MySQL traffic.
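As for the code structure the question asks about, a minimal sketch might look like this: one long-lived Node.js process that polls MySQL every second and broadcasts to all WebSocket clients, regardless of whether any user is on the site. It uses the pool the node-mysql author suggested; the ws package, table, and column names are illustrative assumptions:

const mysql = require('mysql');          // node-mysql
const WebSocket = require('ws');

// One pool for the whole process; connections are reused, never opened per visitor.
const pool = mysql.createPool({
  connectionLimit: 5,
  host: 'localhost',
  user: 'auction',
  password: 'secret',
  database: 'auction'
});

const wss = new WebSocket.Server({ port: 8080 });

// Runs every second regardless of how many (or whether any) users are connected.
setInterval(() => {
  pool.query('SELECT id, price, ends_at FROM auctions WHERE ends_at > NOW()', (err, rows) => {
    if (err) return console.error(err);
    const payload = JSON.stringify(rows);
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) client.send(payload);
    });
  });
}, 1000);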

Rails debugging in production environment

I'm creating a Twitter application, and every time a user refreshes the page it reloads the newest messages from Twitter and saves them to the local database, unless they have already been created before. This works well in the development environment (database: SQLite3), but in the production environment (MySQL) it always creates the messages again, even though they have already been created.
Message creation is checked by the twitter_id that each message has:
msg = Message.find_by_twitter_id(message_hash['id'].to_i)
if msg.nil?
  # creates new message from message_hash (and possibly new user too)
end
msg.save
Apparently, in the production environment it's unable to find the messages by Twitter ID for some reason (when I look at the database, all the attributes have been saved correctly).
With this long introduction, I guess my main question is: how do I debug this? (Unless you already have an answer to the main problem, of course. :) When I look in production.log, it only shows something like:
Processing MainPageController#feeds (for 91.154.7.200 at 2010-01-16 14:35:36) [GET]
Rendering template within layouts/application
Rendering main_page/feeds
Completed in 9774ms (View: 164, DB: 874) | 200 OK [http://www.tweets.vidious.net/]
...but not the database requests, logger.debug texts, or anything that could help me find the problem.
You can change the log level in production by setting it in config/environments/production.rb:
config.log_level = :debug
That will log the SQL and everything else you are used to seeing in dev. It will slow down the app a bit and your logs will be large, so use it judiciously.
But as to the actual problem behind the question...
Could it be because of multiple connections accessing mysql?
If the twitter entries have not yet been committed, then a query for them from another connection will not return them, so if your query for them is called before the commit, then you won't find them, and will instead insert the same entries again. This is much more likely to happen in a production environment with many users than with you alone testing on sqlite.
Since you are using MySQL, you could use a unique key on the Twitter ID to prevent dupes, then catch the ActiveRecord exception if you try to insert a dupe. But this means handling an error, which is not a pretty way to handle this (though I recommend doing it as a backup means of preventing dupes - MySQL is good at this; use it).
You should also prevent the attempt to insert the dupes. One way is to use a lock on a common record, say the User record which all the tweets are related to, so that another process cannot try to add tweets for the user until it can get that lock (which you will only free once the transaction is done), and so prevent simultaneous commits of the same info.
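A rough sketch of the unique-key-plus-rescue idea (it assumes a Rails version that raises ActiveRecord::RecordNotUnique on duplicate-key errors; the :body attribute and message_hash['text'] are illustrative):

# Migration: enforce uniqueness at the database level
add_index :messages, :twitter_id, :unique => true

# When saving, treat a duplicate-key error as "another process inserted it first"
begin
  msg = Message.create!(:twitter_id => message_hash['id'].to_i,
                        :body => message_hash['text'])
rescue ActiveRecord::RecordNotUnique
  msg = Message.find_by_twitter_id(message_hash['id'].to_i)
end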
I ran into a similar issue while saving emails to a database. I agree with Andrew: set the log level to debug for more information on what exactly is happening.
As for the actual problem, you can try adding a unique index to the database that will prevent two items from being saved with the same parameters. This is like the validates_uniqueness validation but at the database level, and it is very effective: MySQL Constrain Database Entries in Rails.
For example, if you wanted no message objects in your database with both a duplicate body of text and a duplicate Twitter ID (which would mean the same person tweeted the same text), then you can add this to your migration:
add_index :messages, [:twitter_id, :body], :unique => true
It takes a small amount of time after you tell an object in Rails to save before it actually gets into the database; that's maybe why the query for the ID doesn't find anything yet.
For your production server, I would recommend setting up Rollbar to report all of the unhandled errors and exceptions on your production servers.
You can also store a bunch of useful information, like the HTTP request, the requesting user, and the code that raised the error, and more, or send email notifications each time an unhandled exception happens on your production server.
Here is a simple article about debugging in rails that could help you out.