I downloaded the Ethereum wallet two days ago and it started to sync blocks. The syncing speed was quite fast in the beginning. Now it's close to the finish point, but the syncing speed has become really slow. I've been waiting all afternoon, but it just can't finish syncing the last 1000 blocks.
What should I do?
The question I posted is not really an issue.
I just waited for an evening and it finally finished.
I had the same problem for months: about 100 blocks left and the sync never finishing. Switching to "Develop" -> "Sync with light client (beta)" seems to have fixed this issue for me. But you'd better back up your data and account files before experimenting with this.
Disclaimer: Since this is declared 'beta', please be cautious and carefully judge whether this is an option for you.
I'm a little worried about my Raspberry Pi's SD-card life.
On the Raspberry Pi there's a MySQL (MariaDB) server running. A program of mine reads from the database every second, then looks something up on the internet, and only rarely, when something happens, writes to the database.
I used to call commit() only once every 5 minutes, but apparently if I don't commit, the program doesn't see changes made by other programs, even though those changes are in tables it doesn't write to.
1) Concerns about a Raspberry Pi SD card's life are all over the internet, so my question is: what's the best way to call commit()?
2) If the program only reads from the database and doesn't change anything, will commit() even access the disk? Is there a way to see the new changes without committing?
3) And if I do have to commit every second in order to see the changes in time, how bad is that?
PS: I'm using Python 3 with mysql-connector, and an 8 GB SD card with the OS the Raspberry Pi Imager recommended to me.
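To make the pattern concrete, here is a minimal sketch of the kind of loop I mean (the connection settings and the events table are placeholders, not my actual schema). With InnoDB's default REPEATABLE READ isolation, a long-running transaction keeps seeing the same snapshot, so committing after each read-only poll ends that snapshot and lets the next SELECT see rows committed by other programs; as far as I understand, a commit that changed nothing shouldn't write anything to the SD card:

```python
import time
import mysql.connector  # pip install mysql-connector-python

# Placeholder credentials and table name, for illustration only.
conn = mysql.connector.connect(
    host="localhost", user="pi", password="secret", database="mydb"
)
cur = conn.cursor()

while True:
    cur.execute("SELECT id, payload FROM events WHERE handled = 0")
    rows = cur.fetchall()

    # End the read-only transaction: this releases the old snapshot so the
    # next SELECT sees changes committed by other programs in the meantime,
    # without (to my knowledge) touching the disk.
    conn.commit()

    for row in rows:
        pass  # look something up on the internet; rarely, write back

    time.sleep(1)
```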
So I guess, since it's just a couple of hundred writes a day, it's totally fine.
But why have you all answered in the comments?
Who am I gonna pick as the best answer?
I have a "standard persistent disk" of size 10GB on Google Cloud using Ubutu 12.04. Whenever, I try to remove this, I encounter following error
The resource 'projects/XXX/zones/us-central1-f/disks/tahir-run-master-340fbaced6a5-d2' is not ready
Does anybody know what's going on? How can I get rid of this disk?
This happened to me recently as well. I deleted an instance but the disk didn't get deleted (despite the auto-delete option being active). Any attempt to manually delete the disk resource via the dev console resulted in the mentioned error.
Additionally, the progress of the associated "Delete disk 'disk-name'" operation was stuck at 0%. (You can review the list of operations for your project by selecting Compute -> Compute Engine -> Operations from the navigation console.)
I figured the disk resource was "not ready" because it was locked by the stuck operation, so I tried deleting the operation itself via the Google Compute Engine API (the dev console doesn't currently let you invoke the delete method on operation resources). Needless to say, deleting the operation proved impossible as well.
In the end, I just waited for the problem to fix itself. The following morning I tried deleting the disk again and it succeeded; apparently the lock had been lifted in the meantime.
As for the cause of the problem, I'm still clueless. It looks like the delete operation got stuck for whatever reason (probably related to some issue or race condition in the data center's hardware/software infrastructure).
This probably isn't a valid answer by SO's standards, but I felt like sharing my experience anyway, as I had a really hard time finding any info about this kind of Google Compute Engine problem.
If you ever hit the same or a similar issue, you can try waiting it out: any stuck operation will (most likely) eventually be cancelled after it has been in the PENDING state for too long, releasing any locked resources in the process.
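If you'd rather watch for that from a script than keep refreshing the console, a rough sketch like the one below should do it. It assumes the google-api-python-client library with application default credentials already set up; the project, zone and disk names are placeholders:

```python
import time
from googleapiclient import discovery  # pip install google-api-python-client

# Placeholder identifiers -- substitute your own.
PROJECT, ZONE, DISK = "my-project", "us-central1-f", "my-stuck-disk"

compute = discovery.build("compute", "v1")

# Poll until no operation targeting the disk is still PENDING or RUNNING,
# then retry the delete once the lock should be gone.
while True:
    items = compute.zoneOperations().list(
        project=PROJECT, zone=ZONE
    ).execute().get("items", [])
    busy = [op for op in items
            if op["status"] != "DONE" and op.get("targetLink", "").endswith(DISK)]
    if not busy:
        break
    time.sleep(60)

compute.disks().delete(project=PROJECT, zone=ZONE, disk=DISK).execute()
```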
Alternatively, if you need to solve the issue ASAP (which is often the case if the issue affects a resource critical to your production environment), you can try:
Contacting Google Support directly (only available to paid support customers)
Posting in the Google Compute Engine discussion group
Sending an email to gc-team(at)google.com to report a production issue
I believe your issue is the same as the one that was solved a few days ago.
If your issue didn't arise after performing those steps, you can follow Andrea's suggestion or create a new issue.
Regards,
Adrián.
I've got a weird situation. The first time I hit an embedded web server (uClinux/Boa) at 10.1.10.29, I get a 10-second delay in the browser window before things start happening. "First time" means I haven't hit the machine in a few days. Browser type/OS doesn't matter (the source is 10.1.10.20).
I've got a wireshark capture of it happening.
And here is the detail of frame 296:
Note that packet 374 doesn't pop out until around 10 seconds after 296. The packets between those two aren't from the machine in question. It just sits there for 10 seconds and then decides to retransmit. How is this supposed to work?
The main reason is almost certainly that the code was swapped out of memory.
MS Windows is really bad in that regard. If a program isn't used for "too long", it gets swapped out of memory, period. When you come back to it, it has to be re-read from the hard drive.
The one good thing about this (and the main reason Windows does it) is that it defragments kernel memory; for that purpose, it works.
You have similar problems under Linux, but only if your server actually needs the memory. In other words, if you have tons of processes all fighting for as much memory as possible, the least-used software is likely to get swapped out. Otherwise it stays in place.
If you were to use the Cassandra database system, you would notice this on any computer that runs anything other than Cassandra. If you run only Cassandra, it stays fast all the time. If you run other software that uses a lot of memory, Cassandra is slow on first access. This is particularly noticeable.
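If you want to check whether this is happening to a particular server process on Linux, one rough way (assuming a kernel recent enough to expose the VmSwap field in /proc/<pid>/status) is to look at how much of the process currently sits in swap:

```python
import sys

def swap_usage_kb(pid):
    """Return the VmSwap figure (in kB) for a process, or None if unavailable."""
    try:
        with open("/proc/{}/status".format(pid)) as f:
            for line in f:
                if line.startswith("VmSwap:"):
                    return int(line.split()[1])
    except FileNotFoundError:
        return None
    return None

if __name__ == "__main__":
    pid = int(sys.argv[1])  # e.g. the PID of the web server process
    print("VmSwap for PID {}: {} kB".format(pid, swap_usage_kb(pid)))
```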
I want to add the answer that solved our problem, which looked just like this: a 10-second delay at first, then everything working, and after 5 minutes of inactivity another 10-second delay.
First of all, we Wiresharked everything and tried to find some kind of error in the code, or in the way the computer or the server handled the network traffic. We found nothing out of the ordinary.
After much searching we found it was a DNS "problem". In the DNS server the client computer used, there were two entries for the server's domain name. One was correct and one (the first one in the list) was wrong.
So removing the wrong DNS record solved the problem.
This means the problem was that the computer tried the first address it got, waited 10 seconds for a reply, didn't get one, and moved on to the second address in the list. This produces no error messages, because this is how DNS is supposed to work. And that is why all our Wireshark logs showed the client just waiting 10 seconds with no error and no apparent reason, then springing to life and working for as long as the DNS record was valid (5 minutes in our case), after which the whole procedure started over.
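If you suspect something similar, a quick way to see every address your resolver hands back for the server's name (the hostname below is a placeholder) is something like:

```python
import socket

# Placeholder hostname -- use the name your clients actually connect to.
host = "embedded-server.example.local"

# getaddrinfo returns one entry per resolved address; if a wrong or stale
# record is still published you'll see it here, often first in the list,
# which is the address the browser tries (and times out on) before moving on.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        host, 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])
```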
Hope this helps someone who has a similar problem.
I'm experiencing some issues with FreePBX queues.
The longest-waiting calls don't always seem to have priority: in several cases a call had been on hold for 10 minutes, another call came in, and the new call was sent to the next available agent first.
Anyone have any experience with this?
According to what I've been told by some developers, this seems to be a bug in Asterisk's queue application. The queue app doesn't seem to share the longest wait times across queues, so if a member is part of multiple queues, you can run into problems like the one we experienced.
I have come to accept that and moved on to a commercial grade Call Center solution.
So, I have to make a minor bug fix to all of my scripts: I didn't realize there was a limit to the amount of data you can push into the Cache (by the way, Google, I'm pretty sure this isn't documented anywhere).
Anyhow, my three-line fix meant I had to resubmit a bunch of scripts. Typically this isn't a big deal; Google is usually super quick about approving them (usually the next business day). Unfortunately, they seem to be taking more time this round. That became a problem because I had to give a presentation today, and I had just assumed they would be approved by now (I fudged it and showed a spreadsheet with the script already installed).
So I guess my main question here is: could there be a more graceful upgrade process? It sometimes doesn't make sense to have the script removed from the gallery while it's waiting for approval.
Thanks!
Ben
I opened an issue regarding this a while ago (nearly 2 years now). You probably want to star it to keep track of updates.
As for the approval process, it is not "reliable", as you can see. I've had scripts that took 3 months to be re-approved and then, on the next upgrade, only a couple of days.