I'm trying to clone a repository from Bitbucket via hg, but I keep getting this error:
abort: stream ended unexpectedly (got 404093 bytes, expected 8706452)
mac:~ user$ hg clone https://user2@bitbucket.org/mine/test
http authorization required
realm: Bitbucket.org HTTP
user: user2
password:
destination directory: test
requesting all changes
adding changesets
adding manifests
adding file changes
transaction abort!
rollback completed
abort: stream ended unexpectedly (got 404093 bytes, expected 8706452)
I have tried it twice now but both times it's given the same error.
I have more than enough HDD space.
Any thoughts?
Is it failing at the same spot every time (404093 bytes)? If so, it sounds like something is wrong on the server side, and you might want to ask them for help.
If it's failing in different places each time, then I would suspect the network.
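If the connection is the problem, one workaround (a sketch, reusing the URL from the question) is to clone incrementally so a dropped stream doesn't restart the whole transfer: pull a limited range of revisions first, then fetch the rest:
hg clone -r 50 https://user2@bitbucket.org/mine/test   # only revisions up to 50
cd test
hg pull     # fetch the remaining changesets
hg update
Running the clone with hg's global --debug flag can also show exactly where the stream breaks.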
My first problem looked like this:
Writing objects: 60% (9/15)
It froze there for some time with a very low upload speed (in KB/s); then, after a long time, it gave this message:
fatal: the remote end hung up unexpectedly
Everything up-to-date
I found something that seemed to be a solution:
git config http.postBuffer 524288000
This created a new problem that looks like this:
MacBook-Pro-Liana:LC | myWebsite Liana$ git config http.postBuffer 524288000
MacBook-Pro-Liana:LC | myWebsite Liana$ git push -u origin master
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 4 threads
Compressing objects: 100% (14/14), done.
Writing objects: 100% (15/15), 116.01 MiB | 25.16 MiB/s, done.
Total 15 (delta 2), reused 0 (delta 0)
error: RPC failed; curl 56 LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
Please help, I have no idea what’s going on...
First, Git 2.25.1 made it clear that:
Users in a wide variety of situations find themselves with HTTP push problems.
Oftentimes these issues are due to antivirus software, filtering proxies, or other man-in-the-middle situations; other times, they are due to simple unreliability of the network.
This works for none of the aforementioned situations and is only useful in a small, highly restricted number of cases: essentially, when the connection does not properly support HTTP/1.1.
Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.
Second, it depends on your actual remote (GitHub? GitLab? Bitbucket? an on-premise server?). Said remote server might have an incident in progress.
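If the remote is healthy, a common alternative (a sketch; the SSH URL below is a hypothetical placeholder for your own remote) is to sidestep the HTTP stack entirely by pushing over SSH, and to undo the postBuffer change, since it mostly just raises memory consumption:
git config --unset http.postBuffer
git remote set-url origin git@github.com:liana/myWebsite.git   # hypothetical SSH remote
git push -u origin master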
I am working with OpenShift Origin 3.9 and had an application (consisting of a service, pods, etc.) building and running alright.
However, now rebuilds fail with this error message:
Successfully built 1234567890ab
Pushing image docker-registry.default.svc:5000/my_project/my_app:latest ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Warning: Push failed, retrying in 5s ...
Registry server Address:
Registry server User Name: serviceaccount
Registry server Email: serviceaccount@example.org
Registry server Password: <<non-empty>>
error: build error: Failed to push image:
After retrying 6 times, Push image still failed due to error:
Get https://docker-registry.default.svc:5000/v1/_ping: dial tcp 1.2.3.4:5000: getsockopt: connection refused
I don't have admin privileges on that cluster, so it is unlikely that this is due to the nodes' DNS setup, as similar answers would suggest (e.g. here).
One possibly contributing cause could be that I had created a service account in the meantime (since the last successful build) and temporarily logged in with its API token. However, I am now logged in again with (an API token for) my full account (according to oc whoami).
This is how I am starting the rebuild:
oc login --token=$api_token
oc start-build --follow my_app
What could explain this error, and how can I further diagnose and overcome it, especially given that I don't have cluster admin rights?
The problem "somehow" went away after a few days; whether by operator intervention or otherwise, I cannot tell.
You missed one step:
oc policy add-role-to-user system:image-builder
Please follow this doc:
https://blog.openshift.com/remotely-push-pull-container-images-openshift/
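Note that add-role-to-user also takes the user or service account to grant the role to; a minimal sketch, where my_project and builder are placeholders for your own project and service account:
oc policy add-role-to-user system:image-builder system:serviceaccount:my_project:builder -n my_project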
Whenever I attempt to run geth on the command line, it seems to have trouble syncing with the blockchain. I am getting these warnings continuously (this is running on testnet).
geth --testnet --rpc --rpcaddr "localhost" --rpccorsdomain "*" --rpcapi="db,eth,net,web3,personal,staker,net,txpool,shh " --rpcport 8545
I just exited my command prompt, and restarted the geth upgradedb process and it worked.
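If that refers to the upgradedb subcommand, it amounts to stopping the node, running the upgrade, and starting geth again; a sketch, assuming a geth release old enough to still ship upgradedb:
geth --testnet upgradedb
geth --testnet --rpc --rpcaddr "localhost" --rpccorsdomain "*" --rpcapi="db,eth,net,web3,personal,staker,net,txpool,shh" --rpcport 8545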
Another solution is to set the current head of the local chain back a few blocks with the debug.setHead command. That rewinds the chain back to the faulty snapshot block (epoch transition), for all the signer nodes.
Like:
debug.setHead("0x124F80") // the 1,200,000th block
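debug.setHead runs in the geth JavaScript console; a minimal sketch, assuming the default testnet IPC path (the debug API is exposed over IPC by default):
geth attach ~/.ethereum/testnet/geth.ipc
> debug.setHead("0x124F80")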
From @karalabe's answer on the closed issue:
There was a bug in one Geth release (v1.8.14/v1.8.15) that violated the Clique consensus spec, causing some signers to create blocks when they weren't allowed to (epoch transition). All previous and subsequent versions of Geth (apart from the faulty one) correctly rejected those blocks, hence why you couldn't sync a new node to your already mined chain.
A node however does not re-validate blocks when you update it, so even though you updated your signers, they were oblivious to the fact that a faulty block was already in their chain. When you rewound the chain, the signers had to re-mine the faulty segment, correcting the issue.
This should most definitely not happen again, as long as you don't use the faulty version of Geth. Any version equal to or above v1.8.16 should work just fine.
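To confirm you are not on the faulty release, check the installed version on each signer:
geth version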
To whom it may concern:
I am running CentOS 6.5 on my server.
I keep on receiving the following error when I type in yum update as the root user:
[root@dbtest /]# yum update
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
epel/metalink | 14 kB 00:00
* epel: mirror.steadfast.net
* passenger: mirror.hmdc.harvard.edu
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.x. Please verify its path and try again
I am using this link to assist me:
https://www.webmaster.net/fix-pycurl-error-22-the-requested-url-returned-error-404-not-found/
To the best of my knowledge, I am getting this error because something is wrong with the ambari.repo repository file under the /etc/yum.repos.d directory.
My question is: what can I do to fix the ambari.repo file, if anything, and what can I do so that I can run yum update successfully without any errors?
This is what is inside the ambari.repo file. Any help would be greatly appreciated.
One more thing I would like to mention: I made changes to the CentOS-Base.repo file.
That URL is incorrect. A quick search online for "ambari repo" led me to this page, which seems to suggest that the correct path is now http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.2.3.7/ and that you can get a new repo file from http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.2.3.7/ambari.repo.
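A sketch of the fix, using the repo file URL above: back up the old file, fetch the new one, then clear yum's cached metadata before updating.
cd /etc/yum.repos.d
cp ambari.repo ambari.repo.bak   # keep a copy of the old file
wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.2.3.7/ambari.repo -O ambari.repo
yum clean all
yum update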
Have been using EAP 7 for a couple of months; this is the 2nd upgrade.
Upgraded to build 20939 today and now get errors when builds try to check mercurial for changes ("VCS problem: FOO. Edit this VCS root"). If I edit the VCS Root and click Test Connection, it succeeds. How do I go about debugging this issue?
I have tried re-saving the VCS root. I deleted and recreated the VCS root on one project and got the same result.
The recent entries in the teamcity-vcs log don't contain domain\user:password; should they?
I now have both the teamcity and buildagent services running under my AD account. I don't remember what account the teamcity service was using before the upgrade (is that logged somewhere?).
If the VCS root is configured with an 'https://' URL and has a user/password, why don't I see the credentials in the log message (see the post above)?
My user directory contains mercurial.ini / ssl cert (and was working pre-upgrade).
TeamCity hosted on Windows2k8, mercurial repo, using Active Directory credentials for authentication.
teamcity service is running as Local System
buildagent running as AD account (for builds that deploy to other machines)
newest errors:
[2012-01-11 17:12:39,578] WARN [cutor 4 {id=29}] - jetbrains.buildServer.VCS - Error while loading changes for root mercurial: https://mycompany.com/myproject {instance id=29, parent id=8}, cause: 'cmd /c hg pull https://mycompany.com/MyProject' command failed.
stderr: abort: http authorization required
older errors:
[2012-01-10 16:38:02,791] INFO [TeamCity Agent ] - jetbrains.buildServer.VCS - Patch applied for agent=computer {id=1, host=127.0.0.1:9090}, buildType=Project :: MVC3 {id=bt12}, root=mercurial: https://mycompany/myproject {instance id=12, parent id=1}, version=3775:7fc0ae5029e6
[2012-01-11 10:30:36,277] INFO [_Server_StartUp] - jetbrains.buildServer.VCS - Server-wide hg path is not set, will use path from the VCS root settings
The problem persisted after a complete uninstall/re-install.
In the VCS Root definition... I left the user/password fields blank and encoded the user:password into the 'Pull changes from' string (just like you'd do on the command line):
https://domain\user:password@hg.mycompany.com/Repo
To partially clean up the plaintext password, I created a project-level property 'MyPassword' (type password) and used it in the connection string like this:
https://domain\user:%MyPassword%@hg.mycompany.com/Repo
Still not great, but I'm up and running and the password is not viewable by casual users.
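To verify the URL-embedded credentials outside TeamCity, you can run the same pull the server runs (a sketch, mirroring the hg pull command from the log above):
hg pull https://domain\user:password@hg.mycompany.com/Repo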