Installed Hyperledger sawtooth with this guide:
https://sawtooth.hyperledger.org/docs/core/releases/latest/sysadmin_guide/installation.html
[2018-11-04 02:35:13.204 DEBUG selector_events] Using selector: ZMQSelector
[2018-11-04 02:35:13.205 INFO interconnect] Listening on tcp://127.0.0.1:4004
[2018-11-04 02:35:13.205 DEBUG dispatch] Added send_message function for connection ServerThread
[2018-11-04 02:35:13.206 DEBUG dispatch] Added send_last_message function for connection ServerThread
[2018-11-04 02:35:13.206 DEBUG genesis] genesis_batch_file: /var/lib/sawtooth/genesis.batch
[2018-11-04 02:35:13.206 DEBUG genesis] block_chain_id: not yet specified
[2018-11-04 02:35:13.207 INFO genesis] Producing genesis block from /var/lib/sawtooth/genesis.batch
[2018-11-04 02:35:13.207 DEBUG genesis] Adding 1 batches
[2018-11-04 02:35:13.208 DEBUG executor] no transaction processors registered for processor type sawtooth_settings: 1.0
[2018-11-04 02:35:13.209 INFO executor] Waiting for transaction processor (sawtooth_settings, 1.0)
[2018-11-04 02:35:13.311 INFO processor_handlers] registered transaction processor: connection_id=014a2086c9ffe773b104d8a0122b9d5f867a1b2d44236acf4ab097483dbe49c2ad33d3302acde6f985d911067fe92207aa8adc1c9dbc596d826606fe1ef1d4ef, family=intkey, version=1.0, namespaces=['1cf126']
[2018-11-04 02:35:18.110 INFO processor_handlers] registered transaction processor: connection_id=e615fc881f8e7b6dd05b1e3a8673d125a3e759106247832441bd900abae8a3244e1507b943258f62c458ded9af0c5150da420c7f51f20e62330497ecf9092060, family=xo, version=1.0, namespaces=['5b7349']
[2018-11-04 02:35:21.908 DEBUG permission_verifier] Chain head is not set yet. Permit all.
[2018-11-04 02:35:21.908 DEBUG permission_verifier] Chain head is not set yet. Permit all.
Then:
ubuntu@ip-172-31-42-144:~$ sudo intkey-tp-python -vv
[2018-11-04 02:42:05.710 INFO core] register attempt: OK
Then:
ubuntu@ip-172-31-42-144:~$ intkey create_batch
Writing to batches.intkey...
ubuntu@ip-172-31-42-144:~$ intkey load
batches: 2 batch/sec: 160.14600713999351
The REST API works, too.
I followed every step exactly as shown in the guide. The older question doesn't help me either: hyperledger sawtooth validator node permissioning issue
ubuntu@ip-172-31-42-144:~$ curl http://localhost:8008/blocks
{
  "error": {
    "code": 15,
    "message": "The validator has no genesis block, and is not yet ready to be queried. Try your request again later.",
    "title": "Validator Not Ready"
  }
}
The genesis batch was attached, wasn't it?!
MARiE
As the log shows, the genesis batch is waiting on the sawtooth_settings TP. If you start it, just like you started intkey and xo, it will process the genesis batch and will then be able to handle your intkey transactions.
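A minimal sketch of starting the Settings TP on the validator host, assuming the package-based install from the guide (the systemd unit name is an assumption based on that install; the TP connects to the validator's default component endpoint, which matches the `tcp://127.0.0.1:4004` in your log):

```shell
# Option 1: run the Settings transaction processor in the foreground,
# the same way intkey-tp-python and the xo TP were started.
sudo -u sawtooth settings-tp -v

# Option 2: if the systemd units from the package install are present,
# start it as a service instead.
sudo systemctl start sawtooth-settings-tp
```

Once it registers, the validator log should show a `registered transaction processor` line for family `sawtooth_settings`, and the genesis block will be produced.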
Related
I'm trying to fetch the config block to create a config update.
I'm using the test network from fabric-samples with the default settings (no CA).
Even after starting the network I cannot fetch any blocks, neither the newest nor the oldest.
This is the output I'm getting
peer channel fetch config
2022-02-08 11:09:47.306 +03 [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2022-02-08 11:09:47.309 +03 [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
Error: can't read the block: &{NOT_FOUND}
I think you need to specify the channel, for example:
peer channel fetch config -c mychannel
That works for me with the default test network channel, and I get the same error you saw without the -c option.
It's also worth having a look at the test network scripts since they are meant to be a sample themselves. In this case configUpdate.sh does a config update.
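As a hedged sketch, here is the full fetch-and-decode flow with the test network defaults (channel name `mychannel`, `configtxlator` on the PATH, and the orderer endpoint/TLS settings coming from the usual test-network environment variables):

```shell
# Fetch the channel's current config block; the -c flag is required.
peer channel fetch config config_block.pb -c mychannel

# Decode the protobuf block to JSON so the config can be inspected
# and edited for the config update.
configtxlator proto_decode --input config_block.pb \
    --type common.Block --output config_block.json
```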
I am trying to run a beacon-chain for Ethereum2.0 in the pyrmont testnet with Prysm and Besu.
I run the ETH1 node with the command:
besu --network=goerli --data-path=/root/goerliData --rpc-http-enabled
This command works: it downloads the entire blockchain and then runs properly.
But when I launch:
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont
I get:
Verified /root/prysm/dist/beacon-chain-v1.0.0-beta.3-linux-amd64 has been signed by Prysmatic Labs.
Starting Prysm beacon-chain --http-web3provider=localhost:8545 --pyrmont
[2020-11-18 14:03:06] WARN flags: Running on Pyrmont Testnet
[2020-11-18 14:03:06] INFO flags: Using "max_cover" strategy on attestation aggregation
[2020-11-18 14:03:06] INFO node: Checking DB database-path=/root/.eth2/beaconchaindata
[2020-11-18 14:03:08] ERROR main: database contract is xxxxxxxxxxxx3fdc but tried to run with xxxxxxxxxxxx6a8c
I tried deleting the previous data folder /root/goerliData and re-downloading the blockchain, but nothing changed...
Why didn't the database contract change, and what should I do?
Thanks :)
The error means that you have an existing database for another network, probably medalla.
Try starting your beacon node with the --clear-db flag next time, and you'll see the error disappear and the node start syncing Pyrmont.
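For example (a sketch; `--force-clear-db` skips the confirmation prompt if you prefer a non-interactive run):

```shell
# Delete the stale beacon DB (it belongs to a different network,
# e.g. medalla) and start syncing Pyrmont from scratch.
./prysm.sh beacon-chain --http-web3provider=http://localhost:8545 \
    --pyrmont --clear-db
```

Note that deleting /root/goerliData only wipes the Besu (ETH1) data; the beacon chain DB lives under /root/.eth2, which is why re-downloading the ETH1 chain changed nothing.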
We have a Puppet server running that services a couple of hundred Windows servers. The installed Puppet agent is 6.x. On almost all of the servers 'puppet agent -t' works fine, with a few exceptions exhibiting the same issue.
When I start clean, the Puppet agent connects with the server, receives a certificate and downloads all of the facts and what not. This works. Then the agent loads the facts and after a while I get an error message:
C:\>puppet agent -t
Info: Using configured environment 'windows'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Error: Failed to apply catalog: Could not render to json: source sequence is illegal/malformed utf-8
C:\>
If I run the Puppet agent in debug mode, all it shows is that it's resolving facts, and then the above message appears and the agent run stops (although I could have missed something, because there's a lot of output). The last fact (according to the debug output) being resolved is consistently:
Debug: Facter: resolving processor facts.
Debug: Facter: fact "hardwareisa" has resolved to "x64".
Debug: Facter: fact "processorcount" has resolved to 2.
Debug: Facter: fact "physicalprocessorcount" has resolved to 1.
Debug: Facter: fact "processor0" has resolved to "Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz".
Debug: Facter: fact "processors" has resolved to {
count => 2,
isa => "x64",
models => [
"Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz"
],
physicalcount => 1
}.
Error: Failed to apply catalog: Could not render to json: source sequence is illegal/malformed utf-8
However, I doubt that is the culprit, because IIRC Puppet does not really run things sequentially.
I don't understand how the same thing can work on one server, but not on another, even when having the same agent version. How can I find out what is the source of the error message?
I know this is an ancient topic, but the resolution in my case was confirming that each of the custom facts was encoded in UTF-8. We discovered that a single fact file was encoded differently, and re-encoding it as UTF-8 fixed our issue.
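If you want to find such a file without opening each fact by hand, a small script can flag any file whose bytes are not valid UTF-8. This is a generic sketch, not Puppet tooling; the facts directory you point it at (e.g. a module's lib/facter) is an assumption about your layout:

```python
import sys
from pathlib import Path

def find_non_utf8(root: str) -> list[str]:
    """Return paths of files under `root` whose bytes are not valid UTF-8."""
    bad = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            path.read_bytes().decode("utf-8")
        except UnicodeDecodeError:
            bad.append(str(path))
    return sorted(bad)

if __name__ == "__main__":
    # e.g. python find_non_utf8.py lib/facter
    for offender in find_non_utf8(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(offender)
```

Re-encoding any flagged file as UTF-8 (with an editor's "save with encoding" option, or iconv) should then clear the "source sequence is illegal/malformed utf-8" error.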
I'm using ChromeDriver with the Play! framework. I have a unit test where ChromeDriver is instantiated and makes a GET request to my DynDNS URL. When the test starts, it opens Chrome and makes the request, but there is no response; it waits indefinitely. When I close Chrome, the test runner fails with this exception:
A org.openqa.selenium.WebDriverException has been caught, Chrome did
not respond to 'NavigateToURL'. Elapsed time was 116077 ms. Request
details:
({"command":"NavigateToURL","navigation_count":1,"tab_index":0,"url":"http://myurl.dyndns.org:9000/test/","windex":0}).
(WARNING: The server did not provide any stacktrace information) Build
info: version: '2.5.0', revision: '13516', time: '2011-08-23 18:29:57'
System info: os.name: 'Windows Vista', os.arch: 'x86', os.version:
'6.0', java.version: '1.6.0_21' Driver info: driver.version:
RemoteWebDriver
When I do not use the unit test (and TestRunner) and start my test directly from a main method (also initializing Play! myself), the test runs with no problem. But I need JUnit's assert methods, and it's surely better that all tests run from the same module (I have many other unit and functional tests).
Any ideas to fix this?
Thanks.
What happens is that http://localhost:9000/#tests fires off a web request to http://localhost:9000/#tests/<your_test_class>.class to run your test class, taking up one thread. Your test then tries to fire off a request to http://localhost:9000/your_path, which blocks until the request for http://localhost:9000/#tests/<your_test_class>.class finishes. So you wait indefinitely, since by default the number of threads is one. If you increase play.pool beyond 1, your test suite will work properly.
See conf/application.conf
# Execution pool
# ~~~~~
# Default to 1 thread in DEV mode or (nb processors + 1) threads in PROD mode.
# Try to keep a low as possible. 1 thread will serialize all requests (very useful for debugging purpose)
# play.pool=3
Note: one thing I found helpful in understanding how #tests works was turning on the Network tab in Chrome's developer tools; I could then easily trace the requests, and it made more sense where the block was.
Has anyone seen an exception relating to Media.UploadWatcher? I don't have the error handy, but the exception was causing all pages to not load, even the admin section. To fix it, I reset the application pool and the site came back up right away.
I know that the client was uploading some large files through the content editor, but I wouldn't think that alone would cause problems. I have upped the MaxExecutionTime to allow for those uploads, but again, I don't think that would be the problem. Is there something I forgot to do while moving the code to production or is there a setting that might be off? All I did was copy the code to production, and change the directory references in the web.config to point to the new locations (like the license file).
The error hasn't come up again, but I'm scared it will at an inopportune time. Any ideas?
Thanks in advance!
UPDATE:
The exception just occurred again on the live site and I had to recycle the app pool. Anyone know what could be causing this? Here is the exception from the event log:
Event code: 3005
Event message: An unhandled exception has occurred.
Event time: 1/4/2010 9:56:50 AM
Event time (UTC): 1/4/2010 3:56:50 PM
Event ID: 7fbcc8d807204614904572753b4beb2e
Event sequence: 23
Event occurrence: 22
Event detail code: 0
Application information:
Application domain: /LM/w3svc/1422107501/root-1-129070941106290901
Trust level: Full
Application Virtual Path: /
Application Path: C:\HostingSpaces\mysite\mysite.com\wwwroot\
Machine name: 180716WEB1
Process information:
Process ID: 310020
Process name: w3wp.exe
Account name: 180716WEB1\myuser_web
Exception information:
Exception type: TypeInitializationException
Exception message: The type initializer for 'Sitecore.Resources.Media.UploadWatcher' threw an exception.
Request information:
Request URL: http://www.mysite.com/Default.aspx
Request path: /Default.aspx
User host address: 75.147.19.21
User:
Is authenticated: False
Authentication Type:
Thread account name: 180716WEB1\myuser_web
Thread information:
Thread ID: 7
Thread account name: 180716WEB1\myuser_web
Is impersonating: False
Stack trace:
Custom event details:
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
I'm fairly sure the media UploadWatcher doesn't come into it when using the Content Editor to add media; it runs on a schedule (defined in web.config) to check whether any items have been added to the media upload folder in the filesystem (I can't remember the exact folder name at the moment).
When we've launched Sitecore sites, we've found it easier NOT to upload the local web.config to live; instead, duplicate changes in both. There are settings and entire sections in the web.config relevant to the role of that server.
If you can get the error message, add it to your post.
On our dev server the solution to this error was removing SiteDefinition.config from the app_config/include folder, which only contained settings inside XML comments (version 6.6 update 4), probably the default config file.
I got there by first removing all the files in app_config/include and putting them back one by one.
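That one-by-one bisection can be scripted; here is a rough, interactive sketch (the paths are illustrative, and you would recycle the app pool and reload the site between iterations):

```shell
# Move all include patches aside, then restore them one at a time,
# checking the site after each restore to find the offending file.
mkdir -p /tmp/include-backup
mv app_config/include/*.config /tmp/include-backup/

for f in /tmp/include-backup/*.config; do
    mv "$f" app_config/include/
    echo "Restored $(basename "$f"); reload the site and check for the error"
    read -r -p "Press Enter to try the next file..."
done
```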