Hyperledger Sawtooth docker container "settings tp" not working - hyperledger-sawtooth

I have set up a network with 4 validators using docker compose, and it is using PBFT consensus.
If I try to submit a proposal to change a setting, for example the "sawtooth.validator.transaction_families" setting, nothing happens (I'm doing it from the validator container using "sawtooth proposal create"). Has anyone had similar problems?
Moreover, if I enter the settings TP docker container, I can't see the folder with logs. Does anyone know why the settings TP is not creating logs?

The issue seems to be running the command without the --url option inside the shell container, so the transaction probably never got submitted.
If you run these commands, be sure to use the --url option with the right host.
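For illustration, a minimal sketch from inside the shell container. The REST API host name, the key path and the family list below are assumptions (adjust them to your compose file), and in recent Sawtooth releases the settings proposal command is sawset proposal create:
# sketch only: rest-api-0, the key path and the family list are placeholders
sawset proposal create \
  --url http://rest-api-0:8008 \
  --key /root/.sawtooth/keys/my_key.priv \
  sawtooth.validator.transaction_families='[{"family": "sawtooth_settings", "version": "1.0"}, {"family": "intkey", "version": "1.0"}]'
# verify that the setting actually changed
sawtooth settings list --url http://rest-api-0:8008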

Related

PhpStorm showing "unavailable" breakpoint whereas execution is suspended

I'm sometimes stuck while attempting to debug my code.
The debug session is active and code execution is suspended:
But I cannot see what is really happening, as the breakpoints show "unavailable" (a "no parking" symbol):
Does anybody know about this sign?
I still haven't found any information about it on the JetBrains sites... that's why I'm here :-)
(PhpStorm 2020.3, using docker containers (Linux containers) with Docker Desktop / Windows 10)
[EDIT]:
I just noticed that "break at first line in php scripts" seems to be functioning, though:
But I have these weird breakpoints instead of normal red ones, and a highlighted line.
I tried restarting my docker containers, same issue. This occurs seemingly randomly and gets solved after a while... (reboot?...)
[EDIT] SOLVED
The path mapping (local <-> docker) for the root of my project was empty (how did that happen...) in my docker configuration in PhpStorm.
I'm not sure how this problem occurred, but I'll be able to solve it next time if it comes back.
If you try to disable "break at first line in PHP scripts" you may get this message:
17:38 Debug session was finished without being paused. It may be caused by path mappings misconfiguration or not synchronized local and remote projects. To figure out the problem check path mappings configuration for 'docker-server' server at PHP|Servers or enable Break at first line in PHP scripts option (from Run menu).
In my case, the path mapping for the root of my project was incomplete: "Absolute path on the server" was empty. I don't know how it happened, but you can check it in PHP | Servers.
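To illustrate the mapping (the paths here are hypothetical): if the project is bind-mounted into the container like this, then the project root must be mapped to the container-side path in PHP | Servers:
# hypothetical mount: host project directory -> /var/www/html inside the container
docker run -v /home/me/my-project:/var/www/html my-php-image
# in PHP | Servers, map the project root to /var/www/html
# ("Absolute path on the server" must not be left empty)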

Running Github Actions on OSX results in "Could not find domain for port (Aqua)"

I've followed the directions here, but when I run ./svc.sh run, I receive the following error:
Could not find domain for port (Aqua)
I'm SSH-ing into a box to run this command. It seems to work fine when I'm not in a headless session, but I need this to run headless and as a background service. Has anyone else run into this?
I was able to resolve this; it's addressed here:
sudo cp {/Users/xxx/Library/LaunchAgents,/Library/LaunchDaemons}/your.plist
I was able to reboot my machine without logging in and see the runner active.
The solution by #futbolpal isn't ideal, because LaunchDaemons don't have access to the keychain.
It's better to copy to LaunchAgents instead, like:
sudo cp {/Users/xxx/Library/LaunchAgents,/Library/LaunchAgents}/your.plist
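If the agent isn't picked up automatically after that, you can also load it by hand (the plist name is a placeholder):
# placeholder plist name; loads the agent into the current user's launchd session
launchctl load /Library/LaunchAgents/your.plist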

Cannot map agent.conf using Cygnus docker installation

I have a problem installing CYGNUS using docker as the source; I simply cannot understand where I should map which specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf which has my specific setup into the container, it starts and runs but fails to copy the file, and not only that, any change I make to the file inside the container won't persist; it returns to its previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I tried docker and docker compose, both with the same results.
The path to which I try to copy is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me if I misunderstood the location of agent.conf or something? This is weird; I have used many docker images and never had an issue where I was not able to copy from my machine to a docker container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your directory before starting the container?
As you can see here, when you define a volume with the "-v" option, docker mounts the content of the host path at the mount point inside the container. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home-directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template in the source code from the official cygnus github repo here. You can also copy it out once the docker container is running, using the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. The official docs explain how to configure cygnus to use different sinks like MongoDB, MySQL, etc.
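As a rough sketch of that workflow, assuming the image ships a default agent.conf at that path (container name and host path are just examples):
# throwaway container, only used to extract the default configuration
docker run -d --name cygnus-tmp fiware/cygnus-ngsi
docker cp cygnus-tmp:/opt/apache-flume/conf/agent.conf /home/igor/Documents/cygnus/agent.conf
docker rm -f cygnus-tmp
# edit the file on the host, then start the real container with the bind mount
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi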
I hope I have been helpful.
Best regards!

Hide/obfuscate environmental parameters in docker

I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b98afde2fab7 mysql:5.7.7 "/entrypoint.sh mysq 6 seconds ago Up 5 seconds 3306/tcp mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers? Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret via a volume, e.g.:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
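A minimal sketch combining the volume and the sourced script, where my-image and /run-my-app are placeholders for your image and entrypoint:
# my-secret-file on the host contains e.g.: export MYSQL_ROOT_PASSWORD=12345
docker run -v $(pwd)/my-secret-file:/secret-file:ro my-image \
  sh -c '. /secret-file && exec /run-my-app'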
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You say "Alternatively, is it possible to pass sensitive parameters by reference to a file?", extract from the doc http://docs.docker.com/reference/commandline/run/ --env-file=[] Read in a file of environment variables.

openshift: need to edit httpd.conf to enable or add a directive according to my needs, but it's not working

I've built my application on localhost and run it without any errors. I chose openshift to host my application code, but I have a problem making it work like it does on my localhost.
I want to add the AllowEncodedSlashes directive and set it to On in my apache2 configuration file. I have tried to edit the file at ~/php/configuration/etc/conf/httpd.conf and then restart the server using ctl_all restart, but the result is an http error code 400 (Bad Request). Before I added this directive to httpd.conf the result was an http error code 404, so I am just not sure whether the changes are taking effect, or whether apache is bugging out.
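Roughly, the steps I'm following are (sketched as commands, using the paths above):
# append the directive to the gear's Apache config and restart, as described above
echo "AllowEncodedSlashes On" >> ~/php/configuration/etc/conf/httpd.conf
ctl_all restart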
Does anyone know how to make this work for me?
See if you can add it to a .htaccess file instead of the httpd.conf file. Also, the best way to troubleshoot these problems is to review your application logs for errors. All you have to do is run "rhc tail {appName}" from your client machine (where the rhc client tools are installed). That gives you the current log entries.
To get the entire log, you'll want to ssh onto the gear(s) on which the language framework/cartridge is installed using this FAQ and run: more ~/{cartridgeID}/logs/*.log
where {cartridgeID} is your framework cartridge, like nodejs-0.6, or your embedded cartridge logs, like mysql-5.1.
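For example (the app name is a placeholder, and nodejs-0.6 is just one of the cartridge IDs mentioned above):
# current log entries, streamed from the client machine
rhc tail myapp
# or, after ssh-ing onto the gear, the full logs for the nodejs-0.6 cartridge:
more ~/nodejs-0.6/logs/*.log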
I created a feature request for this. See this Trello card and feel free to vote it up.