Error when starting Yesod app on openshift - command line args? - openshift

I am getting the following error when (re)starting my Yesod app on openshift:
server: InvalidYaml (Just (YamlException "Yaml file not found: xxx.xxx.xxx.xxx"))
Where xxx.xxx.xxx.xxx is an IP address. I did find a link to a Heroku+Yesod issue that mentioned "removing an argument", but it didn't say from where, and of course the scripts/settings are different in the case of OpenShift. Any ideas what this error means and how to get past it?

I'm assuming, based on the question, that you're using the standard scaffolding. If you look in the code, you'll find that it uses loadAppSettingsArgs, which is described as:
Same as loadAppSettings, but get the list of runtime config files from the command line arguments.
If you don't want to pay attention to command line arguments, just replace the call to loadAppSettingsArgs with loadAppSettings [].
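As a concrete illustration, here is a sketch of that change in the scaffolding's appMain (usually in Application.hs); the surrounding names (makeFoundation, makeApplication, configSettingsYmlValue, and so on) come from a typical scaffolded site and may differ slightly in yours:
appMain :: IO ()
appMain = do
    -- Before: runtime YAML config files are taken from the command line, so any
    -- extra arguments OpenShift passes (here, the IP address) get parsed as YAML
    -- file paths:
    -- settings <- loadAppSettingsArgs [configSettingsYmlValue] useEnv
    -- After: ignore command line arguments and use only the compiled-in defaults
    -- plus environment variable overrides:
    settings <- loadAppSettings [] [configSettingsYmlValue] useEnv
    foundation <- makeFoundation settings
    app <- makeApplication foundation
    runSettings (warpSettings foundation) app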

Related

400-unknown or invalid client_id for forge-bim360-data.connector.dashboard

I have tried to implement https://github.com/Autodesk-Forge/forge-bim360-data.connector.dashboard
I have updated this part:
npm install
set FORGE_CLIENT_ID=<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>
set FORGE_CLIENT_SECRET=<<YOUR CLIENT SECRET>>
set FORGE_CALLBACK_URL=<<your callback url of Forge e.g. http://localhost:3000/oauth/callback>>
set DC_CALLBACK_URL=<<"your ngrok address here: e.g. http://abcd1234.ngrok.io/job/callback">>
I am getting the error "400 - Unknown or invalid client_id".
Firstly, I rarely use Windows these days. I simply copied the guideline for setting environment variables from other samples, while most of the time I tested in debug mode (setting the environment variables in launch.json).
Checking the README again, I found the wording is:
Windows (use Node.js command line from Start menu)
i.e. it asks you to enter those commands in the Node.js command line, not in the VS Code terminal! That is why it always reports that the client id is not defined: the variables are never set in the environment at all.
The correct way is to open the Node.js command line and run the commands there.
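For example, a hypothetical session in the Node.js command prompt (opened from the Start menu), with placeholder values:
set FORGE_CLIENT_ID=your_client_id_from_the_developer_portal
set FORGE_CLIENT_SECRET=your_client_secret
set FORGE_CALLBACK_URL=http://localhost:3000/oauth/callback
set DC_CALLBACK_URL=http://abcd1234.ngrok.io/job/callback
rem quick check that the variable is really visible in this window
echo %FORGE_CLIENT_ID%
rem then install and start the sample the way its README describes, e.g.
npm install
npm start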

GET error: Get https://api-int.openshiftX.yyyy.com:22623/config/master

I am installing various versions of OpenShift 4.x, but I am getting the above mentioned error.
My bootstrap node comes online and picks up the IP I gave it in the DHCP file, and I am able to SSH to it successfully from my load balancer as well. But whenever I start the master node I get:
GET error: Get https://api-int.openshiftX.yyyy.com:22623/config/master
Sometimes it is:
GET error: Get https://api-int.openshiftX.yyyy.com:22623/config/master: EOF
and sometimes I also get:
GET error: Get https://api-int.openshiftX.yyyy.com:22623/config/master: x509: certificate is expired or is not yet valid
Can somebody help me with this?

Event Subscription not working with error interrupted system call

I am trying to establish an event subscription via zmq from my locally running sawtooth network. As soon as I start my event-subscriber container, I get the error "interrupted system call".
I am following the example from here https://github.com/danintel/sawtooth-cookiejar/tree/master/events/go
I have tried using validatorUrl as tcp://localhost:4004 and as tcp://validator-0:4004 (note: validator-0 is my local container name for the validator).
I have also tried the direct IP of the validator container, tcp://<IP>:4004.
zmqConnection.RecvMsgWithId() is throwing the error.
The error I am getting is exactly at this line https://github.com/danintel/sawtooth-cookiejar/blob/master/events/go/src/events_client.go#L105
Can someone please suggest probable reasons, or a way I can debug this?
I do not know for sure, but one possible cause is that this example was recently updated to a new version of Go, 1.11 (from 1.9), after your posting:
https://github.com/danintel/sawtooth-cookiejar/pull/9
The update was made because of this error:
Loading input failed: unsupported version of go: exit status 2: flag provided but not defined: -compiled
The issue was related to inter-pod communication: my event subscriber client was running in a completely different pod from the pod where the validator container was running. In that case, we need to use the FQDN of that pod. Refer to the link below.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields
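For reference, the pod FQDN takes the form below (per the linked docs, this requires the pod to have hostname and subdomain fields and a matching headless Service); the subdomain and namespace shown here are hypothetical placeholders:
tcp://<pod-hostname>.<subdomain>.<namespace>.svc.cluster.local:4004
e.g. tcp://validator-0.sawtooth.default.svc.cluster.local:4004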

redis.conf include: "Bad directive or wrong number of arguments"

I've created this config for redis [/etc/redis/map.conf]:
include /etc/redis/ideal.conf
port 11235
pidfile /var/run/redis-map.pid
logfile /var/log/redis/map.log
dbfilename map.rdb
As you can see, it includes /etc/redis/ideal.conf; this file actually exists and we have read permissions.
Also there is another file, slightly different; consider [/etc/redis/storage.conf]:
include /etc/redis/ideal.conf
pidfile /var/run/redis-storage.pid
port 8000
bind 192.168.0.3
logfile /var/log/redis/storage.log
dbfilename dump_storage.rdb
My problem is: I can launch redis-server with storage.conf (and everything works fine), but map.conf leads to the following error:
Reading the configuration file, at line 1
>>> 'include /etc/redis/ideal.conf'
Bad directive or wrong number of arguments
failed
Version of redis is 2.2.
Where did I go wrong?
Sorry guys.
I was using different instances of Redis.
The instance for storage.conf was launched with /usr/local/bin/redis-server, but map.conf was launched with /usr/bin/redis-server; the second one is broken.
Thank you anyway.
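If you hit something similar, a quick sanity check (hypothetical commands, using the paths from above) is to confirm which binaries are involved and run each one by absolute path against the same config:
# list every redis-server on the PATH and compare the two binaries
which -a redis-server
ls -l /usr/bin/redis-server /usr/local/bin/redis-server
# run each binary by absolute path against the same config; the broken one
# rejects the include directive, the other starts normally
/usr/bin/redis-server /etc/redis/map.conf
/usr/local/bin/redis-server /etc/redis/map.conf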

MySQL driver segfaulting under mod_perl - where to look for issue

I have a webapp that segfaults when the database is restarted and it tries to use the old connections. Running it under gdb --args apache -X leads to the following output:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1212868928 (LWP 16098)]
0xb7471c20 in mysql_send_query () from /usr/lib/libmysqlclient.so.15
I've checked that the drivers and database are all up to date (DBD::mysql 4.0008, MySQL 5.0.32-Debian_7etch6-log).
Annoyingly I can't reproduce this with a trivial script:
use DBI;
use Test::More tests => 2;
my $dbh = DBI->connect( "dbi:mysql:test", 'root' );
sub test_db {
    my ($number) = $dbh->selectrow_array("select 1 ");
    return $number;
}
is test_db, 1, "connected to db";
warn "restart db now";
getc;
is test_db, 1, "connected to db";
Which gives the following:
ok 1 - connected to db
restart db now at dbd-mysql-test.pl line 23.
DBD::mysql::db selectrow_array failed: MySQL server has gone away at dbd-mysql-test.pl line 17.
not ok 2 - connected to db
# Failed test 'connected to db'
# at dbd-mysql-test.pl line 26.
# got: undef
# expected: '1'
This behaves correctly, telling me why the request failed.
What stumps me is that it is segfaulting, which it shouldn't do. As it only appears to happen when the whole app is running (which uses DBIx::Class) it is hard to reduce it to a test case.
Where should I start to look to debug this? Has anyone else seen this?
UPDATE: further prodding showed that it being under mod_perl was a red herring. Having reduced it to a simple test script I've now posted to the DBI mailing list. Thanks for your answers.
What this probably means is that there's a difference between your mod_perl environment and the one you were testing via your script. Some things to check:
Was your mod_perl compiled with the same version of Perl?
Are the @INC paths the same for both? (see the sketch after this list)
Are you using threads in your mod_perl setup? I don't believe DBD::mysql is completely thread-safe.
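A minimal sketch for checking the first two points, assuming you can run a script from the shell and log the same values from inside your mod_perl handler:
#!/usr/bin/perl
# Print the interpreter version and library search path. Run this from the
# command line, then emit the same values from a mod_perl handler (e.g. with
# warn, so they land in the Apache error log) and compare the two.
use strict;
use warnings;
print "perl version: $^V\n";
print "INC: $_\n" for @INC;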
I've seen this problem, but I'm not sure it had the same cause as yours. Are you by chance using a certain module for sending mails (forgot the name, sorry) from your application? When we had the problem in a project, after days of debugging we found that this mail module was doing strange things with open file descriptors, then forked off another process which called the console tool sendmail, which again did strange things with file descriptors. I guess one of the file descriptors it messed around with was the connection to the database, but I'm still not sure about that. The problem disappeared when we switched to another module for sending mails. Maybe it's worth a look for you too.
If you're getting a segfault, do you have a core file created? If not, check ulimit -c. If that returns 0, your system won't create core files and you'll have to change that. If you do have a core file, you can use gdb or similar tools to debug it. It's not particularly fun, but it's possible. The start of the command will look something like:
gdb /usr/bin/httpd core
There are plenty of tutorials for debugging core files scattered about the Web.
Update: Just found a reference for ensuring you get core dumps from mod_perl. That should help.
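A rough outline of such a session (the binary path is a guess; use whichever Apache binary you actually run):
# allow core files for this shell, then reproduce the crash
ulimit -c unlimited
# load the core file into gdb and get a full backtrace of the crashed thread
gdb /usr/bin/httpd core
(gdb) bt full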
This is a known problem in old DBD::mysql. Upgrade it (4.008 is not up to date).
There's a simple test script attached to https://rt.cpan.org/Public/Bug/Display.html?id=37027 that will trigger this bug.