DB table name for agent properties - urbancode

What is the table name in the uDeploy database that stores the agent properties?
I am looking for the agent host name property for each agent installed in uDeploy, i.e. Agent.hostname or Agent.HOSTNAME.

The agent properties are stored in the uDeploy database in the table "vc_persistent_record". However, the record is a CLOB that contains the hostname nested inside XML tags, so it cannot be used directly.
The other option is to get the agent hostname by passing an agent name to the uDeploy CLI (udclient), for example from a Windows command prompt. This is the command to do that:
udclient -username *myusername* -password *mypassword* -weburl *https://udeploy.com/* getAgentProperty -agent *myagentname* -name HOSTNAME
Output of the above command:
YOURSERVERHOSTNAME
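If you need the hostname for every agent rather than a single one, the same getAgentProperty call can be wrapped in a small loop. A minimal sketch, assuming a bash shell, udclient on the PATH, and a file agents.txt with one agent name per line (the file name and credentials are placeholders):
# Print "<agent>: <hostname>" for each agent listed in agents.txt
while read -r agent; do
  host=$(udclient -username myusername -password mypassword \
    -weburl https://udeploy.com/ getAgentProperty -agent "$agent" -name HOSTNAME)
  echo "$agent: $host"
done < agents.txt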
References to get this going: https://www.ibm.com/support/knowledgecenter/en/SS4GSP_6.2.4/com.ibm.udeploy.reference.doc/topics/cli_install.html
https://www.ntu.edu.sg/home/ehchua/programming/howto/Environment_Variables.html

Using Apache Drill

I am trying to use Apache Drill. The instructions at https://drill.apache.org/docs/drill-in-10-minutes/ seem to be very straightforward but after following them I get the following error:
show files;
Error: VALIDATION ERROR: SHOW FILES is supported in workspace type schema only. Schema [] is not a workspace schema.
Missing config for the path to files maybe?
It looks like you are issuing this command without being connected to any schema. You can issue it after switching to a particular schema with 'use <schema name>'. Issue 'show schemas' to list the available schemas.
If you are using sqlline, you may specify the schema while connecting, as below (to connect to the schema 'dfs'):
sqlline -u "jdbc:drill:schema=dfs;zk=<zk node>:<zk port>"
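For example, the whole sequence can be scripted by piping the statements into sqlline. A minimal sketch, assuming a local ZooKeeper at localhost:2181 and the default dfs storage plugin:
# Connect with the dfs schema preselected, then list schemas and files.
sqlline -u "jdbc:drill:schema=dfs;zk=localhost:2181" <<'EOF'
show schemas;
show files;
EOF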

spring boot --spring.application.json parameters not being set

I have a Spring Boot application that currently has a "mysql" profile that sets the following properties:
spring.datasource.url =
spring.datasource.username =
spring.datasource.password =
in the /resources/application-mysql.properties file.
This is working great. When I run the mysql profile it connects to the local MySQL database, and when I don't it uses the default H2 database. Next I want to get rid of application-mysql.properties and pass those values in via the command line. From the documentation I would expect something like the following to work, but strangely it never picks up these properties and runs against the H2 database instead.
java -jar myapp-0.0.1-SNAPSHOT.jar --spring.application.json='{"spring": {"datasource": {"url":"jdbc:mysql://localhost:3306/db", "username":"user","password":"pw"}}}'
I can confirm that this works. As Vaelyr pointed out in the comments, setting it as a system property worked:
-Dspring.application.json='{"spring":{"datasource":{"username":"yourusername","password":"yourpassword"}}}'
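For completeness, the full command line with the JSON passed as a JVM system property would look something like this (a sketch based on the values from the question; note that -D options must come before -jar so they reach the JVM rather than the application):
# -D arguments are JVM options, so they go before -jar.
java -Dspring.application.json='{"spring":{"datasource":{"url":"jdbc:mysql://localhost:3306/db","username":"user","password":"pw"}}}' \
  -jar myapp-0.0.1-SNAPSHOT.jar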

Zabbix Trapper: Cannot get data from orabbix

I am using Orabbix to monitor my DB. The data from the queries executed on this DB using Orabbix are sent to the Zabbix server. However, I am not able to see the data reaching Zabbix.
On my Zabbix web console, I see this message on the triggers added - "Trigger expression updated. No status update so far."
Any ideas?
My update interval for the trigger is set to 30 sec.
Based on the screenshots you posted, your host is named "wfc1dev1" and you have items with keys "WFC_WFS_SYS_001" and "WFC_WFS_SYS_002". However, the hostname and item keys in the XML that Orabbix sends to Zabbix are different. Here is the XML:
<req><host>V0ZDMURFVg==</host><key>V0ZDX0xFQUZfU1lTXzAwMg==</key><data>MA==</data></req>
From this, we can deduce the host:
$ echo V0ZDMURFVg== | base64 -d
WFC1DEV
The key:
$ echo V0ZDX0xFQUZfU1lTXzAwMg== | base64 -d
WFC_LEAF_SYS_002
The data:
$ echo MA== | base64 -d
0
It can be seen that neither the host name nor the item key matches what is configured on the Zabbix server. Once you fix that, it should work.
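For convenience, all three base64 fields can be decoded in one pass. A rough sketch, assuming the request XML has been saved to a file req.xml (the file name is just an example):
# Pull every base64 value out of the XML and decode it.
grep -o '>[A-Za-z0-9+/=]\+<' req.xml | tr -d '><' | while read -r field; do
  echo "$field" | base64 -d
  echo
done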

Update automatic attributes in Opscode Chef (serialized_object)

I had a couple of nodes in my Chef server that hit a problem while bootstrapping and missed the FQDN and domain automatic attributes, due to which they were not indexed by SOLR and were not searchable with knife. I could not re-bootstrap these machines, but I wanted to fix this, and it took me a while to figure out how. I am posting this hoping that it will save others some time.
Automatic attributes are stored by Chef in its database and are not editable with knife (see Chef Attributes Overview). They live in a column named serialized_object in the nodes table, stored as hex, and are in fact a gzipped JSON string.
To obtain the JSON string:
Use a PostgreSQL client to connect to the Chef PostgreSQL database (you can find the credentials on the Chef server in /etc/chef-server/chef-server-secrets.json)
Save the contents of serialized_object to a file, say serialized_object.hex (it should look something like '\x1f8b08000...')
Run: xxd -p -r serialized_object.hex > serialized_object.gz
Run: gunzip serialized_object.gz
Now the file serialized_object contains the attributes in JSON format, which you can edit. After editing, you can store its contents back in the Chef server as follows:
Run: gzip serialized_object
Run: xxd -p serialized_object.gz > serialized_object.hex
Now use the PostgreSQL client to write the hex data back (be sure to remove the backslash and x prefix from the hex string) with the following query:
update nodes set serialized_object = decode('1f8b08000...','hex') where name = ''
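Putting the steps above together, a rough end-to-end sketch (the node name, database name, user, and host are placeholders; take the real values from chef-server-secrets.json):
# 1. Dump the hex-encoded serialized_object for the node.
psql -h 127.0.0.1 -U opscode_chef -d opscode_chef -t -A \
  -c "select serialized_object from nodes where name = 'mynode'" > serialized_object.hex
# 2. Strip the leading \x and convert hex -> gzip -> JSON.
sed -i 's/^\\x//' serialized_object.hex
xxd -p -r serialized_object.hex > serialized_object.gz
gunzip serialized_object.gz
# 3. Edit serialized_object, then pack it back up (tr removes xxd's line breaks).
gzip serialized_object
xxd -p serialized_object.gz | tr -d '\n' > serialized_object_fixed.hex
# 4. Write it back with: update nodes set serialized_object = decode('<hex>','hex') where name = 'mynode'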
Hope this helps someone :)

UnknownHostException while formatting HDFS

I have installed CDH4 on CentOS 6.3 64-bit in pseudo-distributed mode using the following instructions. Everything is set to localhost in the Hadoop configuration files. But still, when I format the name node, the exception below appears. When I add a '192.168.1.101 CentOSHost' entry to the /etc/hosts file, the exception goes away and I am able to format/start HDFS and run MR jobs.
I want to run MR jobs even when I am not connected to the network, without adding an entry to the /etc/hosts file. How can I do that?
12/08/27 22:17:15 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: CentOSHost: CentOSHost
at java.net.InetAddress.getLocalHost(InetAddress.java:1360)
at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:283)
at org.apache.hadoop.net.DNS.<clinit>(DNS.java:59)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:1017)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:565)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:145)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1095)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
It looks like somewhere the configuration is returning/using the hostname CentOSHost.
What does hostname --fqdn return for you?
For Hadoop, it is important that name lookup and reverse lookup work successfully: you should be able to resolve the hostname to an IP address and resolve the hostname back from the IP address (reverse resolution). This can be tested using the above command.
The entry in /etc/hosts is required for the reverse resolution to work, unless both the entry and the configuration point to localhost. Even in that case, hostname --fqdn should return localhost.
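A quick way to test both directions outside of Hadoop (a sketch; CentOSHost and 192.168.1.101 are the values from your setup, so substitute your own):
# Forward lookup: hostname -> IP address
hostname --fqdn
getent hosts CentOSHost
# Reverse lookup: IP address -> hostname
getent hosts 192.168.1.101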