I would like to know where the WebSphere configuration details are saved.
Specifically, the configuration details that are shown in the Administrative Console (on the web) or from the command line using wsadmin.
Some of the examples would be:
Java and Process Management: Class loader, Process definition, Process execution
Container Settings: Session management, SIP Container Settings, Web Container Settings, Portlet Container Settings
Are there XML files that persist these configuration details?
Nicholas
WebSphere Application Server configuration data is stored in XMI format in the profile configuration repository.
The settings you referred to are stored in server.xml
${PROFILE_HOME}/config/cells/${CELL}/nodes/${NODE}/servers/${SERVER}/server.xml
Along with server.xml, there are other files at the same path that store additional data:
resources.xml stores all of the resource information.
variables.xml stores the variables used in places like DB drivers, etc.
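For example, a minimal way to see these files on disk (the install path, cell, node, and server names below are placeholders for your own):
PROFILE_HOME=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01   # example profile location
ls $PROFILE_HOME/config/cells/myCell01/nodes/myNode01/servers/server1/
# typically lists server.xml, resources.xml, and variables.xml, among others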
Also, there are other important files mentioned here: https://websphereapplicationservernotes.wordpress.com/2012/12/13/websphere-application-server-important-files/
Since URLs tend to become obsolete, I am pasting the content here too:
CELL-scope
• admin-authz.xml
Contains the roles set for administration of the Admin console.
/appsrv01/config/cells//
• profileRegistry.xml
Contains a list of profiles and profile configuration data
• resources.xml
Defines operating cell scope environmental resources, including JDBC, JMS, JavaMail, URL end point configuration, and so on.
• security.xml
Contains security data, including all user ID and password information.
• virtualhosts.xml
Contains virtual host and Multipurpose Internet Mail Extensions (MIME)-type configurations.
• variables.xml
Contains cell level WebSphere variables
• wimconfig.xml
Contains the federated repository configurations for global security
/config/cells//wim/config/
NODE-scope
• namestore.xml
Provides persistent JNDI namespace binding data
• resources.xml
Defines node scope environmental resources, including JDBC, JMS, JavaMail, URL end point configuration, and so on
• serverindex.xml
Specifies all the ports used by servers on this node
• variables.xml
Contains node level WebSphere variables
SERVER-scope
• resources.xml
Contains the configuration of resources, such as, JDBC, JMS, JavaMail, and URL end points at server scope
• server.xml
Contains application server configuration data
• variables.xml
Contains server level variables
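If you prefer not to read the XML directly, the same data can also be inspected with wsadmin; a rough sketch (the server name server1 and the Jython language choice are just examples):
$PROFILE_HOME/bin/wsadmin.sh -lang jython -c "print AdminConfig.show(AdminConfig.getid('/Server:server1/'))"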
I have installed the PredictionIO engine from the following link using the first method. Now I want to run the engine using MySQL as a data source.
So I have configured the env.sh file as described below:
#!/usr/bin/env bash
#
# Copy this file as pio-env.sh and edit it for your site's configuration.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.
# SPARK_HOME: Apache Spark is a hard dependency and must be configured.
# SPARK_HOME=$PIO_HOME/vendors/spark-2.0.2-bin-hadoop2.7
SPARK_HOME=$PIO_HOME/vendors/spark-2.1.1-bin-hadoop2.6
POSTGRES_JDBC_DRIVER=$PIO_HOME/lib/postgresql-42.0.0.jar
MYSQL_JDBC_DRIVER=$PIO_HOME/lib/mysql-connector-java-8.0.11.jar
# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities. Default values are shown below.
#
# For more information on storage configuration please refer to
# http://predictionio.apache.org/system/anotherdatastore/
# Storage Repositories
# Default is to use PostgreSQL
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=MYSQL
# Storage Data Sources
# PostgreSQL Default Settings
# Please change "pio" to your database name in PIO_STORAGE_SOURCES_PGSQL_URL
# Please change PIO_STORAGE_SOURCES_PGSQL_USERNAME and
# PIO_STORAGE_SOURCES_PGSQL_PASSWORD accordingly
PIO_STORAGE_SOURCES_PGSQL_TYPE=jdbc
PIO_STORAGE_SOURCES_PGSQL_URL=jdbc:postgresql://localhost/pio
PIO_STORAGE_SOURCES_PGSQL_USERNAME=pio
PIO_STORAGE_SOURCES_PGSQL_PASSWORD=pio
# MySQL Example
PIO_STORAGE_SOURCES_MYSQL_TYPE=jdbc
PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://localhost:3306/pio
PIO_STORAGE_SOURCES_MYSQL_USERNAME=root
PIO_STORAGE_SOURCES_MYSQL_PASSWORD=root
I have also placed the mysql-connector-java jar in the $PIO_HOME/lib directory.
However, when I run the pio status command, I get the following error:
[INFO] [Management$] PredictionIO 0.12.1 is installed at /home/oodles/predictionio/PredictionIO-0.12.1
[INFO] [Management$] Inspecting Apache Spark...
[INFO] [Management$] Apache Spark is installed at /home/oodles/predictionio/PredictionIO-0.12.1/vendors/spark-2.1.1-bin-hadoop2.6
[INFO] [Management$] Apache Spark 2.1.1 detected (meets minimum requirement of 1.3.0)
[INFO] [Management$] Inspecting storage backend connections...
[INFO] [Storage$] Verifying Meta Data Backend (Source: MYSQL)...
[ERROR] [Management$] Unable to connect to all storage backends successfully.
The following shows the error message from the storage backend.
No suitable driver found for jdbc:mysql://localhost:3306/pio (java.sql.SQLException)
Dumping configuration of initialized storage backend sources.
Please make sure they are correct.
Source Name: MYSQL; Type: jdbc; Configuration: PASSWORD -> root, URL -> jdbc:mysql://localhost:3306/pio, TYPE -> jdbc, USERNAME -> root
Can someone please help me out with this?
Change jdbc:mysql://localhost:3306/pio to jdbc:mysql://localhost/pio in the env.sh file.
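In other words, the MySQL URL line in the file above would become (sketch only):
PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://localhost/pio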
I have had this same issue, and it is because the JDBC driver is not actually installed, even though the jar reference in pio-env.sh makes it look like it is. Go to https://dev.mysql.com/downloads/connector/j/, choose the "Platform Independent" option, and click the download button. The Oracle site tries to make you sign up for an account, but you don't have to: go to the bottom of the page where it says "No thanks, just start my download", right-click, and save that link. You can then use the wget command to fetch the file and unzip it in your Linux/Ubuntu environment. I am a noob to Linux/Unix environments, but this worked for me with PIO.
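Roughly, the steps above look like this as shell commands (a sketch only; the Connector/J version and download URL are illustrative and may differ for you):
# fetch the "Platform Independent" archive using the link copied from the MySQL site
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.11.zip
unzip mysql-connector-java-8.0.11.zip
# copy the driver jar so it matches MYSQL_JDBC_DRIVER in pio-env.sh
cp mysql-connector-java-8.0.11/mysql-connector-java-8.0.11.jar $PIO_HOME/lib/
pio status   # re-check that the Meta Data backend now connects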
SnappyData v. 0.5
Here's the command I used to start a Locator:
ubuntu@ip-172-31-8-115:/snappydata-0.5-bin/bin$ ./snappy-shell locator start
Starting SnappyData Locator using peer discovery on: 0.0.0.0[10334]
Starting DRDA server for SnappyData at address localhost/127.0.0.1[1527]
Logs generated in /snappydata-0.5-bin/bin/snappylocator.log
SnappyData Locator pid: 9352 status: running
It looks like it starts the DRDA server locally, with no outside interface for a client to connect to. So, I cannot reach my SnappyData Locator using this JDBC URL from an outside client host (e.g. my SquirrelSQL editor).
This does not connect:
jdbc:snappydata://MY-AWS-PUBLIC-IP-HERE:1527/
What property do I pass to my ./snappy-shell locator start command to get the DRDA server to start on a public IP address instead of "localhost/127.0.0.1"?
Use the -client-bind-address and -client-port options. For a locator, also use the -peer-discovery-address and -peer-discovery-port options to specify the bind address for other locators/servers/leads (which is what they pass in their -locators=<address>:<port> setting):
snappy-shell locator start -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients>
See the output of snappy-shell locator --help for commonly used options.
For SnappyData releases, you may find it much easier to use the global configuration for all of the locators, servers, and leads. Check the documentation on configuring the cluster.
This allows you to specify all options for all JVMs of the cluster in conf/locators, conf/leads, and conf/servers, then start everything with snappy-start-all.sh, check status with snappy-status-all.sh, and stop everything with snappy-stop-all.sh.
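A minimal sketch of what that per-node configuration might look like (all hostnames, IPs, and ports are placeholders, and the script locations may vary by release):
# conf/locators -- one line per locator host with its options
locator-host -peer-discovery-port=10334 -client-bind-address=locator-public-ip -client-port=1527
# conf/servers -- one line per server host
server-host -locators=locator-host:10334 -client-bind-address=server-public-ip
# then, from the product directory:
./sbin/snappy-start-all.sh    # start locators, servers, and leads
./sbin/snappy-status-all.sh   # check cluster status
./sbin/snappy-stop-all.sh     # stop everything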
On a related note, we at SnappyData Inc. are developing scripts to enable users to quickly launch a SnappyData cluster on AWS.
If you want to try it out, the steps below will guide you. We would love to hear your feedback on this.
Download its development branch: git clone https://github.com/SnappyDataInc/snappydata.git -b SNAP-864 (you don't need to clone the repo for this, but I could not find a way to attach the scripts here).
Go to the ec2 directory: cd snappydata/cluster/ec2
Run snappy-ec2: ./snappy-ec2 -k ec2-keypair-name -i /path/to/keypair/private/key/file launch your-cluster-name
See this README for more details.
Does anyone know how to enable kerberos with Apache Drill? Is it possible. I can't seem to find any documentation on it, or any questions/answers floating around with the information on it. I am currently running a CDH cluster.
I am getting this error when trying to use HDFS with Drill:
Error: PERMISSION ERROR: SIMPLE authentication is not enabled.
Available:[TOKEN, KERBEROS]
HDFS + Kerberos integration isn't currently supported / tested / documented. Vote on this ticket to track when it becomes available:
https://issues.apache.org/jira/browse/DRILL-3584
The Drill team does not provide any documentation about how to enable Kerberos, and they haven't tested Kerberos with Drill. Drill engineering does believe that it should work, though.
In order to gain access to the cluster once it is Kerberized, you must configure certain files.
Make an HDFS Superuser account as indicated in this Cloudera doc. On the Main Node, run
•sudo kadmin.local
In addition, add an 'hdfs' principal with this command:
•addprinc hdfs@LOCALDOMAIN -- where LOCALDOMAIN is the Kerberos realm
In order to enable authentication with Kerberos, we also need to copy the hadoop-yarn-api.jar file into Drill's class path. An example is given below:
•cp /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/client/hadoop-yarn-api.jar ~/apache-drill/jars/
The step above and the three that follow must be performed on every node of the cluster on which Apache Drill is installed.
Next, Drill's conf/core-site.xml file should be edited to contain the following snippet of XML. You might have to copy this file from /etc/hadoop/conf.cloudera.yarn/core-site.xml or a similar path.
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
After this step, you will also need to add the following XML snippet to Drill's core-site.xml file. In this instance, hdfs/_HOST@LOCALDOMAIN is my principal. The value can be found in the hdfs-site.xml file.
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@LOCALDOMAIN</value>
</property>
All that is left to do is create an 'hdfs' Kerberos ticket for the user we are logged in as:
•kinit hdfs -- hdfs is the super user
Then start up each of the drillbits:
•/opt/apachedrillfolder/bin/drillbit.sh start
So now Drill has both the configuration and the authority to use our Kerberized HDFS store. Give it a shot by opening up a Drill prompt (drill-conf) and trying a query.
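A quick smoke test of the whole setup might look like this (assuming an HDFS storage plugin named hdfs is already configured in Drill; all paths and names are illustrative):
kinit hdfs                                    # obtain the Kerberos ticket, as above
/opt/apachedrillfolder/bin/drillbit.sh start  # start the drillbit on this node
/opt/apachedrillfolder/bin/drill-conf         # open a Drill prompt
# then, at the Drill prompt, try a simple query such as:
#   SELECT * FROM hdfs.`/tmp/sample.csv` LIMIT 5;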
I am working on a notification service using the IBM MQ messaging provider in a JBoss EAP 6.1 environment. I am successfully able to send messages via the MQ JCA resource adapter RAR, i.e. the wmq.jmsra.rar file. However, on the consumer side my current configuration looks like this:
@MessageDriven(
activationConfig = {
@ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Queue"),
@ActivationConfigProperty(propertyName="destination", propertyValue="F2.QUEUE"),
@ActivationConfigProperty(propertyName="providerAdapterJNDI", propertyValue="java:jboss/jms/TopicFactory"),
@ActivationConfigProperty(propertyName="queueManager", propertyValue="TOPIC.MANAGER"),
@ActivationConfigProperty(propertyName="hostName", propertyValue="10.239.217.242"),
@ActivationConfigProperty(propertyName="userName", propertyValue="root"),
@ActivationConfigProperty(propertyName="channel", propertyValue="TOPIC.CHANNEL"),
@ActivationConfigProperty(propertyName="port", propertyValue="1422")
})
My problem is that the consumer of this service does not want to add any port number, hostName, or queueManager properties in these beans. They also do not want to use ejb-jar.xml to externalize these configs. I have researched and found that we can add a domain IBM Message Driven Bean, but with no success. Any suggestions on what I can do here to externalize all of these configurations?
EDIT: The JCA resource adapter is deployed at the consumer end, if that makes it any easier.
Thanks
You can actually externalize an MDB's activation spec properties to the server configuration file.
Create the ejb-jar.xml file, but do not put the actual values in the file; use property placeholders:
<activation-config-property>
<activation-config-property-name>hostName</activation-config-property-name>
<activation-config-property-value>${wmq.host}</activation-config-property-value>
</activation-config-property>
Do this for all of the desired properties.
Ensure that property replacement for Java EE spec files (ejb-jar.xml, in this case) is enabled in the server configuration file:
<subsystem xmlns="urn:jboss:domain:ee:1.2">
    <spec-descriptor-property-replacement>true</spec-descriptor-property-replacement>
</subsystem>
Then, in the server configuration file, provide values for your properties:
<system-properties>
    <property name="wmq.host" value="10.0.0.150"/>
</system-properties>
Once your MDBs are packaged, you will not need to change any of the files in the MDB jar - just provide the properties in the server configuration.
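As a sketch, the same system property could also be added through the JBoss CLI instead of editing the configuration file by hand (standalone mode assumed; the host value is just an example):
# adds <property name="wmq.host" .../> under <system-properties> in the server configuration
$JBOSS_HOME/bin/jboss-cli.sh --connect --command="/system-property=wmq.host:add(value=10.0.0.150)"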
You can avoid adding the host name, port number, and so on in the MDB; you only need to define destinationType in the MDB, and the rest you can configure in your application server, such as the activation specification, queues, and queue connection factories.
I have done the same thing, but I used IBM WebSphere Application Server.
I am using SQL Server Integration Services (SSIS) in SQL Server Business Intelligence Development Studio.
I need to do the following task: read from a source database and write the data to a destination flat file, but at the same time the source database should be configurable.
That means in the OLEDB Connection Manager, the connection string should change dynamically. This connection string should be taken from a configuration/XML/flat file.
I read that I can use variables and expressions to change the connection string dynamically. But how do I read the connection string value from a config/XML/flat file and set the variable?
This part I am unable to do. Is this the right way to achieve this? Can we add web.config files to an SSIS project?
First, add variables to your SSIS package (package scope); I used FileName, OleRootFilePath, OleProperties, and OleProvider, each of type String. Then create a configuration file (selecting each variable's Value) and populate the values in it, e.g. OleProvider - Microsoft.ACE.OLEDB.12.0; OleProperties - Excel 8.0;HDR=; OleRootFilePath - your Excel file path; FileName - the file name.
In the connection manager, I then set the connection string expression dynamically via Properties -> Expressions -> ConnectionString, e.g.:
"Provider=" + #[User::OleProvider] + "Data Source=" + #[User::OleRootFilePath]
+ #[User::FileName] + ";Extended Properties=\"" + #[User::OleProperties] + "NO \""+";"
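With the sample values mentioned above, that expression would evaluate to a connection string along these lines (values are illustrative):
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\SourceWorkbook.xls;Extended Properties="Excel 8.0;HDR=NO";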
This way, once you set the variable values and change them in your configuration file, the connection string changes dynamically; this helps especially when moving from development to production environments.
Some options:
You can use the Execute Package Utility to change your datasource, before running the package.
You can run your package using DTEXEC and change your connection by passing in a /CONNECTION parameter. You could save it as a batch file so that next time you don't need to type the whole thing and can just change the data source as required (see the sketch after this list).
You can use the SSIS XML package configuration file. Here is a walk through.
You can save your configurations in a database table.
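A sketch of the DTEXEC approach mentioned in the list above (the package path, connection manager name, and connection string are all placeholders, and the exact quoting may need adjustment):
dtexec /F "C:\packages\ExtractToFlatFile.dtsx" /Conn "SourceOleDb";"Data Source=PRODSERVER;Initial Catalog=SourceDb;Provider=SQLNCLI11.1;Integrated Security=SSPI;"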
Here's some background on the mechanism you should use, called Package Configurations: Understanding Integration Services Package Configurations.
The article describes 5 types of configurations:
XML configuration file
Environment variable
Registry entry
Parent package variable
SQL Server
Here's a walkthrough of setting up a configuration on a Connection Manager: SQL Server Integration Services SSIS Package Configuration. I do realize it uses an environment variable for the connection string (not a great idea), but the basics are identical to using an XML file; the only steps you have to change in that walkthrough are the configuration type and the path.
Go to Package properties -> Configurations -> Enable Package Configurations -> Add -> XML configuration file -> specify the dtsconfig file -> click Next. In the OLEDB properties, tick the connection string; the connection string value will be displayed. Click Next and then Finish, and the package is configured.
You can also add an environment variable in this process.
These answers are correct, but they are old and apply to the Package Deployment Model.
What I actually needed was to change the server name and database name of a connection manager, and I found this very helpful:
https://www.youtube.com/watch?v=_yLAwTHH_GA
It is better suited for people using SQL Server 2012/2014/2016 ... with the Project Deployment Model.