I am installing DataHub for the first time, and I am trying to integrate Hybris Commerce 6.5 with Hybris Data Hub. I have set up Hybris Commerce 6.5 with MySQL 5.7.17 and initialized the Hybris platform; everything is OK.
How can I integrate Data Hub into it?
There are several steps involved, but in a nutshell you will need to do the following. See the Hybris Wiki or the Hybris Help website for more details.
1) Enable the Data Hub extensions in Hybris by adding them to the localextensions.xml file:
<!-- Data Hub extensions -->
<extension name="datahubadapter"/>
<extension name="datahubbackoffice"/>
After running a build and starting Hybris, this will expose the Hybris adapter API endpoint for Data Hub, for example: http://localhost:9001/datahubadapter
2) Next, go to the Hybris HMC and create a service user that Data Hub can use to authenticate with Hybris. In my case I created a user called datahub-user within admingroup; there may be a more appropriate group to put it in.
3) Then you will need to set up Data Hub with Tomcat; the details are on the Hybris Wiki or the Hybris Help website. In the Data Hub local.properties file, add the following properties to connect to Hybris:
targetsystem.hybriscore.url=http://YOUR_HYBRIS_HOSTNAME_OR_IP:9001/datahubadapter
targetsystem.hybriscore.username=datahub-user
targetsystem.hybriscore.password=YourSetPassword
4) After starting Hybris and Data Hub, initiate the connection to Data Hub from Hybris. This can be done from the Hybris HMC: in the left menu, expand "SAP Integration" and click "SAP Administration". You will see a button labeled "Start Upload"; clicking it initiates the connection between Hybris and Data Hub.
Depending on your business needs there are other steps to consider, such as mapping sales areas to catalogs in the SAP global configuration area of the HMC, and setting up the inbound directory paths if you are storing products in SAP Material Master.
Further Reading On Data Hub Setup:
https://help.hybris.com/6.5.0/hcd/8ba79fcc86691014a83e8530484d3892.html
I found the solution.
Step 1: https://blogs.sap.com/2017/03/14/hybris-sap-integrations-part-1/
Step 2: https://blogs.sap.com/2017/03/20/hybris-sap-integrations-part-2/
I suggest using the recipe installer.
Go to the platform directory and run the command below:
. ./setantenv.sh (For Mac)
setantenv.bat (For Windows)
Then go to the Hybris installer directory and run the command below.
Note: you will find the installer directory inside the hybris directory.
./install.sh -r sap_som_b2b (for Mac)
install.bat -r sap_som_b2b (for Windows)
Copied from:
https://www.queshub.com/how-to-install-sap-hybris-using-recipe-installer-
I have deployed a Spring Boot application on Google Compute Engine from my local computer using the Cloud SDK command line, following this tutorial: https://cloud.google.com/community/tutorials/kotlin-springboot-compute-engine#before_you_begin. I created the Google Storage bucket and then followed the steps in the tutorial to deploy the Spring Boot project. Deployment works fine. But now I have to deploy changes to the deployed project. How can that be achieved from the command line without restarting the VM instance?
I have updated the Google Storage bucket that I provided in --metadata BUCKET= when creating the instance: after building the project, I copied my new jar from its local location to the bucket. But after refreshing the URL in the browser I can't see the new changes.
As far as I can understand from your description, you need to download the new version from the bucket to your VM. In the same directory where you created the instance-startup.sh as in [1], you can execute the command "gsutil cp gs://${BUCKET}/demo.jar ." if you replaced the .jar file in the bucket; if the name changed, adjust the command so it matches the new version that you uploaded.
Then you can stop the Java process running the previous jar file, for example with "ps aux | grep ${jarfilename}" and then "kill $PID". After that you can run the new version with "java -jar $jarfile.jar", again matching the name of your new jar file.
[1] https://cloud.google.com/community/tutorials/kotlin-springboot-compute-engine#create_a_startup_script
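Putting those steps together, a sketch of an update script to run on the VM could look like this (the bucket variable and the demo.jar name follow the tutorial; adjust both to your setup):

```shell
#!/bin/sh
# Sketch: replace the running jar with the new build from the bucket.

redeploy() {
  bucket=$1
  jar=$2
  # 1. pull the new jar from the bucket (same copy as in the startup script)
  gsutil cp "gs://${bucket}/${jar}" .
  # 2. stop the process running the previous jar, if any
  pid=$(pgrep -f "java -jar ${jar}" || true)
  if [ -n "$pid" ]; then
    kill "$pid"
  fi
  # 3. start the new version in the background
  nohup java -jar "${jar}" >app.log 2>&1 &
}

# usage on the VM, e.g.:
# redeploy "$BUCKET" demo.jar
```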
Since installing Service Fabric SDK 2.2.207 I'm not able to change the cluster data and log paths (with previous SDKs I could).
I tried:
Editing the registry keys in HKLM\Software\Microsoft\Service Fabric - they just revert back to C:\SfDevCluster\data and C:\SfDevCluster\log when the cluster is created.
Running PowerShell: & "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -PathToClusterDataRoot d:\SfDevCluster\data -PathToClusterLogRoot d:\SfDevCluster\log. This works successfully, but upon changing the cluster mode to 1-node (a configuration newly available with this SDK), the cluster moves back to the C drive.
Any help is appreciated!
Any time you switch the cluster mode on a local dev box, the existing cluster is removed and a new one is created. You can use DevClusterSetup.ps1 to switch from a 5-node to a 1-node cluster by passing -CreateOneNodeCluster, and you can pass the data and log root paths to it at the same time.
We are developing a service for our QA staff.
The main goal is for a tester to be able to select, in our web interface, a dump from a GitHub branch for a particular machine and click a "Deploy" button, after which the Rails app under test is deployed to DigitalOcean.
The feature I am now working on, is collecting deployment logs and displaying them through our web interface.
On each DO droplet there is a "logs" folder which contains different log files that are populated during deployment:
migrations_result_#{machine_id}.log, bundle_result_#{machine_id}.log, etc.
where #{machine_id} is the id of the deployed machine in our service (it is not the droplet id).
With the help of the remote_syslog gem we monitor the "logs" folder on each droplet and send the files over UDP to our main service server, where, with the help of rsyslog, we store them in a particular folder, let's say /var/log/deplogs/.
So in /var/log/deplogs/ we have:
migrations_result_1.log, bundle_result_1.log,
migrations_result_2.log, bundle_result_2.log,
...
migrations_result_n.log, bundle_result_n.log
How can I monitor this folder and save the contents of each log file to a MySQL database?
I need to achieve something like the following (Ruby code):
Machine.find(#{machine_id}).logs.create!(text: "migrations_result_#{machine_id}.log contents")
Rsyslog does not seem to be able to achieve this. Or am I missing something?
Any advice?
Thanks in advance, and sorry for my English; I hope you get the idea.
First of all, congratulations! You are in front of a beautiful problem. My suggestion is to use divide and conquer.
Here are my considerations:
Put the relevant folder(s) under version control (for example, Git).
Check via Git commands which files changed every X amount of time.
Also obtain the differences between the prior version of each file and the new one, so you can update your database by parsing the new info.
Just in case, there are several ways to call system commands from Ruby (for example, backticks, system, or Open3).
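Whichever change-detection approach you pick, you will also need the filename-to-machine-id mapping. Even though the database insert itself will be Ruby/ActiveRecord on your side, here is a small shell sketch of that mapping over the /var/log/deplogs folder from the question (the commented rails runner line is a hypothetical example, not something from your codebase):

```shell
#!/bin/sh
# List each log file under the sync directory together with the machine id
# parsed from its name (e.g. migrations_result_7.log -> 7).
list_machine_ids() {
  dir=${1:-/var/log/deplogs}
  for f in "$dir"/*_result_*.log; do
    [ -e "$f" ] || continue
    name=$(basename "$f" .log)       # e.g. migrations_result_7
    printf '%s %s\n' "${name##*_}" "$f"
    # here you could hand the file contents to Rails, e.g. (hypothetical):
    # bin/rails runner "Machine.find(${name##*_}).logs.create!(text: File.read('$f'))"
  done
}

list_machine_ids
```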
Hope that helps,
I am following the process outlined here to create an Axis service from a POJO:
Webinar: Building Applications with Carbon Studio for On-Premise and the Cloud.
I created the Axis services as described in the webinar.
I ran mvn package successfully.
I started the WSO2 ESB in Eclipse successfully.
But when I deploy my app to the WSO2 ESB, I see the following in the console:
INFO - ApplicationManager Deploying Carbon Application : MyCarbonApp-1.0.0.car...
WARN - ApplicationManager No artifacts found to be deployed in this server. Ignoring
Carbon Application : MyCarbonApp-1.0.0.car
and the service does not appear on the ESB console's web services list.
When I look at the file called MyAxisService.service I see the following:
#Contains the information about the axis2 service generation information from the eclipse workspace
#Fri May 25 15:53:09 NZST 2012
Class-name=com.unisys.comms.esbselection.MyAxisService
Type=FROM_CLASS
Service-name=MyAxisService
Projects=MyCarbonApp
What does this warning mean?
What can I do to further investigate the cause?
Is there some obvious step I've missed when creating the app?
Thanks in advance.
Please follow these steps to solve this issue:
Go to the carbon.xml file located in ....\wso2esb-4.9.0\repository\conf.
Add the new server role to the ServerRoles XML element:
<ServerRoles>
    <Role>EnterpriseServiceBus</Role>
    <Role>ApplicationServer</Role>
</ServerRoles>
Restart the server.
This warning means that the Server Role of the C-App artifacts found in your Carbon Application Archive (CAR) does not match the Server Role of the ESB.
The reason is that Axis2 web services by default have the Server Role "ApplicationServer". Hence, if you deploy the C-App on the WSO2 AS, it will deploy without any problem. But in this case you have tried to deploy it to the WSO2 ESB. Since the WSO2 ESB has the Server Role "EnterpriseServiceBus" and your Axis2 web service has the Server Role "ApplicationServer", they do not match each other. The result is that the C-App deployer ignores the C-App.
To solve this, you need to change the Server Role of your Axis2 Web Service. In order to do that, follow the steps below.
Go to the C-App project you created and browse to the Axis2 Service artifact folder in the "Artifacts" folder of the C-App.
Inside this Axis2 Service project you will see a file called "Artifact.xml". Open it by double-clicking on the file.
Once you double-click, the file will open in the Artifact editor. Scroll the editor down a bit.
There you will see a Drop Down next to a Label called "Server Role".
Select "EnterpriseServiceBus" option from the Drop down list and click on "Save All" button on the Eclipse Tool bar.
Go to the Servers view in Carbon Studio and click on the expand icon in front of the Carbon Server (the WSO2 ESB in this case).
Once you expand the Server, you will be able to see the Server Module (C-App project) you deployed in the ESB.
Right Click on the C-App module under the ESB Server and Select "Redeploy".
Now you will see Carbon Studio redeploy the C-App project, and if you followed all the steps above correctly, your Axis2 web service will be deployed in the ESB.
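For reference, the Server Role you pick in the editor is stored in the artifact.xml of the Axis2 service project inside the C-App. After the change it should contain something like the fragment below (the exact attributes are an approximation of the C-App artifact format; your artifact name and file will differ):

```xml
<artifact name="MyAxisService" version="1.0.0"
          type="service/axis2" serverRole="EnterpriseServiceBus">
    <file>MyAxisService.aar</file>
</artifact>
```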
Hope this helps!!
Thanks.
/Harshana
Whenever I create a new environment in Elastic Beanstalk, I manually configure the custom AMI ID, SNS notifications, etc., but I want to do this automatically, i.e., save the settings (custom AMI ID, SNS, key pair, etc.) into a configuration template. Can this configuration template be created through the command line tools or from the AWS Management Console? Please advise.
You can easily do this through Amazon's web console. If you have a configuration you like, just press "Save Configuration". You can then use "Edit/Load Configuration" to push that configuration to new environments.
If you are using the Elastic Beanstalk command line tools: when you set up an environment using the command git aws.config, it creates a directory called .elasticbeanstalk containing a file called config that looks like this:
[global]
AwsCredentialFile=/path/to/file/with/aws/account/credentials
ApplicationName=YourAppName
DevToolsEndpoint=git.elasticbeanstalk.your-region-name.amazonaws.com
EnvironmentName=yourEnvName
Region=your-region-name
Hope that helps!
Elastic Beanstalk's console is pretty lacking when it comes to configuration templates: you can't update or delete templates there. There is a command line tool that gives full control.
You can also get the AWS Eclipse plugin. It's not as full-featured as the CLI, but it is much better than the web console.