A few days ago I managed to run Cygnus on my Context Broker VM, as the documentation describes. All subscriptions between Cygnus and the CB are created without problems, and the notifications the CB sends reach Cygnus.
My doubt concerns the configuration of cygnus.conf: I think the failures I see when Cygnus sends data to COSMOS are related to how this file's fields are configured. The following file is the template to fill in, available in the documentation:
### ============================================
### OrionHDFSSink configuration
### channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
### sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
### comma-separated list of FQDNs/IP addresses regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = x1.y1.z1.w1,x2.y2.z2.w2
### port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
### default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = default
### default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = xxxxxxxxxxxxx
### HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
### how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
### FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = x.y.z.w
### Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
### ============================================
But it is not clear to me what address I have to put in the next field:
### Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = x1.y1.z1.w1,x2.y2.z2.w2
and I also don't know whether, for the Hive server field, the address I need to write is the same as the FIWARE COSMOS instance's IP address:
### Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = x.y.z.w
Looking at the Big Data Quick Start documentation, it seems that the value for cosmos_host in the case of using the FIWARE Lab Cosmos instance is cosmos.lab.fi-ware.org.
Regarding Hive, it is said:
Or remotely, by developing a Hive client (typically, using JDBC, but
there are some other options for other non Java programming languages)
connecting to cosmos.lab.fi-ware.org:10000.
so I guess that the hive_host is the same (cosmos.lab.fi-ware.org).
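Putting this together, the relevant cygnus.conf lines for the FIWARE Lab global instance would look like the sketch below (the username and password are placeholders for your own Cosmos credentials):
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
cygnusagent.sinks.hdfs-sink.cosmos_default_username = your_cosmos_user
cygnusagent.sinks.hdfs-sink.cosmos_default_password = your_cosmos_password
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.hive_port = 10000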
Finally, take into account the following:
In addition, all the documented connections to such global instance
(except for ssh connections and the Cosmos portal) must be done from a
FI-LAB virtual machine; on the contrary, the firewall will stop them.
which means that you should run Cygnus from a VM inside FIWARE Lab.
I have a server with Firebird 2.5.3 and I need to read the database from another server over an ODBC connection, for use in an SSIS (Integration Services) project. I shared the folder containing the .FDB database and set the address in my ODBC connection, but it doesn't work.
My firebird.conf file:
# ----------------------------
# TCP Protocol Settings
#
# The TCP Service name/Port number to be used for client database
# connections.
#
# It is only necessary to change one of the entries, not both. The
# order of precedence is the 'RemoteServiceName' (if an entry is
# found in the 'services.' file) then the 'RemoteServicePort'.
#
# Type: string, integer
#
RemoteServiceName = fb_db
RemoteServicePort = 5050
I tried without the port:
192.168.100.21:C:\IntegracaoRH\CONSISANET2_5.FDB
and with it:
192.168.100.21:5050:C:\IntegracaoRH\CONSISANET2_5.FDB
and got the same error both times.
How do I make a remote ODBC connection work?
According to the configuration you posted, your Firebird instance is running on port 5050; however, when you don't specify a port in the connection string, the Firebird client defaults to port 3050.
To use the right port, you need to specify it explicitly in the connection string, using the format <host>/<port>:<db-path-or-alias>.
In other words, something like:
192.168.100.21/5050:database-alias
Where database-alias should be the alias or the path of your database.
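For example, with the database path from your question, the full connection string would be:
192.168.100.21/5050:C:\IntegracaoRH\CONSISANET2_5.FDB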
Be aware that Firebird on Windows supports URLs of the form \\<host>\<db-path-or-alias> using the WNET protocol. However, my guess is that you navigated to the UNC path \\192.168.100.21\IntegracaoRH\CONSISANET2_5.FDB. There is no one-to-one correspondence between a UNC path and a Firebird WNET URL: they look the same, but they are not the same thing. The 'Browse' button in the ODBC configuration should only be used to select databases local to your machine.
As an aside, being able to browse to the UNC path \\192.168.100.21\IntegracaoRH\CONSISANET2_5.FDB suggests that your database is in a folder shared to the network. You shouldn't share databases over the network through a file share. It is insecure, as anybody with access can create a copy of the database and access it with full permissions, or even replace or otherwise damage it. Access to the database should always go through a Firebird server running on the same host as the database file.
I am currently developing an app using R Shiny. Having finished the app, I am now trying to deploy it to shinyapps.io so multiple users can reach and use it, but I have an issue with the deployment.
My app is for pharmacy management and performs CRUD operations, so it is bound to a database connection using this options configuration (running locally):
options(mysql = list(
  "host" = "127.0.0.1",
  "port" = 3306,
  "user" = "root",
  "password" = ""
))
One more thing: to connect to the database, I usually start XAMPP and switch on MySQL so my app can connect to the database locally. It worked flawlessly before deployment.
But it crashes instantly when I run it on shinyapps.io after deploying (I mean it disconnects automatically). So I tried to change the host IP to a public one like this (I am trying to get the IP address of the local machine):
configA <- system("ipconfig", intern = TRUE)
configB <- configA[grep("IPv4", configA)]
configC <- gsub(".*? ([[:digit:]])", "\\1", configB)
options(mysql = list(
  "host" = configC,
  "port" = 3306,
  "user" = "root",
  "password" = ""
))
The configC variable stores the machine's IPv4 address, but this still doesn't work. I attached a log at the link below.
How can I connect my app to MySQL from shinyapps.io? I use the DBI and RMySQL packages. Do I need to host MySQL somewhere first so my app can reach it? Can anyone give me a step-by-step explanation of how to do this? Thank you in advance.
Here is my error log from shinyapps.io:
http://textuploader.com/dulzh
For people who have the same problem and don't know how to solve it, I'll share what worked for me:
1) I recommend hosting your MySQL database on AWS (Amazon Web Services); it is free and performs well when syncing with any web service, especially shinyapps.io. Start by creating an account.
2) Validate your AWS account with full information, including a credit card, so you can access its services.
3) Click Services > Database > RDS.
4) You will be redirected to the AWS RDS dashboard, where you can manage your MySQL database instances. To create a new instance, click Launch DB Instance.
5) Here are my instance settings:
Engine options: MySQL
Use case: Production - MySQL
DB instance class: db.r4.large, 2 vCPU, 15.25 GiB RAM (I believe this choice depends on the performance you need)
Multi-AZ deployment: No
Storage type: Provisioned IOPS
Allocated storage: 100 GiB
Provisioned IOPS: depends on your allocated storage (I use 4000)
6) Then, in the Settings tab, fill in your DB instance identifier and the master username & password. After you click Next there is an advanced configuration step; fill in the DB name, and check all the log exports in the hope of easier maintenance later. When finished, click Launch DB Instance.
7) Wait until your instance's status becomes Available (keep refreshing to check).
8) Once the instance is Available, open it and scroll down to the Connect section. Note down the Endpoint and the Security Group Rules, plus the master username & password from the Details section.
9) In your server.R, edit your MySQL connection options from localhost to the AWS RDS endpoint (a sketch showing how these options are consumed with DBI/RMySQL follows this list):
options(mysql = list(
  "host" = "your Endpoint",
  "port" = 3306,
  "user" = "your master username of db instance",
  "password" = "your master password of db instance"
))
10) Before deploying your MySQL database from localhost to AWS RDS, first go to AWS > Services > VPC > Security Groups > (click the group name that is actively used by your instance) > Inbound Rules.
11) In Inbound Rules you must whitelist the external IP of every machine that accesses your app (see http://whatsmyip.org), and whitelist the shinyapps.io IP addresses listed in section 3.8.4 of http://docs.rstudio.com/shinyapps.io/applications.html#accessing-databases-with-odbc.
12) Lastly, your MySQL database cannot be moved from localhost to AWS RDS directly; I recommend installing MySQL Workbench to do it. Once it is installed, launch MySQL Workbench.
13) Create a new MySQL connection and fill in the connection form:
Connection name: (anything you like)
Connection method: TCP/IP
Hostname: (paste your Endpoint)
Port: 3306
Username: (your master username of the db instance)
Password: (your master password of the db instance)
14) After a successful connection to AWS RDS, open the connection and the MySQL Workbench UI will appear. Import your .bak file (your MySQL database dump) from Navigator > Management > Data Import > Import from Self-Contained File > browse to your file > Start Import.
15) You have successfully deployed your database to AWS RDS! You can run queries in Workbench to inspect all your tables and databases.
16) Run your Shiny app and test it. Done!
(If you ever get a message saying the app can't connect to your AWS RDS host, your external IP has probably changed; to solve it, whitelist your new IP in the AWS VPC security group again, as in step 10.)
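As a reference for step 9, here is a minimal sketch of how these options are typically consumed with the DBI and RMySQL packages mentioned in the question (the dbname value is a placeholder for your own schema name):
library(DBI)
library(RMySQL)

# Read the connection settings stored via options()
db <- getOption("mysql")

# Open a connection to the RDS instance
con <- dbConnect(RMySQL::MySQL(),
                 host = db$host,
                 port = db$port,
                 user = db$user,
                 password = db$password,
                 dbname = "your_database")  # placeholder: your own database name

# ... run queries with dbGetQuery(con, "SELECT ...") ...
dbDisconnect(con)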
I hope this is helpful for you!
I have created a bucket on my local system and I am trying to connect to another node located on a remote server. I can work with the nodes separately, but I need to join these two nodes to form a cluster. Is there a way to add the remote server node to my local server using the web UI?
When I tried to add the remote server's IP address by clicking "Add Server", I got the following error:
"Attention - Prepare join failed. Authentication failed. Verify username and password. Got HTTP status 401 from REST call post to http://XXX.XXX.XXX.XXX:8091/engageCluster2. Body was: []"
I used my local server's username and password. If I give the remote server's username and password, I get this error:
Attention - This node cannot add another node ('ns_1#XXX.XXX.XXX.XXX') because of cluster version compatibility mismatch. Cluster works in [4, 1] mode and node only supports [2, 0].
Is there a way to link them using the Java API? Can someone please help me with this?
Introduction
When configuring Elasticsearch I ran into a problem with binding the listening interfaces.
Somehow the documentation does not explain how to set up multiple network interfaces (the network and bind definitions).
Problem description
My intention is to set network.bind_host to _eth1:ipv4_ and _local_.
Even when trying to set bind_host to _local_ only, the Elasticsearch port 9200 is still only reachable via eth1 (of course, I restarted the server).
solutions tried
i have tested the firewall configuration by setting up a netcat server and this one works perfectly for that port
so this results in 2 Questions:
how to configure multiple nics? (whats the notation?)
would i require to change the network.publish_host ?!
.
any other pointers?
Current configuration:
network.bind_host: _eth1:ipv4_
network.publish_host: _eth1:ipv4_
network.host: _eth1:ipv4_
Also tested configuration:
network.bind_host: _local_
network.publish_host: _eth1:ipv4_
network.host: _local_
PS: AFAIK, publish_host is the NIC used for inter-node communication.
Using a YAML list for the desired property:
network.bind_host:
- _local_
- _en0:ipv4_
If I understand this answer correctly, publish_host should be _eth1:ipv4_. Your publish_host has to be one of the interfaces to which Elasticsearch binds via the bind_host property.
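Putting the two together, a minimal elasticsearch.yml sketch for this setup (using your eth1 in place of the en0 interface from the example above) would be:
network.bind_host:
  - _local_
  - _eth1:ipv4_
network.publish_host: _eth1:ipv4_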
The answer linked above is actually great, so I have to cite it here:
"bind_host" is the host that an Elasticsearch node uses in the socket
bind call when starting the network. Due to socket programming model,
you can "bind" to an address. By referencing an "address", the socket
allows access to one or all underlying network devices. There are
several addresses with predefined semantics, e.g. 0.0.0.0 is reserved
for "bind to all network devices". So the "bind_host" address does not
necessarily reflect a single unique address.
"publish_host" must be a single unique network address. It is used for
connect calls by other nodes, not for socket bind call by the node
itself. By using "publish_host" all nodes and clients can be sure they
can connect to this node. Declaring this single unique address to the
outside can be interpreted as "publishing", so it is called
"publish_host".
You can not set "bind_host" and "publish_host" to arbitrary values,
the values must adhere to the underlying socket model.
I am new to the 'cloud' concept. I have a Java-based data-entry application which runs well on my LAN.
On my LAN I install:
MySQL
Configure the instance (user name: root, pass: ******)
Import a dump of the dummy database entry_db, which is in raw format
Then I have an executable JAR file which, when run, displays a login screen.
I manage to log in successfully using the predefined ID and password (user: config, pass: ******).
After logging in I configure:
Database type
Database IP
User name (root)
Password ****
Database name (it auto-selects the database named entry_db)
In another window I configure the network file-sharing locations:
File-share location
Image path
Backup data path
Config file location (XML)
(Note: when I select the file-share location, all the other paths take the same value automatically.)
Then I create an Admin account (rather than a Supervisor or Operator account), log in with it, and I can now upload data and distribute it to all operators.
Here is my problem:
I configure a cloud machine on HP Cloud (they provide me a static IP) and then import my database to xeround.com.
I now have a DNS name and a port number, and also a login form using my PHP client.
How can I package all this into the same executable JAR file so it can be used from anywhere?
How can I use it from the web just like on my LAN?
What is the optimal configuration for this?
I work at Xeround.
I have read your question and I wanted to point out a couple of things: you should use the DNS name in the connection string where you used to put the hostname/IP of the MySQL server machine, and the provided port number where you used to put the MySQL default port (3306).
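For a Java application using JDBC, for example, this typically means a connection URL of the following form (the angle-bracket parts are placeholders for the DNS name and port Xeround gave you; entry_db is the database name from your setup):
jdbc:mysql://<your-xeround-dns>:<your-xeround-port>/entry_db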
Other than that, you can connect from anywhere that has access to the instance. If your JAR runs in the HP cloud, I suggest you create your Xeround database instance there as well; this will yield better performance.
If you still need help, we will be more than happy to assist. Just send us a quick email at support@xeround.com and we'll take it from there.
Cheers,
Yuval