JXTA Configuration

Can someone help me configure JXTA?
I don't understand what type of URI to place as the rendezvous.

Noor, you are using an old version of JXTA. The window you see has been completely removed from the code base in 2.7, and it is not used in 2.6 either. JXTA is no longer configured with this window.
About your question, there are two types of URIs: seed and seeding. Seeds describe the 'locations' of peer devices acting as RendezVous or Relay peers (i.e., 'super' peers). Peers use this information directly to connect to RDV or relay peers.
Seeding URIs are locations from which peers can load information about the locations of seed peers. Typically, they point to an XML advertisement document available online. Peers first load these documents, then use the extracted content to connect to rendezvous or relay peers.
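As a rough illustration of the difference, here is a minimal sketch of programmatic configuration with the JXTA 2.5/2.6 NetworkManager/NetworkConfigurator API; the rendezvous address and the seeding document URL are placeholders, so check them against your own deployment:

    import java.net.URI;

    import net.jxta.platform.NetworkConfigurator;
    import net.jxta.platform.NetworkManager;

    public class EdgePeer {
        public static void main(String[] args) throws Exception {
            // An edge peer that joins the network through RendezVous peers.
            NetworkManager manager = new NetworkManager(NetworkManager.ConfigMode.EDGE, "MyEdgePeer");
            NetworkConfigurator configurator = manager.getConfigurator();

            // A seed URI points directly at a RendezVous peer (the address is a placeholder).
            configurator.addSeedRendezvous(URI.create("tcp://192.0.2.10:9701"));

            // A seeding URI points at an online document that lists seed peers (the URL is a placeholder).
            configurator.addSeedingRendezvous(URI.create("http://example.com/rendezvous-seeds.xml"));

            manager.startNetwork();
            // ... application logic ...
            manager.stopNetwork();
        }
    }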
For more details about JXTA configuration, you can also read the Practical JXTA II document available online at Scribd.

Related

Secrecy in IPFS?

I have some questions regarding the functioning of IPFS. These are the following:
1. If I am the only one who has the Merkle root hash of an uploaded file on IPFS, is it infeasible for other peers to download/find this same file?
2. Related to point 1, can I see all the files that have been uploaded to IPFS from every peer in the network? If yes, how?
3. Once a file is uploaded to IPFS, it is split into chunks and these are given to peers in the network. In which scenario is it possible to 'lose' the file? Can the fact that several peers in the network go 'offline' forever be a security problem for IPFS?
4. Is there a way to allow only specific peers to have access to a specific file stored on IPFS?
Ad 1: As long as there is at least one peer with the content, other peers should (eventually, it may take time) be able to discover its address via the DHT and connect to it (as long as it has a public, dialable address). A primer on the networking challenges can be found in Core Course - Solving distributed networking problems with libp2p.
Ad 2: You can't see everything in realtime (you'd have to be connected to every peer in existence), but you can crawl the public DHT. See how it's done in https://github.com/raulk/dht-hawk or https://github.com/ipfs-search/ipfs-search/
Ad 3: Chunks are never "pushed" to other peers. Peers need to request them. Content is available on the network as long as peers provide it. You can "lose" a file if nobody wants to host it (see the pinning sketch after these answers). More on this in Core Course - The Lifecycle of Data in DWeb.
Ad 4: There is no access control at the file level. You can set up a private network in which only peers that know a secret key can connect to each other and exchange data.
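To illustrate the "as long as peers provide it" point from Ad 3, here is a hedged sketch that adds and pins a file through a local daemon's HTTP API using the java-ipfs-http-client library; the daemon address is the usual default, the file contents are made up, and the exact method signatures may differ between library versions:

    import java.util.List;

    import io.ipfs.api.IPFS;
    import io.ipfs.api.MerkleNode;
    import io.ipfs.api.NamedStreamable;
    import io.ipfs.multihash.Multihash;

    public class PinExample {
        public static void main(String[] args) throws Exception {
            // Talk to a local IPFS daemon over its HTTP API (this is the usual default address).
            IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");

            // Adding content makes this node a provider; the returned hash is the content address.
            NamedStreamable file = new NamedStreamable.ByteArrayWrapper("hello.txt", "hello ipfs".getBytes());
            MerkleNode added = ipfs.add(file).get(0);

            // Pinning keeps the blocks on this node, so the content stays retrievable
            // for as long as at least this peer keeps providing it.
            List<Multihash> pinned = ipfs.pin.add(added.hash);
            System.out.println("pinned: " + pinned);
        }
    }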
For future reference, a much better place for asking this type of generic question is the IPFS Community Forum at https://discuss.ipfs.io/c/help

How to use opendolphin without http sticky sessions in a load balanced scenario?

I read "Those who would like to enjoy the binding, presentation model structuring, testing capabilities, toolkit independence, and all the other benefits of OpenDolphin, but prefer REST (or other) remoting for data access, can use OpenDolphin with the in-memory configuration"
But I could not find any further hints in the docs.
I can't rely on sticky sessions in my load-balanced webserver.
Therefore I need to plug in something different for the HTTP session state.
Is there an OpenDolphin config property prepared for this? If not, are there any plugin points available?
Since OpenDolphin and Dolphin Platform use the remote presentation model pattern to synchronize presentation models between client and server, you need state on the server. Currently this state is stored in the session. As you said, it's no problem to use load balancing with sticky sessions to provide several server instances. If you need dynamic updates between the clients, a distributed event bus like Hazelcast will help.
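To illustrate the distributed event bus idea, here is a minimal Hazelcast sketch; the topic name and messages are made up, and the imports match Hazelcast 3.x (ITopic moved to com.hazelcast.topic in 4.x):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ITopic;

    public class SharedEventBus {
        public static void main(String[] args) {
            // Each server instance joins the same Hazelcast cluster.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // A topic acts as a simple distributed event bus between the instances.
            ITopic<String> updates = hz.getTopic("presentation-model-updates");

            // Every instance reacts to events published by any other instance.
            updates.addMessageListener(message ->
                    System.out.println("received update: " + message.getMessageObject()));

            // Publishing from one instance notifies the listeners on all instances.
            updates.publish("model-changed");
        }
    }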
Therefore I need to plug in something different for the HTTP session state.
What do you need? With the latest version (0.8.6) of Dolphin Platform you can access the HTTP client in the client API and provide custom headers or cookies. Will this help? Can you please tell us what you need, or open an issue at the Dolphin Platform GitHub repo?

Difference between using Listeners and MBean to send Notifications?

I've been reading about how the GemFire distributed data store/management/cache system performs notifications. While reading this, I had a question.
GemFire seems to be using MBeans to create notifications during events. How different/suitable is using MBeans to create notifications instead of implementing a Listener-based approach? (Not just in GemFire, but generally.)
Note: I am very new to the topic of MBeans, just with the understanding that their main purpose is to expose resources to be managed.
CONTEXT
...topic of MBean... it's main purpose is to expose resources to be managed.
That is correct. (GemFire) Resources exposed as MBeans can both be queried and altered, depending on what the MBean exposes for the resource (e.g. Region, DiskStore, Gateway, AEQ, etc), using JMX.
GemFire's JMX interface can then be consumed by applications and tools that use the JMX API. GemFire's Gfsh (command-line shell and management tool) along with Pulse (web monitoring tool) are both examples of JMX clients and the kinds of applications you could write that use JMX.
You can also use the standard JDK tools like jconsole or jvisualvm to connect to a GemFire Manager (managing node in the cluster that federates the view of all the members in the cluster as well as the ability to control any single member from the Manager). See GemFire's section in the User Guide on Management for more details.
Contrasting that with GemFire callbacks: callbacks (e.g. CacheListener) can be used by peer/client cache applications to register interest in certain types of events, like Region entry creation/updates, etc. Other callbacks, like CacheLoaders, can be used to read-through to an external data source (e.g. an RDBMS) on a Cache miss. Likewise, the CacheWriter can be used to 'write-through' to an external data source on a Cache (Region) create/update, or perhaps asynchronously with an AEQ/AsyncEventListener performing a 'write-behind' to the external data source.
There are many other callbacks and ways in which these callbacks can be used, but nearly all are used programmatically in a GemFire client/peer Cache application to "receive" notifications of some type.
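As a rough sketch of the listener-based approach, here is a minimal example that registers a CacheListener on a Region; the package names are from Apache Geode (older GemFire releases use com.gemstone.gemfire instead), and the region name and values are made up:

    import org.apache.geode.cache.Cache;
    import org.apache.geode.cache.CacheFactory;
    import org.apache.geode.cache.EntryEvent;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;
    import org.apache.geode.cache.util.CacheListenerAdapter;

    public class ListenerExample {

        // A callback that "receives" entry events on the member where it is registered.
        static class LoggingListener extends CacheListenerAdapter<String, String> {
            @Override
            public void afterCreate(EntryEvent<String, String> event) {
                System.out.println("created " + event.getKey() + " = " + event.getNewValue());
            }

            @Override
            public void afterUpdate(EntryEvent<String, String> event) {
                System.out.println("updated " + event.getKey() + " -> " + event.getNewValue());
            }
        }

        public static void main(String[] args) {
            Cache cache = new CacheFactory().create();
            Region<String, String> region = cache.<String, String>createRegionFactory(RegionShortcut.REPLICATE)
                    .addCacheListener(new LoggingListener())
                    .create("example");

            region.put("key", "value"); // triggers afterCreate on interested members
            cache.close();
        }
    }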
For more details, see the GemFire User Guide on Events and Event Handling.
ANSWER
Now, when it comes to "sending" notifications, GemFire does a fair amount of distribution on your application's behalf. JMX is primarily used to send notifications about management changes... a Region was added, the eviction policy changed, a Function was deployed, etc. In contrast, GemFire sends distribution events when data changes, to other members in the cluster that are interested in the event. "Interested" members typically include other nodes in the cluster that host the same Region and have the same keys/values, which need to be updated, and in certain cases atomically (in a TX) for consistency's sake.
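For the JMX side, a standard javax.management client can subscribe to those management notifications. The sketch below is illustrative only: the JMX service URL is a placeholder, and the ObjectName shown is the commonly used name of the distributed system MBean, so verify both against your GemFire version.

    import javax.management.MBeanServerConnection;
    import javax.management.Notification;
    import javax.management.NotificationListener;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ManagementNotifications {
        public static void main(String[] args) throws Exception {
            // Connect to the JMX manager; host and port are placeholders for your environment.
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Commonly used name of the distributed system MBean; verify it for your version.
            ObjectName distributedSystem = new ObjectName("GemFire:service=System,type=Distributed");

            NotificationListener listener = (Notification notification, Object handback) ->
                    System.out.println(notification.getType() + ": " + notification.getMessage());
            connection.addNotificationListener(distributedSystem, listener, null, null);

            Thread.sleep(60_000); // keep the connection open long enough to receive notifications
            connector.close();
        }
    }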
Now, if you want to send notifications from your application, then you are better off using Spring and Spring Data GemFire to configure and access GemFire. Spring provides exceptional support for application messaging.
Of course, other options are available, including JMS, for which Spring provides integration support.
All in all, the events/notifications that are sent and the distribution mechanism used depend highly on the event/notification type. As well, the manner in which to be notified (JMX Notification vs. GemFire callback) also depends on the type of message and its purpose.
Sorry for the lengthy explanation; it is a loaded/broad question and a complex subject that can vary greatly depending on the use case.
Hope this helps (a little ;-)

Adding centralized configuration to our servers

As our systems grow, there are more and more servers and services (different types, and multiple instances of the same type that require minor config changes). We are looking for a "centralized configuration" solution, preferably an existing one and nothing we need to develop from scratch.
The idea is something like this: a service goes up, it knows a single piece of data (its type + location + version + service ID, or something like that) and contacts some central service that will give it its proper config (a file, an object, or whatever).
If the service that goes online can't find the config service, it will either use a cached config or refuse to initialize (the behavior should probably be specified in the startup parameters it's getting from whoever or whatever is bringing it online).
The config service should be highly available, i.e. a cluster of servers (ZooKeeper keeps sounding like a perfect candidate).
The service should preferably support the concept of inheritance, allowing a global configuration file for the type of service and then specific overrides or extensions for each instance of the service by its ID. It should also support something like config versioning, allowing different configurations of the same service type to be kept for different versions, since we want to rely more and more on side-by-side rollout of services.
The other side of the equation is that there is a config admin tool that connects to the same centralized config service, and can review and update all the configurations based on the requirements above.
I know that if I modify the core requirement from the service pulling config data to having the data pushed to it, I could use something like Puppet or Chef to manage everything. I have to be honest, I have little experience with these two systems (our IT team has more), but from my investigation I can say they seemed NOT to be the right tools for this job.
Are there any systems similar to the one I describe above that anyone has integrated with?
I've only had experience with home-grown solutions, so my answer may not solve your issue but may help someone else. We've used web servers and SVN robots quite successfully for configuration management. This solution would not mean that you have to "develop from scratch", but it is not a turn-key solution either.
We had multiple web servers, each refreshing its configurations from an SVN repository on a synchronized per-minute basis. The clients would make requests of the servers with /type=...&location=...&version=... style HTTP arguments. Those values could then be used in the views when necessary to customize the configurations. We did this both with Spring XML files that were being reloaded live and with standard field=value property files.
Our system was pull-only, although we could trigger a pull via JMX if necessary.
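A minimal sketch of the client-side pull described above might look like the following; the base URL, parameter names, and values are placeholders for whatever your config web server expects:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Properties;

    public class ConfigClient {

        // Pulls a property-file style configuration from a config web server,
        // identifying this instance via query parameters.
        static Properties fetchConfig(String baseUrl, String type, String location,
                                      String version, String serviceId) throws IOException {
            URL url = new URL(baseUrl + "?type=" + type + "&location=" + location
                    + "&version=" + version + "&id=" + serviceId);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setConnectTimeout(5_000);
            connection.setReadTimeout(5_000);

            Properties config = new Properties();
            try (InputStream in = connection.getInputStream()) {
                config.load(in);
            }
            return config;
        }

        public static void main(String[] args) throws IOException {
            Properties config = fetchConfig("http://config.example.com/config",
                    "billing-service", "eu-west", "1.4.2", "billing-7");
            System.out.println(config);
        }
    }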
Hope this helps somewhat.
Config4* (of which I am the maintainer) can provide you with most of the capabilities you are looking for out-of-the-box, and I suspect you could easily build the remaining capabilities on top of it.
Read Chapters 2 and 3 of the "Getting Started" manual to get a feel for Config4*'s capabilities (don't worry, they are very short chapters). Doing that should help you decide how well Config4* meets your needs.
You can find links to PDF and HTML versions of the manuals near the end of the main page of the Config4* website.

DIRECTOR "TCP/IP Socket sever/client"

Would Director be an option for creating a socket client?
My client needs to accept server commands: frame rate, start, etc.
Director seems like it was made for controlling movies. I've got Director 11.5 at the office. Any Lingo experts who could advise?
Interaction with client
SERVER==>XML PACKET==>CLIENT==>swf plays on given frame and duration
Links
http://www.adobe.com/support/director/multiuser.html
http://www.adobe.com/products/director/multiuser/
http://smbus.org/specs/
http://opensmus.sourceforge.net/
Just found this
http://www.director-online.com/buildArticle.php?id=1158
Director does not natively support creating socket connections.
There is an Xtra for communicating with servers using text connections, called the Multiuser Xtra. It doesn't provide a full suite of socket commands, but it will allow you to open a connection to an arbitrary server and send messages back and forth. It has two modes: one that uses just a raw text connection (similar to telnet, which would require you to essentially roll your own server), and one that talks to the "Shockwave Multiuser Server" via the proprietary SMUS protocol. The "Shockwave Multiuser Server" provides services like matchmaking, forwarding messages to groups, etc., but it has been de-supported by Adobe, so most Director developers, I'd wager, are skittish about basing any long-term projects on it. There are third-party alternatives available, such as OpenSMUS, but you'd still be dependent on Adobe to continue supporting the Xtra.
If you want to continue down this path, I'd recommend going to the OpenSMUS site - there's a community and code samples available there.
Another possibility is to do your networking through a Flash object and embed the Flash object into Director. Since you're apparently coming from a Flex/AS3 background, that might be an easier migration for you - you could do the networking stuff in Flash and build the rest of your client in Director. This might be your best bet, especially if you already have some Flash-based infrastructure built for your project.