Syncing Ethereum with geth from another of my nodes, without executing and verifying txs - ethereum

I have 2 fully synced geth full nodes, but I need one more geth full node.
However, geth syncing is very slow because it verifies all of the downloaded sync data.
Can I skip verification of data that comes from my own, authorized nodes?
The trusted-node option only acts as a static node.

I would suggest backing up the chaindata folder from one of your already-synchronized nodes and restoring it on the third node:
Gracefully stop the already-synced node (SIGINT/CTRL-C).
Tar-compress the chaindata folder with zstd or lz4 compression.
Extract it into (i.e. replace) the chaindata folder at the location where the new node's geth saves its state.
The new node will then only sync from that point to the head of the chain.
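A minimal shell sketch of that procedure, assuming the default data directory ~/.ethereum on both machines (adjust paths, hostnames, and the compression tool to your setup):

# on the synced node, after stopping geth gracefully
tar -I zstd -cf chaindata.tar.zst -C ~/.ethereum/geth chaindata

# copy the archive over, then on the new node (with geth stopped):
rm -rf ~/.ethereum/geth/chaindata
tar -I zstd -xf chaindata.tar.zst -C ~/.ethereum/geth

# restart geth; it continues syncing from the restored state
geth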

Related

IPFS nodes in Docker and Web UI

This question refers to the following project, which is about integrating Fabric Blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by running all the steps described, and then supposedly I would be able to see the WebUI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked docker logs <container> for every container). Nevertheless, all I get is an empty page.
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it is suggested to change the IPFS configuration in order to bypass the CORS error. See the screenshot.
Screenshot
I tried to implement the solution by changing the headers in the configuration, but it doesn't seem to have any effect.
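For reference, the change I applied inside each container is the usual CORS tweak from the IPFS docs (the origin list here reflects my setup and may be incomplete):

ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://127.0.0.1:5001", "https://webui.ipfs.io"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'

(The daemon needs a restart after this for the new headers to take effect.)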
The confusion stems from the fact that after setting up the containers we have 3 different containers with 3 configurations, and in addition the IPFS daemon is running inside each one of them. Outside the containers, the IPFS daemon is not running.
I don't know if the IPFS daemon outside the containers should be running.
I'm not sure which configuration (if not all of them) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in a Linux-Ubuntu VM that meets all the necessary requirements.

rsync mechanism in wso2 all in one active-active

I am deploying an active-active all-in-one setup on 2 separate servers with wso2-am 2.6.0 and wso2 analytics 2.6.0. I am configuring my servers following this link. About the rsync mechanism in parts 4 and 5, I have some questions:
1. How can I figure out that my server is working rsync or sync?
2. What will happen in the future if I don't use rsync now and also don't use the configuration in parts 4 and 5?
1. How can I figure out that my server is working rsync or sync?
It is not really clear what you are asking for. rsync is just a command to synchronize files between folders.
What is rsync used for here: when deploying an API, the gateway creates or updates a few Synapse sequences or APIs in the filesystem (repository/deployment/server), and these file updates need to be synchronized to all gateway nodes.
I personally don't advise using rsync. The whole issue is that you need to invoke the rsync command regularly to synchronize the files created by the master node. That creates a certain delay for service availability and, most importantly, if something goes wrong and you want to use another node as the master, you need to switch the rsync direction, which is not really an automated process.
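To make that concrete, the documented approach boils down to a scheduled job on the master along these lines (hostnames, user, and installation paths are placeholders):

# cron entry on the master gateway: push deployment artifacts every minute
* * * * * rsync -avz --delete /opt/wso2am/repository/deployment/server/ wso2user@gateway2:/opt/wso2am/repository/deployment/server/

An API deployed on the master only becomes visible on the other gateway after the next run, and failing over means manually reversing the source and destination.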
We usually keep it simple using a shared filesystem (NFS, Gluster, ...), and then we have a fully active-active setup (OK, setting up HA NFS or GlusterFS is not particularly simple, but that's usually the job of the infra guys).
2. What will happen in the future if I don't use rsync now and also don't use the configuration in parts 4 and 5?
If the filesystems between the gateways are not synced or shared, then when you deploy an API from the publisher to a single gateway node, the other gateway nodes won't create the Synapse sequences and API artefacts. As a result, the other nodes won't pass client requests to the backend.

IPFS Swarm and IPFS Cluster

Consider 3 IPFS peers A, B and C.
When peer A establishes a connection with peers B and C (using ipfs swarm connect):
Will it form a cluster with A as the leader? If yes, do we need to manually create a secret key? And who manages the key, and how?
IPFS is a decentralized system: even if you establish the connections from peer A, in the end all peers will share each other's DHT (Distributed Hash Table) information and come out at the same level. There will not be any leader in a cluster, and all peers have the same privileges as any other peer in the network.
Right now there is no notion of a secret key in IPFS itself; all the data in the IPFS network is publicly available. If you want privacy, you have to implement a layer on top of it and encrypt data before putting it into IPFS.
Private IPFS is designed for a particular IPFS node to connect to other peers who have a shared secret key. With IPFS private networks, each node specifies which other nodes it will connect to. Nodes in that network don’t respond to communications from nodes outside that network.
An IPFS-Cluster is a stand-alone application and a CLI client that allocates, replicates and tracks pins across a cluster of IPFS daemons. IPFS-Cluster uses the RAFT leader-based consensus algorithm to coordinate storage of a pinset, distributing the set of data across the participating nodes.
The difference between private IPFS and IPFS-Cluster is notable. A private network is a feature implemented within the core IPFS functionality, whereas IPFS-Cluster is a separate app. IPFS and IPFS-Cluster are installed as different packages, run as separate processes, and have different peer IDs as well as API endpoints and ports. The IPFS-Cluster daemon depends on the IPFS daemon and should be started afterwards.
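As a rough sketch of what working with IPFS-Cluster looks like (assuming the ipfs daemon is already running; the CID is a placeholder):

# initialize the cluster peer configuration, then start it alongside ipfs
ipfs-cluster-service init
ipfs-cluster-service daemon

# pin a CID across the cluster and inspect where it was allocated
ipfs-cluster-ctl pin add <cid>
ipfs-cluster-ctl status <cid>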
For a private IPFS network, you should have Go and IPFS installed on all the nodes. Once that is done, run the following command to install the swarm key generation utility. The swarm key allows us to create a private network and tells network peers to communicate only with those peers who share the secret key.
This command should be run only on your Node0. We generate swarm.key on the bootstrap node and then just copy it to the rest of the nodes.
go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
Now run this utility on your first node to generate swarm.key under the .ipfs folder:
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
Copy the generated swarm.key file to the IPFS directory of each node participating in the private network. A sketch of the remaining bring-up follows; please let me know if you need further details on this.
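On each node, the remaining steps look roughly like this (the bootstrap address and peer ID are placeholders for your Node0's values):

# replace the public bootstrap list with your own bootstrap node
ipfs bootstrap rm --all
ipfs bootstrap add /ip4/<node0-ip>/tcp/4001/ipfs/<node0-peer-id>

# refuse any connection outside the private network, then start the daemon
export LIBP2P_FORCE_PNET=1
ipfs daemon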
No, it doesn't form a cluster, because there is a separate implementation of IPFS for the above-mentioned problem, named IPFS Cluster, in which a particular node pins data across various other nodes, through which the other nodes in the network can access the data. The pinning of data by the node works through a secret key. For more information, you can go through the documentation of IPFS Cluster.
https://cluster.ipfs.io/documentation/

openshift database and data directory

I was looking at a README file that raised some questions about database persistence on OpenShift.
Note: Every time you push, everything in your remote repo dir gets recreated
please store long term items (like an sqlite database) in the OpenShift
data directory, which will persist between pushes of your repo.
The OpenShift data directory is accessible relative to the remote repo
directory (../data) or via an environment variable OPENSHIFT_DATA_DIR.
https://github.com/ryanj/nodejs-custom-version-openshift/blob/master/README#L24
However, I could find no confirmation of this on the OpenShift website. Is this README out of date? I'd rather not test this, so it would be much appreciated if anyone with firsthand knowledge were willing to share it.
Yep, that readme file is up to date regarding SQLite. All gears have SQLite installed on them. Data should be stored in the persistent storage directory on your gear. This does not apply to MySQL/MongoDB/PostgreSQL as those databases are add-on cartridges pre-configured to use persistent storage, whereas SQLite is simply installed and available for use.
See the first notice found in the OpenShift Origin documentation here: https://docs.openshift.org/origin-m4/oo_cartridge_guide.html
Specifically:
Cartridges and Persistent Storage: Every time you push, everything in
your remote repo directory is recreated. Store long term items (like
an sqlite database) in the OpenShift data directory, which will
persist between pushes of your repo. The OpenShift data directory can
be found via the environment variable $OPENSHIFT_DATA_DIR.
The official OpenShift Django QuickStart shows the design pattern you should follow for adding SQLite to your application via the deploy action hook. See: https://github.com/openshift/django-example/blob/master/.openshift/action_hooks/deploy
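That deploy hook follows a copy-on-first-deploy pattern; a minimal sketch of the idea, assuming your app ships a seed database in the repo (the file name app.sqlite3 is hypothetical):

#!/bin/bash
# .openshift/action_hooks/deploy
# seed the persistent data dir on first deploy only; later pushes leave it alone
if [ ! -f "$OPENSHIFT_DATA_DIR/app.sqlite3" ]; then
    cp "$OPENSHIFT_REPO_DIR/app.sqlite3" "$OPENSHIFT_DATA_DIR/"
fi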

Poll a drive location in webMethods to see if a file has been uploaded from the mainframe?

I am just starting out with webMethods. I am starting on a project that will need to poll a location on my company's M: drive. The file will arrive randomly from the mainframe, and I will need webMethods to somehow pull the file from the drive location.
Once it arrives, I have to move the file from one location to another before I start parsing it.
If I had more code I would post it, but webMethods is new to me, and so far I have not actually written any code in it; I am, however, extremely comfortable with Java.
Drive location:
M:\tempTest\NewDriveLocation\ThisIsTheFileINeed
I need to be able to have a transform that pulls in a file from any directory location on Friday. I have an input retrieve on my MAP but have not figured out how to enter the file path so that it can find the file.
Software AG's webMethods Integration Server has a built-in feature called a File Polling Port, which you can configure to monitor a local or network shared directory for new files. The Integration Server Administrator's Guide instructions for how to set up a File Polling Port are as follows:
A file polling port periodically polls a monitoring directory for the arrival of files and
then performs special processing on them. When it detects a new file, the server copies
the file to a working directory, then runs a special file processing service against the file.
The service might parse, convert, and validate the file then write it to the file system. This
service, which you write, is the only service that can be invoked through this port. You
can limit the files the server accepts by filtering for specific file names.
For file polling to work, you must do the following:
Set up the Monitoring Directory on Integration Server. Other directories used for file
polling are automatically created by Integration Server.
Write a file processing service and make it available to Integration Server. See
webMethods Service Development Help and the Flat File Schema Developer's Guide for
examples of such services.
Set up the file polling port on Integration Server.
Use the following procedure to add a file polling port to Integration Server.
Open Integration Server Administrator if it is not already open.
In the Security menu of the Navigation panel, click Ports.
Click Add Port.
In the Add Port area of the screen, select webMethods/FilePolling.
Click Submit. Integration Server Administrator displays a screen requesting
information about the port.
Under Package, enter the following information:
Package Name - The package associated with this port.
When you enable the package, the server enables the port.
When you disable the package, the server disables the port.
If you are performing special file handling, specify the
package that contains the services that perform that
processing. If you want to process flat files from this port,
select WmFlatFile,which contains built-in services you can
use to process flat files.
Note: If you replicate this package, whether to a server on
the same machine or a server on a separate machine, a file
polling port with the same settings is created on the target
server. If a file polling port already exists on the target
server, its settings remain intact. If the original and target
servers reside on the same machine, they will share the
same monitoring directory. If the target server resides on
another machine, by default, another monitoring directory
will be created on the target server's machine.
Alias - An alias for the port. An alias must be between 1 and 255
characters in length and include one or more of the
following: ASCII characters, numbers, underscore (_),
period (.), and hyphen (-).
Description - A description of the port.
Under Polling Information, enter the following information:
Monitoring Directory - Directory on Integration Server that you want to
monitor for files.
Working Directory (optional) - Directory on Integration Server to which the server
should move files for processing after they have been
identified in the Monitoring Directory. Files must meet
age and file name requirements before being moved to
the Working Directory. The default sub-directory,
MonitoringDirectory..\Work, is automatically created
if no directory is specified.
Completion Directory (optional) - Directory on Integration Server to which you want files
moved when processing is completed in the Monitoring
Directory or Working Directory. The default sub-directory,
MonitoringDirectory..\Done, is automatically created
if no directory is specified.
Error Directory (optional) - Directory on Integration Server to which you want files
moved when processing fails. The default subdirectory,
MonitoringDirectory..\Error, is
automatically created if no directory is specified.
File Name Filter (optional) - The file name filter for files in the Monitoring Directory.
The server only processes files that meet the filter
requirements. If you do not specify this field, all files
will be polled. You can specify pattern matching in this
field.
File Age (optional) - The minimum age (in seconds) at which a file in the
Monitoring Directory can be processed. The server
determines file age based on when the file was last
modified on the monitoring directory. You can adjust
this age as needed to make sure the server does not
process a file before the entire file has been copied to
the Monitoring Directory. The default is 0.
Content Type - Content type to use for the file. The server uses the
content handler associated with the content type
specified in this field. If no value is specified, the server
performs MIME mapping based on the file extension.
Allow Recursive Polling - Whether Integration Server is to poll all sub-directories
in the Monitoring Directory. Select Yes or No.
Enable Clustering - Whether Integration Server should allow clustering in
the Monitoring Directory. Select Yes or No.
Number of files to process per interval (optional) -
Specifies the maximum number of files that the file
polling listener can process per interval. When you
specify a positive integer, the file polling listener
processes only that number of files from the
monitoring directory. Any files that remain in the
monitoring directory will be processed during
subsequent intervals. If no value is specified, the
listener processes all of the files in the monitoring
directory.
Under Security, in the Run services as user parameter, specify the user name you want
to use to run the services assigned to the file polling directory. Click the lookup button
to select a user. The user can be an internal or external user.
Under Message Processing, supply the following information:
Enable - Whether to enable (Yes) or disable (No) this file polling
port.
Processing Service - Name of the service you want Integration Server to
execute for polled files. The server executes this service
when the file has been copied to the Working directory.
This service should be the only service available from
this port.
Important! If you change the processing service for a file
polling port, you must also change the list of services
available from this port to contain just the new service.
See below for more information.
File Polling Interval - How often (in seconds) you want Integration Server to
poll the Monitoring Directory for files.
Log Only When Directory Availability Changes -
If you select No (the default), the listener will log a
message every time the monitoring directory is
unavailable.
If you select Yes, the listener will log a message in
either of the following cases:
The directory was available during the last polling
attempt but not available during the current
attempt
The directory was not available during the last
polling attempt but is available during the current
attempt
Directories are an NFS Mounted File System - For use on a UNIX system where the monitoring
directory, working directory, completion directory,
and/or error directory are network drives mounted on
the local file system.
If you select No (the default), the listener will call the
Java File.renameTo() method to move the files from the
monitoring directory to the working directory, and
from the working directory to the completion and/or
error directory.
If you select Yes, the listener will first call the Java
File.renameTo() method to move the files from the
monitoring directory. If this method fails, the listener
will then copy the files from the monitoring directory
to the working directory and delete the files from the
monitoring directory. This operation will fail if either
the copy action or the delete action fails. The same
behavior applies when moving files from the working
directory to the completion and/or error directory.
Cleanup Service (Optional) - The name of the service that you want to use to clean
up the directories specified under Polling Information.
Cleanup At Startup - Whether to clean up files that are located in the
Completion Directory and Error Directory when the file
polling port is started.
Cleanup File Age (Optional) - The number of days to wait before deleting processed
files from your directories. The default is 7 days.
Cleanup Interval (Optional) - How often (in hours) you want Integration Server to
check the processed files for cleanup. The default is 24
hours.
Maximum Number of Invocation Threads -
The number of threads you want Integration Server to
use for this port. Type a number from 1-10. The default
is 10.
Click Save Changes.
Make sure the port's access mode is properly set and that the file processing service is
the only service accessible from the port.
In the Ports screen, click Edit in the Access Mode field for the port you just created.
Click Set Access Mode to Deny by Default.
Click Add Folders and Services to Allow List.
Type the name of the processing service for this port in the text box under Enter
one folder or service per line.
Remove any other services from the allow list.
Click Save Additions.
Note: If you change the processing service for a file polling port, remember to
change the Allow List for the port as well. Follow the procedure described above
to alter the allowed service list.
The Processing Service referenced above is a service which you must develop.
If you are processing XML files with the File Polling Port, the file will be parsed prior to invoking your service, so you should create a service which has a single input argument of type object called node (which is the parsed XML document). You can then use the pub.xml services in the WmPublic package (such as pub.xml:xmlNodeToDocument to convert the node to an IData document) to process the provided node object. Refer to the Integration Server Built-In Services Reference for details on the pub.xml services.
If you are processing flat files (which is anything other than XML in webMethods), the File Polling Port will invoke your service with a java.io.InputStream object from which you can read the file contents, so you should create a service which has a single input argument of type object called ffdata. You can then use the pub.io services in the WmPublic package (such as pub.io:streamToBytes to read all data in the stream to a byte array) or the pub.flatFile services in the WmFlatFile package (such as pub.flatFile:convertToValues to convert ffdata to an IData document) to process the provided ffdata object. Refer to the Integration Server Built-In Services Reference for details on the pub.io services, and the Flat File Schema Developer's Guide for details on the pub.flatFile services.
If both XML and flat files are being written to the monitored directory, you can either write a single service that optionally accepts both a node and an ffdata object and checks at runtime which one exists in the pipeline, processing accordingly, or create two File Polling Ports which monitor the same directory but check for different file extensions (i.e. *.xml and *.txt respectively) using the File Name Filter setting on the port.
If you want to poll a Windows file share, you can specify the directory using a UNC file path (such as \\server\directory) on the File Polling Port.
Also, you need to make sure the user account under which Integration Server executes has appropriate file access rights to the various directories configured on the File Polling Port.