Poll a drive location in webMethods to see if a file has been uploaded from the mainframe? - webmethods

I am just starting out with webMethods. I am starting on a project that will need to poll a directory on my company's M: drive. The file will arrive randomly from the mainframe, and I will need to have webMethods somehow pull the file from the drive location.
Once retrieved, I have to move the file from one location to another before I start parsing it.
If I had more code I would post it, but webMethods is new to me and so far I have not actually written any code in it, although I am very comfortable with Java.
Drive location:
M:\tempTest\NewDriveLocation\ThisIsTheFileINeed
I need to be able to have a transform that pulls in a file from any directory location on Friday. I have an input retrieve on my MAP but have not figured out how to enter the file path so that it can find the file.

Software AG's webMethods Integration Server has a built-in feature called a File Polling Port, which you can configure to monitor a local or network-shared directory for new files. The Integration Server Administrator's Guide describes how to set up a File Polling Port as follows:
A file polling port periodically polls a monitoring directory for the arrival of files and
then performs special processing on them. When it detects a new file, the server copies
the file to a working directory, then runs a special file processing service against the file.
The service might parse, convert, and validate the file then write it to the file system. This
service, which you write, is the only service that can be invoked through this port. You
can limit the files the server accepts by filtering for specific file names.
For file polling to work, you must do the following:
Set up the Monitoring Directory on Integration Server. Other directories used for file
polling are automatically created by Integration Server.
Write a file processing service and make it available to Integration Server. See
webMethods Service Development Help and the Flat File Schema Developer's Guide for
examples of such services.
Set up the file polling port on Integration Server.
Use the following procedure to add a file polling port to Integration Server.
Open Integration Server Administrator if it is not already open.
In the Security menu of the Navigation panel, click Ports.
Click Add Port.
In the Add Port area of the screen, select webMethods/FilePolling.
Click Submit. Integration Server Administrator displays a screen requesting
information about the port.
Under Package, enter the following information:
Package Name - The package associated with this port.
When you enable the package, the server enables the port.
When you disable the package, the server disables the port.
If you are performing special file handling, specify the
package that contains the services that perform that
processing. If you want to process flat files from this port,
select WmFlatFile,which contains built-in services you can
use to process flat files.
Note: If you replicate this package, whether to a server on
the same machine or a server on a separate machine, a file
polling port with the same settings is created on the target
server. If a file polling port already exists on the target
server, its settings remain intact. If the original and target
servers reside on the same machine, they will share the
same monitoring directory. If the target server resides on
another machine, by default, another monitoring directory
will be created on the target server's machine.
Alias - An alias for the port. An alias must be between 1 and 255
characters in length and include one or more of the
following: ASCII characters, numbers, underscore (_),
period (.), and hyphen (-).
Description - A description of the port.
Under Polling Information, enter the following information:
Monitoring Directory - Directory on Integration Server that you want to
monitor for files.
Working Directory (optional) - Directory on Integration Server to which the server
should move files for processing after they have been
identified in the Monitoring Directory. Files must meet
age and file name requirements before being moved to
the Working Directory. The default sub-directory,
MonitoringDirectory..\Work, is automatically created
if no directory is specified.
Completion Directory (optional) - Directory on Integration Server to which you want files
moved when processing is completed in the Monitoring
Directory or Working Directory. The default sub-directory,
MonitoringDirectory..\Done, is automatically created
if no directory is specified.
Error Directory (optional) - Directory on Integration Server to which you want files
moved when processing fails. The default subdirectory,
MonitoringDirectory..\Error, is
automatically created if no directory is specified.
File Name Filter (optional) - The file name filter for files in the Monitoring Directory.
The server only processes files that meet the filter
requirements. If you do not specify this field, all files
will be polled. You can specify pattern matching in this
field.
File Age (optional) - The minimum age (in seconds) at which a file in the
Monitoring Directory can be processed. The server
determines file age based on when the file was last
modified on the monitoring directory. You can adjust
this age as needed to make sure the server does not
process a file before the entire file has been copied to
the Monitoring Directory. The default is 0.
Content Type - Content type to use for the file. The server uses the
content handler associated with the content type
specified in this field. If no value is specified, the server
performs MIME mapping based on the file extension.
Allow Recursive Polling - Whether Integration Server is to poll all sub-directories
in the Monitoring Directory. Select Yes or No.
Enable Clustering - Whether Integration Server should allow clustering in
the Monitoring Directory. Select Yes or No.
Number of files to process per interval (optional) -
Specifies the maximum number of files that the file
polling listener can process per interval. When you
specify a positive integer, the file polling listener
processes only that number of files from the
monitoring directory. Any files that remain in the
monitoring directory will be processed during
subsequent intervals. If no value is specified, the
listener processes all of the files in the monitoring
directory.
Under Security, in the Run services as user parameter, specify the user name you want
to use to run the services assigned to the file polling directory. Use the lookup button
to select a user. The user can be an internal or external user.
Under Message Processing, supply the following information:
Enable - Whether to enable (Yes) or disable (No) this file polling
port.
Processing Service - Name of the service you want Integration Server to
execute for polled files. The server executes this service
when the file has been copied to the Working directory.
This service should be the only service available from
this port.
Important! If you change the processing service for a file
polling port, you must also change the list of services
available from this port to contain just the new service.
See below for more information.
File Polling Interval - How often (in seconds) you want Integration Server to
poll the Monitoring Directory for files.
Log Only When Directory Availability Changes -
If you select No (the default), the listener will log a
message every time the monitoring directory is
unavailable.
If you select Yes, the listener will log a message in
either of the following cases:
The directory was available during the last polling
attempt but not available during the current
attempt
The directory was not available during the last
polling attempt but is available during the current
attempt
Directories are an NFS Mounted File System - For use on a UNIX system where the monitoring
directory, working directory, completion directory,
and/or error directory are network drives mounted on
the local file system.
If you select No (the default), the listener will call the
Java File.renameTo() method to move the files from the
monitoring directory to the working directory, and
from the working directory to the completion and/or
error directory.
If you select Yes, the listener will first call the Java
File.renameTo() method to move the files from the
monitoring directory. If this method fails, the listener
will then copy the files from the monitoring directory
to the working directory and delete the files from the
monitoring directory. This operation will fail if either
the copy action or the delete action fails. The same
behavior applies when moving files from the working
directory to the completion and/or error directory. (A
sketch of this rename-then-copy fallback appears after
this procedure.)
Cleanup Service (Optional) - The name of the service that you want to use to clean
up the directories specified under Polling Information.
Cleanup At Startup - Whether to clean up files that are located in the
Completion Directory and Error Directory when the file
polling port is started.
Cleanup File Age (Optional) - The number of days to wait before deleting processed
files from your directories. The default is 7 days.
Cleanup Interval (Optional) - How often (in hours) you want Integration Server to
check the processed files for cleanup. The default is 24
hours.
Maximum Number of Invocation Threads -
The number of threads you want Integration Server to
use for this port. Type a number from 1-10. The default
is 10.
Click Save Changes.
Make sure the port's access mode is properly set and that the file processing service is
the only service accessible from the port.
In the Ports screen, click Edit in the Access Mode field for the port you just created.
Click Set Access Mode to Deny by Default.
Click Add Folders and Services to Allow List.
Type the name of the processing service for this port in the text box under Enter
one folder or service per line.
Remove any other services from the allow list.
Click Save Additions.
Note: If you change the processing service for a file polling port, remember to
change the Allow List for the port as well. Follow the procedure described above
to alter the allowed service list.
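As an aside, the rename-then-copy behavior described for the "Directories are an NFS Mounted File System" setting boils down to the following move-with-fallback pattern. This is only a minimal sketch of the documented behavior in plain Java, not Integration Server's actual implementation:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class MoveWithFallback {
    // Sketch of the rename-then-copy-and-delete fallback described above.
    static void moveWithFallback(File source, File targetDir) throws IOException {
        File target = new File(targetDir, source.getName());

        // Fast path: File.renameTo(), which works within the same file system.
        if (source.renameTo(target)) {
            return;
        }

        // renameTo() commonly fails across mounted file systems, so fall back
        // to copying the file and then deleting the original; the move fails
        // if either the copy or the delete fails.
        Files.copy(source.toPath(), target.toPath(), StandardCopyOption.REPLACE_EXISTING);
        if (!source.delete()) {
            throw new IOException("Copied " + source + " but could not delete the original");
        }
    }
}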
The Processing Service referenced above is a service which you must develop.
If you are processing XML files with the File Polling Port, the file will be parsed prior to invoking your service, so you should create a service which has a single input argument of type object called node (which is the parsed XML document). You can then use the pub.xml services in the WmPublic package (such as pub.xml:xmlNodeToDocument to convert the node to an IData document) to process the provided node object. Refer to the Integration Server Built-In Services Reference for details on the pub.xml services.
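For illustration, a minimal sketch of such an XML processing service written as an IS Java service might look like the following (the service name processXmlFile is hypothetical, and error handling is reduced to the bare minimum):

import com.wm.data.*;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;

// Hypothetical processing service for XML files: input is a single object
// named "node" (the XML document already parsed by the content handler).
public static final void processXmlFile(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    Object node = IDataUtil.get(pc, "node");
    pc.destroy();

    try {
        // Build the input pipeline for pub.xml:xmlNodeToDocument.
        IData input = IDataFactory.create();
        IDataCursor ic = input.getCursor();
        IDataUtil.put(ic, "node", node);
        ic.destroy();

        // Convert the parsed node to an IData document.
        IData output = Service.doInvoke("pub.xml", "xmlNodeToDocument", input);
        IDataCursor oc = output.getCursor();
        IData document = IDataUtil.getIData(oc, "document");
        oc.destroy();

        // ... work with "document" here: map it, validate it, write it out ...
    } catch (Exception e) {
        throw new ServiceException(e.toString());
    }
}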
If you are processing flat files (which is anything other than XML in webMethods), the File Polling Port will invoke your service with a java.io.InputStream object from which you can read the file contents, so you should create a service which has a single input argument of type object called ffdata. You can then use the pub.io services in the WmPublic package (such as pub.io:streamToBytes to read all data in the stream to a byte array) or the pub.flatFile services in the WmFlatFile package (such as pub.flatFile:convertToValues to convert ffdata to an IData document) to process the provided ffdata object. Refer to the Integration Server Built-In Services Reference for details on the pub.io services, and the Flat File Schema Developer's Guide for details on the pub.flatFile services.
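Again for illustration, a hypothetical flat file processing service might look like this (here the stream is read manually, which is what pub.io:streamToBytes does for you):

import com.wm.data.*;
import com.wm.app.b2b.server.ServiceException;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// Hypothetical processing service for flat files: input is a single object
// named "ffdata" (an InputStream over the polled file's contents).
public static final void processFlatFile(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    InputStream ffdata = (InputStream) IDataUtil.get(pc, "ffdata");
    pc.destroy();

    try {
        // Read the entire stream into memory (what pub.io:streamToBytes does).
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        for (int n; (n = ffdata.read(chunk)) != -1; ) {
            buffer.write(chunk, 0, n);
        }
        byte[] bytes = buffer.toByteArray();

        // ... hand the bytes to pub.flatFile:convertToValues, or parse the
        // record layout directly, depending on your flat file schema ...
    } catch (Exception e) {
        throw new ServiceException(e.toString());
    }
}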
If both XML and flat files are being written to the monitored directory, you can either write a service which optionally accepts both a node and ffdata object and checks which one exists in the pipeline at runtime, processing accordingly (as sketched below), or you can create two File Polling Ports which monitor the same directory but check for different file extensions (e.g., *.xml and *.txt respectively) using the File Name Filter setting on the port.
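A sketch of the first approach, assuming the same Java service conventions as above (the service name processPolledFile is hypothetical):

import com.wm.data.*;
import com.wm.app.b2b.server.ServiceException;

// Hypothetical single service handling both cases: check which input
// the port placed in the pipeline and branch accordingly.
public static final void processPolledFile(IData pipeline) throws ServiceException {
    IDataCursor pc = pipeline.getCursor();
    Object node   = IDataUtil.get(pc, "node");
    Object ffdata = IDataUtil.get(pc, "ffdata");
    pc.destroy();

    if (node != null) {
        // XML file: convert via pub.xml:xmlNodeToDocument as sketched above.
    } else if (ffdata != null) {
        // Flat file: read the InputStream as sketched above.
    } else {
        throw new ServiceException("Neither node nor ffdata found in pipeline");
    }
}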
If you want to poll a Windows file share, you can specify the directory using a UNC file path (such as \\server\directory) on the File Polling Port.
Also, you need to make sure the user account under which Integration Server executes has appropriate file access rights to the various directories configured on the File Polling Port.

Related

Tricentis Service Configuration

I am trying to install and configure Tricentis on a server and want the process of setting the configuration to be automated. I have searched the documentation but there was no mention of any unattended method for configuring.
We cannot directly update the appsettings.json as it has a clientID which is generated every time.

Are custom metadata values for GCE instance stored securely?

I was wondering if custom metadata for google compute engine VM instances was an appropriate place to store sensitive information for configuring apps that run on the instance.
So we use container-optimised OS images to run microservices. We configure the containers with environment variables for things like creds for db connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best I have come up with so far is to create an instance template that loads config values (from a file I keep on my local machine) into the VM custom metadata, which is then made available to a systemd unit when the VM starts up (cloud-config).
The essence of this is that environment variable values (some containing creds) are uploaded by me (they don't change very much) and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
Many thanks for your help
According to the Compute Engine documentation:
Is metadata information secure?
When you make a request to get
information from the metadata server, your request and the subsequent
metadata response never leaves the physical host running the virtual
machine instance.
Since the request and response do not leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
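For reference, this is roughly how a process on the VM reads a custom metadata value. The attribute key db-password is hypothetical; the metadata.google.internal endpoint and the Metadata-Flavor header are the documented way to query the metadata server:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Reads one custom metadata attribute from inside a GCE instance. Note that
// any process on the VM can do this, which is the security trade-off above.
public class MetadataReader {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://metadata.google.internal/computeMetadata/v1/"
                + "instance/attributes/db-password"); // hypothetical key
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // The metadata server rejects requests without this header.
        conn.setRequestProperty("Metadata-Flavor", "Google");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());
        }
    }
}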

Item not supported when accessing a custom script in zabbix_agent conf

I have a script which I am using from the agent; it works fine with only one or two machines. But I have 100 nodes, so I kept the script in a shared location and changed all the agent conf files to point to this location.
That shared location is an FTP server, and I am still getting the error:
"no more connections can be made to this remote computer, the maximum number of connections has been reached".
How am I supposed to use a custom script in Zabbix? I can't copy it to all the nodes every time.

Configure username and password for an already mapped UNC path in SSIS using File System Task

I was referring to this: SSIS: Accessing a network drive using a different username and passoword
But there it seems that the UNC location is mapped at runtime. My case is a bit different: my UNC location is already mapped and I just need to configure the username and password in my configuration file. How can I do this in an SSIS package?
Note - I need to copy a file to the UNC location.
Thanks.
Each time the system is rebooted, the mapped drive needs to be connected again with the credentials if these were not saved. Unfortunately, there is no cleaner approach that just reuses the existing mapped drive by passing credentials (it is actually in a disconnected state after a reboot).
Either you have to run a batch file with the net use /DELETE option to remove the existing mapped drive and recreate it with credentials, or use net use with the /SAVECRED option to reuse the saved credentials; this batch file needs to be linked to Windows startup so the connection is retained after a reboot.
Sample batch script:
@echo off
rem Remove any existing mapping for Z:, then recreate it with explicit credentials.
net use z: /delete
net use z: \\server\share /USER:MYCOMPUTER\UserID password
exit
Or go to a Command Prompt and run
net use z: \\server\share /savecred /p:yes
then specify the credentials; they should be retained after a reboot.
Or add the credentials by opening Start → Run → control userpasswords2 → Advanced → Manage Passwords on Windows XP and later.
Or map the network drive as suggested in SSIS: Accessing a network drive using a different username and passoword

Using version control with SSIS packages (saving 'sensitive' data)

We are a team working on a bunch of SSIS packages, which we share using version control (SVN). We have three ways of saving sensitive data in these packages:
not storing them at all
storing them with a user key
storing them with a password
However, each of these options is inconvenient when testing packages saved and committed by another developer. For each such package, one has to update the credentials, no matter how the sensitive data was persisted.
Is there a better way to collaborate on SSIS packages?
Since my workplace uses file deployment, I use "Don't save sensitive". In order to make development easier, we also store config files with the packages in our version control system, and the connection strings for the development environment are stored in the config files. The config files are also stored in a commonly named folder, so if I retrieve the config files into my common config file area, I can open any of our project packages and they will work for me for development. When the packages are deployed, they are secured by the deployment team on a machine where developers do not have access, and the config file values for connection strings are changed to match the production environment.
We do something similar using database deployment. Each environment has a configuration database, and every package references a single XML config file in a common file path on every server/workstation, e.g., "c:\SSISConfig". This XML config file has one entry that points to the appropriate config database for that environment. All of the rest of the SSIS configs are stored in that config database. The config database in production is only accessible by the admin group; the developers do not have access. When new packages and configurations are deployed to prod, connection strings are updated by the admin group. The packages are all set to "Don't save sensitive".