I want to upload Approval Process and Steps along with approval/rejection actions via data loader. Could you please tell me the steps to do so?
Approval processes are configuration and metadata. The Data Loader is meant for the import/export of record data, which isn't the same thing.
Metadata can be migrated (deployed) with change sets or the Metadata API, but I believe not all parts of approval processes are supported by these mechanisms yet (you can try). You may need to manually copy the same configuration to different orgs, or write instructions for doing so.
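If you try the Metadata API route, approval processes are exposed as the ApprovalProcess metadata type (members are named Object.ProcessDeveloperName), so you can list it in a package.xml or pull it with the Salesforce CLI. A minimal sketch, assuming sfdx is installed and authenticated, with placeholder process name and org aliases:

```python
import subprocess

# Retrieve an approval process definition via the Metadata API using the
# Salesforce CLI; "Opportunity.My_Approval_Process" is a placeholder for
# Object.ProcessDeveloperName in your org.
subprocess.run([
    "sfdx", "force:source:retrieve",
    "-m", "ApprovalProcess:Opportunity.My_Approval_Process",
    "-u", "source-org",   # alias of the org you authenticated against
], check=True)

# Deploy the retrieved source to the target org (alias is a placeholder):
subprocess.run(["sfdx", "force:source:deploy",
                "-p", "force-app", "-u", "target-org"], check=True)
```

Whether everything round-trips cleanly is exactly the "you can try" caveat above; some approval process settings may not survive the trip.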
My team does a lot of work staging data from other sources. I've worked to automate/script this process as much as possible because some of these sources have hundreds of tables to ingest. My pain point is in sharing my scripted Oozie workflows.
I am able to generate JSON files which are imported into Hue, but the JSON files do not specify what users/groups can read or modify them. This means I would have to manually go through every workflow and share them. I may be wrong, but it appears that sharing information is stored in the backend of Hue, not the workflows themselves.
Is it possible to add a key-value pair inside the JSON import file to specify the groups to share with?
or
Am I missing an easier way to bulk share?
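If the sharing data really does live only in Hue's backend, one possible workaround is to script the sharing step itself after import. This is a rough sketch, not a tested recipe: the login flow, the /desktop/api2/doc/share endpoint, and the payload shape are all assumptions you'd need to verify against your Hue version (and a real script would also need to handle Django's CSRF token):

```python
import json
import requests

HUE = "https://hue.example.com"      # placeholder Hue instance
session = requests.Session()
# Hue uses Django session auth; details vary by version and auth backend.
session.post(f"{HUE}/accounts/login/",
             data={"username": "me", "password": "..."})

workflow_uuids = ["..."]             # UUIDs of the imported workflows
analyst_group_id = 42                # hypothetical Hue group id

for uuid in workflow_uuids:
    # Assumption: newer Hue versions expose a document-sharing endpoint,
    # something like /desktop/api2/doc/share; verify against your version.
    session.post(f"{HUE}/desktop/api2/doc/share", data={
        "uuid": uuid,
        "data": json.dumps({
            "read":  {"user_ids": [], "group_ids": [analyst_group_id]},
            "write": {"user_ids": [], "group_ids": []},
        }),
    })
```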
Composer provides some pretty good ACL features, with enough granularity to prevent unauthorized access based on complex business logic.
I understand that using composer's APIs, an unauthorized user will not be able to read data.
However, what if the user uses Fabric's APIs? How does Composer implement its ACL at the Fabric level? Is a business network sharing a single channel? In that case, does it mean that any participant/node can look at the block manually and see private data?
So my question is, can we rely on Composer and assume that if we correctly wrote our ACL file then the data is secured?
I would put it this way: the Composer ACLs are as secure as if you had written Fabric chaincode in Go and had included explicit conditional checks on what data a given transaction can access or update.
The data is stored unencrypted on the blockchain, and it is stored in world-state (in CouchDB, for example). So if someone has local access to the files on disk, or to the CouchDB world-state database, they can circumvent the ACL rules. This is not specific to Composer, however; it would apply equally to Fabric chaincode written in Go.
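To make that second point concrete: anyone with network access to the peer's CouchDB can read world-state directly, with Composer nowhere in the loop. A minimal sketch, assuming the default CouchDB port with no auth configured (common in dev setups) and illustrative database names (Fabric names them channel_chaincode):

```python
import requests

# Fabric's world-state lives in CouchDB (port 5984 by default).
COUCH = "http://localhost:5984"

for name in requests.get(f"{COUCH}/_all_dbs").json():
    print(name)                      # find the business network's database

db = "composerchannel_my-network"    # illustrative name
docs = requests.get(f"{COUCH}/{db}/_all_docs",
                    params={"include_docs": "true"}).json()
for row in docs["rows"]:
    print(row["doc"])                # asset and participant data, in the clear
```

So the ACL file protects access through the business network's transaction paths, not the data at rest.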
I am creating a mobile application in Swift for my organization. The application reads in data in JSON format to populate the information it displays. I already have a method to generate the JSON files, but I need somewhere to host the actual files. I have an AWS account and an instance running, which is where I was initially hosting my JSON files, but I got an email from AWS saying that having the app constantly grab the JSON files I stored there resembled scanning behaviour, which apparently is not allowed. So I was wondering where I could host JSON files so that my mobile app can read in the information it needs. The biggest requirement is a static URL that I can keep calling from my app.
I was thinking of putting the files in an AWS bucket with read permissions and having those get accessed, but since AWS already complained about something similar, I'm iffy. I was also thinking of putting the JSON files on GitHub, but again, I'd hate to get an email from GitHub telling me they don't like an application repeatedly grabbing the data.
For background, the app essentially has a hardcoded URL from which it grabs the JSON data and parses it. I didn't build an API because an API takes some time to fetch information that doesn't really change that often; it's much easier to generate the JSON files locally and just post them online somewhere. The information can be read by anyone, too; it's not private or anything.
Message from AWS:
Hello,
We've received a report(s) that your AWS resource(s)
information
has been implicated in activity which resembles scanning remote hosts on the internet for security vulnerabilities. Activity of this nature is forbidden in the AWS Acceptable Use Policy (https://aws.amazon.com/aup/). We've included the original report below for your review.
Please take action to stop the reported activity and reply directly to this email with details of the corrective actions you have taken. If you do not consider the activity described in these reports to be abusive, please reply to this email with details of your use case.
If you're unaware of this activity, it's possible that your environment has been compromised by an external attacker, or a vulnerability is allowing your machine to be used in a way that it was not intended.
We are unable to assist you with troubleshooting or technical inquiries. However, for guidance on securing your instance, we recommend reviewing the following resources:
[I'm new so it won't let me post links, but they attached a couple of help links]
If you require further assistance with this matter, you can take advantage of our developer forums:
[more links I can't include]
Or, if you are subscribed to a Premium Support package, you may reach out for one-on-one assistance here:
[link]
Please remember that you are responsible for ensuring that your instances and all applications are properly secured. If you require any further information to assist you in identifying or rectifying this issue, please let us know in a direct reply to this message.
Regards,
AWS Abuse
Abuse Case Number:
Using an AWS EC2 instance to host static files (which is what it sounds like you were doing?) is pretty standard and I suspect that this is not what Amazon is complaining about. More likely, your instance has been infected by some sort of software which is causing it to request many files from other random servers on the web ("scanning for remote vulnerabilities"). You should check that you have not accidentally publicly posted your AWS credentials (in any form), and consider wiping the instance and resetting it. And of course reply to the email explaining this to AWS.
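If you do go the S3 route you mention, serving the files from a public bucket gives you a stable URL and takes your EC2 instance out of the loop entirely. A minimal sketch with boto3 (bucket name is a placeholder; note that newer buckets block public ACLs by default, so you may need a bucket policy instead of the ACL shown here):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-app-data-bucket"   # placeholder; bucket names are globally unique

# Upload with a content type so clients parse the body as JSON.
s3.upload_file(
    "app_data.json", bucket, "app_data.json",
    ExtraArgs={"ContentType": "application/json", "ACL": "public-read"},
)

# The app can then keep polling a stable URL like:
# https://my-app-data-bucket.s3.amazonaws.com/app_data.json
```

Polling a public S3 object is ordinary traffic and won't trip the abuse detection you ran into.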
Has anyone had much experience with data migration into and out of NetSuite? I have to export DB2 tables into MySQL, manipulate the data, and then export it in a CSV file. Then I take a CSV file of accounts and manipulate the data again so accounts from our old system match up with the new one. Has anyone tried to do this in MySQL?
A couple of options:
Invest in a data transformation tool that connects to NetSuite and DB2 or MySQL. Look at Dell Boomi, IBM Cast Iron, etc. These tools allow you to connect to both systems, define the data to be extracted, perform data transformation functions and mappings and do all the inserts/updates or whatever you need to do.
For MySQL to NetSuite, PHP scripts can be written to access MySQL and NetSuite. On the NetSuite side, you can either use SOAP web services or write custom REST APIs (RESTlets) within NetSuite. SOAP is probably a bit slower than REST, but with REST you have to write the API yourself (server-side JavaScript; it's not hard, but there's a learning curve).
Hope this helps.
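The same pattern works outside PHP, too. Here's a hedged Python sketch of calling a custom RESTlet; the script/deploy IDs, account details, and the payload your RESTlet expects are all placeholders, and the legacy NLAuth header is shown only for brevity (NetSuite now pushes token-based auth, so check what your account allows):

```python
import requests

# The RESTlet URL comes from its script deployment page in NetSuite;
# script/deploy IDs below are placeholders.
url = ("https://rest.netsuite.com/app/site/hosting/restlet.nl"
       "?script=123&deploy=1")

headers = {
    "Authorization": ("NLAuth nlauth_account=1234567, "
                      "nlauth_email=user@example.com, "
                      "nlauth_signature=secret, nlauth_role=3"),
    "Content-Type": "application/json",
}

# The body is whatever your server-side RESTlet script is written to accept.
resp = requests.post(url, headers=headers,
                     json={"recordtype": "customer", "name": "Acme"})
print(resp.status_code, resp.json())
```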
I'm an IBM i programmer; try CPYTOIMPF to create a pretty generic CSV file. It'll go to a stream file; if you have NetServer running you can map a network drive to the IFS directory, or you can use FTP to get the CSV file from the IFS to another machine on your network.
Try Adeptia's NetSuite integration tool to perform ETL. You can also try Pentaho ETL for this (as far as I know, Celigo's NetSuite connector is built on Pentaho). Jitterbit also has an extension for NetSuite.
We primarily have two options to pump data into NetSuite:
i) SuiteTalk: SOAP-based web services. There are two versions of SuiteTalk, synchronous and asynchronous. Typical tools like Boomi/Mule/Jitterbit use synchronous SuiteTalk to pump data into NetSuite, and they also have decent editors to help you do the mapping.
ii) RESTlets: REST-based endpoints exposed by NetSuite. These can also be used, but you may have to write external brokers to communicate with them.
Depending on your needs you can use either, but in most cases you will be using SuiteTalk to bring data into NetSuite.
Hope this helps.
We just got done doing this. We used an iPaaS platform called Jitterbit (similar to Dell Boomi). It can connect to MySQL and to NetSuite, and you can do transformations in the tool. I have been really impressed with the platform overall so far.
There are different approaches; I like the following for processing a batch job:
To import data into NetSuite:
Export a CSV from the old system and place it in a folder in NetSuite's File Cabinet (use a RESTlet or web services for this).
Run a scheduled script to load the files in the folder and update the records.
Don't forget to handle errors. Ways to handle errors: send an email, create a custom record, log to a file, or write to a record.
Once the file has been processed move the file to another folder or delete it.
To export data out of NetSuite:
Gather data and export to a CSV (You can use a saved search or similar)
Place CSV in File Cabinet folder.
From an external server, call web services or a RESTlet to grab the new CSV files from the folder (see the sketch after this list).
Process file.
Handle errors.
Call web services or a RESTlet to move or delete the CSV file.
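The external half of that export flow might look something like this in Python; the RESTlet URL, the "action"/"fileId" payload, and the shape of the response are hypothetical stand-ins for whatever you deploy on the NetSuite side:

```python
import csv
import requests

RESTLET = ("https://rest.netsuite.com/app/site/hosting/restlet.nl"
           "?script=456&deploy=1")                    # placeholder IDs
HEADERS = {"Authorization": "NLAuth ...", "Content-Type": "application/json"}

def process(rows):
    """Stand-in for your real import/transform routine."""
    print(f"processing {len(rows)} rows")

# 1. Ask the (hypothetical) RESTlet which CSVs are waiting in the folder.
for f in requests.get(RESTLET, headers=HEADERS).json():
    rows = list(csv.DictReader(f["contents"].splitlines()))
    try:
        process(rows)
        action = "archive"                            # move to a processed folder
    except Exception as exc:
        print(f"error on {f['name']}: {exc}")         # or email / custom record
        action = "retry"
    # 2. Tell the RESTlet to move or delete the processed file.
    requests.post(RESTLET, headers=HEADERS,
                  json={"action": action, "fileId": f["id"]})
```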
You can also use Pentaho Data Integration; it's free and the learning curve is not that steep. I took this course and was able to play around with the tool within a couple of hours.
I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job; obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the Hudson server to be able to receive requests for builds from outside sources and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required for your use case in the many releases since the project became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build starts. As documented for parameterized builds, you can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
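That means your "outside sources" just need to make an HTTP request. A minimal sketch in Python (server URL, job name, and credentials are placeholders; how you authenticate depends on how your Jenkins security is configured):

```python
import requests

JENKINS = "http://server"              # placeholder Jenkins URL
resp = requests.post(
    f"{JENKINS}/job/myjob/buildWithParameters",
    params={"REPO_PATH": "git@example.com:team/some-repo.git"},
    auth=("ci-user", "api-token"),     # user + API token if security is on
)
print(resp.status_code)                # 201 means the build was queued
```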
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build. You'll probably want to manage how many recent artifacts are kept; this can be done by checking "Discard Old Builds", and then under Advanced…, you can enter a value for "Max # of builds to keep with artifacts".