NetSuite Migrations - MySQL

Has anyone had much experience with data migration into and out of NetSuite? I have to export DB2 tables into MySQL, manipulate the data, and then export it as a CSV file. Then I need to take a CSV file of accounts and manipulate the data again so that accounts from our old system match up with the new one. Has anyone tried to do this in MySQL?

A couple of options:
Invest in a data transformation tool that connects to NetSuite and DB2 or MySQL. Look at Dell Boomi, IBM Cast Iron, etc. These tools allow you to connect to both systems, define the data to be extracted, perform data transformation functions and mappings, and do all the inserts/updates or whatever else you need to do.
For MySQL to NetSuite, PHP scripts can be written to access MySQL and NetSuite. On the NetSuite side, you can either use SOAP web services or write custom REST APIs (RESTlets) within NetSuite. SOAP is probably a bit slower than REST, but with REST you have to write the API yourself (server-side JavaScript - it's not hard, but there is a learning curve).
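As a rough illustration of the external side of that approach, here is a minimal Python sketch that reads rows from MySQL and posts them to a custom RESTlet. The RESTlet URL, the Authorization header, and the table and field names are all placeholders - a real integration needs proper NetSuite authentication (token-based OAuth), which is omitted here.

```python
import pymysql   # assumed MySQL client library
import requests  # assumed HTTP client

# Pull the staged rows out of MySQL (table and column names are hypothetical).
conn = pymysql.connect(host="localhost", user="etl", password="secret", database="staging")
with conn.cursor() as cur:
    cur.execute("SELECT account_number, account_name, balance FROM accounts_export")
    rows = cur.fetchall()
conn.close()

# Post each row to a custom RESTlet deployed in NetSuite (URL and token are placeholders).
RESTLET_URL = "https://<account>.restlets.api.netsuite.com/app/site/hosting/restlet.nl?script=123&deploy=1"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <access-token>",  # stand-in for real token-based auth
}

for account_number, account_name, balance in rows:
    payload = {"acctNumber": account_number, "acctName": account_name, "balance": float(balance)}
    resp = requests.post(RESTLET_URL, json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()  # fail loudly so bad rows are not silently dropped
```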
Hope this helps.

I'm an IBM i programmer; try CPYTOIMPF to create a pretty generic CSV file. It will go to a stream file - if you have NetServer running you can map a network drive to the IFS directory, or you can use FTP to get the CSV file from the IFS to another machine on your network.

Try Adeptia's NetSuite integration tool to perform the ETL. You can also try Pentaho ETL for this (as far as I know, Celigo's NetSuite connector is built on Pentaho). Jitterbit also has an extension for NetSuite.

We primarily have two options to pump data into NS:
i) SuiteTalk ---> SOAP-based web services. There are two versions of SuiteTalk: synchronous and asynchronous. Typical tools like Boomi/Mule/Jitterbit use synchronous SuiteTalk to pump data into NS, and they also have decent editors to help you do the mapping.
ii) RESTlets ---> REST-based architecture provided by NS. These can also be used, but you may have to write external brokers to communicate with them.
Depending on your needs you can use either one. In most cases you will be using SuiteTalk to bring data into NetSuite.
Hope this helps ...

We just got done doing this. We used an iPaaS platform called Jitterbit (similar to Dell Boomi). It can connect to MySQL and to NetSuite, and you can do transformations in the tool. I have been really impressed with the platform overall so far.

There are different approaches; I like the following for processing a batch job:
To import data into NetSuite:
Export a CSV from the old system and place it in a File Cabinet folder in NetSuite (use a RESTlet or web services for this).
Run a scheduled script to load the files in the folder and update the records.
Don't forget to handle errors. Ways to handle errors: send an email, create a custom record, log to a file, or write to the record.
Once the file has been processed, move it to another folder or delete it.
To export data out of NetSuite:
Gather the data and export it to a CSV (you can use a saved search or similar).
Place the CSV in a File Cabinet folder.
From an external server, call web services or a RESTlet to grab the new CSV files in the folder (a rough sketch of this polling side is shown at the end of this answer).
Process file.
Handle errors.
Call web services or a RESTlet to move or delete the CSV file.
You can also use Pentaho Data Integration; it's free and the learning curve is not that steep. I took a course and was able to play around with the tool within a couple of hours.
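Purely as an illustration of the external polling side of the export flow above, here is a minimal Python sketch. The RESTlet actions (list new CSV files, download one, archive it) are hypothetical - you would implement them yourself in SuiteScript - and the authentication header is a placeholder.

```python
import requests  # assumed HTTP client

# Hypothetical RESTlet that lists, serves, and archives CSV files in a File Cabinet folder.
RESTLET_URL = "https://<account>.restlets.api.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=1"
HEADERS = {"Authorization": "Bearer <access-token>"}  # stand-in for real token-based auth

# Ask the RESTlet for the unprocessed CSV files waiting in the export folder.
listing = requests.get(RESTLET_URL, params={"action": "list"}, headers=HEADERS, timeout=30)
listing.raise_for_status()

for file_info in listing.json():  # e.g. [{"id": "123", "name": "export1.csv"}, ...]
    # Download the file contents to the external server.
    resp = requests.get(RESTLET_URL, params={"action": "get", "fileId": file_info["id"]},
                        headers=HEADERS, timeout=60)
    resp.raise_for_status()
    with open(file_info["name"], "wb") as fh:
        fh.write(resp.content)

    # Tell NetSuite to move the file to a "processed" folder (or delete it).
    done = requests.post(RESTLET_URL, json={"action": "archive", "fileId": file_info["id"]},
                         headers=HEADERS, timeout=30)
    done.raise_for_status()
```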

Related

Configure Apache Drill to read xml files in Mapr distribution

I have a project where I need to read XML files with Apache Drill and process them. Can someone tell me how I can configure it?
NB: I use the MapR distribution.
I tried to add the configuration through the configuration UI, but I get an error (see image).
Thanks in advance
You'll need to use a Drill distribution based on Apache Drill >= 1.19 for the XML format plugin.
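If it helps, here is a rough sketch of adding an "xml" format entry to the dfs storage plugin via Drill's REST interface (Python is used only as a convenient HTTP client). The port, endpoint path, and the dataLevel option are assumptions - the same JSON can simply be pasted into the storage plugin editor in the Drill web UI, and the exact options should be checked against the Drill 1.19+ XML format plugin documentation.

```python
import requests  # assumed HTTP client

DRILL = "http://localhost:8047"  # Drillbit web UI / REST endpoint (assumption)

# Fetch the current "dfs" storage plugin configuration.
plugin = requests.get(f"{DRILL}/storage/dfs.json", timeout=30).json()

# Add an "xml" format entry so files ending in .xml become queryable.
# "dataLevel" tells the reader at which nesting level the data rows start;
# verify the option names against the documentation for your Drill version.
plugin["config"]["formats"]["xml"] = {
    "type": "xml",
    "extensions": ["xml"],
    "dataLevel": 1,
}

# Push the updated plugin configuration back to Drill (the same payload the web UI submits).
resp = requests.post(f"{DRILL}/storage/dfs.json", json=plugin, timeout=30)
resp.raise_for_status()
```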
So this is more of a Drill question than a MapR question.
There are two key steps here:
make sure that Drill can access whatever you use to store your data (it sounds like your data is XML files in MapR, which is now called HPE Ezmeral Data Fabric)
make sure that Drill can understand the data you have. I am not current on Drill, but reading many kinds of XML should be doable.
For getting access, there are two major paths to accessing files on Ezmeral Data Fabric. One path is to mount the data fabric as a conventional file system on all the nodes running Drillbits. This is often done using NFS mounts, but it can also be done with the FUSE driver provided with the data fabric.
The other major approach to getting data access is to use the HDFS API framework to access data via maprfs://... path names. This requires installing the data fabric client on all of the nodes running Drillbits.
It sounds like you are running the version of Drill that is packaged with the old MapR or current HPE Ezmeral system. This is the easiest approach since the packaged version is integrated with the client libraries needed to use the HDFS API with maprfs:// resources (it also provides access to the tables and streams in the data fabric).

Load CSV data as RDF using Ontorefine CLI

I'm trying to programmatically add a CSV file that's generated every day to a GraphDB repository. I have already created the CSV-to-RDF mapping using OntoRefine. How does one now use the CSV and the mapping to add RDF triples programmatically?
Use the open source CLI https://github.com/Ontotext-AD/ontorefine-client (that's probably what #aksanoble refers to).
Please note that the CLI is not yet available in Ontotext Refine 1.0 (which was split off from GraphDB), and will be available in September. In the meantime, you could use GraphDB 9.11.
We are working on extended ETL pipeline scenarios, including
Reuse of cleaning and transformation scripts between projects
Run all cleaning, transformation and RDF data update or download steps on a new dataset automatically
BTW, is your file stored locally or accessed through a URL? We have an idea to handle the latter case specially.

What is the most efficient way to export data from Azure MySQL?

I have searched high and low, but it seems like mysqldump and "select ... into outfile" are both intentionally blocked by not allowing file permissions to the db admin. Wouldn't it save a lot more server resources to allow file permissions than to disallow them? Every other import/export method I can find executes much slower, especially with tables that have millions of rows. Does anyone know a better way? I find it hard to believe Azure left no good way to do this common task.
You did not list the other options you found to be slow, but have you thought about using Azure Data Factory:
Use Data Factory, a cloud data integration service, to compose data storage, movement, and processing services into automated data pipelines.
It supports exporting data from Azure MySQL and MySQL:
You can copy data from MySQL database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see Supported data stores and formats
Azure Data Factory allows you to define mappings (optional!), and / or transform the data as needed. It has a pay per use pricing model.
You can start an export manually or on a schedule using the .NET or Python SDK, the REST API, or PowerShell.
It seems you are looking to export the data to a file, so Azure Blob Storage or Azure Files are likely to be a good destination. FTP or the local file system are also possible.
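If you go the Data Factory route, triggering a pipeline run from Python looks roughly like the sketch below. The subscription, resource group, factory, and pipeline names are placeholders, and it assumes the azure-identity and azure-mgmt-datafactory packages.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# All of these identifiers are placeholders for your own resources.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-resource-group"
FACTORY_NAME = "my-data-factory"
PIPELINE_NAME = "export-mysql-to-blob"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline that copies from Azure MySQL to Blob Storage / Azure Files.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME)
print(f"Started pipeline run {run.run_id}")
```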
"SELECT INTO ... OUTFILE" we can achieve this using mysqlworkbench
1.Select the table
2.Table Data export wizard
3.export the data in the form of csv or Json
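Since the server-side SELECT ... INTO OUTFILE is blocked, the Workbench wizard above is effectively doing a client-side export. If you need to script the same thing, a rough Python equivalent might look like this (server, credential, and table names are placeholders; it assumes the pymysql package, and note that Azure Database for MySQL also requires SSL on the connection):

```python
import csv
import pymysql                         # assumed MySQL client library
from pymysql.cursors import SSCursor   # server-side cursor so rows are streamed, not buffered

# Connection details are placeholders; configure SSL/CA settings as your Azure server requires.
conn = pymysql.connect(
    host="myserver.mysql.database.azure.com",
    user="myadmin",
    password="secret",
    database="mydb",
)

# Stream the rows into a local CSV instead of writing a file on the server.
with conn.cursor(SSCursor) as cur, open("big_table.csv", "w", newline="") as out:
    cur.execute("SELECT * FROM big_table")                 # hypothetical table
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])   # header row
    for row in cur:
        writer.writerow(row)

conn.close()
```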

Fill CoreData with a large SQL database

I have a large 180k-row SQL (MySQL) database that I want to use in Core Data. Can I create the SQLite database using Xcode, then use an SQLite client app to connect to that database and fill it using my MySQL data?
Or is there a better way to efficiently import a large data set to a CoreData store?
It will only be filled once and the data should reside on-device.
The reason I want to do this is because I am building an iOS app that needs to read from a persistent store containing most words in the English language. Along with the word, each row will contain a few other things. The app will never need to write to the database, just read from it, but it will need to read from it very quickly.
From Apple's docs it appears this is not recommended (or maybe impossible): "do not manipulate an existing Core Data-created SQLite store using the native SQLite API"
Update:
Another option that I am currently working on is to export the MySQL database to JSON using phpMyAdmin (or another tool), then load that JSON file into the project. When the app loads (hopefully just the first time it is used), push the data from the JSON file into Core Data.
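For the export half of that idea, a rough sketch of dumping a MySQL table to JSON without phpMyAdmin might look like this (the database, table, and column names are placeholders; it assumes the pymysql package):

```python
import json
import pymysql  # assumed MySQL client library

conn = pymysql.connect(host="localhost", user="root", password="secret", database="dictionary")

# Table and column names are hypothetical stand-ins for the real word list.
with conn.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("SELECT word, part_of_speech, frequency FROM words")
    rows = cur.fetchall()
conn.close()

# Write one JSON array that the iOS app can bundle and import into Core Data on first launch.
with open("words.json", "w", encoding="utf-8") as fh:
    json.dump(rows, fh, ensure_ascii=False)
```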
You could reverse-engineer Core Data and produce a Core Data sqlite file directly if you really wanted to, but as you quoted from Apple docs this is not a good idea.
It would be easier to simply write a little macOS command-line tool which includes the same Core Data data model as your iOS app. This tool would read your MySQL database and write it to a Core Data SQLite file, which you would then ship with your iOS app.

Sending .csv files to a database: MariaDB

I will preface this by saying I am very new to databases. I am working on a project for my undergraduate research that requires various sensor data to be sent from a Raspberry Pi via the internet to a database. I am using MariaDB at the moment, but am open to other options.
The background: Currently all sensor data is being saved in csv files on the RPi. There will be automation to send data at given intervals to the database.
The question: Am I able to send the file itself to a database? For our application, a CSV file is the most logical data storage format, and we simply want the database to be a way for us to retrieve data remotely, since the system will be installed miles away from where we work.
I have read about "LOAD DATA INFILE" on this website, but am unsure how it applies to this database. Would JSON be at all applicable for this? I am willing to learn if it makes the process more streamlined.
Thank you!
If 'sending data to the database' means that, by one means or another, additional or replacement CSV files are saved on disk, in a location accessible to a MariaDB client program, then you can load these into the database using the "mysql" command-line client and an appropriate script of SQL commands. That script very likely will make use of the LOAD DATA LOCAL INFILE command.
The "mysql" program may be launched in a variety of ways: 1) spawned by the process that receives the uploaded file; 2) launched by a cron job (Task Scheduler on Windows) that runs periodically to check for new or changed CSV files; of 3) launched by a daemon that continually monitors the disk for new or changed CSV files.
A CSV is typically human readable. I would work with that first before worrying about using JSON. Unless the CSVs are huge, you could probably open them up in a simple text editor to read their contents to get an idea of what the data looks like.
I'm not sure of your environment (feel free to elaborate), but you could just use whatever web services you have to read in the CSV directly and inject the data into your database.
You say that data is being sent using automation. How is it communicating to your web service?
What is your web service? (Is it php?)
Where is the database being hosted? (Is it in the same webservice?)