Support for Sequences in DBUnit - JUnit

We are in the process of analyzing DBUnit for data-driven unit testing. We were able to export Oracle DB tables to a flat XML dataset using the code below and to use the generated dataset as an input to our JUnit tests.
// partial database export
QueryDataSet partialDataSet = new QueryDataSet(connection);
partialDataSet.addTable("FOO", "SELECT * FROM TABLE WHERE COL='VALUE'");
partialDataSet.addTable("BAR");
FlatXmlDataSet.write(partialDataSet, new FileOutputStream("partial.xml"));
org.dbunit.database.QueryDataSet provides an option to add tables, but not sequences. We need to export sequences to the DBUnit dataset in the same way. Is there a way to achieve this?
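For clarity, the best we can do today is fake it by exposing the sequence state as a one-row pseudo-table through the query-based addTable overload shown above. This is only a sketch (MY_SEQ is a placeholder name, and it reads Oracle's data dictionary rather than exporting the sequence itself):
// sketch only: capture the sequence's current high-water mark as a pseudo-table;
// MY_SEQ is a placeholder, and this is still a table export, not real sequence support
partialDataSet.addTable("MY_SEQ_STATE",
    "SELECT SEQUENCE_NAME, LAST_NUMBER FROM USER_SEQUENCES WHERE SEQUENCE_NAME = 'MY_SEQ'");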
We are also looking for open-source tools for data-driven unit testing of the repository layer. Is there any other open-source tool similar to DBUnit?

How to set the path of a CSV file that is in account storage in azure data factory pipeline

I have created an SSIS package that reads from a CSV file (using the Flat File connection manager) and loads records into a database. I have deployed it in an Azure Data Factory pipeline, and I need to give the path of the CSV file as a parameter. I have created an Azure storage account and uploaded the source file there.
Can I just give the URL of the source file for the import file in the SSIS package settings? I tried it, but it is currently throwing a 2906 error. I am new to Azure, so I'd appreciate any help here.
First, you said Excel and then you said CSV; those are two different formats. But since you mention the Flat File connection manager, I'm going to assume you meant CSV. If not, let me know and I'll update my answer.
I think you will need to install the SSIS Feature Pack for Azure and use the Azure Storage connection manager. You can then use the Azure Blob Source in your Data Flow task (it supports CSV files). When you add the blob source, the GUI should help you create the new connection manager. There is a tutorial on MSSQLTips that shows each step. It's a couple of years old, but I don't think much has changed.
As a side thought, is there a reason you chose to use SSIS over native ADF V2? It does a nice job of copying data from blob storage to a database.

Azure Data Factory v2 Data Transformation

I am new to Azure Data Factory. I have a requirement to move data from an on-premises Oracle database and an on-premises SQL Server to Blob storage. The data needs to be transformed into JSON format, with each row becoming one JSON file, and then moved to an Event Hub. How can I achieve this? Any suggestions?
You could use a Lookup activity plus a ForEach activity, with a Copy activity inside the ForEach. Please reference this post: How to copy CosmosDb docs to Blob storage (each doc in single json file) with Azure Data Factory
The Copy Data tool that is part of Azure Data Factory is another option to copy on-premises data to Azure.
The Copy Data tool comes with a configuration wizard where you do all the required steps, such as configuring the source, the sink, and the integration runtime.
In the source you need to write a custom query to fetch the data from the tables you require in JSON format.
In the case of SQL Server, you would use the FOR JSON AUTO clause to convert the rows to JSON (OPENJSON is its counterpart for turning JSON back into rows). This is supported from SQL Server 2016 onwards; for older versions you need to explore the options available. Worst case, you can write a simple console app in C#/Java to fetch the rows, convert them to JSON files, and then upload the files to Azure Blob storage. If this is a one-time activity, this option should work and you may not require a data factory at all.
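A minimal Java sketch of that console-app idea, assuming the Microsoft SQL Server JDBC driver is on the classpath; the connection string, the table dbo.MyTable, and the out directory are placeholders, and the hand-rolled JSON serialization is deliberately simplistic (every value is written as a string):
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.sql.*;

public class RowsToJsonFiles {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=MyDb;user=me;password=secret";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM dbo.MyTable")) {
            ResultSetMetaData md = rs.getMetaData();
            Files.createDirectories(Paths.get("out"));
            int rowNum = 0;
            while (rs.next()) {
                // build one JSON object per row: {"col":"value", ...}
                StringBuilder json = new StringBuilder("{");
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    if (i > 1) json.append(',');
                    json.append('"').append(md.getColumnLabel(i)).append("\":");
                    String value = rs.getString(i);
                    json.append(value == null
                        ? "null"
                        : '"' + value.replace("\\", "\\\\").replace("\"", "\\\"") + '"');
                }
                json.append('}');
                // one file per row, e.g. out/row-1.json
                Files.write(Paths.get("out", "row-" + (++rowNum) + ".json"),
                        json.toString().getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}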
In the case of Oracle, you can use the JSON_OBJECT function for the same purpose.

How to export CSV from Business Object report?

I understand that from the Business Objects client I have an option to export to "CSV (data only)", but my understanding is that such an export does not care about the report layout and just dumps the raw universe data.
Isn't there any way to export the report "view" to CSV?
It depends on the version of BusinessObjects you're working on.
Originally, the CSV export only looked at the Web Intelligence microcube (I assume you're referring to that particular client), meaning the raw data retrieved from the data provider(s), and disregarded any formatting, filters, aggregations, … you may have specified in your report.
GUI
However, you now have the option to export a report (so not the whole document) as a CSV Archive, which results in a Zip file containing a CSV for the active report at the time of export.
I'm referring to BI 4.1 SP05; previous versions may also have this option, as I'm not sure when it was introduced.
API
Using the RESTful API that is available in BI4, you can also export a report to CSV. In this case, the actual CSV file will be returned instead of an archive.
Remember that in order to use the RESTful API, you need to have a WACS server in your BusinessObjects environment, running the RESTful API service. You cannot deploy the REST API on an external Java application server.
For more information, have a look at the section Exporting a Report in Listing Mode (SDK information for BI 4.1 SP05).
Remarks
A report is a tab within a document; documents, however, are often (incorrectly) referred to as reports.

How to programmatically create an OrientDB database of a given (json) schema?

For my dev workflow purposes, I'd like to create a new OrientDB database from a given JSON schema, on the fly. I don't believe this is natively supported in OrientDB. Are there any existing solutions that do this: you provide a JSON schema, point to an OrientDB instance, and it auto-creates the database (edges, vertices, indexes, and perhaps some sample data)?
I ended up creating a .sh script to create the DB on the fly. The .sh file looks something like:
# (file: createmydb.sh)
# script to create my database declaratively
set echo true
# use this to ignore errors and continue, if needed
# set ignoreErrors true
# create database
create database plocal:../databases/MyDB root root plocal graph
# create User vertex
create class User extends V
create property User.Email STRING
create property User.Firstname STRING
...
And then call it like:
/usr/local/src/orientdb/bin/console.sh createmydb.sh
This works well for my purposes. The DB creation script is very easy to read and can be modified easily. And I am sure it is very backwards compatible (which may not have been the case with importing an exported JSON version of the DB schema).
So far I've found that pre-loading the schema using an external definition stored in either JSON or OSQL has been most successful for me. Currently I am using an OSQL script that contains a whole bunch of CREATE CLASS ... and CREATE PROPERTY ... commands. It works well enough.
Pretty soon I'll have to start supporting dynamic changes to the data model, at which point I will have to write code to read a JSON schema definition and convert that to appropriate calls into OrientDB, either through the Blueprints API or through SQL batches.
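As a rough illustration of that second direction, here is a hypothetical sketch using the pre-3.0 OrientDB document API and SQL commands. The hard-coded schema map stands in for whatever parsed JSON schema format you settle on, and the class and property names are placeholders:
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.sql.OCommandSQL;
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaLoader {
    public static void main(String[] args) {
        // pretend this structure was parsed from a JSON schema file such as:
        //   { "User": { "Email": "STRING", "Firstname": "STRING" } }
        Map<String, Map<String, String>> schema = new LinkedHashMap<>();
        Map<String, String> userProps = new LinkedHashMap<>();
        userProps.put("Email", "STRING");
        userProps.put("Firstname", "STRING");
        schema.put("User", userProps);

        // open an existing database, e.g. the one created by the console script above
        ODatabaseDocumentTx db =
                new ODatabaseDocumentTx("plocal:../databases/MyDB").open("root", "root");
        try {
            for (Map.Entry<String, Map<String, String>> cls : schema.entrySet()) {
                // in this sketch every class becomes a vertex type
                db.command(new OCommandSQL(
                        "CREATE CLASS " + cls.getKey() + " EXTENDS V")).execute();
                for (Map.Entry<String, String> prop : cls.getValue().entrySet()) {
                    db.command(new OCommandSQL("CREATE PROPERTY "
                            + cls.getKey() + "." + prop.getKey() + " " + prop.getValue())).execute();
                }
            }
        } finally {
            db.close();
        }
    }
}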
I've not found a tool that does what you need "automatically." If you find one, please let me (and everyone else here) know.

Best practice to store couchbase views

My application has Couchbase views (map-reduce). Presently, I am writing them in a text file and loading them for each new Couchbase server from the Couchbase admin page (a tedious and error-prone process).
Is there any way I can load all those views from text files into Couchbase when I am deploying a fresh Couchbase server or when I create a fresh bucket?
I remember that in MySQL, we used to write all the insert queries and procedures to a file and feed the file to MySQL (via the command prompt) for each new instance. Is there any such strategy available for Couchbase?
From your previous Couchbase-related questions, it seems you are using the Java SDK?
Both the 1.4 and 2.0 lines of the SDK allow for programmatically creating design documents and views.
With Java SDK 1.4.x
You have to load your view definitions (map functions, reduce functions, which design document to put them in) somehow, as Strings. See the documentation at http://docs.couchbase.com/couchbase-sdk-java-1.4/#design-documents.
Basically you create a ViewDesign in a DesignDocument that you insert in the database via the CouchbaseClient:
DesignDocument designDoc = new DesignDocument("beers");
designDoc.setView(new ViewDesign("by_name",
        "function (doc, meta) {" +
        "  if (doc.type == \"beer\" && doc.name) {" +
        "    emit(doc.name, null);" +
        "  }" +
        "}"));
client.createDesignDoc(designDoc);
With Java SDK 2.0.x
In the same way, you have to load your view definitions (map functions, reduce functions, which design document to put them in) somehow, as Strings.
Then you deal with a DesignDocument, adding DefaultViews to it, and insert the design document into the bucket via the Bucket's BucketManager:
// fragment: assumes a Bucket named "targetBucket" and your loaded view
// definitions (viewName, viewMapFunction, viewReduceFunction) as Strings
import com.couchbase.client.java.view.DefaultView;
import com.couchbase.client.java.view.DesignDocument;
import com.couchbase.client.java.view.View;
import java.util.ArrayList;
import java.util.List;

List<View> viewsForCurrentDesignDocument = new ArrayList<View>(viewCountForCurrentDesignDoc);
//... for each view definition you loaded
View v = DefaultView.create(viewName, viewMapFunction, viewReduceFunction);
viewsForCurrentDesignDocument.add(v);
//... then create the designDocument proper
DesignDocument designDocument = DesignDocument.create(designDocName, viewsForCurrentDesignDocument);
//optionally you can insert it as a development design doc, retrieve an existing one and update, etc...
targetBucket.bucketManager().insertDesignDocument(designDocument);
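If the design document may already exist (for example when re-deploying), the 2.x BucketManager also has an upsert variant; a one-line sketch under the same assumptions as above:
// replaces the design document if it is already present
targetBucket.bucketManager().upsertDesignDocument(designDocument);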
At Rounds, we use Couchbase for some of our server-side apps and use Docker images for the development environment.
I wrote two scripts for dumping an existing Couchbase instance and re-creating Couchbase buckets and views from the dumped data.
The view map and reduce functions are dumped as plain JavaScript files in a directory hierarchy that represents the design docs and buckets in Couchbase. It is very helpful to commit the whole directory tree into your repo so you can track changes made to your views.
As the files are plain JavaScript, you can edit them in your favourite IDE and enjoy automatic syntax checks.
You can use the scripts from the following github repo:
https://github.com/rounds/couchbase-dump
Dump all your Couchbase buckets and views as JavaScript files in a directory hierarchy that you can commit to your repo. Then you can recreate the Couchbase buckets and views from the previously dumped data.
If you find this helpful and have something to add, please create an issue or contribute on GitHub.