We did some R&D for our new project, which uses Cassandra as its database. The research showed that we cannot use Cassandra 3.x for importing/exporting data via SSIS, so we would have to use an older version. (What's your opinion?)
On the other hand, we need materialized views in some cases, SASI secondary indexes, and other functionality and capabilities of the newer versions.
Is there an alternative approach that would let us use both versions together and share data between them? Would that be a good solution, or should we sacrifice the benefits of the newer versions for the sake of moving the data?
that we can not use Cassandra 3.x version for import/exporting data using SSIS
Why do you need SSIS for data import/export? Have you considered using Apache Spark for this purpose? With Spark, you can migrate to Cassandra 3.x.
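To illustrate the Spark route: such a migration boils down to read, transform, write, and the per-row transform is ordinary code that Spark applies in parallel. The sketch below shows only that transform as a plain Python function; the Spark wiring with the spark-cassandra-connector is left in comments, and the keyspace, table, and column names are made-up assumptions, not anything from your schema:

```python
# Sketch of the per-row transform a Spark job could apply when copying data
# from an old Cassandra cluster to a 3.x cluster. The read/write wiring would
# use the spark-cassandra-connector, roughly:
#   df = spark.read.format("org.apache.spark.sql.cassandra") \
#            .options(table="users", keyspace="old_ks").load()
#   then map each row with migrate_row and write to the new cluster.
# All names below (user_id, email, the rename, the normalization) are
# illustrative assumptions.

def migrate_row(row: dict) -> dict:
    """Map a row from the old schema to the new one."""
    return {
        "user_id": row["id"],                   # hypothetical column rename
        "email": row["email"].strip().lower(),  # hypothetical normalization
    }

if __name__ == "__main__":
    sample = {"id": 42, "email": " Alice@Example.COM "}
    print(migrate_row(sample))
```

The point is that the logic you'd otherwise build in SSIS data-flow components becomes a testable function, independent of either Cassandra version.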
Our main project has been using a now-very-old Flyway version since inception (v3.2.1).
Flyway has made loads of improvements over the years, and v6+ appears to contain many interesting features for our MySQL schema.
Attempting the supported upgrade path, I ran into a few problems; e.g., our .sql migrations refuse to migrate from start to finish: Flyway v3.2.1 considers all our SQL migrations valid, but v4+ chokes on some odd comment syntax. Naturally, file fixups to get migrate working will produce different checksums, which is an obstacle to a safe upgrade. I'm well aware of the schema table name change in v5; that's not insurmountable.
I'm also eyeing Liquibase and online schema-change tools; FB's, Percona's, and GitHub's OST (gh-ost) look interesting, but we use foreign keys, and we'd need more replicas, so that may not be in the cards for us right now.
For now, I'm interested in a new baseline with the Flyway v7 beta, or switching tools. If you deploy SaaS on k8s and have any generic advice, I'll take it, but I'm specifically interested in one thing:
How have folks overcome the issue where newer versions of Flyway no longer accept existing SQL migrations? Or, has anyone "given up" and just created a new baseline rather than doing the long upgrade path? (Or switched from Flyway to another tool with similar merits?)
There are at least two problems here, with many moving parts:
Dealing with the tooling's constraints, i.e. how to handle the Flyway 3 → 7+ upgrade (follow the tool's docs)
How to incorporate large prod SQL migrations in general, which is too general a problem to cover here.
If anyone has better (less general) advice on the first, I'd love to hear it.
Re: the second, we're looking to assemble our infrastructure and deployment from off-the-shelf tools.
Most projects I've worked on have been Spring-based (large ecosystem, even without the k8s bits).
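On the checksum and re-baseline points specifically, the Flyway CLI has two commands aimed at exactly these situations. This is a hedged sketch, not a verified 3 → 7 upgrade recipe; check your target version's docs for the exact flags and behavior:

```shell
# 1) After editing old .sql files so a newer Flyway can parse them, realign
#    the checksums stored in the history table with the files on disk:
flyway repair

# 2) Or give up on replaying history and re-baseline the existing schema,
#    marking everything up to the chosen version as already applied
#    (version number and description here are illustrative):
flyway baseline -baselineVersion=7 -baselineDescription="re-baseline"

# If the history table still uses the pre-v5 name, point Flyway at it:
#   flyway -table=schema_version info
```

Whether `repair` accepts your fixed-up files end to end is exactly the kind of thing to verify against a restored copy of production first.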
You may want to give bytebase (bytebase.com) a shot:
Web-Based
Open-Source
Can do MySQL schema migrations triggered by GitHub/GitLab, with full history
I am using Prisma + MySQL in production, and it works great! In the near future we'll need to use neo4j alongside it (or switch over completely). Any suggestions on whether we can achieve this with the existing artifacts? Apparently Prisma doesn't support neo4j, so should we continue using Prisma, or stop using it and adopt a neo4j ORM?
Prisma doesn't currently support neo4j, though there are plans to add support in the future. Polyglot support is a use case that Prisma is targeting at large. You can follow the development status in the GitHub issue (👍 to signal your interest).
In the meantime, I'd suggest looking at neo4j-specific abstractions.
In the Node.js ecosystem, there is the official neo4j driver and an OGM (Object Graph Mapper) called Neode.
I'm writing an application in MeteorJS, which requires use of MongoDB. However, I'd really like to use an SQL database, as my data is highly relational, and I could make use of features like views.
I see that IBM has a Mongo wireline driver which natively emulates Mongo i.e. you can create a frontend that thinks it's communicating to a Mongo database, while in reality, it's being backed by an SQL database. This, to me, seems ideal, at least until Meteor supports a native relational backend.
Both DB2 and Informix have Mongo drivers, and my question is this: have any of you used the JSON and Mongo driver capabilities of either of these DBs and are there limitations or factors to consider? This is a greenfield project so there's no legacy database that needs to be supported.
I'd prefer to use DB2, as Informix appears to be a legacy product and I'm hesitant to start a brand new project with technology I'll have trouble finding trained staff for. Ironically, however, it seems that Informix has deeper support for JSON, including full two-way conversion of JSON to relational tables and back, indexing, etc. (even sharding and replication)
My reading of DB2 is that currently it only supports JSON as an additional JSON/BSON field into which all JSON data will go, but without automatic two-way access to the other relational columns. Is this correct? Anyone using DB2's JSON features?
I suspect in future versions, IBM will put better JSON support into DB2 (sort of how XML was gradually integrated), but I need something now. So my options for now, as I see them:
Use Informix with its better JSON support.
Use DB2 with less JSON support (unless I'm mistaken), and wait for new versions.
Use MongoDB for now and wait for Meteor to support a relational DB.
Any other options?
EF7 claims to support lots of providers, but I'm having trouble finding documentation about which ones currently exist. I'm particularly interested in MySQL and Postgres providers.
ATM EF7 is still under development; currently it only supports SQL Server, SQLite, and Azure Table Storage.
The code base is not yet stabilized; once it stabilizes, MySQL (Oracle) will work on the providers.
As another update to the question: EF7 is starting to support PostgreSQL. Please have a look at the article below.
http://druss.co/2015/04/vnext-use-postgresql-fluent-nhibernate-from-asp-net-5-dnx-on-ubuntu/
Can anyone give sample code for moving data from MongoDB to an RDBMS? I have already tried fetching data from MongoDB and storing the output back in MongoDB, so I know how to do the Hadoop configuration in a Java job.
I also want to know three things:
Which Hadoop version supports both MongoDB and RDBMS?
Is it possible to use multiple collections as input? If so, how can we do that?
I tried a MongoDB query in Hadoop and it works fine, but when I define a sort or limit it does not work properly; it doesn't even fetch data from MongoDB.
1. Which Hadoop version supports both MongoDB and RDBMS?
I believe that all versions of Hadoop supporting MongoDB also support RDBMS (the RDBMS implementations predate MongoDB).
For supported versions of Hadoop to use with MongoDB, see: Building the Adapter. Check the version information as some Hadoop versions do not support the Streaming Connector (i.e. if you want to write your jobs in non-JVM languages such as Python).
2. Is it possible to use multiple collections as input? If so, how can we do that?
MongoDB Hadoop Connector v1.0.0 does not support multiple collections as input, but there are a few folks in the community working on this (see: Feature/multiple inputs).
3. I tried a MongoDB query in Hadoop and it works fine, but when I define a sort or limit it does not work properly; it doesn't even fetch data from MongoDB.
Can you provide an example of how/where you provided these options? Are you referring to the mongo.input.sort and mongo.input.limit properties?
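For reference, these properties are normally set on the job's Hadoop configuration, with the sort expressed as a JSON document and the limit as a number. A hedged example as a configuration fragment; the "timestamp" field name and the limit value are illustrative, and whether your connector version honors them is worth checking against its docs:

```xml
<!-- Hypothetical Hadoop job configuration fragment; field name and value
     are illustrative assumptions. -->
<property>
  <name>mongo.input.sort</name>
  <value>{"timestamp": 1}</value>
</property>
<property>
  <name>mongo.input.limit</name>
  <value>100</value>
</property>
```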
You may want to try enabling the Database Profiler in MongoDB to confirm the queries are being sent:
db.setProfilingLevel(2)
db.system.profile.find().sort({ ts: -1 }).limit(5)  // inspect the most recent profiled operations