Is there any way I can convert a table structure in a MySQL or Oracle database to XSD (XML Schema Definition) format?
Thank You.
Use XMLSpy.
http://williamjxj.wordpress.com/2011/05/25/1004/
Yes, but it's fairly complicated. You'll want to run the query SHOW CREATE TABLE <tablename> and it will return the full table creation statement (in tidy CREATE TABLE syntax).
Then you'll want to parse each line of the CREATE TABLE syntax in your language of choice. Thankfully the fields are neatly separated by newlines.
The types should be fairly easy to map to XSD types.
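As an alternative to parsing the SHOW CREATE TABLE output, here is a minimal sketch that pulls the same metadata from MySQL's information_schema (the schema and table names below are placeholders); your code would then map each type to an xs: type:

-- List column names, types, and nullability for one table; map each DATA_TYPE
-- to an XSD type in application code (e.g. varchar -> xs:string, int -> xs:integer).
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_database'   -- placeholder
  AND TABLE_NAME   = 'your_table'      -- placeholder
ORDER BY ORDINAL_POSITION;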
Where it gets complicated is when you're parsing foreign key relationships - then you'll need to define custom types in your XSD and reference them accordingly.
It really comes down to your implementation. If you're looking for a portable data format that you can easily import/export from your database then there are a number of other solutions.
I need to create dynamic views from JSON string data
CREATE OR REPLACE VIEW schema.vw_tablename COPY GRANTS AS
SELECT
    v:Duration::int AS Duration,
    v:Connectivity::string AS Connectivity
    ...
FROM public.tablename;
This is a manually written view for one of the tables, but I want to code it generically so that I can pass in the name of a table containing JSON data, have the view created automatically, and get the output in tabular format.
If you want the view created in Snowflake driven by data (as opposed to using a tool to create the views client side, which is what we do in our company), I think your only hope will be stored procedures. The detailed usage docs remind you that DDL operations commit the current transaction (which is always good to remember), but they also imply that you can do DDL, which is what you are asking for. That means you should be able to write some JavaScript that builds the CREATE VIEW command you want based on the data handed to it.
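As a rough sketch of the data-driven part (assuming, as in the question, that the JSON sits in a VARIANT column named v), the stored procedure could first discover the keys and value types to put in the column list of the generated CREATE VIEW:

-- One row per distinct top-level key in the JSON, with Snowflake's guess at its type;
-- a stored procedure could loop over this result to build the v:key::type expressions.
SELECT DISTINCT f.key AS json_key, TYPEOF(f.value) AS value_type
FROM public.tablename t,
     LATERAL FLATTEN(input => t.v) f;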
There is a nice two-part blog post that handles this requirement. Similar to what is mentioned in Simeon Pilgrim's answer, the blog also uses a stored procedure to generate the view, albeit using Snowflake SQL.
https://www.snowflake.com/blog/automating-snowflakes-semi-structured-json-data-handling/
https://www.snowflake.com/blog/automating-snowflakes-semi-structured-json-data-handling-part-2/
I'm building a Laravel app whose core features are driven by rather large JSON objects (the largest are between 1,000 and 1,500 lines).
I know there are better database choices than MySQL for storing files and blocks of data, but for various reasons I need to use MySQL for this application.
So my question is: how do I store my JSON objects most effectively in MySQL? I will not need to run any queries on the column that holds the data; there will be other columns for identifying it. Something like this:
id, title, created-at, updated-at, JSON-blobthingy
Any ideas?
You could use the JSON data type if you have MySQL version 5.7.8 or above.
You could store the JSON file on the server, and simply reference its location via MySQL.
You could also use one of the TEXT types.
The best answer I can give is to use MySQL 5.7. In this version the new JSON column type is supported, which handles large JSON documents very well.
https://dev.mysql.com/doc/refman/5.7/en/json.html
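A minimal sketch of the table layout from the question using the native JSON type (table and column names are illustrative):

CREATE TABLE json_documents (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title      VARCHAR(255) NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME NOT NULL,
    payload    JSON NOT NULL  -- requires MySQL 5.7.8 or above
);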
You could compress the data before inserting it if you don't need it to be searchable. I'm using the 'zlib' library for that.
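If you would rather keep the compression on the database side, MySQL's built-in COMPRESS()/UNCOMPRESS() functions (also zlib-based) are an alternative; a sketch with a hypothetical table:

CREATE TABLE compressed_docs (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title   VARCHAR(255) NOT NULL,
    payload BLOB  -- COMPRESS() returns a binary string
);

INSERT INTO compressed_docs (title, payload)
VALUES ('example', COMPRESS('{"key": "value"}'));

SELECT id, title, UNCOMPRESS(payload) AS payload_json
FROM compressed_docs;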
Simply put, you can use the LONGBLOB type, which can hold up to 4 GB of data, for the column holding the large JSON object. You can insert, update, and read this column normally, as if it were text or anything else.
Is it possible to set up the XML Source component so that it generates GUID values, instead of the default int values, for the automatically generated _id column?
I would send each output from the XML Source to a SQL table (using an OLE DB Destination), and add a GUID column to each table.
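A sketch of the second step, adding a GUID column on the SQL Server side (table, column, and constraint names are placeholders):

-- Add a uniqueidentifier column that defaults to a new GUID for every row
-- loaded by the OLE DB Destination.
ALTER TABLE dbo.ImportedNodes
    ADD row_guid UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_ImportedNodes_row_guid DEFAULT NEWID();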
Actually, I would forget about GUIDs because they are painful in general and with SSIS in particular; SSIS essentially does not provide any support for GUIDs.
The way to solve my problems was to write a custom component. No other alternatives were available.
I am evaluating a Mondrian-Saiku solution for a client.
After analyzing their current database schemas, I realize that what constitutes their 'fact table data' is currently stored in XML. The XML documents themselves are stored as blob datatypes in a MySQL table. Think of it like this: the table holds all the transactions of the company; the details of each transaction are stored in their own XML; each XML string is stored as one of the field values in a given transaction row.
This presents a slight dilemma since the Mondrian XML schema requires the explicit use of column names.
Short of having to extract and transfer the XML data to new tables (not realistic for my purposes due to the size of data and dependencies from other systems), is there any way I can work my client's existing setup for the purposes of a Mondrian-Saiku implementation?
You need to expose the data in a traditional table format. What is the database here? Can you create a database view that does some XML processing on the blob and exposes the columns?
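For example, since the blobs live in a MySQL table, a hedged sketch using ExtractValue() (table, column, and XPath expressions are hypothetical):

-- Expose selected values from the XML blob as ordinary columns so Mondrian
-- can reference them by name.
CREATE VIEW transaction_facts AS
SELECT
    t.id,
    ExtractValue(CONVERT(t.xml_blob USING utf8), '/transaction/amount') AS amount,
    ExtractValue(CONVERT(t.xml_blob USING utf8), '/transaction/customer_id') AS customer_id
FROM transactions t;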
Alternatively, something like Composite or JBoss Teiid might help here. These tools allow you to expose virtually anything as a standard-looking table. It may not be quick enough, though!
I have a dataset with a lot of columns that I want to import into a MySQL database, so I want to be able to create tables without specifying the column headers by hand. Rather, I want to supply a filename containing the column labels to (presumably) the MySQL CREATE TABLE command. I'm using the standard MySQL Query Browser tools in Ubuntu, but I didn't see an option for this in the create table dialog, nor could I figure out how to write a query to do this from the CREATE TABLE documentation page. But there must be a way...
A CREATE TABLE statement includes more than just column names:
Table name*
Column names*
Column data types*
Column constraints, like NOT NULL
Column options, like DEFAULT, character set
Table constraints, like PRIMARY KEY* and FOREIGN KEY
Indexes
Table options, like storage engine, default character set
* mandatory
You can't get all this just from a list of column names. You should write the CREATE TABLE statement yourself.
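For illustration, a small hypothetical statement that touches most of the pieces above (all names are placeholders):

CREATE TABLE orders (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,       -- column name, type, constraint
    customer_id INT UNSIGNED NOT NULL,
    status      VARCHAR(20) NOT NULL DEFAULT 'new',         -- column option: DEFAULT
    PRIMARY KEY (id),                                        -- table constraint
    FOREIGN KEY (customer_id) REFERENCES customers (id),     -- foreign key
    INDEX idx_status (status)                                -- index
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;                     -- table options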
Re your comment: Many software development frameworks support ways to declare tables without using SQL DDL. E.g. Hibernate uses XML files. YAML is supported by Rails ActiveRecord, PHP Doctrine, and Perl's SQLFairy. There are probably other tools that use other formats such as JSON, but I don't know one offhand.
But ultimately, all these "simplified" interfaces are no less complex to learn than SQL, while failing to represent exactly what SQL does. See also The Law of Leaky Abstractions.
Check out SQLFairy, because that tool might already convert from files to SQL in a way that can help you. And FWIW MySQL Query Browser (or under its current name, MySQL Workbench) can read SQL files. So you probably don't have to copy & paste manually.