I have inherited a 4D database that I need to extract all the data from and import into another relational database. The 4D ODBC driver has quite a few quirks that prevent it from being used as a SQL Server linked server. I can give the gory details if anyone wants them, but suffice it to say, it's not looking like a possibility.
Another possibility I tried was the MS SQL Server Import Data wizard. This is, of course, SSIS under the covers, and it requires the 32-bit ODBC driver. This gets part of the way, but it fails when trying to create the target tables because it doesn't understand what a CLOB datatype is.
So my reasoning is that if I can build the DDL from the existing table structure in the 4D database, I might be able to import the data using the Import Data wizard if I create the tables first.
Any thoughts on what tools I could use to do this?
Thanks.
Alas, the 4D ODBC drivers are a (ahem) vessel filled with a fertiliser so powerful that none may endure its odour...
There is no simple answer, but if you have made it here, you are already in a bad place, so I will share some things that will help.
You can use the freeware ODBC Query Tool, which can connect through a user or system DSN with the 64-bit driver. Then run this query:
SELECT table_id, table_name, column_name, data_type, data_length, nullable, column_id
FROM _user_columns
ORDER BY table_id, column_id
LIMIT ALL
Note: ODBC Query Tool fetches only the first 200-row page by default; you need to scroll to the bottom of the result set to make it fetch everything.
I also tried DataGrip from JetBrains and RazorSQL. Neither would work against the 4D ODBC DSN.
Now that you have this result set, export it to Excel and save the spreadsheet. I found the text file outputs not to be useful; they are exported as readable text, not CSV or tab-delimited.
I then used the Microsoft SQL Server Import Data wizard (which is SSIS) to import that data into a table that I could then manipulate. I am targeting SQL Server, so this step makes sense for me, but if you are importing into another destination database, you may create the table definitions from the data you now have using whatever tool you think is best.
Once I had this in a table, I used this T-SQL script to generate the DDL:
use scratch;
-- Reference for data types: https://github.com/PhenX/4d-dumper/blob/master/dump.php
declare @TableName varchar(255) = '';
declare C1 cursor for
    select distinct table_name
    from [dbo].[4DMetadata]
    order by 1;
open C1;
fetch next from C1 into @TableName;
declare @SQL nvarchar(max) = '';
declare @ColumnDefinition nvarchar(max) = '';
declare @Results table (columnDefinition nvarchar(max));
while @@FETCH_STATUS = 0
begin
    set @SQL = 'CREATE TABLE [' + @TableName + '] (';
    declare C2 cursor for
        select
            '[' + column_name + '] ' +
            case data_type
                when 1 then 'BIT'
                when 3 then 'INT'
                when 4 then 'BIGINT'
                when 5 then 'BIGINT'
                when 6 then 'REAL'
                when 7 then 'FLOAT'
                when 8 then 'DATE'
                when 9 then 'DATETIME'
                when 10 then
                    case
                        when data_length > 0 then 'NVARCHAR(' + cast(data_length / 2 as nvarchar(5)) + ')'
                        else 'NVARCHAR(MAX)'
                    end
                when 12 then 'VARBINARY(MAX)'
                when 13 then 'NVARCHAR(50)'
                when 14 then 'VARBINARY(MAX)'
                when 18 then 'VARBINARY(MAX)'
                else 'BLURFL' -- Deliberate garbage to prevent this from creating a table!
            end +
            case nullable
                when 0 then ' NOT NULL'
                when 1 then ' NULL'
            end +
            ', '
        from [dbo].[4DMetadata]
        where table_name = @TableName
        order by column_id;
    open C2;
    fetch next from C2 into @ColumnDefinition;
    while @@FETCH_STATUS = 0
    begin
        set @SQL = @SQL + @ColumnDefinition;
        fetch next from C2 into @ColumnDefinition;
    end
    -- Replace the trailing comma with a closing parenthesis and terminating semicolon
    set @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1) + ');';
    close C2;
    deallocate C2;
    -- Add the result
    insert into @Results (columnDefinition) values (@SQL);
    fetch next from C1 into @TableName;
end
close C1;
deallocate C1;
select * from @Results;
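If you are not targeting SQL Server, the same type mapping and comma handling can be done outside T-SQL. Here is a small, hypothetical Python stand-in (the function names are my own, not part of any 4D tooling) that mirrors the CASE expression and the trailing-comma trim in the script above:

```python
# Hypothetical helper mirroring the T-SQL script above: builds one
# CREATE TABLE statement from the metadata rows of a single table.
# Type codes follow the CASE expression in the script.
TYPE_MAP = {
    1: "BIT", 3: "INT", 4: "BIGINT", 5: "BIGINT",
    6: "REAL", 7: "FLOAT", 8: "DATE", 9: "DATETIME",
    12: "VARBINARY(MAX)", 13: "NVARCHAR(50)",
    14: "VARBINARY(MAX)", 18: "VARBINARY(MAX)",
}

def sql_type(data_type, data_length):
    """Map a 4D data_type/data_length pair to a SQL Server type."""
    if data_type == 10:  # text: data_length is halved, as in the T-SQL above
        return f"NVARCHAR({data_length // 2})" if data_length > 0 else "NVARCHAR(MAX)"
    return TYPE_MAP.get(data_type, "BLURFL")  # garbage on purpose, as in the script

def build_create_table(table_name, columns):
    """columns: iterable of (column_name, data_type, data_length, nullable),
    already ordered by column_id."""
    parts = [
        f"[{name}] {sql_type(dtype, dlen)} {'NULL' if nullable else 'NOT NULL'}"
        for name, dtype, dlen, nullable in columns
    ]
    return f"CREATE TABLE [{table_name}] (" + ", ".join(parts) + ");"
```

For example, `build_create_table("T", [("a", 3, 0, 0), ("b", 10, 0, 1)])` produces `CREATE TABLE [T] ([a] INT NOT NULL, [b] NVARCHAR(MAX) NULL);`.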
I used the generated DDL to create the database table definitions.
Unfortunately, SSIS will not work with the 4D ODBC driver; it keeps throwing authentication errors. But you may be able to load the data with your own bespoke tool that tolerates 4D's ODBC weirdness.
I have my own tool (unfortunately I cannot share it) that will load the XML exported data directly to the database. So I am finished.
Good luck.
Boffin,
Does "inherited a 4D database" mean it's running or that you have the datafile and structure but can't open it?
If it's running and you have access to the user environment, the easy thing to do is simply use 4D's export functions. If you don't have access to the user environment, the only ODBC option is if the database is designed to allow ODBC access or if the developer provided some export capability.
If you can't run it, you won't be able to directly access the datafile. 4D uses a proprietary data structure, and it has changed from version to version. It's not encrypted by default, so you can actually read/scavenge the data, but you can't just build DDL and pull from it. ODBC is a connection between the running app and some other source.
Your best bet will be to contact the developer and ask for help. If that's not an option, get the thing running. If it's really old, you can contact 4D for a copy of archived versions. Depending on which version it is and how it's built (compiled, interpreted, engined), your options vary.
[Edit] The developer can specify the schema that's available through SQL, and we frequently limit what's exposed, either for security or usability reasons. It sounds like this may be the case here; it would explain why you don't see the total structure.
This can also be done with the native 4D structure. I can limit how much of the 4D structure is visible in user mode on a field by field/table by table basis. Usually this is to make the system less confusing to users but it's also a way to enforce data security. So I could allow you to download all your 'data' while not allowing you to download the internal elements that make the database to work.
If you are able to export the data you want that sounds like the thing to do even if it is slow.
Related
I need to get last processed date of SSAS cube in SSIS and save it into a variable.
I've tried an "Execute SQL Task":
SELECT LAST_DATA_UPDATE as LAST_DT FROM $system.mdschema_cubes
WHERE CUBE_NAME = 'CubeName'
It works OK in the SQL Server Management Studio MDX query window, but in SSIS it says: Unsupported data type on result set binding.
Then I've tried:
WITH MEMBER [Measures].[LastProcessed] AS ASSP.GetCubeLastProcessedDate() SELECT [Measures].[LastProcessed] ON 0 FROM [CubeName]
And it says '[ASSP].[GetCubeLastProcessedDate]' function does not exist.
Any ideas how to do this?
Thank you
A linked server might be your best option.
Create the linked server with the following, changing as appropriate:
EXEC master.dbo.sp_addlinkedserver
    @server = N'LINKED_SERVER_OLAP_TEST', -- Change to a suitable name
    @srvproduct = '', -- Creates the product name as blank
    @provider = N'MSOLAP', -- Analysis Services
    @datasrc = N'localhost', -- Change to your data source
    @catalog = N'TESTCUBE' -- Change to set the default cube
Change the data source of your Execute SQL Task to make sure it is pointing to one of the databases where the linked server is hosted, i.e. don't use an Analysis Services data source, use a standard OLE DB one. Then have the following in your Execute SQL Task (changing as appropriate):
SELECT *
FROM OpenQuery(LINKED_SERVER_OLAP_TEST,'SELECT LAST_DATA_UPDATE as LAST_DT FROM $system.mdschema_cubes
WHERE CUBE_NAME = ''CUBENAME''')
Set the variable to be DATETIME and the result set to be single row.
There may well be other ways to do this, however I have always found this method the most straight forward.
My query below runs fine in a MySQL client (Heidi) but errors out in Tableau. I've looked here and in the Tableau Community site, and the only suggestion I see is to take out semicolons. I've tried that to no avail. I am connected just fine to my database through Tableau; I can see the tables, and other queries run without a problem. Any ideas on what might be the problem here? I'm running Tableau 8.2. Thanks!
SET @sql = NULL;
SELECT
Group_Concat(Distinct CONCAT(
'MAX(IF(wsd.cid = ''', wc.cid, ''', wsd.data, NULL)) AS ''',wc.name,'',''''))
INTO @sql
FROM webform_component wc
WHERE wc.nid = 107;
SET @sql = Concat('SELECT wsd.sid,',@sql,'
FROM webform_submitted_data wsd
LEFT JOIN webform_component AS wc ON wsd.cid=wc.cid
WHERE wsd.nid = 107 AND wsd.sid >= 14967
GROUP BY wsd.sid');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
It turns out that Tableau does not support this type of query when connected to a MySQL database. The recommendation I received was to try to reformat it as a nested subquery.
This is the answer I received from a Program Manager at Tableau when I asked if using the above query was possible:
"You can't. It should be a single query that returns a result set. Tableau will wrap the custom SQL query as a subquery. If your SQL can't be treated that way, you will get errors.
We support this structure on data sources that we support "initial SQL". For example Teradata, Aster...
It allows you to run any SQL upfront, create temp tables etc. hence called initial SQL.
Then you can write a query as part of connection which will be evaluated after "initial SQL" and take advantage of the objects created in the initial SQL step."
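Since the GROUP_CONCAT step is really just string assembly, one workaround is to build the pivot statement client-side and hand Tableau the single resulting query. A hypothetical sketch (the `build_pivot_sql` helper and the component list are my own, assuming you fetch the `(cid, name)` pairs from `webform_component` separately):

```python
def build_pivot_sql(components, nid=107, min_sid=14967):
    """components: list of (cid, name) pairs, as returned by
    SELECT cid, name FROM webform_component WHERE nid = ...
    Builds the same single pivot statement the SET @sql block produces.
    Illustration only: values are interpolated directly, so do not feed
    it untrusted input."""
    cols = ", ".join(
        f"MAX(IF(wsd.cid = '{cid}', wsd.data, NULL)) AS '{name}'"
        for cid, name in components
    )
    return (
        f"SELECT wsd.sid, {cols} "
        f"FROM webform_submitted_data wsd "
        f"LEFT JOIN webform_component AS wc ON wsd.cid = wc.cid "
        f"WHERE wsd.nid = {nid} AND wsd.sid >= {min_sid} "
        f"GROUP BY wsd.sid"
    )
```

The resulting string is a single SELECT, which is exactly the shape Tableau can wrap as a subquery.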
I had this problem with custom queries as well. I could connect to the db and use the gui to bring data in but I couldn't write a custom query.
Downloading the mysql drivers was enough to fix it for me.
I have also read that you should download versions 3.51 and 5.x and use the 32-bit versions of each, even if you are running 64-bit Tableau.
I came across a similar situation with custom SQL query.
I had a dashboard developed with a custom SQL query connecting to the database with a program ID filter: "SQL Query where program id = 222". I needed to replicate the same dashboard for another program, "SQL Query where program id = 333". The query didn't work initially when refreshing the data source.
Solution: if you are working with an extract, change the connection back to 'live', then create a new extract.
I am trying to feed a table in a MySQL database with something like 1,000,000 rows.
I am using Lua and the function:
conn:execute("INSERT INTO orders (dates, ordertype) VALUES ('"..tab[1][dateIndex]......
for each line.
The problem is that it is very slow, and I really need more efficiency.
Do you have other solutions (maybe creating a .csv and loading it with MySQL, maybe there is a function that can load a matrix into a database more efficiently, ...)? Using Lua is an obligation, as I am working within an existing project.
Thank you for your help
First, you can stop committing on each insert.
You can also use prepared queries; these are provided by both Lua-DBI and Lua-ODBC.
I use ODBC.
local env = odbc.environment()
local cnn = env:driverconnect{
  Driver = IS_WINDOWS and '{MySQL ODBC 5.2 ANSI Driver}' or 'MySQL';
  db  = 'test';
  uid = 'root';
};
cnn:set_autocommit(false)
local stmt = cnn:prepare("INSERT INTO orders (dates, ordertype) VALUES(?,?)")
for i, row in ipairs(tab) do
  stmt:bindstr(row[dateIndex])
  ...
  stmt:execute()
  if i % 1000 == 0 then
    cnn:commit()
  end
end
cnn:commit() -- commit any remaining rows in the final partial batch
Lua-ODBC also provides bind variables. They may be faster because they do not call SQLBindParam each time.
-- create stmt as in the previous example
...
local dateValue = odbc.date():bind_param(stmt, 1)
local orderValue = odbc.ulong():bind_param(stmt, 2)
for i, row in ipairs(tab) do
  dateValue:set(row[1]) -- date is yyyy-mm-dd, e.g. 2014-10-14
  orderValue:set(row[2])
  stmt:execute()
  ...
-- commit as in the previous example
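The batching pattern above is driver-agnostic: turn autocommit off, reuse one prepared statement, and commit every N rows. Here is a minimal runnable sketch of the same idea using Python's sqlite3 purely as a stand-in for the MySQL connection (the point is the batching, not the driver):

```python
import sqlite3

# In-memory stand-in for the MySQL connection; the batching pattern is identical.
cnn = sqlite3.connect(":memory:")
cnn.execute("CREATE TABLE orders (dates TEXT, ordertype INTEGER)")

# Fake data: 2500 rows, deliberately not a multiple of the batch size.
rows = [("2014-10-%02d" % (i % 28 + 1), i % 5) for i in range(2500)]

BATCH = 1000
for i, row in enumerate(rows, start=1):
    cnn.execute("INSERT INTO orders (dates, ordertype) VALUES (?, ?)", row)
    if i % BATCH == 0:
        cnn.commit()  # flush a full batch
cnn.commit()          # commit the final partial batch

count = cnn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 2500
```

Note the trailing commit after the loop: without it, any rows past the last full batch are silently lost when the connection closes without committing.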
I recently started with MySQL, so I might be making a beginner's mistake. Any help is appreciated.
I connect to the database in Delphi and define a query with columns of datatypes integer, decimal, and varchar.
The problem is that when I select a query in Delphi and debug after opening the query, the varchar columns do not appear, as if I had never selected them.
The ODBC connector driver is the latest, mysql-connector-odbc-5.1.10.
If you defined persistent fields for your TQuery in design mode, maybe you forgot to add the varchar (TStringField) field, or misspelled its FieldName. Also make sure the field is varchar rather than nvarchar (TWideStringField).
Another solution is to remove all persistent fields from your TQuery.
Speculating: it seems you are using the dbTables -> BDE -> ODBC -> MySQL data access path. BDE skips fields with unknown data types; probably the Unicode character type is not supported by BDE.
Possible solutions:
try to set ANSI character set for the connection in ODBC connection parameters;
use dbGo / ADO data access components;
use dbExpress components with MySQL dbExpress driver;
use 3rd-party data access components.
Sample query:
SELECT anumber, adecimalnumber, avarchar FROM atable
Sample Delphi code (written from memory, so never mind small mistakes):
procedure Test;
var
AQuery: TADOQuery;
ANumber: integer;
ADecimal: Real;
AString: string;
begin
AQuery:= TADOQuery.Create(nil);
try
AQuery.Connection:= SomeODBCConnection;
AQuery.SQL.Text:= 'SELECT anumber, adecimalnumber, avarchar FROM atable';
AQuery.Open;
ANumber:= AQuery.FieldByName('anumber').AsInteger;
ADecimal:= AQuery.FieldByName('adecimalnumber').AsFloat;
AString:= AQuery.FieldByName('avarchar').AsString; // Gets the varchar.
//Use AQuery.Next + test for AQuery.EOF to walk through rows.
finally
AQuery.Free;
end;
end;
It is really not difficult at all to get varchar data.
I need to use text files as a data source in SSRS. I tried accessing this with an 'OLE DB provider for Microsoft directory services' connection, but I could not. The query is given below.
Also, let me know how to query the data.
I know this thread is old, but as it came up in my search results this may help other people.
There are two 'sort of' workarounds for this. See the following:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=130650
So basically you should use OLE DB as the data source, then in the connection string type:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=xxxx;Extended Properties="text;HDR=No;FMT=Delimited"
Then make sure your file is saved in .txt format, with comma delimiters. Where I've put xxxx you need to put the FOLDER directory, e.g. C:\Temp; don't go down to the individual file level, just the folder it's in.
In the query you write for the dataset, you specify the file name as though it were a table - essentially your folder is your database, and the files in it are tables.
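The folder-as-database model is easy to picture outside the Jet driver too. A toy illustration (file and table names here are invented for the example, not part of any real setup):

```python
import csv
import os
import tempfile

# Toy illustration of the Jet text-driver model: the folder is the
# "database", and each delimited file in it is a "table".
folder = tempfile.mkdtemp()  # stands in for a folder like C:\Temp
with open(os.path.join(folder, "sales.txt"), "w", newline="") as f:
    f.write("id,amount\n1,10\n2,20\n")

def read_table(folder, table):
    """Rough analogue of SELECT * FROM <table> against the folder-as-database."""
    with open(os.path.join(folder, table + ".txt"), newline="") as f:
        return list(csv.DictReader(f))

rows = read_table(folder, "sales")
print(len(rows))  # 2
```

Just as with the Jet provider, the query layer never sees the folder path directly; the file name plays the role of the table name.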
Thanks
I have had great success creating linked servers in SQL to link to disparate text files for creating SSRS reports. Below is sample SQL to link to your txt files:
EXEC master.dbo.sp_addlinkedserver @server = N'', @srvproduct = N'', @provider = N'Microsoft.Jet.OLEDB.4.0', @datasrc = N'', @provstr = N'text'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'YourLinkedServerName', @useself = N'False', @locallogin = NULL, @rmtuser = NULL, @rmtpassword = NULL
I simply used the BULK INSERT command to load the flat file into a temporary table for the SSRS dataset, like this:
CREATE TABLE #FlatFile
(
Field1 int,
Field2 varchar(10),
Field3 varchar(15),
Field4 varchar(20),
Field5 varchar(50)
)
BEGIN TRY
BULK INSERT #FlatFile
FROM 'C:\My_Path\My_File.txt'
WITH
(
FIELDTERMINATOR ='\t', -- TAB delimited
ROWTERMINATOR ='\n', -- or '0x0a' (whatever works)
FIRSTROW = 2, -- has 1 header row
ERRORFILE = 'C:\My_Path\My_Error_File.txt',
TABLOCK
);
END TRY
BEGIN CATCH
-- do nothing (prevent the query from aborting on errors...)
END CATCH
SELECT * FROM #FlatFile
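The same load-then-query approach works in any environment that can stage a table. As a runnable sketch, here is the BULK INSERT pattern mirrored with Python's stdlib, using sqlite3 as a stand-in staging database (file name and columns are invented for the example):

```python
import csv
import os
import sqlite3
import tempfile

# Write a small tab-delimited file with one header row, matching the
# BULK INSERT example above (FIRSTROW = 2 skips the header).
path = os.path.join(tempfile.mkdtemp(), "My_File.txt")
with open(path, "w", newline="") as f:
    f.write("Field1\tField2\n1\talpha\n2\tbeta\n")

cnn = sqlite3.connect(":memory:")
cnn.execute("CREATE TABLE FlatFile (Field1 INTEGER, Field2 TEXT)")

# Stream the file into the staging table, skipping the header row.
with open(path, newline="") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # FIRSTROW = 2
    cnn.executemany("INSERT INTO FlatFile VALUES (?, ?)", reader)
cnn.commit()

print(cnn.execute("SELECT COUNT(*) FROM FlatFile").fetchone()[0])  # 2
```

Once the rows are staged, the report query is just ordinary SQL against the table, exactly as with the `SELECT * FROM #FlatFile` above.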
I don't think you can.
See Data Sources Supported by Reporting Services; in that table, your only chance would be "Generic ODBC data source". However, a text file is not ODBC-compliant AFAIK: no types, no structure, etc.
Why not just display the text files? It seems a bit strange to query text files to bloat them into formatted HTML...
I'm not of the mind that you can, but a workaround, if your text files are CSVs or the like, is to create an SSIS package which brings that data into a table in SQL Server, which you can then query like there's no tomorrow. SSIS handles Flat File Sources with ease.
You can even automate this by right-clicking the database in SSMS and choosing Tasks -> Import Data. Walk through the wizard, and you can save the package at the end.