dbdeploy error with --// in deploy-file - mysql

I am generating deployment-files for my mysql-database with phing and dbdeploy.
The output of a dbdeploy-file that is generated looks like the following:
-- Fragment begins: 8 --
INSERT INTO changelog
(change_number, delta_set, start_dt, applied_by, description) VALUES (8, 'Main', NOW(), 'dbdeploy', '8-add_tracking_code.sql');
--//
ALTER TABLE `order` ADD `tracking_code` VARCHAR(255) NOT NULL;
UPDATE changelog
SET complete_dt = NOW()
WHERE change_number = 8
AND delta_set = 'Main';
-- Fragment ends: 8 --
The problem is the --// before the ALTER statement. The database throws an error because of it. If I remove the --//, the whole file is correct.
Here is the relevant piece of my Phing build script so you can see how I am generating the .sql file with dbdeploy:
<target name="dbdeploy-migrate-all">
<!-- load the dbdeploy task -->
<taskdef name="dbdeploy" classname="phing.tasks.ext.dbdeploy.DbDeployTask"/>
<echo message="Loading deltas from ${build.dbdeploy.alters_dir}" />
<property name="build.dbdeploy.deployfile" value="${build.dbdeploy.deploy_dir}/deploy-${DSTAMP}${TSTAMP}.sql" />
<property name="build.dbdeploy.undofile" value="${build.dbdeploy.undo_dir}/undo-${DSTAMP}${TSTAMP}.sql" />
<!-- generate the deployment scripts -->
<dbdeploy
url="mysql:host=${db.host};dbname=${db.name}"
userid="${db.user}"
password="${db.pass}"
dir="${build.dbdeploy.alters_dir}"
outputfile="${build.dbdeploy.deployfile}"
undooutputfile="${build.dbdeploy.undofile}" />
<!-- execute the SQL - Use mysql command line to avoid trouble with large files or many statements and PDO -->
<property name="mysql.command" value="${progs.mysql} -h${db.host} -u${db.user} -p${db.pass} ${db.name} < ${build.dbdeploy.deployfile}" />
<echo message="Executing command: ${mysql.command}" />
<exec
command="${mysql.command}"
dir="${base.path}"
checkreturn="true" />
</target>
Why does dbdeploy generate a corrupt file?
Thanks for your help!

A long time has passed since this question was asked, however I ran into the same problem and have managed to work out where Niels is coming from on this one.
I think we both ran into the problem because we both followed the popular tutorial on phing and dbdeploy by Dave Marshall here: http://davedevelopment.co.uk/2008/04/14/how-to-simple-database-migrations-with-phing-and-dbdeploy.html
In his example SQL delta file he includes --// at the top; replacing that with a comment in /* ... */ format instead avoids this problem!
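For example, a corrected delta file might look like this (a sketch based on the fragment above; I'm assuming the usual dbdeploy convention where the --//@UNDO marker, which dbdeploy itself consumes to split the do and undo sections, keeps its dashed form):
/* 8-add_tracking_code.sql: add a tracking code column to orders */
ALTER TABLE `order` ADD `tracking_code` VARCHAR(255) NOT NULL;
--//@UNDO
ALTER TABLE `order` DROP `tracking_code`;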
So I would say this is a bug in the tutorial, which is 8 years old now. Dave notes at the top of the tutorial that he moved on to a different method about 4 years ago, so it is understandable that the tutorial has gone stale. I will submit a comment requesting an update though, because his page is a top-ranking search result for the topic, so it would be good if we can save people the same problem we've had!
It is a shame Phing didn't give a more detailed error report in the form of the SQL exception - there's an idea for a contribution to the dbdeploy script!

Related

BIML: Issues with datatype handling on ODBC source columns with varchar > 255

I'm just getting into BIML and have written some scripts to create a few DTSX packages. In general most things are working, but one thing is driving me crazy.
I have an ODBC source (PostgreSQL). From there I'm getting data out of a table using an ODBC source component. The table has a text column (the column's name is "description"). I cast this column to varchar(4000) in the query in the ODBC source (I know there will be truncation, but that's OK). If I do this manually in Visual Studio, the Advanced Editor of the ODBC source shows "Unicode string [DT_WSTR]" with a length of 4000 for both the external and the output column, so there everything is fine. But if I do the same thing with BIML and generate the SSIS package, the external column still says "Unicode string [DT_WSTR]" with a length of 4000, but the output column says "Unicode text stream [DT_NTEXT]". So the mapping done by BIML differs from the mapping done manually in SSIS. This causes two warnings:
A warning that metadata has changed and should be synced
And a warning that the source uses LOB columns and is set to row-by-row fetch.
Neither warning is nice, but the second one also causes a drastic degradation in performance! If I set the cast to varchar(255), the mapping is fine (the external and output columns are then "Unicode string [DT_WSTR]" with a length of 255). But as soon as I go higher, e.g. varchar(256), the output column is again treated as [DT_NTEXT].
Is there anything I can do about this? I have invested days in evaluating BIML and find many things a quality-of-life improvement, but this issue is a killer. It defeats the purpose of BIML if I have to correct its errors manually after every build.
Does anyone know how I can solve this issue? A correct automatic mapping between external and output columns would be great, but at least the option to define the mapping myself would be OK.
Any Help is appreciated!
Greetings
Marco
Edit: As requested, a minimal example for better understanding:
The column in the ODBC source (Postgres) has the type "text" (column name: description).
I select it in an ODBC source with this query (DirectInput):
SELECT description::varchar(4000) from mySourceTable
The ODBC-Source in Biml looks like this:
<OdbcSource Name="mySource" Connection="mySourceConnection">
<DirectInput>SELECT description::varchar(4000) from mySourceTable</DirectInput>
</OdbcSource>
If I now generate the dtsx package, the ODBC source throws the warnings mentioned above, with the datatypes mentioned above for the external and output columns.
As mentioned in the comment above, I got an answer from another direction:
You have to use DataflowOverrides in the ODBC-Source in BIML. For my example you have to do something like this:
<OdbcSource Name="mySource" Connection="mySourceConnection">
<DirectInput>SELECT description::varchar(4000) from mySourceTable</DirectInput>
<DataflowOverrides>
<OutputPath OutputPathName="Output">
<Columns>
<Column ColumnName="description" SsisDataTypeOverride="DT_WSTR" DataType="String" Length="4000" />
</Columns>
</OutputPath>
<OutputPath OutputPathName="Error">
<Columns>
<Column ColumnName="description" SsisDataTypeOverride="DT_WSTR" DataType="String" Length="4000" />
</Columns>
</OutputPath>
</DataflowOverrides>
</OdbcSource>
You won't have to do the overrides for all columns, only for the ones you have mapping issues with.
Hope this solution can help anyone who passes by.
Cheers

Migrating coldfusion application to Lucee

Our server is changing from ColdFusion to a Lucee server, and I'm tasked with updating our code for a couple of web applications. I'm not a ColdFusion guru, but I can often figure my way around things; hence my question.
The code I'm converting over is throwing this error:
Can't cast Object type [DateTime] to a value of type [Array]
I have been working through all of the queries and making sure that the output is appropriately CAST, which has resolved the majority of issues, but the small block of code that is stumping me throws the above error. The code is:
<cfset summaryStartDate = ArrayMin( qSummaryData["minHours"] ) />
<cfset summaryMaxDate = ArrayMax( qSummaryData["maxHours"] ) />
<cfset summaryEndDate = DateAdd("d", -(DayofWeek(#summaryMaxDate#))+6, #summaryMaxDate# ) />
minHours and maxHours are both DATETIME columns. I know that in the ColdFusion version they output like so:
summaryStartDate: 41204
summaryMaxDate: 43465
summaryEndDate: {ts '2019-01-04 00:00:00'}
Which, to me, means ColdFusion is doing a conversion of some kind that Lucee doesn't do (or at least that's what I've read). For reference, the database is MySQL, and minHours and maxHours come out as dates with 00:00:00 for the time.
I'm probably missing something obvious but I can't see it.
I preface this answer with "it is not the greatest fix", but it does work. Taking my cue from andrewdixon, I looked at sidestepping the use of an array (which the data wasn't suited for) and considered alternatives.
I settled on a query of queries, extracting the min value and setting it with a cfset, then doing the same for the max value. My two queries were:
<cfquery name="smallestFigure" dbtype="query">SELECT CAST(MIN(minHours) AS DATETIME) as outputMin FROM qSummaryData;</cfquery>
<cfquery name="largestFigure" dbtype="query">SELECT CAST(MAX(maxHours) AS DATETIME) as outputMax FROM qSummaryData;</cfquery>
I cfset these into summaryStartDate and summaryMaxDate so that this line (mentioned in the original post) could run:
<cfset summaryEndDate = DateAdd("d", -(DayofWeek(#summaryMaxDate#))+6, #summaryMaxDate# ) />
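The intermediate cfset step isn't shown above; presumably it looks something like this (a sketch using the column aliases from the two queries):
<cfset summaryStartDate = smallestFigure.outputMin />
<cfset summaryMaxDate = largestFigure.outputMax />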
Shawn mentioned I didn't need the #'s around summaryMaxDate, but I haven't made that change as yet. andrewdixon mentioned using query.reduce() as an alternative, and I imagine that would be far more succinct than what I've done, so if someone comes up with a better solution, please post it as an answer; a rough sketch of the reduce() idea follows.
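For illustration only, an untested sketch of that idea (this assumes Lucee's query member function reduce(), whose callback receives the accumulated result and the current row as a struct):
<cfscript>
// Walk every row and keep the largest maxHours value.
summaryMaxDate = qSummaryData.reduce( function( result, row ) {
    return dateCompare( row.maxHours, result ) GT 0 ? row.maxHours : result;
}, createDate( 1900, 1, 1 ) );
</cfscript>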
Also thank you all for the support and ideas.

WSO2 MySQL Adaptor Syntax Error

I am trying to create an event formatter for MySQL in WSO2 but am hitting a problem. It appears to be linked to the use of "composite key columns". The error I am getting is:
ERROR - {MysqlEventAdaptorType}
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Window = '15'' at line 1
This only happens if I use two or more keys in the formatter:
<eventFormatter name="GenericAccountSQLFormatter" statistics="enable"
trace="enable" xmlns="http://wso2.org/carbon/eventformatter">
<from streamName="GenericAccountMeasureStream" version="1.0.0"/>
<mapping customMapping="disable" type="map"/>
<to eventAdaptorName="APCSQLOut" eventAdaptorType="mysql">
<property name="table.name">AccountStats</property>
<property name="update.keys">AccountId,Window</property>
<property name="execution.mode">insert-or-update</property>
</to>
</eventFormatter>
If I remove either of the keys (AccountId, Window), the formatter sends data to MySQL without problems.
Can anyone help?
This is a bug that happens when existing events are sent to a MySQL adaptor with composite keys. I created a JIRA to track this, and the source patch is also available there.
If you don't want to create/apply a patch, then as a quick workaround for now you can output a composite key from the CEP query by concatenating the two attributes, and use that single attribute as the key in the formatter.
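For example (a sketch only; the attribute names are invented and I'm assuming a string concatenation extension such as str:concat is available in your CEP version):
from GenericAccountMeasureStream
select str:concat(accountId, '_', timeWindow) as accountWindowKey, hits
insert into AccountStatsStream;
You would then set update.keys to the single attribute accountWindowKey in the formatter.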
UPDATE
The source patch with the fix and build instructions is now available in the JIRA here. Once built, you need to apply it as a patch to CEP: create a directory named patch0xyz (where xyz are digits, as in patch0135; also make xyz > 100) in <CEP>/repository/components/patches and place the jar inside it, renamed to org.wso2.carbon.event.output.adaptor.mysql_1.0.1.jar. Then restart the server.
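For example, with xyz = 135 the resulting layout would be:
<CEP>/repository/components/patches/patch0135/org.wso2.carbon.event.output.adaptor.mysql_1.0.1.jar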

Is it possible to perform a MySQL query using Phing and set the value as a property?

I'm new to Phing.
I'd like to query a value in a MySQL database table, and have the value set as a property so that I can echo it out nicely to the screen.
I can see that there is a PDOSQLExecTask which would allow me to run some SQL, but I can't see how to get the returned value into a property.
The query I want to run is:
SELECT MAX(change_number)
FROM changelog;
I'd like it set into a property.
Can anyone shed any light please?
Thanks,
Chris
Since I have access to MySQL at the command line, I went with the following solution. I'm sure it's not the best; if someone can improve it, please do!
<!-- What's the latest delta that's been applied to this deployment? -->
<exec
command="${progs.mysql} -h${db.host} -u${db.user} -p${db.pass} -e 'USE ${db.main_db}; SELECT MAX(`change_number`) FROM `changelog`;'"
dir="."
checkreturn="false"
passthru="false"
outputProperty="latest_version_output"
/>
<php expression="preg_replace('/[^0-9]|\r|\n/si', '', '${latest_version_output}');" returnProperty="latest_version_applied" />
<echo msg="Latest delta applied was: ${latest_version_applied}" />
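As a possible refinement (untested), mysql's -N (skip column names) and -B (batch mode) flags should reduce the output to the bare number, removing the need for the preg_replace cleanup:
<exec
command="${progs.mysql} -N -B -h${db.host} -u${db.user} -p${db.pass} -e 'SELECT MAX(`change_number`) FROM `changelog`;' ${db.main_db}"
dir="."
checkreturn="false"
passthru="false"
outputProperty="latest_version_applied"
/>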
The PDOSQLExecTask comes with two default formatters, which send their output to a file. To change this, you'd probably have to implement your own formatter. On the other hand, the task appears to read its SQL commands from a separate file of SQL commands, not from the build file.
So on the whole, it seems to me like you might be better off writing your own task, probably reusing some code from the implementation of PDOSQLExecTask but with your own command input and result output. Unless calling the mysql command-line binary is an alternative for you, in which case you could wrap up that call and redirect its output to a property using the outputProperty attribute of the ExecTask.

Create a covering index with "INCLUDE" columns using an NHibernate mapping file

I need to create a non-clustered index with INCLUDE columns (see the <create> tag below). Here's the mapping file:
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="MyApp" assembly="MyApp">
<class name="User" table="user" >
<id name="Id" type="Guid" column="user_id">
<generator class="guid.comb"/>
</id>
<property name="Name" column="name" not-null="true" />
<property name="Phone" column="phone" />
<property name="Zipcode" column="zipcode" />
</class>
<database-object>
<create>
CREATE NONCLUSTERED INDEX [IX_user_zipcode_id]
ON User (Zipcode)
INCLUDE (Name, Phone)
</create>
<drop>
DROP INDEX IX_user_zipcode_id
</drop>
<dialect-scope name="NHibernate.Dialect.MsSql2000Dialect"/>
<dialect-scope name="NHibernate.Dialect.MsSql2005Dialect"/>
<dialect-scope name="NHibernate.Dialect.MsSql2008Dialect"/>
</database-object>
</hibernate-mapping>
The problem I'm having is that the index is not created at all. Nothing appears to be happening. This is my first time using <database-object>, so I may be doing something wrong here.
I'm guessing INCLUDE is SQL Server-specific, which is why the dialect-scope is there. I know how to create single- and multi-column indexes, but this is not what I want. I want a single-column index on zipcode, with all other columns of the user table in the INCLUDE clause. Is there any way to create this type of index using the mapping file, or some other way?
This is probably a long shot, but it would be nice to not have to specify every column but the indexed one in the INCLUDE part of the query... Instead, I'd like to just let NHibernate add to the index any new columns that are added as properties in the mapping file.
So part of the problem was indeed my lack of understanding of the <database-object> tag, due mostly to poor documentation. From what I've gathered, the <create> and <drop> tags are only used with SchemaExport, like so:
Dim schemaExport As SchemaExport = New SchemaExport(NhibernateConfiguration)
schemaExport.Execute(False, True, False)
My app doesn't create the schema using that class. Instead it uses SchemaUpdate, so the schema isn't blown away every time (the database may already exist on the user's machine):
Dim schemaUpdate As SchemaUpdate = New SchemaUpdate(NhibernateConfiguration)
schemaUpdate.Execute(False, True)
That was the problem. The next logical question is then: how do you execute SQL using SchemaUpdate? The answer... you can't. See this post: https://forum.hibernate.org/viewtopic.php?f=6&t=969584&view=next
Alas, I am left to run the raw SQL myself, something like the sketch below. Maybe some day they will add an <update> tag.
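For completeness, a rough, untested sketch of the raw-SQL route: run SchemaUpdate as before, then issue the index DDL through a session (the sessionFactory variable and the sys.indexes existence check are my assumptions, not taken from the NHibernate docs):
Dim schemaUpdate As SchemaUpdate = New SchemaUpdate(NhibernateConfiguration)
schemaUpdate.Execute(False, True)
' Create the covering index by hand, but only if it doesn't already exist (SQL Server).
Using session As ISession = sessionFactory.OpenSession()
    Dim sql As String = _
        "IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'IX_user_zipcode_id') " & _
        "CREATE NONCLUSTERED INDEX IX_user_zipcode_id ON [user] (zipcode) INCLUDE (name, phone)"
    session.CreateSQLQuery(sql).ExecuteUpdate()
End Using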