CAML Query throwing error

Pretty new to CAML queries, but I'm trying to query a list based on Status = Completed and a date range.
It is throwing the following error: "Unexpected Error: One or more field types are not installed properly. Go to the list settings page to delete these fields. Microsoft.SharePoint"
Status and Created are both system columns, so I'm pretty sure I have the field names correct.
I have tested the query without the date range and it runs as expected, so I think my problem is somewhere in the date range conditions. From what I have read, Created expects the time to follow the date. The query is below; any help would be greatly appreciated.
<Where>
<And>
<And>
<Eq><FieldRef Name="Status" />Value Type="Choice">Completed</Value></Eq>
</And>
<Geq>
<FieldRef Name="Created" /><Value IncludeTimeValue="TRUE"
Type="DateTime">2013-07-02T00:00:01Z</Value>
</Geq>
<Leq>
<FieldRef Name="Created" /><Value IncludeTimeValue="TRUE"
Type="DateTime">2013-07-02T23:59:59Z</Value>
</Leq>
</And>
</Where>

You have some basic syntax errors in your query above, which may be causing the issue. See if this helps.
You were missing a left angle bracket on the opening <Value> tag in the Status condition.
You had too many <And> tags and they were out of sequence.
I changed the Type attribute of the Status <Value> tag to Text.
<Where>
  <And>
    <Eq><FieldRef Name="Status" /><Value Type="Text">Completed</Value></Eq>
    <And>
      <Geq>
        <FieldRef Name="Created" /><Value IncludeTimeValue="TRUE" Type="DateTime">2013-07-02T00:00:01Z</Value>
      </Geq>
      <Leq>
        <FieldRef Name="Created" /><Value IncludeTimeValue="TRUE" Type="DateTime">2013-07-02T23:59:59Z</Value>
      </Leq>
    </And>
  </And>
</Where>
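As a side note, the one-second offsets at the range boundaries can be avoided with an exclusive upper bound: take everything on or after midnight of the target day and strictly before midnight of the next day. A sketch of that variant, using the same field names and example dates as above (<Lt> is the standard CAML "less than" operator; treat this as an untested alternative rather than part of the fix):

<Where>
  <And>
    <Eq><FieldRef Name="Status" /><Value Type="Text">Completed</Value></Eq>
    <And>
      <Geq>
        <FieldRef Name="Created" /><Value IncludeTimeValue="TRUE" Type="DateTime">2013-07-02T00:00:00Z</Value>
      </Geq>
      <Lt>
        <FieldRef Name="Created" /><Value IncludeTimeValue="TRUE" Type="DateTime">2013-07-03T00:00:00Z</Value>
      </Lt>
    </And>
  </And>
</Where>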

Related

How to manually increment UNIX_TIMESTAMP() when inserting a new row with Liquibase?

I have a cart table with a cart_number column which I would like to be the current time in milliseconds.
I've created a database changelog file and everything works fine, but the problem is that the span of time between two inserts is too short to get a "unique" cart_number. Here is the Liquibase insert:
<property name="now" value="UNIX_TIMESTAMP()" dbms="mysql"/>
<changeSet id="2" author="Me">
  <insert tableName="cart">
    <column name="user_id" value="1"/>
    <column name="cart_number" valueDate="${now}"/>
  </insert>
  <insert tableName="cart">
    <column name="user_id" value="2"/>
    <column name="cart_number" valueDate="${now}"/>
  </insert>
</changeSet>
And I know this isn't a solution, but I've tried to write something like ${now+1} and it didn't work. Any help would be appreciated. Thank you in advance.
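One possible direction, sketched purely as an assumption rather than a confirmed fix: compute the value per insert in the database with valueComputed instead of a parse-time property, using MySQL's UNIX_TIMESTAMP(NOW(3)), which has millisecond precision (MySQL 5.6.4+), so two inserts in the same second no longer collide:

<changeSet id="2" author="Me">
  <insert tableName="cart">
    <column name="user_id" value="1"/>
    <!-- valueComputed passes the expression through to the database;
         NOW(3) has millisecond precision, so this yields epoch milliseconds -->
    <column name="cart_number" valueComputed="ROUND(UNIX_TIMESTAMP(NOW(3)) * 1000)"/>
  </insert>
  <insert tableName="cart">
    <column name="user_id" value="2"/>
    <column name="cart_number" valueComputed="ROUND(UNIX_TIMESTAMP(NOW(3)) * 1000)"/>
  </insert>
</changeSet>

Two inserts could in principle still land in the same millisecond, so this narrows rather than eliminates the collision window; an auto-increment column remains the only fully safe source of unique numbers.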

Liquibase modifyDataType fails on MySQL

I don't seem to be able to execute a type modification on a MySQL 5.7.10 instance (it works with H2, though).
Here are the changeset steps that are involved with the regarded field:
Creation:
<column name="last_modify_time" type="bigint">
<constraints nullable="false" />
</column>
Modification:
<modifyDataType tableName="USER" columnName="last_modify_time" newDataType="timestamp" />
The error message in MySQL is:
Invalid default value for 'last_modify_time' [Failed SQL: ALTER TABLE USER MODIFY last_modify_time timestamp]
Manually modifying the request to the following works:
ALTER TABLE USER MODIFY last_modify_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
I don't really understand why MySQL needs the default value. Maybe it's an edge case with version 5.7.10 (using the default configuration options).
In any case, Liquibase should be able to handle it.
I've tried adding/removing the default value prior to the modifyDataType, without success.
When you are modifying the type of a non-nullable column with Liquibase, you have to first make it nullable, then modify it, and then restore the constraint (a "safe modify"), as sketched below.
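A minimal sketch of that sequence, using the table and column names from the question (the changeSet id is made up; dropNotNullConstraint and addNotNullConstraint are standard Liquibase refactorings, but treat this as an untested outline):

<changeSet id="10a" author="author">
  <!-- make the column nullable so MySQL does not demand a default during the ALTER -->
  <dropNotNullConstraint tableName="USER" columnName="last_modify_time" columnDataType="bigint"/>
  <modifyDataType tableName="USER" columnName="last_modify_time" newDataType="timestamp"/>
  <!-- restore the constraint once the type change has gone through -->
  <addNotNullConstraint tableName="USER" columnName="last_modify_time" columnDataType="timestamp"/>
</changeSet>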
It turns out that an exactly identical modification on a field named create_time, placed right before it, didn't cause any problem. Swapping the order of the two changesets solved the problem.
<changeSet id="11" author="author">
  <comment>rename last_modify_time to last_modified_date and change type to timestamp</comment>
  <modifyDataType tableName="USER" columnName="last_modify_time" newDataType="timestamp"/>
  <renameColumn tableName="USER" oldColumnName="last_modify_time" newColumnName="last_modified_date" columnDataType="timestamp"/>
</changeSet>
<changeSet id="12" author="author">
  <comment>rename create_time to created_date and change type to timestamp</comment>
  <modifyDataType tableName="USER" columnName="create_time" newDataType="timestamp"/>
  <renameColumn tableName="USER" oldColumnName="create_time" newColumnName="created_date" columnDataType="timestamp"/>
</changeSet>
I still can't explain what happened, but I'm happy enough to get it working. If someone wants to reproduce the error, I'd be happy to help.

How to properly set up DataImportHandler for MySQL database with large number of records?

I have set up Solr's data import handler as instructed in the manual. Solr reads the records from a MySQL database. The database has a large number of records (billions are expected).
I have read that batch size does not work for MySQL because the JDBC driver does not support it. I have tried setting it to -1. In this case, Solr performs one select and gets all records from the DB and indexes them.
Now I have a problem, since a timeout occurred while indexing and caused it to stop. I see that Solr hasn't written any id value in the properties file after the exception occurred. I am not sure how to proceed with indexing the rest of the records.
Can anyone suggest how to set up Solr with MySQL for a proper data import?
Below is the data config I am currently using.
<dataConfig>
  <dataSource type="JdbcDataSource" name="ds-2" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/myowndb" batchSize="-1" />
  <document name="statuses">
    <entity name="status" query="select s.*, ti.id2, ti.value2 from tblTable1 s inner join tblTable2 ti on s.table2Id = ti.id;">
      <field column="id" name="id" />
      <field column="statusID" name="statusId" />
      <field column="type" name="type" />
      <field column="date" name="date" />
      <field name="id2" column="id2" />
      <field name="value2" column="value2" />
    </entity>
  </document>
</dataConfig>
EDIT:
Based on my tests today, it looks like batchSize is working. If batchSize is set to -1, it will make a single request to MySQL, retrieving all rows at once. If set to some value greater than 0, it will put every record in memory before processing.
The new question is: how do I set up the data import handler so it can index in batches? Not just perform a batched select from the database, but index each collected set before collecting the next one.
EDIT: A more specific question
A new question that came up from further reading: is it possible to mark a row in the database as processed? There are only two events available in DIH, onImportStart and onImportEnd.
My current line of thinking leads me to implement a custom EntityProcessor. If it were possible to know when a given row has been indexed, it would also be easy to set an isIndexed flag in the database for that row.
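One built-in alternative to marking rows yourself, sketched here as an assumption rather than a confirmed fix: DIH's delta import records the time of the last import in the properties file and lets the entity select only rows changed since then. This assumes the source table has a modified_at column (an invented name); ${dataimporter.last_index_time} and ${dataimporter.delta.id} are standard DIH variables:

<entity name="status"
        pk="id"
        query="select s.*, ti.id2, ti.value2 from tblTable1 s inner join tblTable2 ti on s.table2Id = ti.id"
        deltaQuery="select s.id from tblTable1 s where s.modified_at &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="select s.*, ti.id2, ti.value2 from tblTable1 s inner join tblTable2 ti on s.table2Id = ti.id where s.id = '${dataimporter.delta.id}'">
  <!-- same <field> mappings as in the config above -->
</entity>

Running the handler with command=delta-import then picks up from the last recorded import time, which would also sidestep re-reading every row after a timeout.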

Liquibase script for MySQL

I have a Liquibase XML script. When I run it on Postgres I don't face any problems, but when I run it on MySQL it gives an error when the structure is of the following type:
<insert tableName="user_table">
  <column name="id" valueComputed="(select max(id)+1 from user_table)"/>
  <column name="name" value="someName"/>
</insert>
When the above script is executed on MySQL, it gives the error:
You can't specify target table 'user_table' for update in FROM clause.
I found a solution to this by using an alias, like this:
<insert tableName="user_table">
  <column name="id" valueComputed="(select max(id)+1 from (select * from user_table) t)"/>
  <column name="name" value="someName"/>
</insert>
But there are thousands of entries like this. Is there any generic way of doing it so that I don't have to change the script in so many places? Thanks.
The easiest approach would be to just update the XML, either with a simple XML parser program or even a regex search and replace in your text editor.
Alternatively, you can override the standard Liquibase logic to look for that particular valueComputed pattern and replace it. There are a couple of points where you could make the change:
Override the liquibase.parser.core.xml.XMLChangeLogSAXParser class, probably the parseToNode() method, to search through the generated ParsedNode for valueComputed nodes.
Override the liquibase.change.core.InsertDataChange class's generateStatements() or addColumn() method to replace valueComputed fields.
See http://liquibase.org/extensions for more info on writing extensions.

Create a cover index with "Include" columns using an NHibernate mapping file

I need to create a non-clustered index with INCLUDE columns (see the <create> tag below). Here's the mapping file:
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="MyApp" assembly="MyApp">
  <class name="User" table="user">
    <id name="Id" type="Guid" column="user_id">
      <generator class="guid.comb"/>
    </id>
    <property name="Name" column="name" not-null="true" />
    <property name="Phone" column="phone" />
    <property name="Zipcode" column="zipcode" />
  </class>
  <database-object>
    <create>
      CREATE NONCLUSTERED INDEX [IX_user_zipcode_id]
      ON User (Zipcode)
      INCLUDE (Name, Phone)
    </create>
    <drop>
      DROP INDEX IX_user_zipcode_id
    </drop>
    <dialect-scope name="NHibernate.Dialect.MsSql2000Dialect"/>
    <dialect-scope name="NHibernate.Dialect.MsSql2005Dialect"/>
    <dialect-scope name="NHibernate.Dialect.MsSql2008Dialect"/>
  </database-object>
</hibernate-mapping>
The problem I'm having is that the index is not created at all; nothing appears to happen. This is my first time using <database-object>, so I may be doing something wrong here.
I'm guessing INCLUDE is SQL Server-specific, which is why the dialect-scope entries are there. I know how to create a single- and multi-column index, but that is not what I want. I want a single-column index on zipcode, with all the other columns in the User table as part of the INCLUDE clause. Is there any way to create this type of index using the mapping file, or some other way?
This is probably a long shot, but it would be nice not to have to spell out every column except the indexed one in the INCLUDE part of the query, and instead just let NHibernate add to the index any new columns that are added as properties in the mapping file.
So part of the problem was indeed my lack of understanding of the <database-object> tag, due mostly to poor documentation. From what I've gathered, the <create> and <drop> tags are only used when the schema is built with SchemaExport, like so:
Dim schemaExport As SchemaExport = New SchemaExport(NhibernateConfiguration)
schemaExport.Execute(False, True, False)
My app doesn't create the schema using that class. Instead it uses SchemaUpdate, so the schema isn't blown away every time (the database may already exist on the user's machine):
Dim schemaUpdate As SchemaUpdate = New SchemaUpdate(NhibernateConfiguration)
schemaUpdate.Execute(False, True)
That was the problem. The next logical question is how to execute SQL using SchemaUpdate. The answer... you can't. See this post: https://forum.hibernate.org/viewtopic.php?f=6&t=969584&view=next
Alas, I am left to use raw SQL. Maybe some day they will add an <update> tag.