How to ignore case using breeze FilterQueryOp - json

I am using breeze to query data from the server and seem to be running into problems.
Is there a way to filter this data while ignoring case, or to make the value from the field lower case?
Example:
var term = "john";
query = query.where("Name", "contains", term);
The problem I am having is that if the 'Name' field contains John with a capital 'J', it returns false, but it returns true if I change term to 'John'.
I know this is a casing issue, but how can I make breeze ignore the casing without using jquery.each?
Thanks. Any help will be greatly appreciated.

In my opinion there is a simpler approach to this.
By default OData is case sensitive, but nonetheless provides functions to transform a string to lower or upper case. So to fire a case-insensitive query to the server, simply modify your code as follows:
var term = "john";
query = query.where("tolower(Name)", breeze.FilterQueryOp.Contains, term.toLowerCase());
Thus OData is told to transform the subject to lower case before comparing it to your search string, which has been converted to lower case before sending it to the server.
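For reference, the filter clause Breeze then sends in the request URL should look roughly like one of the following, depending on which OData version your server speaks (both lines are assumptions about your setup, shown only to illustrate what tolower does on the wire):
$filter=substringof('john',tolower(Name)) eq true     (OData v3)
$filter=contains(tolower(Name),'john')                (OData v4)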

Ok, there are two parts to this. Breeze supports a LocalQueryComparisonOptions object that is used for all localQueries.
var lqco = new breeze.LocalQueryComparisonOptions({
    name: "custom comparison options",
    isCaseSensitive: false,
    usesSql92CompliantStringComparison: true
});
// either apply it globally
lqco.setAsDefault();
// or to a specific MetadataStore
var ms = new breeze.MetadataStore({ localQueryComparisonOptions: lqco });
var em = new breeze.EntityManager({ metadataStore: ms });
You should set this once at the beginning of your application. In this example, all localQueries performed after this point will be case insensitive.
The problem is that unless your database is ALSO set to "match" these settings (how you do this differs by database vendor), remote queries against the server will return different results than the same query applied locally.
Basically, Breeze cannot set the "server" side implementation, so the recommendation is usually to create a LocalQueryComparisonOptions object that matches your server-side database settings.
Hope this makes sense.

If anyone runs into this problem on an Oracle DB: I added the code above from Jay Traband and then modified a logon trigger to alter session variables for the DB users.
Set the following values:
ALTER SESSION SET nls_comp = linguistic;
ALTER SESSION SET nls_sort = binary_ci;
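For reference, a logon trigger that applies these settings could look roughly like the following (the trigger name is a placeholder of mine, and you may prefer to scope it to your application schema rather than the whole database):
-- placeholder name; consider AFTER LOGON ON <schema>.SCHEMA for a single application user
CREATE OR REPLACE TRIGGER ci_session_logon_trg
AFTER LOGON ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET nls_comp = linguistic';
  EXECUTE IMMEDIATE 'ALTER SESSION SET nls_sort = binary_ci';
END;
/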
Hope this helps someone out. I love Breeze!!!

Does Statement.RETURN_GENERATED_KEYS generate any extra round trip to fetch the newly created identifier?

JDBC allows us to fetch the value of a primary key that is automatically generated by the database (e.g. IDENTITY, AUTO_INCREMENT) using the following syntax:
PreparedStatement ps = connection.prepareStatement(
    "INSERT INTO post (title) VALUES (?)",
    Statement.RETURN_GENERATED_KEYS
);
ps.setString(1, "Post title"); // placeholder value
ps.executeUpdate();
ResultSet resultSet = ps.getGeneratedKeys();
while (resultSet.next()) {
    LOGGER.info("Generated identifier: {}", resultSet.getLong(1));
}
I'm interested in whether the Oracle, SQL Server, PostgreSQL, or MySQL driver uses a separate round trip to fetch the identifier, or whether there is a single round trip which executes the insert and fetches the ResultSet automatically.
It depends on the database and driver.
Although you didn't ask for it, I will answer for Firebird ;). In Firebird/Jaybird the retrieval itself doesn't require extra roundtrips, but using Statement.RETURN_GENERATED_KEYS or the integer array version will require three extra roundtrips (prepare, execute, fetch) to determine the columns to request (I still need to build a form of caching for it). Using the version with a String array will not require extra roundtrips (I would love to have RETURNING * like in PostgreSQL...).
In PostgreSQL with PgJDBC there is no extra round-trip to fetch generated keys.
It sends a Parse/Describe/Bind/Execute message series followed by a Sync, then reads the results including the returned result-set. There's only one client/server round-trip required because the protocol pipelines requests.
However, batches that could otherwise be streamed to the server may sometimes be broken up into smaller chunks, or run one by one, if generated keys are requested. To avoid this, use the String[] array form where you name the columns you want returned, and name only columns of fixed-width data types like integer. This only matters for batches, and it's due to a design problem in PgJDBC.
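For example, using the same post table as above and assuming its key column is named id (as in the later RETURNING example), the column-name form looks like this:
PreparedStatement ps = connection.prepareStatement(
    "INSERT INTO post (title) VALUES (?)",
    new String[] { "id" } // name only fixed-width columns to keep batches streamable
);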
(I posted a patch to add batch pipelining support in libpq that doesn't have that limitation, it'll do one client/server round trip for arbitrary sized batches with arbitrary-sized results, including returning keys.)
MySQL receives the generated key(s) automatically in the OK packet of the protocol in response to executing a statement. There is no communication overhead when requesting generated keys.
In my opinion, even for such a trivial thing, a single approach working in all database systems will fail.
The only pragmatic solution is (in analogy to Hibernate) to find the best working solution for each target RDBMS, and call it a dialect of your one-for-all solution :)
Here is the information for Oracle.
I'm using a sequence to generate the key; the same behavior is observed for an IDENTITY column.
create table auto_pk
(id number,
pad varchar2(100));
This works and uses only one roundtrip:
def stmt = con.prepareStatement("insert into auto_pk values(auto_pk_seq.nextval, 'XXX')",
        Statement.RETURN_GENERATED_KEYS)
def rowCount = stmt.executeUpdate()
def generatedKeys = stmt.getGeneratedKeys()
if (null != generatedKeys && generatedKeys.next()) {
    def id = generatedKeys.getString(1)
}
But unfortunately you get a ROWID as a result, not the generated key.
How is it implemented internally? You can see it if you activate a 10046 trace (BTW, this is also the best way to see how many roundtrips were performed):
PARSING IN CURSOR
insert into auto_pk values(auto_pk_seq.nextval, 'XXX')
RETURNING ROWID INTO :1
END OF STMT
So you see the JDBC 3.0 standard is implemented, but you don't get the requested result. Under the covers, the RETURNING clause is used.
The right approach to get the generated key in Oracle is therefore:
// registerReturnParameter and getReturnResultSet are Oracle JDBC extensions (OraclePreparedStatement)
def stmt = con.prepareStatement("insert into auto_pk values(auto_pk_seq.nextval, 'XXX') returning id into ?")
stmt.registerReturnParameter(1, Types.INTEGER)
def rowCount = stmt.executeUpdate()
def generatedKeys = stmt.getReturnResultSet()
if (null != generatedKeys && generatedKeys.next()) {
    def id = generatedKeys.getLong(1)
}
Note:
Oracle Release 12.1.0.2.0
To activate the 10046 trace use
con.createStatement().execute "alter session set events '10046 trace name context forever, level 12'"
con.createStatement().execute "ALTER SESSION SET tracefile_identifier = my_identifier"
Depending on frameworks or libraries to do things that are perfectly possible in plain SQL is bad design IMHO, especially when working against a defined DBMS. (The Statement.RETURN_GENERATED_KEYS is relatively innocuous, although it apparently does raise a question for you, but where frameworks are built on separate entities and doing all sorts of joins and filters in code or have custom-built transaction isolation logic things get inefficient and messy very quickly.)
Why not simply:
PreparedStatement ps = connection.prepareStatement(
    "INSERT INTO post (title) VALUES (?) RETURNING id");
Single trip, defined result.
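To complete the picture, a sketch of how that statement would be executed (my addition, not part of the original answer): with a RETURNING clause you read the value from executeQuery() rather than getGeneratedKeys().
ps.setString(1, "Post title"); // placeholder value
try (ResultSet rs = ps.executeQuery()) {
    if (rs.next()) {
        long id = rs.getLong(1); // the id produced by the RETURNING clause
    }
}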

Why does MySQL permit non-exact matches in SELECT queries?

Here's the story. I'm doing some security testing (using zaproxy) of a Laravel (PHP framework) application running with a MySQL database as the primary store for data.
Zaproxy is reporting a possible SQL injection for a POST request URL with the following payload:
id[]=3-2&enabled[]=on
Basically, it's an AJAX request to turn on/turn off a particular feature in a list. Zaproxy is fuzzing the request: where the id value is 3-2, there should be an integer - the id of the item to update.
The problem is that this request is working. It should fail, but the code is actually updating the item where id = 3.
I'm doing things the way I'm supposed to: the model is retrieved using Eloquent's Model::find($id) method, passing in the id value from the request (which, after a bit of investigation, was determined to be the string "3-2"). AFAIK, the Eloquent library should be executing the query by binding the ID value to a parameter.
I tried executing the query using Laravel's DB class with the following code:
$result = DB::select("SELECT * FROM table WHERE id=?;", array("3-2"));
and got the row for id = 3.
Then I tried executing the following query against my MySQL database:
SELECT * FROM table WHERE id='3-2';
and it did retrieve the row where id = 3. I also tried it with another value: "3abc". It looks like any value prefixed with a number will retrieve a row.
So ultimately, this appears to be a problem with MySQL. As far as I'm concerned, if I ask for a row where id = '3-2' and there is no row with that exact ID value, then I want it to return an empty set of results.
I have two questions:
Is there a way to change this behaviour? It appears to be at the level of the database server, so is there anything in the database server configuration to prevent this kind of thing?
This looks like a serious security issue to me. Zaproxy is able to inject some arbitrary value and make changes to my database. Admittedly, this is a fairly minor issue for my application, and the (probably) only values that would work will be values prefixed with a number, but still...
SELECT * FROM table WHERE id = ? AND ? REGEXP '^[0-9]+$';
This will be faster than what I suggested in the comments above.
Edit: Ah, I see you can't change the query. Then it is confirmed: you must sanitize the inputs in code. Another very poor and dirty option, if you are in an odd situation where you can't change the query but can change the database, is to change the id field to [VAR]CHAR.
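A minimal sketch of the sanitize-in-code option, assuming integer IDs and a Laravel controller (Item is a hypothetical model name):
// reject anything that is not a plain unsigned integer string, e.g. "3-2" or "3abc"
if (!ctype_digit((string) $id)) {
    abort(422);
}
$item = Item::find((int) $id);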
I believe this is due to MySQL automatically converting your strings into numbers when comparing against a numeric data type.
https://dev.mysql.com/doc/refman/5.1/en/type-conversion.html
mysql> SELECT 1 > '6x';
-> 0
mysql> SELECT 7 > '6x';
-> 1
mysql> SELECT 0 > 'x6';
-> 0
mysql> SELECT 0 = 'x6';
-> 1
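Given those coercion rules, one way to force an exact match at the SQL level is to compare as strings, for example (note the CAST prevents MySQL from using an index on id):
SELECT * FROM table WHERE CAST(id AS CHAR) = '3-2';
This returns an empty set, because '3' and '3-2' are no longer compared as numbers.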
You want to really just put armor around MySQL to prevent such a string from being compared. Maybe switch to a different SQL server.
Without re-writing a bunch of code, in all honesty the correct answer is:
This is a non-issue
Zaproxy even states that it's possibly a SQL injection attack, meaning that it does not know! It never said "umm yeah we deleted tables by passing x-y-and-z to your query"
// if this is legal and returns results
$result = DB::select("SELECT * FROM table WHERE id=?;", array("3"));
// then why is it an issue for this
$result = DB::select("SELECT * FROM table WHERE id=?;", array("3-2"));
// to be interpreted as
$result = DB::select("SELECT * FROM table WHERE id=?;", array("3"));
You are parameterizing your queries, so Zaproxy is off its rocker.
Here's what I wound up doing:
First, I suspect that my expectations were a little unreasonable. I was expecting that if I used parameterized queries, I wouldn't need to sanitize my inputs. This is clearly not the case. While parameterized queries eliminate some of the most pernicious SQL injection attacks, this example shows that there is still a need to examine your inputs and make sure you're getting the right stuff from the user.
So, with that said... I decided to write some code to make checking ID values easier. I added the following trait to my application:
trait IDValidationTrait
{
    /**
     * Check the ID value to see if it's valid
     *
     * This is an abstract function because it will be defined differently
     * for different models. Some models have IDs which are strings,
     * others have integer IDs
     */
    abstract public static function isValidID($id);

    /**
     * Check the ID value & fail (throw an exception) if it is not valid
     */
    public static function validIDOrFail($id)
    {
        ...
    }

    /**
     * Find a model only if the ID matches EXACTLY
     */
    public static function findExactID($id)
    {
        ...
    }

    /**
     * Find a model only if the ID matches EXACTLY or throw an exception
     */
    public static function findExactIDOrFail($id)
    {
        ...
    }
}
Thus, whenever I would normally use the find() method on my model class to retrieve a model, instead I use either findExactID() or findExactIDOrFail(), depending on how I want to handle the error.
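For concreteness, a hypothetical implementation for a model with plain integer IDs could look something like this (the validation rule and exception type are my own choices, not part of the original trait):
public static function isValidID($id)
{
    // accept only integers or digit-only strings; "3-2" and "3abc" are rejected
    return is_int($id) || ctype_digit((string) $id);
}

public static function findExactID($id)
{
    return static::isValidID($id) ? static::find($id) : null;
}

public static function findExactIDOrFail($id)
{
    if (!static::isValidID($id)) {
        throw new \Illuminate\Database\Eloquent\ModelNotFoundException();
    }
    return static::findOrFail($id);
}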
Thank you to everyone who commented - you helped me to focus my thinking and to understand better what was going on.

EntityFramework on MySql changing connection string does not change results data

I'm working with EntityFramework 5.0 and MySql. I have generated the model from the database, and my application now has to connect to multiple databases with the same structured data.
So I have to dynamically change the connection string based on some info.
I tried to change the database name both in the config section of the connection string and with EntityConnectionStringBuilder, but I had the same result: my new connection is stored correctly, but the data returned is from the first database.
From WebConfig:
add name="dbIncassiEntities" connectionString="metadata=res:///DAL.Modelincassi.csdl|res:///DAL.Modelincassi.ssdl|res://*/DAL.Modelincassi.msl;provider=Devart.Data.MySql;provider connection string="user id=root ... database=dbname2"" providerName="System.Data.EntityClient" />
From code:
EntityConnectionStringBuilder entityBuilder = new EntityConnectionStringBuilder();
entityBuilder.Provider = providerName;
entityBuilder.ProviderConnectionString = "user id=...database=dbname2";
entityBuilder.Metadata = @"res://*/DAL.Modelincassi.csdl|res://*/DAL.Modelincassi.ssdl|res://*/DAL.Modelincassi.msl";
var context = new dbIncassiEntities(entityBuilder.ToString());
My constructor:
public dbIncassiEntities(string conn)
: base(conn)
{
}
What am I missing?
UPDATE
I can see that when calling a query directly through SqlQuery, the results returned are correct,
while using the generated entities I retrieve the wrong data.
var test = context.Database.SqlQuery<string>(
"SELECT cognomenome FROM addetto limit 0,1").ToList();
But calling...
var oAddetto = from c in context.addettoes select c;
...still returns data from the first database.
So my problem is only in the model itself, and by manually changing the generated schema
<EntitySet Name="addetto" EntityType="dbIncassiModel.Store.addetto" store:Type="Tables" Schema="dbname2" />
...I'll get the right information.
My question now is: how can I change this information in code?
Any help is really appreciated!!
Thanks, David
Ok, I've found a workaround for now.
I simply cleared the schema name in the designer, and now I can call the generated entities successfully. Hope this can help anyone else.
David
While I could not remove the Schema in the designer, I removed it directly in the .edmx file. Do a full text search for Schema="YourSchema" in an XML editor of your choice and remove the entries. After that, changing the connection string is enough.
Downside is, the Visual Studio designer and mapping explorer won't work properly anymore.
This seems to be more of a dotConnect issue rather than MySQL, since the problem also exists for the Oracle adapter:
http://forums.devart.com/viewtopic.php?t=17427

Why does Salesforce prevent me from creating a Push Topic with a query that contains relationships?

When I execute this code in the developer console
PushTopic pushTopic = new PushTopic();
pushTopic.ApiVersion = 23.0;
pushTopic.Name = 'Test';
pushTopic.Description = 'test';
pushtopic.Query = 'SELECT Id, Account.Name FROM Case';
insert pushTopic;
System.debug('Created new PushTopic: '+ pushTopic.Id);
I receive this message:
FATAL ERROR System.DmlException: Insert failed. First exception on row
0; first error: INVALID_FIELD, relationships are not supported:
[QUERY]
The same query runs fine on the Query Editor, but when I assign it to a Push Topic I get the INVALID_FIELD exception.
If the bottom line is what the exception message says, that relationships are just not supported by Push Topic objects, how do I create a Push Topic object that will return the data I'm looking for?
Why
Salesforce prevents this because it would require them to join tables, and joins in Salesforce's database are expensive due to multi-tenancy. Usually when they add a new feature they will not support joins, as that requires more optimization of the feature.
Push Topics are still quite new to the system and need to be real time; anything that would slow them down, I'd say, needs to be trimmed.
I'd suggest you look more closely at your requirement and see if there is something else that will work for you.
Workaround
A potential workaround is to add a formula field to the Case object with the data you need and include that in the query instead. This may not work, as it will still require a join.
A final option may be to use a workflow rule or trigger to copy the account name into a custom field on the Case object; this way the data is local, so it doesn't require a join...
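As a rough sketch of the trigger approach (Account_Name__c is a hypothetical custom text field on Case, not something from the original question):
trigger CopyAccountNameToCase on Case (before insert, before update) {
    Set<Id> accountIds = new Set<Id>();
    for (Case c : Trigger.new) {
        if (c.AccountId != null) accountIds.add(c.AccountId);
    }
    Map<Id, Account> accounts = new Map<Id, Account>(
        [SELECT Id, Name FROM Account WHERE Id IN :accountIds]);
    for (Case c : Trigger.new) {
        if (c.AccountId != null && accounts.containsKey(c.AccountId)) {
            // Account_Name__c is a hypothetical custom field
            c.Account_Name__c = accounts.get(c.AccountId).Name;
        }
    }
}
The PushTopic query can then select Account_Name__c directly, with no relationship traversal.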
PushTopics support a very small subset of SOQL queries, see more here:
https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/unsupported_soql_statements.htm
However this should work:
PushTopic casePushTopic = new PushTopic();
casePushTopic.ApiVersion = 23.0;
casePushTopic.Name = 'CaseTopic';
casePushTopic.Description = 'test';
casePushTopic.Query = 'SELECT Id, AccountId FROM Case';
insert casePushTopic;
PushTopic accountPushTopic = new PushTopic();
accountPushTopic.ApiVersion = 23.0;
accountPushTopic.Name = 'AccountTopic';
accountPushTopic.Description = 'test';
accountPushTopic.Query = 'SELECT Id, Name FROM Account';
insert accountPushTopic;
It really depends on your use case though; if it is for replicating into an RDBMS this should be enough, and you can use a join to get the full data.

Erlang mysql example

Just wondering if anyone could give a working example of using the erlang-mysql module (http://code.google.com/p/erlang-mysql-driver/).
I am new to erlang and I am trying to replace some old scripts with a few erlang batch processes. I am able to connect to the DB and even complete a query, but I am not sure how I use the results. Here is what I currently have:
-include("../include/mysql.hrl").
...
mysql:start_link(p1, "IP-ADDRESS", "erlang", "PASSWORD", "DATABASE"),
Result1 = mysql:fetch(p1, <<"SELECT * FROM users">>),
io:format("Result1: ~p~n", [Result1]),
...
I also have a prepared statement that I am using to get just one row (if it exists), and it would be helpful to know how to access the results of that as well.
This is described in the source code of mysql.erl:
Your result will be {data, MySQLRes}.
FieldInfo = mysql:get_result_field_info(MysqlRes), where FieldInfo is a list of {Table, Field, Length, Name} tuples.
AllRows = mysql:get_result_rows(MysqlRes), where AllRows is a list of lists, each representing a row.
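Putting that together, a minimal sketch (assuming the p1 connection from the question):
case mysql:fetch(p1, <<"SELECT * FROM users">>) of
    {data, MysqlRes} ->
        Fields = mysql:get_result_field_info(MysqlRes),
        Rows = mysql:get_result_rows(MysqlRes),
        io:format("~p rows, fields: ~p~n", [length(Rows), Fields]);
    {error, Reason} ->
        %% Reason is the error result term; print it for debugging
        io:format("query failed: ~p~n", [Reason])
end.
For a prepared statement that returns at most one row, Rows will simply be [] or a single-element list of column values, which you can pattern match directly.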
You should check the number of rows returned before using the result, e.g.:
Rows = mysql:get_result_rows(MysqlRes),
RowLen = erlang:length(Rows),
if
    RowLen > 0 ->
        {success};
    true ->
        {failed, "Row is null"}
end.
After trying to use the ODBC module that comes with Erlang/OTP, and running into problems, I recommend the mysql/otp driver. I replaced ODBC with it in just a few hours and it works fine.
They have good documentation so I will not add examples here.