Quick project background: I am porting some Telerik ASP.NET grids/back-end data sources to the Kendo UI for Angular framework.
Problem: I'm trying to implement grid filtering (it has to be custom since I manually bind the grid to handle complex paging/data retrieval).
Is there any method/function to retrieve a filter string instead of the object model containing filters and operators? (I can obviously flatten this myself, but it's a bit of code.) Here is a code snippet:
if (filter !== undefined) {
    // Step through column filters
    filter.filters.forEach((element: any) => {
        // console.log(element);
        // Step through individual filter for column
        element.filters.forEach((curFilter: any) => {
            console.log(curFilter.field);
            console.log(curFilter.operator);
            console.log(curFilter.value);
        });
    });
}
I'm still using the Dynamic LINQ queries that the .NET grid used for its filter expressions. Will the filter expressions that come out of grid.filters work correctly with a Dynamic LINQ query on the back end, or will I need to convert them?
I feel like I'm missing something here so I thought I'd start a conversation.
Thanks!
-Brian
Based on your comment, I believe what you are attempting to do is get the filters, sorts, and paging serialized just like they are sent over the network for MVC.
For Kendo to do this you need to set your data source type and include the right file in your page.
The file is kendo.aspnetmvc.js or kendo.aspnetmvc.min.js,
and your data source initialization should specify:
dataSource: {
    type: 'aspnetmvc-ajax',
    transport: { }
}
or do the same thing they are doing yourself.
The CDN URL for this script is https://kendo.cdn.telerik.com/2018.1.221/js/kendo.aspnetmvc.min.js
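For example, a hand-built data source might look something like this (a sketch on my part, not the poster's code; the read URL is a placeholder and the server* flags assume the server does the paging/filtering/sorting):
var dataSource = new kendo.data.DataSource({
    type: 'aspnetmvc-ajax',           // registered by kendo.aspnetmvc(.min).js
    transport: {
        read: { url: '/Grid/Read' }   // hypothetical endpoint
    },
    pageSize: 20,
    serverPaging: true,
    serverFiltering: true,
    serverSorting: true
});
// With this type set, the current filter/sort/page state is sent as the same
// flattened strings the MVC wrappers use, e.g. filter=Name~contains~'ale'.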
I know they use functions like the following to do what you are talking about:
function serializeFilter(filter, encode) {
    if (filter.filters) {
        return $.map(filter.filters, function (f) {
            var hasChildren = f.filters && f.filters.length > 1,
                result = serializeFilter(f, encode);
            if (result && hasChildren) {
                result = '(' + result + ')';
            }
            return result;
        }).join('~' + filter.logic + '~');
    }
    if (filter.field) {
        return filter.field + '~' + filter.operator + '~' + encodeFilterValue(filter.value, encode);
    } else {
        return undefined;
    }
}
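For example (my own sketch, not the poster's data), feeding a simple filter descriptor through that helper yields the same tilde-delimited string the MVC transport sends; encodeFilterValue is the companion helper from the same Kendo source that quotes/encodes the value:
var filter = {
    logic: 'and',
    filters: [
        { field: 'Name', operator: 'contains', value: 'ale' },
        { field: 'Abv', operator: 'gte', value: 5 }
    ]
};
// Roughly: "Name~contains~'ale'~and~Abv~gte~5"
// (the exact value encoding depends on encodeFilterValue and the encode flag)
console.log(serializeFilter(filter, false));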
I'm trying to create a custom Data Studio connector that connects to the WooCommerce REST API. I want to differentiate between orders placed by a registered user and orders placed by a guest user.
The WooCommerce API gives me the customer_id field; if customer_id = 0, the order was placed by a guest, otherwise the user is registered.
I followed the Google Data Studio tutorial: https://developers.google.com/datastudio/connector/get-started
And this is my responseToRows function:
/**
 * Parse the response data given the required fields of the config.
 * @return the parsed data
 */
function responseToRows(requestedFields, response) {
    // Transform parsed data and filter for requested fields
    return response.map(function(dailyDownload) {
        var row = [];
        requestedFields.asArray().forEach(function (field) {
            switch (field.getId()) {
                case 'id_order':
                    return row.push(dailyDownload.id);
                case 'total':
                    return row.push(dailyDownload.total);
                case 'date_created':
                    return row.push(dailyDownload.date_created);
                case 'registered_user':
                    if (parseInt(dailyDownload.customer_id) > 0)
                        return row.push(dailyDownload.customer_id);
                case 'guest_user':
                    if (parseInt(dailyDownload.customer_id) == 0)
                        return row.push(dailyDownload.customer_id);
                default:
                    return row.push('');
            }
        });
        return { values: row };
    });
}
The function is similar to the one given in the tutorial, and the other fields work fine. I'm just returning when customer_id is different from 0. It seems to work, but I get null values when the condition doesn't hold.
I would like to remove the null values, so that one column has only the orders where customer_id was 0 and the other column has the complement.
Thanks for the help
I would try something like this after filling the row array:
// Walk the row backwards so removing an entry doesn't shift the ones
// still to be checked
for (var i = row.length - 1; i >= 0; i--) {
    if (row[i] == null) {
        // Remove the row entry that is still null
        row.splice(i, 1);
    }
}
And if you don't want to remove the entries from the original array, just create a new array and use newArray.push(element) to copy the entries you want to keep into it instead.
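For instance, a quick sketch of that second approach (my own example, assuming the values to drop are null):
var keptValues = [];
row.forEach(function (element) {
    // Keep only the non-null values; the original row array is untouched
    if (element != null) {
        keptValues.push(element);
    }
});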
This is my first cut:
const planLimits = {plan1: {condition1: 50, ...}}
function initialisePlanLimits(planLimits) {
    const limits = new Map();
    Object.keys(planLimits).map((planId) => (
        const limitMap = new Map(Object.entries(planLimits[planId]));
        limits.set(planId, limitMap);
    ));
    return limits;
}
The linter flags this error: Expected to return a value in this function (array-callback-return)
So I changed to this version:
function initialisePlanLimits(planLimits) {
    const limits = new Map();
    Object.keys(planLimits).map((planId) => (
        limits.set(planId, new Map(Object.entries(planLimits[planId])))
    ));
    return limits;
}
It throws another error: Unexpected parentheses around single function argument having a body with no curly braces (arrow-parens)
My questions:
1) I reckon I can fix my first version by sticking a return null inside the curly brackets. But is there a better, more elegant way? A bogus return statement does not make sense in this context.
2) Why does the second version fail? Isn't it equivalent to the first version?
If I use forEach instead of map, it will not cause the array-callback-return lint error:
Object.keys(planLimits).forEach((planId) => {
    const limitMap = new Map(Object.entries(planLimits[planId]));
    limits.set(planId, limitMap);
});
Well, the accepted answer advocates using forEach, which is right. Please read the explanation below from the ESLint documentation:
Array has several methods for filtering, mapping, and folding. If we forget to write return statement in a callback of those, it's probably a mistake. If you don't want to use a return or don't need the returned results, consider using .forEach instead.
TLDR: ESLint and Function Return Values
This issue is caused by not returning a value when using map(); see what the docs say about the expected results...
The map() method creates a new array populated with the results of calling a provided function on every element in the calling array. (Source: MDN WebDocs.)
Demonstration of Issue in JavaScript
With this JS code sample, which builds a group of list elements...
var newarray = [];
array.map( (item, index) => {
    newarray.push('<li>' + item + '</li>');
});
I get this error...
Expected to return a value in arrow function (array-callback-return)
The error goes away if I add a single return to the above function, like so:
var newarray = array.map( (item, index) => {
    return '<li>' + item + '</li>';
});
`map()` - So why should I use it?
You can clearly see elsewhere on the MDN docs, too, that what is returned is "A new array with each element being the result of the [return value of the] callback function." So, if you are using map(), it's also a very good idea to use a return value!
map() is a powerful tool. Don't throw that tool away.
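Applied to the initialisePlanLimits function from the question, one way to satisfy the rule is to actually use map()'s return value (a sketch on my part, not the original poster's code):
function initialisePlanLimits(planLimits) {
    // Each callback returns a [planId, limitMap] pair, so map()'s result
    // can feed the Map constructor directly and nothing is discarded.
    return new Map(
        Object.keys(planLimits).map((planId) => [
            planId,
            new Map(Object.entries(planLimits[planId])),
        ])
    );
}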
I am getting an error when trying to do a DISTINCT reduce that I got from here. I have reproduced this error on the beer-sample bucket, so this should be easy to reproduce. I have not seen any errors in the mapreduce_errors.txt file, or anything that would lead me anywhere in the others. (If you would like me to search or post snippets of other files, please ask).
Running Couchbase Enterprise 4 beta on Windows 2008 R2 (this also happened on the 3.0.1 Community Edition as well).
Here is my map function (using the beer-sample bucket that ships directly with Couchbase).
function(doc, meta) {
    switch(doc.type) {
        case "brewery":
            emit(meta.id);
            break;
    }
}
Here is my reduce function:
function(keys, values, rereduce) {
    return keys.filter(function (e, i, arr) {
        return arr.lastIndexOf(e) === i;
    });
}
This is the error:
reason: error (Reducer: )
Also an imgur of the view page if it helps: http://i.imgur.com/KyLutMc.png
The problem lies within your custom reduce function: you're not handling the case when it's being called as part of a re-reduce.
As per Couchbase documentation:
The base format of the reduce() function is as follows:
function(key, values, rereduce) {
    ...
    return retval;
}
The reduce function is supplied three arguments:
key: The key is the unique key derived from the map() function and the group_level parameter.
values: The values argument is an array of all of the values that match a particular key. For example, if the same key is output three times, data will be an array of three items, with each item containing the value output by the emit() function.
rereduce: The rereduce indicates whether the function is being called as part of a re-reduce, that is, the reduce function being called again to further reduce the input data.
When rereduce is false:
The supplied key argument will be an array where the first argument is the key as emitted by the map function, and the id is the document ID that generated the key.
The values is an array of values where each element of the array matches the corresponding element within the array of keys.
When rereduce is true:
key will be null.
values will be an array of values as returned by a previous reduce() function. The function should return the reduced version of the information by calling the return() function. The format of the return value should match the format required for the specified key.
Bold formatting is mine, and the highlighted words are quite important: you should consider that sometimes, you'll receive the keys argument with a value of null.
According to the docs, you should handle the case when rereduce is true within your reduce() function, and you should know that in this case, keys will be null. In the case of your reduce() function, you could do something like this:
function(keys, values, rereduce) {
    if (rereduce) {
        var result = [];
        for (var i = 0; i < values.length; i++) {
            var distinct = values[i];
            for (var j = 0; j < distinct.length; j++) {
                result.push(distinct[j]);
            }
        }
        return result.filter(function (e, i, arr) {
            return arr.lastIndexOf(e) === i;
        });
    }
    return keys.filter(function (e, i, arr) {
        return arr.lastIndexOf(e) === i;
    });
}
Here, I'm first handling the re-reduce phase: I flatten the array of arrays received in the values argument and then remove any duplicates that might have appeared after the merge.
Then comes your original code, which returns the keys argument array without duplicates.
To test that this reduce() function actually works, I've used the following map() function:
function(doc, meta) {
    switch(doc.type) {
        case "brewery":
            emit(meta.id, null);
            emit(meta.id, null);
            break;
    }
}
This intentionally generates duplicates, which then are removed by the reduce() function.
While this reduce works as a development view, it does not work as a production view. The dataset is probably too large, so you have to implement the rereduce. This documentation should help: http://docs.couchbase.com/admin/admin/Views/views-writing.html#reduce-functions
I have a domain class in which one of the fields must hold a date no earlier than the day the instance is created.
class myClass {
    Date startDate
    String iAmGonnaChangeThisInSeveralDays

    static constraints = {
        iAmGonnaChangeThisInSeveralDays(nullable:true)
        startDate(validator:{
            def now = new Date()
            def roundedDay = DateUtils.round(now, Calendar.DATE)
            def checkAgainst
            if(roundedDay>now){
                Calendar cal = Calendar.getInstance();
                cal.setTime(roundedDay);
                cal.add(Calendar.DAY_OF_YEAR, -1); // <--
                checkAgainst = cal.getTime();
            }
            else checkAgainst = roundedDay
            return (it >= checkAgainst)
        })
    }
}
So several days later, when I change only the string and call save, the save fails because the validator rechecks the date, which is now in the past. Can I set the validator to fire only on create, or is there some way to detect whether we are creating or editing/updating?
@Rob H
I am not entirely sure how to use your answer. I have the following code causing this error:
myInstance.iAmGonnaChangeThisInSeveralDays = "nachos"
myInstance.save()
if (myInstance.hasErrors()) {
    println "This keeps happening because of the stupid date problem"
}
You can check if the id is set as an indicator of whether it's a new non-persistent instance or an existing persistent instance:
startDate(validator:{ date, obj ->
    if (obj.id) {
        // don't check existing instances
        return
    }
    def now = new Date()
    ...
})
One option might be to specify which properties you want to be validated. From the documentation:
The validate method accepts an optional List argument which may contain the names of the properties that should be validated. When a List is passed to the validate method, only the properties defined in the List will be validated.
Example:
// when saving for the first time:
myInstance.startDate = new Date()
if (myInstance.validate() && myInstance.save()) { ... }

// when updating later
myInstance.iAmGonnaChangeThisInSeveralDays = 'New Value'
myInstance.validate(['iAmGonnaChangeThisInSeveralDays'])
if (myInstance.hasErrors() || !myInstance.save(validate: false)) {
    // handle errors
} else {
    // handle success
}
This feels a bit hacky, since you're bypassing some built-in Grails goodness. You'll want to be cautious that you aren't bypassing any necessary validation on the domain that would normally happen if you were to just call save(). I'd be interested in seeing others' solutions if there are more elegant ones.
Note: I really don't recommend using save(validate: false) if you can avoid it. It's bound to cause some unforeseen negative consequence down the road unless you're very careful about how you use it. If you can find an alternative, by all means use it instead.
I need to translate the following code to an Expression, and I will explain why:
results = results.Where(answer => answer.Question.Wording.Contains(term));
results is IQueryable<ISurveyAnswer>
Question is ISurveyQuestion
Wording is String
The problem is, Question is not always the name of the LINQ to SQL property.
This will give me the PropertyInfo for the actual ISurveyQuestion property
private static PropertyInfo FindNaturalProperty<TMemberType>(Type search)
{
    IDictionary<string,PropertyInfo> properties = new Dictionary<string,PropertyInfo>();
    search.GetProperties().Each(prop =>
    {
        if (null != prop.PropertyType.GetInterface(typeof(TMemberType).Name))
            properties.Add(prop.Name, prop);
    });
    if (properties.Count < 1) throw new ArgumentException(String.Format("{0} has no properties of type {1}", search.Name, typeof(TMemberType).Name));
    if (properties.Count == 1) return properties.Values.First();
    search.GetInterfaces().Each(inter =>
    {
        inter.GetProperties().Each(prop =>
        {
            if (null != prop.PropertyType.GetInterface(typeof(TMemberType).Name))
                properties.Remove(prop.Name);
        });
    });
    if (properties.Count < 1) throw new ArgumentException(String.Format("{0} has no properties of type {1} that are not members of an interface", search.Name, typeof(TMemberType).Name));
    if (properties.Count > 1) throw new AmbiguousMatchException(String.Format("{0} has more than one property that are of type {1} and are not members of an interface", search.Name, typeof(TMemberType).Name));
    return properties.Values.First();
}
Once I have the PropertyInfo how do I translate that to an Expression Tree?
EDIT:
What I basically need is:
results = results.Where(answer => answer.GetQuestionProperty().GetValue(answer).Wording.Contains(term));
But that doesn't work, so I need to build the expression tree myself for LINQ to SQL.
Reading the question, I think what you're after is Dynamic LINQ, a helper library that lets you build LINQ queries dynamically (!) using strings, as opposed to at design time. That means that if you can get your property name, you should be able to create your query on the fly.
ScottGu has an article here
What you're trying to do is create a dynamic query, and you want the actual tables/properties you query against to be dynamic as well. I am not sure this is easily possible based on how you want to use it.
Check out ScottGu's blog post:
http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx
and
Check out Rick Strahl's blog post:
http://www.west-wind.com/Weblog/posts/143814.aspx
http://www.linqpad.net/
LINQPad will convert it for you.