In the CSV file that I use for loading ActivePivot, I have 2 fields that need to be multiplied together to compute my record's value: price * quantity.
I am using the CSV source with topics and channels. Where can I perform this computation?
You should override the compute method of the ColumnParser; see below. In the following example we get the QuantitySold and the SellingPricePerUnit and add the result to the Sales column. Do not forget to add the Sales column to your store definition:
@Bean
@DependsOn(value = "csvSource")
public CSVMessageChannelFactory csvChannelFactory() {
    CSVMessageChannelFactory channelFactory = new CSVMessageChannelFactory(csvSource(), datastore);
    channelFactory.setCalculatedColumns(ORDERS_TOPIC, DatastoreConfig.ORDERS, Arrays.<IColumnCalculator>asList(
            // derive new fields
            new ColumnParser("Sales", "double") {
                @Override
                public Object compute(IColumnCalculationContext context) {
                    Long qty = (Long) context.getValue("QuantitySold");
                    Double price = (Double) context.getValue("SellingPricePerUnit");
                    return (qty == null || price == null) ? null : qty * price;
                }
            }));
    return channelFactory;
}
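For the store-definition side of that note, something like this might work (a sketch only: I'm assuming an ActivePivot 5.x-style StoreDescriptionBuilder, and the exact method names, field types and key-field requirements may differ in your version):

// Hypothetical sketch; adjust builder method names to your ActivePivot version.
// The point is simply that the calculated "Sales" column must be declared in
// the store like any parsed column.
IStoreDescription ordersStore = new StoreDescriptionBuilder()
        .withStoreName("Orders")
        .withField("QuantitySold", "long")
        .withField("SellingPricePerUnit", "double")
        .withField("Sales", "double")
        .build();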
I'm trying to create a custom Data Studio connector that connects to the WooCommerce REST API. I want to differentiate between orders placed by a registered user and orders placed by a guest user.
The WooCommerce API gives me the customer_id field; if customer_id = 0, the order was placed by a guest, otherwise the user is registered.
I followed the Google Data Studio tutorial: https://developers.google.com/datastudio/connector/get-started
And this is my responseToRows function:
/**
 * Parse the response data given the required fields of the config
 * @return the parsed data
 */
function responseToRows(requestedFields, response) {
// Transform parsed data and filter for requested fields
return response.map(function(dailyDownload) {
var row = [];
requestedFields.asArray().forEach(function (field) {
switch (field.getId()) {
case 'id_order':
return row.push(dailyDownload.id);
case 'total':
return row.push(dailyDownload.total);
case 'date_created':
return row.push(dailyDownload.date_created);
case 'registered_user' :
if(parseInt(dailyDownload.customer_id) > 0)
return row.push(dailyDownload.customer_id);
case 'guest_user' :
if(parseInt(dailyDownload.customer_id) == 0)
return row.push(dailyDownload.customer_id);
default:
return row.push('');
}
});
return { values: row };
});
}
The function is similar to the one given in the tutorial, and the other fields work fine. I'm just returning when the customer_id is different from 0. It seems to work, but I get null values when the condition doesn't hold.
I would like to remove the null values, keeping only the orders where customer_id was 0 in one column and the complement in the other.
Thanks for the help
I would try it like this after filling the row array:
row.forEach((element, index) => {
    if (element.guest_user == null) {
        // remove the row entry whose guest_user is null
        // (splice mutates the array, so iterating in reverse is safer)
        row.splice(index, 1);
    }
})
And if you don't want to remove the entries from the original array, just create a new array and use newArray.push(element); to copy the entries you want to keep into it.
I am trying to find the last coordinate for all cams in an area within a time interval:
"1 = 1 AND cam IN ('930e74d9-a607-4345-807a-eea117f97935','2da5186c-73f4-42bd-b1cf-40229673e3cf')
AND BBOX(geo, 36.29196166992188, 55.36506387240321, 38.92868041992188, 56.114170062223856)
AND (time >= 2022-02-02T12:00:00+03:00) AND (time <= 2022-02-02T15:00:00+03:00)"
For this I found all cameras in the query through enumeration. Then I try to find the min/max time for each distinct camera.
public CameraStat getCamerasStatistics(GeoQuery geoQueries) {
CameraStat cameraStatistic = new CameraStat();
for (String s : geoQueries.getQs()) {
AccumuloGeoMesaStats stats = ((AccumuloDataStore) dataStore).stats();
s = queryParser.convertToCorrectTime(s);
Option<EnumerationStat<Object>> enumeration = null;
try {
enumeration = stats.getEnumeration(
SimpleFeatureUtils.TYPE, "cam", ECQL.toFilter(s), true);
} catch (CQLException e) {
log.error("Error",e);
}
scala.collection.mutable.Map<Object, Object> camerasCount = enumeration.get()
.enumeration();
Map<Object, Object> map = JavaConversions.mapAsJavaMap(camerasCount);
int size = map.size();
Integer sum = map.values().stream().map(it -> ((Long) it).intValue()).reduce(0,
Integer::sum);
Set<String> collect = map.keySet().stream().map(it -> (String) it).collect(
Collectors.toSet());
cameraStatistic.addCameraCount(size);
cameraStatistic.addCoordinatesCount(sum);
cameraStatistic.addCameras(collect);
}
return cameraStatistic;
}
The problem is that GeoMesa does not persist enumeration stats for Accumulo. Is calculating the camera enumeration on every query the only option?
From the GeoMesa code (org.locationtech.geomesa.index.stats.MetadataBackedStats):
// note: enumeration stats aren't persisted, so we don't override the super method
In GeoMesa there is not currently an option to persist or pre-calculate enumerations; generally they would be too large to store efficiently.
You might explore persisting them separately, or using a local cache such as Guava.
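For example, a minimal Guava sketch that memoizes the enumeration per filter string (the stats lookup mirrors the code in the question; the cache size and expiry are illustrative assumptions):

import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// A field inside the service that owns dataStore; keyed by the ECQL filter string.
private final LoadingCache<String, Map<Object, Object>> enumerationCache =
        CacheBuilder.newBuilder()
                .maximumSize(1000)                       // illustrative bound
                .expireAfterWrite(15, TimeUnit.MINUTES)  // tune to tolerable staleness
                .build(new CacheLoader<String, Map<Object, Object>>() {
                    @Override
                    public Map<Object, Object> load(String ecql) throws Exception {
                        AccumuloGeoMesaStats stats = ((AccumuloDataStore) dataStore).stats();
                        Option<EnumerationStat<Object>> enumeration = stats.getEnumeration(
                                SimpleFeatureUtils.TYPE, "cam", ECQL.toFilter(ecql), true);
                        return JavaConversions.mapAsJavaMap(enumeration.get().enumeration());
                    }
                });

// Then, inside getCamerasStatistics:
// Map<Object, Object> map = enumerationCache.getUnchecked(s);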
I have 3 tables: ruser, accounts and accountgroup. Each one has a column with the same name, rUserId.
I created a POJO class with 3 @Embedded objects as below.
class GroupChatItem(
    @Embedded
    val rUserDto: RUserDto,
    @Embedded
    val account: AccountDto,
    @Embedded
    val accountGroup: AccountGroupDto
)
Now I want to make a query that fetches a GroupChatItem with a given rUserId and accountGroupId, like the following.
#Query("""
Select ruser.*, accounts.*, accountgroup.*
from ruser
inner join accounts on accounts.rUserId = ruser.rUserId and accounts.active = 1
inner join accountgroup on accountgroup.rUserId = :rUserId and accountGroup.accountGroupId = :accountGroupId
where ruser.rUserId = :rUserId
""")
suspend fun getGroupChatItem(rUserId: Long, accountGroupId: Int): GroupChatItem
Unfortunately I get the following error.
Multiple fields have the same columnName: rUserId. Field names: rUserDto > rUserId, account > rUserId, accountGroup > rUserId.
I have tried to add a prefix to each embedded object, but I also get an error. I don't want to retrieve the columns one by one because there are many of them.
Is there anything that I missed?
Thank you
Alternatively, you can use the prefix attribute of the @Embedded annotation:
class GroupChatItem(
    @Embedded(prefix = "user_")
    val rUserDto: RUserDto,
    @Embedded(prefix = "acc_")
    val account: AccountDto,
    @Embedded(prefix = "accgr_")
    val accountGroup: AccountGroupDto
)
and then alias all the columns of each entity in your SQL query.
I think the prefix attribute is a recent addition, but I am not sure.
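For illustration, a hedged sketch of what the aliased query could look like, written as a Java DAO (hypothetical names; only the clashing rUserId columns are spelled out, and every other selected column would need the same prefix treatment):

// Hypothetical illustration: alias each table's columns to match the
// @Embedded prefixes (user_, acc_, accgr_) so Room can route every value
// to the right embedded object.
@Dao
public interface GroupChatDao {
    @Query("SELECT ruser.rUserId AS user_rUserId, "
            + "accounts.rUserId AS acc_rUserId, "
            + "accountgroup.rUserId AS accgr_rUserId "
            // ...alias the remaining columns of each table the same way...
            + "FROM ruser "
            + "INNER JOIN accounts ON accounts.rUserId = ruser.rUserId AND accounts.active = 1 "
            + "INNER JOIN accountgroup ON accountgroup.rUserId = :rUserId "
            + "AND accountgroup.accountGroupId = :accountGroupId "
            + "WHERE ruser.rUserId = :rUserId")
    GroupChatItem getGroupChatItem(long rUserId, int accountGroupId);
}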
I don't believe you have any option other than to:
a) have distinct column names across the tables that are to be included in joins (then there is no need to prefix the column names),
or
b) rename the columns using AS when extracting the values, along with a prefix when embedding the entity, ensuring that the names match.
I believe that a) would be the simpler option, as it reduces the chance of inadvertently using the wrong column name.
As I understand it, the column names have to match for Room to know how to copy a value from the underlying result set (which has no indication of what table a value came from) into the returned object or objects.
This is an example of the generated code for a similar scenario: 3 embedded entities (User, Office and Places) where some of the column names are the same. They each have an id column, and User and Places both have a column named name.
#Override
public UserOfficePlacesCombined getAllUserOfficePlacesCombined() {
final String _sql = "SELECT user.id AS userid, user.name AS username, office.id AS officeid, office.address AS officeaddress, places.id AS placesid, places.name AS placesname FROM User JOIN Office ON User.id = Office.id JOIN Places ON User.id = Places.id";
final RoomSQLiteQuery _statement = RoomSQLiteQuery.acquire(_sql, 0);
__db.assertNotSuspendingTransaction();
final Cursor _cursor = DBUtil.query(__db, _statement, false, null);
try {
final int _cursorIndexOfId = CursorUtil.getColumnIndexOrThrow(_cursor, "userid");
final int _cursorIndexOfName = CursorUtil.getColumnIndexOrThrow(_cursor, "username");
final int _cursorIndexOfId_1 = CursorUtil.getColumnIndexOrThrow(_cursor, "officeid");
final int _cursorIndexOfAddress = CursorUtil.getColumnIndexOrThrow(_cursor, "officeaddress");
final int _cursorIndexOfId_2 = CursorUtil.getColumnIndexOrThrow(_cursor, "placesid");
final int _cursorIndexOfName_1 = CursorUtil.getColumnIndexOrThrow(_cursor, "placesname");
final UserOfficePlacesCombined _result;
if(_cursor.moveToFirst()) {
final User _tmpUser;
if (! (_cursor.isNull(_cursorIndexOfId) && _cursor.isNull(_cursorIndexOfName))) {
final long _tmpId;
_tmpId = _cursor.getLong(_cursorIndexOfId);
final String _tmpName;
_tmpName = _cursor.getString(_cursorIndexOfName);
_tmpUser = new User(_tmpId,_tmpName);
} else {
_tmpUser = null;
}
final Office _tmpOffice;
if (! (_cursor.isNull(_cursorIndexOfId_1) && _cursor.isNull(_cursorIndexOfAddress))) {
final long _tmpId_1;
_tmpId_1 = _cursor.getLong(_cursorIndexOfId_1);
final String _tmpAddress;
_tmpAddress = _cursor.getString(_cursorIndexOfAddress);
_tmpOffice = new Office(_tmpId_1,_tmpAddress);
} else {
_tmpOffice = null;
}
final Places _tmpPlaces;
if (! (_cursor.isNull(_cursorIndexOfId_2) && _cursor.isNull(_cursorIndexOfName_1))) {
final long _tmpId_2;
_tmpId_2 = _cursor.getLong(_cursorIndexOfId_2);
final String _tmpName_1;
_tmpName_1 = _cursor.getString(_cursorIndexOfName_1);
_tmpPlaces = new Places(_tmpId_2,_tmpName_1);
} else {
_tmpPlaces = null;
}
_result = new UserOfficePlacesCombined();
_result.setUser(_tmpUser);
_result.setOffice(_tmpOffice);
_result.setPlaces(_tmpPlaces);
} else {
_result = null;
}
return _result;
} finally {
_cursor.close();
_statement.release();
}
}
The critical lines are the ones like :-
final int _cursorIndexOfId = CursorUtil.getColumnIndexOrThrow(_cursor, "userid")
This is used to search for the column's name in the Cursor (aka the result set) and return the offset of the column; the index is then used to get the actual value from the Cursor.
In your scenario the result set will include something like
rUserId rUserId rUserId
Which one should it use for which? You may know that the first is ruser.rUserId, the second is accounts.rUserId and the third is accountgroup.rUserId, but Room, as it stands, does not know this when generating the code. So in all 3 instances where getColumnIndex("rUserId") is used, it will return either 0 (the first) if it breaks out of the loop, or 2 if it continues rather than breaking (I believe it doesn't break out of the loop).
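To make that concrete, a small hedged sketch against the plain Cursor API (not Room-generated code):

// With three identically named columns in the result set, getColumnIndex can
// only hand back one index, so every embedded object would be populated from
// the same column regardless of which table it came from.
int idx = cursor.getColumnIndex("rUserId");
long fromRuser = cursor.getLong(idx);
long fromAccounts = cursor.getLong(idx);      // same value, not accounts.rUserId
long fromAccountGroup = cursor.getLong(idx);  // same value, not accountgroup.rUserId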
Greetings. I need to get details from users and validate those details against another table: if the date doesn't match, insert a row into that table, but if it does match, don't insert anything. This has to be done for all the users and domains.
User{
String orderNumber
String dealer
Int UserKm
String dateUser
String adviser
Vehicle vehicle
String dateCreated
Date appointmentDate // this date has to be validated against the DateNext
                     // appointmentDate from the Appointments domain; if it
                     // doesn't exist there, then you can insert on that table.
}
Appointments{
User user
Date managementDate
Date lasDataApointies
DateNext appointmentDate
Date NextdAteAppointment
Date callDate
String observations
}
def result = User.executeQuery("""select new map(
    mmt.id as id, mmt.orderNumber as orderNumber, mmt.dealer.dealer as dealer,
    mmt.UserKm as UserKm, mmt.dateUser as dateUser, mmt.adviser as adviser,
    mmt.technician as technician, mmt.vehicle.placa as vehicle,
    mmt.dateCreated as dateCreated, mmt.currenKm as currenKm) from User as mmt""")
def result1=result.groupBy{it.vehicle}
List detailsReslt=[]
result1?.each { SlasDataApointing placa, listing ->
def firsT = listing.first()
int firstKM = firsT.UserKm
def lasT = listing.last()
def lasDataApoint = lasT.id
int lastKM = lasT.UserKm
int NextAppointmentKM = lastKM + 5000
int dayBetweenLastAndNext = lastKM - NextAppointmentKM
def tiDur = getDifference(firsT.dateUser,lasT.dateUser)
int dayToInt = tiDur.days
int restar = firstKM - lastKM
int kmPerDay = restar.div(dayToInt)
int nextMaintenaceDays = dayBetweenLastAndNext.div(kmPerDay)
def nextAppointment = lasT.dateUser + nextMaintenaceDays
detailsReslt << [placa: placa, nextAppointment: nextAppointment,
                 manageId: lasDataApoint, nextKmUser: NextAppointmentKM]
}
detailsReslt?.each {
Appointments addUserData = new Appointments()
addUserData.User = User.findById(it.manageId)
addUserData.managementDate = null
addUserData.NextdAteAppointment = null
addUserData.observations = null
addUserData.callDate = it.nextAppointment
addUserData.save(flush: true)
}
println "we now have ${detailsReslt}"
}
Based on the details, which are not complete, and looking at the code, I can suggest:
There is no need to query into a map; you can simply query the list of users and check all the properties, like user.vehicle. In any case, you need to check each row.
The groupBy{it.vehicle} is not clear, but if needed you can do it using createCriteria with a "groupProperty" projection.
Create 2 service methods, one for iterating over all the users and one for validating each user:
def validateAppointment(User user) {
    /* your validation logic */
    ...
    if (/* validation condition */) {
        Appointments addUserData = new Appointments()
        ...
    }
}

def validateAppointments() {
    List users = User.list()
    users.each { User user ->
        validateAppointment(user)
    }
}
You can trigger the validateAppointments service method from anywhere in the code, or create a scheduled job so it will run automatically based on your needs.
If your list of users is big, and also for efficiency, you can do a bulk update; take a look at my post about it: https://medium.com/meni-lubetkin/grails-bulk-updates-4d749f24cba1
I would suggest creating a custom validator using a service, something like this:
class User{
def appointmentService
...
Date appointmentDate
static constraints = {
appointmentDate validator: { val, obj ->
obj.appointmentService.isDateAppointmentValid(obj.appointmentDate)
}
}
}
But keep in mind that validation may run more often than you think. It is triggered by the validate() and save() methods, as you'd expect (as explained in the user guide (v3.1.15)). So I'm not sure this is the best way to validate appointmentDate in your domain; you have to be careful about that.
Hope this helps.
I'm using a shim property to make sure that the date is always UTC. This in itself is pretty simple but now I want to query on the data. I don't want to expose the underlying property, instead I want queries to use the shim property. What I'm having trouble with is mapping the shim property. For example:
public partial class Activity
{
public DateTime Started
{
// Started_ is defined in the DBML file
get{ return Started_.ToUniversalTime(); }
set{ Started_ = value.ToUniversalTime(); }
}
}
var activities = from a in Repository.Of<Activity>()
where a.Started > DateTime.UtcNow.AddHours( - 3 )
select a;
Attempting to execute the query results in an exception:
System.NotSupportedException: The member 'Activity.Started' has no supported
translation to SQL.
This makes sense: how could LINQ to SQL know how to treat the Started property when it's not a column or association? But I was looking for something like a ColumnAliasAttribute that tells LINQ to SQL to treat the Started property as Started_ (with the underscore).
Is there a way to help LINQ to SQL translate the expression tree so that the Started property can be used just like the Started_ property?
There's a code sample showing how to do that (i.e. use client-side properties in queries) on Damien Guard's blog:
http://damieng.com/blog/2009/06/24/client-side-properties-and-any-remote-linq-provider
That said, I don't think DateTime.ToUniversalTime will translate to SQL, so you may need to write some db-side logic for UTC translations anyway. In that case, it may be easier to expose the UTC date/time as a computed column db-side and include it in your L2S classes.
E.g.:
create table utc_test (utc_test_id int not null identity,
local_time datetime not null,
utc_offset_minutes int not null,
utc_time as dateadd(minute, 0-utc_offset_minutes, local_time),
constraint pk_utc_test primary key (utc_test_id));
insert into utc_test (local_time, utc_offset_minutes) values ('2009-09-10 09:34', 420);
insert into utc_test (local_time, utc_offset_minutes) values ('2009-09-09 22:34', -240);
select * from utc_test
Based on @KristoferA's answer I came up with a reliable solution that hides the fact that the properties are aliased from client code. Since I'm using the repository pattern, returning an IQueryable<T> for specific tables, I can simply wrap the IQueryable<T> result provided by the underlying data context and then translate the expression before the underlying provider compiles it.
Here's the code:
public class TranslationQueryWrapper<T> : IQueryable<T>
{
private readonly IQueryable<T> _source;
public TranslationQueryWrapper( IQueryable<T> source )
{
if( source == null ) throw new ArgumentNullException( "source" );
_source = source;
}
// Basic composition, forwards to wrapped source.
public Expression Expression { get { return _source.Expression; } }
public Type ElementType { get { return _source.ElementType; } }
public IEnumerator<T> GetEnumerator() { return _source.GetEnumerator(); }
IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
// Intercept calls to the provider so we can translate first.
public IQueryProvider Provider
{
get { return new WrappedQueryProvider(_source.Provider); }
}
// Another wrapper around the provider
private class WrappedQueryProvider : IQueryProvider
{
private readonly IQueryProvider _provider;
public WrappedQueryProvider( IQueryProvider provider ) {
_provider = provider;
}
// More composition
public object Execute( Expression expression ) {
    return _provider.Execute( expression ); }
public TResult Execute<TResult>( Expression expression ) {
return _provider.Execute<TResult>( expression ); }
public IQueryable CreateQuery( Expression expression ) {
    return _provider.CreateQuery( expression ); }
// Magic happens here
public IQueryable<TElement> CreateQuery<TElement>(
Expression expression )
{
return _provider
.CreateQuery<TElement>(
ExpressiveExtensions.WithTranslations( expression ) );
}
}
}
Another example can't hurt, I guess.
In my Template class, I have a field Seconds that I convert to a TimeStamp relative to UTC time. The statement also uses chained conditionals (a ? b : c) as a CASE.
private static readonly CompiledExpression<Template, DateTime> TimeStampExpression =
DefaultTranslationOf<Template>.Property(e => e.TimeStamp).Is(template =>
(template.StartPeriod == (int)StartPeriodEnum.Sliding) ? DateTime.UtcNow.AddSeconds(-template.Seconds ?? 0) :
(template.StartPeriod == (int)StartPeriodEnum.Today) ? DateTime.UtcNow.Date :
(template.StartPeriod == (int)StartPeriodEnum.ThisWeek) ? DateTime.UtcNow.Date.AddDays(-(int)DateTime.UtcNow.DayOfWeek) : // Sunday = 0
(template.StartPeriod == (int)StartPeriodEnum.ThisMonth) ? new DateTime(DateTime.UtcNow.Year, DateTime.UtcNow.Month, 1, 0, 0, 0, DateTimeKind.Utc) :
(template.StartPeriod == (int)StartPeriodEnum.ThisYear) ? new DateTime(DateTime.UtcNow.Year, 1, 1, 0, 0, 0, DateTimeKind.Utc) :
DateTime.UtcNow // no matches
);
public DateTime TimeStamp
{
get { return TimeStampExpression.Evaluate(this); }
}
My query to initialize a history-table based on (Event.TimeStamp >= Template.TimeStamp):
foreach (var vgh in (from template in Templates
from machineGroup in MachineGroups
let q = (from event in Events
join vg in MachineGroupings on event.MachineId equals vg.MachineId
where vg.MachineGroupId == machineGroup.MachineGroupId
where event.TimeStamp >= template.TimeStamp
orderby (template.Highest ? event.Amount : event.EventId) descending
select _makeMachineGroupHistory(event.EventId, template.TemplateId, machineGroup.MachineGroupId))
select q.Take(template.MaxResults)).WithTranslations())
MachineGroupHistories.InsertAllOnSubmit(vgh);
It takes a defined maximum number of events per group-template combination.
Anyway, this trick sped up the query by four times or so.