GeoMesa: How to calculate the last coordinate in an area for each cam device?

I am trying to find the last coordinate for all cams in an area within a time interval:
"1 = 1 AND cam IN ('930e74d9-a607-4345-807a-eea117f97935','2da5186c-73f4-42bd-b1cf-40229673e3cf')
AND BBOX(geo, 36.29196166992188, 55.36506387240321, 38.92868041992188, 56.114170062223856)
AND (time >= 2022-02-02T12:00:00+03:00) AND (time <= 2022-02-02T15:00:00+03:00)"
To do this, I find all cameras in the query through an enumeration stat. Then I try to find the min/max time for each distinct camera.
public CameraStat getCamerasStatistics(GeoQuery geoQueries) {
    CameraStat cameraStatistic = new CameraStat();
    for (String s : geoQueries.getQs()) {
        AccumuloGeoMesaStats stats = ((AccumuloDataStore) dataStore).stats();
        s = queryParser.convertToCorrectTime(s);
        Option<EnumerationStat<Object>> enumeration = null;
        try {
            enumeration = stats.getEnumeration(
                    SimpleFeatureUtils.TYPE, "cam", ECQL.toFilter(s), true);
        } catch (CQLException e) {
            log.error("Error", e);
        }
        scala.collection.mutable.Map<Object, Object> camerasCount = enumeration.get()
                .enumeration();
        Map<Object, Object> map = JavaConversions.mapAsJavaMap(camerasCount);
        int size = map.size();
        Integer sum = map.values().stream().map(it -> ((Long) it).intValue()).reduce(0,
                Integer::sum);
        Set<String> collect = map.keySet().stream().map(it -> (String) it).collect(
                Collectors.toSet());
        cameraStatistic.addCameraCount(size);
        cameraStatistic.addCoordinatesCount(sum);
        cameraStatistic.addCameras(collect);
    }
    return cameraStatistic;
}
The problem is that GeoMesa does not persist enumeration stats for Accumulo. Is this the only way to calculate the camera enumeration for each query?
From the GeoMesa code (org.locationtech.geomesa.index.stats.MetadataBackedStats):
// note: enumeration stats aren't persisted, so we don't override the super method

In GeoMesa there is not currently an option to persist or pre-calculate enumerations; generally they would be too large to store efficiently.
You might explore persisting them separately, or using a local cache such as Guava; a rough sketch of the cache approach follows.
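For illustration only, here is a minimal sketch of the Guava-cache idea, reusing the stats call from the question. The cache field, the helper name, and the sizing/expiry values are assumptions, not GeoMesa API:

import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Hypothetical cache keyed by the ECQL filter string, so repeated identical
// queries skip the stats scan; bounded size and expiry limit staleness.
private final Cache<String, Map<Object, Object>> enumerationCache =
        CacheBuilder.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build();

private Map<Object, Object> getCachedEnumeration(AccumuloGeoMesaStats stats, String ecql)
        throws Exception {
    // Compute the enumeration once per distinct filter; later identical calls hit the cache.
    return enumerationCache.get(ecql, () ->
            JavaConversions.mapAsJavaMap(
                    stats.getEnumeration(SimpleFeatureUtils.TYPE, "cam",
                            ECQL.toFilter(ecql), true).get().enumeration()));
}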

Related

Concurrent Read/Write MySQL EF Core

Using EF Core 2.2.6 and Pomelo.EntityFrameworkCore.MySql 2.2.6 (with MySqlConnector 0.59.2). I have a model for UserData:
public class UserData
{
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public ulong ID { get; private set; }

    public Dictionary<string, string> Data { get; set; }

    [Required]
    public Dictionary<string, InventoryItem> Inventory { get; set; }

    public UserData()
    {
        Data = new Dictionary<string, string>();
        Inventory = new Dictionary<string, InventoryItem>();
    }
}
I have a REST method that can be called that will add items to the user inventory:
using (var transaction = context.Database.BeginTransaction())
{
    UserData data = await context.UserData.FindAsync(userId);
    // there is code here to detect duplicate entries/etc, but I've removed it for brevity
    foreach (var item in items) data.Inventory.Add(item.ItemId, item);
    context.UserData.Update(data);
    await context.SaveChangesAsync();
    transaction.Commit();
}
If two or more calls to this method are made with the same user id then I get concurrent accesses (despite the transaction). This causes the data to sometimes be incorrect. For example, if the inventory is empty and then two calls are made to add items simultaneously (item A and item B), sometimes the database will only contain either A or B, and not both. From logging it appears that it is possible for EF to read from the database while the other read/write is still occurring, causing the code to have the incorrect state of the inventory for when it tries to write back to the db. So I tried marking the isolation level as serializable.
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
Now I sometimes see an exception:
MySql.Data.MySqlClient.MySqlException (0x80004005): Deadlock found when trying to get lock; try restarting transaction
I don't understand how this code could deadlock... Anyway, I tried to proceed by wrapping this whole thing in a try/catch, and retrying:
public static async Task<ResponseError> AddUserItem(Controller controller, MyContext context, ulong userId, List<InventoryItem> items, int retry = 5)
{
    ResponseError result = null;
    try
    {
        using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            UserData data = await context.UserData.FindAsync(userId);
            // there is code here to detect duplicate entries/etc, but I've removed it for brevity
            foreach (var item in items) data.Inventory.Add(item.ItemId, item);
            context.UserData.Update(data);
            await context.SaveChangesAsync();
            transaction.Commit();
        }
    }
    catch (Exception e)
    {
        if (retry > 0)
        {
            await Task.Delay(SafeRandomGenerator(10, 500));
            // decrement the retry count for the recursive call (retry-- would pass the original value)
            return await AddUserItem(controller, context, userId, items, retry - 1);
        }
        else
        {
            // store exception and return error
        }
    }
    return result;
}
And now I am back to the data being sometimes correct, sometimes not. So I think the deadlock is another problem, but this is the only method accessing this data. So, I'm at a loss. Is there a simple way to read from the database (locking the row in the process) and then writing back (releasing the lock on write) using EF Core? I've looked at using concurrency tokens, but this seems overkill for what appears (on the surface to me) to be a trivial task.
I added logging for mysql connector as well as asp.net server and can see the following failure:
fail: Microsoft.EntityFrameworkCore.Database.Command[20102]
=> RequestId:0HLUD39EILP3R:00000001 RequestPath:/client/AddUserItem => Server.Controllers.ClientController.AddUserItem (ServerSoftware)
Failed executing DbCommand (78ms) [Parameters=[@p1='?' (DbType = UInt64), @p0='?' (Size = 4000)], CommandType='Text', CommandTimeout='30']
UPDATE `UserData` SET `Inventory` = @p0
WHERE `ID` = @p1;
SELECT ROW_COUNT();
A total hack is to just delay the arrival of the queries by a bit. This works because the client is most likely to generate these calls on load. Normally back-to-back calls aren't expected, so spreading them out in time by delaying on arrival works. However, I'd rather find a correct approach, since this just makes it less likely to be an issue:
ResponseError result = null;
await Task.Delay(SafeRandomGenerator(100, 500));
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
// etc
This isn't a good answer, because it isn't what I wanted to do, but I'll post it here as it did solve my problem. My problem was that I was trying to read the database row, modify it in ASP.NET, and then write it back, all within a single transaction and while avoiding deadlocks. The backing field is a JSON type, and MySQL provides some JSON functions to modify that JSON directly in the database. This required me to write SQL statements directly instead of using EF, but it did work.
The first trick was to ensure I could create the row if it didn't exist, without requiring a transaction and lock.
INSERT INTO UserData VALUES ({0},'{{}}','{{}}') ON DUPLICATE KEY UPDATE ID = {0};
I used JSON_REMOVE to delete keys from the JSON field:
UPDATE UserData as S set S.Inventory = JSON_REMOVE(S.Inventory,{1}) WHERE S.ID = {0};
and JSON_SET to add/modify entries:
UPDATE UserData as S set S.Inventory = JSON_SET(S.Inventory,{1},CAST({2} as JSON)) WHERE S.ID = {0};
Note, if you're using EF Core and want to call this using FromSql then you need to return the entity as part of your SQL statement. So you'll need to add something like this to each SQL statement:
SELECT * from UserData where ID = {0} LIMIT 1;
Here is a full working example as an extension method:
public static async Task<UserData> FindOrCreateAsync(this IQueryable<UserData> table, ulong userId)
{
    string sql = "INSERT INTO UserData VALUES ({0},'{{}}','{{}}') ON DUPLICATE KEY UPDATE ID = {0}; SELECT * FROM UserData WHERE ID={0} LIMIT 1;";
    return await table.FromSql(sql, userId).SingleOrDefaultAsync();
}

public static async Task<UserData> JsonRemoveInventory(this DbSet<UserData> table, ulong userId, string key)
{
    if (!key.StartsWith("$.")) key = $"$.\"{key}\"";
    string sql = "UPDATE UserData as S set S.Inventory = JSON_REMOVE(S.Inventory,{1}) WHERE S.ID = {0}; SELECT * from UserData where ID = {0} LIMIT 1;";
    return await table.AsNoTracking().FromSql(sql, userId, key).SingleOrDefaultAsync();
}
Usage:
var data = await context.UserData.FindOrCreateAsync(userId);
await context.UserData.JsonRemoveInventory(userId, itemId);

JPA caching database results, need to "un-cache"

I'm seeing "caching" behavior with database (MySQL 5) records. I can't seem to see the new data on the application side without logging in/out or restarting the app server (GlassFish 3). This is the only place in the application where db records are "stuck." I'm guessing I'm missing something with JPA persistence.
I've attempted changing db records by hand; there's still some sort of caching mechanism in place "helping" me.
This is the editFile() method that saves new data.
After I fire this, I see the data updated in the db as expected.
this.file is the class-level property that the view uses to show file data. It shows old data. I attempt to move db data back into it after I've fired my UPDATE queries with the filesList setter: this.setFilesList(newFiles);
When the application reads it back out, though, GlassFish responds to requests for this data with old data.
public void editFile(Map<String, String> params) {
    // update file1 record
    File1 thisFile = new File1();
    thisFile.setFileId(Integer.parseInt(params.get("reload-form:fileID")));
    thisFile.setTitle(params.get("reload-form:input-small-name"));
    thisFile.setTitle_friendly(params.get("reload-form:input-small-title-friendly"));
    this.filesFacade.updateFileRecord(thisFile);

    // update files_to_categories record
    int thisFileKeywordID = Integer.parseInt(params.get("reload-form:select0"));
    this.filesToCategoriesFacade.updateFilesToCategoriesRecords(thisFile.getFileId(), thisFileKeywordID);

    this.file = this.filesFacade.findFileByID(thisFile.getFileId());
    List<File1> newFiles = (List<File1>) this.filesFacade.findAllByRange(low, high);
    this.setFilesList(newFiles);
}
Facades
My facades are firing native SQL to update each of those DB tables. When I check the DB after they fire, the data is going in; that part is happening as I expect and hope.
File1
public int updateFileRecord(File1 file){
    String title = file.getTitle();
    String title_titleFriendly = file.getTitle_friendly();
    int fileID = file.getFileId();
    int result = 0;
    Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
    q.setParameter(1, title);
    q.setParameter(2, title_titleFriendly);
    q.setParameter(3, fileID);
    result = q.executeUpdate();
    return result;
}
FilesToCategories
public int updateFilesToCategoriesRecords(int fileId, int keywordID){
    Query q = this.em.createNativeQuery("UPDATE files_to_categories set categories = ?1 where file1 = ?2");
    q.setParameter(1, keywordID);
    q.setParameter(2, fileId);
    return q.executeUpdate();
}
How do I un-cache?
Thanks again for looking.
I don't think caching is the problem; I think it's transactions.
em.getTransaction().begin();
Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
q.setParameter(1, title);
q.setParameter(2, title_titleFriendly);
q.setParameter(3, fileID);
result = q.executeUpdate();
em.getTransaction().commit();
I recommend surrounding your writes to the DB with transactions to get them persisted. Unless you commit, requests may return results without the changes.
OK, so JTA does the transaction management.
Then why are you doing this, when you are using JPA?
public int updateFileRecord(File1 file){
    String title = file.getTitle();
    String title_titleFriendly = file.getTitle_friendly();
    int fileID = file.getFileId();
    int result = 0;
    Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
    q.setParameter(1, title);
    q.setParameter(2, title_titleFriendly);
    q.setParameter(3, fileID);
    result = q.executeUpdate();
    return result;
}
This should work and update the internal state that JPA maintains:
public void updateFileRecord(File1 file){
    em.persist(file);
}
@daniel & @Tiny got me going on this one, thanks again guys.
I wanted to point out that I used the .merge() method of the EntityManager class.
It's important to note that for .merge() to UPDATE the record instead of INSERTing a new one, the object you're submitting to .merge() must include all properties corresponding to the fields in the database table (that your DAO knows about), or you will INSERT new database records.
public void updateFileRecord(File1 file){
    em.merge(file);
}
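Putting that together with the editFile() flow from the question, a minimal sketch (reusing the question's entity and facade names; the load-then-overwrite pattern is an assumption, not the poster's exact code) might look like:

public void editFile(Map<String, String> params) {
    int fileId = Integer.parseInt(params.get("reload-form:fileID"));
    // Load the current record first so every column is populated,
    // then overwrite only the fields the form actually changed before merging.
    File1 managed = this.filesFacade.findFileByID(fileId);
    managed.setTitle(params.get("reload-form:input-small-name"));
    managed.setTitle_friendly(params.get("reload-form:input-small-title-friendly"));
    this.filesFacade.updateFileRecord(managed); // the facade now calls em.merge(managed)
}

Because merge() receives a fully populated entity, JPA issues an UPDATE rather than an INSERT, and the persistence context reflects the new state immediately.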

Ebean calling stored procedure and converting ResultSet to Model

I'm working on a report module; to do that I'm creating different stored procedures. I create each procedure with IN parameters and then create a class to map the rows (ResultSet).
I think that's the best way to balance performance and clarity (what do you think about that?).
I'm using Play Framework and the Ebean ORM (2.7.7).
I'm calling the stored procedure and getting the ResultSet, but I would like to use Ebean to automatically map each row to a model... the other option is to take each cell of the row and cast it into a property, but I'm trying to avoid that.
This is the current approach.
Is this the best way to call a stored procedure?
Transaction tx = Ebean.beginTransaction();
String sql = "{CALL report(?, ?, ?, ?, ?, ?)}";
CallableStatement callableStatement = null;
try {
    Connection dbConnection = tx.getConnection();
    callableStatement = dbConnection.prepareCall(sql);
    callableStatement.setInt(1, 3);
    callableStatement.setInt(2, 5);
    callableStatement.setInt(3, 2013);
    callableStatement.setInt(4, 1);
    callableStatement.setInt(5, 2014);
    callableStatement.setInt(6, 5);
    // executeQuery() with no argument; executeQuery(sql) would bypass the bound parameters
    ResultSet rs = callableStatement.executeQuery();
    while (rs.next()) {
        // HOW TO CONVERT row -> model ?
    }
    Ebean.commitTransaction();
} catch (SQLException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
I've discarded RawSql and Query because I received an error:
RuntimeException: Error parsing sql, can not find SELECT keyword in: xxxxx
I also found another option... using CallableSql:
String sql = "{call sp_order_mod(?,?)}";
CallableSql cs = Ebean.createCallableSql(sql);
cs.setParameter(1, "turbo");
cs.registerOut(2, Types.INTEGER);
Ebean.execute(cs);
// read the out parameter
Integer returnValue = (Integer) cs.getObject(2);
but in this case I need to return a ResultSet, not simply an out parameter.
I'm going to share my own solution.
I got hold of a class called ResultSetUtils (you can google for an implementation).
I added a generic method to it in order to return a typed list from the ResultSet:
public static <T> List<T> populateInList(Class<T> c, final ResultSet rs) {
    List<T> listTyped = new ArrayList<T>();
    try {
        if (rs != null) {
            while (rs.next()) {
                T o = c.newInstance();
                // MAGIC LINE
                populate(o, rs);
                listTyped.add(o);
            }
            rs.close();
        }
    } catch (final Exception e) {
        // TODO Auto-generated catch block
        System.err.println(e.getMessage());
    }
    return listTyped;
}
To do the population, this class uses the org.apache.commons.beanutils package:
BeanUtils.populate(bean, propertiesRealName);
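For reference, here is a rough sketch of what such a populate() helper might do (an assumption about the googled ResultSetUtils class, not its actual source): copy each column of the current row into a map keyed by column label, then let BeanUtils match the keys to setters by name.

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.beanutils.BeanUtils;

public static void populate(Object bean, ResultSet rs) throws Exception {
    ResultSetMetaData meta = rs.getMetaData();
    Map<String, Object> props = new HashMap<String, Object>();
    // Copy every column of the current row; BeanUtils.populate matches
    // the map keys to the bean's setters by name.
    for (int i = 1; i <= meta.getColumnCount(); i++) {
        props.put(meta.getColumnLabel(i), rs.getObject(i));
    }
    BeanUtils.populate(bean, props);
}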
Usage:
private static void callingProcedureTest() {
    Logger.debug("Init callingProcedureTest");
    Transaction tx = Ebean.beginTransaction();
    // String sql = "{CALL sp_report_test(3, 5, 2013, 1, 2014, 5)}";
    String sql = "CALL sp_report_test(?, ?, ?, ?, ?, ?);";
    try {
        Connection dbConnection = tx.getConnection();
        CallableStatement callableStatement = dbConnection.prepareCall(sql);
        callableStatement.setInt(1, 3);
        callableStatement.setInt(2, 5);
        callableStatement.setInt(3, 2013);
        callableStatement.setInt(4, 1);
        callableStatement.setInt(5, 2014);
        callableStatement.setInt(6, 5);
        Logger.debug("SQL > " + sql);
        ResultSet rs = callableStatement.executeQuery();
        Class<ReportTestResult> c = ReportTestResult.class;
        // ************** MAGIC LINE, converting ResultSet to Model
        List<ReportTestResult> listResult = ResultSetUtils.populateInList(c, rs);
        for (ReportTestResult item : listResult) {
            Logger.debug("item.firstName> " + item.firstName);
            Logger.debug("item.lastName > " + item.lastName);
            Logger.debug("item.year > " + item.year);
        }
        Ebean.commitTransaction();
    } catch (Exception e) {
        Ebean.rollbackTransaction();
        // TODO Auto-generated catch block
        e.printStackTrace();
    } finally {
        Ebean.endTransaction();
    }
}
More about the architecture and implementation:
For each report I'm going to create:
a Result class (e.g. ReportTestResult)
    intention: represents a row of the report | a simple DTO
a Param class (e.g. ReportTestParam)
    intention: represents the parameters (inputs/outputs) and filters of the report
    This class should implement:
    public interface ReportParam {
        public int countParameters();
        public void setParametersInCallableStatement(CallableStatement callableStatement) throws SQLException;
    }
a Report class (e.g. ReportTestReport); this class should extend ReportBase
    intention: knows the stored procedure's name, its parameters, and the DTO result
public class ReportTestReport extends ReportBase<ReportTestResult, ReportTestParam> {
    @Override
    protected String getProcedureName() {
        return STORED_NAME;
    }
}
many Adapters...
Each report could be displayed in different charts; in this case I'm using HighCharts. To arrange that, I'm creating different adapters.
E.g.: class ReportTestHighChartsAdapter
    intention: converts a list of ReportTestResult to series and configures the different options of the report (e.g. title, xAxis, etc.)
public OptionsHC buildColumnReportV1(){
    OptionsHC optionChart = new OptionsHC();
    optionChart.chart = new ChartHC("column");
    this.setTitle(optionChart);
    optionChart.yAxis = new AxisHC(new TitleHC("Fruit eaten"));
    .....
    return optionChart;
}
OptionsHC is a class that represents the options object in the HighCharts framework.
The final step is converting OptionsHC to JSON and using it in JavaScript (the usual way of driving HighCharts).
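That conversion can be a single call; a sketch assuming Play's bundled Jackson wrapper (play.libs.Json is not shown anywhere in the post, so treat this as an illustration):

import play.libs.Json;

// Serialize the options object; the page's JavaScript can feed the
// resulting string straight to the HighCharts constructor.
String optionsJson = Json.toJson(optionChart).toString();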
What's ReportBase?
The ReportBase class has the strategy that implements the final call to the DB, and it also manages the transaction:
public class ReportTestReport extends ReportBase<ReportTestResult, ReportTestParam> {
    ...
    protected List<TResult> execute(Class<TResult> classT) {
        List<TResult> resultDTO = null;
        CallableStatement callableStatement = null;
        Logger.debug("Running procedure> " + this.getProcedureName());
        Transaction tx = Ebean.beginTransaction();
        String sql = ProcedureBuilder.build(this.getProcedureName(), this.countParameters());
        Logger.debug("SQL > " + sql);
        try {
            Connection dbConnection = tx.getConnection();
            callableStatement = dbConnection.prepareCall(sql);
            this.getFilter().setParametersInCallableStatement(callableStatement);
            ResultSet rs = callableStatement.executeQuery();
            resultDTO = ResultSetUtils.populateInList(classT, rs);
            Ebean.commitTransaction();
            Logger.debug("commitTransaction > " + sql);
        } catch (Exception e) {
            Ebean.rollbackTransaction();
            Logger.debug("rollbackTransaction > " + sql);
            // TODO Auto-generated catch block
            e.printStackTrace();
        } finally {
            Ebean.endTransaction();
        }
        return resultDTO;
    }
    ...
}
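One piece not shown in the post is ProcedureBuilder.build, which ReportBase uses to assemble the JDBC call string. A plausible sketch (hypothetical; the real helper isn't included above) built from the procedure name and parameter count:

public class ProcedureBuilder {
    // Builds e.g. "{call sp_report_test(?,?,?,?,?,?)}" for paramCount = 6.
    public static String build(String procedureName, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procedureName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(")}").toString();
    }
}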
Currently the support for stored procedures in Ebean is not oriented to what you are trying to do. Hence you are not going to get much joy from using CallableSql or RawSql.
>> a class to map the row (resultSet) I think that's the best way to work around performance and clarity
Yes, I can understand your motivation.
>> How to convert ResultSet into model
Currently there is no good solution. The best solution would be to enhance RawSql so that you can set a ResultSet onto it. One of the things RawSql does is provide the mapping of ResultSet columns to model properties, and that is what Ebean needs internally. The enhancement/code change would be to be able to set a ResultSet onto the RawSql object ... and get Ebean internally to skip the creation of the ResultSet (preparedStatement, binding parameters, and executeQuery()). In terms of Ebean internals this is all done in the CQuery.prepareBindExecuteQueryWithOption() method. That is, if the RawSql has already provided a ResultSet, skip those things.
The big benefit of doing this, rather than just rolling your own row -> model mapping code, is that the resulting beans would all still have lazy loading / partial object knowledge etc. They would behave exactly like any other beans that Ebean builds as part of its query mechanism.
So that said, I'm personally away for a week ... so you aren't going to hear back from me until after that. If you want to get into it yourself then internally CQuery.prepareBindExecuteQueryWithOption() is the code you will need to modify.
If you have been following the Ebean Google group you'll know this, but just in case you have not: the Model and Finder objects from Play have been incorporated into Ebean just in the last week. This helps both projects ... reduces confusion etc. The Ebean source in GitHub master is at 4.0.4, and the bytecode enhancement in 4.x is different and, I believe, not supported in Play.
I'm basically going offline for a week now so I'll look back into this after that.
Cheers, Rob.

Weird behaviour encountered using java.sql.Timestamp and a MySQL database

The weird behavior is that a java.sql.Timestamp that I create using the System.currentTimeMillis() method is stored in my MySQL database as 1970-01-01 01:00:00.
The two timestamps I am creating are to mark the beginning and end of a monitoring task I am trying to perform; what follows are excerpts from the code where the behavior occurs:
final long startTime = System.currentTimeMillis();
while (numberOfTimeStepsPassed < numTimeStep) {
    /*
     * Code in here
     */
}
final long endTime = System.currentTimeMillis();
return mysqlConnection.insertDataInformation(matrixOfRawData, name, Long.toString(startTime),
        Long.toString(endTime), Integer.toString(numTimeStep),
        Integer.toString(matrixOfRawData[0].length), owner,
        type);
And here is the code used for inserting the timestamps and other data into the MySQL database:
public String insertDataInformation(final double [][] matrix,
                                    final String ... params) {
    getConnection(lookUpName);
    String id = "";
    PreparedStatement dataInformationInsert = null;
    try {
        dataInformationInsert =
                databaseConnection.prepareStatement(DATA_INFORMATION_PREPARED_STATEMENT);
        id = DatabaseUtils.createUniqueId();
        int stepsMonitored = Integer.parseInt(params[STEPS_MONITORED]);
        int numberOfMarkets = Integer.parseInt(params[NUMBER_OF_MARKETS]);
        dataInformationInsert.setNString(ID_INDEX, id);
        dataInformationInsert.setNString(NAME_INDEX, params[0]);
        dataInformationInsert.setTimestamp(START_INDEX, new Timestamp(Long.parseLong(params[START_INDEX])));
        dataInformationInsert.setTimestamp(END_INDEX, new Timestamp(Long.parseLong(params[END_INDEX])));
        dataInformationInsert.setInt(STEPS_INDEX, stepsMonitored);
        dataInformationInsert.setInt(MARKETS_INDEX, numberOfMarkets);
        dataInformationInsert.setNString(OWNER_INDEX, params[OWNER]);
        dataInformationInsert.setNString(TYPE_INDEX, params[TYPE]);
        dataInformationInsert.executeUpdate();
        insertRawMatrix(matrix, id, Integer.toString(stepsMonitored), Integer.toString(numberOfMarkets));
    } catch (SQLException sqple) {
        // TODO Auto-generated catch block
        sqple.printStackTrace();
        System.out.println(sqple.getSQLState());
    } finally {
        close(dataInformationInsert);
        dataInformationInsert = null;
        close(databaseConnection);
    }
    return id;
}
The important lines of code are:
dataInformationInsert.setTimestamp(START_INDEX, new Timestamp(Long.parseLong(params[START_INDEX])));
dataInformationInsert.setTimestamp(END_INDEX, new Timestamp(Long.parseLong(params[END_INDEX])));
The JavaDoc for Timestamp (http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/Timestamp.html) says that it takes in a time in milliseconds since 1st January 1970, and a simple print test confirms this.
What I am looking for is:
A reason for this behavior when trying to store timestamps in a MySQL database through java.sql.Timestamp?
Any solutions to this behavior?
Any possible alternatives?
Any possible improvements?
EDIT:
I've been asked to include what START_INDEX and END_INDEX are:
private static final int END_INDEX = 4;
private static final int START_INDEX = 3;
Apologies for not putting them in the original post.
Okay, look at your call:
insertDataInformation(matrixOfRawData, name, Long.toString(startTime),
Long.toString(endTime), Integer.toString(numTimeStep),
Integer.toString(matrixOfRawData[0].length), owner,
type);
So params will have values:
0: name
1: start time
2: end time
3: numTimeStep
4: matrixOfRawData[0].length
5: owner
6: type
Then you're doing:
dataInformationInsert.setTimestamp(START_INDEX,
new Timestamp(Long.parseLong(params[START_INDEX])));
... where START_INDEX is 3.
So you're using the value corresponding to numTimeStep as the value for the timestamp... I suspect you don't want to do that.
I would strongly advise you to create a simple object type (possibly a nested type in the same class) to let you pass these parameters in a strongly typed, simple to get right fashion. The string conversion and the access by index are both unwarranted, and can easily give rise to errors.
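For instance, a minimal holder along these lines (all names here are hypothetical) removes both the string round-trips and the index arithmetic:

// Hypothetical parameter object replacing the String... varargs.
public final class DataInformation {
    final String name;
    final long startMillis;
    final long endMillis;
    final int stepsMonitored;
    final int numberOfMarkets;
    final String owner;
    final String type;

    DataInformation(String name, long startMillis, long endMillis,
                    int stepsMonitored, int numberOfMarkets,
                    String owner, String type) {
        this.name = name;
        this.startMillis = startMillis;
        this.endMillis = endMillis;
        this.stepsMonitored = stepsMonitored;
        this.numberOfMarkets = numberOfMarkets;
        this.owner = owner;
        this.type = type;
    }
}

The insert then reads unambiguously, with no way to mix up positions:

dataInformationInsert.setTimestamp(START_INDEX, new Timestamp(info.startMillis));
dataInformationInsert.setTimestamp(END_INDEX, new Timestamp(info.endMillis));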

Unable to create a constant value of type 'T'

I have a table called Subjects.
I have another table called Allocations, which stores the allocations of the Subjects.
I have a DataGridView, which is populated with subject allocations from the Allocations table.
Now I need to get the Subjects that are not in the DataGridView.
To do this:
I get all Subjects from the ObjectContext.
Then I get all the Subjects that are allotted from the DataGridView (it returns an in-memory collection).
Now I use the LINQ Except method to filter the results, but it is throwing the following exception:
"Unable to create a constant value of type 'ObjectContext.Subjects'. Only primitive types ('such as Int32, String, and Guid') are supported in this context."
Below is my code:
public static IOrderedQueryable<Subject> GetSubjects()
{
    return OBJECTCONTEXT.Subjects.OrderBy(s => s.Name);
}

private IQueryable<Subject> GetAllocatedSubjectsFromGrid()
{
    return (from DataGridViewRow setRow in dgv.Rows
            where !setRow.IsNewRow
            select setRow.DataBoundItem).Cast<Allocation>() // I know the problem lies somewhere in this function
           .Select(alloc => alloc.Subject).AsQueryable();
}

private void RUN()
{
    IQueryable<Subject> AllSubjects = GetSubjects();
    IQueryable<Subject> SubjectsToExclude = GetAllocatedSubjectsFromGrid();
    IQueryable<Subject> ExcludedSubjects = AllSubjects.Except(SubjectsToExclude.AsEnumerable());
    // Throws: "Unable to create a constant value of type 'OBJECTCONTEXT.Subject'. Only primitive types ('such as Int32, String, and Guid') are supported in this context."
}
As a result of googling, I found that it happens because LINQ can't compare an in-memory collection (records from the DGV) with the ObjectContext (from the DB).
I'm a little short of time and have not tested it, but I guess you can try to get it all into memory. So instead of using:
IQueryable<Subject> AllSubjects = GetSubjects();
You do:
List<Subject> AllSubjects = GetSubjects().ToList();
List<Subject> SubjectsToExclude = GetAllocatedSubjectsFromGrid().ToList();
List<Subject> ExcludedSubjects = AllSubjects.Except(SubjectsToExclude).ToList(); // Except returns an IEnumerable, so materialize it
I got around this by comparing keys in a Where clause rather than using Except.
So instead of:
var SubjectsToExclude = GetAllocatedSubjectsFromGrid();
var ExcludedSubjects = AllSubjects.Except(SubjectsToExclude.AsEnumerable());
Something more like:
var subjectsToExcludeKeys =
    GetAllocatedSubjectsFromGrid()
        .Select(subject => subject.ID);

var excludedSubjects =
    AllSubjects
        .Where(subject => !subjectsToExcludeKeys.Contains(subject.ID));
(I'm guessing what your entity's key looks like though.)
This allows you to keep everything in Entity Framework, rather than pulling everything into memory.