Limit questions - json

I'm making a quiz app in Flutter and have a local JSON file with around 200 questions. How can I limit the quiz to 40 questions? Right now, when I open the app, it shows me all of the questions.
json={results:[
{question},
]}
final jsonResponse = convert.jsonDecode(json);
final result = (jsonResponse['results'] as List)
    .map((question) => QuestionModel.fromJson(question));
questions.value = result
    .map((question) => Question.fromQuestionModel(question))
    .toList();
return true;
}
}

Use sublist after calling .toList().
This can be done easily with .sublist(), which returns a new list containing the elements from the start index (inclusive) to the end index (exclusive) of your original list, like this:
final result = (jsonResponse['results'] as List)
    .map((question) => QuestionModel.fromJson(question));
questions.value = result
    .map((question) => Question.fromQuestionModel(question))
    .toList()
    .sublist(0, 40); // indices 0..39, i.e. the first 40 questions
Note
If you want to keep all 200 questions and serve them 40 at a time, you should use pagination. In that case you would not call sublist here; you would call it later, after the result is returned, on the list that is bound to the UI part (a rough sketch follows below).
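As a rough sketch of that idea (pageSize and the pageOf helper are illustrative names, not part of the original code), keep the full parsed list in memory and slice a 40-question window whenever the UI asks for the next page:
import 'dart:math' as math;

const pageSize = 40;

// allQuestions is the full list already built from the decoded JSON
List<Question> pageOf(List<Question> allQuestions, int page) {
  final start = page * pageSize;
  // cap the end index so the last page doesn't run past the end of the list
  final end = math.min(start + pageSize, allQuestions.length);
  return allQuestions.sublist(start, end);
}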
Bonus Tip
Check out the Flutter plugin flutter_pagewise, which makes pagination a lot easier; it can be very helpful in a lot of situations.

Related

How to work with data returned by mysql select query in nodejs

I am working on a Discord bot written in Node.js; the bot uses a MySQL database server to store information. The problem I have run into is that I cannot seem to retrieve the data from the database in a neat way; every single thing I try seems to run into some issue or another.
The SELECT query returns objects called RowDataPacket. When googling, every single result references this solution: Object.values(JSON.parse(JSON.stringify(rows)))
It claims that I should get the values back, but I don't; I get back an array that is just as hard to work with as the RowDataPacket object.
This is a snippet of my code:
const kenneledMemberRolesTableName = 'kenneled_member_roles'
const kenneledMemberKey = 'kenneled_member'
const kenneledMemberRoleKey = 'kenneled_member_role_id'
const kenneledStaffMemberKey = 'kenneled_staff_member'
const kenneledDateKey = 'kenneled_date'
const kenneledReturnableRoleKey = 'kenneled_role_can_be_returned'
async function findKenneledMemberRoles(kenneledMemberId) {
    let sql = `SELECT CAST(${kenneledMemberRoleKey} AS Char) FROM ${kenneledMemberRolesTableName} WHERE ${kenneledMemberKey} = ${kenneledMemberId}`
    let rows = await databaseAccessor.runQuery(sql)
    let result = JSON.parse(JSON.stringify(rows)).map(row => {
        return row.kenneled_member_role_id
    })
    return result
}
This seemed to work until I had to do a type conversion on the value; now dot notation would require me to reference row.CAST(kenneled_member_role_id AS Char), which cannot work, and I have found no other way to retrieve the data than dot notation. I swear there must be a better way to work with MySQL RowDataPackets, but the solution eludes me.
I figured out something that works; however, I still feel this is an inelegant solution. I would love to hear from others whether I am misunderstanding how to work with MySQL in Node.js, or whether this is just a consequence of the library:
let result = JSON.parse(JSON.stringify(rows)).map(row => {
    return row[`CAST(${kenneledMemberRoleKey} AS CHAR)`];
})
So what I did is access the value through bracket notation instead of dot notation. This seems to work, and at least lets me store part of or the whole expression in a constant, hiding the ugliness.
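For what it's worth, a hedged alternative sketch (it reuses the same databaseAccessor helper and constants from above): giving the CAST expression an SQL alias makes the returned column name predictable, so plain dot notation keeps working and the JSON round-trip isn't needed.
async function findKenneledMemberRoles(kenneledMemberId) {
    // alias the cast column so every row exposes it simply as "role_id"
    let sql = `SELECT CAST(${kenneledMemberRoleKey} AS CHAR) AS role_id FROM ${kenneledMemberRolesTableName} WHERE ${kenneledMemberKey} = ${kenneledMemberId}`
    let rows = await databaseAccessor.runQuery(sql)
    return rows.map(row => row.role_id)
}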

Laravel 5.4 formatting result set

Can someone help me convert this query so that my result set is in a different format?
$sessions = new Session();
$results = $sessions->where('session_status', $status)
    ->where('application_period_id', (int) ApplicationPeriod::all()->last()->id)
    ->get()
    ->pluck('speaker_id');

$speakers = Speaker::whereIn('id', $results)
    ->with('session.audiancesession.audiances')
    ->with('session.subjectsession.subjects')
    ->with(['session' => function ($query) use ($status) {
        $query->where('session_status', '=', $status);
    }])
    ->orderBy('last_name')
    ->get();
This is requested via Ajax (axios)... Now this is how the result is formatted:
Obj->data (array of objects)
        ->[0]->name
              ->address
              ->session (array of objects)
                    ->[0]->time
                          ->fee
My issue is that my session property is an array, and there can only ever be one session, so I don't need it to be an array; I would like it to be an object (JSON) instead.
Thank you!
You might have more success if you change your client-side code to work with an array of sessions, each session having its speaker. That means your original query would look like this:
$sessions = Session::with([
        'speaker', 'audiancesession.audiances', 'subjectsession.subjects'
    ])
    ->where('application_period_id', (int) ApplicationPeriod::orderBy('id', 'DESC')->first()->id)
    ->get();
Note: the orderBy()->first() on ApplicationPeriod means you don't have to load all application periods from the database into memory.
Then your client side should handle an array of sessions.
You can transform the above slightly to get a result similar to what you need:
$speakers = $sessions->map(function ($session) {
    $speaker = collect($session->speaker->toArray());
    $speaker->put('session', collect($session->toArray())->except('speaker'));
    return $speaker;
})->sortByDesc('last_name');
Though I wouldn't guarantee the result here as I've not tested it on your (complex looking) data.

NamedList with Deep Pagination

QueryRequest req=new QueryRequest(solrQuery);
NoOpResponseParser responseParser = new NoOpResponseParser();
responseParser.setWriterType("csv");
searcherServer.setParser(responseParser);
NamedList<Object> resp=searcherServer.request(req);
QueryResponse res = searcherServer.query(solrQuery);
responseString = (String)resp.get("response");
I use the above code to get the output in CSV format. The data I am trying to fetch is huge (in the billions), so I want to use Solr's deep pagination and fetch the CSV output in chunks. Is there a way to do this? Also, with my current version of Solr (I cannot upgrade), I have to use the above code to get CSV output.
I tried the below way to fetch the results.
searcherServer = new HttpSolrServer(url);
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(query);
solrQuery.set("fl","field1");
solrQuery.setParam("wt", "csv");
solrQuery.setStart(0);
solrQuery.setRows(1000);
solrQuery.setSort(SolrQuery.SortClause.asc("field2"));
The output from the above code has wt set to javabin, so I cannot get the CSV output.
Any suggestions?
You have two ways:
Use the Solr export request handler (or add it) together with the wt=csv parameter. Just to be clear, this is an implicit request handler, usually available even in older Solr versions, and specifically designed to handle scenarios that involve exporting millions of records.
Implement deep paging correctly. I suggest Yonik's post on paging and deep paging; it is easier than you think (a rough sketch follows below). But after you have implemented it correctly, you still need to build the CSV file yourself.
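For the second option, here is a rough SolrJ sketch of cursor-based deep paging. It assumes a Solr/SolrJ version that supports cursorMark (4.7 or later) and a uniqueKey field named id, and it builds the CSV rows manually instead of relying on wt=csv:
SolrQuery solrQuery = new SolrQuery(query);
solrQuery.setFields("field1");
solrQuery.setRows(1000);
// a cursor requires a sort that ends on the uniqueKey field
solrQuery.setSort(SolrQuery.SortClause.asc("id"));

String cursorMark = CursorMarkParams.CURSOR_MARK_START;
boolean done = false;
while (!done) {
    solrQuery.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
    QueryResponse rsp = searcherServer.query(solrQuery);
    for (SolrDocument doc : rsp.getResults()) {
        // append doc.getFieldValue("field1") (and any other fields) as a CSV row
    }
    String nextCursorMark = rsp.getNextCursorMark();
    if (cursorMark.equals(nextCursorMark)) {
        done = true; // the cursor did not advance, so all results have been fetched
    }
    cursorMark = nextCursorMark;
}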
The solution I found was:
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(query); // what you want to fetch
QueryResponse res = searcherServer.query(solrQuery);
int numFound = (int) res.getResults().getNumFound();
int rowsToBeFetched = (numFound > 1000 ? (int) (numFound / 6) : numFound);

for (int i = 0; i < numFound; i = i + rowsToBeFetched) {
    solrQuery.set("fl", "fieldToBeFetched");
    solrQuery.setParam("wt", "csv");
    solrQuery.setStart(i);
    solrQuery.setRows(rowsToBeFetched);

    QueryRequest req = new QueryRequest(solrQuery);
    NoOpResponseParser responseParser = new NoOpResponseParser();
    responseParser.setWriterType("csv");
    searcherServer.setParser(responseParser);
    NamedList<Object> resp = searcherServer.request(req);
    responseString = (String) resp.get("response"); // this chunk is in CSV format
}
Pros:
Since I don't fetch the whole result at once, it was faster.
The output is CSV.
Hitting Solr multiple times isn't costly.
Cons:
The result is not unique, meaning there can be repeated data depending on what you are fetching.
To get unique results, you can use facets.
Thanks!

How to get ordered results from couchbase using bulk gets

I am trying to improve the performance of querying a Couchbase view by using async gets.
I have read their documentation about the proper way to do so; it goes something like this:
Cluster cluster = CouchbaseCluster.create();
Bucket bucket = cluster.openBucket();
List<JsonDocument> foundDocs = Observable
        .just("key1", "key2", "key3", "key4", "key5")
        .flatMap(new Func1<String, Observable<JsonDocument>>() {
            @Override
            public Observable<JsonDocument> call(String id) {
                return bucket.async().get(id);
            }
        })
        .toList()
        .toBlocking()
        .single();
This works great and fast, but since I rely on the order of the results, it seems that I need to do some extra work to keep them ordered.
In the example above, the JsonDocument list contains all 5 documents, but the order changes randomly from call to call.
Is there any elegant way to order the results using RxJava capabilities or Couchbase Java SDK capabilities?
The only solution I can think of is saving the results into a HashMap and then transforming the original list of IDs, via this HashMap, into an ordered list of JsonDocuments.
Instead of flatMap, you can use either:
concatMap: retains order, but actually waits for each inner GET to complete before firing the next one (this can revert to sequential execution with lower performance).
concatMapEager: immediately subscribes the inner Observables (so the inner GETs are triggered right away) and maintains order by buffering responses that arrive out of order until they can be replayed at the correct index in the sequence. Best of both worlds in terms of ordering and performance; a minimal sketch follows below.
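A minimal sketch of the concatMapEager variant, assuming the same bucket as in the question and an RxJava 1.x release that already ships concatMapEager:
List<JsonDocument> foundDocs = Observable
        .just("key1", "key2", "key3", "key4", "key5")
        .concatMapEager(new Func1<String, Observable<JsonDocument>>() {
            @Override
            public Observable<JsonDocument> call(String id) {
                // the inner GETs are subscribed eagerly, but results are emitted in key order
                return bucket.async().get(id);
            }
        })
        .toList()
        .toBlocking()
        .single();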
I would use the zip operator to combine all your observables and, once they have all finished, collect the resulting documents into a list:
@Test
public void zipObservables() {
    Observable<JsonDocument> oKey1 = Observable.just("key1").flatMap(getDocument());
    Observable<JsonDocument> oKey2 = Observable.just("key2").flatMap(getDocument());
    Observable<JsonDocument> oKey3 = Observable.just("key3").flatMap(getDocument());
    Observable<JsonDocument> oKey4 = Observable.just("key4").flatMap(getDocument());
    List<Observable<JsonDocument>> observables = Arrays.asList(oKey1, oKey2, oKey3, oKey4);
    // zip emits the combined item only after every source has emitted,
    // so the resulting list keeps the key order
    List<Object> foundDocs = Observable.zip(observables, Arrays::asList)
            .toBlocking()
            .single();
}

private Func1<String, Observable<JsonDocument>> getDocument() {
    return id -> bucket.async().get(id);
}
You can see more Zip examples here https://github.com/politrons/reactive/blob/master/src/test/java/rx/observables/combining/ObservableZip.java

How can I use LINQ to SQL to efficiently get nested lists?

I have three tables that I would like to query: Streams, Entries, and FieldInstances.
I want to get a list of entries inside a stream. A stream could be a blog or a page, etc., and an entry is the actual instance of the stream, i.e. "stream: Page, entry: Welcome" or "stream: Blog, entry: News about something".
The thing is, each entry has custom data fields associated with it through FieldInstance, i.e.:
stream: Page
entry: Welcome
fieldInstance: Welcome Image Path
I'm trying to figure out the best way to get a list of all entries inside of one stream and also have the custom field instances that are associated with each entry.
I've been playing around with code like this:
var stream = genesisRepository.Streams.First(x => x.StreamUrl == streamUrl);
IQueryable<StreamEntry> entry = genesisRepository.StreamEntries.Where(x => x.StreamID == stream.StreamID);
IQueryable<FieldInstance> fieldInstances = genesisRepository.FieldInstances.Where(
    // doesn't work because entry is basically returning a collection of some kind,
    // and I can't figure out how to compare a single ID with a list/collection of IDs
    x => x.fiStreamEntryID == entry.Where(e => e.StreamID == stream.StreamID)
);
This of course doesn't work. Initially I was thinking I would get all entries in the stream and then all field instances in the stream, and then display the data using lambdas once I have everything, hopefully keeping the SQL queries down to two or three. But I can't figure out how to write the LINQ to SQL so that it executes in just two or three queries; I keep thinking that I need to execute queries in a loop to get the field instances for each entry.
Is there a LINQ to SQL query that will select all FieldInstances whose StreamEntryID (FK) is in the list of entries whose StreamID (FK) matches the stream?
Sorry, I don't quite get what columns you are joining in the third statement, but:
var stream = genesisRepository.Streams.First(x => x.StreamUrl == streamUrl);
IQueryable<StreamEntry> entries = genesisRepository.StreamEntries.Where(x => x.StreamID == stream.StreamID);
IQueryable<FieldInstance> fieldInstances = from entry in entries
                                           from instance in genesisRepository.FieldInstances
                                           where entry.entryId == instance.fiStreamEntryID
                                           select instance;
Does this help? I'm still new to EF but that's how I'd have a crack at it...
var results = genesisRepository.FieldInstances
    .Include("StreamEntry.Stream")
    .Where(fi => fi.StreamEntry.Stream.StreamUrl == streamUrl);