How to cache autocomplete suggestion data in React (JSON)

I have a generic-medicine database in MongoDB with about 3,500 documents in it. I want to use the medicine names and brand names in autocomplete input fields in a React web app.
The data might look like this:
{
  "_id": {
    "$oid": "5ed9bf1087263e1ef3d65288"
  },
  "name": "Ethamsylate + Mefenamic acid",
  "brands": [
    {
      "name": "STOP-MF",
      "package": "Tablet",
      "strength": " ",
      "price": "120.00"
    },
    {
      "name": "SYLATE-M 500",
      "package": "Tablet",
      "strength": " ",
      "price": "156.00"
    }
  ]
}
Data will not change very often, so I want to cache it client-side right at the initial load.
I have been googling about this.
What I found is:
- Making HTTP requests for each newly typed character is bad.
- The localStorage API is easy to use, but the data will need to be compressed.
- IndexedDB offers unlimited storage, but the API is quite complex.
I haven't found much about other storage options in the browser.
Thanks for any and all help!
I tried SWR, but it makes an HTTP request every time I reload the page. I can't have that, as the request takes 5-8 seconds to fetch the data. I want to store the data locally in the browser somehow.
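For reference, the simplest version of the localStorage idea might look like the sketch below (the /api/medicines endpoint and the one-day expiry are assumptions):

const CACHE_KEY = 'medicines-cache';
const MAX_AGE_MS = 24 * 60 * 60 * 1000; // refresh at most once a day

async function loadMedicines() {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    const { savedAt, data } = JSON.parse(cached);
    if (Date.now() - savedAt < MAX_AGE_MS) return data; // serve from cache
  }
  const res = await fetch('/api/medicines'); // the slow 5-8 s request, now done rarely
  const data = await res.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify({ savedAt: Date.now(), data }));
  return data;
}

At ~3,500 small documents, the payload may well fit under the usual ~5 MB localStorage quota without any compression.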

Have you tried SWR? It may help you cache data on the client side :D

> Making HTTP requests for each newly typed character is bad.
It is not always bad; it depends on your use case. Would you say Google Search is badly implemented because it makes requests as you type? Not quite. You can also use throttling and debouncing to limit the number of requests. But if you need offline access and your DB is relatively small, then you have other options, of course.
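For example, a hand-rolled debounce (300 ms here is an arbitrary choice) only fires the lookup after the user pauses typing:

function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// fetchSuggestions is a placeholder for whatever lookup you use
const onType = debounce((query) => fetchSuggestions(query), 300);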
> IndexedDB offers unlimited storage, but the API is quite complex.
Have you tried using a wrapper around it? Maybe https://github.com/dfahlander/Dexie.js would fit for you?
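A sketch of what that could look like with Dexie (the table name and index are assumptions):

import Dexie from 'dexie';

const db = new Dexie('medicineDb');
db.version(1).stores({ medicines: '++id, name' }); // index on name for prefix queries

async function seed(docs) {
  if ((await db.medicines.count()) === 0) {
    await db.medicines.bulkPut(docs); // load once, e.g. after the initial fetch
  }
}

function suggest(prefix) {
  return db.medicines
    .where('name')
    .startsWithIgnoreCase(prefix)
    .limit(10)
    .toArray();
}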

Related

Use Postman to log in a Cognito user with the API alone

I'm migrating from Firebase where this was rather simple to do.
I'm building a custom API because the environment I need to build in will not let me use any official SDKs or anything, so this has to be done solely via REST-style actions.
I essentially want to just POST the username/password to AWS Cognito and receive an auth token that I can then append to the headers of future requests (to other API calls).
After hunting for quite a bit, almost all the help I found has Postman connecting to Amazon's login UI, etc., and I cannot do that. It must handle the login process completely "behind the scenes" and not prompt the user with Amazon's own UI.
So, assuming this is possible:
What headers do I need (Content-Type, etc.)?
How do I format the body JSON (or does it use something else)?
I assume I'd send it as a "raw" body.
This is as far as I've got, and I'm scratching my head:
URL: https://[DOMAIN].auth.us-east-1.amazoncognito.com/oauth2/token
Body JSON:
{
  "ClientId": "1234etc",
  "Password": "Password1_",
  "UserAttributes": [
    {
      "Name": "email",
      "Value": "test@test.com"
    }
  ],
  "Username": "test@test.com"
}
No idea if this is even the right format for the JSON; I just pieced it together from other posts.
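For comparison, the username/password flow that usually gets cited for plain REST clients is Cognito's InitiateAuth action against the regional cognito-idp endpoint rather than the hosted-UI domain above. This is a sketch, and it assumes the app client has the USER_PASSWORD_AUTH flow enabled and no client secret:

POST https://cognito-idp.us-east-1.amazonaws.com/
Content-Type: application/x-amz-json-1.1
X-Amz-Target: AWSCognitoIdentityProviderService.InitiateAuth

{
  "AuthFlow": "USER_PASSWORD_AUTH",
  "ClientId": "1234etc",
  "AuthParameters": {
    "USERNAME": "test@test.com",
    "PASSWORD": "Password1_"
  }
}

A successful response includes AuthenticationResult.IdToken and AuthenticationResult.AccessToken, which can then be attached to later requests.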

Accessing Google AppEngine Cloud Endpoints using ActionScript 3?

Is anyone aware of a method of accessing Google AppEngine Cloud Endpoints using ActionScript 3 without having to go through the JavaScript layer? I have been combing the docs and Google for tutorials or examples but did not find anything useful.
We don't have AS3 client libraries and currently there are none planned that I know of, so you'll have to rely on HTTP to make your REST calls.
TL;DR: Use the APIs Explorer
If you visit
https://your-app-id.appspot.com/_ah/api/explorer
(replacing your-app-id with your actual application ID), then you'll be redirected to your own custom version of the Google APIs Explorer.
In it you can click on individual APIs and see the list of all available methods. Within the page for each method, you can try out forming requests and the Explorer will suggest the correct values to use.
After you click "Execute", the full HTTP request (headers and all) and response will be printed on your page, which will show you which commands to use.
Description of how to use the Discovery Document
The Discovery Document for your API will contain all the information you need to construct a request.
To find the root for calling your API, check out the baseUrl key. It should be something like:
https://your-app-id.appspot.com/_ah/api/tictactoe/v1/
To figure out how to call a specific method, there are descriptions of every method, nested down as resources in the Discovery Document. For example, for the Tic Tac Toe Python sample, the board_get_move method has a name of board.getmove in the @endpoints.api decorator. This means the method getmove is owned by the resource board.
If you look in the resources.board.methods key in the Discovery Document you can see the getmove method:
"getmove": {
"id": "tictactoe.board.getmove",
"path": "board",
"httpMethod": "POST",
"description": "Exposes...",
"request": {
"$ref": "TictactoeApiMessagesBoardMessage"
},
"response": {
"$ref": "TictactoeApiMessagesBoardMessage"
}
}
Combining the path with our baseUrl we know requests will need to be sent to
https://your-app-id.appspot.com/_ah/api/tictactoe/v1/board
and from httpMethod we know requests will use the HTTP method POST.
Finally, to specify the request, we see a reference to a schema:
"$ref": "TictactoeApiMessagesBoardMessage"
Looking in the schemas.TictactoeApiMessagesBoardMessage key in the Discovery Document we see:
"TictactoeApiMessagesBoardMessage": {
"id": "TictactoeApiMessagesBoardMessage",
"type": "object",
"description": "ProtoRPC message definition to represent a board.",
"properties": {
"state": {
"type": "string"
}
}
}
So we know the payload must contain a single field called state, and that field must be a string.
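Putting that together, a raw call (sketched with JavaScript's fetch here; an AS3 URLRequest would carry the same URL, method, header and body) might look like:

// "---------" is just an assumed encoding of an empty board, for illustration.
fetch('https://your-app-id.appspot.com/_ah/api/tictactoe/v1/board', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ state: '---------' }),
})
  .then((res) => res.json())
  .then((board) => console.log(board.state));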

RESTful Collection Resources - idiomatic JSON representations and roundtripping

I have a collection resource called Columns. A GET with Accept: application/json can't directly return a collection, so my representation needs to nest it in a property:-
{ "propertyName": [
{ "Id": "Column1", "Description": "Description 1" },
{ "Id": "Column2", "Description": "Description 2" }
]
}
Questions:
What is the best name to use for the identifier propertyName above? Should it be:
d (i.e. is d an established convention, or is it specific to particular frameworks (MS WCF and MS ASP.NET AJAX)?)
results (i.e. is results an established convention, or is it specific to particular specifications (MS OData)?)
Columns (i.e. the top level property should have a clear name and it helps to disambiguate my usage of generic application/json as the Media Type)
NB I feel pretty comfortable that there should be something wrapping it, and as pointed out by @tuespetre, XML or any other representation would force you to wrap it to some degree anyway
When PUTting the content back, should the same wrapping in said property be retained, given that it's not actually necessary for security reasons, and that conventional JSON usage idioms might drop such nesting for PUT and POST since it isn't needed to guard against scripting attacks?
My gut tells me it should be symmetric, as for every other representation, but there may be prior art for dropping the d/results [assuming that's the answer to part 1]
... Or should a PUT-back (or POST) drop the need for a wrapping property and just go with:-
[
  { "Id": "Column1", "Description": "Description 1" },
  { "Id": "Column2", "Description": "Description 2" }
]
Where would any root-level metadata go if one wished to add that?
How/would a person crafting a POST Just Know that it needs to be symmetric?
EDIT: I'm specifically interested in an answer with a reasoned rationale that takes into account the impacts on client usage with JSON. For example, HAL takes care to define a binding that makes sense for both target representations.
EDIT 2: Not accepted yet, why? The answers so far don't have citations or anything that makes them stand out over me doing a search and picking something out of the top 20 hits that seems reasonable. Am I just too picky? I guess I am (or more likely I just can't ask questions properly :D). It's a bit mad that a week and 3 days, even with an (admittedly measly) bonus, still only gets 123 views (from which 3 answers ain't bad)
Updated Answer
Addressing your questions (as opposed to going off on a bit of a tangent in my original answer :D), here are my opinions:
1) My main opinion on this is that I dislike d. As a client consuming the API I would find it confusing. What does it even stand for anyway? data?
The other options look good. Columns is nice because it mirrors back to the user what they requested.
If you are doing pagination, then another option might be something like page or slice as it makes it clear to the client, that they are not receiving the entire contents of the collection.
{
  "offset": 0,
  "limit": 100,
  "page": [
    ...
  ]
}
2) TBH, I don't think it makes that much difference which way you go for this; however, if it were me, I probably wouldn't bother sending back the envelope, as I don't think there is any need (see below) - and why make the request structure any more complicated than it needs to be?
I think POSTing back the envelope would be odd. POST should let you add items into the collection, so why would the client need to post the envelope to do this?
PUTting the envelope back could make sense from a RESTful standpoint, as it could be seen as updating metadata associated with the collection as a whole. It is worth thinking about the sort of metadata you will be exposing in the envelope. All the stuff I think would fit well in this envelope (like pagination, aggregations, search facets and similar metadata) is read-only, so it doesn't make sense for the client to send this back to the server. If you find yourself with a lot of data in the envelope that the client is able to mutate, then it could be a sign to break that data out into a separate resource, with the list as a sub-collection. Rubbish example:
/animals
{
  "farmName": "farm",
  "paging": {},
  "animals": [
    ...
  ]
}
Could be broken up into:
/farm/1
{
  "id": 1,
  "farmName": "farm"
}
and
/farm/1/animals
{
  "paging": {},
  "animals": [
    ...
  ]
}
Note: Even with this split, you could still return both combined as a single response using something like Facebook's or LinkedIn's field expansion syntax. E.g. http://example.com/api/farm/1?field=animals.offset(0).limit(10)
In response to your question about how the client should know what the JSON payload they are POSTing and PUTting should look like: this should be reflected in your API documentation. I'm not sure if there is a better tool for this, but Swagger provides a spec that allows you to document what your request bodies should look like using JSON Schema - check out this page for how to define your schemas and this page for how to reference them as a parameter of type body. Unfortunately, Swagger doesn't visualise request bodies in its fancy web UI yet, but it is open source, so you could always add something to do this.
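As a rough illustration of that Swagger idea (Swagger 2.0 syntax; the ColumnList definition and its property names are made up for this sketch), the PUT body could be described like so:

"paths": {
  "/columns": {
    "put": {
      "parameters": [
        {
          "name": "body",
          "in": "body",
          "required": true,
          "schema": { "$ref": "#/definitions/ColumnList" }
        }
      ]
    }
  }
},
"definitions": {
  "ColumnList": {
    "type": "object",
    "properties": {
      "Columns": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "Id": { "type": "string" },
            "Description": { "type": "string" }
          }
        }
      }
    }
  }
}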
Original Answer
Check out William's comment in the discussion thread on that page - he suggests a way to avoid the exploit altogether, which means you can safely use a JSON array at the root of your response, and then you need not worry about either of your questions.
The exploit you link to relies on your API using a Cookie to authenticate a user's session - just use a query string parameter instead and you remove the exploit. It's probably worth doing this anyway since using Cookies for authentication on an API isn't very RESTful - some of your clients may not be web browsers and may not want to deal with cookies.
Why does this fix work?
The exploit is a form of CSRF attack which relies on the attacker being able to add a script tag on his/her own page to a sensitive resource on your API.
<script src="http://mysite.com/api/columns"></script>
The victim's web browser will send all cookies stored under mysite.com to your server, and to your server this will look like a legitimate request - you will check the session_id cookie (or whatever your server-side framework calls the cookie) and see that the user is authenticated. The request will look like this:
GET http://mysite.com/api/columns
Cookie: session_id=123456789;
If you change your API to ignore cookies and use a session_id query string parameter instead, the attacker will have no way of tricking the victim's web browser into sending the session_id to your API.
A valid request will now look like this:
GET http://mysite.com/api/columns?session_id=123456789
If using a JavaScript client to make the above request, you could get the session_id from a cookie. An attacker using JavaScript from another domain would not be able to do this, as you cannot read cookies for other domains (see here).
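A sketch of what that client-side lookup might look like (the cookie name is whatever your server-side framework uses):

function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

const sessionId = getCookie('session_id');
fetch('http://mysite.com/api/columns?session_id=' + encodeURIComponent(sessionId))
  .then((res) => res.json())
  .then((columns) => console.log(columns));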
Now that we have fixed the issue and are ignoring session_id cookies, the script tag on the attacker's website will still send a similar request, with a GET line like this:
GET http://mysite.com/api/columns
But your server will respond with a 403 Forbidden since the GET is missing the required session_id query string parameter.
What if I'm not authenticating users for this API?
If you are not authenticating users, then your data cannot be sensitive and anyone can call the URI. CSRF should be a non-issue, since with no authentication, even if you prevent CSRF attacks, an attacker could just call your API server-side to get your data and use it in any way he/she wants.
I would go for 'd' because it clearly separates the 'envelope' of your resource from its content. This would also make it easier for consumers to parse your responses, as opposed to 'guessing' the name of the wrapping property of a given resource before being able to access what it holds.
I think you're talking about two different things:
A POST request should be sent as application/x-www-form-urlencoded. Your response should basically mirror a GET if you choose to include a representation of the newly created resource in your reply (which is not mandatory in HTTP).
PUTs should definitely be symmetric to GETs. The purpose of a PUT request is to replace an existing resource representation with another. It just makes sense to have both requests share the same conventions, doesn't it?
Go with 'Columns' because it is semantically meaningful. It helps to think of how JSON and XML could mirror each other.
If you would PUT the collection back, you might as well use the same media type (syntax, format, whatever you want to call it).

How to replicate a foreign remote database into a local database? (CouchDB / MongoDB)

I would like to extend the data model of a remote database that is available via a web service interface. Data can be requested via HTTP GET and is delivered as JSON (example request). Other formats are supported as well.
// URL of the example request.
http://data.wien.gv.at/daten/wfs?service=WFS&request=GetFeature&version=1.1.0&typeName=ogdwien:BAUMOGD&srsName=EPSG:4326&outputFormat=json&maxfeatures=5
The first object of the JSON answer:
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "id": "BAUMOGD.3390628",
      "geometry": {
        "type": "Point",
        "coordinates": [
          16.352910973544105,
          48.143425569989326
        ]
      },
      "geometry_name": "SHAPE",
      "properties": {
        "BAUMNUMMER": "1022 ",
        "GEBIET": "Strassen",
        "STRASSE": "Jochen-Rindt-Strasse",
        "ART": "Gleditsia triacanthos (Lederhülsenbaum)",
        "PFLANZJAHR": 1995,
        "STAMMUMFANG": 94,
        "KRONENDURCHMESSER": 9,
        "BAUMHOEHE": 11
      }
    },
    ...
My idea is to extend the data model (e.g. add a text field) on my own server and therefore mirror the database somehow. I stumbled upon CouchDB and its document-based architecture, which feels suitable for handling the aforementioned JSON objects. Now I'm asking for advice on how to replicate the foreign database, both initially and on a regular basis.
Do you think CouchDB is a good choice? I also thought about MongoDB. If possible, I would like to avoid building a full Rails backend to setup the replication. What do you recommend?
If the remote database is static (the data doesn't change), then it could work. You just have to find a way to iterate over all records. Once you figure that out, the rest is as easy as pie: 1) query the data; 2) store the response in a local database; 3) modify it as you see fit.
If the remote data changes, you'll have a lot of trouble going this way (you'll have to re-sync in the same fashion every once in a while). What I'd do instead is create a local database with only the new fields and a reference to the original piece of data. That is, when you request data from the remote service, you also look up whether you have something in the local DB and merge the two before processing the final result.
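A minimal sketch of that merge step, assuming a hypothetical local store keyed by the remote feature id (localStore.get and the extension fields are placeholders):

// Fetch the remote FeatureCollection, then overlay any local extensions.
async function getExtendedFeatures(remoteUrl, localStore) {
  const collection = await fetch(remoteUrl).then((r) => r.json());
  return Promise.all(
    collection.features.map(async (feature) => {
      const extra = await localStore.get(feature.id); // e.g. { myTextField: "..." } or null
      if (!extra) return feature;
      return { ...feature, properties: { ...feature.properties, ...extra } };
    })
  );
}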

How to get the actual photo from Instagram real-time post data?

I subscribed to the #tattoo tag with Instagram's real-time API and it's working fine. The problem is that I have no idea how to get the actual uploaded image when the POST data looks like this:
[{"changed_aspect": "media", "subscription_id": XXXXXX, "object": "tag", "object_i
d": "tattoo", "time": 1334521880}]
It doesn't give me any info such as the media_id or anything like that. Am I missing something?
As noted in their realtime API docs:
The changed data is not included in the payload, so it is up to you how you'd like to fetch the new data. For example, you may decide only to fetch new data for specific users, or after a certain number of photos have been posted.
So it sounds like you just have to get the actual data via their regular tag API, apparently using GET /tags/{tag-name}/media/recent. For you:
https://api.instagram.com/v1/tags/tattoo/media/recent?access_token=ACCESS-TOKEN
This certainly does seem inelegant, since you'll have to sort out which of the recent updates you've already seen, but I don't see anything suggesting a better method.
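A sketch of that bookkeeping (persisting seenIds is up to you; an in-memory Set is used here for brevity, and the response shape assumes the v1 tag API of the time):

const seenIds = new Set();

async function fetchNewTagMedia(accessToken) {
  const url = 'https://api.instagram.com/v1/tags/tattoo/media/recent?access_token=' + accessToken;
  const { data } = await (await fetch(url)).json();
  const fresh = data.filter((m) => !seenIds.has(m.id));
  fresh.forEach((m) => seenIds.add(m.id));
  return fresh; // each item should carry image URLs, e.g. m.images.standard_resolution.url
}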