I'm a noob.
I have been researching Amazon's DynamoDB and Google's Volley extensively, but it is still not clear to me whether these two technologies can be used together (even though they both support JSON, for example).
I think my question stems from Volley expecting a URL parameter to GET/POST data to. Does my AWS table have a URL?
In every DynamoDB API request except ListTables, you must specify a TableName. The TableName, combined with your AWS_ACCESS_KEY_ID and region/endpoint, allows the DynamoDB service to perform operations on your table on your behalf. Each region has a different set of HTTP and HTTPS endpoints. The table namespace of each region is distinct for each AWS_ACCESS_KEY_ID, so even if you have a table named "my_table" in us-west-1, that does not mean you have a table named "my_table" in us-east-1. Even if you did have two tables with the same name in different regions, they would not necessarily contain the same data.
To sum up, you want to use the regional DynamoDB endpoint as the URL to POST to; the table name goes in the body of each request, and your credentials are used to sign the request.
Note that AWS requests are signed (Signature Version 4), so you would need to implement this signing and other boilerplate logic in your application if you use the raw HTTP DynamoDB API.
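For a sense of what the raw HTTP route involves, here is a rough Volley sketch of a low-level PutItem call. The region, table name, and payload are placeholders, and the Signature Version 4 signing step is deliberately left out; that signing is exactly the boilerplate you would have to add yourself:

    import com.android.volley.AuthFailureError;
    import com.android.volley.Request;
    import com.android.volley.RequestQueue;
    import com.android.volley.Response;
    import com.android.volley.VolleyError;
    import com.android.volley.toolbox.JsonObjectRequest;

    import org.json.JSONObject;

    import java.util.HashMap;
    import java.util.Map;

    public class DynamoDbVolleySketch {

        // Hypothetical helper: POSTs a PutItem body such as
        // {"TableName":"my_table","Item":{"id":{"S":"123"}}} to a regional endpoint.
        public static void putItem(RequestQueue queue, JSONObject body) {
            String endpoint = "https://dynamodb.us-west-1.amazonaws.com/";

            JsonObjectRequest request = new JsonObjectRequest(
                    Request.Method.POST, endpoint, body,
                    new Response.Listener<JSONObject>() {
                        @Override public void onResponse(JSONObject response) { /* handle the result */ }
                    },
                    new Response.ErrorListener() {
                        @Override public void onErrorResponse(VolleyError error) { /* handle the error */ }
                    }) {

                @Override
                public Map<String, String> getHeaders() throws AuthFailureError {
                    Map<String, String> headers = new HashMap<String, String>();
                    headers.put("X-Amz-Target", "DynamoDB_20120810.PutItem");
                    // A real request also needs X-Amz-Date and an Authorization
                    // header carrying the Signature Version 4 signature.
                    return headers;
                }

                @Override
                public String getBodyContentType() {
                    return "application/x-amz-json-1.0";
                }
            };
            queue.add(request);
        }
    }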
The AWS Mobile SDK for Android will create and sign your DynamoDB requests and make the calls for you, using abstractions of the DynamoDB API. For more examples, see the DynamoDB Getting Started section of the AWS Mobile SDK for Android documentation.
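For comparison, a minimal sketch of the same PutItem through the SDK; the table name, attribute names, and region are placeholders, and the inline credentials are only for brevity (a real app would use Cognito or another temporary-credentials provider):

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.regions.Region;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

    import java.util.HashMap;
    import java.util.Map;

    // The SDK builds, signs, and sends the HTTP request for you.
    public class DynamoDbSdkSketch {

        public static void putItem() {
            // Inline keys are only for illustration; use Cognito on a real device.
            AmazonDynamoDBClient client = new AmazonDynamoDBClient(
                    new BasicAWSCredentials("MY_ACCESS_KEY_ID", "MY_SECRET_KEY"));
            client.setRegion(Region.getRegion(Regions.US_WEST_1));

            Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
            item.put("id", new AttributeValue().withS("123"));
            item.put("name", new AttributeValue().withS("example"));

            // "my_table" must exist in us-west-1 for this account.
            client.putItem(new PutItemRequest("my_table", item));
        }
    }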
Good Afternoon,
I have a situation where three Swagger files have different resources but belong to the same domain. I can't merge them into a single Swagger file, as we have many such scenarios, and managing a single Swagger file and a single API proxy would be a big overhead.
For example:
I have 3 apis with the following paths and resources
/supermarket/v1/aisles/{aisleId}/itemcategories
/supermarket/v1/aisles/{aisleId}/itemcategories/{itemcategoryId}/seasonedvegetabletypes
/supermarket/v1/aisles/{aisleId}/itemcategories/{itemcategoryId}/seasonedvegetabletypes/{vegetablestypeId}/apples
All 3 of the above should be in 3 different Swagger files, so I need to create 3 API proxies for them.
Since the path suffix is the same for all of them ("/supermarket"), Azure API Management will not allow creating another API proxy with the same path suffix, as it MUST be unique.
To achieve this in Apigee Edge (Google's API management product), I would set the base paths as below:
/supermarket/v1
/supermarket/v1/aisles/*/itemcategories/*
/supermarket/v1/aisles/*/itemcategories/*/seasonedvegetabletypes
so that I can avoid the unique path constraint and still create 3 API proxies.
But Azure API Management does not accept wildcard entries in the API path suffix field when creating the API proxy.
Note:
You may suggest that combining the 3 APIs into a single Swagger file would solve the issue, but the example I gave above is only about 30% of the Swagger, and we have many such paths that fall into a single business domain, so we must keep them in different Swagger files and different API proxies.
We need to be able to deploy different API proxies with the same path suffix, which would require allowing wildcards or regex in the API path suffix.
Your help to resolve this is highly appreciated. Thanks.
At this point this is a limitation that can't be worked around. The only way to make APIM serve those URIs is to put all of them under a single API, which is not what you want, unfortunately.
I am working with an API for automating tasks in a company I work for.
The software will run from a single server, and there will be only one instance of the sensitive data.
I have a tool that our team uses at the end of every day.
The token only needs to be requested once, since it has a roughly 30-minute timeout.
Since I work with the Salesforce API, the user has to enter their password either way, since that relates the ticket to their account.
The API oAuth2 tokens and all of its sensitive components need to be secured.
I use PowerShell and a module called FileCryptography to produce an AES-encrypted version of my config.json.
In my config file, I store all the component keys that need to be used to generate the token itself.
Steps
Base64-encode the strings.
Use the FileCryptography module to encrypt the JSON file with a secret key into an AES-encrypted file.
When the API needs to produce a token, the process works in reverse to recover the data.
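For illustration only, here is a rough Java equivalent of the same base64-then-AES pattern; the question's tooling is PowerShell, and the file names and key-derivation choice below are assumptions, not part of the FileCryptography module:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    // Encrypt config.json with a secret key, then reverse the steps when the
    // token has to be requested.
    public class ConfigProtector {

        private static SecretKeySpec keyFrom(String secret) throws Exception {
            // Derive a 128-bit AES key from the secret (SHA-256, truncated).
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(secret.getBytes(StandardCharsets.UTF_8));
            byte[] key = new byte[16];
            System.arraycopy(hash, 0, key, 0, 16);
            return new SecretKeySpec(key, "AES");
        }

        public static void encryptConfig(String secret) throws Exception {
            byte[] plain = Files.readAllBytes(Paths.get("config.json"));
            // Step 1: base64-encode; step 2: AES-encrypt to config.aes.
            // NOTE: "AES" alone means ECB mode with PKCS5 padding; a hardened
            // version would use AES/GCM with a random IV.
            byte[] encoded = Base64.getEncoder().encode(plain);
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, keyFrom(secret));
            Files.write(Paths.get("config.aes"), cipher.doFinal(encoded));
        }

        public static String decryptConfig(String secret) throws Exception {
            // Reverse: AES-decrypt, then base64-decode back to the JSON text.
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.DECRYPT_MODE, keyFrom(secret));
            byte[] encoded = cipher.doFinal(Files.readAllBytes(Paths.get("config.aes")));
            return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        }
    }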
Is this a valid way of securing sensitive API data, or is there a more efficient way?
P.S.: I understand that nothing is fully secure and anything can be reverse engineered; I just need something that will keep at least 90% of people away from this data.
I'm currently implementing an IoT solution that has a bunch of sensors sending information in JSON format through a gateway.
I was reading about doing this on Azure, but I couldn't quite figure out how the JSON schema and Event Hubs work together to display the info on PowerBI.
Can I create a schema and upload it to PowerBI then connect it to my device?
There are multiple sides to this. To start with, IoT ingestion in Azure is done through Event Hubs, as you mentioned. If your gateway is able to make a RESTful call to the Event Hubs entry point, Event Hubs will receive the data and store it temporarily for the retention period specified. Stream Analytics will then consume the data from Event Hubs and let you do further processing and divert the data to different outputs. In your case, you can set one of the outputs to be a PowerBI dashboard, which you authorize with an organizational account (more on that later), and the output will automatically be tied to PowerBI.

The data schema part is interesting: the JSON itself defines the data table schema to be used on the PowerBI side, and it propagates from Event Hubs to Stream Analytics to PowerBI with the first JSON package sent. Once the schema is there it is fixed, and the rest of the data being streamed in should be in the same format.
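To make the ingestion side concrete, here is a rough sketch of a gateway-side RESTful send to Event Hubs; the namespace, hub name, SAS token, and field names are placeholders, and generating the SAS token itself is not shown:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class EventHubSender {
        // Placeholders: replace with your namespace, hub name, and a valid SAS token.
        private static final String EVENT_HUB_URL =
                "https://my-namespace.servicebus.windows.net/my-hub/messages";
        private static final String SAS_TOKEN =
                "SharedAccessSignature sr=...&sig=...&se=...&skn=...";

        public static void sendEvent(String json) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(EVENT_HUB_URL).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", SAS_TOKEN);
            conn.setRequestProperty("Content-Type", "application/json");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            // 201 Created means Event Hubs accepted the event.
            System.out.println("Event Hubs responded: " + conn.getResponseCode());
            conn.disconnect();
        }

        public static void main(String[] args) throws Exception {
            // The first JSON sent establishes the column layout that Stream
            // Analytics passes on to PowerBI, so keep field names consistent.
            sendEvent("{\"deviceId\":\"sensor-01\",\"temperature\":21.5}");
        }
    }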
If you don't have an organizational account at hand to use with PowerBI, you can register your domain under Azure Active Directory and use that account since it is considered within your org.
There may be a way of altering the schema afterwards using the PowerBI REST API. See the links below; I haven't tried it myself, though.
https://msdn.microsoft.com/en-us/library/mt203557.aspx
Stream analytics with powerbi
Hope this helps, let me know if you need further info.
One way to achieve this is to send your data to Azure Event Hubs, read it, and send it to PowerBI with Stream Analytics. Listing all the steps here would be too long, so I suggest you take a look at a series of blog posts I wrote describing how I built a demo similar to what you are trying to achieve. That should give you enough info to get started.
http://guyb.ca/IoTAzureDemo
I have two things: a backend running on App Engine and an Android app. These need to communicate in an efficient way.
What I have already done: I created an API with Google Cloud Endpoints. This endpoint exposes calls; the objects in the backend are mapped to JSON and mapped back to objects in the Android app. This is what Endpoints provides.
Sometimes I want to push information from the backend to the Android app. What I do now is send a Google Cloud Messaging (GCM) message to the Android app, and the app then updates everything by calling the backend Endpoint.
This situation is working without problems but it has some drawbacks:
When I update a lot of devices at once (which happens a lot in my application), all those devices make a call to the backend, creating a large peak load.
The extra call uses additional battery on the phones.
What I want is to add the updated information to the GCM message itself. GCM supports up to 4 kB of payload data, which is large enough to include the JSON with the updated information. If I want to send more than 4 kB, I can always fall back to the old approach.
So, basically, what I want is the following:
When I'm about to send a GCM message, I retrieve the correct objects from the datastore/database.
Those objects need to be converted to JSON in the same way the Endpoints library does it.
The JSON should be added to the GCM message.
In the Android application, the JSON should be converted back to objects in the same way the Endpoints library does it.
Processing of those objects continues the same way as before.
I found a thread suggesting that I should use the Gson library for this, but I ran into problems in both the backend and the Android app, and the resulting JSON itself is not the same. I want to serialize to the same JSON, and deserialize to the same result, as an Endpoints call.
Does anybody have any idea how to do that? Maybe an example or tutorial?
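One possible direction, offered only as a sketch rather than a confirmed answer: the generated Endpoints client library serializes its model classes with the google-http-client JsonFactory, so that mapping can be reused on both sides. MyEndpointModel and the 4 kB fallback below are hypothetical, and the output is not guaranteed to match the Endpoints framework byte for byte:

    import com.google.android.gcm.server.Message;
    import com.google.android.gcm.server.Sender;
    import com.google.api.client.json.JsonFactory;
    import com.google.api.client.json.jackson2.JacksonFactory;

    public class PushWithPayload {

        private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();

        // Backend: serialize the object with the client library's JSON mapping
        // and attach it to the GCM message if it fits in the 4 kB payload limit.
        public static void pushUpdate(String gcmApiKey, String registrationId,
                                      Object updatedEntity) throws Exception {
            String payload = JSON_FACTORY.toString(updatedEntity);
            if (payload.getBytes("UTF-8").length > 4096) {
                // Too large: fall back to the old "notify, then let the device
                // call the Endpoint" flow.
                return;
            }
            Message message = new Message.Builder()
                    .addData("payload", payload)
                    .build();
            new Sender(gcmApiKey).send(message, registrationId, 3);
        }

        // Android side: parse the payload back into the generated model class
        // (MyEndpointModel stands in for whatever class Endpoints generated).
        // MyEndpointModel model = JSON_FACTORY.fromString(
        //         intent.getStringExtra("payload"), MyEndpointModel.class);
    }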
I have a website where you can request data from our servers as JSON using Ajax (only meant to be used on our site). Now I have found that people have started using those requests to get data from our system. Is there a way to block users from using our public JSON API? Ideas I have been thinking about are:
Some kind of checksum.
A session-unique JavaScript value on the page that has to match server-side
Some kind of rolling password with 1000 different valid values.
None of these are 100% safe, but they make it harder to use our data. Any other ideas or solutions would be great.
(The requests you can make are lookups and translations of zip codes, phone numbers, SSNs and so on.)
You could use the same API-key authentication method Google uses to limit access to its APIs.
Make it compulsory for every user to have a valid API key in order to request data.
Generate an API key and store it in your database when a user requests one.
Link: Relevant Question
This way, you can monitor usage of your API, and impose usage limits on it.
As @c69 pointed out, you could also bind the API keys you generate to the API user's domain. You can then check the Referer URL ($_SERVER['HTTP_REFERER'] in PHP) and reject the request if it is not being made from the API user's domain.
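For illustration, the same key-plus-Referer check written as a Java servlet filter (the PHP check above maps directly); the parameter name, the in-memory key store, and the domain entry are placeholders:

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Rejects API calls that lack a known key or come from the wrong domain.
    public class ApiKeyFilter implements Filter {

        // In practice this would be loaded from your database: key -> allowed domain.
        private final Map<String, String> keyToDomain = new ConcurrentHashMap<>();

        @Override
        public void init(FilterConfig config) {
            keyToDomain.put("demo-key-123", "www.example.com"); // placeholder entry
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            String apiKey = request.getParameter("api_key");
            String referer = request.getHeader("Referer"); // same header as HTTP_REFERER in PHP
            String allowedDomain = apiKey == null ? null : keyToDomain.get(apiKey);

            boolean refererOk = referer != null && allowedDomain != null
                    && referer.contains(allowedDomain);

            if (allowedDomain == null || !refererOk) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN, "Invalid API key or origin");
                return;
            }
            chain.doFilter(req, res); // request is allowed through
        }

        @Override
        public void destroy() { }
    }

As the question already acknowledges, the Referer header can be spoofed, so this only raises the bar rather than blocking determined users completely.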