I am building my Jekyll site with Algolia search.
The documentation about jekyll-algolia says the admin key must be provided in the environment variable ALGOLIA_API_KEY.
However, another page about API key security says
Your admin API key is the most sensitive key: it provides full control of all your indices and data. The admin API key should always be kept secure. Do NOT release it to anybody or do NOT use it in any application, and always create a new key that will be more restrictive. This API key should almost exclusively be used to generate other - more limited - API Keys that will then be used to search and perform indexing operations.
Reading the second page, I'm trying to create a more restrictive key for use with jekyll-algolia in CI builds of my Jekyll website.
However, I still get complaints from bundle exec jekyll algolia:
ibug@ubuntu:~/iBug.github.io$ ALGOLIA_API_KEY="0123456789abcdef0123456789abcdef" bundle exec jekyll algolia
Configuration file: /home/wsl/iBug.github.io/_config.yml
Processing site...
AutoPages: Disabled/Not configured in site.config.
Pagination: Complete, processed 1 pagination page(s)
Jekyll Feed: Generating feed for posts
GitHub Metadata: No GitHub API authentication could be found. Some fields may be missing or have incorrect data.
Extracting records...
Updating records in index iBug_website...
Records to delete: 428
Records to add: 420
[✗ Error] Invalid credentials
The jekyll-algolia plugin could not connect to your application ID using the
API key your provided.
Make sure your API key has access to your 14DZKASAEJ application.
You can find your API key in your Algolia dashboard here:
https://www.algolia.com/licensing
ibug@ubuntu:~/iBug.github.io$ echo $?
1
How should I do that? Or must I provide the admin key in CI environments?
The minimum API key ACLs required to allow indexing with jekyll-algolia are deleteIndex, addObject, deleteObject and editSettings.
If one of those ACLs is not set, you get an error like this:
[jekyll-algolia] Error:
403: Cannot PUT to
https://APP_ID.algolia.net/1/indexes/your_folder/settings:
{"message":"Method not allowed with this API key","status":403} (403)
In your case, the error message indicates that your application ID does not match the API key you provided.
Check your application ID in your Algolia dashboard, and verify that you have a correct algolia.application_id entry in your _config.yml.
If you provide the right application_id and one of its API keys, it should work; otherwise it's a problem on Algolia's side.
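For reference, here is a rough sketch of how such a restricted key could be generated once with the admin key (run locally, not in CI), using Algolia's REST endpoint for key management; the environment variable names below are placeholders:

import os
import requests

# Placeholders: provide your application ID and admin key however you prefer.
app_id = os.environ["ALGOLIA_APP_ID"]
admin_key = os.environ["ALGOLIA_ADMIN_KEY"]

# POST /1/keys creates a new API key restricted to the listed ACLs.
resp = requests.post(
    f"https://{app_id}.algolia.net/1/keys",
    headers={
        "X-Algolia-Application-Id": app_id,
        "X-Algolia-API-Key": admin_key,
    },
    json={
        "acl": ["addObject", "deleteObject", "deleteIndex", "editSettings"],
        "indexes": ["iBug_website"],  # optionally restrict to the index from the log above
        "description": "Restricted key for jekyll-algolia CI builds",
    },
)
resp.raise_for_status()
print(resp.json()["key"])  # use this value as ALGOLIA_API_KEY in CI

The returned key can then be stored as a CI secret instead of the admin key.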
I have a key in my KV store, let's say /global/test/my-key, and I use a token that has the following policy:
key "/global/test/my-key" {
  policy = "read"
}
Why can I access the URL http://localhost:8500/v1/kv/global/test/my-key/edit from the UI, but get a 403 on the following URLs: http://localhost:8500/v1/kv/global/test and http://localhost:8500/v1/kv/global?
Is there a way for me to access my key from the UI starting at the URL http://localhost:8500/v1/kv?
NOTE: I have tried the "list" policy, but it gives read access to the other keys, which is not what I want.
EDIT: I just realized I had forgotten to mention another condition that I am trying to meet. I have another key called, for instance, /global/secret/my-other-key, and I don't want that key, or the folder /global/secret/, to be viewable from the UI.
If you wish to have access to all of the mentioned paths, you should use this policy instead:
key_prefix "global" {
  policy = "read"
}
This policy will give you access to global and any "sub-paths" of it.
Consul does not currently support performing recursive reads on paths where your token only has access to a subset of the keys under that parent path.
There's an open GitHub issue requesting that this functionality be added: https://github.com/hashicorp/consul/issues/4513. I recommend upvoting that issue to indicate your interest, and subscribing to it for updates so that you can track its progress.
If your particular use case is not accurately reflected in the initial description, feel free to leave a comment with additional information.
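In the meantime, a quick way to sanity-check what the token can and cannot see is to query the KV HTTP API directly. A minimal sketch with Python's requests, using the key path from the question (the token value is a placeholder):

import requests

CONSUL = "http://localhost:8500"
TOKEN = "your-acl-token"  # placeholder

headers = {"X-Consul-Token": TOKEN}

# Allowed: the policy grants read on this exact key.
r = requests.get(f"{CONSUL}/v1/kv/global/test/my-key", headers=headers)
print(r.status_code)  # expect 200

# Denied: a recursive read of the parent path requires read access to
# every key underneath it, which this token does not have.
r = requests.get(f"{CONSUL}/v1/kv/global/test", params={"recurse": "true"}, headers=headers)
print(r.status_code)  # expect 403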
I want to host my documentation on Read the Docs because... it's great.
Problem: my repository uses Google Earth Engine as a dependency. This lib requires either manual authentication or authentication through a private key.
Is it possible to use a private key on Read the Docs? (in JSON or string format)
In the RTD administration panel of your repository, go to "Environment Variables". There you will be able to add as many environment variables as you fancy.
Note that once set, the key will not be displayed to you or any other maintainer, for obvious security reasons.
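As a sketch of how the variable can then be consumed during the build (for example at the top of docs/conf.py), assuming the service-account flow of the earthengine-api; the variable names below are placeholders you would define in that panel:

import os
import ee  # earthengine-api

# Hypothetical variable names set in the RTD "Environment Variables" panel.
service_account = os.environ["GEE_SERVICE_ACCOUNT"]
key_json = os.environ["GEE_PRIVATE_KEY"]  # the private key JSON as a string

# Authenticate non-interactively with the service account key.
credentials = ee.ServiceAccountCredentials(service_account, key_data=key_json)
ee.Initialize(credentials)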
Troubleshooting
As referenced in this issue, if your key is bigger than 2048 characters, it will not be accepted by RTD. In this case, encrypt your key into a file inside your repository and add the decoding key as an environment variable in RTD.
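A sketch of that workaround, using Fernet from the cryptography package (the file name and variable name are placeholders; the committed file is the Fernet-encrypted key):

import os
from cryptography.fernet import Fernet

# Hypothetical env var holding the Fernet key used to encrypt the committed file.
fernet = Fernet(os.environ["GEE_DECRYPT_KEY"].encode())

# Decrypt the key file that is checked into the repository.
with open("docs/gee-key.json.enc", "rb") as f:
    key_json = fernet.decrypt(f.read()).decode()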
I would like to impose request limits on some endpoints which are publicly accessible (no subscription key required) through Azure API Management. I am thinking of a rate limit of, let's say, 100K req/min. How can I implement this?
I tried:
<rate-limit-by-key calls="3" renewal-period="15" counter-key="@(context.Subscription.Id)" />
but then I got a lovely "Expression evaluation failed. Object reference not set to an instance of an object." because no subscription key is passed.
I cannot limit by IP address either.
Thanks!!
If you want to separate such anonymous calls into buckets, indeed use rate-limit-by-key; just find some other aspect of the request to base the key on.
If you want to treat all of them alike, just account in your key expression for null subscription:
<rate-limit-by-key counter-key="@(context.Subscription?.Id ?? "none")" ... />
The key can also be supplied through a custom header introduced by the API provider, allowing the developer's client application to communicate the key to the API. For more details, please refer to the documentation.
<rate-limit-by-key calls="3" renewal-period="15" counter-key="@(request.Headers.GetValueOrDefault("Rate-Key",""))" />
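To illustrate the client side of that last policy (a sketch; the endpoint URL is a placeholder and "Rate-Key" is just the header name used above), repeated calls with the same key value should start returning 429 Too Many Requests once the limit is hit:

import requests

URL = "https://your-apim-instance.azure-api.net/your-api/endpoint"  # placeholder

# Each distinct Rate-Key value gets its own counter on the gateway.
for i in range(5):
    r = requests.get(URL, headers={"Rate-Key": "client-42"})
    print(i, r.status_code)  # 200 until the limit, then 429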
I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand what the correct way of storing files into the storage is.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows 2 ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The 1. version confuses me: when I first looked at the only Node.js module that works with Object Storage (repo: fiware-object-storage), it seemed to use the 1. version. Since that module was making calls to the old (v1.1) API version instead of the presumably newest one (v2.0) referenced in the Python example, I'm not sure whether that way of doing it is outdated or not.
As I played more with the module, I realised it didn't work and the code for it was a total mess. So I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help@lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Should I worry about the auth token expiring? I presume it is not necessary to re-authenticate every time we interact with the storage; authentication should happen once when the server starts up (when we create an instance), and the token should be kept internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal; later you can put it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section
describes how to get a valid token assuming an identity management
system compatible with OpenStack Keystone is being used. If the
username, password and tenant details are known, only step 3 is
required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
(1) The token will be valid for some period of time. This could be an hour or a day, depending on the setup. This period of time should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed (see the sketch after this list).
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
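Regarding (1), a minimal refresh sketch, assuming a Keystone v2.0-compatible identity endpoint as described in the guide (the endpoint URL and credentials below are placeholders), would simply re-authenticate whenever Swift answers 401 and retry the request:

import requests

KEYSTONE = "https://your-keystone-host:5000/v2.0"  # placeholder endpoint
USERNAME, PASSWORD, TENANT = "user", "pass", "tenant"  # placeholders

def get_token():
    # Keystone v2.0 token request; the response also contains the service
    # catalog with the object storage (Swift) endpoint for the tenant.
    r = requests.post(KEYSTONE + "/tokens", json={
        "auth": {
            "passwordCredentials": {"username": USERNAME, "password": PASSWORD},
            "tenantName": TENANT,
        }
    })
    r.raise_for_status()
    return r.json()["access"]["token"]["id"]

def swift_request_with_refresh(method, url, token, body=None):
    # Perform the request and retry once with a fresh token on 401.
    r = requests.request(method, url, headers={"X-Auth-Token": token}, data=body)
    if r.status_code == 401:
        token = get_token()
        r = requests.request(method, url, headers={"X-Auth-Token": token}, data=body)
    return r, token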
Solved my troubles and created the NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
Invalid Code Signing Entitlements - Your application bundle's signature contains ubiquity code signing entitlements that are not supported.
Specifically, value "( X49XXXS5Q.* )" for key "com.apple.developer.ubiquity-container-identifiers" in is not supported.
The key happens to be my distribution id.
Yes, that is the correct answer!
Steps to correct:
Find your app ID in the portal.
Disable iCloud.
Create a new provisioning profile.
Download it.
Delete the prior profile.
Replace it with the new one.
Re-compile and submit.
Disable iCloud in the Provisioning Portal and generate a new "distribution" Provisioning Profile before submitting it again to Apple.
It looks like you have a wildcard app id set for your application. This is not allowed for distribution of applications. You should set your application to a dedicated app id like:
X49XXXS5Q.this.is.my.app
instead of
X49XXXS5Q.*